diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Adobe CS6 Response Code Generator A Guide to Activate Your Software.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Adobe CS6 Response Code Generator A Guide to Activate Your Software.md deleted file mode 100644 index aa0634e7242fe270f22ecd7cc086d9c9b2d38c1d..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Adobe CS6 Response Code Generator A Guide to Activate Your Software.md +++ /dev/null @@ -1,140 +0,0 @@ - -

Adobe CS6 Response Code Generator: How to Activate Adobe Products Offline

-

If you are a creative professional or enthusiast who uses Adobe CS6 products, such as Photoshop, Illustrator, InDesign, and more, you may have encountered situations where you need to activate your software offline. This could be because you are traveling, have internet connection issues, or work in a secure environment where online activation is not possible.

-

In this article, we will explain what Adobe CS6 is and why you need a response code generator for offline activation. We will also show you how to generate a response code for Adobe CS6 offline activation using an internet-enabled device and your product's serial number. Finally, we will discuss the benefits and limitations of using a response code generator for Adobe CS6 offline activation.

-

adobe cs6 response code generator


Download Zip: https://byltly.com/2uKxK2



-

What is Adobe CS6 and why do you need a response code generator?

-

Adobe CS6 is a suite of creative software products that includes Photoshop, Illustrator, InDesign, and more.

-

Adobe CS6 stands for Creative Suite 6, which is a collection of software products that enable you to create, edit, design, and publish various types of digital content. Some of the most popular products in Adobe CS6 are:

- -

These are just some of the products in Adobe CS6. There are many more products that cater to different creative needs and workflows.

-

A response code generator is a tool that helps you activate Adobe products offline when you cannot connect to the internet or Adobe servers.

-

To use Adobe products, you need to activate them with your Adobe ID and password. This process verifies that you have a valid license for the product and prevents unauthorized use or piracy.

-

Normally, this process is done online by connecting to the internet and signing in with your Adobe ID and password. However, there may be situations where you cannot connect to the internet or Adobe servers due to various reasons. For example:

- -

In these situations, you need to use an alternative method of activation called offline activation. Offline activation allows you to activate your Adobe products without an internet connection or access to Adobe servers.

-

adobe cs6 activation code generator online
-adobe cs6 serial number generator mac
-adobe cs6 master collection response code crack
-adobe cs6 offline activation code generator
-adobe cs6 keygen generator download
-adobe cs6 license key generator free
-adobe cs6 product code generator
-adobe cs6 registration code generator
-adobe cs6 authorization code generator windows
-adobe cs6 activation code generator for pc
-adobe cs6 serial number generator windows 10
-adobe cs6 master collection response code keygen
-adobe cs6 offline activation code generator mac
-adobe cs6 keygen generator online free
-adobe cs6 license key generator online
-adobe cs6 product code generator mac
-adobe cs6 registration code generator online
-adobe cs6 authorization code generator mac
-adobe cs6 activation code generator for mac
-adobe cs6 serial number generator windows 7
-adobe cs6 master collection response code generator online
-adobe cs6 offline activation code generator windows 10
-adobe cs6 keygen generator download free
-adobe cs6 license key generator mac
-adobe cs6 product code generator online free
-adobe cs6 registration code generator mac
-adobe cs6 authorization code generator windows 10
-adobe cs6 activation code generator for windows 10
-adobe cs6 serial number generator windows 8.1
-adobe cs6 master collection response code crack download
-adobe cs6 offline activation code generator windows 7
-adobe cs6 keygen generator online no survey
-adobe cs6 license key generator windows 10
-adobe cs6 product code generator windows 10
-adobe cs6 registration code generator windows 10
-adobe cs6 authorization code generator windows 7
-adobe cs6 activation code generator for windows 7
-adobe cs6 serial number generator windows xp
-adobe cs6 master collection response code hack
-adobe cs6 offline activation code generator windows 8.1
-adobe cs6 keygen generator free download no survey
-adobe cs6 license key generator windows 7
-adobe cs6 product code generator windows 7
-adobe cs6 registration code generator windows 7
-adobe cs6 authorization code generator windows 8.1
-adobe cs6 activation code generator for windows 8.1
-adobe cs6 serial number generator mac os x
-adobe cs6 master collection response code bypass
-adobe cs6 offline activation code generator mac os x
-adobe cs6 keygen generator mac download

-

To perform offline activation, you need a tool called a response code generator. A response code generator is a web page that helps you generate a unique code called a response code that you can use to activate your Adobe products offline.

-

How to generate a response code for Adobe CS6 offline activation?

-

Step 1: Follow the installation or product launch screens until you see a link that says "I cannot connect to the internet" or "Having trouble connecting to the internet". Click the link and follow the instructions to generate a request code.

-

The first step of offline activation is to generate a request code on your offline computer where you want to use your Adobe product. A request code is another unique code that identifies your computer and product.

-

To generate a request code:

-
    -
  1. Install or launch your Adobe product on your offline computer as usual.
  2. -
  3. Follow the installation or product launch screens until you see a link that says "I cannot connect to the internet" or "Having trouble connecting to the internet". Click the link.
  4. -
  5. You will see a screen that asks you to enter your product's serial number. Enter it and click Next.
  6. -
  7. You will see another screen that shows your request code. Write it down or copy it somewhere safe. You will need it later.
  8. -
-

Step 2: Use an internet-enabled device to visit https://exception.licenses.adobe.com/aoes/aoes/v1/t1?locale=en and sign in with your Adobe ID and password. Enter the request code and your product's serial number to generate a response code.

-

The second step of offline activation is to generate a response code using an internet-enabled device such as another computer, a smartphone, or a tablet. A response code is the final code that you can use to activate your Adobe product offline.

-

To generate a response code:

-
    -
  1. Use an internet-enabled device to visit https://exception.licenses.adobe.com/aoes/aoes/v1/t1?locale=en
  2. -
  3. Sign in with your Adobe ID and password. If you do not have an Adobe ID, you can create one for free by clicking Create an account.
  4. -
  5. Enter the request code that you generated in step 1 and your product's serial number in the corresponding fields. Click Generate Response Code.
  6. -
  7. You will see your response code on the screen. Write it down or copy it somewhere safe. You will need it later.
  8. -
-

Step 3: Enter the response code on the installation or launch product screen of your offline computer when you are prompted to complete the offline activation process.

-

The third step of offline activation is to enter the response code on your offline computer where you want to use your Adobe product. This will complete the offline activation process and allow you to use your product normally.

-

To enter the response code:

-
    -
  1. Go back to your offline computer where you installed or launched your Adobe product in step 1.
  2. -
  3. You should see a screen that prompts you to enter your response code. Enter it exactly as it appears and click Activate. This will complete the offline activation process and allow you to use your product normally.
  4. -
-

What are the benefits of using a response code generator for Adobe CS6 offline activation?

-

You can activate your Adobe products without an internet connection or access to Adobe servers.

-

One of the main benefits of using a response code generator for Adobe CS6 offline activation is that you can activate your Adobe products without an internet connection or access to Adobe servers. This means that you can use your products anytime and anywhere, even when you are offline or in a secure environment where online activation is not possible.

-

You can use your Adobe products on secure environments like government, banking, etc. where online activation is not possible.

-

Another benefit of using a response code generator for Adobe CS6 offline activation is that you can use your Adobe products on secure environments like government, banking, etc. where online activation is not possible due to security policies or restrictions. For example, if you work in a government agency or a bank that does not allow internet access or connection to external servers, you can still use your Adobe products by activating them offline using a response code generator.

-

You can avoid activation errors or issues that may occur due to network problems or server outages.

-

A third benefit of using a response code generator for Adobe CS6 offline activation is that you can avoid activation errors or issues that may occur due to network problems or server outages. For example, if you have a slow or unstable internet connection that prevents you from connecting to Adobe servers or completing the online activation process, you can still use your Adobe products by activating them offline using a response code generator. Similarly, if Adobe servers are down or undergoing maintenance, you can still use your Adobe products by activating them offline using a response code generator.

-

What are the limitations of using a response code generator for Adobe CS6 offline activation?

-

You need an internet-enabled device and your product's serial number to generate a response code.

-

One of the limitations of using a response code generator for Adobe CS6 offline activation is that you need an internet-enabled device and your product's serial number to generate a response code. This means that you cannot activate your Adobe products offline without having access to another device that has internet access and your product's serial number. For example, if you lose your product's serial number or do not have another device that has internet access, you cannot generate a response code and activate your Adobe products offline.

-

You need to complete the offline activation within 7 days of the first launch of your Adobe product or it will stop working.

-

Another limitation of using a response code generator for Adobe CS6 offline activation is that you need to complete the offline activation within 7 days of the first launch of your Adobe product or it will stop working. This means that you cannot use your Adobe products indefinitely without connecting to the internet or Adobe servers at least once every 7 days. For example, if you travel for more than 7 days without internet access or access to Adobe servers, you will not be able to use your Adobe products until you complete the online activation and registration process.

-

The request code is machine-specific and valid for 72 hours. If it takes longer than 72 hours to complete the offline activation, you need to generate a new request code.

-

A third limitation of using a response code generator for Adobe CS6 offline activation is that the request code is machine-specific and valid for 72 hours. This means that you cannot use the same request code on different computers or after 72 hours have passed since you generated it. For example, if you want to activate your Adobe products on another computer or if it takes longer than 72 hours to generate a response code and enter it on your offline computer, you need to generate a new request code and repeat the offline activation process.

-

Conclusion

-

In this article, we have explained what Adobe CS6 is and why you need a response code generator for offline activation. We have also shown you how to generate a response code for Adobe CS6 offline activation using an internet-enabled device and your product's serial number. Finally, we have discussed the benefits and limitations of using a response code generator for Adobe CS6 offline activation.

-

We hope that this article has helped you understand how to activate your Adobe products offline using a response code generator. If you have any questions or feedback, please feel free to leave a comment below.

-

Frequently Asked Questions

-
    -
  1. What is the difference between online and offline activation?
  2. -

    Online activation is the process of activating your Adobe products by connecting to the internet and signing in with your Adobe ID and password. Offline activation is the process of activating your Adobe products without an internet connection or access to Adobe servers by using a response code generator.

    -
  3. Can I use both online and offline activation for my Adobe products?
  4. -

    Yes, you can use both online and offline activation for your Adobe products depending on your situation and preference. However, you cannot use both methods simultaneously for the same product on the same computer.

    -
  5. How many times can I use offline activation for my Adobe products?
  6. -

    You can use offline activation for your Adobe products as many times as you need as long as you have an internet-enabled device and your product's serial number to generate a response code. However, each time you use offline activation, you need to generate a new request code and enter it on your offline computer within 72 hours.

    -
  7. What happens if I lose my product's serial number or my response code?
  8. -

    If you lose your product's serial number or your response code, you will not be able to activate your Adobe products offline until you find them again. If you lose your product's serial number, you can try to recover it by contacting Adobe customer support or by checking your email confirmation or receipt when you purchased the product. If you lose your response code, you can try to generate it again by visiting https://exception.licenses.adobe.com/aoes/aoes/v1/t1?locale=en and entering the request code and your product's serial number.

    -
  9. What are some alternatives to using a response code generator for Adobe CS6 offline activation?
  10. -

    Some alternatives to using a response code generator for Adobe CS6 offline activation are:

    - -

    -
    -
    \ No newline at end of file diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Crack Version of JR Typing Tutor A Risky and Unethical Choice.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Crack Version of JR Typing Tutor A Risky and Unethical Choice.md deleted file mode 100644 index 5bdb659bef9e467f0b804bc4150e86ca649f2b43..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Crack Version of JR Typing Tutor A Risky and Unethical Choice.md +++ /dev/null @@ -1,24 +0,0 @@ - -

    How to Download Crack Version of JR Typing Tutor for Free

    -

    JR Typing Tutor is a software that helps you learn and improve your typing skills in Hindi, English, and other languages. It is specially designed for government typing tests, such as Allahabad High Court RO/ARO, UKPSC RO/ARO, UPPCL, CPCT, M.P. High Court, U.P. Computer Operator, IA, Rajasthan LDC, Tax Assistant, RSMSSB LDC Efficiency & Type Test. It also supports various fonts and keyboard layouts, such as DevLys010, KrutiDev010, Mangal, Raavi, Asees.

    -

    crack version of jr typing tutor


    DOWNLOAD 🆗 https://byltly.com/2uKvsb



    -

    If you want to download JR Typing Tutor for free, you may be tempted to look for a crack version of the software. A crack version is a modified version of the software that bypasses the license verification and allows you to use it without paying. However, downloading a crack version of JR Typing Tutor is not a good idea for several reasons.

    -

    Why You Should Avoid Crack Version of JR Typing Tutor

    -

    Here are some of the risks and disadvantages of downloading a crack version of JR Typing Tutor:

    - -

    How to Download JR Typing Tutor Legally

    -

    If you want to download JR Typing Tutor legally and safely, you have two options:

    -
      -
    1. Download a free trial. You can download a free 14-day trial of JR Typing Tutor from the official website. This will allow you to test the software and see if it meets your needs and expectations. You can access all the features and functions of the software during the trial period.
    2. -
    3. Buy a license. If you are satisfied with the software and want to continue using it after the trial period, you can buy a license from the official website. The price of the license depends on the duration and number of users. You can choose from 1 month, 3 months, 6 months, 1 year, 2 years, 3 years, or lifetime licenses. You can also choose from single user or multi user licenses. Buying a license will give you unlimited access to the software and its updates and support.
    4. -
    -

    Conclusion

    -

    JR Typing Tutor is a useful software that can help you learn and improve your typing skills in Hindi, English, and other languages. It is specially designed for government typing tests and supports various fonts and keyboard layouts. However, downloading a crack version of JR Typing Tutor is not advisable because it is illegal, unsafe, unreliable, and unethical. Instead, you should download a free trial or buy a license from the official website to enjoy the benefits of the software legally and safely.

    -

    -
    -
    \ No newline at end of file diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download HD Tune Pro The Ultimate Tool for HDD and SSD Optimization.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download HD Tune Pro The Ultimate Tool for HDD and SSD Optimization.md deleted file mode 100644 index a121c042337ddf179b1976fc6af2e9bf5270ae4b..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download HD Tune Pro The Ultimate Tool for HDD and SSD Optimization.md +++ /dev/null @@ -1,28 +0,0 @@ -
    -

    How to Download HD Tune Pro and Why You Need It

    -

    HD Tune Pro is a powerful tool that can help you monitor, benchmark, and optimize your hard disk drives (HDDs) and solid state drives (SSDs). It can also scan for errors, check the health status (S.M.A.R.T.), securely erase all data, and more. In this article, we will show you how to download HD Tune Pro and what features it offers.

    -

    How to Download HD Tune Pro

    -

    HD Tune Pro is a paid software that costs $34.95 USD or 24.95 EUR for a single user license. You can purchase it from the official website at http://www.hdtune.com/download.html. After you complete the payment, you will receive a serial number that you can use to activate the software.

    -

    download hd tune pro


    Download Zip === https://byltly.com/2uKA0A



    -

    If you want to try out the software before buying it, you can download a 15-day trial version from the same website. The trial version has all the features of the full version, except for the file benchmark and the folder usage view. You can also download an older version of HD Tune (2.55) for free, but it has fewer features and supports fewer operating systems.

    -

    To install HD Tune Pro, you need to have Windows 2000, Windows XP, Windows Vista, Windows 7, Windows 8, or Windows 10. You also need to have a hard disk (internal or external), SSD, USB stick, or memory card reader. Note that some drives may not support all functions due to hardware limitations.

    -

    What Features Does HD Tune Pro Offer

    -

    HD Tune Pro offers many features that can help you test and improve the performance of your drives. Here are some of them:

    - -

    Conclusion

    -

HD Tune Pro is comprehensive and reliable software that can help you monitor, benchmark, and optimize your hard disk drives and solid state drives. It can also scan for errors, check the health status, securely erase all data, and more. If you want to download HD Tune Pro, you can visit the official website at http://www.hdtune.com/download.html.

    -

    -
    -
    \ No newline at end of file diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/HD Online Player (kokurikozaka kara 720p or 1080p) la storia romantica ambientata nella Yokohama degli anni 60.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/HD Online Player (kokurikozaka kara 720p or 1080p) la storia romantica ambientata nella Yokohama degli anni 60.md deleted file mode 100644 index 553c1bfd9442a959f218aa599b6d4b0f9a9a04ba..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/HD Online Player (kokurikozaka kara 720p or 1080p) la storia romantica ambientata nella Yokohama degli anni 60.md +++ /dev/null @@ -1,141 +0,0 @@ -
    -

    HD Online Player (kokurikozaka kara 720p or 1080p)

    -

    If you are a fan of Japanese animation, you might have heard of kokurikozaka kara, or From Up on Poppy Hill, a 2011 film by Studio Ghibli. This film is a beautiful and nostalgic story set in the 1960s, about a group of high school students who try to save their clubhouse from demolition. It is also a touching romance between two young people who discover a surprising connection.

    -

    HD Online Player (kokurikozaka kara 720p or 1080p)


Download File: https://byltly.com/2uKvKx



    -

    In this article, we will tell you everything you need to know about kokurikozaka kara, and how you can watch it online in high definition. We will also help you decide whether to choose 720p or 1080p resolution for your viewing pleasure. So, let's get started!

    -

    What is kokurikozaka kara?

    -

    Kokurikozaka kara, or From Up on Poppy Hill, is a Japanese animated drama film directed by Gorō Miyazaki, the son of the legendary Hayao Miyazaki. It is based on a manga series of the same name by Tetsurō Sayama and Chizuru Takahashi. It was produced by Studio Ghibli, the renowned animation studio behind classics like Spirited Away, My Neighbor Totoro, and Princess Mononoke.

    -

    A brief summary of the plot and characters

    -

    The film is set in 1963 Yokohama, Japan, a year before the Tokyo Olympics. The main character is Umi Matsuzaki, a 16-year-old girl who lives in a boarding house called Coquelicot Manor with her grandmother and younger siblings. Her father was a sailor who died in the Korean War, and her mother is a medical professor studying in the United States. Every morning, Umi raises a set of signal flags with the message "I pray for safe voyages" in honor of her father.

    -

    One day, Umi meets Shun Kazama, a boy who writes for the school newspaper. He is also a member of the "Latin Quarter", an old building that houses various clubs and activities. The Latin Quarter is threatened with demolition by the school board, who wants to build a new modern building instead. Umi and Shun join forces with other students to clean up and renovate the Latin Quarter, hoping to persuade the board to reconsider.

    -

    Watch From Up On Poppy Hill online HD free
    -Kokuriko-zaka Kara full movie download 1080p
    -HD Online Player for Kokurikozaka Kara 720p
    -From Up On Poppy Hill 2011 streaming 1080p
    -Kokuriko-zaka Kara HD online player portable
    -How to watch From Up On Poppy Hill in HD
    -Kokurikozaka Kara 720p or 1080p download
    -From Up On Poppy Hill full movie online HD
    -Kokuriko-zaka Kara HD online player free
    -Watch From Up On Poppy Hill 1080p streaming
    -Kokurikozaka Kara full movie HD online player
    -From Up On Poppy Hill 720p download free
    -Kokuriko-zaka Kara HD online player soundcloud
    -Watch From Up On Poppy Hill online free HD
    -Kokurikozaka Kara 1080p download link
    -From Up On Poppy Hill HD online player burgerhouse
    -Kokuriko-zaka Kara HD online player elinquar
    -Watch From Up On Poppy Hill 720p or 1080p
    -Kokurikozaka Kara full movie streaming HD
    -From Up On Poppy Hill HD online player kindlansuxt
    -Kokuriko-zaka Kara HD online player upd
    -Watch From Up On Poppy Hill full movie HD
    -Kokurikozaka Kara 720p or 1080p streaming
    -From Up On Poppy Hill HD online player fatalitron
    -Kokuriko-zaka Kara HD online player black and white
    -Watch From Up On Poppy Hill in HD quality
    -Kokurikozaka Kara full movie download free HD
    -From Up On Poppy Hill HD online player action movies
    -Kokuriko-zaka Kara HD online player USA
    -Watch From Up On Poppy Hill 2011 online free
    -Kokurikozaka Kara 720p or 1080p full movie
    -From Up On Poppy Hill HD online player best movies 2019
    -Kokuriko-zaka Kara HD online player beautiful animation
    -Watch From Up On Poppy Hill in 1080p quality
    -Kokurikozaka Kara full movie watch online free HD
    -From Up On Poppy Hill HD online player studio ghibli
    -Kokuriko-zaka Kara HD online player kokurikozakakara.com
    -Watch From Up On Poppy Hill with subtitles HD
    -Kokurikozaka Kara 720p or 1080p watch online free
    -From Up On Poppy Hill HD online player ghibli fan club

    -

    As Umi and Shun work together, they develop feelings for each other. However, they soon discover that they share a shocking secret that could tear them apart. Will they be able to overcome their past and save their future?

    -

    The production and release of the film

    -

    The film was announced by Studio Ghibli in December 2010, as Gorō Miyazaki's second directorial work after Tales from Earthsea (2006). His father, Hayao Miyazaki, co-wrote the screenplay with Keiko Niwa, based on the manga by Sayama and Takahashi. The music was composed by Satoshi Takebe, who also worked on Tales from Earthsea.

    -

    The film was released in Japan on July 16, 2011, by Toho. It was a commercial success, grossing over $61 million worldwide. It was also well received by critics, who praised its animation, story, and characters. It won several awards, including the Japan Academy Prize for Animation of the Year, and was nominated for the Asia Pacific Screen Award for Best Animated Feature Film.

    -

The film was dubbed into English by GKIDS, with a voice cast that includes Sarah Bolger as Umi, Anton Yelchin as Shun, Gillian Anderson as Umi's mother Ryoko, Jamie Lee Curtis as Umi's grandmother Hana, Beau Bridges as Shun's father Yūichirō Sawamura, Bruce Dern as Shun's adoptive father Yoshio Onodera, Christina Hendricks as Miki Hokuto, Aubrey Plaza as Sachiko Hirokōji, Chris Noth as Tokumaru, Ron Howard as Akio Kazama, Jeff Dunham as Gen Shiraki, Emily Osment as Nobuko Yokoyama, Charlie Saxton as Shiro Mizunuma, Isabelle Fuhrman as Sora Matsuzaki, Alex Wolff as Riku Matsuzaki, Jake Steinfeld as Oyaji, and James Marsden as Mr. Tokumaru. The original Japanese voice cast includes Masami Nagasawa as Umi Matsuzaki, Junichi Okada as Shun Kazama, Keiko Takeshita as Hana Matsuzaki, Yuriko Ishida as Ryoko Matsuzaki, Jun Fubuki as Miki Hokuto, Takashi Naito as Yūichirō Sawamura, Shunsuke Kazama as Shiro Mizunuma, Nao Ōmori as Yoshio Onodera, and Teruyuki Kagawa as Tokumaru. The English version was released in North America on March 15, 2013.

    -

    The reception and awards of the film

    -

    The film received positive reviews from most critics and audiences. It has a rating of 86% on Rotten Tomatoes based on 97 reviews, with an average score of 7/10. The website's critical consensus reads: "Gentle and nostalgic, From Up on Poppy Hill is one of Studio Ghibli's sweeter efforts -- and if it doesn't push the boundaries of the genre, it remains as engagingly lovely as Ghibli fans have come to expect."

    -

On Metacritic, which assigns a normalized rating out of 100 to reviews from mainstream critics, the film has an average score of 71 based on 25 reviews, indicating "generally favorable reviews".

    -

    The film won several awards, including:

    -


    The themes and messages of the film

    -

    Kokurikozaka kara is not only a romantic and nostalgic film, but also a film that explores various themes and messages that are relevant to today's society. Some of the themes and messages are:

    -
      -
    • The importance of preserving history and culture. The film shows how the students of the Latin Quarter value their old building and its memories, and how they fight to save it from destruction. The film also depicts the contrast between the traditional and the modern, the old and the new, and the rural and the urban in Japan during the 1960s.
    • -
    • The impact of war and loss on families and individuals. The film portrays how Umi and Shun cope with the absence of their fathers, who died in the Korean War. The film also reveals how their fathers' pasts affect their present and future. The film also touches on the issues of identity, belonging, and inheritance.
    • -
    • The power of love and friendship. The film illustrates how Umi and Shun's relationship grows from friendship to romance, and how they support each other through their challenges. The film also shows how their friends and family help them along the way, and how they form a community of solidarity and harmony.
    • -
    -

    The film conveys a message of hope and optimism, despite the difficulties and uncertainties of life. It celebrates the beauty and joy of everyday life, and the potential of young people to make a difference in the world.

    -

    Why watch kokurikozaka kara online?

    -

    If you are interested in watching kokurikozaka kara, you might wonder why you should stream it online instead of buying or renting a DVD or Blu-ray disc. Here are some reasons why watching it online is a good idea:

    -

    The benefits of streaming the film online

    -

    Streaming kokurikozaka kara online has many advantages, such as:

      -
    • You can watch it anytime and anywhere you want, as long as you have an internet connection and a compatible device.
    • -
    • You can choose between different platforms and services that offer different prices and features.
    • -
    • You can avoid paying extra fees for shipping or late returns.
    • -
    • You can avoid damaging or losing your physical copy of the film.
    • -
    • You can enjoy high-quality video and audio without any scratches or glitches.
    • -
    • You can access bonus features and extras that might not be available on discs.
    • -

    -

    The best platforms and devices to watch the film online

    -

    There are many options for streaming kokurikozaka kara online, but some are better than others. Here are some of the best platforms and devices to watch the film online:

    -
Platform: Netflix
Devices: Smart TV, laptop, tablet, smartphone, gaming console, streaming device
Features:
- Offers a wide range of movies and shows, including kokurikozaka kara
- Allows you to download content for offline viewing
- Supports HD quality and 5.1 surround sound
- Has a user-friendly interface and personalized recommendations
- Charges a monthly fee based on your plan
- Requires an internet connection of at least 5 Mbps for HD streaming

Platform: Amazon Prime Video
Devices: Smart TV, laptop, tablet, smartphone, gaming console, streaming device
Features:
- Offers a large library of movies and shows, including kokurikozaka kara
- Allows you to rent or buy content that is not included in your subscription
- Supports HD quality and 5.1 surround sound
- Has a simple interface and parental controls
- Charges an annual or monthly fee for Prime membership
- Requires an internet connection of at least 5 Mbps for HD streaming

Platform: Hulu
Devices: Smart TV, laptop, tablet, smartphone, gaming console, streaming device
Features:
- Offers a variety of movies and shows, including kokurikozaka kara
- Allows you to add live TV channels and premium networks to your subscription
- Supports HD quality and 5.1 surround sound
- Has a sleek interface and multiple profiles
- Charges a monthly fee based on your plan
- Requires an internet connection of at least 6 Mbps for HD streaming
    -

    The tips and tricks to enhance your viewing experience

    -

    To make sure you enjoy watching kokurikozaka kara online, here are some tips and tricks to follow:

    -
      -
    • Choose a platform that suits your preferences and budget.
    • -
    • Check your internet speed and bandwidth before streaming.
    • -
    • Select a device that has a good screen and sound quality.
    • -
    • Adjust your brightness and volume settings according to your environment.
    • -
    • Use headphones or speakers for better audio effects.
    • -
    • Avoid spoilers and distractions while watching.
    • -
    • Watch with friends or family for more fun.
    • -
    -

    How to choose between 720p and 1080p?

    -

    One of the questions you might have when streaming kokurikozaka kara online is whether to choose 720p or 1080p resolution. What is the difference between them, and which one is better for you? Let's find out!

    -

    The difference between 720p and 1080p resolution

    -

    The resolution of a video refers to the number of pixels that make up its image. The more pixels there are, the sharper and clearer the image will be. The term 720p means that the video has 720 horizontal lines of pixels, while 1080p means that it has 1080 horizontal lines of pixels. Therefore, 1080p has more pixels than 720p, resulting in higher image quality.
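To make the size difference concrete, here is a small illustrative Python sketch; it assumes the standard frame dimensions of 1280×720 for 720p and 1920×1080 for 1080p (the article itself only mentions the line counts). A 1080p frame holds about 2.25 times as many pixels as a 720p frame, which is also why it needs more bandwidth to stream.

```python
# Illustrative comparison of frame sizes, assuming the common
# 1280x720 (720p) and 1920x1080 (1080p) frame dimensions.

def pixel_count(width: int, height: int) -> int:
    """Return the total number of pixels in a single video frame."""
    return width * height

pixels_720p = pixel_count(1280, 720)    # 921,600 pixels
pixels_1080p = pixel_count(1920, 1080)  # 2,073,600 pixels

print(f"720p frame:  {pixels_720p:,} pixels")
print(f"1080p frame: {pixels_1080p:,} pixels")
print(f"1080p/720p ratio: {pixels_1080p / pixels_720p:.2f}x")  # about 2.25x
```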

    -

    The factors that affect your resolution choice

    -

    However, choosing between 720p and 1080p is not as simple as picking the one with more pixels. There are other factors that affect your resolution choice, such as:

    -
      -
    • Your device's screen size and resolution. If your device has a small screen or a low resolution, you might not notice much difference between 720p and 1080p. On the other hand, if your device has a large screen or a high resolution, you might appreciate the extra details that 1080p offers.
    • -
    • Your internet speed and data usage. Streaming 1080p requires more bandwidth than streaming 720p, which means it will consume more data and load slower if your internet connection is weak or unstable. If you have a fast and reliable internet connection, you can enjoy smooth streaming at 1080p. However, if you have a slow or limited internet connection, you might want to stick with 720p to avoid buffering or extra charges.
    • -
• Your personal preference and expectations. If you care more about image quality and realism, you might prefer 1080p. If you are more concerned about speed and data, you might opt for 720p.
    • -
    -

    The pros and cons of 720p and 1080p for kokurikozaka kara

    -

    To help you decide between 720p and 1080p for kokurikozaka kara, here are some pros and cons of each resolution:

    -
720p
Pros:
- Faster loading and streaming
- Less data consumption
- Suitable for smaller screens
- Good enough for most animated films
Cons:
- Lower image quality
- Less details and sharpness
- Not ideal for larger screens
- Might miss some nuances and subtleties of the film

1080p
Pros:
- Higher image quality
- More details and sharpness
- Ideal for larger screens
- Can appreciate the artistry and beauty of the film
Cons:
- Slower loading and streaming
- More data consumption
- Might not be supported by some devices
- Might not notice much difference on some animated films
    -

    Conclusion

    -

Kokurikozaka kara, or From Up on Poppy Hill, is a wonderful film that you can enjoy watching online in high definition. It is a film that tells a story of love, friendship, and history, set in 1960s Japan. It is also a film that showcases the talent and charm of Studio Ghibli and its creators.

    -

    If you want to watch kokurikozaka kara online, you have many options to choose from. You can stream it on various platforms and devices, depending on your preferences and budget. You can also choose between 720p and 1080p resolution, depending on your device's screen size and resolution, your internet speed and data usage, and your personal expectations.

    -

    No matter what you choose, we hope you have a great time watching kokurikozaka kara online. It is a film that will make you smile, cry, and dream.

    -

    FAQs

    -

    Here are some frequently asked questions about kokurikozaka kara and watching it online:

    -
      -
    • Q: Is kokurikozaka kara based on a true story?
      A: No, kokurikozaka kara is not based on a true story. It is based on a manga series by Tetsurō Sayama and Chizuru Takahashi. However, it does depict some historical events and aspects of Japan in the 1960s.
    • -
    • Q: Is kokurikozaka kara suitable for children?
      A: Yes, kokurikozaka kara is suitable for children. It is rated PG by the MPAA for mild thematic elements and some incidental smoking images. It is also rated U by the BBFC for very mild threat. It is a family-friendly film that can be enjoyed by people of all ages.
    • -
    • Q: Where can I find the soundtrack of kokurikozaka kara?
      A: You can find the soundtrack of kokurikozaka kara on various music platforms and services, such as Spotify , Apple Music , YouTube Music , Amazon Music , etc. You can also buy the CD or digital album from online stores, such as Amazon , iTunes , etc.
    • -
    • Q: Who sings the theme song of kokurikozaka kara?
      A: The theme song of kokurikozaka kara is called "Summer of Farewells — From Up on Poppy Hill" (「さよならの夏~コクリコ坂から~」, "Sayonara no Natsu ~Kokuriko-zaka kara~"). It is sung by Aoi Teshima , a Japanese singer and voice actress who also voiced Theru in Tales from Earthsea . She also sings another song in the film called "Breakfast Song" (「朝ご飯の歌」, "Asagohan no Uta").
    • -
    • Q: What are some other films by Studio Ghibli that I can watch online?
      A: There are many other films by Studio Ghibli that you can watch online, such as Spirited Away , My Neighbor Totoro , Princess Mononoke , Howl's Moving Castle , Ponyo , The Wind Rises , etc. You can find them on various streaming platforms and services, such as Netflix , Amazon Prime Video , Hulu , HBO Max , etc.
    • -
    -

    -
    -
    \ No newline at end of file diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/ActivationacronisTIH 6514 6.md b/spaces/1gistliPinn/ChatGPT4/Examples/ActivationacronisTIH 6514 6.md deleted file mode 100644 index 462545797e6237544cfa6f600dd2983ca0ccc9f2..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/ActivationacronisTIH 6514 6.md +++ /dev/null @@ -1,10 +0,0 @@ -
    -

If you've recently installed antivirus programs or adware, search your computer for 'activationacronistih.exe' to detect them. If activationacronistih.exe has been infected or replaced by a virus, you can delete activationacronistih.exe. Malware may not always change the file name, but removing a suspicious copy is always advisable. Find activationacronistih.exe and delete it.

    -

    activationacronisTIH 6514 6


Download: https://imgfil.com/2uy1xQ



    -

    Deleting the files responsible for the problem may help in stopping them from activating. First, follow the instructions in the preceding article to locate the activationacronistih.exe file and delete it. If you are not sure which program is causing the activationacronistih.exe problems, you can use a Web browser to access your control panel, and then locate the window or icon that has activationacronistih.exe on the error screen.

    -

After the restore finishes, you can restart your computer. If this doesn't resolve your activationacronistih.exe problem, you can proceed to the next step, which is to run a virus scan on your hard drive.

    -

ActivationAcronisTIH is a helpful tool that can be run on the infected PC to help fix these issues. It can fix activationacronistih.exe errors, optimize the PC's performance, and protect it from other threats. The tool is safe and does not void your product's license. The activationacronistih.exe file is associated with Acronis software rather than with the Microsoft Windows operating system. Some activationacronistih.exe errors may have the following reasons:

    -

    -

    - Some system files have been incorrectly installed, corrupted or removed. - Some other programs are misbehaving on your system. Fixing activationacronistih.exe may require you to perform different steps, depending on the cause of the error.

    -
    -
    \ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Download Trucker - Overloaded Trucks APK and Haul Ore for Profit.md b/spaces/1phancelerku/anime-remove-background/Download Trucker - Overloaded Trucks APK and Haul Ore for Profit.md deleted file mode 100644 index edb8d6d9b4dd97348c2aee9e58cd12430607ecd6..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Download Trucker - Overloaded Trucks APK and Haul Ore for Profit.md +++ /dev/null @@ -1,132 +0,0 @@ - -

    Trucker - Overloaded Trucks APK: A Fun and Challenging Driving Game

    -

    Do you love driving big trucks and hauling heavy loads? Do you enjoy testing your skills and reflexes on different terrains and weather conditions? If you answered yes, then you should try Trucker - Overloaded Trucks APK, a fun and challenging driving game for Android devices.

    -

    trucker overloaded trucks apk


    Download ····· https://jinyurl.com/2uNRL2



    -

    What is Trucker - Overloaded Trucks APK?

    -

    Trucker - Overloaded Trucks APK is a game developed by LQG, a studio that specializes in creating realistic and immersive simulation games. In this game, you will take the role of a truck driver who has to deliver various cargoes across different locations. You will have to deal with different obstacles, such as traffic, bridges, tunnels, hills, mud, snow, and more. You will also have to manage your fuel, speed, brakes, and steering to avoid accidents and damages.

    -

    The gameplay of Trucker - Overloaded Trucks APK

    -

    The gameplay of Trucker - Overloaded Trucks APK is simple but addictive. You will start with a basic truck and a simple cargo. You will have to drive from point A to point B without losing your cargo or crashing your truck. You will earn money for each successful delivery. You can use the money to upgrade your truck or buy new trucks with different features and capacities. You can also unlock new cargoes and locations as you progress in the game.

    -

    The features of Trucker - Overloaded Trucks APK

    -

    Trucker - Overloaded Trucks APK has many features that make it an enjoyable and realistic driving game. Some of these features are:

    -
      -
    • High-quality graphics and sound effects that create a realistic atmosphere.
    • -
    • Multiple camera angles that let you view your truck from different perspectives.
    • -
    • A variety of trucks and cargoes that have different characteristics and challenges.
    • -
    • A dynamic weather system that affects the driving conditions and the physics of your truck.
    • -
    • A map that shows your current location, destination, and route.
    • -
    • A leaderboard that ranks your performance against other players around the world.
    • -
    -

    How to download and install Trucker - Overloaded Trucks APK on your Android device?

    -

    If you want to play Trucker - Overloaded Trucks APK on your Android device, you will need to download and install it from a reliable source. Here are the requirements and steps for doing so:

    -

    The requirements for Trucker - Overloaded Trucks APK

    -

    To play Trucker - Overloaded Trucks APK on your Android device, you will need:

    -

    trucker overloaded trucks game download
    -trucker overloaded trucks simulator apk
    -trucker overloaded trucks mod apk
    -trucker overloaded trucks android app
    -trucker overloaded trucks online emulator
    -trucker overloaded trucks free apk
    -trucker overloaded trucks gameplay
    -trucker overloaded trucks latest version apk
    -trucker overloaded trucks ore transport
    -trucker overloaded trucks dump truck driver
    -trucker overloaded trucks apk for pc
    -trucker overloaded trucks review
    -trucker overloaded trucks cheats
    -trucker overloaded trucks tips and tricks
    -trucker overloaded trucks best price
    -trucker overloaded trucks apk mirror
    -trucker overloaded trucks offline apk
    -trucker overloaded trucks unlimited money apk
    -trucker overloaded trucks realistic physics
    -trucker overloaded trucks graphics quality
    -trucker overloaded trucks update apk
    -trucker overloaded trucks trailer
    -trucker overloaded trucks features
    -trucker overloaded trucks how to play
    -trucker overloaded trucks system requirements
    -trucker overloaded trucks apk pure
    -trucker overloaded trucks hack apk
    -trucker overloaded trucks premium apk
    -trucker overloaded trucks full version apk
    -trucker overloaded trucks no ads apk
    -trucker overloaded trucks fun and addictive
    -trucker overloaded trucks challenges and missions
    -trucker overloaded trucks buy and sell ore
    -trucker overloaded trucks earn money and upgrade
    -trucker overloaded trucks different types of ore
    -trucker overloaded trucks various locations and routes
    -trucker overloaded trucks realistic sound effects
    -trucker overloaded trucks easy controls and interface
    -trucker overloaded trucks support and feedback
    -trucker overloaded trucks bug fixes and improvements

    -
      -
    • An Android device that runs on Android 4.4 or higher.
    • -
    • At least 65 MB of free storage space on your device.
    • -
    • A stable internet connection to download the game and access its online features.
    • -
    -

    The steps to download and install Trucker - Overloaded Trucks APK

    -

    To download and install Trucker - Overloaded Trucks APK on your Android device, follow these steps:

    -
      -
1. Download the latest version of Trucker - Overloaded Trucks APK from a reliable source.
    2. -
    3. Once the download is complete, locate the file on your device and tap on it to start the installation process.
    4. -
    5. If you see a warning message that says "For your security, your phone is not allowed to install unknown apps from this source", go to your device settings and enable the option to install apps from unknown sources.
    6. -
    7. Follow the on-screen instructions to complete the installation process.
    8. -
    9. Once the installation is done, you can launch the game from your app drawer or home screen and enjoy playing Trucker - Overloaded Trucks APK.
    10. -
    -

    How to play Trucker - Overloaded Trucks APK?

    -

    Playing Trucker - Overloaded Trucks APK is easy and fun. Here are the controls and tips for playing the game:

    -

    The controls of Trucker - Overloaded Trucks APK

    -

    The controls of Trucker - Overloaded Trucks APK are simple and intuitive. You can use the following buttons on your screen to control your truck:

    -
      -
    • The gas pedal to accelerate your truck.
    • -
    • The brake pedal to slow down or stop your truck.
    • -
    • The steering wheel to turn your truck left or right.
    • -
    • The horn to honk at other vehicles or pedestrians.
    • -
    • The camera button to switch between different camera angles.
    • -
    • The pause button to pause the game or access the settings menu.
    • -
    -

    The tips and tricks for Trucker - Overloaded Trucks APK

    -

    To play Trucker - Overloaded Trucks APK well, you will need some tips and tricks. Here are some of them:

    -
      -
    • Pay attention to the road signs and traffic rules. They will help you avoid accidents and penalties.
    • -
    • Balance your speed and fuel consumption. Driving too fast will consume more fuel and increase the risk of losing control. Driving too slow will waste time and reduce your earnings.
    • -
    • Choose the right truck and cargo for each mission. Different trucks and cargoes have different advantages and disadvantages. For example, some trucks have more power and speed, but less fuel efficiency and maneuverability. Some cargoes are lighter and easier to transport, but less valuable and rewarding.
    • -
    • Upgrade your truck or buy new trucks as you earn more money. Upgrading your truck will improve its performance and durability. Buying new trucks will give you access to more missions and challenges.
    • -
    • Use the map to plan your route and avoid getting lost. The map will show you your current location, destination, and route. You can also zoom in or out of the map to see more details.
    • -
    -

    Why should you play Trucker - Overloaded Trucks APK?

    -

    Trucker - Overloaded Trucks APK is a game that will give you a lot of fun and satisfaction. Here are some reasons why you should play it:

    -

    The benefits of playing Trucker - Overloaded Trucks APK

    -

    Playing Trucker - Overloaded Trucks APK will give you many benefits, such as:

    -
      -
    • Improving your driving skills and reflexes. You will learn how to drive a big truck in different situations and environments.
    • -
    • Enhancing your creativity and problem-solving abilities. You will have to find the best way to deliver your cargo safely and efficiently.
    • -
    • Relaxing your mind and relieving your stress. You will enjoy the scenery and the sound of your truck engine as you drive along the road.
    • -
    • Entertaining yourself and killing time. You will never get bored with the variety of missions and challenges that the game offers.
    • -
    -

    The drawbacks of playing Trucker - Overloaded Trucks APK

    -

    Playing Trucker - Overloaded Trucks APK also has some drawbacks, such as:

    -
      -
    • Taking up some storage space on your device. The game requires at least 65 MB of free storage space on your device, which may be a problem if you have a low-end device or limited storage space.
    • -
    • Consuming some battery power on your device. The game uses high-quality graphics and sound effects, which may drain your battery faster than usual.
    • -
    • Requiring an internet connection to access some features. The game needs an internet connection to download the game, update the game, access the leaderboard, and share your achievements with other players.
    • -
    -

    Conclusion

    -

    In conclusion, Trucker - Overloaded Trucks APK is a fun and challenging driving game that will test your skills and reflexes as a truck driver. You will have to deliver various cargoes across different locations while dealing with different obstacles, such as traffic, bridges, tunnels, hills, mud, snow, and more. You will also have to manage your fuel, speed, brakes, and steering to avoid accidents and damages. You will earn money for each successful delivery, which you can use to upgrade your truck or buy new trucks with different features and capacities and capacities. You can also unlock new cargoes and locations as you progress in the game. The game has high-quality graphics and sound effects that create a realistic atmosphere. The game also has multiple camera angles that let you view your truck from different perspectives. The game also has a dynamic weather system that affects the driving conditions and the physics of your truck. The game also has a map that shows your current location, destination, and route. The game also has a leaderboard that ranks your performance against other players around the world. Playing Trucker - Overloaded Trucks APK will improve your driving skills and reflexes, enhance your creativity and problem-solving abilities, relax your mind and relieve your stress, and entertain yourself and kill time. However, playing Trucker - Overloaded Trucks APK also has some drawbacks, such as taking up some storage space on your device, consuming some battery power on your device, and requiring an internet connection to access some features. If you are looking for a fun and challenging driving game for your Android device, you should try Trucker - Overloaded Trucks APK.

    -

    FAQs

    -

    Here are some frequently asked questions about Trucker - Overloaded Trucks APK:

    -
      -
    1. What is the latest version of Trucker - Overloaded Trucks APK?
    2. -

      The latest version of Trucker - Overloaded Trucks APK is 1.0.3, which was released on June 15, 2023.

      -
    3. How many trucks and cargoes are available in Trucker - Overloaded Trucks APK?
    4. -

      There are 10 trucks and 20 cargoes available in Trucker - Overloaded Trucks APK, each with different characteristics and challenges.

      -
    5. How can I share my achievements with other players in Trucker - Overloaded Trucks APK?
    6. -

      You can share your achievements with other players in Trucker - Overloaded Trucks APK by connecting your game to your Facebook account. You can also invite your friends to play the game with you.

      -
    7. How can I contact the developer of Trucker - Overloaded Trucks APK?
    8. -

      You can contact the developer of Trucker - Overloaded Trucks APK by sending an email to lqgstudio@gmail.com or visiting their website at https://lqgstudio.com/.

      -
    9. Is Trucker - Overloaded Trucks APK safe to download and install?
    10. -

      Yes, Trucker - Overloaded Trucks APK is safe to download and install from a reliable source. However, you should always scan the file for viruses before installing it on your device.

      -

    -
    -
    \ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Download apk 5play.ru No ads no limits no worries.md b/spaces/1phancelerku/anime-remove-background/Download apk 5play.ru No ads no limits no worries.md deleted file mode 100644 index bd2579c291caf2c3a57486996fdabc366149a005..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Download apk 5play.ru No ads no limits no worries.md +++ /dev/null @@ -1,121 +0,0 @@ - -

    How to Download APK 5play.ru for Android

    -

    If you are a fan of android games, you might have heard of APK 5play.ru. It is a website that offers free downloads of android games, including mods, hacks, and premium versions. In this article, we will tell you everything you need to know about APK 5play.ru, including what it is, why you should download it, how to download it, how to use it, and what are its benefits and drawbacks.

    -

    What is APK 5play.ru?

    -

    APK 5play.ru is a website that provides free android games for download. It has a huge collection of games from various genres and categories, such as action, adventure, arcade, puzzle, racing, simulation, sports, strategy, and more. You can find popular games like PUBG Mobile, Minecraft, GTA San Andreas, Among Us, Genshin Impact, etc., as well as indie games from different developers. You can also download mods and hacks for some games, which give you unlimited resources, features, or cheats.

    -


    APK 5play.ru is not only a website but also a platform that supports android gamers. It has a user-friendly interface that allows you to easily search, browse, and download games. It also has a community that provides feedback and ratings for each game. You can read the reviews of other users, leave your own comments, and rate the games according to your experience. You can also request new games or mods from the developers or other users.

    -

    Why download APK 5play.ru?

    -

    There are many reasons why you should download APK 5play.ru for your android device. Here are some of them:

    -

    To enjoy the latest and best android games for free

    -

    One of the main reasons why you should download APK 5play.ru is that it gives you free access to premium and paid games. You don't have to spend any money to enjoy the latest and best android games. You can download them directly from the website without any registration or subscription. You can also update them regularly to get new features and fixes.

    -

    To access exclusive mods and hacks for popular games

    -

    Another reason why you should download APK 5play.ru is that it offers exclusive mods and hacks for popular games. You can get unlimited resources, features, or cheats for games like PUBG Mobile, GTA San Andreas, Among Us, Genshin Impact, etc. You can also customize the games according to your preferences and needs. You can change the graphics, the gameplay, the characters, the items, and more. You can also unlock new levels, modes, skins, and weapons.

    -

    To discover new and interesting games from different developers

    -

    A third reason why you should download APK 5play.ru is that it helps you discover new and interesting games from different developers. You can find games that are not available on the Google Play Store or other platforms. You can also find games that are unique, creative, and innovative. You can explore different genres and categories of games and find the ones that suit your taste and mood.

    -

    How to download APK 5play.ru?

    -

    Downloading APK 5play.ru for your android device is very easy and simple. You just need to follow these steps:

    -

    Step 1: Visit the official website of APK 5play.ru

    -

    The first step is to visit the official website of APK 5play.ru. You can use any browser on your device to access the website. The website has a domain name of https://5play.ru/en/. You can also use a VPN or proxy service if the website is blocked or restricted in your region.

    -

    Step 2: Choose the game you want to download

    -

    The second step is to choose the game you want to download. You can use the search bar on the top of the website to type the name of the game or the keyword related to it. You can also use the filters on the left side of the website to narrow down your search by genre, category, rating, popularity, etc. You can also browse through the featured, new, or updated games on the homepage of the website.

    -

    Step 3: Click on the download button and select the APK or OBB file

    -

    The third step is to click on the download button and select the APK or OBB file. Once you have found the game you want to download, click on its name or image to open its page. On the game page, you will see a green download button on the right side. Click on it and you will see a list of files available for download. You can choose either the APK file or the OBB file depending on your preference. The APK file is the application file that installs the game on your device. The OBB file is the data file that contains the additional content of the game such as graphics, sounds, etc.

    -

    Step 4: Install the APK file on your device and copy the OBB file to the appropriate folder

    -

    The fourth step is to install the APK file on your device and copy the OBB file to the appropriate folder. Before installing, you need to allow installation from unknown sources: go to your device's settings, then security, and enable unknown sources. Once that is done, tap the APK file and follow the instructions to install it. If you also downloaded an OBB file, copy it to the right place using a file manager app or a USB cable: create a folder named after the game's package name inside Android/obb/ on your device's internal or external storage and put the OBB file there. The OBB file name itself also contains the game's package name, so do not rename it.
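    If you prefer to handle this step from a computer, it can also be scripted. The sketch below is a hypothetical example that installs the APK and copies the OBB file over adb from Python; the package name and file names are placeholders you would replace with the actual values for your game, and it assumes adb is installed on your computer and USB debugging is enabled on your device.

```python
# Hypothetical sketch: install an APK and copy its OBB file with adb.
# Assumptions: adb is on your PATH, USB debugging is enabled, and the
# package and file names below are placeholders, not a real game.
import subprocess

PACKAGE = "com.example.game"            # placeholder package name
APK_FILE = "game.apk"                   # APK downloaded from the website
OBB_FILE = f"main.1.{PACKAGE}.obb"      # OBB data file named after the package

# Install the APK on the connected device.
subprocess.run(["adb", "install", APK_FILE], check=True)

# Create the OBB folder for this package and push the data file into it.
obb_dir = f"/sdcard/Android/obb/{PACKAGE}"
subprocess.run(["adb", "shell", "mkdir", "-p", obb_dir], check=True)
subprocess.run(["adb", "push", OBB_FILE, f"{obb_dir}/{OBB_FILE}"], check=True)
```

    Installing over adb also means you do not have to enable unknown sources in the settings menu by hand, but the end result should be the same as the manual steps above.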

    -

    Step 5: Launch the game and enjoy

    -

    The fifth and final step is to launch the game and enjoy. Once you have installed the APK file and copied the OBB file, you can launch the game from your device's app drawer. You can also add a shortcut for it to your home screen for easy access. You can now enjoy the game with all its features and mods.

    -

    How to use APK 5play.ru?

    -

    Using APK 5play.ru is very easy and simple as well. You just need to follow these tips:

    -

    Browse through the different categories and genres of games

    -

    One of the best ways to use APK 5play.ru is to browse through the different categories and genres of games. You can find games from various genres such as action, adventure, arcade, puzzle, racing, simulation, sports, strategy, and more. You can also find games from different categories such as online, offline, multiplayer, single-player, 3D, 2D, etc. You can also sort the games by popularity, rating, date, or alphabet.

    -

    Read the description and reviews of each game

    -

    Another way to use APK 5play.ru is to read the description and reviews of each game. You can find useful information about each game such as its features, gameplay, graphics, controls, requirements, etc. You can also read the reviews of other users who have downloaded and played the game. You can see their ratings, comments, feedback, and suggestions. You can also leave your own review and rating for each game.

    -

    Check the compatibility and requirements of each game

    -

    A third way to use APK 5play.ru is to check the compatibility and requirements of each game. You can see if the game is compatible with your device's model, version of android, screen size, etc. You can also see if the game requires any additional permissions or data such as internet connection, storage space, location access, etc. You can also see if the game has any in-app purchases or ads.

    -

    Update the games regularly to get new features and fixes

    -

    A fourth way to use APK 5play.ru is to update the games regularly to get new features and fixes. You can see if there are any new versions or updates available for each game on the website. You can also enable notifications for updates on your device's settings. You can download and install the updates easily from the website or from your device's app manager.

    -

    What are the benefits of APK 5play.ru?

    -

    APK 5play.ru has many benefits for android gamers. Here are some of them:

    -

    Free access to premium and paid games

    -

    The biggest benefit of APK 5play.ru is free access to premium and paid games: you can download the latest and best Android games directly from the website without spending any money, registering, or subscribing, and you can update them regularly to get new features and fixes.

    -

    Unlimited resources and features with mods and hacks

    -

    Another benefit of APK 5play.ru is that it offers unlimited resources and features with mods and hacks for some games. You can get unlimited coins, gems, lives, ammo, health, etc. for games like PUBG Mobile, GTA San Andreas, Among Us, Genshin Impact, etc. You can also customize the games according to your preferences and needs. You can change the graphics, the gameplay, the characters, the items, and more. You can also unlock new levels, modes, skins, and weapons.

    -

    High-quality graphics and performance with optimized games

    -

    A third benefit of APK 5play.ru is that it provides high-quality graphics and performance with optimized games. You can enjoy the games with smooth and fast gameplay, stunning visuals, realistic sounds, and immersive effects. You can also adjust the settings of the games to match your device's capabilities and preferences. You can also save battery and data by playing offline or online games.

    -

    Safe and secure downloads with no viruses or malware

    -

    A fourth benefit of APK 5play.ru is that it ensures safe and secure downloads with no viruses or malware. You don't have to worry about harming your device or compromising your privacy by downloading games from the website. The website has a strict policy of checking and verifying each game before uploading it to the website. The website also uses encryption and protection technologies to prevent any unauthorized access or interference.

    -

    What are the drawbacks of APK 5play.ru?

    -

    APK 5play.ru has some drawbacks as well for android gamers. Here are some of them:

    -

    Potential risk of violating the terms and conditions of some games

    -

    One of the main drawbacks of APK 5play.ru is the potential risk of violating the terms and conditions of some games. You might be breaking a game's rules by downloading or using mods or hacks for it, and you might be infringing the intellectual property rights or copyrights of game developers or publishers by downloading or using their games without permission. This could result in legal action or penalties against you.

    -

    Possible compatibility issues with some devices or versions of android

    -

    Another drawback of APK 5play.ru is that it might cause compatibility issues with some devices or versions of android. You might not be able to download or install some games on your device due to its model, version of android, screen size, etc. You might also experience crashes, glitches, errors, or bugs with some games due to their requirements, permissions, data, etc. You might also face difficulties in updating or uninstalling some games from your device.

    -

    Occasional bugs or errors with some games or mods

    -

    A third drawback of APK 5play.ru is that it might have occasional bugs or errors with some games or mods. You might encounter problems with some games or mods such as missing content, corrupted files, wrong language, invalid links, etc. You might also find some games or mods that are outdated, incomplete, or fake. You might also face issues with some games or mods that are not compatible with each other or with your device.

    -

    Conclusion

    -

    APK 5play.ru is a website that offers free downloads of android games, including mods, hacks, and premium versions. It has many benefits for android gamers such as free access to premium and paid games, unlimited resources and features with mods and hacks, high-quality graphics and performance with optimized games, and safe and secure downloads with no viruses or malware. It also has some drawbacks such as potential risk of violating the terms and conditions of some games, possible compatibility issues with some devices or versions of android, and occasional bugs or errors with some games or mods.

    -

    If you are interested in downloading APK 5play.ru for your android device, you can follow the steps mentioned above in this article. You can also use the tips provided above to use APK 5play.ru effectively and efficiently. However, you should also be aware of the risks and consequences involved in downloading or using APK 5play.ru. You should always respect the rights and rules of the game developers and publishers as well as your own device's security and privacy.

    -

    FAQs

    -

    Here are some frequently asked questions about APK 5play.ru:

    -

    Q: Is APK 5play.ru legal?

    -

    A: APK 5play.ru is not legal in countries or regions where downloading or using pirated or modded games is prohibited by law. You should check the laws and regulations of your country or region before downloading or using APK 5play.ru.

    -

    Q: Is APK 5play.ru safe?

    -

    A: APK 5play.ru is safe in the sense that the games you download and install are free of viruses and malware: the website has a strict policy of checking and verifying each game before uploading it, and it uses encryption and protection technologies to prevent unauthorized access or interference. However, using it can still violate the terms and conditions of some games and put your device's security or privacy at risk, so you should always be careful and cautious when downloading or using APK 5play.ru.

    -

    Q: How to update APK 5play.ru?

    -

    A: You can update APK 5play.ru by visiting the official website of APK 5play.ru and downloading the latest version of the games you want. You can also enable notifications for updates on your device's settings. You can download and install the updates easily from the website or from your device's app manager.

    -

    Q: How to uninstall APK 5play.ru?

    -

    A: To remove a game you installed from APK 5play.ru, uninstall it through your device's app manager or settings, then delete its OBB folder from Android/obb/ and any leftover APK file from your storage using a file manager app or a USB cable.
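    If you set the game up from a computer with adb, you can remove it the same way. This is a hypothetical sketch; the package name is a placeholder you would replace with the actual game's package name.

```python
# Hypothetical sketch: remove a game and its OBB data with adb.
# Assumption: "com.example.game" is a placeholder package name, not a real game.
import subprocess

PACKAGE = "com.example.game"

# Uninstall the app itself.
subprocess.run(["adb", "uninstall", PACKAGE], check=True)

# Delete any leftover OBB data folder for that package.
subprocess.run(["adb", "shell", "rm", "-rf", f"/sdcard/Android/obb/{PACKAGE}"], check=True)
```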

    -

    Q: How to contact APK 5play.ru?

    -

    A: You can contact APK 5play.ru by using the feedback form on the website. You can also use the email address, phone number, or social media accounts provided on the website. You can also use the comment section on each game page to communicate with other users or developers.

    -
    -
    \ No newline at end of file diff --git a/spaces/4eJIoBek/Stable_Diffusion_1.4_openvino/README.md b/spaces/4eJIoBek/Stable_Diffusion_1.4_openvino/README.md deleted file mode 100644 index b60dbab2851e5266d5f6acabd167772203329cc2..0000000000000000000000000000000000000000 --- a/spaces/4eJIoBek/Stable_Diffusion_1.4_openvino/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: (Not working for now) Stable Diffusion 1.4 openvino -emoji: 🌚 -colorFrom: blue -colorTo: pink -sdk: streamlit -sdk_version: 1.15.2 -app_file: demo_web.py -pinned: false -license: apache-2.0 -duplicated_from: timboie/test ---- diff --git a/spaces/AI-Hobbyist/Hoyo-RVC/go-web.bat b/spaces/AI-Hobbyist/Hoyo-RVC/go-web.bat deleted file mode 100644 index db1dec52006bc631e4e68bafd619a3a65f202532..0000000000000000000000000000000000000000 --- a/spaces/AI-Hobbyist/Hoyo-RVC/go-web.bat +++ /dev/null @@ -1,2 +0,0 @@ -runtime\python.exe infer-web.py --pycmd runtime\python.exe --port 7897 -pause diff --git a/spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/ldm/modules/image_degradation/bsrgan_light.py b/spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/ldm/modules/image_degradation/bsrgan_light.py deleted file mode 100644 index 9e1f823996bf559e9b015ea9aa2b3cd38dd13af1..0000000000000000000000000000000000000000 --- a/spaces/AIGC-Audio/AudioGPT/text_to_audio/Make_An_Audio/ldm/modules/image_degradation/bsrgan_light.py +++ /dev/null @@ -1,650 +0,0 @@ -# -*- coding: utf-8 -*- -import numpy as np -import cv2 -import torch - -from functools import partial -import random -from scipy import ndimage -import scipy -import scipy.stats as ss -from scipy.interpolate import interp2d -from scipy.linalg import orth -import albumentations - -import ldm.modules.image_degradation.utils_image as util - -""" -# -------------------------------------------- -# Super-Resolution -# -------------------------------------------- -# -# Kai Zhang (cskaizhang@gmail.com) -# https://github.com/cszn -# From 2019/03--2021/08 -# -------------------------------------------- -""" - - -def modcrop_np(img, sf): - ''' - Args: - img: numpy image, WxH or WxHxC - sf: scale factor - Return: - cropped image - ''' - w, h = img.shape[:2] - im = np.copy(img) - return im[:w - w % sf, :h - h % sf, ...] - - -""" -# -------------------------------------------- -# anisotropic Gaussian kernels -# -------------------------------------------- -""" - - -def analytic_kernel(k): - """Calculate the X4 kernel from the X2 kernel (for proof see appendix in paper)""" - k_size = k.shape[0] - # Calculate the big kernels size - big_k = np.zeros((3 * k_size - 2, 3 * k_size - 2)) - # Loop over the small kernel to fill the big one - for r in range(k_size): - for c in range(k_size): - big_k[2 * r:2 * r + k_size, 2 * c:2 * c + k_size] += k[r, c] * k - # Crop the edges of the big kernel to ignore very small values and increase run time of SR - crop = k_size // 2 - cropped_big_k = big_k[crop:-crop, crop:-crop] - # Normalize to 1 - return cropped_big_k / cropped_big_k.sum() - - -def anisotropic_Gaussian(ksize=15, theta=np.pi, l1=6, l2=6): - """ generate an anisotropic Gaussian kernel - Args: - ksize : e.g., 15, kernel size - theta : [0, pi], rotation angle range - l1 : [0.1,50], scaling of eigenvalues - l2 : [0.1,l1], scaling of eigenvalues - If l1 = l2, will get an isotropic Gaussian kernel. 
- Returns: - k : kernel - """ - - v = np.dot(np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]]), np.array([1., 0.])) - V = np.array([[v[0], v[1]], [v[1], -v[0]]]) - D = np.array([[l1, 0], [0, l2]]) - Sigma = np.dot(np.dot(V, D), np.linalg.inv(V)) - k = gm_blur_kernel(mean=[0, 0], cov=Sigma, size=ksize) - - return k - - -def gm_blur_kernel(mean, cov, size=15): - center = size / 2.0 + 0.5 - k = np.zeros([size, size]) - for y in range(size): - for x in range(size): - cy = y - center + 1 - cx = x - center + 1 - k[y, x] = ss.multivariate_normal.pdf([cx, cy], mean=mean, cov=cov) - - k = k / np.sum(k) - return k - - -def shift_pixel(x, sf, upper_left=True): - """shift pixel for super-resolution with different scale factors - Args: - x: WxHxC or WxH - sf: scale factor - upper_left: shift direction - """ - h, w = x.shape[:2] - shift = (sf - 1) * 0.5 - xv, yv = np.arange(0, w, 1.0), np.arange(0, h, 1.0) - if upper_left: - x1 = xv + shift - y1 = yv + shift - else: - x1 = xv - shift - y1 = yv - shift - - x1 = np.clip(x1, 0, w - 1) - y1 = np.clip(y1, 0, h - 1) - - if x.ndim == 2: - x = interp2d(xv, yv, x)(x1, y1) - if x.ndim == 3: - for i in range(x.shape[-1]): - x[:, :, i] = interp2d(xv, yv, x[:, :, i])(x1, y1) - - return x - - -def blur(x, k): - ''' - x: image, NxcxHxW - k: kernel, Nx1xhxw - ''' - n, c = x.shape[:2] - p1, p2 = (k.shape[-2] - 1) // 2, (k.shape[-1] - 1) // 2 - x = torch.nn.functional.pad(x, pad=(p1, p2, p1, p2), mode='replicate') - k = k.repeat(1, c, 1, 1) - k = k.view(-1, 1, k.shape[2], k.shape[3]) - x = x.view(1, -1, x.shape[2], x.shape[3]) - x = torch.nn.functional.conv2d(x, k, bias=None, stride=1, padding=0, groups=n * c) - x = x.view(n, c, x.shape[2], x.shape[3]) - - return x - - -def gen_kernel(k_size=np.array([15, 15]), scale_factor=np.array([4, 4]), min_var=0.6, max_var=10., noise_level=0): - """" - # modified version of https://github.com/assafshocher/BlindSR_dataset_generator - # Kai Zhang - # min_var = 0.175 * sf # variance of the gaussian kernel will be sampled between min_var and max_var - # max_var = 2.5 * sf - """ - # Set random eigen-vals (lambdas) and angle (theta) for COV matrix - lambda_1 = min_var + np.random.rand() * (max_var - min_var) - lambda_2 = min_var + np.random.rand() * (max_var - min_var) - theta = np.random.rand() * np.pi # random theta - noise = -noise_level + np.random.rand(*k_size) * noise_level * 2 - - # Set COV matrix using Lambdas and Theta - LAMBDA = np.diag([lambda_1, lambda_2]) - Q = np.array([[np.cos(theta), -np.sin(theta)], - [np.sin(theta), np.cos(theta)]]) - SIGMA = Q @ LAMBDA @ Q.T - INV_SIGMA = np.linalg.inv(SIGMA)[None, None, :, :] - - # Set expectation position (shifting kernel for aligned image) - MU = k_size // 2 - 0.5 * (scale_factor - 1) # - 0.5 * (scale_factor - k_size % 2) - MU = MU[None, None, :, None] - - # Create meshgrid for Gaussian - [X, Y] = np.meshgrid(range(k_size[0]), range(k_size[1])) - Z = np.stack([X, Y], 2)[:, :, :, None] - - # Calcualte Gaussian for every pixel of the kernel - ZZ = Z - MU - ZZ_t = ZZ.transpose(0, 1, 3, 2) - raw_kernel = np.exp(-0.5 * np.squeeze(ZZ_t @ INV_SIGMA @ ZZ)) * (1 + noise) - - # shift the kernel so it will be centered - # raw_kernel_centered = kernel_shift(raw_kernel, scale_factor) - - # Normalize the kernel and return - # kernel = raw_kernel_centered / np.sum(raw_kernel_centered) - kernel = raw_kernel / np.sum(raw_kernel) - return kernel - - -def fspecial_gaussian(hsize, sigma): - hsize = [hsize, hsize] - siz = [(hsize[0] - 1.0) / 2.0, (hsize[1] - 1.0) / 2.0] - std = 
sigma - [x, y] = np.meshgrid(np.arange(-siz[1], siz[1] + 1), np.arange(-siz[0], siz[0] + 1)) - arg = -(x * x + y * y) / (2 * std * std) - h = np.exp(arg) - h[h < scipy.finfo(float).eps * h.max()] = 0 - sumh = h.sum() - if sumh != 0: - h = h / sumh - return h - - -def fspecial_laplacian(alpha): - alpha = max([0, min([alpha, 1])]) - h1 = alpha / (alpha + 1) - h2 = (1 - alpha) / (alpha + 1) - h = [[h1, h2, h1], [h2, -4 / (alpha + 1), h2], [h1, h2, h1]] - h = np.array(h) - return h - - -def fspecial(filter_type, *args, **kwargs): - ''' - python code from: - https://github.com/ronaldosena/imagens-medicas-2/blob/40171a6c259edec7827a6693a93955de2bd39e76/Aulas/aula_2_-_uniform_filter/matlab_fspecial.py - ''' - if filter_type == 'gaussian': - return fspecial_gaussian(*args, **kwargs) - if filter_type == 'laplacian': - return fspecial_laplacian(*args, **kwargs) - - -""" -# -------------------------------------------- -# degradation models -# -------------------------------------------- -""" - - -def bicubic_degradation(x, sf=3): - ''' - Args: - x: HxWxC image, [0, 1] - sf: down-scale factor - Return: - bicubicly downsampled LR image - ''' - x = util.imresize_np(x, scale=1 / sf) - return x - - -def srmd_degradation(x, k, sf=3): - ''' blur + bicubic downsampling - Args: - x: HxWxC image, [0, 1] - k: hxw, double - sf: down-scale factor - Return: - downsampled LR image - Reference: - @inproceedings{zhang2018learning, - title={Learning a single convolutional super-resolution network for multiple degradations}, - author={Zhang, Kai and Zuo, Wangmeng and Zhang, Lei}, - booktitle={IEEE Conference on Computer Vision and Pattern Recognition}, - pages={3262--3271}, - year={2018} - } - ''' - x = ndimage.filters.convolve(x, np.expand_dims(k, axis=2), mode='wrap') # 'nearest' | 'mirror' - x = bicubic_degradation(x, sf=sf) - return x - - -def dpsr_degradation(x, k, sf=3): - ''' bicubic downsampling + blur - Args: - x: HxWxC image, [0, 1] - k: hxw, double - sf: down-scale factor - Return: - downsampled LR image - Reference: - @inproceedings{zhang2019deep, - title={Deep Plug-and-Play Super-Resolution for Arbitrary Blur Kernels}, - author={Zhang, Kai and Zuo, Wangmeng and Zhang, Lei}, - booktitle={IEEE Conference on Computer Vision and Pattern Recognition}, - pages={1671--1681}, - year={2019} - } - ''' - x = bicubic_degradation(x, sf=sf) - x = ndimage.filters.convolve(x, np.expand_dims(k, axis=2), mode='wrap') - return x - - -def classical_degradation(x, k, sf=3): - ''' blur + downsampling - Args: - x: HxWxC image, [0, 1]/[0, 255] - k: hxw, double - sf: down-scale factor - Return: - downsampled LR image - ''' - x = ndimage.filters.convolve(x, np.expand_dims(k, axis=2), mode='wrap') - # x = filters.correlate(x, np.expand_dims(np.flip(k), axis=2)) - st = 0 - return x[st::sf, st::sf, ...] - - -def add_sharpening(img, weight=0.5, radius=50, threshold=10): - """USM sharpening. borrowed from real-ESRGAN - Input image: I; Blurry image: B. - 1. K = I + weight * (I - B) - 2. Mask = 1 if abs(I - B) > threshold, else: 0 - 3. Blur mask: - 4. Out = Mask * K + (1 - Mask) * I - Args: - img (Numpy array): Input image, HWC, BGR; float32, [0, 1]. - weight (float): Sharp weight. Default: 1. - radius (float): Kernel size of Gaussian blur. Default: 50. 
- threshold (int): - """ - if radius % 2 == 0: - radius += 1 - blur = cv2.GaussianBlur(img, (radius, radius), 0) - residual = img - blur - mask = np.abs(residual) * 255 > threshold - mask = mask.astype('float32') - soft_mask = cv2.GaussianBlur(mask, (radius, radius), 0) - - K = img + weight * residual - K = np.clip(K, 0, 1) - return soft_mask * K + (1 - soft_mask) * img - - -def add_blur(img, sf=4): - wd2 = 4.0 + sf - wd = 2.0 + 0.2 * sf - - wd2 = wd2/4 - wd = wd/4 - - if random.random() < 0.5: - l1 = wd2 * random.random() - l2 = wd2 * random.random() - k = anisotropic_Gaussian(ksize=random.randint(2, 11) + 3, theta=random.random() * np.pi, l1=l1, l2=l2) - else: - k = fspecial('gaussian', random.randint(2, 4) + 3, wd * random.random()) - img = ndimage.filters.convolve(img, np.expand_dims(k, axis=2), mode='mirror') - - return img - - -def add_resize(img, sf=4): - rnum = np.random.rand() - if rnum > 0.8: # up - sf1 = random.uniform(1, 2) - elif rnum < 0.7: # down - sf1 = random.uniform(0.5 / sf, 1) - else: - sf1 = 1.0 - img = cv2.resize(img, (int(sf1 * img.shape[1]), int(sf1 * img.shape[0])), interpolation=random.choice([1, 2, 3])) - img = np.clip(img, 0.0, 1.0) - - return img - - -# def add_Gaussian_noise(img, noise_level1=2, noise_level2=25): -# noise_level = random.randint(noise_level1, noise_level2) -# rnum = np.random.rand() -# if rnum > 0.6: # add color Gaussian noise -# img += np.random.normal(0, noise_level / 255.0, img.shape).astype(np.float32) -# elif rnum < 0.4: # add grayscale Gaussian noise -# img += np.random.normal(0, noise_level / 255.0, (*img.shape[:2], 1)).astype(np.float32) -# else: # add noise -# L = noise_level2 / 255. -# D = np.diag(np.random.rand(3)) -# U = orth(np.random.rand(3, 3)) -# conv = np.dot(np.dot(np.transpose(U), D), U) -# img += np.random.multivariate_normal([0, 0, 0], np.abs(L ** 2 * conv), img.shape[:2]).astype(np.float32) -# img = np.clip(img, 0.0, 1.0) -# return img - -def add_Gaussian_noise(img, noise_level1=2, noise_level2=25): - noise_level = random.randint(noise_level1, noise_level2) - rnum = np.random.rand() - if rnum > 0.6: # add color Gaussian noise - img = img + np.random.normal(0, noise_level / 255.0, img.shape).astype(np.float32) - elif rnum < 0.4: # add grayscale Gaussian noise - img = img + np.random.normal(0, noise_level / 255.0, (*img.shape[:2], 1)).astype(np.float32) - else: # add noise - L = noise_level2 / 255. - D = np.diag(np.random.rand(3)) - U = orth(np.random.rand(3, 3)) - conv = np.dot(np.dot(np.transpose(U), D), U) - img = img + np.random.multivariate_normal([0, 0, 0], np.abs(L ** 2 * conv), img.shape[:2]).astype(np.float32) - img = np.clip(img, 0.0, 1.0) - return img - - -def add_speckle_noise(img, noise_level1=2, noise_level2=25): - noise_level = random.randint(noise_level1, noise_level2) - img = np.clip(img, 0.0, 1.0) - rnum = random.random() - if rnum > 0.6: - img += img * np.random.normal(0, noise_level / 255.0, img.shape).astype(np.float32) - elif rnum < 0.4: - img += img * np.random.normal(0, noise_level / 255.0, (*img.shape[:2], 1)).astype(np.float32) - else: - L = noise_level2 / 255. - D = np.diag(np.random.rand(3)) - U = orth(np.random.rand(3, 3)) - conv = np.dot(np.dot(np.transpose(U), D), U) - img += img * np.random.multivariate_normal([0, 0, 0], np.abs(L ** 2 * conv), img.shape[:2]).astype(np.float32) - img = np.clip(img, 0.0, 1.0) - return img - - -def add_Poisson_noise(img): - img = np.clip((img * 255.0).round(), 0, 255) / 255. 
- vals = 10 ** (2 * random.random() + 2.0) # [2, 4] - if random.random() < 0.5: - img = np.random.poisson(img * vals).astype(np.float32) / vals - else: - img_gray = np.dot(img[..., :3], [0.299, 0.587, 0.114]) - img_gray = np.clip((img_gray * 255.0).round(), 0, 255) / 255. - noise_gray = np.random.poisson(img_gray * vals).astype(np.float32) / vals - img_gray - img += noise_gray[:, :, np.newaxis] - img = np.clip(img, 0.0, 1.0) - return img - - -def add_JPEG_noise(img): - quality_factor = random.randint(80, 95) - img = cv2.cvtColor(util.single2uint(img), cv2.COLOR_RGB2BGR) - result, encimg = cv2.imencode('.jpg', img, [int(cv2.IMWRITE_JPEG_QUALITY), quality_factor]) - img = cv2.imdecode(encimg, 1) - img = cv2.cvtColor(util.uint2single(img), cv2.COLOR_BGR2RGB) - return img - - -def random_crop(lq, hq, sf=4, lq_patchsize=64): - h, w = lq.shape[:2] - rnd_h = random.randint(0, h - lq_patchsize) - rnd_w = random.randint(0, w - lq_patchsize) - lq = lq[rnd_h:rnd_h + lq_patchsize, rnd_w:rnd_w + lq_patchsize, :] - - rnd_h_H, rnd_w_H = int(rnd_h * sf), int(rnd_w * sf) - hq = hq[rnd_h_H:rnd_h_H + lq_patchsize * sf, rnd_w_H:rnd_w_H + lq_patchsize * sf, :] - return lq, hq - - -def degradation_bsrgan(img, sf=4, lq_patchsize=72, isp_model=None): - """ - This is the degradation model of BSRGAN from the paper - "Designing a Practical Degradation Model for Deep Blind Image Super-Resolution" - ---------- - img: HXWXC, [0, 1], its size should be large than (lq_patchsizexsf)x(lq_patchsizexsf) - sf: scale factor - isp_model: camera ISP model - Returns - ------- - img: low-quality patch, size: lq_patchsizeXlq_patchsizeXC, range: [0, 1] - hq: corresponding high-quality patch, size: (lq_patchsizexsf)X(lq_patchsizexsf)XC, range: [0, 1] - """ - isp_prob, jpeg_prob, scale2_prob = 0.25, 0.9, 0.25 - sf_ori = sf - - h1, w1 = img.shape[:2] - img = img.copy()[:w1 - w1 % sf, :h1 - h1 % sf, ...] # mod crop - h, w = img.shape[:2] - - if h < lq_patchsize * sf or w < lq_patchsize * sf: - raise ValueError(f'img size ({h1}X{w1}) is too small!') - - hq = img.copy() - - if sf == 4 and random.random() < scale2_prob: # downsample1 - if np.random.rand() < 0.5: - img = cv2.resize(img, (int(1 / 2 * img.shape[1]), int(1 / 2 * img.shape[0])), - interpolation=random.choice([1, 2, 3])) - else: - img = util.imresize_np(img, 1 / 2, True) - img = np.clip(img, 0.0, 1.0) - sf = 2 - - shuffle_order = random.sample(range(7), 7) - idx1, idx2 = shuffle_order.index(2), shuffle_order.index(3) - if idx1 > idx2: # keep downsample3 last - shuffle_order[idx1], shuffle_order[idx2] = shuffle_order[idx2], shuffle_order[idx1] - - for i in shuffle_order: - - if i == 0: - img = add_blur(img, sf=sf) - - elif i == 1: - img = add_blur(img, sf=sf) - - elif i == 2: - a, b = img.shape[1], img.shape[0] - # downsample2 - if random.random() < 0.75: - sf1 = random.uniform(1, 2 * sf) - img = cv2.resize(img, (int(1 / sf1 * img.shape[1]), int(1 / sf1 * img.shape[0])), - interpolation=random.choice([1, 2, 3])) - else: - k = fspecial('gaussian', 25, random.uniform(0.1, 0.6 * sf)) - k_shifted = shift_pixel(k, sf) - k_shifted = k_shifted / k_shifted.sum() # blur with shifted kernel - img = ndimage.filters.convolve(img, np.expand_dims(k_shifted, axis=2), mode='mirror') - img = img[0::sf, 0::sf, ...] 
# nearest downsampling - img = np.clip(img, 0.0, 1.0) - - elif i == 3: - # downsample3 - img = cv2.resize(img, (int(1 / sf * a), int(1 / sf * b)), interpolation=random.choice([1, 2, 3])) - img = np.clip(img, 0.0, 1.0) - - elif i == 4: - # add Gaussian noise - img = add_Gaussian_noise(img, noise_level1=2, noise_level2=8) - - elif i == 5: - # add JPEG noise - if random.random() < jpeg_prob: - img = add_JPEG_noise(img) - - elif i == 6: - # add processed camera sensor noise - if random.random() < isp_prob and isp_model is not None: - with torch.no_grad(): - img, hq = isp_model.forward(img.copy(), hq) - - # add final JPEG compression noise - img = add_JPEG_noise(img) - - # random crop - img, hq = random_crop(img, hq, sf_ori, lq_patchsize) - - return img, hq - - -# todo no isp_model? -def degradation_bsrgan_variant(image, sf=4, isp_model=None): - """ - This is the degradation model of BSRGAN from the paper - "Designing a Practical Degradation Model for Deep Blind Image Super-Resolution" - ---------- - sf: scale factor - isp_model: camera ISP model - Returns - ------- - img: low-quality patch, size: lq_patchsizeXlq_patchsizeXC, range: [0, 1] - hq: corresponding high-quality patch, size: (lq_patchsizexsf)X(lq_patchsizexsf)XC, range: [0, 1] - """ - image = util.uint2single(image) - isp_prob, jpeg_prob, scale2_prob = 0.25, 0.9, 0.25 - sf_ori = sf - - h1, w1 = image.shape[:2] - image = image.copy()[:w1 - w1 % sf, :h1 - h1 % sf, ...] # mod crop - h, w = image.shape[:2] - - hq = image.copy() - - if sf == 4 and random.random() < scale2_prob: # downsample1 - if np.random.rand() < 0.5: - image = cv2.resize(image, (int(1 / 2 * image.shape[1]), int(1 / 2 * image.shape[0])), - interpolation=random.choice([1, 2, 3])) - else: - image = util.imresize_np(image, 1 / 2, True) - image = np.clip(image, 0.0, 1.0) - sf = 2 - - shuffle_order = random.sample(range(7), 7) - idx1, idx2 = shuffle_order.index(2), shuffle_order.index(3) - if idx1 > idx2: # keep downsample3 last - shuffle_order[idx1], shuffle_order[idx2] = shuffle_order[idx2], shuffle_order[idx1] - - for i in shuffle_order: - - if i == 0: - image = add_blur(image, sf=sf) - - # elif i == 1: - # image = add_blur(image, sf=sf) - - if i == 0: - pass - - elif i == 2: - a, b = image.shape[1], image.shape[0] - # downsample2 - if random.random() < 0.8: - sf1 = random.uniform(1, 2 * sf) - image = cv2.resize(image, (int(1 / sf1 * image.shape[1]), int(1 / sf1 * image.shape[0])), - interpolation=random.choice([1, 2, 3])) - else: - k = fspecial('gaussian', 25, random.uniform(0.1, 0.6 * sf)) - k_shifted = shift_pixel(k, sf) - k_shifted = k_shifted / k_shifted.sum() # blur with shifted kernel - image = ndimage.filters.convolve(image, np.expand_dims(k_shifted, axis=2), mode='mirror') - image = image[0::sf, 0::sf, ...] 
# nearest downsampling - - image = np.clip(image, 0.0, 1.0) - - elif i == 3: - # downsample3 - image = cv2.resize(image, (int(1 / sf * a), int(1 / sf * b)), interpolation=random.choice([1, 2, 3])) - image = np.clip(image, 0.0, 1.0) - - elif i == 4: - # add Gaussian noise - image = add_Gaussian_noise(image, noise_level1=1, noise_level2=2) - - elif i == 5: - # add JPEG noise - if random.random() < jpeg_prob: - image = add_JPEG_noise(image) - # - # elif i == 6: - # # add processed camera sensor noise - # if random.random() < isp_prob and isp_model is not None: - # with torch.no_grad(): - # img, hq = isp_model.forward(img.copy(), hq) - - # add final JPEG compression noise - image = add_JPEG_noise(image) - image = util.single2uint(image) - example = {"image": image} - return example - - - - -if __name__ == '__main__': - print("hey") - img = util.imread_uint('utils/test.png', 3) - img = img[:448, :448] - h = img.shape[0] // 4 - print("resizing to", h) - sf = 4 - deg_fn = partial(degradation_bsrgan_variant, sf=sf) - for i in range(20): - print(i) - img_hq = img - img_lq = deg_fn(img)["image"] - img_hq, img_lq = util.uint2single(img_hq), util.uint2single(img_lq) - print(img_lq) - img_lq_bicubic = albumentations.SmallestMaxSize(max_size=h, interpolation=cv2.INTER_CUBIC)(image=img_hq)["image"] - print(img_lq.shape) - print("bicubic", img_lq_bicubic.shape) - print(img_hq.shape) - lq_nearest = cv2.resize(util.single2uint(img_lq), (int(sf * img_lq.shape[1]), int(sf * img_lq.shape[0])), - interpolation=0) - lq_bicubic_nearest = cv2.resize(util.single2uint(img_lq_bicubic), - (int(sf * img_lq.shape[1]), int(sf * img_lq.shape[0])), - interpolation=0) - img_concat = np.concatenate([lq_bicubic_nearest, lq_nearest, util.single2uint(img_hq)], axis=1) - util.imsave(img_concat, str(i) + '.png') diff --git a/spaces/AIGC-Audio/AudioGPT/text_to_speech/modules/commons/transformer.py b/spaces/AIGC-Audio/AudioGPT/text_to_speech/modules/commons/transformer.py deleted file mode 100644 index a7920480cf23606128e6662511cfd7a8a3d9f896..0000000000000000000000000000000000000000 --- a/spaces/AIGC-Audio/AudioGPT/text_to_speech/modules/commons/transformer.py +++ /dev/null @@ -1,747 +0,0 @@ -import math -import torch -from torch import nn -from torch.nn import Parameter, Linear -from text_to_speech.modules.commons.layers import LayerNorm, Embedding -from text_to_speech.utils.nn.seq_utils import get_incremental_state, set_incremental_state, softmax, make_positions -import torch.nn.functional as F - -DEFAULT_MAX_SOURCE_POSITIONS = 2000 -DEFAULT_MAX_TARGET_POSITIONS = 2000 - - -class SinusoidalPositionalEmbedding(nn.Module): - """This module produces sinusoidal positional embeddings of any length. - - Padding symbols are ignored. - """ - - def __init__(self, embedding_dim, padding_idx, init_size=1024): - super().__init__() - self.embedding_dim = embedding_dim - self.padding_idx = padding_idx - self.weights = SinusoidalPositionalEmbedding.get_embedding( - init_size, - embedding_dim, - padding_idx, - ) - self.register_buffer('_float_tensor', torch.FloatTensor(1)) - - @staticmethod - def get_embedding(num_embeddings, embedding_dim, padding_idx=None): - """Build sinusoidal embeddings. - - This matches the implementation in tensor2tensor, but differs slightly - from the description in Section 3.5 of "Attention Is All You Need". 
- """ - half_dim = embedding_dim // 2 - emb = math.log(10000) / (half_dim - 1) - emb = torch.exp(torch.arange(half_dim, dtype=torch.float) * -emb) - emb = torch.arange(num_embeddings, dtype=torch.float).unsqueeze(1) * emb.unsqueeze(0) - emb = torch.cat([torch.sin(emb), torch.cos(emb)], dim=1).view(num_embeddings, -1) - if embedding_dim % 2 == 1: - # zero pad - emb = torch.cat([emb, torch.zeros(num_embeddings, 1)], dim=1) - if padding_idx is not None: - emb[padding_idx, :] = 0 - return emb - - def forward(self, input, incremental_state=None, timestep=None, positions=None, **kwargs): - """Input is expected to be of size [bsz x seqlen].""" - bsz, seq_len = input.shape[:2] - max_pos = self.padding_idx + 1 + seq_len - if self.weights is None or max_pos > self.weights.size(0): - # recompute/expand embeddings if needed - self.weights = SinusoidalPositionalEmbedding.get_embedding( - max_pos, - self.embedding_dim, - self.padding_idx, - ) - self.weights = self.weights.to(self._float_tensor) - - if incremental_state is not None: - # positions is the same for every token when decoding a single step - pos = timestep.view(-1)[0] + 1 if timestep is not None else seq_len - return self.weights[self.padding_idx + pos, :].expand(bsz, 1, -1) - - positions = make_positions(input, self.padding_idx) if positions is None else positions - return self.weights.index_select(0, positions.view(-1)).view(bsz, seq_len, -1).detach() - - def max_positions(self): - """Maximum number of supported positions.""" - return int(1e5) # an arbitrary large number - - -class TransformerFFNLayer(nn.Module): - def __init__(self, hidden_size, filter_size, padding="SAME", kernel_size=1, dropout=0., act='gelu'): - super().__init__() - self.kernel_size = kernel_size - self.dropout = dropout - self.act = act - if padding == 'SAME': - self.ffn_1 = nn.Conv1d(hidden_size, filter_size, kernel_size, padding=kernel_size // 2) - elif padding == 'LEFT': - self.ffn_1 = nn.Sequential( - nn.ConstantPad1d((kernel_size - 1, 0), 0.0), - nn.Conv1d(hidden_size, filter_size, kernel_size) - ) - self.ffn_2 = Linear(filter_size, hidden_size) - - def forward(self, x, incremental_state=None): - # x: T x B x C - if incremental_state is not None: - saved_state = self._get_input_buffer(incremental_state) - if 'prev_input' in saved_state: - prev_input = saved_state['prev_input'] - x = torch.cat((prev_input, x), dim=0) - x = x[-self.kernel_size:] - saved_state['prev_input'] = x - self._set_input_buffer(incremental_state, saved_state) - - x = self.ffn_1(x.permute(1, 2, 0)).permute(2, 0, 1) - x = x * self.kernel_size ** -0.5 - - if incremental_state is not None: - x = x[-1:] - if self.act == 'gelu': - x = F.gelu(x) - if self.act == 'relu': - x = F.relu(x) - x = F.dropout(x, self.dropout, training=self.training) - x = self.ffn_2(x) - return x - - def _get_input_buffer(self, incremental_state): - return get_incremental_state( - self, - incremental_state, - 'f', - ) or {} - - def _set_input_buffer(self, incremental_state, buffer): - set_incremental_state( - self, - incremental_state, - 'f', - buffer, - ) - - def clear_buffer(self, incremental_state): - if incremental_state is not None: - saved_state = self._get_input_buffer(incremental_state) - if 'prev_input' in saved_state: - del saved_state['prev_input'] - self._set_input_buffer(incremental_state, saved_state) - - -class MultiheadAttention(nn.Module): - def __init__(self, embed_dim, num_heads, kdim=None, vdim=None, dropout=0., bias=True, - add_bias_kv=False, add_zero_attn=False, self_attention=False, - 
encoder_decoder_attention=False): - super().__init__() - self.embed_dim = embed_dim - self.kdim = kdim if kdim is not None else embed_dim - self.vdim = vdim if vdim is not None else embed_dim - self.qkv_same_dim = self.kdim == embed_dim and self.vdim == embed_dim - - self.num_heads = num_heads - self.dropout = dropout - self.head_dim = embed_dim // num_heads - assert self.head_dim * num_heads == self.embed_dim, "embed_dim must be divisible by num_heads" - self.scaling = self.head_dim ** -0.5 - - self.self_attention = self_attention - self.encoder_decoder_attention = encoder_decoder_attention - - assert not self.self_attention or self.qkv_same_dim, 'Self-attention requires query, key and ' \ - 'value to be of the same size' - - if self.qkv_same_dim: - self.in_proj_weight = Parameter(torch.Tensor(3 * embed_dim, embed_dim)) - else: - self.k_proj_weight = Parameter(torch.Tensor(embed_dim, self.kdim)) - self.v_proj_weight = Parameter(torch.Tensor(embed_dim, self.vdim)) - self.q_proj_weight = Parameter(torch.Tensor(embed_dim, embed_dim)) - - if bias: - self.in_proj_bias = Parameter(torch.Tensor(3 * embed_dim)) - else: - self.register_parameter('in_proj_bias', None) - - self.out_proj = nn.Linear(embed_dim, embed_dim, bias=bias) - - if add_bias_kv: - self.bias_k = Parameter(torch.Tensor(1, 1, embed_dim)) - self.bias_v = Parameter(torch.Tensor(1, 1, embed_dim)) - else: - self.bias_k = self.bias_v = None - - self.add_zero_attn = add_zero_attn - - self.reset_parameters() - - self.enable_torch_version = False - if hasattr(F, "multi_head_attention_forward"): - self.enable_torch_version = True - else: - self.enable_torch_version = False - self.last_attn_probs = None - - def reset_parameters(self): - if self.qkv_same_dim: - nn.init.xavier_uniform_(self.in_proj_weight) - else: - nn.init.xavier_uniform_(self.k_proj_weight) - nn.init.xavier_uniform_(self.v_proj_weight) - nn.init.xavier_uniform_(self.q_proj_weight) - - nn.init.xavier_uniform_(self.out_proj.weight) - if self.in_proj_bias is not None: - nn.init.constant_(self.in_proj_bias, 0.) - nn.init.constant_(self.out_proj.bias, 0.) - if self.bias_k is not None: - nn.init.xavier_normal_(self.bias_k) - if self.bias_v is not None: - nn.init.xavier_normal_(self.bias_v) - - def forward( - self, - query, key, value, - key_padding_mask=None, - incremental_state=None, - need_weights=True, - static_kv=False, - attn_mask=None, - before_softmax=False, - need_head_weights=False, - enc_dec_attn_constraint_mask=None, - reset_attn_weight=None - ): - """Input shape: Time x Batch x Channel - - Args: - key_padding_mask (ByteTensor, optional): mask to exclude - keys that are pads, of shape `(batch, src_len)`, where - padding elements are indicated by 1s. - need_weights (bool, optional): return the attention weights, - averaged over heads (default: False). - attn_mask (ByteTensor, optional): typically used to - implement causal attention, where the mask prevents the - attention from looking forward in time (default: None). - before_softmax (bool, optional): return the raw attention - weights and values before the attention softmax. - need_head_weights (bool, optional): return the attention - weights for each head. Implies *need_weights*. Default: - return the average attention weights over all heads. 
- """ - if need_head_weights: - need_weights = True - - tgt_len, bsz, embed_dim = query.size() - assert embed_dim == self.embed_dim - assert list(query.size()) == [tgt_len, bsz, embed_dim] - if self.enable_torch_version and incremental_state is None and not static_kv and reset_attn_weight is None: - if self.qkv_same_dim: - return F.multi_head_attention_forward(query, key, value, - self.embed_dim, self.num_heads, - self.in_proj_weight, - self.in_proj_bias, self.bias_k, self.bias_v, - self.add_zero_attn, self.dropout, - self.out_proj.weight, self.out_proj.bias, - self.training, key_padding_mask, need_weights, - attn_mask) - else: - return F.multi_head_attention_forward(query, key, value, - self.embed_dim, self.num_heads, - torch.empty([0]), - self.in_proj_bias, self.bias_k, self.bias_v, - self.add_zero_attn, self.dropout, - self.out_proj.weight, self.out_proj.bias, - self.training, key_padding_mask, need_weights, - attn_mask, use_separate_proj_weight=True, - q_proj_weight=self.q_proj_weight, - k_proj_weight=self.k_proj_weight, - v_proj_weight=self.v_proj_weight) - - if incremental_state is not None: - saved_state = self._get_input_buffer(incremental_state) - if 'prev_key' in saved_state: - # previous time steps are cached - no need to recompute - # key and value if they are static - if static_kv: - assert self.encoder_decoder_attention and not self.self_attention - key = value = None - else: - saved_state = None - - if self.self_attention: - # self-attention - q, k, v = self.in_proj_qkv(query) - elif self.encoder_decoder_attention: - # encoder-decoder attention - q = self.in_proj_q(query) - if key is None: - assert value is None - k = v = None - else: - k = self.in_proj_k(key) - v = self.in_proj_v(key) - - else: - q = self.in_proj_q(query) - k = self.in_proj_k(key) - v = self.in_proj_v(value) - q *= self.scaling - - if self.bias_k is not None: - assert self.bias_v is not None - k = torch.cat([k, self.bias_k.repeat(1, bsz, 1)]) - v = torch.cat([v, self.bias_v.repeat(1, bsz, 1)]) - if attn_mask is not None: - attn_mask = torch.cat([attn_mask, attn_mask.new_zeros(attn_mask.size(0), 1)], dim=1) - if key_padding_mask is not None: - key_padding_mask = torch.cat( - [key_padding_mask, key_padding_mask.new_zeros(key_padding_mask.size(0), 1)], dim=1) - - q = q.contiguous().view(tgt_len, bsz * self.num_heads, self.head_dim).transpose(0, 1) - if k is not None: - k = k.contiguous().view(-1, bsz * self.num_heads, self.head_dim).transpose(0, 1) - if v is not None: - v = v.contiguous().view(-1, bsz * self.num_heads, self.head_dim).transpose(0, 1) - - if saved_state is not None: - # saved states are stored with shape (bsz, num_heads, seq_len, head_dim) - if 'prev_key' in saved_state: - prev_key = saved_state['prev_key'].view(bsz * self.num_heads, -1, self.head_dim) - if static_kv: - k = prev_key - else: - k = torch.cat((prev_key, k), dim=1) - if 'prev_value' in saved_state: - prev_value = saved_state['prev_value'].view(bsz * self.num_heads, -1, self.head_dim) - if static_kv: - v = prev_value - else: - v = torch.cat((prev_value, v), dim=1) - if 'prev_key_padding_mask' in saved_state and saved_state['prev_key_padding_mask'] is not None: - prev_key_padding_mask = saved_state['prev_key_padding_mask'] - if static_kv: - key_padding_mask = prev_key_padding_mask - else: - key_padding_mask = torch.cat((prev_key_padding_mask, key_padding_mask), dim=1) - - saved_state['prev_key'] = k.view(bsz, self.num_heads, -1, self.head_dim) - saved_state['prev_value'] = v.view(bsz, self.num_heads, -1, self.head_dim) - 
saved_state['prev_key_padding_mask'] = key_padding_mask - - self._set_input_buffer(incremental_state, saved_state) - - src_len = k.size(1) - - # This is part of a workaround to get around fork/join parallelism - # not supporting Optional types. - if key_padding_mask is not None and key_padding_mask.shape == torch.Size([]): - key_padding_mask = None - - if key_padding_mask is not None: - assert key_padding_mask.size(0) == bsz - assert key_padding_mask.size(1) == src_len - - if self.add_zero_attn: - src_len += 1 - k = torch.cat([k, k.new_zeros((k.size(0), 1) + k.size()[2:])], dim=1) - v = torch.cat([v, v.new_zeros((v.size(0), 1) + v.size()[2:])], dim=1) - if attn_mask is not None: - attn_mask = torch.cat([attn_mask, attn_mask.new_zeros(attn_mask.size(0), 1)], dim=1) - if key_padding_mask is not None: - key_padding_mask = torch.cat( - [key_padding_mask, torch.zeros(key_padding_mask.size(0), 1).type_as(key_padding_mask)], dim=1) - - attn_weights = torch.bmm(q, k.transpose(1, 2)) - attn_weights = self.apply_sparse_mask(attn_weights, tgt_len, src_len, bsz) - - assert list(attn_weights.size()) == [bsz * self.num_heads, tgt_len, src_len] - - if attn_mask is not None: - if len(attn_mask.shape) == 2: - attn_mask = attn_mask.unsqueeze(0) - elif len(attn_mask.shape) == 3: - attn_mask = attn_mask[:, None].repeat([1, self.num_heads, 1, 1]).reshape( - bsz * self.num_heads, tgt_len, src_len) - attn_weights = attn_weights + attn_mask - - if enc_dec_attn_constraint_mask is not None: # bs x head x L_kv - attn_weights = attn_weights.view(bsz, self.num_heads, tgt_len, src_len) - attn_weights = attn_weights.masked_fill( - enc_dec_attn_constraint_mask.unsqueeze(2).bool(), - -1e8, - ) - attn_weights = attn_weights.view(bsz * self.num_heads, tgt_len, src_len) - - if key_padding_mask is not None: - # don't attend to padding symbols - attn_weights = attn_weights.view(bsz, self.num_heads, tgt_len, src_len) - attn_weights = attn_weights.masked_fill( - key_padding_mask.unsqueeze(1).unsqueeze(2), - -1e8, - ) - attn_weights = attn_weights.view(bsz * self.num_heads, tgt_len, src_len) - - attn_logits = attn_weights.view(bsz, self.num_heads, tgt_len, src_len) - - if before_softmax: - return attn_weights, v - - attn_weights_float = softmax(attn_weights, dim=-1) - attn_weights = attn_weights_float.type_as(attn_weights) - attn_probs = F.dropout(attn_weights_float.type_as(attn_weights), p=self.dropout, training=self.training) - - if reset_attn_weight is not None: - if reset_attn_weight: - self.last_attn_probs = attn_probs.detach() - else: - assert self.last_attn_probs is not None - attn_probs = self.last_attn_probs - attn = torch.bmm(attn_probs, v) - assert list(attn.size()) == [bsz * self.num_heads, tgt_len, self.head_dim] - attn = attn.transpose(0, 1).contiguous().view(tgt_len, bsz, embed_dim) - attn = self.out_proj(attn) - - if need_weights: - attn_weights = attn_weights_float.view(bsz, self.num_heads, tgt_len, src_len).transpose(1, 0) - if not need_head_weights: - # average attention weights over heads - attn_weights = attn_weights.mean(dim=0) - else: - attn_weights = None - - return attn, (attn_weights, attn_logits) - - def in_proj_qkv(self, query): - return self._in_proj(query).chunk(3, dim=-1) - - def in_proj_q(self, query): - if self.qkv_same_dim: - return self._in_proj(query, end=self.embed_dim) - else: - bias = self.in_proj_bias - if bias is not None: - bias = bias[:self.embed_dim] - return F.linear(query, self.q_proj_weight, bias) - - def in_proj_k(self, key): - if self.qkv_same_dim: - return self._in_proj(key, 
start=self.embed_dim, end=2 * self.embed_dim) - else: - weight = self.k_proj_weight - bias = self.in_proj_bias - if bias is not None: - bias = bias[self.embed_dim:2 * self.embed_dim] - return F.linear(key, weight, bias) - - def in_proj_v(self, value): - if self.qkv_same_dim: - return self._in_proj(value, start=2 * self.embed_dim) - else: - weight = self.v_proj_weight - bias = self.in_proj_bias - if bias is not None: - bias = bias[2 * self.embed_dim:] - return F.linear(value, weight, bias) - - def _in_proj(self, input, start=0, end=None): - weight = self.in_proj_weight - bias = self.in_proj_bias - weight = weight[start:end, :] - if bias is not None: - bias = bias[start:end] - return F.linear(input, weight, bias) - - def _get_input_buffer(self, incremental_state): - return get_incremental_state( - self, - incremental_state, - 'attn_state', - ) or {} - - def _set_input_buffer(self, incremental_state, buffer): - set_incremental_state( - self, - incremental_state, - 'attn_state', - buffer, - ) - - def apply_sparse_mask(self, attn_weights, tgt_len, src_len, bsz): - return attn_weights - - def clear_buffer(self, incremental_state=None): - if incremental_state is not None: - saved_state = self._get_input_buffer(incremental_state) - if 'prev_key' in saved_state: - del saved_state['prev_key'] - if 'prev_value' in saved_state: - del saved_state['prev_value'] - self._set_input_buffer(incremental_state, saved_state) - - -class EncSALayer(nn.Module): - def __init__(self, c, num_heads, dropout, attention_dropout=0.1, - relu_dropout=0.1, kernel_size=9, padding='SAME', act='gelu'): - super().__init__() - self.c = c - self.dropout = dropout - self.num_heads = num_heads - if num_heads > 0: - self.layer_norm1 = LayerNorm(c) - self.self_attn = MultiheadAttention( - self.c, num_heads, self_attention=True, dropout=attention_dropout, bias=False) - self.layer_norm2 = LayerNorm(c) - self.ffn = TransformerFFNLayer( - c, 4 * c, kernel_size=kernel_size, dropout=relu_dropout, padding=padding, act=act) - - def forward(self, x, encoder_padding_mask=None, **kwargs): - layer_norm_training = kwargs.get('layer_norm_training', None) - if layer_norm_training is not None: - self.layer_norm1.training = layer_norm_training - self.layer_norm2.training = layer_norm_training - if self.num_heads > 0: - residual = x - x = self.layer_norm1(x) - x, _, = self.self_attn( - query=x, - key=x, - value=x, - key_padding_mask=encoder_padding_mask - ) - x = F.dropout(x, self.dropout, training=self.training) - x = residual + x - x = x * (1 - encoder_padding_mask.float()).transpose(0, 1)[..., None] - - residual = x - x = self.layer_norm2(x) - x = self.ffn(x) - x = F.dropout(x, self.dropout, training=self.training) - x = residual + x - x = x * (1 - encoder_padding_mask.float()).transpose(0, 1)[..., None] - return x - - -class DecSALayer(nn.Module): - def __init__(self, c, num_heads, dropout, attention_dropout=0.1, relu_dropout=0.1, - kernel_size=9, act='gelu'): - super().__init__() - self.c = c - self.dropout = dropout - self.layer_norm1 = LayerNorm(c) - self.self_attn = MultiheadAttention( - c, num_heads, self_attention=True, dropout=attention_dropout, bias=False - ) - self.layer_norm2 = LayerNorm(c) - self.encoder_attn = MultiheadAttention( - c, num_heads, encoder_decoder_attention=True, dropout=attention_dropout, bias=False, - ) - self.layer_norm3 = LayerNorm(c) - self.ffn = TransformerFFNLayer( - c, 4 * c, padding='LEFT', kernel_size=kernel_size, dropout=relu_dropout, act=act) - - def forward( - self, - x, - encoder_out=None, - 
encoder_padding_mask=None, - incremental_state=None, - self_attn_mask=None, - self_attn_padding_mask=None, - attn_out=None, - reset_attn_weight=None, - **kwargs, - ): - layer_norm_training = kwargs.get('layer_norm_training', None) - if layer_norm_training is not None: - self.layer_norm1.training = layer_norm_training - self.layer_norm2.training = layer_norm_training - self.layer_norm3.training = layer_norm_training - residual = x - x = self.layer_norm1(x) - x, _ = self.self_attn( - query=x, - key=x, - value=x, - key_padding_mask=self_attn_padding_mask, - incremental_state=incremental_state, - attn_mask=self_attn_mask - ) - x = F.dropout(x, self.dropout, training=self.training) - x = residual + x - - attn_logits = None - if encoder_out is not None or attn_out is not None: - residual = x - x = self.layer_norm2(x) - if encoder_out is not None: - x, attn = self.encoder_attn( - query=x, - key=encoder_out, - value=encoder_out, - key_padding_mask=encoder_padding_mask, - incremental_state=incremental_state, - static_kv=True, - enc_dec_attn_constraint_mask=get_incremental_state(self, incremental_state, - 'enc_dec_attn_constraint_mask'), - reset_attn_weight=reset_attn_weight - ) - attn_logits = attn[1] - elif attn_out is not None: - x = self.encoder_attn.in_proj_v(attn_out) - if encoder_out is not None or attn_out is not None: - x = F.dropout(x, self.dropout, training=self.training) - x = residual + x - - residual = x - x = self.layer_norm3(x) - x = self.ffn(x, incremental_state=incremental_state) - x = F.dropout(x, self.dropout, training=self.training) - x = residual + x - return x, attn_logits - - def clear_buffer(self, input, encoder_out=None, encoder_padding_mask=None, incremental_state=None): - self.encoder_attn.clear_buffer(incremental_state) - self.ffn.clear_buffer(incremental_state) - - def set_buffer(self, name, tensor, incremental_state): - return set_incremental_state(self, incremental_state, name, tensor) - - -class TransformerEncoderLayer(nn.Module): - def __init__(self, hidden_size, dropout, kernel_size=9, num_heads=2): - super().__init__() - self.hidden_size = hidden_size - self.dropout = dropout - self.num_heads = num_heads - self.op = EncSALayer( - hidden_size, num_heads, dropout=dropout, - attention_dropout=0.0, relu_dropout=dropout, - kernel_size=kernel_size) - - def forward(self, x, **kwargs): - return self.op(x, **kwargs) - - -class TransformerDecoderLayer(nn.Module): - def __init__(self, hidden_size, dropout, kernel_size=9, num_heads=2): - super().__init__() - self.hidden_size = hidden_size - self.dropout = dropout - self.num_heads = num_heads - self.op = DecSALayer( - hidden_size, num_heads, dropout=dropout, - attention_dropout=0.0, relu_dropout=dropout, - kernel_size=kernel_size) - - def forward(self, x, **kwargs): - return self.op(x, **kwargs) - - def clear_buffer(self, *args): - return self.op.clear_buffer(*args) - - def set_buffer(self, *args): - return self.op.set_buffer(*args) - - -class FFTBlocks(nn.Module): - def __init__(self, hidden_size, num_layers, ffn_kernel_size=9, dropout=0.0, - num_heads=2, use_pos_embed=True, use_last_norm=True, - use_pos_embed_alpha=True): - super().__init__() - self.num_layers = num_layers - embed_dim = self.hidden_size = hidden_size - self.dropout = dropout - self.use_pos_embed = use_pos_embed - self.use_last_norm = use_last_norm - if use_pos_embed: - self.max_source_positions = DEFAULT_MAX_TARGET_POSITIONS - self.padding_idx = 0 - self.pos_embed_alpha = nn.Parameter(torch.Tensor([1])) if use_pos_embed_alpha else 1 - self.embed_positions = 
SinusoidalPositionalEmbedding( - embed_dim, self.padding_idx, init_size=DEFAULT_MAX_TARGET_POSITIONS, - ) - - self.layers = nn.ModuleList([]) - self.layers.extend([ - TransformerEncoderLayer(self.hidden_size, self.dropout, - kernel_size=ffn_kernel_size, num_heads=num_heads) - for _ in range(self.num_layers) - ]) - if self.use_last_norm: - self.layer_norm = nn.LayerNorm(embed_dim) - else: - self.layer_norm = None - - def forward(self, x, padding_mask=None, attn_mask=None, return_hiddens=False): - """ - :param x: [B, T, C] - :param padding_mask: [B, T] - :return: [B, T, C] or [L, B, T, C] - """ - padding_mask = x.abs().sum(-1).eq(0).data if padding_mask is None else padding_mask - nonpadding_mask_TB = 1 - padding_mask.transpose(0, 1).float()[:, :, None] # [T, B, 1] - if self.use_pos_embed: - positions = self.pos_embed_alpha * self.embed_positions(x[..., 0]) - x = x + positions - x = F.dropout(x, p=self.dropout, training=self.training) - # B x T x C -> T x B x C - x = x.transpose(0, 1) * nonpadding_mask_TB - hiddens = [] - for layer in self.layers: - x = layer(x, encoder_padding_mask=padding_mask, attn_mask=attn_mask) * nonpadding_mask_TB - hiddens.append(x) - if self.use_last_norm: - x = self.layer_norm(x) * nonpadding_mask_TB - if return_hiddens: - x = torch.stack(hiddens, 0) # [L, T, B, C] - x = x.transpose(1, 2) # [L, B, T, C] - else: - x = x.transpose(0, 1) # [B, T, C] - return x - - -class FastSpeechEncoder(FFTBlocks): - def __init__(self, dict_size, hidden_size=256, num_layers=4, kernel_size=9, num_heads=2, - dropout=0.0): - super().__init__(hidden_size, num_layers, kernel_size, num_heads=num_heads, - use_pos_embed=False, dropout=dropout) # use_pos_embed_alpha for compatibility - self.embed_tokens = Embedding(dict_size, hidden_size, 0) - self.embed_scale = math.sqrt(hidden_size) - self.padding_idx = 0 - self.embed_positions = SinusoidalPositionalEmbedding( - hidden_size, self.padding_idx, init_size=DEFAULT_MAX_TARGET_POSITIONS, - ) - - def forward(self, txt_tokens, attn_mask=None): - """ - - :param txt_tokens: [B, T] - :return: { - 'encoder_out': [B x T x C] - } - """ - encoder_padding_mask = txt_tokens.eq(self.padding_idx).data - x = self.forward_embedding(txt_tokens) # [B, T, H] - if self.num_layers > 0: - x = super(FastSpeechEncoder, self).forward(x, encoder_padding_mask, attn_mask=attn_mask) - return x - - def forward_embedding(self, txt_tokens): - # embed tokens and positions - x = self.embed_scale * self.embed_tokens(txt_tokens) - if self.use_pos_embed: - positions = self.embed_positions(txt_tokens) - x = x + positions - x = F.dropout(x, p=self.dropout, training=self.training) - return x - - -class FastSpeechDecoder(FFTBlocks): - def __init__(self, hidden_size=256, num_layers=4, kernel_size=9, num_heads=2): - super().__init__(hidden_size, num_layers, kernel_size, num_heads=num_heads) diff --git a/spaces/ALSv/FSW/roop/metadata.py b/spaces/ALSv/FSW/roop/metadata.py deleted file mode 100644 index aea9e16d897ede57f566ccc773d0d2ee17905dfb..0000000000000000000000000000000000000000 --- a/spaces/ALSv/FSW/roop/metadata.py +++ /dev/null @@ -1,2 +0,0 @@ -name = 'roop' -version = '1.3.2' diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/label/Label.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/label/Label.js deleted file mode 100644 index 7fd126d772a0f42dc26439814b6a1105746a791f..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/label/Label.js +++ /dev/null @@ -1,297 
+0,0 @@ -import Sizer from '../sizer/Sizer.js'; -import AddChildMask from '../../../plugins/gameobjects/container/containerlite/mask/AddChildMask.js'; -import SetDisplaySize from '../../../plugins/utils/size/SetDisplaySize.js'; -import Methods from './methods/Methods.js'; - -const GetValue = Phaser.Utils.Objects.GetValue; - -class Label extends Sizer { - constructor(scene, config) { - // Create sizer - super(scene, config); - this.type = 'rexLabel'; - - // Add elements - var background = GetValue(config, 'background', undefined); - var icon = GetValue(config, 'icon', undefined); - var iconMask = GetValue(config, 'iconMask', undefined); - var text = GetValue(config, 'text', undefined); - var action = GetValue(config, 'action', undefined); - var actionMask = GetValue(config, 'actionMask', undefined); - // Align - var align = GetValue(config, 'align', undefined); // undefined/left/top: no space - - - if (background) { - this.addBackground(background); - } - - // Add space - if ( - (align === 'right') || - (align === 'bottom') || - (align === 'center') - ) { - this.addSpace(); - } - - if (icon) { - var iconSpace = GetValue(config, 'space.icon', 0); - var padding; - if (this.orientation === 0) { - if (text || action) { - padding = { right: iconSpace }; - } - } else { - if (text || action) { - padding = { bottom: iconSpace }; - } - } - var fitRatio = GetValue(config, 'squareFitIcon', false) ? 1 : 0; - - this.add( - icon, - { proportion: 0, padding: padding, fitRatio: fitRatio } - ); - - if (iconMask) { - iconMask = AddChildMask.call(this, icon, icon, 1); // Circle mask - } - - if (!fitRatio) { - var iconSize = GetValue(config, 'iconSize', undefined); - this.setIconSize( - GetValue(config, 'iconWidth', iconSize), - GetValue(config, 'iconHeight', iconSize) - ); - } - } - - - if (text) { - var textSpace = GetValue(config, 'space.text', 0); - var expandTextWidth = GetValue(config, 'expandTextWidth', false); - var expandTextHeight = GetValue(config, 'expandTextHeight', false); - var proportion, padding, expand; - if (this.orientation === 0) { - proportion = (expandTextWidth) ? 1 : 0; - if (action) { - padding = { right: textSpace }; - } - expand = expandTextHeight; - } else { - proportion = (expandTextHeight) ? 1 : 0; - if (action) { - padding = { bottom: textSpace }; - } - expand = expandTextWidth; - } - - this.add( - text, - { proportion: proportion, expand: expand, padding: padding, } - ); - } - - if (action) { - var fitRatio = GetValue(config, 'squareFitAction', false) ? 
1 : 0; - this.add( - action, - { proportion: 0, fitRatio: fitRatio } - ); - - if (actionMask) { - actionMask = AddChildMask.call(this, action, action, 1); // Circle mask - } - - if (!fitRatio) { - var actionSize = GetValue(config, 'actionSize'); - this.setActionSize( - GetValue(config, 'actionWidth', actionSize), - GetValue(config, 'actionHeight', actionSize) - ); - } - } - - // Add space - if (align === 'center') { - this.addSpace(); - } - - this.addChildrenMap('background', background); - this.addChildrenMap('icon', icon); - this.addChildrenMap('iconMask', iconMask); - this.addChildrenMap('text', text); - this.addChildrenMap('action', action); - this.addChildrenMap('actionMask', actionMask); - } - - // Access text game object - get text() { - var textObject = this.childrenMap.text; - if (textObject === undefined) { - return ''; - } - return textObject.text; - } - - set text(value) { - var textObject = this.childrenMap.text; - if (textObject === undefined) { - return; - } - textObject.setText(value); - } - - setText(value) { - this.text = value; - return this; - } - - // Access icon game object - setIconTexture(key, frame) { - var imageObject = this.childrenMap.icon; - if (imageObject === undefined) { - return this; - } - imageObject.setTexture(key, frame); - - if (this.iconWidth !== undefined) { - SetDisplaySize(imageObject, this.iconWidth, this.iconHeight); - this.resetChildScaleState(imageObject); - } - - return this; - } - - setTexture(key, frame) { - this.setIconTexture(key, frame); - return this; - } - - setIconSize(width, height) { - if (height === undefined) { - height = width; - } - - this.iconWidth = width; - this.iconHeight = height; - - return this; - } - - get texture() { - var imageObject = this.childrenMap.icon; - if (imageObject === undefined) { - return undefined; - } - return imageObject.texture; - } - - get frame() { - var imageObject = this.childrenMap.icon; - if (imageObject === undefined) { - return undefined; - } - return imageObject.frame; - } - - setActionTexture(key, frame) { - var imageObject = this.childrenMap.action; - if (imageObject === undefined) { - return this; - } - imageObject.setTexture(key, frame); - - if (this.actionWidth !== undefined) { - SetDisplaySize(imageObject, this.actionWidth, this.actionHeight); - this.resetChildScaleState(imageObject); - } - - return this; - } - - get actionTexture() { - var imageObject = this.childrenMap.action; - if (imageObject === undefined) { - return undefined; - } - return imageObject.texture; - } - - get actionFrame() { - var imageObject = this.childrenMap.action; - if (imageObject === undefined) { - return undefined; - } - return imageObject.frame; - } - - setActionSize(width, height) { - if (height === undefined) { - height = width; - } - - this.actionWidth = width; - this.actionHeight = height; - - return this; - } - - preLayout() { - var icon = this.childrenMap.icon; - if (icon && (this.iconWidth !== undefined)) { - SetDisplaySize(icon, this.iconWidth, this.iconHeight); - } - - var action = this.childrenMap.action; - if (action && (this.actionWidth !== undefined)) { - SetDisplaySize(action, this.actionWidth, this.actionHeight); - } - - super.preLayout(); - } - - runLayout(parent, newWidth, newHeight) { - if (this.ignoreLayout) { - return this; - } - - super.runLayout(parent, newWidth, newHeight); - // Pin icon-mask to icon game object - var iconMask = this.childrenMap.iconMask; - if (iconMask) { - iconMask.setPosition(); - this.resetChildPositionState(iconMask); - } - // Pin action-mask to action game object - 
var actionMask = this.childrenMap.actionMask; - if (actionMask) { - actionMask.setPosition(); - this.resetChildPositionState(actionMask); - } - return this; - } - - resize(width, height) { - super.resize(width, height); - // Resize icon-mask to icon game object - var iconMask = this.childrenMap.iconMask; - if (iconMask) { - iconMask.resize(); - } - // Resize action-mask to icon game object - var actionMask = this.childrenMap.actionMask; - if (actionMask) { - actionMask.resize(); - } - return this; - } -} - -Object.assign( - Label.prototype, - Methods, -) - -export default Label; \ No newline at end of file diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/menu/Factory.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/menu/Factory.js deleted file mode 100644 index f05c0611e14230ed6062ddef64e3bfaead791a09..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/menu/Factory.js +++ /dev/null @@ -1,13 +0,0 @@ -import Menu from './Menu.js'; -import ObjectFactory from '../ObjectFactory.js'; -import SetValue from '../../../plugins/utils/object/SetValue.js'; - -ObjectFactory.register('menu', function (config) { - var gameObject = new Menu(this.scene, config); - this.scene.add.existing(gameObject); - return gameObject; -}); - -SetValue(window, 'RexPlugins.UI.Menu', Menu); - -export default Menu; \ No newline at end of file diff --git a/spaces/AlexWang/lama/saicinpainting/training/modules/multiscale.py b/spaces/AlexWang/lama/saicinpainting/training/modules/multiscale.py deleted file mode 100644 index 65f0a54925593e9da8106bfc6d65a4098ce001d7..0000000000000000000000000000000000000000 --- a/spaces/AlexWang/lama/saicinpainting/training/modules/multiscale.py +++ /dev/null @@ -1,244 +0,0 @@ -from typing import List, Tuple, Union, Optional - -import torch -import torch.nn as nn -import torch.nn.functional as F - -from saicinpainting.training.modules.base import get_conv_block_ctor, get_activation -from saicinpainting.training.modules.pix2pixhd import ResnetBlock - - -class ResNetHead(nn.Module): - def __init__(self, input_nc, ngf=64, n_downsampling=3, n_blocks=9, norm_layer=nn.BatchNorm2d, - padding_type='reflect', conv_kind='default', activation=nn.ReLU(True)): - assert (n_blocks >= 0) - super(ResNetHead, self).__init__() - - conv_layer = get_conv_block_ctor(conv_kind) - - model = [nn.ReflectionPad2d(3), - conv_layer(input_nc, ngf, kernel_size=7, padding=0), - norm_layer(ngf), - activation] - - ### downsample - for i in range(n_downsampling): - mult = 2 ** i - model += [conv_layer(ngf * mult, ngf * mult * 2, kernel_size=3, stride=2, padding=1), - norm_layer(ngf * mult * 2), - activation] - - mult = 2 ** n_downsampling - - ### resnet blocks - for i in range(n_blocks): - model += [ResnetBlock(ngf * mult, padding_type=padding_type, activation=activation, norm_layer=norm_layer, - conv_kind=conv_kind)] - - self.model = nn.Sequential(*model) - - def forward(self, input): - return self.model(input) - - -class ResNetTail(nn.Module): - def __init__(self, output_nc, ngf=64, n_downsampling=3, n_blocks=9, norm_layer=nn.BatchNorm2d, - padding_type='reflect', conv_kind='default', activation=nn.ReLU(True), - up_norm_layer=nn.BatchNorm2d, up_activation=nn.ReLU(True), add_out_act=False, out_extra_layers_n=0, - add_in_proj=None): - assert (n_blocks >= 0) - super(ResNetTail, self).__init__() - - mult = 2 ** n_downsampling - - model = [] - - if add_in_proj is not None: - model.append(nn.Conv2d(add_in_proj, ngf 
* mult, kernel_size=1)) - - ### resnet blocks - for i in range(n_blocks): - model += [ResnetBlock(ngf * mult, padding_type=padding_type, activation=activation, norm_layer=norm_layer, - conv_kind=conv_kind)] - - ### upsample - for i in range(n_downsampling): - mult = 2 ** (n_downsampling - i) - model += [nn.ConvTranspose2d(ngf * mult, int(ngf * mult / 2), kernel_size=3, stride=2, padding=1, - output_padding=1), - up_norm_layer(int(ngf * mult / 2)), - up_activation] - self.model = nn.Sequential(*model) - - out_layers = [] - for _ in range(out_extra_layers_n): - out_layers += [nn.Conv2d(ngf, ngf, kernel_size=1, padding=0), - up_norm_layer(ngf), - up_activation] - out_layers += [nn.ReflectionPad2d(3), - nn.Conv2d(ngf, output_nc, kernel_size=7, padding=0)] - - if add_out_act: - out_layers.append(get_activation('tanh' if add_out_act is True else add_out_act)) - - self.out_proj = nn.Sequential(*out_layers) - - def forward(self, input, return_last_act=False): - features = self.model(input) - out = self.out_proj(features) - if return_last_act: - return out, features - else: - return out - - -class MultiscaleResNet(nn.Module): - def __init__(self, input_nc, output_nc, ngf=64, n_downsampling=2, n_blocks_head=2, n_blocks_tail=6, n_scales=3, - norm_layer=nn.BatchNorm2d, padding_type='reflect', conv_kind='default', activation=nn.ReLU(True), - up_norm_layer=nn.BatchNorm2d, up_activation=nn.ReLU(True), add_out_act=False, out_extra_layers_n=0, - out_cumulative=False, return_only_hr=False): - super().__init__() - - self.heads = nn.ModuleList([ResNetHead(input_nc, ngf=ngf, n_downsampling=n_downsampling, - n_blocks=n_blocks_head, norm_layer=norm_layer, padding_type=padding_type, - conv_kind=conv_kind, activation=activation) - for i in range(n_scales)]) - tail_in_feats = ngf * (2 ** n_downsampling) + ngf - self.tails = nn.ModuleList([ResNetTail(output_nc, - ngf=ngf, n_downsampling=n_downsampling, - n_blocks=n_blocks_tail, norm_layer=norm_layer, padding_type=padding_type, - conv_kind=conv_kind, activation=activation, up_norm_layer=up_norm_layer, - up_activation=up_activation, add_out_act=add_out_act, - out_extra_layers_n=out_extra_layers_n, - add_in_proj=None if (i == n_scales - 1) else tail_in_feats) - for i in range(n_scales)]) - - self.out_cumulative = out_cumulative - self.return_only_hr = return_only_hr - - @property - def num_scales(self): - return len(self.heads) - - def forward(self, ms_inputs: List[torch.Tensor], smallest_scales_num: Optional[int] = None) \ - -> Union[torch.Tensor, List[torch.Tensor]]: - """ - :param ms_inputs: List of inputs of different resolutions from HR to LR - :param smallest_scales_num: int or None, number of smallest scales to take at input - :return: Depending on return_only_hr: - True: Only the most HR output - False: List of outputs of different resolutions from HR to LR - """ - if smallest_scales_num is None: - assert len(self.heads) == len(ms_inputs), (len(self.heads), len(ms_inputs), smallest_scales_num) - smallest_scales_num = len(self.heads) - else: - assert smallest_scales_num == len(ms_inputs) <= len(self.heads), (len(self.heads), len(ms_inputs), smallest_scales_num) - - cur_heads = self.heads[-smallest_scales_num:] - ms_features = [cur_head(cur_inp) for cur_head, cur_inp in zip(cur_heads, ms_inputs)] - - all_outputs = [] - prev_tail_features = None - for i in range(len(ms_features)): - scale_i = -i - 1 - - cur_tail_input = ms_features[-i - 1] - if prev_tail_features is not None: - if prev_tail_features.shape != cur_tail_input.shape: - prev_tail_features = 
F.interpolate(prev_tail_features, size=cur_tail_input.shape[2:], - mode='bilinear', align_corners=False) - cur_tail_input = torch.cat((cur_tail_input, prev_tail_features), dim=1) - - cur_out, cur_tail_feats = self.tails[scale_i](cur_tail_input, return_last_act=True) - - prev_tail_features = cur_tail_feats - all_outputs.append(cur_out) - - if self.out_cumulative: - all_outputs_cum = [all_outputs[0]] - for i in range(1, len(ms_features)): - cur_out = all_outputs[i] - cur_out_cum = cur_out + F.interpolate(all_outputs_cum[-1], size=cur_out.shape[2:], - mode='bilinear', align_corners=False) - all_outputs_cum.append(cur_out_cum) - all_outputs = all_outputs_cum - - if self.return_only_hr: - return all_outputs[-1] - else: - return all_outputs[::-1] - - -class MultiscaleDiscriminatorSimple(nn.Module): - def __init__(self, ms_impl): - super().__init__() - self.ms_impl = nn.ModuleList(ms_impl) - - @property - def num_scales(self): - return len(self.ms_impl) - - def forward(self, ms_inputs: List[torch.Tensor], smallest_scales_num: Optional[int] = None) \ - -> List[Tuple[torch.Tensor, List[torch.Tensor]]]: - """ - :param ms_inputs: List of inputs of different resolutions from HR to LR - :param smallest_scales_num: int or None, number of smallest scales to take at input - :return: List of pairs (prediction, features) for different resolutions from HR to LR - """ - if smallest_scales_num is None: - assert len(self.ms_impl) == len(ms_inputs), (len(self.ms_impl), len(ms_inputs), smallest_scales_num) - smallest_scales_num = len(self.heads) - else: - assert smallest_scales_num == len(ms_inputs) <= len(self.ms_impl), \ - (len(self.ms_impl), len(ms_inputs), smallest_scales_num) - - return [cur_discr(cur_input) for cur_discr, cur_input in zip(self.ms_impl[-smallest_scales_num:], ms_inputs)] - - -class SingleToMultiScaleInputMixin: - def forward(self, x: torch.Tensor) -> List: - orig_height, orig_width = x.shape[2:] - factors = [2 ** i for i in range(self.num_scales)] - ms_inputs = [F.interpolate(x, size=(orig_height // f, orig_width // f), mode='bilinear', align_corners=False) - for f in factors] - return super().forward(ms_inputs) - - -class GeneratorMultiToSingleOutputMixin: - def forward(self, x): - return super().forward(x)[0] - - -class DiscriminatorMultiToSingleOutputMixin: - def forward(self, x): - out_feat_tuples = super().forward(x) - return out_feat_tuples[0][0], [f for _, flist in out_feat_tuples for f in flist] - - -class DiscriminatorMultiToSingleOutputStackedMixin: - def __init__(self, *args, return_feats_only_levels=None, **kwargs): - super().__init__(*args, **kwargs) - self.return_feats_only_levels = return_feats_only_levels - - def forward(self, x): - out_feat_tuples = super().forward(x) - outs = [out for out, _ in out_feat_tuples] - scaled_outs = [outs[0]] + [F.interpolate(cur_out, size=outs[0].shape[-2:], - mode='bilinear', align_corners=False) - for cur_out in outs[1:]] - out = torch.cat(scaled_outs, dim=1) - if self.return_feats_only_levels is not None: - feat_lists = [out_feat_tuples[i][1] for i in self.return_feats_only_levels] - else: - feat_lists = [flist for _, flist in out_feat_tuples] - feats = [f for flist in feat_lists for f in flist] - return out, feats - - -class MultiscaleDiscrSingleInput(SingleToMultiScaleInputMixin, DiscriminatorMultiToSingleOutputStackedMixin, MultiscaleDiscriminatorSimple): - pass - - -class MultiscaleResNetSingle(GeneratorMultiToSingleOutputMixin, SingleToMultiScaleInputMixin, MultiscaleResNet): - pass diff --git 
a/spaces/Alpaca233/SadTalker/src/face3d/extract_kp_videos.py b/spaces/Alpaca233/SadTalker/src/face3d/extract_kp_videos.py deleted file mode 100644 index 21616a3b4b5077ffdce99621395237b4edcff58c..0000000000000000000000000000000000000000 --- a/spaces/Alpaca233/SadTalker/src/face3d/extract_kp_videos.py +++ /dev/null @@ -1,108 +0,0 @@ -import os -import cv2 -import time -import glob -import argparse -import face_alignment -import numpy as np -from PIL import Image -from tqdm import tqdm -from itertools import cycle - -from torch.multiprocessing import Pool, Process, set_start_method - -class KeypointExtractor(): - def __init__(self, device): - self.detector = face_alignment.FaceAlignment(face_alignment.LandmarksType._2D, - device=device) - - def extract_keypoint(self, images, name=None, info=True): - if isinstance(images, list): - keypoints = [] - if info: - i_range = tqdm(images,desc='landmark Det:') - else: - i_range = images - - for image in i_range: - current_kp = self.extract_keypoint(image) - if np.mean(current_kp) == -1 and keypoints: - keypoints.append(keypoints[-1]) - else: - keypoints.append(current_kp[None]) - - keypoints = np.concatenate(keypoints, 0) - np.savetxt(os.path.splitext(name)[0]+'.txt', keypoints.reshape(-1)) - return keypoints - else: - while True: - try: - keypoints = self.detector.get_landmarks_from_image(np.array(images))[0] - break - except RuntimeError as e: - if str(e).startswith('CUDA'): - print("Warning: out of memory, sleep for 1s") - time.sleep(1) - else: - print(e) - break - except TypeError: - print('No face detected in this image') - shape = [68, 2] - keypoints = -1. * np.ones(shape) - break - if name is not None: - np.savetxt(os.path.splitext(name)[0]+'.txt', keypoints.reshape(-1)) - return keypoints - -def read_video(filename): - frames = [] - cap = cv2.VideoCapture(filename) - while cap.isOpened(): - ret, frame = cap.read() - if ret: - frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB) - frame = Image.fromarray(frame) - frames.append(frame) - else: - break - cap.release() - return frames - -def run(data): - filename, opt, device = data - os.environ['CUDA_VISIBLE_DEVICES'] = device - kp_extractor = KeypointExtractor() - images = read_video(filename) - name = filename.split('/')[-2:] - os.makedirs(os.path.join(opt.output_dir, name[-2]), exist_ok=True) - kp_extractor.extract_keypoint( - images, - name=os.path.join(opt.output_dir, name[-2], name[-1]) - ) - -if __name__ == '__main__': - set_start_method('spawn') - parser = argparse.ArgumentParser(formatter_class=argparse.ArgumentDefaultsHelpFormatter) - parser.add_argument('--input_dir', type=str, help='the folder of the input files') - parser.add_argument('--output_dir', type=str, help='the folder of the output files') - parser.add_argument('--device_ids', type=str, default='0,1') - parser.add_argument('--workers', type=int, default=4) - - opt = parser.parse_args() - filenames = list() - VIDEO_EXTENSIONS_LOWERCASE = {'mp4'} - VIDEO_EXTENSIONS = VIDEO_EXTENSIONS_LOWERCASE.union({f.upper() for f in VIDEO_EXTENSIONS_LOWERCASE}) - extensions = VIDEO_EXTENSIONS - - for ext in extensions: - os.listdir(f'{opt.input_dir}') - print(f'{opt.input_dir}/*.{ext}') - filenames = sorted(glob.glob(f'{opt.input_dir}/*.{ext}')) - print('Total number of videos:', len(filenames)) - pool = Pool(opt.workers) - args_list = cycle([opt]) - device_ids = opt.device_ids.split(",") - device_ids = cycle(device_ids) - for data in tqdm(pool.imap_unordered(run, zip(filenames, args_list, device_ids))): - None diff --git 
a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/__init__.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/__init__.py deleted file mode 100644 index 2e8cee9ce697901d1b8f724660089d84a167dc4f..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/pipelines/__init__.py +++ /dev/null @@ -1,188 +0,0 @@ -from ..utils import ( - OptionalDependencyNotAvailable, - is_flax_available, - is_k_diffusion_available, - is_librosa_available, - is_note_seq_available, - is_onnx_available, - is_torch_available, - is_transformers_available, -) - - -try: - if not is_torch_available(): - raise OptionalDependencyNotAvailable() -except OptionalDependencyNotAvailable: - from ..utils.dummy_pt_objects import * # noqa F403 -else: - from .auto_pipeline import AutoPipelineForImage2Image, AutoPipelineForInpainting, AutoPipelineForText2Image - from .consistency_models import ConsistencyModelPipeline - from .dance_diffusion import DanceDiffusionPipeline - from .ddim import DDIMPipeline - from .ddpm import DDPMPipeline - from .dit import DiTPipeline - from .latent_diffusion import LDMSuperResolutionPipeline - from .latent_diffusion_uncond import LDMPipeline - from .pipeline_utils import AudioPipelineOutput, DiffusionPipeline, ImagePipelineOutput - from .pndm import PNDMPipeline - from .repaint import RePaintPipeline - from .score_sde_ve import ScoreSdeVePipeline - from .stochastic_karras_ve import KarrasVePipeline - -try: - if not (is_torch_available() and is_librosa_available()): - raise OptionalDependencyNotAvailable() -except OptionalDependencyNotAvailable: - from ..utils.dummy_torch_and_librosa_objects import * # noqa F403 -else: - from .audio_diffusion import AudioDiffusionPipeline, Mel - -try: - if not (is_torch_available() and is_transformers_available()): - raise OptionalDependencyNotAvailable() -except OptionalDependencyNotAvailable: - from ..utils.dummy_torch_and_transformers_objects import * # noqa F403 -else: - from .alt_diffusion import AltDiffusionImg2ImgPipeline, AltDiffusionPipeline - from .audioldm import AudioLDMPipeline - from .controlnet import ( - StableDiffusionControlNetImg2ImgPipeline, - StableDiffusionControlNetInpaintPipeline, - StableDiffusionControlNetPipeline, - StableDiffusionXLControlNetPipeline, - ) - from .deepfloyd_if import ( - IFImg2ImgPipeline, - IFImg2ImgSuperResolutionPipeline, - IFInpaintingPipeline, - IFInpaintingSuperResolutionPipeline, - IFPipeline, - IFSuperResolutionPipeline, - ) - from .kandinsky import ( - KandinskyCombinedPipeline, - KandinskyImg2ImgCombinedPipeline, - KandinskyImg2ImgPipeline, - KandinskyInpaintCombinedPipeline, - KandinskyInpaintPipeline, - KandinskyPipeline, - KandinskyPriorPipeline, - ) - from .kandinsky2_2 import ( - KandinskyV22CombinedPipeline, - KandinskyV22ControlnetImg2ImgPipeline, - KandinskyV22ControlnetPipeline, - KandinskyV22Img2ImgCombinedPipeline, - KandinskyV22Img2ImgPipeline, - KandinskyV22InpaintCombinedPipeline, - KandinskyV22InpaintPipeline, - KandinskyV22Pipeline, - KandinskyV22PriorEmb2EmbPipeline, - KandinskyV22PriorPipeline, - ) - from .latent_diffusion import LDMTextToImagePipeline - from .paint_by_example import PaintByExamplePipeline - from .semantic_stable_diffusion import SemanticStableDiffusionPipeline - from .shap_e import ShapEImg2ImgPipeline, ShapEPipeline - from .stable_diffusion import ( - CycleDiffusionPipeline, - StableDiffusionAttendAndExcitePipeline, - StableDiffusionDepth2ImgPipeline, - 
StableDiffusionDiffEditPipeline, - StableDiffusionImageVariationPipeline, - StableDiffusionImg2ImgPipeline, - StableDiffusionInpaintPipeline, - StableDiffusionInpaintPipelineLegacy, - StableDiffusionInstructPix2PixPipeline, - StableDiffusionLatentUpscalePipeline, - StableDiffusionLDM3DPipeline, - StableDiffusionModelEditingPipeline, - StableDiffusionPanoramaPipeline, - StableDiffusionParadigmsPipeline, - StableDiffusionPipeline, - StableDiffusionPix2PixZeroPipeline, - StableDiffusionSAGPipeline, - StableDiffusionUpscalePipeline, - StableUnCLIPImg2ImgPipeline, - StableUnCLIPPipeline, - ) - from .stable_diffusion_safe import StableDiffusionPipelineSafe - from .stable_diffusion_xl import ( - StableDiffusionXLImg2ImgPipeline, - StableDiffusionXLInpaintPipeline, - StableDiffusionXLInstructPix2PixPipeline, - StableDiffusionXLPipeline, - ) - from .t2i_adapter import StableDiffusionAdapterPipeline - from .text_to_video_synthesis import TextToVideoSDPipeline, TextToVideoZeroPipeline, VideoToVideoSDPipeline - from .unclip import UnCLIPImageVariationPipeline, UnCLIPPipeline - from .unidiffuser import ImageTextPipelineOutput, UniDiffuserModel, UniDiffuserPipeline, UniDiffuserTextDecoder - from .versatile_diffusion import ( - VersatileDiffusionDualGuidedPipeline, - VersatileDiffusionImageVariationPipeline, - VersatileDiffusionPipeline, - VersatileDiffusionTextToImagePipeline, - ) - from .vq_diffusion import VQDiffusionPipeline - - -try: - if not is_onnx_available(): - raise OptionalDependencyNotAvailable() -except OptionalDependencyNotAvailable: - from ..utils.dummy_onnx_objects import * # noqa F403 -else: - from .onnx_utils import OnnxRuntimeModel - -try: - if not (is_torch_available() and is_transformers_available() and is_onnx_available()): - raise OptionalDependencyNotAvailable() -except OptionalDependencyNotAvailable: - from ..utils.dummy_torch_and_transformers_and_onnx_objects import * # noqa F403 -else: - from .stable_diffusion import ( - OnnxStableDiffusionImg2ImgPipeline, - OnnxStableDiffusionInpaintPipeline, - OnnxStableDiffusionInpaintPipelineLegacy, - OnnxStableDiffusionPipeline, - OnnxStableDiffusionUpscalePipeline, - StableDiffusionOnnxPipeline, - ) - -try: - if not (is_torch_available() and is_transformers_available() and is_k_diffusion_available()): - raise OptionalDependencyNotAvailable() -except OptionalDependencyNotAvailable: - from ..utils.dummy_torch_and_transformers_and_k_diffusion_objects import * # noqa F403 -else: - from .stable_diffusion import StableDiffusionKDiffusionPipeline - -try: - if not is_flax_available(): - raise OptionalDependencyNotAvailable() -except OptionalDependencyNotAvailable: - from ..utils.dummy_flax_objects import * # noqa F403 -else: - from .pipeline_flax_utils import FlaxDiffusionPipeline - - -try: - if not (is_flax_available() and is_transformers_available()): - raise OptionalDependencyNotAvailable() -except OptionalDependencyNotAvailable: - from ..utils.dummy_flax_and_transformers_objects import * # noqa F403 -else: - from .controlnet import FlaxStableDiffusionControlNetPipeline - from .stable_diffusion import ( - FlaxStableDiffusionImg2ImgPipeline, - FlaxStableDiffusionInpaintPipeline, - FlaxStableDiffusionPipeline, - ) -try: - if not (is_transformers_available() and is_torch_available() and is_note_seq_available()): - raise OptionalDependencyNotAvailable() -except OptionalDependencyNotAvailable: - from ..utils.dummy_transformers_and_torch_and_note_seq_objects import * # noqa F403 -else: - from .spectrogram_diffusion import MidiProcessor, 
SpectrogramDiffusionPipeline diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/utils/dummy_torch_and_torchsde_objects.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/utils/dummy_torch_and_torchsde_objects.py deleted file mode 100644 index a81bbb316f32267c31b06598519f1eef9ddde643..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/utils/dummy_torch_and_torchsde_objects.py +++ /dev/null @@ -1,17 +0,0 @@ -# This file is autogenerated by the command `make fix-copies`, do not edit. -from ..utils import DummyObject, requires_backends - - -class DPMSolverSDEScheduler(metaclass=DummyObject): - _backends = ["torch", "torchsde"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["torch", "torchsde"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["torch", "torchsde"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["torch", "torchsde"]) diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/schedulers/test_scheduler_euler.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/schedulers/test_scheduler_euler.py deleted file mode 100644 index 0c3b065161db4286a150aaad35685f27131f86e8..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/tests/schedulers/test_scheduler_euler.py +++ /dev/null @@ -1,146 +0,0 @@ -import torch - -from diffusers import EulerDiscreteScheduler -from diffusers.utils import torch_device - -from .test_schedulers import SchedulerCommonTest - - -class EulerDiscreteSchedulerTest(SchedulerCommonTest): - scheduler_classes = (EulerDiscreteScheduler,) - num_inference_steps = 10 - - def get_scheduler_config(self, **kwargs): - config = { - "num_train_timesteps": 1100, - "beta_start": 0.0001, - "beta_end": 0.02, - "beta_schedule": "linear", - } - - config.update(**kwargs) - return config - - def test_timesteps(self): - for timesteps in [10, 50, 100, 1000]: - self.check_over_configs(num_train_timesteps=timesteps) - - def test_betas(self): - for beta_start, beta_end in zip([0.00001, 0.0001, 0.001], [0.0002, 0.002, 0.02]): - self.check_over_configs(beta_start=beta_start, beta_end=beta_end) - - def test_schedules(self): - for schedule in ["linear", "scaled_linear"]: - self.check_over_configs(beta_schedule=schedule) - - def test_prediction_type(self): - for prediction_type in ["epsilon", "v_prediction"]: - self.check_over_configs(prediction_type=prediction_type) - - def test_full_loop_no_noise(self): - scheduler_class = self.scheduler_classes[0] - scheduler_config = self.get_scheduler_config() - scheduler = scheduler_class(**scheduler_config) - - scheduler.set_timesteps(self.num_inference_steps) - - generator = torch.manual_seed(0) - - model = self.dummy_model() - sample = self.dummy_sample_deter * scheduler.init_noise_sigma - sample = sample.to(torch_device) - - for i, t in enumerate(scheduler.timesteps): - sample = scheduler.scale_model_input(sample, t) - - model_output = model(sample, t) - - output = scheduler.step(model_output, t, sample, generator=generator) - sample = output.prev_sample - - result_sum = torch.sum(torch.abs(sample)) - result_mean = torch.mean(torch.abs(sample)) - - assert abs(result_sum.item() - 10.0807) < 1e-2 - assert abs(result_mean.item() - 0.0131) < 1e-3 - - def test_full_loop_with_v_prediction(self): - scheduler_class = self.scheduler_classes[0] - scheduler_config = 
self.get_scheduler_config(prediction_type="v_prediction") - scheduler = scheduler_class(**scheduler_config) - - scheduler.set_timesteps(self.num_inference_steps) - - generator = torch.manual_seed(0) - - model = self.dummy_model() - sample = self.dummy_sample_deter * scheduler.init_noise_sigma - sample = sample.to(torch_device) - - for i, t in enumerate(scheduler.timesteps): - sample = scheduler.scale_model_input(sample, t) - - model_output = model(sample, t) - - output = scheduler.step(model_output, t, sample, generator=generator) - sample = output.prev_sample - - result_sum = torch.sum(torch.abs(sample)) - result_mean = torch.mean(torch.abs(sample)) - - assert abs(result_sum.item() - 0.0002) < 1e-2 - assert abs(result_mean.item() - 2.2676e-06) < 1e-3 - - def test_full_loop_device(self): - scheduler_class = self.scheduler_classes[0] - scheduler_config = self.get_scheduler_config() - scheduler = scheduler_class(**scheduler_config) - - scheduler.set_timesteps(self.num_inference_steps, device=torch_device) - - generator = torch.manual_seed(0) - - model = self.dummy_model() - sample = self.dummy_sample_deter * scheduler.init_noise_sigma.cpu() - sample = sample.to(torch_device) - - for t in scheduler.timesteps: - sample = scheduler.scale_model_input(sample, t) - - model_output = model(sample, t) - - output = scheduler.step(model_output, t, sample, generator=generator) - sample = output.prev_sample - - result_sum = torch.sum(torch.abs(sample)) - result_mean = torch.mean(torch.abs(sample)) - - assert abs(result_sum.item() - 10.0807) < 1e-2 - assert abs(result_mean.item() - 0.0131) < 1e-3 - - def test_full_loop_device_karras_sigmas(self): - scheduler_class = self.scheduler_classes[0] - scheduler_config = self.get_scheduler_config() - scheduler = scheduler_class(**scheduler_config, use_karras_sigmas=True) - - scheduler.set_timesteps(self.num_inference_steps, device=torch_device) - - generator = torch.manual_seed(0) - - model = self.dummy_model() - sample = self.dummy_sample_deter * scheduler.init_noise_sigma.cpu() - sample = sample.to(torch_device) - - for t in scheduler.timesteps: - sample = scheduler.scale_model_input(sample, t) - - model_output = model(sample, t) - - output = scheduler.step(model_output, t, sample, generator=generator) - sample = output.prev_sample - - result_sum = torch.sum(torch.abs(sample)) - result_mean = torch.mean(torch.abs(sample)) - - assert abs(result_sum.item() - 124.52299499511719) < 1e-2 - assert abs(result_mean.item() - 0.16213932633399963) < 1e-3 diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/emanet/emanet_r101-d8_769x769_80k_cityscapes.py b/spaces/Andy1621/uniformer_image_segmentation/configs/emanet/emanet_r101-d8_769x769_80k_cityscapes.py deleted file mode 100644 index c5dbf20b0fcc7bc1dd077bd8b7077772251d4c1a..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_segmentation/configs/emanet/emanet_r101-d8_769x769_80k_cityscapes.py +++ /dev/null @@ -1,2 +0,0 @@ -_base_ = './emanet_r50-d8_769x769_80k_cityscapes.py' -model = dict(pretrained='open-mmlab://resnet101_v1c', backbone=dict(depth=101)) diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/fcn/fcn_d6_r50b-d16_512x1024_80k_cityscapes.py b/spaces/Andy1621/uniformer_image_segmentation/configs/fcn/fcn_d6_r50b-d16_512x1024_80k_cityscapes.py deleted file mode 100644 index 0749ff14a3e7d207e82572e0516b2555ccacc7d9..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_segmentation/configs/fcn/fcn_d6_r50b-d16_512x1024_80k_cityscapes.py 
+++ /dev/null @@ -1,2 +0,0 @@ -_base_ = './fcn_d6_r50-d16_512x1024_80k_cityscapes.py' -model = dict(pretrained='torchvision://resnet50', backbone=dict(type='ResNet')) diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/pspnet/pspnet_r101b-d8_769x769_80k_cityscapes.py b/spaces/Andy1621/uniformer_image_segmentation/configs/pspnet/pspnet_r101b-d8_769x769_80k_cityscapes.py deleted file mode 100644 index 1a7cb708e551e90a12ad4267e2af6938c353f0ba..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_segmentation/configs/pspnet/pspnet_r101b-d8_769x769_80k_cityscapes.py +++ /dev/null @@ -1,4 +0,0 @@ -_base_ = './pspnet_r50-d8_769x769_80k_cityscapes.py' -model = dict( - pretrained='torchvision://resnet101', - backbone=dict(type='ResNet', depth=101)) diff --git a/spaces/Anonymous-sub/Rerender/flow/flow_utils.py b/spaces/Anonymous-sub/Rerender/flow/flow_utils.py deleted file mode 100644 index e1efa131397eaf4378c8ce62b75c3012fd665872..0000000000000000000000000000000000000000 --- a/spaces/Anonymous-sub/Rerender/flow/flow_utils.py +++ /dev/null @@ -1,218 +0,0 @@ -import os -import sys - -import numpy as np -import torch -import torch.nn.functional as F - -parent_dir = os.path.dirname(os.path.dirname(os.path.abspath(__file__))) -gmflow_dir = os.path.join(parent_dir, 'gmflow_module') -sys.path.insert(0, gmflow_dir) - -from gmflow.gmflow import GMFlow # noqa: E702 E402 F401 -from utils.utils import InputPadder # noqa: E702 E402 - -import huggingface_hub - -repo_name = 'Anonymous-sub/Rerender' - -global_device = 'cuda' if torch.cuda.is_available() else 'cpu' -gmflow_path = huggingface_hub.hf_hub_download( - repo_name, 'models/gmflow_sintel-0c07dcb3.pth', local_dir='./') - - -def coords_grid(b, h, w, homogeneous=False, device=None): - y, x = torch.meshgrid(torch.arange(h), torch.arange(w)) # [H, W] - - stacks = [x, y] - - if homogeneous: - ones = torch.ones_like(x) # [H, W] - stacks.append(ones) - - grid = torch.stack(stacks, dim=0).float() # [2, H, W] or [3, H, W] - - grid = grid[None].repeat(b, 1, 1, 1) # [B, 2, H, W] or [B, 3, H, W] - - if device is not None: - grid = grid.to(global_device) - - return grid - - -def bilinear_sample(img, - sample_coords, - mode='bilinear', - padding_mode='zeros', - return_mask=False): - # img: [B, C, H, W] - # sample_coords: [B, 2, H, W] in image scale - if sample_coords.size(1) != 2: # [B, H, W, 2] - sample_coords = sample_coords.permute(0, 3, 1, 2) - - b, _, h, w = sample_coords.shape - - # Normalize to [-1, 1] - x_grid = 2 * sample_coords[:, 0] / (w - 1) - 1 - y_grid = 2 * sample_coords[:, 1] / (h - 1) - 1 - - grid = torch.stack([x_grid, y_grid], dim=-1) # [B, H, W, 2] - - img = F.grid_sample(img, - grid, - mode=mode, - padding_mode=padding_mode, - align_corners=True) - - if return_mask: - mask = (x_grid >= -1) & (y_grid >= -1) & (x_grid <= 1) & ( - y_grid <= 1) # [B, H, W] - - return img, mask - - return img - - -def flow_warp(feature, - flow, - mask=False, - mode='bilinear', - padding_mode='zeros'): - b, c, h, w = feature.size() - assert flow.size(1) == 2 - - grid = coords_grid(b, h, w).to(flow.device) + flow # [B, 2, H, W] - - return bilinear_sample(feature, - grid, - mode=mode, - padding_mode=padding_mode, - return_mask=mask) - - -def forward_backward_consistency_check(fwd_flow, - bwd_flow, - alpha=0.01, - beta=0.5): - # fwd_flow, bwd_flow: [B, 2, H, W] - # alpha and beta values are following UnFlow - # (https://arxiv.org/abs/1711.07837) - assert fwd_flow.dim() == 4 and bwd_flow.dim() == 4 - assert fwd_flow.size(1) == 2 and 
bwd_flow.size(1) == 2 - flow_mag = torch.norm(fwd_flow, dim=1) + torch.norm(bwd_flow, - dim=1) # [B, H, W] - - warped_bwd_flow = flow_warp(bwd_flow, fwd_flow) # [B, 2, H, W] - warped_fwd_flow = flow_warp(fwd_flow, bwd_flow) # [B, 2, H, W] - - diff_fwd = torch.norm(fwd_flow + warped_bwd_flow, dim=1) # [B, H, W] - diff_bwd = torch.norm(bwd_flow + warped_fwd_flow, dim=1) - - threshold = alpha * flow_mag + beta - - fwd_occ = (diff_fwd > threshold).float() # [B, H, W] - bwd_occ = (diff_bwd > threshold).float() - - return fwd_occ, bwd_occ - - -@torch.no_grad() -def get_warped_and_mask(flow_model, - image1, - image2, - image3=None, - pixel_consistency=False): - if image3 is None: - image3 = image1 - padder = InputPadder(image1.shape, padding_factor=8) - image1, image2 = padder.pad(image1[None].to(global_device), - image2[None].to(global_device)) - results_dict = flow_model(image1, - image2, - attn_splits_list=[2], - corr_radius_list=[-1], - prop_radius_list=[-1], - pred_bidir_flow=True) - flow_pr = results_dict['flow_preds'][-1] # [B, 2, H, W] - fwd_flow = padder.unpad(flow_pr[0]).unsqueeze(0) # [1, 2, H, W] - bwd_flow = padder.unpad(flow_pr[1]).unsqueeze(0) # [1, 2, H, W] - fwd_occ, bwd_occ = forward_backward_consistency_check( - fwd_flow, bwd_flow) # [1, H, W] float - if pixel_consistency: - warped_image1 = flow_warp(image1, bwd_flow) - bwd_occ = torch.clamp( - bwd_occ + - (abs(image2 - warped_image1).mean(dim=1) > 255 * 0.25).float(), 0, - 1).unsqueeze(0) - warped_results = flow_warp(image3, bwd_flow) - return warped_results, bwd_occ, bwd_flow - - -class FlowCalc(): - - def __init__(self, model_path='./models/gmflow_sintel-0c07dcb3.pth'): - flow_model = GMFlow( - feature_channels=128, - num_scales=1, - upsample_factor=8, - num_head=1, - attention_type='swin', - ffn_dim_expansion=4, - num_transformer_layers=6, - ).to(global_device) - checkpoint = torch.load(model_path, - map_location=lambda storage, loc: storage) - weights = checkpoint['model'] if 'model' in checkpoint else checkpoint - flow_model.load_state_dict(weights, strict=False) - flow_model.eval() - self.model = flow_model - - @torch.no_grad() - def get_flow(self, image1, image2, save_path=None): - if save_path is not None and os.path.exists(save_path): - bwd_flow = read_flow(save_path) - return bwd_flow - - image1 = torch.from_numpy(image1).permute(2, 0, 1).float() - image2 = torch.from_numpy(image2).permute(2, 0, 1).float() - padder = InputPadder(image1.shape, padding_factor=8) - image1, image2 = padder.pad(image1[None].to(global_device), - image2[None].to(global_device)) - results_dict = self.model(image1, - image2, - attn_splits_list=[2], - corr_radius_list=[-1], - prop_radius_list=[-1], - pred_bidir_flow=True) - flow_pr = results_dict['flow_preds'][-1] # [B, 2, H, W] - bwd_flow = padder.unpad(flow_pr[1]).unsqueeze(0) # [1, 2, H, W] - if save_path is not None: - flow_np = bwd_flow.cpu().numpy() - np.save(save_path, flow_np) - - return bwd_flow - - def warp(self, img, flow, mode='bilinear'): - expand = False - if len(img.shape) == 2: - expand = True - img = np.expand_dims(img, 2) - - img = torch.from_numpy(img).permute(2, 0, 1).unsqueeze(0) - dtype = img.dtype - img = img.to(torch.float) - res = flow_warp(img, flow, mode=mode) - res = res.to(dtype) - res = res[0].cpu().permute(1, 2, 0).numpy() - if expand: - res = res[:, :, 0] - return res - - -def read_flow(save_path): - flow_np = np.load(save_path) - bwd_flow = torch.from_numpy(flow_np) - return bwd_flow - - -flow_calc = FlowCalc() diff --git 
a/spaces/AnthonyTruchetPoC/persistent-docker/scripts/run-coverage.sh b/spaces/AnthonyTruchetPoC/persistent-docker/scripts/run-coverage.sh deleted file mode 100644 index 2377ab8927e881ded93afc92d1e2a2a943b75807..0000000000000000000000000000000000000000 --- a/spaces/AnthonyTruchetPoC/persistent-docker/scripts/run-coverage.sh +++ /dev/null @@ -1,4 +0,0 @@ -#!/usr/bin/env sh -poetry run coverage run --parallel -m pytest -poetry run coverage combine -poetry run coverage report diff --git a/spaces/Apex-X/nono/app.py b/spaces/Apex-X/nono/app.py deleted file mode 100644 index ac81150d2a4acd6d7124fd9c15115ab12892b61a..0000000000000000000000000000000000000000 --- a/spaces/Apex-X/nono/app.py +++ /dev/null @@ -1,69 +0,0 @@ -# -* coding:UTF-8 -* -# !/usr/bin/env python -import numpy as np -import gradio as gr -import roop.globals -from roop.core import ( - start, - decode_execution_providers, - suggest_max_memory, - suggest_execution_threads, -) -from roop.processors.frame.core import get_frame_processors_modules -from roop.utilities import normalize_output_path -import os -from PIL import Image - - -def swap_face(source_file, target_file): - - source_path = "input.jpg" - target_path = "target.jpg" - - source_image = Image.fromarray(source_file) - source_image.save(source_path) - target_image = Image.fromarray(target_file) - target_image.save(target_path) - - print("source_path: ", source_path) - print("target_path: ", target_path) - - roop.globals.source_path = source_path - roop.globals.target_path = target_path - output_path = "output.jpg" - roop.globals.output_path = normalize_output_path( - roop.globals.source_path, roop.globals.target_path, output_path - ) - roop.globals.frame_processors = ["face_swapper"] - roop.globals.headless = True - roop.globals.keep_fps = True - roop.globals.keep_audio = True - roop.globals.keep_frames = False - roop.globals.many_faces = False - roop.globals.video_encoder = "libx264" - roop.globals.video_quality = 18 - roop.globals.max_memory = suggest_max_memory() - roop.globals.execution_providers = decode_execution_providers(["cpu"]) - roop.globals.execution_threads = suggest_execution_threads() - - print( - "start process", - roop.globals.source_path, - roop.globals.target_path, - roop.globals.output_path, - ) - - for frame_processor in get_frame_processors_modules( - roop.globals.frame_processors - ): - if not frame_processor.pre_check(): - return - - start() - return output_path - - -app = gr.Interface( - fn=swap_face, inputs=[gr.Image(), gr.Image()], outputs="image" -) -app.launch() diff --git a/spaces/Aqdas/YouTube_Video_OpenAI_whisper/whisper.py b/spaces/Aqdas/YouTube_Video_OpenAI_whisper/whisper.py deleted file mode 100644 index 88273fa62294cfce1ab74b949dbff3e17354e622..0000000000000000000000000000000000000000 --- a/spaces/Aqdas/YouTube_Video_OpenAI_whisper/whisper.py +++ /dev/null @@ -1,18 +0,0 @@ -def dowload_youtube_video(url): - from pytube import YouTube - yt = YouTube(url) - global audio_stream - audio_stream = yt.streams.filter(only_audio=True, file_extension='mp4').first() - audio_stream.download() - return 'download successfully' - - -def transcribe_audio(): - import openai - from openai import OpenAI - import os - client = OpenAI(api_key=os.environ['openai_api_key']) - file = open(audio_stream.default_filename, "rb") - transcription = client.audio.transcriptions.create(model="whisper-1", file=file, response_format='text', language='ur') - - return transcription \ No newline at end of file diff --git a/spaces/Artrajz/vits-simple-api/logger.py 
b/spaces/Artrajz/vits-simple-api/logger.py deleted file mode 100644 index 886d4c39940d9d49e878a3ff992216ede0d46da2..0000000000000000000000000000000000000000 --- a/spaces/Artrajz/vits-simple-api/logger.py +++ /dev/null @@ -1,40 +0,0 @@ -import os -import sys -import logging -import logzero -import config -from logging.handlers import TimedRotatingFileHandler - -logzero.loglevel(logging.WARNING) -logger = logging.getLogger("vits-simple-api") -level = getattr(config, "LOGGING_LEVEL", "DEBUG") -level_dict = {'DEBUG': logging.DEBUG, 'INFO': logging.INFO, 'WARNING': logging.WARNING, 'ERROR': logging.ERROR, - 'CRITICAL': logging.CRITICAL} -logging.basicConfig(level=level_dict[level]) -logging.getLogger('numba').setLevel(logging.WARNING) -logging.getLogger("langid.langid").setLevel(logging.INFO) -logging.getLogger("apscheduler.scheduler").setLevel(logging.INFO) - -os.makedirs(config.LOGS_PATH, exist_ok=True) -log_file = os.path.join(config.LOGS_PATH, 'latest.log') -backup_count = getattr(config, "LOGS_BACKUPCOUNT", 30) -handler = TimedRotatingFileHandler(log_file, when="midnight", interval=1, backupCount=backup_count, encoding='utf-8') -handler.suffix = "%Y-%m-%d.log" -formatter = logging.Formatter('%(levelname)s:%(name)s %(message)s') -handler.setFormatter(formatter) - -logging.getLogger().addHandler(handler) - - -# Custom function to handle uncaught exceptions -def handle_exception(exc_type, exc_value, exc_traceback): - # If it's a keyboard interrupt, don't handle it, just return - if issubclass(exc_type, KeyboardInterrupt): - sys.__excepthook__(exc_type, exc_value, exc_traceback) - return - - logger.error("Uncaught exception", exc_info=(exc_type, exc_value, exc_traceback)) - - -# Set the global exception handler in Python -sys.excepthook = handle_exception diff --git a/spaces/AsakuraMizu/moe-tts/text/ngu_dialect.py b/spaces/AsakuraMizu/moe-tts/text/ngu_dialect.py deleted file mode 100644 index 69d0ce6fe5a989843ee059a71ccab793f20f9176..0000000000000000000000000000000000000000 --- a/spaces/AsakuraMizu/moe-tts/text/ngu_dialect.py +++ /dev/null @@ -1,30 +0,0 @@ -import re -import opencc - - -dialects = {'SZ': 'suzhou', 'WX': 'wuxi', 'CZ': 'changzhou', 'HZ': 'hangzhou', - 'SX': 'shaoxing', 'NB': 'ningbo', 'JJ': 'jingjiang', 'YX': 'yixing', - 'JD': 'jiading', 'ZR': 'zhenru', 'PH': 'pinghu', 'TX': 'tongxiang', - 'JS': 'jiashan', 'HN': 'xiashi', 'LP': 'linping', 'XS': 'xiaoshan', - 'FY': 'fuyang', 'RA': 'ruao', 'CX': 'cixi', 'SM': 'sanmen', - 'TT': 'tiantai', 'WZ': 'wenzhou', 'SC': 'suichang', 'YB': 'youbu'} - -converters = {} - -for dialect in dialects.values(): - try: - converters[dialect] = opencc.OpenCC("chinese_dialect_lexicons/"+dialect) - except: - pass - - -def ngu_dialect_to_ipa(text, dialect): - dialect = dialects[dialect] - text = converters[dialect].convert(text).replace('-','').replace('$',' ') - text = re.sub(r'[、;:]', ',', text) - text = re.sub(r'\s*,\s*', ', ', text) - text = re.sub(r'\s*。\s*', '. ', text) - text = re.sub(r'\s*?\s*', '? ', text) - text = re.sub(r'\s*!\s*', '! 
', text) - text = re.sub(r'\s*$', '', text) - return text diff --git a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/README_D2.md b/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/README_D2.md deleted file mode 100644 index a88ad7e21ce1d8651ec0d73848ce6dcd17f19d00..0000000000000000000000000000000000000000 --- a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/README_D2.md +++ /dev/null @@ -1,62 +0,0 @@ - - -Detectron2 is Facebook AI Research's next generation software system -that implements state-of-the-art object detection algorithms. -It is a ground-up rewrite of the previous version, -[Detectron](https://github.com/facebookresearch/Detectron/), -and it originates from [maskrcnn-benchmark](https://github.com/facebookresearch/maskrcnn-benchmark/). - -
    - -### What's New -* It is powered by the [PyTorch](https://pytorch.org) deep learning framework. -* Includes more features such as panoptic segmentation, Densepose, Cascade R-CNN, rotated bounding boxes, PointRend, - DeepLab, etc. -* Can be used as a library to support [different projects](projects/) on top of it. - We'll open source more research projects in this way. -* It [trains much faster](https://detectron2.readthedocs.io/notes/benchmarks.html). -* Models can be exported to TorchScript format or Caffe2 format for deployment. - -See our [blog post](https://ai.facebook.com/blog/-detectron2-a-pytorch-based-modular-object-detection-library-/) -to see more demos and learn about detectron2. - -## Installation - -See [INSTALL.md](INSTALL.md). - -## Getting Started - -Follow the [installation instructions](https://detectron2.readthedocs.io/tutorials/install.html) to -install detectron2. - -See [Getting Started with Detectron2](https://detectron2.readthedocs.io/tutorials/getting_started.html), -and the [Colab Notebook](https://colab.research.google.com/drive/16jcaJoc6bCFAQ96jDe2HwtXj7BMD_-m5) -to learn about basic usage. - -Learn more at our [documentation](https://detectron2.readthedocs.org). -And see [projects/](projects/) for some projects that are built on top of detectron2. - -## Model Zoo and Baselines - -We provide a large set of baseline results and trained models available for download in the [Detectron2 Model Zoo](MODEL_ZOO.md). - - -## License - -Detectron2 is released under the [Apache 2.0 license](LICENSE). - -## Citing Detectron2 - -If you use Detectron2 in your research or wish to refer to the baseline results published in the [Model Zoo](MODEL_ZOO.md), please use the following BibTeX entry. - -```BibTeX -@misc{wu2019detectron2, - author = {Yuxin Wu and Alexander Kirillov and Francisco Massa and - Wan-Yen Lo and Ross Girshick}, - title = {Detectron2}, - howpublished = {\url{https://github.com/facebookresearch/detectron2}}, - year = {2019} -} -``` diff --git a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/projects/CenterNet2/centernet/modeling/backbone/bifpn.py b/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/projects/CenterNet2/centernet/modeling/backbone/bifpn.py deleted file mode 100644 index 565e2940ad0e4c43ec2172d4a79a9bd72adef09e..0000000000000000000000000000000000000000 --- a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/projects/CenterNet2/centernet/modeling/backbone/bifpn.py +++ /dev/null @@ -1,425 +0,0 @@ -# Modified from https://github.com/rwightman/efficientdet-pytorch/blob/master/effdet/efficientdet.py -# The original file is under Apache-2.0 License -import math -from os.path import join -import numpy as np -from collections import OrderedDict -from typing import List - -import torch -from torch import nn -import torch.utils.model_zoo as model_zoo -import torch.nn.functional as F -import fvcore.nn.weight_init as weight_init - -from detectron2.layers import ShapeSpec, Conv2d -from detectron2.modeling.backbone.resnet import build_resnet_backbone -from detectron2.modeling.backbone.build import BACKBONE_REGISTRY -from detectron2.layers.batch_norm import get_norm -from detectron2.modeling.backbone import Backbone -from .dlafpn import dla34 - -def get_fpn_config(base_reduction=8): - """BiFPN config with sum.""" - p = { - 'nodes': [ - {'reduction': base_reduction << 3, 'inputs_offsets': [3, 4]}, - {'reduction': base_reduction << 2, 'inputs_offsets': [2, 5]}, - {'reduction': base_reduction << 1, 
'inputs_offsets': [1, 6]}, - {'reduction': base_reduction, 'inputs_offsets': [0, 7]}, - {'reduction': base_reduction << 1, 'inputs_offsets': [1, 7, 8]}, - {'reduction': base_reduction << 2, 'inputs_offsets': [2, 6, 9]}, - {'reduction': base_reduction << 3, 'inputs_offsets': [3, 5, 10]}, - {'reduction': base_reduction << 4, 'inputs_offsets': [4, 11]}, - ], - 'weight_method': 'fastattn', - } - return p - - -def swish(x, inplace: bool = False): - """Swish - Described in: https://arxiv.org/abs/1710.05941 - """ - return x.mul_(x.sigmoid()) if inplace else x.mul(x.sigmoid()) - - -class Swish(nn.Module): - def __init__(self, inplace: bool = False): - super(Swish, self).__init__() - self.inplace = inplace - - def forward(self, x): - return swish(x, self.inplace) - - -class SequentialAppend(nn.Sequential): - def __init__(self, *args): - super(SequentialAppend, self).__init__(*args) - - def forward(self, x): - for module in self: - x.append(module(x)) - return x - - -class SequentialAppendLast(nn.Sequential): - def __init__(self, *args): - super(SequentialAppendLast, self).__init__(*args) - - # def forward(self, x: List[torch.Tensor]): - def forward(self, x): - for module in self: - x.append(module(x[-1])) - return x - - -class ConvBnAct2d(nn.Module): - def __init__(self, in_channels, out_channels, kernel_size, stride=1, dilation=1, padding='', bias=False, - norm='', act_layer=Swish): - super(ConvBnAct2d, self).__init__() - # self.conv = create_conv2d( - # in_channels, out_channels, kernel_size, stride=stride, dilation=dilation, padding=padding, bias=bias) - self.conv = Conv2d( - in_channels, out_channels, kernel_size=kernel_size, stride=stride, - padding=kernel_size // 2, bias=(norm == '')) - self.bn = get_norm(norm, out_channels) - self.act = None if act_layer is None else act_layer(inplace=True) - - def forward(self, x): - x = self.conv(x) - if self.bn is not None: - x = self.bn(x) - if self.act is not None: - x = self.act(x) - return x - - -class SeparableConv2d(nn.Module): - """ Separable Conv - """ - def __init__(self, in_channels, out_channels, kernel_size=3, stride=1, dilation=1, padding='', bias=False, - channel_multiplier=1.0, pw_kernel_size=1, act_layer=Swish, - norm=''): - super(SeparableConv2d, self).__init__() - - # self.conv_dw = create_conv2d( - # in_channels, int(in_channels * channel_multiplier), kernel_size, - # stride=stride, dilation=dilation, padding=padding, depthwise=True) - - self.conv_dw = Conv2d( - in_channels, int(in_channels * channel_multiplier), - kernel_size=kernel_size, stride=stride, padding=kernel_size // 2, bias=bias, - groups=out_channels) - # print('conv_dw', kernel_size, stride) - # self.conv_pw = create_conv2d( - # int(in_channels * channel_multiplier), out_channels, pw_kernel_size, padding=padding, bias=bias) - - self.conv_pw = Conv2d( - int(in_channels * channel_multiplier), out_channels, - kernel_size=pw_kernel_size, padding=pw_kernel_size // 2, bias=(norm=='')) - # print('conv_pw', pw_kernel_size) - - self.bn = get_norm(norm, out_channels) - self.act = None if act_layer is None else act_layer(inplace=True) - - def forward(self, x): - x = self.conv_dw(x) - x = self.conv_pw(x) - if self.bn is not None: - x = self.bn(x) - if self.act is not None: - x = self.act(x) - return x - - -class ResampleFeatureMap(nn.Sequential): - def __init__(self, in_channels, out_channels, reduction_ratio=1., pad_type='', pooling_type='max', - norm='', apply_bn=False, conv_after_downsample=False, - redundant_bias=False): - super(ResampleFeatureMap, self).__init__() - pooling_type = 
pooling_type or 'max' - self.in_channels = in_channels - self.out_channels = out_channels - self.reduction_ratio = reduction_ratio - self.conv_after_downsample = conv_after_downsample - - conv = None - if in_channels != out_channels: - conv = ConvBnAct2d( - in_channels, out_channels, kernel_size=1, padding=pad_type, - norm=norm if apply_bn else '', - bias=not apply_bn or redundant_bias, act_layer=None) - - if reduction_ratio > 1: - stride_size = int(reduction_ratio) - if conv is not None and not self.conv_after_downsample: - self.add_module('conv', conv) - self.add_module( - 'downsample', - # create_pool2d( - # pooling_type, kernel_size=stride_size + 1, stride=stride_size, padding=pad_type) - # nn.MaxPool2d(kernel_size=stride_size + 1, stride=stride_size, padding=pad_type) - nn.MaxPool2d(kernel_size=stride_size, stride=stride_size) - ) - if conv is not None and self.conv_after_downsample: - self.add_module('conv', conv) - else: - if conv is not None: - self.add_module('conv', conv) - if reduction_ratio < 1: - scale = int(1 // reduction_ratio) - self.add_module('upsample', nn.UpsamplingNearest2d(scale_factor=scale)) - - -class FpnCombine(nn.Module): - def __init__(self, feature_info, fpn_config, fpn_channels, inputs_offsets, target_reduction, pad_type='', - pooling_type='max', norm='', apply_bn_for_resampling=False, - conv_after_downsample=False, redundant_bias=False, weight_method='attn'): - super(FpnCombine, self).__init__() - self.inputs_offsets = inputs_offsets - self.weight_method = weight_method - - self.resample = nn.ModuleDict() - for idx, offset in enumerate(inputs_offsets): - in_channels = fpn_channels - if offset < len(feature_info): - in_channels = feature_info[offset]['num_chs'] - input_reduction = feature_info[offset]['reduction'] - else: - node_idx = offset - len(feature_info) - # print('node_idx, len', node_idx, len(fpn_config['nodes'])) - input_reduction = fpn_config['nodes'][node_idx]['reduction'] - reduction_ratio = target_reduction / input_reduction - self.resample[str(offset)] = ResampleFeatureMap( - in_channels, fpn_channels, reduction_ratio=reduction_ratio, pad_type=pad_type, - pooling_type=pooling_type, norm=norm, - apply_bn=apply_bn_for_resampling, conv_after_downsample=conv_after_downsample, - redundant_bias=redundant_bias) - - if weight_method == 'attn' or weight_method == 'fastattn': - # WSM - self.edge_weights = nn.Parameter(torch.ones(len(inputs_offsets)), requires_grad=True) - else: - self.edge_weights = None - - def forward(self, x): - dtype = x[0].dtype - nodes = [] - for offset in self.inputs_offsets: - input_node = x[offset] - input_node = self.resample[str(offset)](input_node) - nodes.append(input_node) - - if self.weight_method == 'attn': - normalized_weights = torch.softmax(self.edge_weights.type(dtype), dim=0) - x = torch.stack(nodes, dim=-1) * normalized_weights - elif self.weight_method == 'fastattn': - edge_weights = nn.functional.relu(self.edge_weights.type(dtype)) - weights_sum = torch.sum(edge_weights) - x = torch.stack( - [(nodes[i] * edge_weights[i]) / (weights_sum + 0.0001) for i in range(len(nodes))], dim=-1) - elif self.weight_method == 'sum': - x = torch.stack(nodes, dim=-1) - else: - raise ValueError('unknown weight_method {}'.format(self.weight_method)) - x = torch.sum(x, dim=-1) - return x - - -class BiFpnLayer(nn.Module): - def __init__(self, feature_info, fpn_config, fpn_channels, num_levels=5, pad_type='', - pooling_type='max', norm='', act_layer=Swish, - apply_bn_for_resampling=False, conv_after_downsample=True, 
conv_bn_relu_pattern=False, - separable_conv=True, redundant_bias=False): - super(BiFpnLayer, self).__init__() - self.fpn_config = fpn_config - self.num_levels = num_levels - self.conv_bn_relu_pattern = False - - self.feature_info = [] - self.fnode = SequentialAppend() - for i, fnode_cfg in enumerate(fpn_config['nodes']): - # logging.debug('fnode {} : {}'.format(i, fnode_cfg)) - # print('fnode {} : {}'.format(i, fnode_cfg)) - fnode_layers = OrderedDict() - - # combine features - reduction = fnode_cfg['reduction'] - fnode_layers['combine'] = FpnCombine( - feature_info, fpn_config, fpn_channels, fnode_cfg['inputs_offsets'], target_reduction=reduction, - pad_type=pad_type, pooling_type=pooling_type, norm=norm, - apply_bn_for_resampling=apply_bn_for_resampling, conv_after_downsample=conv_after_downsample, - redundant_bias=redundant_bias, weight_method=fpn_config['weight_method']) - self.feature_info.append(dict(num_chs=fpn_channels, reduction=reduction)) - - # after combine ops - after_combine = OrderedDict() - if not conv_bn_relu_pattern: - after_combine['act'] = act_layer(inplace=True) - conv_bias = redundant_bias - conv_act = None - else: - conv_bias = False - conv_act = act_layer - conv_kwargs = dict( - in_channels=fpn_channels, out_channels=fpn_channels, kernel_size=3, padding=pad_type, - bias=conv_bias, norm=norm, act_layer=conv_act) - after_combine['conv'] = SeparableConv2d(**conv_kwargs) if separable_conv else ConvBnAct2d(**conv_kwargs) - fnode_layers['after_combine'] = nn.Sequential(after_combine) - - self.fnode.add_module(str(i), nn.Sequential(fnode_layers)) - - self.feature_info = self.feature_info[-num_levels::] - - def forward(self, x): - x = self.fnode(x) - return x[-self.num_levels::] - - -class BiFPN(Backbone): - def __init__( - self, cfg, bottom_up, in_features, out_channels, norm='', - num_levels=5, num_bifpn=4, separable_conv=False, - ): - super(BiFPN, self).__init__() - assert isinstance(bottom_up, Backbone) - - # Feature map strides and channels from the bottom up network (e.g. 
ResNet) - input_shapes = bottom_up.output_shape() - in_strides = [input_shapes[f].stride for f in in_features] - in_channels = [input_shapes[f].channels for f in in_features] - - self.num_levels = num_levels - self.num_bifpn = num_bifpn - self.bottom_up = bottom_up - self.in_features = in_features - self._size_divisibility = 128 - levels = [int(math.log2(s)) for s in in_strides] - self._out_feature_strides = { - "p{}".format(int(math.log2(s))): s for s in in_strides} - if len(in_features) < num_levels: - for l in range(num_levels - len(in_features)): - s = l + levels[-1] - self._out_feature_strides["p{}".format(s + 1)] = 2 ** (s + 1) - self._out_features = list(sorted(self._out_feature_strides.keys())) - self._out_feature_channels = {k: out_channels for k in self._out_features} - - # print('self._out_feature_strides', self._out_feature_strides) - # print('self._out_feature_channels', self._out_feature_channels) - - feature_info = [ - {'num_chs': in_channels[level], 'reduction': in_strides[level]} \ - for level in range(len(self.in_features)) - ] - # self.config = config - fpn_config = get_fpn_config() - self.resample = SequentialAppendLast() - for level in range(num_levels): - if level < len(feature_info): - in_chs = in_channels[level] # feature_info[level]['num_chs'] - reduction = in_strides[level] # feature_info[level]['reduction'] - else: - # Adds a coarser level by downsampling the last feature map - reduction_ratio = 2 - self.resample.add_module(str(level), ResampleFeatureMap( - in_channels=in_chs, - out_channels=out_channels, - pad_type='same', - pooling_type=None, - norm=norm, - reduction_ratio=reduction_ratio, - apply_bn=True, - conv_after_downsample=False, - redundant_bias=False, - )) - in_chs = out_channels - reduction = int(reduction * reduction_ratio) - feature_info.append(dict(num_chs=in_chs, reduction=reduction)) - - self.cell = nn.Sequential() - for rep in range(self.num_bifpn): - # logging.debug('building cell {}'.format(rep)) - # print('building cell {}'.format(rep)) - fpn_layer = BiFpnLayer( - feature_info=feature_info, - fpn_config=fpn_config, - fpn_channels=out_channels, - num_levels=self.num_levels, - pad_type='same', - pooling_type=None, - norm=norm, - act_layer=Swish, - separable_conv=separable_conv, - apply_bn_for_resampling=True, - conv_after_downsample=False, - conv_bn_relu_pattern=False, - redundant_bias=False, - ) - self.cell.add_module(str(rep), fpn_layer) - feature_info = fpn_layer.feature_info - # import pdb; pdb.set_trace() - - @property - def size_divisibility(self): - return self._size_divisibility - - def forward(self, x): - # print('input shapes', x.shape) - bottom_up_features = self.bottom_up(x) - x = [bottom_up_features[f] for f in self.in_features] - assert len(self.resample) == self.num_levels - len(x) - x = self.resample(x) - shapes = [xx.shape for xx in x] - # print('resample shapes', shapes) - x = self.cell(x) - out = {f: xx for f, xx in zip(self._out_features, x)} - # import pdb; pdb.set_trace() - return out - - -@BACKBONE_REGISTRY.register() -def build_resnet_bifpn_backbone(cfg, input_shape: ShapeSpec): - """ - Args: - cfg: a detectron2 CfgNode - - Returns: - backbone (Backbone): backbone module, must be a subclass of :class:`Backbone`. 
- """ - bottom_up = build_resnet_backbone(cfg, input_shape) - in_features = cfg.MODEL.FPN.IN_FEATURES - backbone = BiFPN( - cfg=cfg, - bottom_up=bottom_up, - in_features=in_features, - out_channels=cfg.MODEL.BIFPN.OUT_CHANNELS, - norm=cfg.MODEL.BIFPN.NORM, - num_levels=cfg.MODEL.BIFPN.NUM_LEVELS, - num_bifpn=cfg.MODEL.BIFPN.NUM_BIFPN, - separable_conv=cfg.MODEL.BIFPN.SEPARABLE_CONV, - ) - return backbone - -@BACKBONE_REGISTRY.register() -def build_p37_dla_bifpn_backbone(cfg, input_shape: ShapeSpec): - """ - Args: - cfg: a detectron2 CfgNode - Returns: - backbone (Backbone): backbone module, must be a subclass of :class:`Backbone`. - """ - bottom_up = dla34(cfg) - in_features = cfg.MODEL.FPN.IN_FEATURES - assert cfg.MODEL.BIFPN.NUM_LEVELS == 5 - - backbone = BiFPN( - cfg=cfg, - bottom_up=bottom_up, - in_features=in_features, - out_channels=cfg.MODEL.BIFPN.OUT_CHANNELS, - norm=cfg.MODEL.BIFPN.NORM, - num_levels=cfg.MODEL.BIFPN.NUM_LEVELS, - num_bifpn=cfg.MODEL.BIFPN.NUM_BIFPN, - separable_conv=cfg.MODEL.BIFPN.SEPARABLE_CONV, - ) - return backbone diff --git a/spaces/Benson/text-generation/Examples/Asfalto 8 - Juego De Carreras De Coches.md b/spaces/Benson/text-generation/Examples/Asfalto 8 - Juego De Carreras De Coches.md deleted file mode 100644 index 30b5cb8bb27e9e7eed698d6eb7c6663d01efe6c5..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/Asfalto 8 - Juego De Carreras De Coches.md +++ /dev/null @@ -1,72 +0,0 @@ -
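Before moving on to the next deleted file, a brief aside on the BiFPN code above: its FpnCombine module fuses several resampled feature maps with the 'fastattn' weight_method, that is, one learnable scalar per input edge, passed through ReLU and normalized before a weighted sum. The stand-alone sketch below isolates just that fusion step; the class name FastAttnFuse and the shapes are invented for illustration and are not part of the repository's API.

```python
# Minimal sketch of 'fastattn' feature fusion, assuming all inputs were already
# resampled to a common (B, C, H, W) shape (ResampleFeatureMap's job above).
import torch
import torch.nn as nn
import torch.nn.functional as F


class FastAttnFuse(nn.Module):
    def __init__(self, num_inputs: int):
        super().__init__()
        # One learnable, initially equal weight per input edge.
        self.edge_weights = nn.Parameter(torch.ones(num_inputs))

    def forward(self, nodes):
        # ReLU keeps the weights non-negative; the small epsilon mirrors the
        # 0.0001 used in FpnCombine above and avoids division by zero.
        w = F.relu(self.edge_weights)
        w = w / (w.sum() + 1e-4)
        stacked = torch.stack(nodes, dim=-1)      # (B, C, H, W, N)
        return (stacked * w).sum(dim=-1)          # (B, C, H, W)


if __name__ == "__main__":
    fuse = FastAttnFuse(num_inputs=3)
    feats = [torch.randn(2, 64, 32, 32) for _ in range(3)]
    print(fuse(feats).shape)  # torch.Size([2, 64, 32, 32])
```

In the file above the same idea is wrapped inside FpnCombine, which also resamples each input to the target reduction before the weighted sum; the 'attn' variant replaces the ReLU-and-normalize step with a softmax over the edge weights.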
    -

    Ninja Shadow Fight 2: Una revisión

    -

    Si eres un fanático de los juegos de lucha con elementos RPG, es posible que quieras echar un vistazo a Ninja Shadow Fight 2. Este juego es una secuela del famoso éxito de Facebook con 40 millones de usuarios, Shadow Fight. Es una mezcla de técnicas clásicas de lucha y artes marciales. Puedes equipar a tu personaje con innumerables armas letales y armaduras raras, personalizar a tu luchador con habilidades épicas y poderes mágicos, y viajar a través de seis mundos diferentes llenos de demonios amenazantes. En este artículo, revisaremos Ninja Shadow Fight 2 en términos de su jugabilidad y controles, gráficos y sonido, pros y contras, consejos y trucos.

    -

    Juego y controles

    -

    Sistema de combate

    -

El sistema de combate de Ninja Shadow Fight 2 se basa en físicas y animaciones realistas. Usas un joystick direccional a la izquierda para mover a tu personaje y botones a la derecha para golpear o patear a tu oponente. También puedes combinar distintas direcciones y tipos de ataque para ejecutar diferentes movimientos y combos. Por ejemplo, forward+punch realiza un corte fuerte con tu arma y backward+punch un corte giratorio; up+punch ejecuta un corte alto que puede derribar a tu oponente, y down+punch un corte bajo que lo golpea mientras está en el suelo.
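Como referencia rápida, el siguiente boceto en Python es puramente ilustrativo (los nombres MOVES y resolve_move son inventados y no provienen del juego) y resume en una tabla los combos de dirección+botón que describe el párrafo anterior.

```python
# Tabla puramente ilustrativa de los combos descritos arriba; nada de esto
# proviene del código real del juego.
MOVES = {
    ("forward", "punch"): "corte fuerte",
    ("backward", "punch"): "corte giratorio",
    ("up", "punch"): "corte alto (puede derribar al oponente)",
    ("down", "punch"): "corte bajo (golpea al oponente en el suelo)",
}


def resolve_move(direction: str, button: str) -> str:
    """Devuelve el movimiento asociado a dirección+botón, o un ataque básico."""
    return MOVES.get((direction, button), f"{button} básico")


if __name__ == "__main__":
    print(resolve_move("forward", "punch"))  # corte fuerte
    print(resolve_move("up", "kick"))        # kick básico
```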

    -




    -

    El sistema de combate también te permite usar armas a distancia y habilidades mágicas en algunas situaciones. Las armas a distancia se pueden lanzar a tu oponente pulsando un botón en la esquina superior derecha. Pueden causar daño a distancia o interrumpir sus ataques. Las habilidades mágicas se pueden activar pulsando un botón en la esquina inferior derecha cuando el medidor de magia está lleno. Pueden desatar poderosos efectos que pueden cambiar el curso de la batalla.

    -

    Elementos RPG

    - -

    Modos de juego

    -

    Los modos de juego en Ninja Shadow Fight 2 ofrecen diferentes desafíos y recompensas para los jugadores. Puedes elegir entre los siguientes modos:

    -
      -
    • Torneo: Este es el modo principal del juego, donde tienes que luchar tu camino a través de una serie de oponentes en cada mundo. Puedes ganar monedas y gemas ganando batallas, y desbloquear nuevos mundos derrotando jefes.
    • -
    • Supervivencia: Este es un modo en el que tienes que sobrevivir el mayor tiempo posible contra interminables oleadas de enemigos. Puedes ganar monedas y gemas matando enemigos y poniendo a prueba tus habilidades y resistencia.
    • -
    • Duelo: Este es un modo donde puedes luchar contra otros jugadores en línea. Puedes ganar monedas y gemas ganando duelos, y posicionarte en la clasificación.
    • -
    • Underworld: Este es un modo donde puedes unir fuerzas con otros jugadores en línea para luchar contra poderosos jefes. Puedes ganar monedas y gemas participando en incursiones, y recolectar objetos y equipos raros.
    • -
    -

    Gráficos y sonido

    -

    Estilo visual

    -

    El estilo visual de Ninja Shadow Fight 2 es único y atractivo. El juego utiliza un estilo de silueta para los personajes, lo que crea un contraste con los fondos coloridos y detallados. El juego también utiliza iluminación dinámica y sombras, que añaden profundidad y realismo a las escenas. El juego tiene una variedad de entornos, como bosques, templos, cuevas y castillos, cada uno con su propia atmósfera y estilo.

    -

    Efectos de sonido y música

    -

    Los efectos de sonido y la música de Ninja Shadow Fight 2 también son impresionantes e inmersivos. El juego utiliza sonidos realistas para las armas y los golpes, que hacen que el combate se sienta más intenso y satisfactorio. El juego también utiliza música atmosférica para los fondos, que coinciden con el estado de ánimo y el tema de cada mundo. El juego tiene una banda sonora diversa, que va desde melodías orientales hasta ritmos de rock, cada uno con su propio ritmo y tempo.

    -

    Pros y contras

    -

    Pros

    - -
      -
    • El sistema de combate es suave y sensible, con física realista y animaciones.
    • -
    • Los elementos RPG son profundos y gratificantes, con muchas opciones para personalizar tu luchador.
    • -
    • Los modos de juego son variados y desafiantes, con diferentes objetivos y recompensas.
    • -
    • El estilo visual es único y atractivo, con un contraste entre los personajes de silueta y los fondos de colores.
    • -
    • Los efectos de sonido y la música son impresionantes y envolventes, con sonidos realistas para las armas y los golpes, y música atmosférica para los fondos.
    • -
    • La historia es intrigante y cautivadora, con una trama misteriosa y personajes carismáticos.
    • -
    -

    Contras

    -

    Ninja Shadow Fight 2 también tiene algunos aspectos negativos que podrían restar provecho a su disfrute. Algunos de los contras son:

    -
      -
    • El juego tiene anuncios frecuentes que interrumpen el juego y molestan a los jugadores.
    • -
    • El juego tiene un modelo de pago a ganador que da una ventaja injusta a los jugadores que gastan dinero real en gemas.
    • -
    • El juego tiene una falta de sincronización entre dispositivos que hace que sea difícil transferir su progreso de un dispositivo a otro.
    • -
    -

    Consejos y trucos

    -

    Cómo ganar batallas

    -

    Si quieres ganar batallas en Ninja Shadow Fight 2, necesitas dominar el sistema de combate y usar algunas estrategias. Aquí hay algunos consejos y trucos sobre cómo ganar batallas:

    -

    -
      -
    • Apunta a la cabeza: Golpear la cabeza de tu oponente inflige más daño que golpear su cuerpo o extremidades. Puedes usar up+punch o up+kick para hacer una barra superior o patada que puede derribar a tu oponente o romper su guardia.
    • -
    • Usa patadas para interrumpir a los enemigos: Patear a tu oponente puede interrumpir sus ataques o empujarlos hacia atrás. Puedes usar forward+kick o backward+kick para hacer una patada fuerte que pueda hacer volar o aturdir a tu oponente.
    • - -
    -

    Cómo manejar la armadura al derrotar a los jefes en el modo de torneo. Cada jefe tiene un arma única y una armadura que puedes obtener al vencerlos. Por ejemplo, puedes conseguir la katana y la armadura samurái derrotando a Lynx, el primer jefe del juego. -
  11. Completar desafíos: Puedes desbloquear nuevas habilidades y habilidades mágicas completando desafíos en el juego. Los desafíos son tareas especiales que requieren realizar ciertas acciones o cumplir ciertos criterios en el juego. Por ejemplo, puedes desbloquear la habilidad de bola de fuego completando el reto de matar a 10 enemigos con armas a distancia.
  12. -
  13. Únete a las redadas: Puedes desbloquear objetos y equipos raros uniéndote a las redadas en el modo inframundo. Las redadas son batallas cooperativas contra jefes poderosos que requieren trabajo en equipo y coordinación. Puedes unirte a las redadas pulsando el botón raid en la parte inferior de la pantalla, o crear tu propia redada pulsando el botón crear. Puedes ganar tickets de raid jugando los modos de juego o gastando gemas.
  14. - -

    Conclusión

    -

    Ninja Shadow Fight 2 es un gran juego para los fanáticos de los juegos de lucha con elementos RPG. Tiene un sistema de combate suave y sensible, elementos de RPG profundos y gratificantes, modos de juego variados y desafiantes, estilo visual único y atractivo, efectos de sonido y música impresionantes e inmersivos, y una historia intrigante y cautivadora. También tiene algunos inconvenientes, como los anuncios frecuentes, el modelo de pago para ganar y la falta de sincronización entre dispositivos. Sin embargo, estos no eclipsan la calidad general y la diversión del juego. Ninja Shadow Fight 2 es un juego que deberías probar si estás buscando un juego de lucha emocionante y adictivo con elementos RPG.

    -

    Preguntas frecuentes

    -

    Aquí hay algunas preguntas frecuentes sobre Ninja Shadow Fight 2, junto con sus respuestas:

    -
      -
    1. Q: ¿Cómo puedo sincronizar mi progreso entre dispositivos?
      - -
    2. Q: ¿Cómo puedo eliminar anuncios del juego?
      -R: Puedes eliminar anuncios del juego comprando la versión premium del juego por $4.99. Esto también te dará algunos beneficios adicionales, como 2000 gemas, 2000 monedas, recompensas dobles para el modo de supervivencia y acceso a armas y armaduras exclusivas.
    3. -
    4. P: ¿Cómo puedo obtener más gemas sin gastar dinero real?
      -R: Puedes obtener más gemas sin gastar dinero real completando ofertas gratuitas, viendo anuncios de video, cultivando gemas en modo supervivencia o modo inframundo, o derrotando jefes en modo torneo.
    5. -
    6. Q: ¿Cómo puedo restablecer mi progreso y empezar de nuevo?
      -R: Puedes restablecer tu progreso y empezar de nuevo borrando los datos del juego de tu dispositivo. Sin embargo, esto también eliminará sus monedas y gemas, así que asegúrese de que desea hacer esto antes de proceder. Para eliminar los datos del juego, vaya a la configuración de su dispositivo, encuentre Ninja Shadow Fight 2 en la lista de aplicaciones y toque en los datos claros o elimine los datos.
    7. -
    8. Q: ¿Cómo puedo contactar a los desarrolladores o reportar un error?
-R: Puede ponerse en contacto con los desarrolladores o informar de un error enviando un correo electrónico a support@nekkigames.com. También puede visitar su sitio web oficial en https://www.nekki.com/shadowfight2/ o su página de Facebook en https://www.facebook.com/shadowfightgames/ para obtener más información y actualizaciones.
    9. -

    -
    -
    \ No newline at end of file diff --git a/spaces/Benson/text-generation/Examples/Cmo Descargar Messenger En Iphone 5s.md b/spaces/Benson/text-generation/Examples/Cmo Descargar Messenger En Iphone 5s.md deleted file mode 100644 index 3a3ae8b970940edfd2654f7dea171df20b791a86..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/Cmo Descargar Messenger En Iphone 5s.md +++ /dev/null @@ -1,63 +0,0 @@ - -

    Cómo descargar Messenger en el iPhone 5s

    -

    Messenger es una aplicación de chat que le permite mantenerse conectado con sus personas favoritas en Facebook, Instagram, Portal y Oculus. También puede disfrutar de videos con sus amigos a través de chat de video, expresarse con emojis, pegatinas, GIF, filtros y mensajes de voz, hacer llamadas de voz y video gratuitas, enviar dinero de forma segura con Facebook Pay y conectarse con empresas para ofertas, reservas y atención al cliente.

    -




    -

    Si tienes un iPhone 5s y quieres descargar Messenger en tu dispositivo, es posible que te estés preguntando cómo hacerlo. En este artículo, te mostraremos dos formas de descargar Messenger desde la App Store o desde iMessage. También te contaremos algunas de las características y beneficios de usar Messenger en tu iPhone.

    -

    Requisitos para descargar Messenger

    -

    Versión compatible de iOS

    -

    Antes de descargar Messenger en tu iPhone 5s, necesitas asegurarte de que tu dispositivo tenga una versión iOS compatible. Según la página de aplicaciones de Messenger en la App Store, necesitas tener iOS 8 o posterior para descargar y usar Messenger en tu iPhone 5s. Si tiene una versión anterior de iOS, puede actualizarla en Configuración > General > Actualización de software y siguiendo las instrucciones.

    -

    Espacio de almacenamiento disponible

    -

    Otro requisito para descargar Messenger en tu iPhone 5s es tener suficiente espacio de almacenamiento en tu dispositivo. Según la página de aplicaciones de Messenger en la App Store, necesitas unos 200 MB de espacio libre para descargar e instalar Messenger en tu iPhone 5s. Si no tiene suficiente espacio, puede liberar algunos mediante la eliminación de aplicaciones no deseadas, fotos, videos u otros archivos. Puedes comprobar cuánto espacio tienes en Configuración > General > Almacenamiento del iPhone y ver el espacio disponible y usado.

    -

    Conexión a Internet

    - -

    Pasos para descargar Messenger desde la App Store

    -

    Paso 1: Abrir el App Store

    -

    El primer paso para descargar Messenger desde la App Store es abrir la aplicación App Store en tu iPhone 5s. Puede encontrar la aplicación App Store en la pantalla de inicio o en la biblioteca de aplicaciones. Tiene un icono azul con una letra blanca A dentro.

    -

    -

    Paso 2: Búsqueda de Facebook Messenger

    -

    El siguiente paso es buscar Facebook Messenger en la App Store. Para hacer esto, toque en el icono de búsqueda en la esquina inferior derecha de la pantalla. Esto abrirá una barra de búsqueda donde puede escribir el nombre de la aplicación que está buscando. Escribe "Facebook Messenger" y toca el botón de búsqueda en tu teclado.

    -

    Paso 3: Toque en el botón Get

    -

    Una vez que vea la aplicación de Facebook Messenger en los resultados de búsqueda, toque en el botón get junto a su icono y nombre. El botón get es un círculo azul con una flecha blanca dentro. Esto comenzará a descargar la aplicación en su dispositivo.

    -

    Paso 4: Confirmar la descarga

    -

    Dependiendo de tu configuración, es posible que necesites confirmar la descarga introduciendo tu contraseña de Apple ID o usando Touch ID. Para introducir la contraseña de tu Apple ID, pulsa en el botón de inicio de sesión y escribe la contraseña. Para usar Touch ID, coloca el dedo en el botón de inicio y espera a que escanee tu huella digital. Esto verificará su identidad y permitirá que la descarga continúe.

    -

    Paso 5: Espere a que la descarga termine

    -

    El paso final es esperar a que termine la descarga. Puede comprobar el progreso de la descarga mirando el círculo alrededor del icono de la aplicación. Cuando el círculo está lleno, significa que la descarga está completa. Puede tocar el icono de la aplicación para abrirla y comenzar a usar Messenger en su iPhone 5s.

    -

    Pasos para descargar Messenger desde iMessage

    -

    Paso 1: Abrir iMessage

    - -

    Paso 2: Toque en el icono de la App Store

    -

    Una vez que abra iMessage, toque en el icono de la tienda de aplicaciones en la parte inferior de la pantalla. El icono de la tienda de aplicaciones es un círculo azul con una letra blanca A dentro. Esto abrirá la tienda de aplicaciones para iMessage, donde puedes encontrar y descargar varias aplicaciones que funcionan con iMessage.

    -

    Paso 3: Búsqueda de Facebook Messenger

    -

    El siguiente paso es buscar Facebook Messenger en la tienda de aplicaciones para iMessage. Para hacer esto, toque en el icono de búsqueda en la esquina superior izquierda de la pantalla. Esto abrirá una barra de búsqueda donde puede escribir el nombre de la aplicación que está buscando. Escribe "Facebook Messenger" y toca el botón de búsqueda en tu teclado.

    -

    Paso 4: Toque en el botón de instalación

    -

    Una vez que vea la aplicación de Facebook Messenger en los resultados de búsqueda, toque en el botón de instalación junto a su icono y nombre. El botón de instalación es un círculo azul con un signo más blanco dentro. Esto comenzará a descargar la aplicación en su dispositivo.

    -

    Paso 5: Espere a que la descarga termine

    -

    El paso final es esperar a que termine la descarga. Puede comprobar el progreso de la descarga mirando el círculo alrededor del icono de la aplicación. Cuando el círculo está lleno, significa que la descarga está completa. Puede tocar el icono de la aplicación para abrirla y comenzar a usar Messenger en su iPhone 5s.

    -

    Características y beneficios de Messenger

    -

    Comunicación entre aplicaciones

    -

    Una de las características y beneficios de usar Messenger en tu iPhone 5s es que puedes chatear con tus amigos a través de diferentes aplicaciones, como Facebook, Instagram, Portal y Oculus. No es necesario cambiar entre aplicaciones para mantenerse en contacto con sus personas favoritas. También puedes sincronizar tus contactos desde tu teléfono y agregarlos a Messenger fácilmente.

    -

    Ver juntos

    - -

    Reacciones personalizadas y efectos animados

    -

    Una tercera característica y beneficio de usar Messenger en tu iPhone 5s es que puedes expresarte con reacciones personalizadas y efectos animados. Puede elegir entre una amplia gama de emojis, pegatinas, GIF, filtros, mensajes de voz y efectos de AR para darle vida a sus conversaciones. También puede crear sus propias pegatinas y reacciones con sus fotos y videos. También puedes usar efectos animados para transformarte en diferentes personajes o animales, o agregar fondos divertidos o accesorios a tus chats de video.

    -

    Llamadas de voz y video

    -

    Una cuarta característica y beneficio de usar Messenger en tu iPhone 5s es que puedes hacer llamadas de voz y video gratuitas a cualquier persona en el mundo a través de Wi-Fi o celular. También puede crear llamadas de grupo con hasta 50 personas a la vez. También puedes usar Messenger Rooms para invitar a cualquiera a unirse a tu video chat, incluso si no tiene una cuenta de Facebook. También puedes usar Messenger Kids para que tus hijos puedan chatear de forma segura con sus amigos y familiares.

    -

    Pagos y conexiones de negocios

    -

    Una quinta característica y beneficio de usar Messenger en tu iPhone 5s es que puedes enviar dinero de forma segura y fácil con Facebook Pay, y conectarte con empresas para obtener ofertas, reservas y atención al cliente. Puedes usar Facebook Pay para enviar o solicitar dinero a tus amigos o familiares sin ningún cargo. Solo necesitas vincular tu tarjeta de débito o cuenta PayPal a tu cuenta de Facebook. También puedes usar Messenger para chatear con empresas con diversos fines, como ordenar comida, reservar vuelos, obtener descuentos o hacer preguntas.

    -

    Conclusión y preguntas frecuentes

    - -

    Aquí hay algunas preguntas frecuentes relacionadas con la descarga o el uso de Messenger en el iPhone 5s:

    -
      -
    • Q: ¿Cómo puedo actualizar Messenger en mi iPhone 5s?
    • -
    • A: Para actualizar Messenger en tu iPhone 5s, debes ir a la aplicación App Store y tocar el icono de actualizaciones en la esquina inferior derecha de la pantalla. Luego, busque la aplicación Messenger en la lista de actualizaciones disponibles y toque en el botón de actualización junto a ella. Alternativamente, puedes habilitar actualizaciones automáticas para Messenger yendo a Configuración > App Store > Descargas automáticas > Actualizaciones.
    • -
    • Q: ¿Cómo puedo eliminar Messenger de mi iPhone 5s?
    • -
    • A: Para eliminar Messenger de su iPhone 5s, es necesario presionar y mantener pulsado el icono de la aplicación en la pantalla de inicio o en la biblioteca de aplicaciones hasta que comience a sacudirse. Luego, toque en el icono de X en la esquina superior izquierda del icono de la aplicación y confirme la eliminación. Alternativamente, puedes ir a Configuración > General > iPhone Storage > Messenger y tocar el botón Eliminar aplicación.
    • -
    • Q: ¿Cómo puedo salir de Messenger en mi iPhone 5s?
    • -
    • A: Para salir de Messenger en su iPhone 5s, es necesario abrir la aplicación y toque en la imagen de perfil en la esquina superior izquierda de la pantalla. Luego, desplácese hacia abajo y toque en el botón Cerrar sesión. También puede cambiar entre diferentes cuentas tocando el botón Cambiar cuenta.
    • -
    • Q: ¿Cómo puedo cambiar la configuración de notificación para Messenger en mi iPhone 5s?
    • -
    • A: Para cambiar la configuración de notificación para Messenger en tu iPhone 5s, debes ir a Configuración > Notificaciones > Messenger y activar o desactivar la opción Permitir notificaciones. También puede personalizar la configuración de sonido, insignia, banner y pantalla de bloqueo para las notificaciones de Messenger.
    • -
    • Q: ¿Cómo puedo bloquear o desbloquear a alguien en Messenger en mi iPhone 5s?
    • - -

    -
    -
    \ No newline at end of file diff --git a/spaces/Benson/text-generation/Examples/Descargar Frag Pro Shooter Mod Apk Desbloquear Todos Los Personajes.md b/spaces/Benson/text-generation/Examples/Descargar Frag Pro Shooter Mod Apk Desbloquear Todos Los Personajes.md deleted file mode 100644 index 8fa1ce952039172ed76d47fd97f7f9aa982e4a33..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/Descargar Frag Pro Shooter Mod Apk Desbloquear Todos Los Personajes.md +++ /dev/null @@ -1,72 +0,0 @@ -
    -

    Cómo descargar FRAG Pro Shooter Mod APK y desbloquear todos los caracteres

    -

    ¿Eres un fan de FRAG Pro Shooter, el divertido y amigable juego PvP que te permite elegir entre más de 90 personajes y luchar contra jugadores de todo el mundo? ¿Quieres desbloquear todos los personajes, obtener dinero ilimitado y gemas, y disfrutar del juego sin restricciones? Si es así, entonces usted podría estar interesado en la descarga de FRAG Pro Shooter Mod APK, una versión modificada del juego que le da acceso a todas las características y beneficios que usted no puede conseguir en el juego original. En este artículo, le diremos qué es FRAG Pro Shooter, por qué debe usar FRAG Pro Shooter Mod APK, cómo descargarlo e instalarlo, y algunos consejos y trucos para jugarlo. ¡Sigue leyendo para saber más!

    -




    -

    ¿Qué es FRAG Pro Shooter?

    -

    FRAG Pro Shooter es un juego de acción para móviles creado por Oh BiBi para dispositivos iOS y Android. Es uno de los juegos multijugador más populares jamás diseñados para móviles, con más de 70 millones de jugadores en todo el mundo. En este juego, puedes elegir a tu héroe, crear tu equipo, entrar en la arena, y comenzar el combate. También puedes cambiar entre tus personajes, usar sus habilidades especiales, personalizar sus pieles y participar en varios modos de juego y eventos.

    -

    Un juego de PvP divertido y amigable

    -

    FRAG Pro Shooter es un juego que está diseñado para todos, independientemente de su edad o género. Puedes jugar con tus amigos o con jugadores aleatorios en línea. También puedes unirte a un club o crear el tuyo propio para luchar por la victoria con tus compañeros de equipo. El juego tiene unos gráficos coloridos y estilizados que lo hacen atractivo y agradable. El juego también tiene un aspecto social, donde puedes compartir tu contenido con otros jugadores, unirte a concursos, seguir influencers y expandir tu base de fans.

    -

    Características de FRAG Pro Shooter

    -

    FRAG Pro Shooter tiene muchas características que lo convierten en un juego emocionante y adictivo. Algunas de estas características son:

    -
      - -
    • Juego personalizado: Puedes controlar cualquier personaje en primera persona o en tercera persona. También puedes cambiar entre tus personajes durante la batalla para obtener una ventaja sobre tus enemigos.
    • -
    • 4 modos de juego disponibles: Puede elegir entre el modo 1v1, el modo 2v2, el modo de carga útil o el modo FRAG de calle. Cada modo tiene sus propias reglas y objetivos.
    • -
    • Nuevo contenido cada mes: El juego se actualiza constantemente con nuevos personajes, skins, mapas, eventos y desafíos.
    • -
    -

    ¿Por qué usar FRAG Pro Shooter Mod APK?

    -

    FRAG Pro Shooter es un juego gratuito, pero también tiene algunas compras en la aplicación que pueden mejorar su experiencia de juego. Por ejemplo, puedes comprar diamantes para desbloquear nuevos personajes o pieles más rápido. Sin embargo, no todos pueden permitirse gastar dinero real en el juego. Es por eso que algunas personas prefieren utilizar FRAG Pro Shooter Mod APK, una versión modificada del juego que le da acceso a todas las características y beneficios que usted no puede conseguir en el juego original.

    -

    Beneficios de usar FRAG Pro Shooter Mod APK

    -

    Algunos de los beneficios de usar FRAG Pro Shooter Mod APK son:

    -
      -
    • Desbloquea todos los personajes: Puedes desbloquear todos los personajes del juego sin gastar diamantes ni dinero. Puedes elegir cualquier personaje que quieras y disfrutar de sus habilidades únicas.
    • -
    • Dinero y gemas ilimitados: Puedes obtener dinero y gemas ilimitados en el juego que puedes usar para comprar lo que quieras en el juego. También puedes actualizar y subir de nivel a tus personajes más rápido.
    • -
    • No hay anuncios: Puedes jugar el juego sin ningún anuncio molesto que pueda interrumpir tu juego o consumir tus datos.
    • -
    • No se requiere raíz: Puede descargar e instalar FRAG Pro Shooter Mod APK sin rootear el dispositivo. Esto significa que no tiene que arriesgarse a dañar su dispositivo o anular su garantía.
    • -
    -

    Los riesgos de usar FRAG Pro Shooter Mod APK

    - -
      -
    • Cuenta prohibida: Es posible que te prohíban participar en el juego si los desarrolladores detectan que estás utilizando una versión modificada del juego. Esto significa que perderás todo tu progreso y logros en el juego.
    • -
    • Virus o infección de malware: Usted puede descargar una versión falsa o dañada de FRAG Pro Shooter Mod APK que contiene virus o malware que puede dañar su dispositivo o robar su información personal.
    • -
    • Cuestiones legales: Usted puede violar los términos y condiciones del juego o los derechos de propiedad intelectual de los desarrolladores mediante el uso de FRAG Pro Shooter Mod APK. Esto podría resultar en acciones legales o demandas contra usted.
    • -
    -

    Por lo tanto, usted debe utilizar FRAG Pro Shooter Mod APK a su propio riesgo y discreción. No nos hacemos responsables de las consecuencias que puedan derivarse de su uso.

    -

    ¿Cómo descargar e instalar FRAG Pro Shooter Mod APK?

    -

    Si ha decidido utilizar FRAG Pro Shooter Mod APK, es necesario seguir algunos pasos para descargar e instalar en su dispositivo. Estos son los pasos:

    -

    Pasos para descargar e instalar FRAG Pro Shooter Mod APK

    -
      -
    1. Desinstalar el juego original: Es necesario desinstalar la versión original de FRAG Pro Shooter desde el dispositivo antes de instalar la versión modificada. Esto es para evitar conflictos o errores entre las dos versiones.
    2. -
    3. Descargar FRAG Pro Shooter Mod APK: Es necesario descargar FRAG Pro Shooter Mod APK de una fuente confiable y confiable. Puede usar este enlace para descargarlo. Asegúrese de tener suficiente espacio de almacenamiento en su dispositivo antes de descargarlo.
    4. -
    5. Habilitar fuentes desconocidas: Es necesario habilitar fuentes desconocidas en el dispositivo para permitir la instalación de aplicaciones desde fuentes distintas de Google Play Store. Para hacer esto, vaya a Configuración > Seguridad > Fuentes desconocidas y conéctelo.
    6. - -
    7. Lanzamiento FRAG Pro Shooter Mod APK: Es necesario iniciar FRAG Pro Shooter Mod APK desde el cajón de la aplicación o la pantalla de inicio y disfrutar del juego con todas las características y beneficios desbloqueados.
    8. -
    -

    Consejos y trucos para jugar FRAG Pro Shooter Mod APK

    -

    Para aprovechar al máximo FRAG Pro Shooter Mod APK, aquí hay algunos consejos y trucos que puede utilizar:

    -

    -
      -
    • Elige tus personajes sabiamente: Debes elegir tus personajes en función de sus roles, habilidades y compatibilidad entre ellos. También debes equilibrar tu equipo con personajes ofensivos, defensivos y de apoyo.
    • -
    • Cambia entre tus personajes con frecuencia: Debes cambiar entre tus personajes durante la batalla para adaptarte a diferentes situaciones y enemigos. También debes usar sus habilidades especiales estratégicamente para ganar ventaja sobre tus oponentes.
    • -
    • Usa cubierta y movimiento: Debes usar cubierta y movimiento para evitar ser golpeado por fuego enemigo y sorprenderlos con tus ataques. También debe evitar quedarse en un lugar por mucho tiempo y moverse por el mapa.
    • -
    • Recoger monedas y cajas: Usted debe recoger las monedas y cajas que están dispersos por el mapa. Las monedas se pueden usar para comprar nuevos personajes o pieles, mientras que las cajas pueden contener dinero, gemas, cartas o power-ups.
    • -
    • Completar misiones y desafíos: Usted debe completar misiones y desafíos que se le dan todos los días o semanas. Estos pueden recompensarte con dinero, gemas, cartas u otros premios.
    • -
    -

    Conclusión

    - -

    Preguntas frecuentes

    -

    Aquí hay algunas preguntas frecuentes sobre FRAG Pro Shooter Mod APK:

    -
      -
    1. ¿Es FRAG Pro Shooter Mod APK seguro de usar?
    2. -

      FRAG Pro Shooter Mod APK no es una versión oficial del juego y no está avalado por los desarrolladores. Por lo tanto, no está garantizado que sea seguro o seguro de usar. Es posible que encuentre algunos errores, fallas o errores al usarlo. También puede exponer su dispositivo o datos a virus o infección de malware. Por lo tanto, usted debe utilizar FRAG Pro Shooter Mod APK a su propio riesgo y discreción.

      -
    3. Es FRAG Pro Shooter Mod APK compatible con mi dispositivo?
    4. -

      FRAG Pro Shooter Mod APK es compatible con la mayoría de los dispositivos Android que tienen Android 4.3 o superior. Sin embargo, algunos dispositivos pueden no ser compatibles con la versión modificada del juego debido a diferentes especificaciones o configuraciones. Por lo tanto, debe comprobar la compatibilidad de su dispositivo antes de descargar e instalar FRAG Pro Shooter Mod APK.

      -
    5. ¿Cómo puedo actualizar FRAG Pro Shooter Mod APK?
    6. -

      FRAG Pro Shooter Mod APK no se actualiza automáticamente como el juego original. Por lo tanto, es necesario descargar e instalar manualmente la última versión de FRAG Pro Shooter Mod APK siempre que haya una nueva actualización disponible. Puedes buscar actualizaciones de la fuente donde descargaste la versión modificada del juego.

      -
    7. ¿Puedo jugar FRAG Pro Shooter Mod APK sin conexión?
    8. -

      No, no puede jugar FRAG Pro Shooter Mod APK sin conexión. Necesitas una conexión a Internet para jugar y acceder a todas las características y beneficios de la versión modificada del juego.

      -
    9. ¿Puedo jugar FRAG Pro Shooter Mod APK con mis amigos?
    10. - -

    -
    -
    \ No newline at end of file diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/dateutil/tzwin.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/dateutil/tzwin.py deleted file mode 100644 index cebc673e40fc376653ebf037e96f0a6d0b33e906..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/dateutil/tzwin.py +++ /dev/null @@ -1,2 +0,0 @@ -# tzwin has moved to dateutil.tz.win -from .tz.win import * diff --git a/spaces/CVH-vn1210/make_hair/minigpt4/models/Qformer.py b/spaces/CVH-vn1210/make_hair/minigpt4/models/Qformer.py deleted file mode 100644 index e71b12375e10511858a9c505dc795181e6ce5603..0000000000000000000000000000000000000000 --- a/spaces/CVH-vn1210/make_hair/minigpt4/models/Qformer.py +++ /dev/null @@ -1,1216 +0,0 @@ -""" - * Copyright (c) 2023, salesforce.com, inc. - * All rights reserved. - * SPDX-License-Identifier: BSD-3-Clause - * For full license text, see LICENSE.txt file in the repo root or https://opensource.org/licenses/BSD-3-Clause - * By Junnan Li - * Based on huggingface code base - * https://github.com/huggingface/transformers/blob/v4.15.0/src/transformers/models/bert -""" - -import math -import os -import warnings -from dataclasses import dataclass -from typing import Optional, Tuple, Dict, Any - -import torch -from torch import Tensor, device, dtype, nn -import torch.utils.checkpoint -from torch import nn -from torch.nn import CrossEntropyLoss -import torch.nn.functional as F - -from transformers.activations import ACT2FN -from transformers.file_utils import ( - ModelOutput, -) -from transformers.modeling_outputs import ( - BaseModelOutputWithPastAndCrossAttentions, - BaseModelOutputWithPoolingAndCrossAttentions, - CausalLMOutputWithCrossAttentions, - MaskedLMOutput, - MultipleChoiceModelOutput, - NextSentencePredictorOutput, - QuestionAnsweringModelOutput, - SequenceClassifierOutput, - TokenClassifierOutput, -) -from transformers.modeling_utils import ( - PreTrainedModel, - apply_chunking_to_forward, - find_pruneable_heads_and_indices, - prune_linear_layer, -) -from transformers.utils import logging -from transformers.models.bert.configuration_bert import BertConfig - -logger = logging.get_logger(__name__) - - -class BertEmbeddings(nn.Module): - """Construct the embeddings from word and position embeddings.""" - - def __init__(self, config): - super().__init__() - self.word_embeddings = nn.Embedding( - config.vocab_size, config.hidden_size, padding_idx=config.pad_token_id - ) - self.position_embeddings = nn.Embedding( - config.max_position_embeddings, config.hidden_size - ) - - # self.LayerNorm is not snake-cased to stick with TensorFlow model variable name and be able to load - # any TensorFlow checkpoint file - self.LayerNorm = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps) - self.dropout = nn.Dropout(config.hidden_dropout_prob) - - # position_ids (1, len position emb) is contiguous in memory and exported when serialized - self.register_buffer( - "position_ids", torch.arange(config.max_position_embeddings).expand((1, -1)) - ) - self.position_embedding_type = getattr( - config, "position_embedding_type", "absolute" - ) - - self.config = config - - def forward( - self, - input_ids=None, - position_ids=None, - query_embeds=None, - past_key_values_length=0, - ): - if input_ids is not None: - seq_length = input_ids.size()[1] - else: - seq_length = 0 - - if position_ids is None: - position_ids = self.position_ids[ - :, past_key_values_length : seq_length + past_key_values_length - ].clone() - - if input_ids is not None: 
- embeddings = self.word_embeddings(input_ids) - if self.position_embedding_type == "absolute": - position_embeddings = self.position_embeddings(position_ids) - embeddings = embeddings + position_embeddings - - if query_embeds is not None: - embeddings = torch.cat((query_embeds, embeddings), dim=1) - else: - embeddings = query_embeds - - embeddings = self.LayerNorm(embeddings) - embeddings = self.dropout(embeddings) - return embeddings - - -class BertSelfAttention(nn.Module): - def __init__(self, config, is_cross_attention): - super().__init__() - self.config = config - if config.hidden_size % config.num_attention_heads != 0 and not hasattr( - config, "embedding_size" - ): - raise ValueError( - "The hidden size (%d) is not a multiple of the number of attention " - "heads (%d)" % (config.hidden_size, config.num_attention_heads) - ) - - self.num_attention_heads = config.num_attention_heads - self.attention_head_size = int(config.hidden_size / config.num_attention_heads) - self.all_head_size = self.num_attention_heads * self.attention_head_size - - self.query = nn.Linear(config.hidden_size, self.all_head_size) - if is_cross_attention: - self.key = nn.Linear(config.encoder_width, self.all_head_size) - self.value = nn.Linear(config.encoder_width, self.all_head_size) - else: - self.key = nn.Linear(config.hidden_size, self.all_head_size) - self.value = nn.Linear(config.hidden_size, self.all_head_size) - - self.dropout = nn.Dropout(config.attention_probs_dropout_prob) - self.position_embedding_type = getattr( - config, "position_embedding_type", "absolute" - ) - if ( - self.position_embedding_type == "relative_key" - or self.position_embedding_type == "relative_key_query" - ): - self.max_position_embeddings = config.max_position_embeddings - self.distance_embedding = nn.Embedding( - 2 * config.max_position_embeddings - 1, self.attention_head_size - ) - self.save_attention = False - - def save_attn_gradients(self, attn_gradients): - self.attn_gradients = attn_gradients - - def get_attn_gradients(self): - return self.attn_gradients - - def save_attention_map(self, attention_map): - self.attention_map = attention_map - - def get_attention_map(self): - return self.attention_map - - def transpose_for_scores(self, x): - new_x_shape = x.size()[:-1] + ( - self.num_attention_heads, - self.attention_head_size, - ) - x = x.view(*new_x_shape) - return x.permute(0, 2, 1, 3) - - def forward( - self, - hidden_states, - attention_mask=None, - head_mask=None, - encoder_hidden_states=None, - encoder_attention_mask=None, - past_key_value=None, - output_attentions=False, - ): - - # If this is instantiated as a cross-attention module, the keys - # and values come from an encoder; the attention mask needs to be - # such that the encoder's padding tokens are not attended to. 
- is_cross_attention = encoder_hidden_states is not None - - if is_cross_attention: - key_layer = self.transpose_for_scores(self.key(encoder_hidden_states)) - value_layer = self.transpose_for_scores(self.value(encoder_hidden_states)) - attention_mask = encoder_attention_mask - elif past_key_value is not None: - key_layer = self.transpose_for_scores(self.key(hidden_states)) - value_layer = self.transpose_for_scores(self.value(hidden_states)) - key_layer = torch.cat([past_key_value[0], key_layer], dim=2) - value_layer = torch.cat([past_key_value[1], value_layer], dim=2) - else: - key_layer = self.transpose_for_scores(self.key(hidden_states)) - value_layer = self.transpose_for_scores(self.value(hidden_states)) - - mixed_query_layer = self.query(hidden_states) - - query_layer = self.transpose_for_scores(mixed_query_layer) - - past_key_value = (key_layer, value_layer) - - # Take the dot product between "query" and "key" to get the raw attention scores. - attention_scores = torch.matmul(query_layer, key_layer.transpose(-1, -2)) - - if ( - self.position_embedding_type == "relative_key" - or self.position_embedding_type == "relative_key_query" - ): - seq_length = hidden_states.size()[1] - position_ids_l = torch.arange( - seq_length, dtype=torch.long, device=hidden_states.device - ).view(-1, 1) - position_ids_r = torch.arange( - seq_length, dtype=torch.long, device=hidden_states.device - ).view(1, -1) - distance = position_ids_l - position_ids_r - positional_embedding = self.distance_embedding( - distance + self.max_position_embeddings - 1 - ) - positional_embedding = positional_embedding.to( - dtype=query_layer.dtype - ) # fp16 compatibility - - if self.position_embedding_type == "relative_key": - relative_position_scores = torch.einsum( - "bhld,lrd->bhlr", query_layer, positional_embedding - ) - attention_scores = attention_scores + relative_position_scores - elif self.position_embedding_type == "relative_key_query": - relative_position_scores_query = torch.einsum( - "bhld,lrd->bhlr", query_layer, positional_embedding - ) - relative_position_scores_key = torch.einsum( - "bhrd,lrd->bhlr", key_layer, positional_embedding - ) - attention_scores = ( - attention_scores - + relative_position_scores_query - + relative_position_scores_key - ) - - attention_scores = attention_scores / math.sqrt(self.attention_head_size) - if attention_mask is not None: - # Apply the attention mask is (precomputed for all layers in BertModel forward() function) - attention_scores = attention_scores + attention_mask - - # Normalize the attention scores to probabilities. - attention_probs = nn.Softmax(dim=-1)(attention_scores) - - if is_cross_attention and self.save_attention: - self.save_attention_map(attention_probs) - attention_probs.register_hook(self.save_attn_gradients) - - # This is actually dropping out entire tokens to attend to, which might - # seem a bit unusual, but is taken from the original Transformer paper. 
- attention_probs_dropped = self.dropout(attention_probs) - - # Mask heads if we want to - if head_mask is not None: - attention_probs_dropped = attention_probs_dropped * head_mask - - context_layer = torch.matmul(attention_probs_dropped, value_layer) - - context_layer = context_layer.permute(0, 2, 1, 3).contiguous() - new_context_layer_shape = context_layer.size()[:-2] + (self.all_head_size,) - context_layer = context_layer.view(*new_context_layer_shape) - - outputs = ( - (context_layer, attention_probs) if output_attentions else (context_layer,) - ) - - outputs = outputs + (past_key_value,) - return outputs - - -class BertSelfOutput(nn.Module): - def __init__(self, config): - super().__init__() - self.dense = nn.Linear(config.hidden_size, config.hidden_size) - self.LayerNorm = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps) - self.dropout = nn.Dropout(config.hidden_dropout_prob) - - def forward(self, hidden_states, input_tensor): - hidden_states = self.dense(hidden_states) - hidden_states = self.dropout(hidden_states) - hidden_states = self.LayerNorm(hidden_states + input_tensor) - return hidden_states - - -class BertAttention(nn.Module): - def __init__(self, config, is_cross_attention=False): - super().__init__() - self.self = BertSelfAttention(config, is_cross_attention) - self.output = BertSelfOutput(config) - self.pruned_heads = set() - - def prune_heads(self, heads): - if len(heads) == 0: - return - heads, index = find_pruneable_heads_and_indices( - heads, - self.self.num_attention_heads, - self.self.attention_head_size, - self.pruned_heads, - ) - - # Prune linear layers - self.self.query = prune_linear_layer(self.self.query, index) - self.self.key = prune_linear_layer(self.self.key, index) - self.self.value = prune_linear_layer(self.self.value, index) - self.output.dense = prune_linear_layer(self.output.dense, index, dim=1) - - # Update hyper params and store pruned heads - self.self.num_attention_heads = self.self.num_attention_heads - len(heads) - self.self.all_head_size = ( - self.self.attention_head_size * self.self.num_attention_heads - ) - self.pruned_heads = self.pruned_heads.union(heads) - - def forward( - self, - hidden_states, - attention_mask=None, - head_mask=None, - encoder_hidden_states=None, - encoder_attention_mask=None, - past_key_value=None, - output_attentions=False, - ): - self_outputs = self.self( - hidden_states, - attention_mask, - head_mask, - encoder_hidden_states, - encoder_attention_mask, - past_key_value, - output_attentions, - ) - attention_output = self.output(self_outputs[0], hidden_states) - - outputs = (attention_output,) + self_outputs[ - 1: - ] # add attentions if we output them - return outputs - - -class BertIntermediate(nn.Module): - def __init__(self, config): - super().__init__() - self.dense = nn.Linear(config.hidden_size, config.intermediate_size) - if isinstance(config.hidden_act, str): - self.intermediate_act_fn = ACT2FN[config.hidden_act] - else: - self.intermediate_act_fn = config.hidden_act - - def forward(self, hidden_states): - hidden_states = self.dense(hidden_states) - hidden_states = self.intermediate_act_fn(hidden_states) - return hidden_states - - -class BertOutput(nn.Module): - def __init__(self, config): - super().__init__() - self.dense = nn.Linear(config.intermediate_size, config.hidden_size) - self.LayerNorm = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps) - self.dropout = nn.Dropout(config.hidden_dropout_prob) - - def forward(self, hidden_states, input_tensor): - hidden_states = 
self.dense(hidden_states) - hidden_states = self.dropout(hidden_states) - hidden_states = self.LayerNorm(hidden_states + input_tensor) - return hidden_states - - -class BertLayer(nn.Module): - def __init__(self, config, layer_num): - super().__init__() - self.config = config - self.chunk_size_feed_forward = config.chunk_size_feed_forward - self.seq_len_dim = 1 - self.attention = BertAttention(config) - self.layer_num = layer_num - if ( - self.config.add_cross_attention - and layer_num % self.config.cross_attention_freq == 0 - ): - self.crossattention = BertAttention( - config, is_cross_attention=self.config.add_cross_attention - ) - self.has_cross_attention = True - else: - self.has_cross_attention = False - self.intermediate = BertIntermediate(config) - self.output = BertOutput(config) - - self.intermediate_query = BertIntermediate(config) - self.output_query = BertOutput(config) - - def forward( - self, - hidden_states, - attention_mask=None, - head_mask=None, - encoder_hidden_states=None, - encoder_attention_mask=None, - past_key_value=None, - output_attentions=False, - query_length=0, - ): - # decoder uni-directional self-attention cached key/values tuple is at positions 1,2 - self_attn_past_key_value = ( - past_key_value[:2] if past_key_value is not None else None - ) - self_attention_outputs = self.attention( - hidden_states, - attention_mask, - head_mask, - output_attentions=output_attentions, - past_key_value=self_attn_past_key_value, - ) - attention_output = self_attention_outputs[0] - outputs = self_attention_outputs[1:-1] - - present_key_value = self_attention_outputs[-1] - - if query_length > 0: - query_attention_output = attention_output[:, :query_length, :] - - if self.has_cross_attention: - assert ( - encoder_hidden_states is not None - ), "encoder_hidden_states must be given for cross-attention layers" - cross_attention_outputs = self.crossattention( - query_attention_output, - attention_mask, - head_mask, - encoder_hidden_states, - encoder_attention_mask, - output_attentions=output_attentions, - ) - query_attention_output = cross_attention_outputs[0] - outputs = ( - outputs + cross_attention_outputs[1:-1] - ) # add cross attentions if we output attention weights - - layer_output = apply_chunking_to_forward( - self.feed_forward_chunk_query, - self.chunk_size_feed_forward, - self.seq_len_dim, - query_attention_output, - ) - if attention_output.shape[1] > query_length: - layer_output_text = apply_chunking_to_forward( - self.feed_forward_chunk, - self.chunk_size_feed_forward, - self.seq_len_dim, - attention_output[:, query_length:, :], - ) - layer_output = torch.cat([layer_output, layer_output_text], dim=1) - else: - layer_output = apply_chunking_to_forward( - self.feed_forward_chunk, - self.chunk_size_feed_forward, - self.seq_len_dim, - attention_output, - ) - outputs = (layer_output,) + outputs - - outputs = outputs + (present_key_value,) - - return outputs - - def feed_forward_chunk(self, attention_output): - intermediate_output = self.intermediate(attention_output) - layer_output = self.output(intermediate_output, attention_output) - return layer_output - - def feed_forward_chunk_query(self, attention_output): - intermediate_output = self.intermediate_query(attention_output) - layer_output = self.output_query(intermediate_output, attention_output) - return layer_output - - -class BertEncoder(nn.Module): - def __init__(self, config): - super().__init__() - self.config = config - self.layer = nn.ModuleList( - [BertLayer(config, i) for i in range(config.num_hidden_layers)] - 
) - - def forward( - self, - hidden_states, - attention_mask=None, - head_mask=None, - encoder_hidden_states=None, - encoder_attention_mask=None, - past_key_values=None, - use_cache=None, - output_attentions=False, - output_hidden_states=False, - return_dict=True, - query_length=0, - ): - all_hidden_states = () if output_hidden_states else None - all_self_attentions = () if output_attentions else None - all_cross_attentions = ( - () if output_attentions and self.config.add_cross_attention else None - ) - - next_decoder_cache = () if use_cache else None - - for i in range(self.config.num_hidden_layers): - layer_module = self.layer[i] - if output_hidden_states: - all_hidden_states = all_hidden_states + (hidden_states,) - - layer_head_mask = head_mask[i] if head_mask is not None else None - past_key_value = past_key_values[i] if past_key_values is not None else None - - if getattr(self.config, "gradient_checkpointing", False) and self.training: - - if use_cache: - logger.warn( - "`use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`..." - ) - use_cache = False - - def create_custom_forward(module): - def custom_forward(*inputs): - return module( - *inputs, past_key_value, output_attentions, query_length - ) - - return custom_forward - - layer_outputs = torch.utils.checkpoint.checkpoint( - create_custom_forward(layer_module), - hidden_states, - attention_mask, - layer_head_mask, - encoder_hidden_states, - encoder_attention_mask, - ) - else: - layer_outputs = layer_module( - hidden_states, - attention_mask, - layer_head_mask, - encoder_hidden_states, - encoder_attention_mask, - past_key_value, - output_attentions, - query_length, - ) - - hidden_states = layer_outputs[0] - if use_cache: - next_decoder_cache += (layer_outputs[-1],) - if output_attentions: - all_self_attentions = all_self_attentions + (layer_outputs[1],) - all_cross_attentions = all_cross_attentions + (layer_outputs[2],) - - if output_hidden_states: - all_hidden_states = all_hidden_states + (hidden_states,) - - if not return_dict: - return tuple( - v - for v in [ - hidden_states, - next_decoder_cache, - all_hidden_states, - all_self_attentions, - all_cross_attentions, - ] - if v is not None - ) - return BaseModelOutputWithPastAndCrossAttentions( - last_hidden_state=hidden_states, - past_key_values=next_decoder_cache, - hidden_states=all_hidden_states, - attentions=all_self_attentions, - cross_attentions=all_cross_attentions, - ) - - -class BertPooler(nn.Module): - def __init__(self, config): - super().__init__() - self.dense = nn.Linear(config.hidden_size, config.hidden_size) - self.activation = nn.Tanh() - - def forward(self, hidden_states): - # We "pool" the model by simply taking the hidden state corresponding - # to the first token. 
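-        # hidden_states: (batch_size, seq_len, hidden_size) -> first_token_tensor: (batch_size, hidden_size)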
- first_token_tensor = hidden_states[:, 0] - pooled_output = self.dense(first_token_tensor) - pooled_output = self.activation(pooled_output) - return pooled_output - - -class BertPredictionHeadTransform(nn.Module): - def __init__(self, config): - super().__init__() - self.dense = nn.Linear(config.hidden_size, config.hidden_size) - if isinstance(config.hidden_act, str): - self.transform_act_fn = ACT2FN[config.hidden_act] - else: - self.transform_act_fn = config.hidden_act - self.LayerNorm = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps) - - def forward(self, hidden_states): - hidden_states = self.dense(hidden_states) - hidden_states = self.transform_act_fn(hidden_states) - hidden_states = self.LayerNorm(hidden_states) - return hidden_states - - -class BertLMPredictionHead(nn.Module): - def __init__(self, config): - super().__init__() - self.transform = BertPredictionHeadTransform(config) - - # The output weights are the same as the input embeddings, but there is - # an output-only bias for each token. - self.decoder = nn.Linear(config.hidden_size, config.vocab_size, bias=False) - - self.bias = nn.Parameter(torch.zeros(config.vocab_size)) - - # Need a link between the two variables so that the bias is correctly resized with `resize_token_embeddings` - self.decoder.bias = self.bias - - def forward(self, hidden_states): - hidden_states = self.transform(hidden_states) - hidden_states = self.decoder(hidden_states) - return hidden_states - - -class BertOnlyMLMHead(nn.Module): - def __init__(self, config): - super().__init__() - self.predictions = BertLMPredictionHead(config) - - def forward(self, sequence_output): - prediction_scores = self.predictions(sequence_output) - return prediction_scores - - -class BertPreTrainedModel(PreTrainedModel): - """ - An abstract class to handle weights initialization and a simple interface for downloading and loading pretrained - models. - """ - - config_class = BertConfig - base_model_prefix = "bert" - _keys_to_ignore_on_load_missing = [r"position_ids"] - - def _init_weights(self, module): - """Initialize the weights""" - if isinstance(module, (nn.Linear, nn.Embedding)): - # Slightly different from the TF version which uses truncated_normal for initialization - # cf https://github.com/pytorch/pytorch/pull/5617 - module.weight.data.normal_(mean=0.0, std=self.config.initializer_range) - elif isinstance(module, nn.LayerNorm): - module.bias.data.zero_() - module.weight.data.fill_(1.0) - if isinstance(module, nn.Linear) and module.bias is not None: - module.bias.data.zero_() - - -class BertModel(BertPreTrainedModel): - """ - The model can behave as an encoder (with only self-attention) as well as a decoder, in which case a layer of - cross-attention is added between the self-attention layers, following the architecture described in `Attention is - all you need `__ by Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, - Llion Jones, Aidan N. Gomez, Lukasz Kaiser and Illia Polosukhin. - argument and :obj:`add_cross_attention` set to :obj:`True`; an :obj:`encoder_hidden_states` is then expected as an - input to the forward pass. 
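-    In this variant, decoder behaviour is selected per call by passing :obj:`is_decoder=True` to :meth:`forward`, and cross-attention layers are inserted every :obj:`cross_attention_freq` layers when :obj:`add_cross_attention` is set in the config.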
- """ - - def __init__(self, config, add_pooling_layer=False): - super().__init__(config) - self.config = config - - self.embeddings = BertEmbeddings(config) - - self.encoder = BertEncoder(config) - - self.pooler = BertPooler(config) if add_pooling_layer else None - - self.init_weights() - - def get_input_embeddings(self): - return self.embeddings.word_embeddings - - def set_input_embeddings(self, value): - self.embeddings.word_embeddings = value - - def _prune_heads(self, heads_to_prune): - """ - Prunes heads of the model. heads_to_prune: dict of {layer_num: list of heads to prune in this layer} See base - class PreTrainedModel - """ - for layer, heads in heads_to_prune.items(): - self.encoder.layer[layer].attention.prune_heads(heads) - - def get_extended_attention_mask( - self, - attention_mask: Tensor, - input_shape: Tuple[int], - device: device, - is_decoder: bool, - has_query: bool = False, - ) -> Tensor: - """ - Makes broadcastable attention and causal masks so that future and masked tokens are ignored. - - Arguments: - attention_mask (:obj:`torch.Tensor`): - Mask with ones indicating tokens to attend to, zeros for tokens to ignore. - input_shape (:obj:`Tuple[int]`): - The shape of the input to the model. - device: (:obj:`torch.device`): - The device of the input to the model. - - Returns: - :obj:`torch.Tensor` The extended attention mask, with a the same dtype as :obj:`attention_mask.dtype`. - """ - # We can provide a self-attention mask of dimensions [batch_size, from_seq_length, to_seq_length] - # ourselves in which case we just need to make it broadcastable to all heads. - if attention_mask.dim() == 3: - extended_attention_mask = attention_mask[:, None, :, :] - elif attention_mask.dim() == 2: - # Provided a padding mask of dimensions [batch_size, seq_length] - # - if the model is a decoder, apply a causal mask in addition to the padding mask - # - if the model is an encoder, make the mask broadcastable to [batch_size, num_heads, seq_length, seq_length] - if is_decoder: - batch_size, seq_length = input_shape - - seq_ids = torch.arange(seq_length, device=device) - causal_mask = ( - seq_ids[None, None, :].repeat(batch_size, seq_length, 1) - <= seq_ids[None, :, None] - ) - - # add a prefix ones mask to the causal mask - # causal and attention masks must have same type with pytorch version < 1.3 - causal_mask = causal_mask.to(attention_mask.dtype) - - if causal_mask.shape[1] < attention_mask.shape[1]: - prefix_seq_len = attention_mask.shape[1] - causal_mask.shape[1] - if has_query: # UniLM style attention mask - causal_mask = torch.cat( - [ - torch.zeros( - (batch_size, prefix_seq_len, seq_length), - device=device, - dtype=causal_mask.dtype, - ), - causal_mask, - ], - axis=1, - ) - causal_mask = torch.cat( - [ - torch.ones( - (batch_size, causal_mask.shape[1], prefix_seq_len), - device=device, - dtype=causal_mask.dtype, - ), - causal_mask, - ], - axis=-1, - ) - extended_attention_mask = ( - causal_mask[:, None, :, :] * attention_mask[:, None, None, :] - ) - else: - extended_attention_mask = attention_mask[:, None, None, :] - else: - raise ValueError( - "Wrong shape for input_ids (shape {}) or attention_mask (shape {})".format( - input_shape, attention_mask.shape - ) - ) - - # Since attention_mask is 1.0 for positions we want to attend and 0.0 for - # masked positions, this operation will create a tensor which is 0.0 for - # positions we want to attend and -10000.0 for masked positions. 
- # Since we are adding it to the raw scores before the softmax, this is - # effectively the same as removing these entirely. - extended_attention_mask = extended_attention_mask.to( - dtype=self.dtype - ) # fp16 compatibility - extended_attention_mask = (1.0 - extended_attention_mask) * -10000.0 - return extended_attention_mask - - def forward( - self, - input_ids=None, - attention_mask=None, - position_ids=None, - head_mask=None, - query_embeds=None, - encoder_hidden_states=None, - encoder_attention_mask=None, - past_key_values=None, - use_cache=None, - output_attentions=None, - output_hidden_states=None, - return_dict=None, - is_decoder=False, - ): - r""" - encoder_hidden_states (:obj:`torch.FloatTensor` of shape :obj:`(batch_size, sequence_length, hidden_size)`, `optional`): - Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if - the model is configured as a decoder. - encoder_attention_mask (:obj:`torch.FloatTensor` of shape :obj:`(batch_size, sequence_length)`, `optional`): - Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in - the cross-attention if the model is configured as a decoder. Mask values selected in ``[0, 1]``: - - 1 for tokens that are **not masked**, - - 0 for tokens that are **masked**. - past_key_values (:obj:`tuple(tuple(torch.FloatTensor))` of length :obj:`config.n_layers` with each tuple having 4 tensors of shape :obj:`(batch_size, num_heads, sequence_length - 1, embed_size_per_head)`): - Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding. - If :obj:`past_key_values` are used, the user can optionally input only the last :obj:`decoder_input_ids` - (those that don't have their past key value states given to this model) of shape :obj:`(batch_size, 1)` - instead of all :obj:`decoder_input_ids` of shape :obj:`(batch_size, sequence_length)`. - use_cache (:obj:`bool`, `optional`): - If set to :obj:`True`, :obj:`past_key_values` key value states are returned and can be used to speed up - decoding (see :obj:`past_key_values`). - """ - output_attentions = ( - output_attentions - if output_attentions is not None - else self.config.output_attentions - ) - output_hidden_states = ( - output_hidden_states - if output_hidden_states is not None - else self.config.output_hidden_states - ) - return_dict = ( - return_dict if return_dict is not None else self.config.use_return_dict - ) - - # use_cache = use_cache if use_cache is not None else self.config.use_cache - - if input_ids is None: - assert ( - query_embeds is not None - ), "You have to specify query_embeds when input_ids is None" - - # past_key_values_length - past_key_values_length = ( - past_key_values[0][0].shape[2] - self.config.query_length - if past_key_values is not None - else 0 - ) - - query_length = query_embeds.shape[1] if query_embeds is not None else 0 - - embedding_output = self.embeddings( - input_ids=input_ids, - position_ids=position_ids, - query_embeds=query_embeds, - past_key_values_length=past_key_values_length, - ) - - input_shape = embedding_output.size()[:-1] - batch_size, seq_length = input_shape - device = embedding_output.device - - if attention_mask is None: - attention_mask = torch.ones( - ((batch_size, seq_length + past_key_values_length)), device=device - ) - - # We can provide a self-attention mask of dimensions [batch_size, from_seq_length, to_seq_length] - # ourselves in which case we just need to make it broadcastable to all heads. 
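-        # In decoder mode, a causal mask (with special handling for the query prefix) is combined with the padding mask; in encoder mode, the padding mask is simply broadcast to all heads.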
- if is_decoder: - extended_attention_mask = self.get_extended_attention_mask( - attention_mask, - input_ids.shape, - device, - is_decoder, - has_query=(query_embeds is not None), - ) - else: - extended_attention_mask = self.get_extended_attention_mask( - attention_mask, input_shape, device, is_decoder - ) - - # If a 2D or 3D attention mask is provided for the cross-attention - # we need to make broadcastable to [batch_size, num_heads, seq_length, seq_length] - if encoder_hidden_states is not None: - if type(encoder_hidden_states) == list: - encoder_batch_size, encoder_sequence_length, _ = encoder_hidden_states[ - 0 - ].size() - else: - ( - encoder_batch_size, - encoder_sequence_length, - _, - ) = encoder_hidden_states.size() - encoder_hidden_shape = (encoder_batch_size, encoder_sequence_length) - - if type(encoder_attention_mask) == list: - encoder_extended_attention_mask = [ - self.invert_attention_mask(mask) for mask in encoder_attention_mask - ] - elif encoder_attention_mask is None: - encoder_attention_mask = torch.ones(encoder_hidden_shape, device=device) - encoder_extended_attention_mask = self.invert_attention_mask( - encoder_attention_mask - ) - else: - encoder_extended_attention_mask = self.invert_attention_mask( - encoder_attention_mask - ) - else: - encoder_extended_attention_mask = None - - # Prepare head mask if needed - # 1.0 in head_mask indicate we keep the head - # attention_probs has shape bsz x n_heads x N x N - # input head_mask has shape [num_heads] or [num_hidden_layers x num_heads] - # and head_mask is converted to shape [num_hidden_layers x batch x num_heads x seq_length x seq_length] - head_mask = self.get_head_mask(head_mask, self.config.num_hidden_layers) - - encoder_outputs = self.encoder( - embedding_output, - attention_mask=extended_attention_mask, - head_mask=head_mask, - encoder_hidden_states=encoder_hidden_states, - encoder_attention_mask=encoder_extended_attention_mask, - past_key_values=past_key_values, - use_cache=use_cache, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - query_length=query_length, - ) - sequence_output = encoder_outputs[0] - pooled_output = ( - self.pooler(sequence_output) if self.pooler is not None else None - ) - - if not return_dict: - return (sequence_output, pooled_output) + encoder_outputs[1:] - - return BaseModelOutputWithPoolingAndCrossAttentions( - last_hidden_state=sequence_output, - pooler_output=pooled_output, - past_key_values=encoder_outputs.past_key_values, - hidden_states=encoder_outputs.hidden_states, - attentions=encoder_outputs.attentions, - cross_attentions=encoder_outputs.cross_attentions, - ) - - -class BertLMHeadModel(BertPreTrainedModel): - - _keys_to_ignore_on_load_unexpected = [r"pooler"] - _keys_to_ignore_on_load_missing = [r"position_ids", r"predictions.decoder.bias"] - - def __init__(self, config): - super().__init__(config) - - self.bert = BertModel(config, add_pooling_layer=False) - self.cls = BertOnlyMLMHead(config) - - self.init_weights() - - def get_output_embeddings(self): - return self.cls.predictions.decoder - - def set_output_embeddings(self, new_embeddings): - self.cls.predictions.decoder = new_embeddings - - def forward( - self, - input_ids=None, - attention_mask=None, - position_ids=None, - head_mask=None, - query_embeds=None, - encoder_hidden_states=None, - encoder_attention_mask=None, - labels=None, - past_key_values=None, - use_cache=True, - output_attentions=None, - output_hidden_states=None, - return_dict=None, - 
return_logits=False, - is_decoder=True, - reduction="mean", - ): - r""" - encoder_hidden_states (:obj:`torch.FloatTensor` of shape :obj:`(batch_size, sequence_length, hidden_size)`, `optional`): - Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if - the model is configured as a decoder. - encoder_attention_mask (:obj:`torch.FloatTensor` of shape :obj:`(batch_size, sequence_length)`, `optional`): - Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in - the cross-attention if the model is configured as a decoder. Mask values selected in ``[0, 1]``: - - 1 for tokens that are **not masked**, - - 0 for tokens that are **masked**. - labels (:obj:`torch.LongTensor` of shape :obj:`(batch_size, sequence_length)`, `optional`): - Labels for computing the left-to-right language modeling loss (next word prediction). Indices should be in - ``[-100, 0, ..., config.vocab_size]`` (see ``input_ids`` docstring) Tokens with indices set to ``-100`` are - ignored (masked), the loss is only computed for the tokens with labels n ``[0, ..., config.vocab_size]`` - past_key_values (:obj:`tuple(tuple(torch.FloatTensor))` of length :obj:`config.n_layers` with each tuple having 4 tensors of shape :obj:`(batch_size, num_heads, sequence_length - 1, embed_size_per_head)`): - Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding. - If :obj:`past_key_values` are used, the user can optionally input only the last :obj:`decoder_input_ids` - (those that don't have their past key value states given to this model) of shape :obj:`(batch_size, 1)` - instead of all :obj:`decoder_input_ids` of shape :obj:`(batch_size, sequence_length)`. - use_cache (:obj:`bool`, `optional`): - If set to :obj:`True`, :obj:`past_key_values` key value states are returned and can be used to speed up - decoding (see :obj:`past_key_values`). 
- Returns: - Example:: - >>> from transformers import BertTokenizer, BertLMHeadModel, BertConfig - >>> import torch - >>> tokenizer = BertTokenizer.from_pretrained('bert-base-cased') - >>> config = BertConfig.from_pretrained("bert-base-cased") - >>> model = BertLMHeadModel.from_pretrained('bert-base-cased', config=config) - >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") - >>> outputs = model(**inputs) - >>> prediction_logits = outputs.logits - """ - return_dict = ( - return_dict if return_dict is not None else self.config.use_return_dict - ) - if labels is not None: - use_cache = False - if past_key_values is not None: - query_embeds = None - - outputs = self.bert( - input_ids, - attention_mask=attention_mask, - position_ids=position_ids, - head_mask=head_mask, - query_embeds=query_embeds, - encoder_hidden_states=encoder_hidden_states, - encoder_attention_mask=encoder_attention_mask, - past_key_values=past_key_values, - use_cache=use_cache, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - is_decoder=is_decoder, - ) - - sequence_output = outputs[0] - if query_embeds is not None: - sequence_output = outputs[0][:, query_embeds.shape[1] :, :] - - prediction_scores = self.cls(sequence_output) - - if return_logits: - return prediction_scores[:, :-1, :].contiguous() - - lm_loss = None - if labels is not None: - # we are doing next-token prediction; shift prediction scores and input ids by one - shifted_prediction_scores = prediction_scores[:, :-1, :].contiguous() - labels = labels[:, 1:].contiguous() - loss_fct = CrossEntropyLoss(reduction=reduction, label_smoothing=0.1) - lm_loss = loss_fct( - shifted_prediction_scores.view(-1, self.config.vocab_size), - labels.view(-1), - ) - if reduction == "none": - lm_loss = lm_loss.view(prediction_scores.size(0), -1).sum(1) - - if not return_dict: - output = (prediction_scores,) + outputs[2:] - return ((lm_loss,) + output) if lm_loss is not None else output - - return CausalLMOutputWithCrossAttentions( - loss=lm_loss, - logits=prediction_scores, - past_key_values=outputs.past_key_values, - hidden_states=outputs.hidden_states, - attentions=outputs.attentions, - cross_attentions=outputs.cross_attentions, - ) - - def prepare_inputs_for_generation( - self, input_ids, query_embeds, past=None, attention_mask=None, **model_kwargs - ): - # if model is used as a decoder in encoder-decoder model, the decoder attention mask is created on the fly - if attention_mask is None: - attention_mask = input_ids.new_ones(input_ids.shape) - query_mask = input_ids.new_ones(query_embeds.shape[:-1]) - attention_mask = torch.cat([query_mask, attention_mask], dim=-1) - - # cut decoder_input_ids if past is used - if past is not None: - input_ids = input_ids[:, -1:] - - return { - "input_ids": input_ids, - "query_embeds": query_embeds, - "attention_mask": attention_mask, - "past_key_values": past, - "encoder_hidden_states": model_kwargs.get("encoder_hidden_states", None), - "encoder_attention_mask": model_kwargs.get("encoder_attention_mask", None), - "is_decoder": True, - } - - def _reorder_cache(self, past, beam_idx): - reordered_past = () - for layer_past in past: - reordered_past += ( - tuple( - past_state.index_select(0, beam_idx) for past_state in layer_past - ), - ) - return reordered_past - - -class BertForMaskedLM(BertPreTrainedModel): - - _keys_to_ignore_on_load_unexpected = [r"pooler"] - _keys_to_ignore_on_load_missing = [r"position_ids", r"predictions.decoder.bias"] - - def __init__(self, 
config): - super().__init__(config) - - self.bert = BertModel(config, add_pooling_layer=False) - self.cls = BertOnlyMLMHead(config) - - self.init_weights() - - def get_output_embeddings(self): - return self.cls.predictions.decoder - - def set_output_embeddings(self, new_embeddings): - self.cls.predictions.decoder = new_embeddings - - def forward( - self, - input_ids=None, - attention_mask=None, - position_ids=None, - head_mask=None, - query_embeds=None, - encoder_hidden_states=None, - encoder_attention_mask=None, - labels=None, - output_attentions=None, - output_hidden_states=None, - return_dict=None, - return_logits=False, - is_decoder=False, - ): - r""" - labels (:obj:`torch.LongTensor` of shape :obj:`(batch_size, sequence_length)`, `optional`): - Labels for computing the masked language modeling loss. Indices should be in ``[-100, 0, ..., - config.vocab_size]`` (see ``input_ids`` docstring) Tokens with indices set to ``-100`` are ignored - (masked), the loss is only computed for the tokens with labels in ``[0, ..., config.vocab_size]`` - """ - - return_dict = ( - return_dict if return_dict is not None else self.config.use_return_dict - ) - - outputs = self.bert( - input_ids, - attention_mask=attention_mask, - position_ids=position_ids, - head_mask=head_mask, - query_embeds=query_embeds, - encoder_hidden_states=encoder_hidden_states, - encoder_attention_mask=encoder_attention_mask, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - is_decoder=is_decoder, - ) - - if query_embeds is not None: - sequence_output = outputs[0][:, query_embeds.shape[1] :, :] - prediction_scores = self.cls(sequence_output) - - if return_logits: - return prediction_scores - - masked_lm_loss = None - if labels is not None: - loss_fct = CrossEntropyLoss() # -100 index = padding token - masked_lm_loss = loss_fct( - prediction_scores.view(-1, self.config.vocab_size), labels.view(-1) - ) - - if not return_dict: - output = (prediction_scores,) + outputs[2:] - return ( - ((masked_lm_loss,) + output) if masked_lm_loss is not None else output - ) - - return MaskedLMOutput( - loss=masked_lm_loss, - logits=prediction_scores, - hidden_states=outputs.hidden_states, - attentions=outputs.attentions, - ) diff --git a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/detectron2/modeling/matcher.py b/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/detectron2/modeling/matcher.py deleted file mode 100644 index a0d1b8fe140b8bf717ac31aaf252ac2ca87c85a5..0000000000000000000000000000000000000000 --- a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/detectron2/modeling/matcher.py +++ /dev/null @@ -1,135 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved -from typing import List -import torch - - -class Matcher(object): - """ - This class assigns to each predicted "element" (e.g., a box) a ground-truth - element. Each predicted element will have exactly zero or one matches; each - ground-truth element may be matched to zero or more predicted elements. - - The matching is determined by the MxN match_quality_matrix, that characterizes - how well each (ground-truth, prediction)-pair match each other. For example, - if the elements are boxes, this matrix may contain box intersection-over-union - overlap values. - - The matcher returns (a) a vector of length N containing the index of the - ground-truth element m in [0, M) that matches to prediction n in [0, N). - (b) a vector of length N containing the labels for each prediction. 
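-    For example, when IoU is used as the match quality, matches[n] is the index of the ground-truth box that best overlaps prediction n, while match_labels[n] marks prediction n as positive, negative, or ignored according to the configured thresholds.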
- """ - - def __init__( - self, thresholds: List[float], labels: List[int], allow_low_quality_matches: bool = False - ): - """ - Args: - thresholds (list): a list of thresholds used to stratify predictions - into levels. - labels (list): a list of values to label predictions belonging at - each level. A label can be one of {-1, 0, 1} signifying - {ignore, negative class, positive class}, respectively. - allow_low_quality_matches (bool): if True, produce additional matches - for predictions with maximum match quality lower than high_threshold. - See set_low_quality_matches_ for more details. - - For example, - thresholds = [0.3, 0.5] - labels = [0, -1, 1] - All predictions with iou < 0.3 will be marked with 0 and - thus will be considered as false positives while training. - All predictions with 0.3 <= iou < 0.5 will be marked with -1 and - thus will be ignored. - All predictions with 0.5 <= iou will be marked with 1 and - thus will be considered as true positives. - """ - # Add -inf and +inf to first and last position in thresholds - thresholds = thresholds[:] - assert thresholds[0] > 0 - thresholds.insert(0, -float("inf")) - thresholds.append(float("inf")) - assert all(low <= high for (low, high) in zip(thresholds[:-1], thresholds[1:])) - assert all(l in [-1, 0, 1] for l in labels) - assert len(labels) == len(thresholds) - 1 - self.thresholds = thresholds - self.labels = labels - self.allow_low_quality_matches = allow_low_quality_matches - - def __call__(self, match_quality_matrix): - """ - Args: - match_quality_matrix (Tensor[float]): an MxN tensor, containing the - pairwise quality between M ground-truth elements and N predicted - elements. All elements must be >= 0 (due to the us of `torch.nonzero` - for selecting indices in :meth:`set_low_quality_matches_`). - - Returns: - matches (Tensor[int64]): a vector of length N, where matches[i] is a matched - ground-truth index in [0, M) - match_labels (Tensor[int8]): a vector of length N, where pred_labels[i] indicates - whether a prediction is a true or false positive or ignored - """ - assert match_quality_matrix.dim() == 2 - if match_quality_matrix.numel() == 0: - default_matches = match_quality_matrix.new_full( - (match_quality_matrix.size(1),), 0, dtype=torch.int64 - ) - # When no gt boxes exist, we define IOU = 0 and therefore set labels - # to `self.labels[0]`, which usually defaults to background class 0 - # To choose to ignore instead, can make labels=[-1,0,-1,1] + set appropriate thresholds - default_match_labels = match_quality_matrix.new_full( - (match_quality_matrix.size(1),), self.labels[0], dtype=torch.int8 - ) - return default_matches, default_match_labels - - assert torch.all(match_quality_matrix >= 0) - - # match_quality_matrix is M (gt) x N (predicted) - # Max over gt elements (dim 0) to find best gt candidate for each prediction - matched_vals, matches = match_quality_matrix.max(dim=0) - - match_labels = matches.new_full(matches.size(), 1, dtype=torch.int8) - - for (l, low, high) in zip(self.labels, self.thresholds[:-1], self.thresholds[1:]): - low_high = (matched_vals >= low) & (matched_vals < high) - match_labels[low_high] = l - - if self.allow_low_quality_matches: - self.set_low_quality_matches_(match_labels, match_quality_matrix) - - return matches, match_labels - - def set_low_quality_matches_(self, match_labels, match_quality_matrix): - """ - Produce additional matches for predictions that have only low-quality matches. 
- Specifically, for each ground-truth G find the set of predictions that have - maximum overlap with it (including ties); for each prediction in that set, if - it is unmatched, then match it to the ground-truth G. - - This function implements the RPN assignment case (i) in Sec. 3.1.2 of the - Faster R-CNN paper: https://arxiv.org/pdf/1506.01497v3.pdf. - """ - # For each gt, find the prediction with which it has highest quality - highest_quality_foreach_gt, _ = match_quality_matrix.max(dim=1) - # Find the highest quality match available, even if it is low, including ties. - # Note that the matches qualities must be positive due to the use of - # `torch.nonzero`. - gt_pred_pairs_of_highest_quality = torch.nonzero( - match_quality_matrix == highest_quality_foreach_gt[:, None] - ) - # Example gt_pred_pairs_of_highest_quality: - # tensor([[ 0, 39796], - # [ 1, 32055], - # [ 1, 32070], - # [ 2, 39190], - # [ 2, 40255], - # [ 3, 40390], - # [ 3, 41455], - # [ 4, 45470], - # [ 5, 45325], - # [ 5, 46390]]) - # Each row is a (gt index, prediction index) - # Note how gt items 1, 2, 3, and 5 each have two ties - - pred_inds_to_update = gt_pred_pairs_of_highest_quality[:, 1] - match_labels[pred_inds_to_update] = 1 diff --git a/spaces/CVPR/LIVE/thrust/thrust/system/cpp/detail/sort.h b/spaces/CVPR/LIVE/thrust/thrust/system/cpp/detail/sort.h deleted file mode 100644 index ae38b3ba8c7854eafc92fe9f35ff7d3220a02c20..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/thrust/thrust/system/cpp/detail/sort.h +++ /dev/null @@ -1,23 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -#pragma once - -#include - -// this system inherits sort -#include - diff --git a/spaces/CVPR/LIVE/thrust/thrust/system/cuda/detail/scan.h b/spaces/CVPR/LIVE/thrust/thrust/system/cuda/detail/scan.h deleted file mode 100644 index 4c3cfefec7290d2a80036d1edfb84b2b0cd5f1b4..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/thrust/thrust/system/cuda/detail/scan.h +++ /dev/null @@ -1,928 +0,0 @@ -/****************************************************************************** - * Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved. - * - * Redistribution and use in source and binary forms, with or without - * modification, are permitted provided that the following conditions are met: - * * Redistributions of source code must retain the above copyright - * notice, this list of conditions and the following disclaimer. - * * Redistributions in binary form must reproduce the above copyright - * notice, this list of conditions and the following disclaimer in the - * documentation and/or other materials provided with the distribution. - * * Neither the name of the NVIDIA CORPORATION nor the - * names of its contributors may be used to endorse or promote products - * derived from this software without specific prior written permission. 
- * - * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" - * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE - * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE - * ARE DISCLAIMED. IN NO EVENT SHALL NVIDIA CORPORATION BE LIABLE FOR ANY - * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES - * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; - * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND - * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT - * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS - * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. - * - ******************************************************************************/ -#pragma once - - -#if THRUST_DEVICE_COMPILER == THRUST_DEVICE_COMPILER_NVCC -#include -#include -#include -#include - -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include -#include - -namespace thrust -{ -template -__host__ __device__ OutputIterator -inclusive_scan(const thrust::detail::execution_policy_base &exec, - InputIterator first, - InputIterator last, - OutputIterator result, - AssociativeOperator binary_op); - -template -__host__ __device__ OutputIterator -exclusive_scan(const thrust::detail::execution_policy_base &exec, - InputIterator first, - InputIterator last, - OutputIterator result, - T init, - AssociativeOperator binary_op); -} // end namespace thrust - -namespace thrust -{ -namespace cuda_cub { - -namespace __scan { - - namespace mpl = thrust::detail::mpl::math; - - template - struct WarpSize { enum { value = 32 }; }; - - template - struct PtxPolicy - { - enum - { - BLOCK_THREADS = _BLOCK_THREADS, - ITEMS_PER_THREAD = _ITEMS_PER_THREAD, - ITEMS_PER_TILE = BLOCK_THREADS * ITEMS_PER_THREAD, - }; - - static const cub::BlockLoadAlgorithm LOAD_ALGORITHM = _LOAD_ALGORITHM; - static const cub::CacheLoadModifier LOAD_MODIFIER = _LOAD_MODIFIER; - static const cub::BlockStoreAlgorithm STORE_ALGORITHM = _STORE_ALGORITHM; - static const cub::BlockScanAlgorithm SCAN_ALGORITHM = _SCAN_ALGORITHM; - }; // struct PtxPolicy - - - // Scale the number of warps to keep same amount of "tile" storage - // as the nominal configuration for 4B data. Minimum of two warps. - // - template - struct THRUST_BLOCK_THREADS - { - enum - { - value = mpl::min::value) * - 4) / - sizeof(T)>::value * - WarpSize::value>::value - }; - }; // struct THRUST_BLOCK_THREADS - - // If necessary, scale down number of items per thread to keep - // the same amount of "tile" storage as the nominal configuration for 4B data. 
- // Minimum 1 item per thread - // - template - struct THRUST_ITEMS_PER_THREAD - { - enum - { - value = mpl::min< - int, - NOMINAL_4B_ITEMS_PER_THREAD, - mpl::max< - int, - 1, - (NOMINAL_4B_ITEMS_PER_THREAD * - NOMINAL_4B_BLOCK_THREADS * 4 / sizeof(T)) / - THRUST_BLOCK_THREADS::value>::value>::value - }; - }; - - - template - struct Tuning; - - template - struct Tuning - { - typedef sm30 Arch; - enum - { - NOMINAL_4B_BLOCK_THREADS = 256, - NOMINAL_4B_ITEMS_PER_THREAD = 9, - }; - - typedef PtxPolicy::value, - THRUST_ITEMS_PER_THREAD::value, - cub::BLOCK_LOAD_WARP_TRANSPOSE_TIMESLICED, - cub::LOAD_DEFAULT, - cub::BLOCK_STORE_WARP_TRANSPOSE_TIMESLICED, - cub::BLOCK_SCAN_RAKING_MEMOIZE> - type; - }; // struct Tuning for sm30 - - template - struct Tuning - { - typedef sm35 Arch; - enum - { - NOMINAL_4B_BLOCK_THREADS = 128, - NOMINAL_4B_ITEMS_PER_THREAD = 12, - }; - - typedef PtxPolicy::value, - THRUST_ITEMS_PER_THREAD::value, - cub::BLOCK_LOAD_WARP_TRANSPOSE_TIMESLICED, - cub::LOAD_LDG, - cub::BLOCK_STORE_WARP_TRANSPOSE_TIMESLICED, - cub::BLOCK_SCAN_RAKING> - type; - }; // struct Tuning for sm35 - - template - struct Tuning - { - typedef sm52 Arch; - enum - { - NOMINAL_4B_BLOCK_THREADS = 128, - NOMINAL_4B_ITEMS_PER_THREAD = 12, - }; - - typedef PtxPolicy::value, - THRUST_ITEMS_PER_THREAD::value, - cub::BLOCK_LOAD_WARP_TRANSPOSE_TIMESLICED, - cub::LOAD_LDG, - cub::BLOCK_STORE_WARP_TRANSPOSE_TIMESLICED, - cub::BLOCK_SCAN_RAKING> - type; - }; // struct Tuning for sm52 - - template - struct ScanAgent - { - typedef cub::ScanTileState ScanTileState; - typedef cub::BlockScanRunningPrefixOp RunningPrefixCallback; - - template - struct PtxPlan : Tuning::type - { - typedef Tuning tuning; - - - typedef typename core::LoadIterator::type LoadIt; - typedef typename core::BlockLoad::type BlockLoad; - typedef typename core::BlockStore::type BlockStore; - - typedef cub::TilePrefixCallbackOp - TilePrefixCallback; - typedef cub::BlockScan - BlockScan; - - union TempStorage - { - typename BlockLoad::TempStorage load; - typename BlockStore::TempStorage store; - - struct - { - typename TilePrefixCallback::TempStorage prefix; - typename BlockScan::TempStorage scan; - }; - }; // struct TempStorage - }; // struct PtxPlan - typedef typename core::specialize_plan_msvc10_war::type::type ptx_plan; - - typedef typename ptx_plan::LoadIt LoadIt; - typedef typename ptx_plan::BlockLoad BlockLoad; - typedef typename ptx_plan::BlockStore BlockStore; - typedef typename ptx_plan::TilePrefixCallback TilePrefixCallback; - typedef typename ptx_plan::BlockScan BlockScan; - typedef typename ptx_plan::TempStorage TempStorage; - - enum - { - INCLUSIVE = Inclusive::value, - BLOCK_THREADS = ptx_plan::BLOCK_THREADS, - ITEMS_PER_THREAD = ptx_plan::ITEMS_PER_THREAD, - ITEMS_PER_TILE = ptx_plan::ITEMS_PER_TILE, - - SYNC_AFTER_LOAD = (ptx_plan::LOAD_ALGORITHM != cub::BLOCK_LOAD_DIRECT), - }; - - struct impl - { - //--------------------------------------------------------------------- - // Per thread data - //--------------------------------------------------------------------- - - TempStorage &storage; - ScanTileState &tile_state; - LoadIt load_it; - OutputIt output_it; - ScanOp scan_op; - - //--------------------------------------------------------------------- - // Block scan utility methods (first tile) - //--------------------------------------------------------------------- - - // Exclusive scan specialization - // - template - void THRUST_DEVICE_FUNCTION scan_tile(T (&items)[ITEMS_PER_THREAD], - _ScanOp scan_op, - T & block_aggregate, - 
thrust::detail::false_type /* is_inclusive */) - { - BlockScan(storage.scan).ExclusiveScan(items, items, scan_op, block_aggregate); - } - - // Exclusive sum specialization - // - void THRUST_DEVICE_FUNCTION scan_tile(T (&items)[ITEMS_PER_THREAD], - plus /*scan_op*/, - T & block_aggregate, - thrust::detail::false_type /* is_inclusive */) - { - BlockScan(storage.scan).ExclusiveSum(items, items, block_aggregate); - } - - // Inclusive scan specialization - // - template - void THRUST_DEVICE_FUNCTION scan_tile(T (&items)[ITEMS_PER_THREAD], - _ScanOp scan_op, - T & block_aggregate, - thrust::detail::true_type /* is_inclusive */) - { - BlockScan(storage.scan).InclusiveScan(items, items, scan_op, block_aggregate); - } - - - // Inclusive sum specialization - // - void THRUST_DEVICE_FUNCTION scan_tile(T (&items)[ITEMS_PER_THREAD], - plus /*scan_op*/, - T & block_aggregate, - thrust::detail::true_type /* is_inclusive */) - { - BlockScan(storage.scan).InclusiveSum(items, items, block_aggregate); - } - - //--------------------------------------------------------------------- - // Block scan utility methods (subsequent tiles) - //--------------------------------------------------------------------- - - // Exclusive scan specialization (with prefix from predecessors) - // - template - void THRUST_DEVICE_FUNCTION scan_tile(T (&items)[ITEMS_PER_THREAD], - _ScanOp scan_op, - T & block_aggregate, - PrefixCallback &prefix_op, - thrust::detail::false_type /* is_inclusive */) - { - BlockScan(storage.scan).ExclusiveScan(items, items, scan_op, prefix_op); - block_aggregate = prefix_op.GetBlockAggregate(); - } - - // Exclusive sum specialization (with prefix from predecessors) - // - template - THRUST_DEVICE_FUNCTION void scan_tile(T (&items)[ITEMS_PER_THREAD], - plus /*scan_op*/, - T & block_aggregate, - PrefixCallback &prefix_op, - thrust::detail::false_type /* is_inclusive */) - { - BlockScan(storage.scan).ExclusiveSum(items, items, prefix_op); - block_aggregate = prefix_op.GetBlockAggregate(); - } - - // Inclusive scan specialization (with prefix from predecessors) - // - template - THRUST_DEVICE_FUNCTION void scan_tile(T (&items)[ITEMS_PER_THREAD], - _ScanOp scan_op, - T & block_aggregate, - PrefixCallback &prefix_op, - thrust::detail::true_type /* is_inclusive */) - { - BlockScan(storage.scan).InclusiveScan(items, items, scan_op, prefix_op); - block_aggregate = prefix_op.GetBlockAggregate(); - } - - // Inclusive sum specialization (with prefix from predecessors) - // - template - THRUST_DEVICE_FUNCTION void scan_tile(T (&items)[ITEMS_PER_THREAD], - plus /*scan_op*/, - T & block_aggregate, - PrefixCallback &prefix_op, - thrust::detail::true_type /* is_inclusive */) - { - BlockScan(storage.scan).InclusiveSum(items, items, prefix_op); - block_aggregate = prefix_op.GetBlockAggregate(); - } - - //--------------------------------------------------------------------- - // Cooperatively scan a device-wide sequence of tiles with other CTAs - //--------------------------------------------------------------------- - - // Process a tile of input (dynamic chained scan) - // - template - THRUST_DEVICE_FUNCTION void - consume_tile(Size /*num_items*/, - Size num_remaining, - int tile_idx, - Size tile_base, - AddInitToExclusive add_init_to_exclusive_scan) - { - using core::sync_threadblock; - - // Load items - T items[ITEMS_PER_THREAD]; - - if (IS_FULL_TILE) - { - BlockLoad(storage.load).Load(load_it + tile_base, items); - } - else - { - // Fill last element with the first element - // because collectives are not suffix 
guarded - BlockLoad(storage.load) - .Load(load_it + tile_base, - items, - num_remaining, - *(load_it + tile_base)); - } - - if (SYNC_AFTER_LOAD) - sync_threadblock(); - - // Perform tile scan - if (tile_idx == 0) - { - // Scan first tile - T block_aggregate; - scan_tile(items, scan_op, block_aggregate, Inclusive()); - - // Update tile status if there may be successor tiles (i.e., this tile is full) - if (IS_FULL_TILE && (threadIdx.x == 0)) - tile_state.SetInclusive(0, block_aggregate); - } - else - { - // Scan non-first tile - T block_aggregate; - TilePrefixCallback prefix_op(tile_state, storage.prefix, scan_op, tile_idx); - scan_tile(items, scan_op, block_aggregate, prefix_op, Inclusive()); - } - - sync_threadblock(); - - add_init_to_exclusive_scan(items, tile_idx); - - // Store items - if (IS_FULL_TILE) - { - BlockStore(storage.store).Store(output_it + tile_base, items); - } - else - { - BlockStore(storage.store).Store(output_it + tile_base, items, num_remaining); - } - } - - - //--------------------------------------------------------------------- - // Constructor - //--------------------------------------------------------------------- - - // Dequeue and scan tiles of items as part of a dynamic chained scan - // with Init - template - THRUST_DEVICE_FUNCTION - impl(TempStorage & storage_, - ScanTileState & tile_state_, - InputIt input_it, - OutputIt output_it_, - ScanOp scan_op_, - Size num_items, - AddInitToExclusiveScan add_init_to_exclusive_scan) - : storage(storage_), - tile_state(tile_state_), - load_it(core::make_load_iterator(ptx_plan(), input_it)), - output_it(output_it_), - scan_op(scan_op_) - { - int tile_idx = blockIdx.x; - Size tile_base = ITEMS_PER_TILE * tile_idx; - Size num_remaining = num_items - tile_base; - - if (num_remaining > ITEMS_PER_TILE) - { - // Full tile - consume_tile(num_items, - num_remaining, - tile_idx, - tile_base, - add_init_to_exclusive_scan); - } - else if (num_remaining > 0) - { - // Partially-full tile - consume_tile(num_items, - num_remaining, - tile_idx, - tile_base, - add_init_to_exclusive_scan); - } - } - }; // struct impl - - //--------------------------------------------------------------------- - // Agent entry point - //--------------------------------------------------------------------- - - template - THRUST_AGENT_ENTRY(InputIt input_it, - OutputIt output_it, - ScanOp scan_op, - Size num_items, - ScanTileState tile_state, - AddInitToExclusiveScan add_init_to_exclusive_scan, - char * shmem) - { - TempStorage &storage = *reinterpret_cast(shmem); - impl(storage, - tile_state, - input_it, - output_it, - scan_op, - num_items, - add_init_to_exclusive_scan); - } - }; // struct ScanAgent - - template - struct InitAgent - { - template - struct PtxPlan : PtxPolicy<128> {}; - - typedef core::specialize_plan ptx_plan; - - //--------------------------------------------------------------------- - // Agent entry point - //--------------------------------------------------------------------- - - THRUST_AGENT_ENTRY(ScanTileState tile_state, - Size num_tiles, - char * /*shmem*/) - { - tile_state.InitializeStatus(num_tiles); - } - - }; // struct InitAgent - - template - struct DoNothing - { - typedef T type; - template - THRUST_DEVICE_FUNCTION void - operator()(T (&items)[ITEMS_PER_THREAD], int /*tile_idx*/) - { - THRUST_UNUSED_VAR(items); - } - }; // struct DoNothing - - template - struct AddInitToExclusiveScan - { - typedef T type; - T init; - ScanOp scan_op; - - THRUST_RUNTIME_FUNCTION - AddInitToExclusiveScan(T init_, ScanOp scan_op_) - : init(init_), 
scan_op(scan_op_) {} - - template - THRUST_DEVICE_FUNCTION void - operator()(T (&items)[ITEMS_PER_THREAD], int tile_idx) - { - if (tile_idx == 0 && threadIdx.x == 0) - { - items[0] = init; - for (int i = 1; i < ITEMS_PER_THREAD; ++i) - items[i] = scan_op(init, items[i]); - } - else - { - for (int i = 0; i < ITEMS_PER_THREAD; ++i) - items[i] = scan_op(init, items[i]); - } - } - }; // struct AddInitToExclusiveScan - - template - static cudaError_t THRUST_RUNTIME_FUNCTION - doit_step(void * d_temp_storage, - size_t & temp_storage_bytes, - InputIt input_it, - Size num_items, - AddInitToExclusiveScan add_init_to_exclusive_scan, - OutputIt output_it, - ScanOp scan_op, - cudaStream_t stream, - bool debug_sync) - { - using core::AgentPlan; - using core::AgentLauncher; - - cudaError_t status = cudaSuccess; - if (num_items == 0) - return cudaErrorNotSupported; - - typedef typename AddInitToExclusiveScan::type T; - - typedef AgentLauncher< - ScanAgent > - scan_agent; - - typedef typename scan_agent::ScanTileState ScanTileState; - - typedef AgentLauncher > init_agent; - - AgentPlan scan_plan = scan_agent::get_plan(stream); - AgentPlan init_plan = init_agent::get_plan(); - - int tile_size = scan_plan.items_per_tile; - Size num_tiles = static_cast((num_items + tile_size - 1) / tile_size); - - size_t vshmem_size = core::vshmem_size(scan_plan.shared_memory_size, - num_tiles); - - size_t allocation_sizes[2] = {0, vshmem_size}; - status = ScanTileState::AllocationSize(static_cast(num_tiles), allocation_sizes[0]); - CUDA_CUB_RET_IF_FAIL(status); - - void* allocations[2] = {NULL, NULL}; - - status = core::alias_storage(d_temp_storage, - temp_storage_bytes, - allocations, - allocation_sizes); - CUDA_CUB_RET_IF_FAIL(status); - - if (d_temp_storage == NULL) - { - return status; - } - - ScanTileState tile_state; - status = tile_state.Init(static_cast(num_tiles), allocations[0], allocation_sizes[0]); - CUDA_CUB_RET_IF_FAIL(status); - - char *vshmem_ptr = vshmem_size > 0 ? (char*)allocations[1] : NULL; - - init_agent ia(init_plan, num_tiles, stream, "scan::init_agent", debug_sync); - ia.launch(tile_state, num_tiles); - CUDA_CUB_RET_IF_FAIL(cudaPeekAtLastError()); - - scan_agent sa(scan_plan, num_items, stream, vshmem_ptr, "scan::scan_agent", debug_sync); - sa.launch(input_it, - output_it, - scan_op, - num_items, - tile_state, - add_init_to_exclusive_scan); - CUDA_CUB_RET_IF_FAIL(cudaPeekAtLastError()); - return status; - } // func doit_step - - template - THRUST_RUNTIME_FUNCTION - OutputIt scan(execution_policy& policy, - InputIt input_it, - OutputIt output_it, - Size num_items, - ScanOp scan_op, - AddInitToExclusiveScan add_init_to_exclusive_scan) - { - if (num_items == 0) - return output_it; - - size_t storage_size = 0; - cudaStream_t stream = cuda_cub::stream(policy); - bool debug_sync = THRUST_DEBUG_SYNC_FLAG; - - cudaError_t status; - THRUST_INDEX_TYPE_DISPATCH(status, - doit_step, - num_items, - (NULL, - storage_size, - input_it, - num_items_fixed, - add_init_to_exclusive_scan, - output_it, - scan_op, - stream, - debug_sync)); - cuda_cub::throw_on_error(status, "scan failed on 1st step"); - - // Allocate temporary storage. 
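-    // (The first doit_step call above, made with a NULL pointer, only computed the required storage_size; the second call below runs the actual scan kernels using this allocation.)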
- thrust::detail::temporary_array - tmp(policy, storage_size); - void *ptr = static_cast(tmp.data().get()); - - THRUST_INDEX_TYPE_DISPATCH(status, - doit_step, - num_items, - (ptr, - storage_size, - input_it, - num_items_fixed, - add_init_to_exclusive_scan, - output_it, - scan_op, - stream, - debug_sync)); - cuda_cub::throw_on_error(status, "scan failed on 2nd step"); - - status = cuda_cub::synchronize(policy); - cuda_cub::throw_on_error(status, "scan failed to synchronize"); - - return output_it + num_items; - } // func scan - -} // namespace __scan - -//------------------------- -// Thrust API entry points -//------------------------- - -__thrust_exec_check_disable__ -template -OutputIt __host__ __device__ -inclusive_scan_n(execution_policy &policy, - InputIt first, - Size num_items, - OutputIt result, - ScanOp scan_op) -{ - OutputIt ret = result; - if (__THRUST_HAS_CUDART__) - { - typedef typename iterator_traits::value_type T; - ret = __scan::scan(policy, - first, - result, - num_items, - scan_op, - __scan::DoNothing()); - } - else - { -#if !__THRUST_HAS_CUDART__ - ret = thrust::inclusive_scan(cvt_to_seq(derived_cast(policy)), - first, - first + num_items, - result, - scan_op); -#endif - } - return ret; -} - - -template -OutputIt __host__ __device__ -inclusive_scan(execution_policy &policy, - InputIt first, - InputIt last, - OutputIt result, - ScanOp scan_op) -{ - typedef typename thrust::iterator_traits::difference_type diff_t; - diff_t num_items = thrust::distance(first, last); - return cuda_cub::inclusive_scan_n(policy, first, num_items, result, scan_op); -} - - -template -OutputIt __host__ __device__ -inclusive_scan(execution_policy &policy, - InputIt first, - OutputIt last, - OutputIt result) -{ - - typedef typename thrust::detail::eval_if< - thrust::detail::is_output_iterator::value, - thrust::iterator_value, - thrust::iterator_value >::type result_type; - return cuda_cub::inclusive_scan(policy, first, last, result, plus()); -}; - -__thrust_exec_check_disable__ -template -OutputIt __host__ __device__ -exclusive_scan_n(execution_policy &policy, - InputIt first, - Size num_items, - OutputIt result, - T init, - ScanOp scan_op) -{ - OutputIt ret = result; - if (__THRUST_HAS_CUDART__) - { - ret = __scan::scan( - policy, - first, - result, - num_items, - scan_op, - __scan::AddInitToExclusiveScan(init, scan_op)); - } - else - { -#if !__THRUST_HAS_CUDART__ - ret = thrust::exclusive_scan(cvt_to_seq(derived_cast(policy)), - first, - first + num_items, - result, - init, - scan_op); -#endif - } - return ret; -} - -template -OutputIt __host__ __device__ -exclusive_scan(execution_policy &policy, - InputIt first, - InputIt last, - OutputIt result, - T init, - ScanOp scan_op) -{ - typedef typename thrust::iterator_traits::difference_type diff_t; - diff_t num_items = thrust::distance(first, last); - return cuda_cub::exclusive_scan_n(policy, first, num_items, result, init, scan_op); -} - -template -OutputIt __host__ __device__ -exclusive_scan(execution_policy &policy, - InputIt first, - OutputIt last, - OutputIt result, - T init) -{ - return cuda_cub::exclusive_scan(policy, first, last, result, init, plus()); -} - -template -OutputIt __host__ __device__ -exclusive_scan(execution_policy &policy, - InputIt first, - OutputIt last, - OutputIt result) -{ - typedef typename thrust::detail::eval_if< - thrust::detail::is_output_iterator::value, - thrust::iterator_value, - thrust::iterator_value - >::type result_type; - return cuda_cub::exclusive_scan(policy, first, last, result, result_type(0)); -}; - -} 
// namespace cuda_cub -} // end namespace thrust - -#include - -#endif diff --git a/spaces/CVPR/LIVE/thrust/thrust/system/detail/sequential/for_each.h b/spaces/CVPR/LIVE/thrust/thrust/system/detail/sequential/for_each.h deleted file mode 100644 index 6e83d18c127027dfb0d11906db47909b896cf053..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/thrust/thrust/system/detail/sequential/for_each.h +++ /dev/null @@ -1,95 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - - -/*! \file for_each.h - * \brief Sequential implementations of for_each functions. - */ - -#pragma once - -#include -#include -#include - -namespace thrust -{ -namespace system -{ -namespace detail -{ -namespace sequential -{ - - -__thrust_exec_check_disable__ -template -__host__ __device__ -InputIterator for_each(sequential::execution_policy &, - InputIterator first, - InputIterator last, - UnaryFunction f) -{ - // wrap f - thrust::detail::wrapped_function< - UnaryFunction, - void - > wrapped_f(f); - - for(; first != last; ++first) - { - wrapped_f(*first); - } - - return first; -} // end for_each() - - -template -__host__ __device__ -InputIterator for_each_n(sequential::execution_policy &, - InputIterator first, - Size n, - UnaryFunction f) -{ - // wrap f - thrust::detail::wrapped_function< - UnaryFunction, - void - > wrapped_f(f); - - for(Size i = 0; i != n; i++) - { - // we can dereference an OutputIterator if f does not - // try to use the reference for anything besides assignment - wrapped_f(*first); - ++first; - } - - return first; -} // end for_each_n() - - -} // end namespace sequential -} // end namespace detail -} // end namespace system -} // end namespace thrust - diff --git a/spaces/CVPR/MonoScene/monoscene/monoscene.py b/spaces/CVPR/MonoScene/monoscene/monoscene.py deleted file mode 100644 index d8dd444c86ac9b38494e7fc0f685504ae2f25a56..0000000000000000000000000000000000000000 --- a/spaces/CVPR/MonoScene/monoscene/monoscene.py +++ /dev/null @@ -1,125 +0,0 @@ -import pytorch_lightning as pl -import torch -import torch.nn as nn -from monoscene.unet3d_nyu import UNet3D as UNet3DNYU -from monoscene.unet3d_kitti import UNet3D as UNet3DKitti -from monoscene.flosp import FLoSP -import numpy as np -import torch.nn.functional as F -from monoscene.unet2d import UNet2D - - -class MonoScene(pl.LightningModule): - def __init__( - self, - n_classes, - feature, - project_scale, - full_scene_size, - dataset, - project_res=["1", "2", "4", "8"], - n_relations=4, - context_prior=True, - fp_loss=True, - frustum_size=4, - relation_loss=False, - CE_ssc_loss=True, - geo_scal_loss=True, - sem_scal_loss=True, - lr=1e-4, - weight_decay=1e-4, - ): - super().__init__() - - self.project_res = project_res - self.fp_loss = fp_loss - self.dataset = dataset - self.context_prior = context_prior - self.frustum_size = frustum_size - self.relation_loss = relation_loss - self.CE_ssc_loss = CE_ssc_loss - self.sem_scal_loss = sem_scal_loss - self.geo_scal_loss = geo_scal_loss 
- self.project_scale = project_scale - self.lr = lr - self.weight_decay = weight_decay - - self.projects = {} - self.scale_2ds = [1, 2, 4, 8] # 2D scales - for scale_2d in self.scale_2ds: - self.projects[str(scale_2d)] = FLoSP( - full_scene_size, project_scale=self.project_scale, dataset=self.dataset - ) - self.projects = nn.ModuleDict(self.projects) - - self.n_classes = n_classes - if self.dataset == "NYU": - self.net_3d_decoder = UNet3DNYU( - self.n_classes, - nn.BatchNorm3d, - n_relations=n_relations, - feature=feature, - full_scene_size=full_scene_size, - context_prior=context_prior, - ) - elif self.dataset == "kitti": - self.net_3d_decoder = UNet3DKitti( - self.n_classes, - nn.BatchNorm3d, - project_scale=project_scale, - feature=feature, - full_scene_size=full_scene_size, - context_prior=context_prior, - ) - self.net_rgb = UNet2D.build(out_feature=feature, use_decoder=True) - - def forward(self, batch): - - img = batch["img"] - bs = len(img) - - out = {} - - x_rgb = self.net_rgb(img) - - x3ds = [] - for i in range(bs): - x3d = None - for scale_2d in self.project_res: - - # project features at each 2D scale to target 3D scale - scale_2d = int(scale_2d) - projected_pix = batch["projected_pix_{}".format(self.project_scale)][i]#.cuda() - fov_mask = batch["fov_mask_{}".format(self.project_scale)][i]#.cuda() - - # Sum all the 3D features - if x3d is None: - x3d = self.projects[str(scale_2d)]( - x_rgb["1_" + str(scale_2d)][i], - # torch.div(projected_pix, scale_2d, rounding_mode='floor'), - projected_pix // scale_2d, - fov_mask, - ) - else: - x3d += self.projects[str(scale_2d)]( - x_rgb["1_" + str(scale_2d)][i], - # torch.div(projected_pix, scale_2d, rounding_mode='floor'), - projected_pix // scale_2d, - fov_mask, - ) - x3ds.append(x3d) - - input_dict = { - "x3d": torch.stack(x3ds), - } - - out_dict = self.net_3d_decoder(input_dict) - - ssc_pred = out_dict["ssc_logit"] - - y_pred = ssc_pred.detach().cpu().numpy() - y_pred = np.argmax(y_pred, axis=1) - - return y_pred - - diff --git a/spaces/CVPR/WALT/mmdet/models/losses/varifocal_loss.py b/spaces/CVPR/WALT/mmdet/models/losses/varifocal_loss.py deleted file mode 100644 index 7f00bd6916c04fef45a9aeecb50888266420daf9..0000000000000000000000000000000000000000 --- a/spaces/CVPR/WALT/mmdet/models/losses/varifocal_loss.py +++ /dev/null @@ -1,133 +0,0 @@ -import mmcv -import torch.nn as nn -import torch.nn.functional as F - -from ..builder import LOSSES -from .utils import weight_reduce_loss - - -@mmcv.jit(derivate=True, coderize=True) -def varifocal_loss(pred, - target, - weight=None, - alpha=0.75, - gamma=2.0, - iou_weighted=True, - reduction='mean', - avg_factor=None): - """`Varifocal Loss `_ - - Args: - pred (torch.Tensor): The prediction with shape (N, C), C is the - number of classes - target (torch.Tensor): The learning target of the iou-aware - classification score with shape (N, C), C is the number of classes. - weight (torch.Tensor, optional): The weight of loss for each - prediction. Defaults to None. - alpha (float, optional): A balance factor for the negative part of - Varifocal Loss, which is different from the alpha of Focal Loss. - Defaults to 0.75. - gamma (float, optional): The gamma for calculating the modulating - factor. Defaults to 2.0. - iou_weighted (bool, optional): Whether to weight the loss of the - positive example with the iou target. Defaults to True. - reduction (str, optional): The method used to reduce the loss into - a scalar. Defaults to 'mean'. Options are "none", "mean" and - "sum". 
- avg_factor (int, optional): Average factor that is used to average - the loss. Defaults to None. - """ - # pred and target should be of the same size - assert pred.size() == target.size() - pred_sigmoid = pred.sigmoid() - target = target.type_as(pred) - if iou_weighted: - focal_weight = target * (target > 0.0).float() + \ - alpha * (pred_sigmoid - target).abs().pow(gamma) * \ - (target <= 0.0).float() - else: - focal_weight = (target > 0.0).float() + \ - alpha * (pred_sigmoid - target).abs().pow(gamma) * \ - (target <= 0.0).float() - loss = F.binary_cross_entropy_with_logits( - pred, target, reduction='none') * focal_weight - loss = weight_reduce_loss(loss, weight, reduction, avg_factor) - return loss - - -@LOSSES.register_module() -class VarifocalLoss(nn.Module): - - def __init__(self, - use_sigmoid=True, - alpha=0.75, - gamma=2.0, - iou_weighted=True, - reduction='mean', - loss_weight=1.0): - """`Varifocal Loss `_ - - Args: - use_sigmoid (bool, optional): Whether the prediction is - used for sigmoid or softmax. Defaults to True. - alpha (float, optional): A balance factor for the negative part of - Varifocal Loss, which is different from the alpha of Focal - Loss. Defaults to 0.75. - gamma (float, optional): The gamma for calculating the modulating - factor. Defaults to 2.0. - iou_weighted (bool, optional): Whether to weight the loss of the - positive examples with the iou target. Defaults to True. - reduction (str, optional): The method used to reduce the loss into - a scalar. Defaults to 'mean'. Options are "none", "mean" and - "sum". - loss_weight (float, optional): Weight of loss. Defaults to 1.0. - """ - super(VarifocalLoss, self).__init__() - assert use_sigmoid is True, \ - 'Only sigmoid varifocal loss supported now.' - assert alpha >= 0.0 - self.use_sigmoid = use_sigmoid - self.alpha = alpha - self.gamma = gamma - self.iou_weighted = iou_weighted - self.reduction = reduction - self.loss_weight = loss_weight - - def forward(self, - pred, - target, - weight=None, - avg_factor=None, - reduction_override=None): - """Forward function. - - Args: - pred (torch.Tensor): The prediction. - target (torch.Tensor): The learning target of the prediction. - weight (torch.Tensor, optional): The weight of loss for each - prediction. Defaults to None. - avg_factor (int, optional): Average factor that is used to average - the loss. Defaults to None. - reduction_override (str, optional): The reduction method used to - override the original reduction method of the loss. - Options are "none", "mean" and "sum". - - Returns: - torch.Tensor: The calculated loss - """ - assert reduction_override in (None, 'none', 'mean', 'sum') - reduction = ( - reduction_override if reduction_override else self.reduction) - if self.use_sigmoid: - loss_cls = self.loss_weight * varifocal_loss( - pred, - target, - weight, - alpha=self.alpha, - gamma=self.gamma, - iou_weighted=self.iou_weighted, - reduction=reduction, - avg_factor=avg_factor) - else: - raise NotImplementedError - return loss_cls diff --git a/spaces/CVPR/regionclip-demo/detectron2/modeling/proposal_generator/rpn.py b/spaces/CVPR/regionclip-demo/detectron2/modeling/proposal_generator/rpn.py deleted file mode 100644 index 99cd536d2f9880d2049390c45f73eb22335e1b82..0000000000000000000000000000000000000000 --- a/spaces/CVPR/regionclip-demo/detectron2/modeling/proposal_generator/rpn.py +++ /dev/null @@ -1,533 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
-from typing import Dict, List, Optional, Tuple, Union -import torch -import torch.nn.functional as F -from torch import nn - -from detectron2.config import configurable -from detectron2.layers import Conv2d, ShapeSpec, cat -from detectron2.structures import Boxes, ImageList, Instances, pairwise_iou -from detectron2.utils.events import get_event_storage -from detectron2.utils.memory import retry_if_cuda_oom -from detectron2.utils.registry import Registry - -from ..anchor_generator import build_anchor_generator -from ..box_regression import Box2BoxTransform, _dense_box_regression_loss -from ..matcher import Matcher -from ..sampling import subsample_labels -from .build import PROPOSAL_GENERATOR_REGISTRY -from .proposal_utils import find_top_rpn_proposals - -RPN_HEAD_REGISTRY = Registry("RPN_HEAD") -RPN_HEAD_REGISTRY.__doc__ = """ -Registry for RPN heads, which take feature maps and perform -objectness classification and bounding box regression for anchors. - -The registered object will be called with `obj(cfg, input_shape)`. -The call should return a `nn.Module` object. -""" - - -""" -Shape shorthand in this module: - - N: number of images in the minibatch - L: number of feature maps per image on which RPN is run - A: number of cell anchors (must be the same for all feature maps) - Hi, Wi: height and width of the i-th feature map - B: size of the box parameterization - -Naming convention: - - objectness: refers to the binary classification of an anchor as object vs. not object. - - deltas: refers to the 4-d (dx, dy, dw, dh) deltas that parameterize the box2box - transform (see :class:`box_regression.Box2BoxTransform`), or 5d for rotated boxes. - - pred_objectness_logits: predicted objectness scores in [-inf, +inf]; use - sigmoid(pred_objectness_logits) to estimate P(object). - - gt_labels: ground-truth binary classification labels for objectness - - pred_anchor_deltas: predicted box2box transform deltas - - gt_anchor_deltas: ground-truth box2box transform deltas -""" - - -def build_rpn_head(cfg, input_shape): - """ - Build an RPN head defined by `cfg.MODEL.RPN.HEAD_NAME`. - """ - name = cfg.MODEL.RPN.HEAD_NAME - return RPN_HEAD_REGISTRY.get(name)(cfg, input_shape) - - -@RPN_HEAD_REGISTRY.register() -class StandardRPNHead(nn.Module): - """ - Standard RPN classification and regression heads described in :paper:`Faster R-CNN`. - Uses a 3x3 conv to produce a shared hidden state from which one 1x1 conv predicts - objectness logits for each anchor and a second 1x1 conv predicts bounding-box deltas - specifying how to deform each anchor into an object proposal. - """ - - @configurable - def __init__( - self, *, in_channels: int, num_anchors: int, box_dim: int = 4, conv_dims: List[int] = (-1,) - ): - """ - NOTE: this interface is experimental. - - Args: - in_channels (int): number of input feature channels. When using multiple - input features, they must have the same number of channels. - num_anchors (int): number of anchors to predict for *each spatial position* - on the feature map. The total number of anchors for each - feature map will be `num_anchors * H * W`. - box_dim (int): dimension of a box, which is also the number of box regression - predictions to make for each anchor. An axis aligned box has - box_dim=4, while a rotated box has box_dim=5. - conv_dims (list[int]): a list of integers representing the output channels - of N conv layers. Set it to -1 to use the same number of output channels - as input channels. 
- """ - super().__init__() - cur_channels = in_channels - # Keeping the old variable names and structure for backwards compatiblity. - # Otherwise the old checkpoints will fail to load. - if len(conv_dims) == 1: - out_channels = cur_channels if conv_dims[0] == -1 else conv_dims[0] - # 3x3 conv for the hidden representation - self.conv = self._get_rpn_conv(cur_channels, out_channels) - cur_channels = out_channels - else: - self.conv = nn.Sequential() - for k, conv_dim in enumerate(conv_dims): - out_channels = cur_channels if conv_dim == -1 else conv_dim - if out_channels <= 0: - raise ValueError( - f"Conv output channels should be greater than 0. Got {out_channels}" - ) - conv = self._get_rpn_conv(cur_channels, out_channels) - self.conv.add_module(f"conv{k}", conv) - cur_channels = out_channels - # 1x1 conv for predicting objectness logits - self.objectness_logits = nn.Conv2d(cur_channels, num_anchors, kernel_size=1, stride=1) - # 1x1 conv for predicting box2box transform deltas - self.anchor_deltas = nn.Conv2d(cur_channels, num_anchors * box_dim, kernel_size=1, stride=1) - - # Keeping the order of weights initialization same for backwards compatiblility. - for layer in self.modules(): - if isinstance(layer, nn.Conv2d): - nn.init.normal_(layer.weight, std=0.01) - nn.init.constant_(layer.bias, 0) - - def _get_rpn_conv(self, in_channels, out_channels): - return Conv2d( - in_channels, - out_channels, - kernel_size=3, - stride=1, - padding=1, - activation=nn.ReLU(), - ) - - @classmethod - def from_config(cls, cfg, input_shape): - # Standard RPN is shared across levels: - in_channels = [s.channels for s in input_shape] - assert len(set(in_channels)) == 1, "Each level must have the same channel!" - in_channels = in_channels[0] - - # RPNHead should take the same input as anchor generator - # NOTE: it assumes that creating an anchor generator does not have unwanted side effect. - anchor_generator = build_anchor_generator(cfg, input_shape) - num_anchors = anchor_generator.num_anchors - box_dim = anchor_generator.box_dim - assert ( - len(set(num_anchors)) == 1 - ), "Each level must have the same number of anchors per spatial position" - return { - "in_channels": in_channels, - "num_anchors": num_anchors[0], - "box_dim": box_dim, - "conv_dims": cfg.MODEL.RPN.CONV_DIMS, - } - - def forward(self, features: List[torch.Tensor]): - """ - Args: - features (list[Tensor]): list of feature maps - - Returns: - list[Tensor]: A list of L elements. - Element i is a tensor of shape (N, A, Hi, Wi) representing - the predicted objectness logits for all anchors. A is the number of cell anchors. - list[Tensor]: A list of L elements. Element i is a tensor of shape - (N, A*box_dim, Hi, Wi) representing the predicted "deltas" used to transform anchors - to proposals. - """ - pred_objectness_logits = [] - pred_anchor_deltas = [] - for x in features: - t = self.conv(x) - pred_objectness_logits.append(self.objectness_logits(t)) - pred_anchor_deltas.append(self.anchor_deltas(t)) - return pred_objectness_logits, pred_anchor_deltas - - -@PROPOSAL_GENERATOR_REGISTRY.register() -class RPN(nn.Module): - """ - Region Proposal Network, introduced by :paper:`Faster R-CNN`. 
- """ - - @configurable - def __init__( - self, - *, - in_features: List[str], - head: nn.Module, - anchor_generator: nn.Module, - anchor_matcher: Matcher, - box2box_transform: Box2BoxTransform, - batch_size_per_image: int, - positive_fraction: float, - pre_nms_topk: Tuple[float, float], - post_nms_topk: Tuple[float, float], - nms_thresh: float = 0.7, - min_box_size: float = 0.0, - anchor_boundary_thresh: float = -1.0, - loss_weight: Union[float, Dict[str, float]] = 1.0, - box_reg_loss_type: str = "smooth_l1", - smooth_l1_beta: float = 0.0, - ): - """ - NOTE: this interface is experimental. - - Args: - in_features (list[str]): list of names of input features to use - head (nn.Module): a module that predicts logits and regression deltas - for each level from a list of per-level features - anchor_generator (nn.Module): a module that creates anchors from a - list of features. Usually an instance of :class:`AnchorGenerator` - anchor_matcher (Matcher): label the anchors by matching them with ground truth. - box2box_transform (Box2BoxTransform): defines the transform from anchors boxes to - instance boxes - batch_size_per_image (int): number of anchors per image to sample for training - positive_fraction (float): fraction of foreground anchors to sample for training - pre_nms_topk (tuple[float]): (train, test) that represents the - number of top k proposals to select before NMS, in - training and testing. - post_nms_topk (tuple[float]): (train, test) that represents the - number of top k proposals to select after NMS, in - training and testing. - nms_thresh (float): NMS threshold used to de-duplicate the predicted proposals - min_box_size (float): remove proposal boxes with any side smaller than this threshold, - in the unit of input image pixels - anchor_boundary_thresh (float): legacy option - loss_weight (float|dict): weights to use for losses. Can be single float for weighting - all rpn losses together, or a dict of individual weightings. Valid dict keys are: - "loss_rpn_cls" - applied to classification loss - "loss_rpn_loc" - applied to box regression loss - box_reg_loss_type (str): Loss type to use. Supported losses: "smooth_l1", "giou". - smooth_l1_beta (float): beta parameter for the smooth L1 regression loss. Default to - use L1 loss. 
Only used when `box_reg_loss_type` is "smooth_l1" - """ - super().__init__() - self.in_features = in_features - self.rpn_head = head - self.anchor_generator = anchor_generator - self.anchor_matcher = anchor_matcher - self.box2box_transform = box2box_transform - self.batch_size_per_image = batch_size_per_image - self.positive_fraction = positive_fraction - # Map from self.training state to train/test settings - self.pre_nms_topk = {True: pre_nms_topk[0], False: pre_nms_topk[1]} - self.post_nms_topk = {True: post_nms_topk[0], False: post_nms_topk[1]} - self.nms_thresh = nms_thresh - self.min_box_size = float(min_box_size) - self.anchor_boundary_thresh = anchor_boundary_thresh - if isinstance(loss_weight, float): - loss_weight = {"loss_rpn_cls": loss_weight, "loss_rpn_loc": loss_weight} - self.loss_weight = loss_weight - self.box_reg_loss_type = box_reg_loss_type - self.smooth_l1_beta = smooth_l1_beta - - @classmethod - def from_config(cls, cfg, input_shape: Dict[str, ShapeSpec]): - in_features = cfg.MODEL.RPN.IN_FEATURES - ret = { - "in_features": in_features, - "min_box_size": cfg.MODEL.PROPOSAL_GENERATOR.MIN_SIZE, - "nms_thresh": cfg.MODEL.RPN.NMS_THRESH, - "batch_size_per_image": cfg.MODEL.RPN.BATCH_SIZE_PER_IMAGE, - "positive_fraction": cfg.MODEL.RPN.POSITIVE_FRACTION, - "loss_weight": { - "loss_rpn_cls": cfg.MODEL.RPN.LOSS_WEIGHT, - "loss_rpn_loc": cfg.MODEL.RPN.BBOX_REG_LOSS_WEIGHT * cfg.MODEL.RPN.LOSS_WEIGHT, - }, - "anchor_boundary_thresh": cfg.MODEL.RPN.BOUNDARY_THRESH, - "box2box_transform": Box2BoxTransform(weights=cfg.MODEL.RPN.BBOX_REG_WEIGHTS), - "box_reg_loss_type": cfg.MODEL.RPN.BBOX_REG_LOSS_TYPE, - "smooth_l1_beta": cfg.MODEL.RPN.SMOOTH_L1_BETA, - } - - ret["pre_nms_topk"] = (cfg.MODEL.RPN.PRE_NMS_TOPK_TRAIN, cfg.MODEL.RPN.PRE_NMS_TOPK_TEST) - ret["post_nms_topk"] = (cfg.MODEL.RPN.POST_NMS_TOPK_TRAIN, cfg.MODEL.RPN.POST_NMS_TOPK_TEST) - - ret["anchor_generator"] = build_anchor_generator(cfg, [input_shape[f] for f in in_features]) - ret["anchor_matcher"] = Matcher( - cfg.MODEL.RPN.IOU_THRESHOLDS, cfg.MODEL.RPN.IOU_LABELS, allow_low_quality_matches=True - ) - ret["head"] = build_rpn_head(cfg, [input_shape[f] for f in in_features]) - return ret - - def _subsample_labels(self, label): - """ - Randomly sample a subset of positive and negative examples, and overwrite - the label vector to the ignore value (-1) for all elements that are not - included in the sample. - - Args: - labels (Tensor): a vector of -1, 0, 1. Will be modified in-place and returned. - """ - pos_idx, neg_idx = subsample_labels( - label, self.batch_size_per_image, self.positive_fraction, 0 - ) - # Fill with the ignore label (-1), then set positive and negative labels - label.fill_(-1) - label.scatter_(0, pos_idx, 1) - label.scatter_(0, neg_idx, 0) - return label - - @torch.jit.unused - @torch.no_grad() - def label_and_sample_anchors( - self, anchors: List[Boxes], gt_instances: List[Instances] - ) -> Tuple[List[torch.Tensor], List[torch.Tensor]]: - """ - Args: - anchors (list[Boxes]): anchors for each feature map. - gt_instances: the ground-truth instances for each image. - - Returns: - list[Tensor]: - List of #img tensors. i-th element is a vector of labels whose length is - the total number of anchors across all feature maps R = sum(Hi * Wi * A). - Label values are in {-1, 0, 1}, with meanings: -1 = ignore; 0 = negative - class; 1 = positive class. - list[Tensor]: - i-th element is a Rx4 tensor. The values are the matched gt boxes for each - anchor. Values are undefined for those anchors not labeled as 1. 
- """ - anchors = Boxes.cat(anchors) - - gt_boxes = [x.gt_boxes for x in gt_instances] - image_sizes = [x.image_size for x in gt_instances] - del gt_instances - - gt_labels = [] - matched_gt_boxes = [] - for image_size_i, gt_boxes_i in zip(image_sizes, gt_boxes): - """ - image_size_i: (h, w) for the i-th image - gt_boxes_i: ground-truth boxes for i-th image - """ - - match_quality_matrix = retry_if_cuda_oom(pairwise_iou)(gt_boxes_i, anchors) - matched_idxs, gt_labels_i = retry_if_cuda_oom(self.anchor_matcher)(match_quality_matrix) - # Matching is memory-expensive and may result in CPU tensors. But the result is small - gt_labels_i = gt_labels_i.to(device=gt_boxes_i.device) - del match_quality_matrix - - if self.anchor_boundary_thresh >= 0: - # Discard anchors that go out of the boundaries of the image - # NOTE: This is legacy functionality that is turned off by default in Detectron2 - anchors_inside_image = anchors.inside_box(image_size_i, self.anchor_boundary_thresh) - gt_labels_i[~anchors_inside_image] = -1 - - # A vector of labels (-1, 0, 1) for each anchor - gt_labels_i = self._subsample_labels(gt_labels_i) - - if len(gt_boxes_i) == 0: - # These values won't be used anyway since the anchor is labeled as background - matched_gt_boxes_i = torch.zeros_like(anchors.tensor) - else: - # TODO wasted indexing computation for ignored boxes - matched_gt_boxes_i = gt_boxes_i[matched_idxs].tensor - - gt_labels.append(gt_labels_i) # N,AHW - matched_gt_boxes.append(matched_gt_boxes_i) - return gt_labels, matched_gt_boxes - - @torch.jit.unused - def losses( - self, - anchors: List[Boxes], - pred_objectness_logits: List[torch.Tensor], - gt_labels: List[torch.Tensor], - pred_anchor_deltas: List[torch.Tensor], - gt_boxes: List[torch.Tensor], - ) -> Dict[str, torch.Tensor]: - """ - Return the losses from a set of RPN predictions and their associated ground-truth. - - Args: - anchors (list[Boxes or RotatedBoxes]): anchors for each feature map, each - has shape (Hi*Wi*A, B), where B is box dimension (4 or 5). - pred_objectness_logits (list[Tensor]): A list of L elements. - Element i is a tensor of shape (N, Hi*Wi*A) representing - the predicted objectness logits for all anchors. - gt_labels (list[Tensor]): Output of :meth:`label_and_sample_anchors`. - pred_anchor_deltas (list[Tensor]): A list of L elements. Element i is a tensor of shape - (N, Hi*Wi*A, 4 or 5) representing the predicted "deltas" used to transform anchors - to proposals. - gt_boxes (list[Tensor]): Output of :meth:`label_and_sample_anchors`. - - Returns: - dict[loss name -> loss value]: A dict mapping from loss name to loss value. - Loss names are: `loss_rpn_cls` for objectness classification and - `loss_rpn_loc` for proposal localization. 
- """ - num_images = len(gt_labels) - gt_labels = torch.stack(gt_labels) # (N, sum(Hi*Wi*Ai)) - - # Log the number of positive/negative anchors per-image that's used in training - pos_mask = gt_labels == 1 - num_pos_anchors = pos_mask.sum().item() - num_neg_anchors = (gt_labels == 0).sum().item() - storage = get_event_storage() - storage.put_scalar("rpn/num_pos_anchors", num_pos_anchors / num_images) - storage.put_scalar("rpn/num_neg_anchors", num_neg_anchors / num_images) - - localization_loss = _dense_box_regression_loss( - anchors, - self.box2box_transform, - pred_anchor_deltas, - gt_boxes, - pos_mask, - box_reg_loss_type=self.box_reg_loss_type, - smooth_l1_beta=self.smooth_l1_beta, - ) - - valid_mask = gt_labels >= 0 - objectness_loss = F.binary_cross_entropy_with_logits( - cat(pred_objectness_logits, dim=1)[valid_mask], - gt_labels[valid_mask].to(torch.float32), - reduction="sum", - ) - normalizer = self.batch_size_per_image * num_images - losses = { - "loss_rpn_cls": objectness_loss / normalizer, - # The original Faster R-CNN paper uses a slightly different normalizer - # for loc loss. But it doesn't matter in practice - "loss_rpn_loc": localization_loss / normalizer, - } - losses = {k: v * self.loss_weight.get(k, 1.0) for k, v in losses.items()} - return losses - - def forward( - self, - images: ImageList, - features: Dict[str, torch.Tensor], - gt_instances: Optional[List[Instances]] = None, - ): - """ - Args: - images (ImageList): input images of length `N` - features (dict[str, Tensor]): input data as a mapping from feature - map name to tensor. Axis 0 represents the number of images `N` in - the input data; axes 1-3 are channels, height, and width, which may - vary between feature maps (e.g., if a feature pyramid is used). - gt_instances (list[Instances], optional): a length `N` list of `Instances`s. - Each `Instances` stores ground-truth instances for the corresponding image. - - Returns: - proposals: list[Instances]: contains fields "proposal_boxes", "objectness_logits" - loss: dict[Tensor] or None - """ - features = [features[f] for f in self.in_features] - anchors = self.anchor_generator(features) - - pred_objectness_logits, pred_anchor_deltas = self.rpn_head(features) - # Transpose the Hi*Wi*A dimension to the middle: - pred_objectness_logits = [ - # (N, A, Hi, Wi) -> (N, Hi, Wi, A) -> (N, Hi*Wi*A) - score.permute(0, 2, 3, 1).flatten(1) - for score in pred_objectness_logits - ] - pred_anchor_deltas = [ - # (N, A*B, Hi, Wi) -> (N, A, B, Hi, Wi) -> (N, Hi, Wi, A, B) -> (N, Hi*Wi*A, B) - x.view(x.shape[0], -1, self.anchor_generator.box_dim, x.shape[-2], x.shape[-1]) - .permute(0, 3, 4, 1, 2) - .flatten(1, -2) - for x in pred_anchor_deltas - ] - - if self.training: - assert gt_instances is not None, "RPN requires gt_instances in training!" - gt_labels, gt_boxes = self.label_and_sample_anchors(anchors, gt_instances) - losses = self.losses( - anchors, pred_objectness_logits, gt_labels, pred_anchor_deltas, gt_boxes - ) - else: - losses = {} - proposals = self.predict_proposals( - anchors, pred_objectness_logits, pred_anchor_deltas, images.image_sizes - ) - return proposals, losses - - def predict_proposals( - self, - anchors: List[Boxes], - pred_objectness_logits: List[torch.Tensor], - pred_anchor_deltas: List[torch.Tensor], - image_sizes: List[Tuple[int, int]], - ): - """ - Decode all the predicted box regression deltas to proposals. Find the top proposals - by applying NMS and removing boxes that are too small. - - Returns: - proposals (list[Instances]): list of N Instances. 
The i-th Instances - stores post_nms_topk object proposals for image i, sorted by their - objectness score in descending order. - """ - # The proposals are treated as fixed for joint training with roi heads. - # This approach ignores the derivative w.r.t. the proposal boxes’ coordinates that - # are also network responses. - with torch.no_grad(): - pred_proposals = self._decode_proposals(anchors, pred_anchor_deltas) - return find_top_rpn_proposals( - pred_proposals, - pred_objectness_logits, - image_sizes, - self.nms_thresh, - self.pre_nms_topk[self.training], - self.post_nms_topk[self.training], - self.min_box_size, - self.training, - ) - - def _decode_proposals(self, anchors: List[Boxes], pred_anchor_deltas: List[torch.Tensor]): - """ - Transform anchors into proposals by applying the predicted anchor deltas. - - Returns: - proposals (list[Tensor]): A list of L tensors. Tensor i has shape - (N, Hi*Wi*A, B) - """ - N = pred_anchor_deltas[0].shape[0] - proposals = [] - # For each feature map - for anchors_i, pred_anchor_deltas_i in zip(anchors, pred_anchor_deltas): - B = anchors_i.tensor.size(1) - pred_anchor_deltas_i = pred_anchor_deltas_i.reshape(-1, B) - # Expand anchors to shape (N*Hi*Wi*A, B) - anchors_i = anchors_i.tensor.unsqueeze(0).expand(N, -1, -1).reshape(-1, B) - proposals_i = self.box2box_transform.apply_deltas(pred_anchor_deltas_i, anchors_i) - # Append feature map proposals with shape (N, Hi*Wi*A, B) - proposals.append(proposals_i.view(N, -1, B)) - return proposals diff --git a/spaces/Cletrason/Cletrason-toad-mario-movie/hf_utils.py b/spaces/Cletrason/Cletrason-toad-mario-movie/hf_utils.py deleted file mode 100644 index d9c62f941cf1de126580427b596a8cc04f46fd39..0000000000000000000000000000000000000000 --- a/spaces/Cletrason/Cletrason-toad-mario-movie/hf_utils.py +++ /dev/null @@ -1,39 +0,0 @@ -from bs4 import BeautifulSoup -import requests - - -def model_url_list(): - url_list = [] - for i in range(0, 5): - url_list.append( - f"https://huggingface.co/models?p={i}&sort=downloads&search=dreambooth") - return url_list - - -def data_scraping(url_list): - model_list = [] - for url in url_list: - response = requests.get(url) - soup = BeautifulSoup(response.text, "html.parser") - div_class = 'grid grid-cols-1 gap-5 2xl:grid-cols-2' - div = soup.find('div', {'class': div_class}) - for a in div.find_all('a', href=True): - model_list.append(a['href']) - return model_list - - -def get_model_list(): - model_list = data_scraping(model_url_list()) - for i in range(len(model_list)): - model_list[i] = model_list[i][1:] - - best_model_list = [ - "dreamlike-art/dreamlike-photoreal-2.0", - "dreamlike-art/dreamlike-diffusion-1.0", - "runwayml/stable-diffusion-v1-5", - "CompVis/stable-diffusion-v1-4", - "prompthero/openjourney", - ] - - model_list = best_model_list + model_list - return model_list diff --git a/spaces/CloseEric/CloseEric/Dockerfile b/spaces/CloseEric/CloseEric/Dockerfile deleted file mode 100644 index 4cb0ce42128d9a2ad33a395883f5e5455a38c707..0000000000000000000000000000000000000000 --- a/spaces/CloseEric/CloseEric/Dockerfile +++ /dev/null @@ -1,11 +0,0 @@ -FROM node:18-bullseye-slim -RUN apt-get update && \ - apt-get install -y git -RUN git clone https://gitgud.io/khanon/oai-reverse-proxy.git /app -WORKDIR /app -RUN npm install -COPY Dockerfile greeting.md* .env* ./ -RUN npm run build -EXPOSE 7860 -ENV NODE_ENV=production -CMD [ "npm", "start" ] \ No newline at end of file diff --git a/spaces/CofAI/tv/public/mpegts.js b/spaces/CofAI/tv/public/mpegts.js deleted file mode 100644 
index ef0849ddc4e4bee12ba0db29f1bc4f1f500326bc..0000000000000000000000000000000000000000 --- a/spaces/CofAI/tv/public/mpegts.js +++ /dev/null @@ -1,8 +0,0 @@ -!function(e,t){"object"==typeof exports&&"object"==typeof module?module.exports=t():"function"==typeof define&&define.amd?define([],t):"object"==typeof exports?exports.mpegts=t():e.mpegts=t()}(window,(function(){return function(e){var t={};function i(n){if(t[n])return t[n].exports;var r=t[n]={i:n,l:!1,exports:{}};return e[n].call(r.exports,r,r.exports,i),r.l=!0,r.exports}return i.m=e,i.c=t,i.d=function(e,t,n){i.o(e,t)||Object.defineProperty(e,t,{enumerable:!0,get:n})},i.r=function(e){"undefined"!=typeof Symbol&&Symbol.toStringTag&&Object.defineProperty(e,Symbol.toStringTag,{value:"Module"}),Object.defineProperty(e,"__esModule",{value:!0})},i.t=function(e,t){if(1&t&&(e=i(e)),8&t)return e;if(4&t&&"object"==typeof e&&e&&e.__esModule)return e;var n=Object.create(null);if(i.r(n),Object.defineProperty(n,"default",{enumerable:!0,value:e}),2&t&&"string"!=typeof e)for(var r in e)i.d(n,r,function(t){return e[t]}.bind(null,r));return n},i.n=function(e){var t=e&&e.__esModule?function(){return e.default}:function(){return e};return i.d(t,"a",t),t},i.o=function(e,t){return Object.prototype.hasOwnProperty.call(e,t)},i.p="",i(i.s=14)}([function(e,t,i){"use strict";var n=i(6),r=i.n(n),s=function(){function e(){}return e.e=function(t,i){t&&!e.FORCE_GLOBAL_TAG||(t=e.GLOBAL_TAG);var n="["+t+"] > "+i;e.ENABLE_CALLBACK&&e.emitter.emit("log","error",n),e.ENABLE_ERROR&&(console.error?console.error(n):console.warn?console.warn(n):console.log(n))},e.i=function(t,i){t&&!e.FORCE_GLOBAL_TAG||(t=e.GLOBAL_TAG);var n="["+t+"] > "+i;e.ENABLE_CALLBACK&&e.emitter.emit("log","info",n),e.ENABLE_INFO&&(console.info?console.info(n):console.log(n))},e.w=function(t,i){t&&!e.FORCE_GLOBAL_TAG||(t=e.GLOBAL_TAG);var n="["+t+"] > "+i;e.ENABLE_CALLBACK&&e.emitter.emit("log","warn",n),e.ENABLE_WARN&&(console.warn?console.warn(n):console.log(n))},e.d=function(t,i){t&&!e.FORCE_GLOBAL_TAG||(t=e.GLOBAL_TAG);var n="["+t+"] > "+i;e.ENABLE_CALLBACK&&e.emitter.emit("log","debug",n),e.ENABLE_DEBUG&&(console.debug?console.debug(n):console.log(n))},e.v=function(t,i){t&&!e.FORCE_GLOBAL_TAG||(t=e.GLOBAL_TAG);var n="["+t+"] > "+i;e.ENABLE_CALLBACK&&e.emitter.emit("log","verbose",n),e.ENABLE_VERBOSE&&console.log(n)},e}();s.GLOBAL_TAG="mpegts.js",s.FORCE_GLOBAL_TAG=!1,s.ENABLE_ERROR=!0,s.ENABLE_INFO=!0,s.ENABLE_WARN=!0,s.ENABLE_DEBUG=!0,s.ENABLE_VERBOSE=!0,s.ENABLE_CALLBACK=!1,s.emitter=new r.a,t.a=s},function(e,t,i){"use strict";t.a={IO_ERROR:"io_error",DEMUX_ERROR:"demux_error",INIT_SEGMENT:"init_segment",MEDIA_SEGMENT:"media_segment",LOADING_COMPLETE:"loading_complete",RECOVERED_EARLY_EOF:"recovered_early_eof",MEDIA_INFO:"media_info",METADATA_ARRIVED:"metadata_arrived",SCRIPTDATA_ARRIVED:"scriptdata_arrived",TIMED_ID3_METADATA_ARRIVED:"timed_id3_metadata_arrived",PES_PRIVATE_DATA_DESCRIPTOR:"pes_private_data_descriptor",PES_PRIVATE_DATA_ARRIVED:"pes_private_data_arrived",STATISTICS_INFO:"statistics_info",RECOMMEND_SEEKPOINT:"recommend_seekpoint"}},function(e,t,i){"use strict";i.d(t,"c",(function(){return r})),i.d(t,"b",(function(){return s})),i.d(t,"a",(function(){return a}));var n=i(3),r={kIdle:0,kConnecting:1,kBuffering:2,kError:3,kComplete:4},s={OK:"OK",EXCEPTION:"Exception",HTTP_STATUS_CODE_INVALID:"HttpStatusCodeInvalid",CONNECTING_TIMEOUT:"ConnectingTimeout",EARLY_EOF:"EarlyEof",UNRECOVERABLE_EARLY_EOF:"UnrecoverableEarlyEof"},a=function(){function 
e(e){this._type=e||"undefined",this._status=r.kIdle,this._needStash=!1,this._onContentLengthKnown=null,this._onURLRedirect=null,this._onDataArrival=null,this._onError=null,this._onComplete=null}return e.prototype.destroy=function(){this._status=r.kIdle,this._onContentLengthKnown=null,this._onURLRedirect=null,this._onDataArrival=null,this._onError=null,this._onComplete=null},e.prototype.isWorking=function(){return this._status===r.kConnecting||this._status===r.kBuffering},Object.defineProperty(e.prototype,"type",{get:function(){return this._type},enumerable:!1,configurable:!0}),Object.defineProperty(e.prototype,"status",{get:function(){return this._status},enumerable:!1,configurable:!0}),Object.defineProperty(e.prototype,"needStashBuffer",{get:function(){return this._needStash},enumerable:!1,configurable:!0}),Object.defineProperty(e.prototype,"onContentLengthKnown",{get:function(){return this._onContentLengthKnown},set:function(e){this._onContentLengthKnown=e},enumerable:!1,configurable:!0}),Object.defineProperty(e.prototype,"onURLRedirect",{get:function(){return this._onURLRedirect},set:function(e){this._onURLRedirect=e},enumerable:!1,configurable:!0}),Object.defineProperty(e.prototype,"onDataArrival",{get:function(){return this._onDataArrival},set:function(e){this._onDataArrival=e},enumerable:!1,configurable:!0}),Object.defineProperty(e.prototype,"onError",{get:function(){return this._onError},set:function(e){this._onError=e},enumerable:!1,configurable:!0}),Object.defineProperty(e.prototype,"onComplete",{get:function(){return this._onComplete},set:function(e){this._onComplete=e},enumerable:!1,configurable:!0}),e.prototype.open=function(e,t){throw new n.c("Unimplemented abstract function!")},e.prototype.abort=function(){throw new n.c("Unimplemented abstract function!")},e}()},function(e,t,i){"use strict";i.d(t,"d",(function(){return s})),i.d(t,"a",(function(){return a})),i.d(t,"b",(function(){return o})),i.d(t,"c",(function(){return h}));var n,r=(n=function(e,t){return(n=Object.setPrototypeOf||{__proto__:[]}instanceof Array&&function(e,t){e.__proto__=t}||function(e,t){for(var i in t)t.hasOwnProperty(i)&&(e[i]=t[i])})(e,t)},function(e,t){function i(){this.constructor=e}n(e,t),e.prototype=null===t?Object.create(t):(i.prototype=t.prototype,new i)}),s=function(){function e(e){this._message=e}return Object.defineProperty(e.prototype,"name",{get:function(){return"RuntimeException"},enumerable:!1,configurable:!0}),Object.defineProperty(e.prototype,"message",{get:function(){return this._message},enumerable:!1,configurable:!0}),e.prototype.toString=function(){return this.name+": "+this.message},e}(),a=function(e){function t(t){return e.call(this,t)||this}return r(t,e),Object.defineProperty(t.prototype,"name",{get:function(){return"IllegalStateException"},enumerable:!1,configurable:!0}),t}(s),o=function(e){function t(t){return e.call(this,t)||this}return r(t,e),Object.defineProperty(t.prototype,"name",{get:function(){return"InvalidArgumentException"},enumerable:!1,configurable:!0}),t}(s),h=function(e){function t(t){return e.call(this,t)||this}return r(t,e),Object.defineProperty(t.prototype,"name",{get:function(){return"NotImplementedException"},enumerable:!1,configurable:!0}),t}(s)},function(e,t,i){"use strict";var n={};!function(){var e=self.navigator.userAgent.toLowerCase(),t=/(edge)\/([\w.]+)/.exec(e)||/(opr)[\/]([\w.]+)/.exec(e)||/(chrome)[ \/]([\w.]+)/.exec(e)||/(iemobile)[\/]([\w.]+)/.exec(e)||/(version)(applewebkit)[ \/]([\w.]+).*(safari)[ \/]([\w.]+)/.exec(e)||/(webkit)[ 
\/]([\w.]+).*(version)[ \/]([\w.]+).*(safari)[ \/]([\w.]+)/.exec(e)||/(webkit)[ \/]([\w.]+)/.exec(e)||/(opera)(?:.*version|)[ \/]([\w.]+)/.exec(e)||/(msie) ([\w.]+)/.exec(e)||e.indexOf("trident")>=0&&/(rv)(?::| )([\w.]+)/.exec(e)||e.indexOf("compatible")<0&&/(firefox)[ \/]([\w.]+)/.exec(e)||[],i=/(ipad)/.exec(e)||/(ipod)/.exec(e)||/(windows phone)/.exec(e)||/(iphone)/.exec(e)||/(kindle)/.exec(e)||/(android)/.exec(e)||/(windows)/.exec(e)||/(mac)/.exec(e)||/(linux)/.exec(e)||/(cros)/.exec(e)||[],r={browser:t[5]||t[3]||t[1]||"",version:t[2]||t[4]||"0",majorVersion:t[4]||t[2]||"0",platform:i[0]||""},s={};if(r.browser){s[r.browser]=!0;var a=r.majorVersion.split(".");s.version={major:parseInt(r.majorVersion,10),string:r.version},a.length>1&&(s.version.minor=parseInt(a[1],10)),a.length>2&&(s.version.build=parseInt(a[2],10))}if(r.platform&&(s[r.platform]=!0),(s.chrome||s.opr||s.safari)&&(s.webkit=!0),s.rv||s.iemobile){s.rv&&delete s.rv;r.browser="msie",s.msie=!0}if(s.edge){delete s.edge;r.browser="msedge",s.msedge=!0}if(s.opr){r.browser="opera",s.opera=!0}if(s.safari&&s.android){r.browser="android",s.android=!0}for(var o in s.name=r.browser,s.platform=r.platform,n)n.hasOwnProperty(o)&&delete n[o];Object.assign(n,s)}(),t.a=n},function(e,t,i){"use strict";t.a={OK:"OK",FORMAT_ERROR:"FormatError",FORMAT_UNSUPPORTED:"FormatUnsupported",CODEC_UNSUPPORTED:"CodecUnsupported"}},function(e,t,i){"use strict";var n,r="object"==typeof Reflect?Reflect:null,s=r&&"function"==typeof r.apply?r.apply:function(e,t,i){return Function.prototype.apply.call(e,t,i)};n=r&&"function"==typeof r.ownKeys?r.ownKeys:Object.getOwnPropertySymbols?function(e){return Object.getOwnPropertyNames(e).concat(Object.getOwnPropertySymbols(e))}:function(e){return Object.getOwnPropertyNames(e)};var a=Number.isNaN||function(e){return e!=e};function o(){o.init.call(this)}e.exports=o,e.exports.once=function(e,t){return new Promise((function(i,n){function r(i){e.removeListener(t,s),n(i)}function s(){"function"==typeof e.removeListener&&e.removeListener("error",r),i([].slice.call(arguments))}g(e,t,s,{once:!0}),"error"!==t&&function(e,t,i){"function"==typeof e.on&&g(e,"error",t,i)}(e,r,{once:!0})}))},o.EventEmitter=o,o.prototype._events=void 0,o.prototype._eventsCount=0,o.prototype._maxListeners=void 0;var h=10;function d(e){if("function"!=typeof e)throw new TypeError('The "listener" argument must be of type Function. Received type '+typeof e)}function u(e){return void 0===e._maxListeners?o.defaultMaxListeners:e._maxListeners}function _(e,t,i,n){var r,s,a,o;if(d(i),void 0===(s=e._events)?(s=e._events=Object.create(null),e._eventsCount=0):(void 0!==s.newListener&&(e.emit("newListener",t,i.listener?i.listener:i),s=e._events),a=s[t]),void 0===a)a=s[t]=i,++e._eventsCount;else if("function"==typeof a?a=s[t]=n?[i,a]:[a,i]:n?a.unshift(i):a.push(i),(r=u(e))>0&&a.length>r&&!a.warned){a.warned=!0;var h=new Error("Possible EventEmitter memory leak detected. "+a.length+" "+String(t)+" listeners added. 
Use emitter.setMaxListeners() to increase limit");h.name="MaxListenersExceededWarning",h.emitter=e,h.type=t,h.count=a.length,o=h,console&&console.warn&&console.warn(o)}return e}function c(){if(!this.fired)return this.target.removeListener(this.type,this.wrapFn),this.fired=!0,0===arguments.length?this.listener.call(this.target):this.listener.apply(this.target,arguments)}function l(e,t,i){var n={fired:!1,wrapFn:void 0,target:e,type:t,listener:i},r=c.bind(n);return r.listener=i,n.wrapFn=r,r}function f(e,t,i){var n=e._events;if(void 0===n)return[];var r=n[t];return void 0===r?[]:"function"==typeof r?i?[r.listener||r]:[r]:i?function(e){for(var t=new Array(e.length),i=0;i0&&(a=t[0]),a instanceof Error)throw a;var o=new Error("Unhandled error."+(a?" ("+a.message+")":""));throw o.context=a,o}var h=r[e];if(void 0===h)return!1;if("function"==typeof h)s(h,this,t);else{var d=h.length,u=m(h,d);for(i=0;i=0;s--)if(i[s]===t||i[s].listener===t){a=i[s].listener,r=s;break}if(r<0)return this;0===r?i.shift():function(e,t){for(;t+1=0;n--)this.removeListener(e,t[n]);return this},o.prototype.listeners=function(e){return f(this,e,!0)},o.prototype.rawListeners=function(e){return f(this,e,!1)},o.listenerCount=function(e,t){return"function"==typeof e.listenerCount?e.listenerCount(t):p.call(e,t)},o.prototype.listenerCount=p,o.prototype.eventNames=function(){return this._eventsCount>0?n(this._events):[]}},function(e,t,i){"use strict";i.d(t,"d",(function(){return n})),i.d(t,"b",(function(){return r})),i.d(t,"a",(function(){return s})),i.d(t,"c",(function(){return a}));var n=function(e,t,i,n,r){this.dts=e,this.pts=t,this.duration=i,this.originalDts=n,this.isSyncPoint=r,this.fileposition=null},r=function(){function e(){this.beginDts=0,this.endDts=0,this.beginPts=0,this.endPts=0,this.originalBeginDts=0,this.originalEndDts=0,this.syncPoints=[],this.firstSample=null,this.lastSample=null}return e.prototype.appendSyncPoint=function(e){e.isSyncPoint=!0,this.syncPoints.push(e)},e}(),s=function(){function e(){this._list=[]}return e.prototype.clear=function(){this._list=[]},e.prototype.appendArray=function(e){var t=this._list;0!==e.length&&(t.length>0&&e[0].originalDts=t[r].dts&&et[n].lastSample.originalDts&&e=t[n].lastSample.originalDts&&(n===t.length-1||n0&&(r=this._searchNearestSegmentBefore(i.originalBeginDts)+1),this._lastAppendLocation=r,this._list.splice(r,0,i)},e.prototype.getLastSegmentBefore=function(e){var t=this._searchNearestSegmentBefore(e);return t>=0?this._list[t]:null},e.prototype.getLastSampleBefore=function(e){var t=this.getLastSegmentBefore(e);return null!=t?t.lastSample:null},e.prototype.getLastSyncPointBefore=function(e){for(var t=this._searchNearestSegmentBefore(e),i=this._list[t].syncPoints;0===i.length&&t>0;)t--,i=this._list[t].syncPoints;return i.length>0?i[i.length-1]:null},e}()},function(e,t,i){"use strict";var n=function(){function e(){this.mimeType=null,this.duration=null,this.hasAudio=null,this.hasVideo=null,this.audioCodec=null,this.videoCodec=null,this.audioDataRate=null,this.videoDataRate=null,this.audioSampleRate=null,this.audioChannelCount=null,this.width=null,this.height=null,this.fps=null,this.profile=null,this.level=null,this.refFrames=null,this.chromaFormat=null,this.sarNum=null,this.sarDen=null,this.metadata=null,this.segments=null,this.segmentCount=null,this.hasKeyframesIndex=null,this.keyframesIndex=null}return e.prototype.isComplete=function(){var 
e=!1===this.hasAudio||!0===this.hasAudio&&null!=this.audioCodec&&null!=this.audioSampleRate&&null!=this.audioChannelCount,t=!1===this.hasVideo||!0===this.hasVideo&&null!=this.videoCodec&&null!=this.width&&null!=this.height&&null!=this.fps&&null!=this.profile&&null!=this.level&&null!=this.refFrames&&null!=this.chromaFormat&&null!=this.sarNum&&null!=this.sarDen;return null!=this.mimeType&&e&&t},e.prototype.isSeekable=function(){return!0===this.hasKeyframesIndex},e.prototype.getNearestKeyframe=function(e){if(null==this.keyframesIndex)return null;var t=this.keyframesIndex,i=this._search(t.times,e);return{index:i,milliseconds:t.times[i],fileposition:t.filepositions[i]}},e.prototype._search=function(e,t){var i=0,n=e.length-1,r=0,s=0,a=n;for(t=e[r]&&t0){var i=e.getConfig();t.emit("change",i)}},e.registerListener=function(t){e.emitter.addListener("change",t)},e.removeListener=function(t){e.emitter.removeListener("change",t)},e.addLogListener=function(t){s.a.emitter.addListener("log",t),s.a.emitter.listenerCount("log")>0&&(s.a.ENABLE_CALLBACK=!0,e._notifyChange())},e.removeLogListener=function(t){s.a.emitter.removeListener("log",t),0===s.a.emitter.listenerCount("log")&&(s.a.ENABLE_CALLBACK=!1,e._notifyChange())},e}();a.emitter=new r.a,t.a=a},function(e,t,i){"use strict";var n=i(6),r=i.n(n),s=i(0),a=i(4),o=i(8);function h(e,t,i){var n=e;if(t+i=128){t.push(String.fromCharCode(65535&s)),n+=2;continue}}else if(i[n]<240){if(h(i,n,2))if((s=(15&i[n])<<12|(63&i[n+1])<<6|63&i[n+2])>=2048&&55296!=(63488&s)){t.push(String.fromCharCode(65535&s)),n+=3;continue}}else if(i[n]<248){var s;if(h(i,n,3))if((s=(7&i[n])<<18|(63&i[n+1])<<12|(63&i[n+2])<<6|63&i[n+3])>65536&&s<1114112){s-=65536,t.push(String.fromCharCode(s>>>10|55296)),t.push(String.fromCharCode(1023&s|56320)),n+=4;continue}}t.push(String.fromCharCode(65533)),++n}return t.join("")},_=i(3),c=(d=new ArrayBuffer(2),new DataView(d).setInt16(0,256,!0),256===new Int16Array(d)[0]),l=function(){function e(){}return e.parseScriptData=function(t,i,n){var r={};try{var a=e.parseValue(t,i,n),o=e.parseValue(t,i+a.size,n-a.size);r[a.data]=o.data}catch(e){s.a.e("AMF",e.toString())}return r},e.parseObject=function(t,i,n){if(n<3)throw new _.a("Data not enough when parse ScriptDataObject");var r=e.parseString(t,i,n),s=e.parseValue(t,i+r.size,n-r.size),a=s.objectEnd;return{data:{name:r.data,value:s.data},size:r.size+s.size,objectEnd:a}},e.parseVariable=function(t,i,n){return e.parseObject(t,i,n)},e.parseString=function(e,t,i){if(i<2)throw new _.a("Data not enough when parse String");var n=new DataView(e,t,i).getUint16(0,!c);return{data:n>0?u(new Uint8Array(e,t+2,n)):"",size:2+n}},e.parseLongString=function(e,t,i){if(i<4)throw new _.a("Data not enough when parse LongString");var n=new DataView(e,t,i).getUint32(0,!c);return{data:n>0?u(new Uint8Array(e,t+4,n)):"",size:4+n}},e.parseDate=function(e,t,i){if(i<10)throw new _.a("Data size invalid when parse Date");var n=new DataView(e,t,i),r=n.getFloat64(0,!c),s=n.getInt16(8,!c);return{data:new Date(r+=60*s*1e3),size:10}},e.parseValue=function(t,i,n){if(n<1)throw new _.a("Data not enough when parse Value");var r,a=new DataView(t,i,n),o=1,h=a.getUint8(0),d=!1;try{switch(h){case 0:r=a.getFloat64(1,!c),o+=8;break;case 1:r=!!a.getUint8(1),o+=1;break;case 2:var u=e.parseString(t,i+1,n-1);r=u.data,o+=u.size;break;case 3:r={};var l=0;for(9==(16777215&a.getUint32(n-4,!c))&&(l=3);o32)throw new _.b("ExpGolomb: readBits() bits exceeded max 32bits!");if(e<=this._current_word_bits_left){var t=this._current_word>>>32-e;return 
this._current_word<<=e,this._current_word_bits_left-=e,t}var i=this._current_word_bits_left?this._current_word:0;i>>>=32-this._current_word_bits_left;var n=e-this._current_word_bits_left;this._fillCurrentWord();var r=Math.min(n,this._current_word_bits_left),s=this._current_word>>>32-r;return this._current_word<<=r,this._current_word_bits_left-=r,i=i<>>e))return this._current_word<<=e,this._current_word_bits_left-=e,e;return this._fillCurrentWord(),e+this._skipLeadingZero()},e.prototype.readUEG=function(){var e=this._skipLeadingZero();return this.readBits(e+1)-1},e.prototype.readSEG=function(){var e=this.readUEG();return 1&e?e+1>>>1:-1*(e>>>1)},e}(),p=function(){function e(){}return e._ebsp2rbsp=function(e){for(var t=e,i=t.byteLength,n=new Uint8Array(i),r=0,s=0;s=2&&3===t[s]&&0===t[s-1]&&0===t[s-2]||(n[r]=t[s],r++);return new Uint8Array(n.buffer,0,r)},e.parseSPS=function(t){for(var i=t.subarray(1,4),n="avc1.",r=0;r<3;r++){var s=i[r].toString(16);s.length<2&&(s="0"+s),n+=s}var a=e._ebsp2rbsp(t),o=new f(a);o.readByte();var h=o.readByte();o.readByte();var d=o.readByte();o.readUEG();var u=e.getProfileString(h),_=e.getLevelString(d),c=1,l=420,p=8,m=8;if((100===h||110===h||122===h||244===h||44===h||83===h||86===h||118===h||128===h||138===h||144===h)&&(3===(c=o.readUEG())&&o.readBits(1),c<=3&&(l=[0,420,422,444][c]),p=o.readUEG()+8,m=o.readUEG()+8,o.readBits(1),o.readBool()))for(var g=3!==c?8:12,v=0;v0&&x<16?(k=[1,12,10,16,40,24,20,32,80,18,15,64,160,4,3,2][x-1],C=[1,11,11,11,33,11,11,11,33,11,11,33,99,3,2,1][x-1]):255===x&&(k=o.readByte()<<8|o.readByte(),C=o.readByte()<<8|o.readByte())}if(o.readBool()&&o.readBool(),o.readBool()&&(o.readBits(4),o.readBool()&&o.readBits(24)),o.readBool()&&(o.readUEG(),o.readUEG()),o.readBool()){var B=o.readBits(32),U=o.readBits(32);O=o.readBool(),I=(P=U)/(M=2*B)}}var N=1;1===k&&1===C||(N=k/C);var F=0,G=0;0===c?(F=1,G=2-R):(F=3===c?1:2,G=(1===c?2:1)*(2-R));var V=16*(S+1),j=16*(A+1)*(2-R);V-=(L+T)*F,j-=(w+D)*G;var z=Math.ceil(V*N);return o.destroy(),o=null,{codec_mimetype:n,profile_idc:h,level_idc:d,profile_string:u,level_string:_,chroma_format_idc:c,bit_depth:p,bit_depth_luma:p,bit_depth_chroma:m,ref_frames:E,chroma_format:l,chroma_format_string:e.getChromaFormatString(l),frame_rate:{fixed:O,fps:I,fps_den:M,fps_num:P},sar_ratio:{width:k,height:C},codec_size:{width:V,height:j},present_size:{width:z,height:j}}},e._skipScalingList=function(e,t){for(var i=8,n=8,r=0;r>>2!=0,a=0!=(1&t[4]),o=(n=t)[r=5]<<24|n[r+1]<<16|n[r+2]<<8|n[r+3];return o<9?i:{match:!0,consumed:o,dataOffset:o,hasAudioTrack:s,hasVideoTrack:a}},e.prototype.bindDataSource=function(e){return e.onDataArrival=this.parseChunks.bind(this),this},Object.defineProperty(e.prototype,"onTrackMetadata",{get:function(){return this._onTrackMetadata},set:function(e){this._onTrackMetadata=e},enumerable:!1,configurable:!0}),Object.defineProperty(e.prototype,"onMediaInfo",{get:function(){return this._onMediaInfo},set:function(e){this._onMediaInfo=e},enumerable:!1,configurable:!0}),Object.defineProperty(e.prototype,"onMetaDataArrived",{get:function(){return this._onMetaDataArrived},set:function(e){this._onMetaDataArrived=e},enumerable:!1,configurable:!0}),Object.defineProperty(e.prototype,"onScriptDataArrived",{get:function(){return this._onScriptDataArrived},set:function(e){this._onScriptDataArrived=e},enumerable:!1,configurable:!0}),Object.defineProperty(e.prototype,"onError",{get:function(){return 
this._onError},set:function(e){this._onError=e},enumerable:!1,configurable:!0}),Object.defineProperty(e.prototype,"onDataAvailable",{get:function(){return this._onDataAvailable},set:function(e){this._onDataAvailable=e},enumerable:!1,configurable:!0}),Object.defineProperty(e.prototype,"timestampBase",{get:function(){return this._timestampBase},set:function(e){this._timestampBase=e},enumerable:!1,configurable:!0}),Object.defineProperty(e.prototype,"overridedDuration",{get:function(){return this._duration},set:function(e){this._durationOverrided=!0,this._duration=e,this._mediaInfo.duration=e},enumerable:!1,configurable:!0}),Object.defineProperty(e.prototype,"overridedHasAudio",{set:function(e){this._hasAudioFlagOverrided=!0,this._hasAudio=e,this._mediaInfo.hasAudio=e},enumerable:!1,configurable:!0}),Object.defineProperty(e.prototype,"overridedHasVideo",{set:function(e){this._hasVideoFlagOverrided=!0,this._hasVideo=e,this._mediaInfo.hasVideo=e},enumerable:!1,configurable:!0}),e.prototype.resetMediaInfo=function(){this._mediaInfo=new o.a},e.prototype._isInitialMetadataDispatched=function(){return this._hasAudio&&this._hasVideo?this._audioInitialMetadataDispatched&&this._videoInitialMetadataDispatched:this._hasAudio&&!this._hasVideo?this._audioInitialMetadataDispatched:!(this._hasAudio||!this._hasVideo)&&this._videoInitialMetadataDispatched},e.prototype.parseChunks=function(t,i){if(!(this._onError&&this._onMediaInfo&&this._onTrackMetadata&&this._onDataAvailable))throw new _.a("Flv: onError & onMediaInfo & onTrackMetadata & onDataAvailable callback must be specified");var n=0,r=this._littleEndian;if(0===i){if(!(t.byteLength>13))return 0;n=e.probe(t).dataOffset}this._firstParse&&(this._firstParse=!1,i+n!==this._dataOffset&&s.a.w(this.TAG,"First time parsing but chunk byteStart invalid!"),0!==(a=new DataView(t,n)).getUint32(0,!r)&&s.a.w(this.TAG,"PrevTagSize0 !== 0 !!!"),n+=4);for(;nt.byteLength)break;var o=a.getUint8(0),h=16777215&a.getUint32(0,!r);if(n+11+h+4>t.byteLength)break;if(8===o||9===o||18===o){var d=a.getUint8(4),u=a.getUint8(5),c=a.getUint8(6)|u<<8|d<<16|a.getUint8(7)<<24;0!==(16777215&a.getUint32(7,!r))&&s.a.w(this.TAG,"Meet tag which has StreamID != 0!");var l=n+11;switch(o){case 8:this._parseAudioData(t,l,h,c);break;case 9:this._parseVideoData(t,l,h,c,i+n);break;case 18:this._parseScriptData(t,l,h)}var f=a.getUint32(11+h,!r);f!==11+h&&s.a.w(this.TAG,"Invalid PrevTagSize "+f),n+=11+h+4}else s.a.w(this.TAG,"Unsupported tag type "+o+", skipped"),n+=11+h+4}return this._isInitialMetadataDispatched()&&this._dispatch&&(this._audioTrack.length||this._videoTrack.length)&&this._onDataAvailable(this._audioTrack,this._videoTrack),n},e.prototype._parseScriptData=function(e,t,i){var n=l.parseScriptData(e,t,i);if(n.hasOwnProperty("onMetaData")){if(null==n.onMetaData||"object"!=typeof n.onMetaData)return void s.a.w(this.TAG,"Invalid onMetaData structure!");this._metadata&&s.a.w(this.TAG,"Found another onMetaData tag!"),this._metadata=n;var r=this._metadata.onMetaData;if(this._onMetaDataArrived&&this._onMetaDataArrived(Object.assign({},r)),"boolean"==typeof r.hasAudio&&!1===this._hasAudioFlagOverrided&&(this._hasAudio=r.hasAudio,this._mediaInfo.hasAudio=this._hasAudio),"boolean"==typeof r.hasVideo&&!1===this._hasVideoFlagOverrided&&(this._hasVideo=r.hasVideo,this._mediaInfo.hasVideo=this._hasVideo),"number"==typeof r.audiodatarate&&(this._mediaInfo.audioDataRate=r.audiodatarate),"number"==typeof r.videodatarate&&(this._mediaInfo.videoDataRate=r.videodatarate),"number"==typeof 
r.width&&(this._mediaInfo.width=r.width),"number"==typeof r.height&&(this._mediaInfo.height=r.height),"number"==typeof r.duration){if(!this._durationOverrided){var a=Math.floor(r.duration*this._timescale);this._duration=a,this._mediaInfo.duration=a}}else this._mediaInfo.duration=0;if("number"==typeof r.framerate){var o=Math.floor(1e3*r.framerate);if(o>0){var h=o/1e3;this._referenceFrameRate.fixed=!0,this._referenceFrameRate.fps=h,this._referenceFrameRate.fps_num=o,this._referenceFrameRate.fps_den=1e3,this._mediaInfo.fps=h}}if("object"==typeof r.keyframes){this._mediaInfo.hasKeyframesIndex=!0;var d=r.keyframes;this._mediaInfo.keyframesIndex=this._parseKeyframesIndex(d),r.keyframes=null}else this._mediaInfo.hasKeyframesIndex=!1;this._dispatch=!1,this._mediaInfo.metadata=r,s.a.v(this.TAG,"Parsed onMetaData"),this._mediaInfo.isComplete()&&this._onMediaInfo(this._mediaInfo)}Object.keys(n).length>0&&this._onScriptDataArrived&&this._onScriptDataArrived(Object.assign({},n))},e.prototype._parseKeyframesIndex=function(e){for(var t=[],i=[],n=1;n>>4;if(2===a||10===a){var o=0,h=(12&r)>>>2;if(h>=0&&h<=4){o=this._flvSoundRateTable[h];var d=1&r,u=this._audioMetadata,_=this._audioTrack;if(u||(!1===this._hasAudio&&!1===this._hasAudioFlagOverrided&&(this._hasAudio=!0,this._mediaInfo.hasAudio=!0),(u=this._audioMetadata={}).type="audio",u.id=_.id,u.timescale=this._timescale,u.duration=this._duration,u.audioSampleRate=o,u.channelCount=0===d?1:2),10===a){var c=this._parseAACAudioData(e,t+1,i-1);if(null==c)return;if(0===c.packetType){u.config&&s.a.w(this.TAG,"Found another AudioSpecificConfig!");var l=c.data;u.audioSampleRate=l.samplingRate,u.channelCount=l.channelCount,u.codec=l.codec,u.originalCodec=l.originalCodec,u.config=l.config,u.refSampleDuration=1024/u.audioSampleRate*u.timescale,s.a.v(this.TAG,"Parsed AudioSpecificConfig"),this._isInitialMetadataDispatched()?this._dispatch&&(this._audioTrack.length||this._videoTrack.length)&&this._onDataAvailable(this._audioTrack,this._videoTrack):this._audioInitialMetadataDispatched=!0,this._dispatch=!1,this._onTrackMetadata("audio",u),(g=this._mediaInfo).audioCodec=u.originalCodec,g.audioSampleRate=u.audioSampleRate,g.audioChannelCount=u.channelCount,g.hasVideo?null!=g.videoCodec&&(g.mimeType='video/x-flv; codecs="'+g.videoCodec+","+g.audioCodec+'"'):g.mimeType='video/x-flv; codecs="'+g.audioCodec+'"',g.isComplete()&&this._onMediaInfo(g)}else if(1===c.packetType){var f=this._timestampBase+n,p={unit:c.data,length:c.data.byteLength,dts:f,pts:f};_.samples.push(p),_.length+=c.data.length}else s.a.e(this.TAG,"Flv: Unsupported AAC data type "+c.packetType)}else if(2===a){if(!u.codec){var g;if(null==(l=this._parseMP3AudioData(e,t+1,i-1,!0)))return;u.audioSampleRate=l.samplingRate,u.channelCount=l.channelCount,u.codec=l.codec,u.originalCodec=l.originalCodec,u.refSampleDuration=1152/u.audioSampleRate*u.timescale,s.a.v(this.TAG,"Parsed MPEG Audio Frame Header"),this._audioInitialMetadataDispatched=!0,this._onTrackMetadata("audio",u),(g=this._mediaInfo).audioCodec=u.codec,g.audioSampleRate=u.audioSampleRate,g.audioChannelCount=u.channelCount,g.audioDataRate=l.bitRate,g.hasVideo?null!=g.videoCodec&&(g.mimeType='video/x-flv; codecs="'+g.videoCodec+","+g.audioCodec+'"'):g.mimeType='video/x-flv; codecs="'+g.audioCodec+'"',g.isComplete()&&this._onMediaInfo(g)}var v=this._parseMP3AudioData(e,t+1,i-1,!1);if(null==v)return;f=this._timestampBase+n;var y={unit:v,length:v.byteLength,dts:f,pts:f};_.samples.push(y),_.length+=v.length}}else this._onError(m.a.FORMAT_ERROR,"Flv: Invalid audio 
sample rate idx: "+h)}else this._onError(m.a.CODEC_UNSUPPORTED,"Flv: Unsupported audio codec idx: "+a)}},e.prototype._parseAACAudioData=function(e,t,i){if(!(i<=1)){var n={},r=new Uint8Array(e,t,i);return n.packetType=r[0],0===r[0]?n.data=this._parseAACAudioSpecificConfig(e,t+1,i-1):n.data=r.subarray(1),n}s.a.w(this.TAG,"Flv: Invalid AAC packet, missing AACPacketType or/and Data!")},e.prototype._parseAACAudioSpecificConfig=function(e,t,i){var n,r,s=new Uint8Array(e,t,i),a=null,o=0,h=null;if(o=n=s[0]>>>3,(r=(7&s[0])<<1|s[1]>>>7)<0||r>=this._mpegSamplingRates.length)this._onError(m.a.FORMAT_ERROR,"Flv: AAC invalid sampling frequency index!");else{var d=this._mpegSamplingRates[r],u=(120&s[1])>>>3;if(!(u<0||u>=8)){5===o&&(h=(7&s[1])<<1|s[2]>>>7,(124&s[2])>>>2);var _=self.navigator.userAgent.toLowerCase();return-1!==_.indexOf("firefox")?r>=6?(o=5,a=new Array(4),h=r-3):(o=2,a=new Array(2),h=r):-1!==_.indexOf("android")?(o=2,a=new Array(2),h=r):(o=5,h=r,a=new Array(4),r>=6?h=r-3:1===u&&(o=2,a=new Array(2),h=r)),a[0]=o<<3,a[0]|=(15&r)>>>1,a[1]=(15&r)<<7,a[1]|=(15&u)<<3,5===o&&(a[1]|=(15&h)>>>1,a[2]=(1&h)<<7,a[2]|=8,a[3]=0),{config:a,samplingRate:d,channelCount:u,codec:"mp4a.40."+o,originalCodec:"mp4a.40."+n}}this._onError(m.a.FORMAT_ERROR,"Flv: AAC invalid channel configuration")}},e.prototype._parseMP3AudioData=function(e,t,i,n){if(!(i<4)){this._littleEndian;var r=new Uint8Array(e,t,i),a=null;if(n){if(255!==r[0])return;var o=r[1]>>>3&3,h=(6&r[1])>>1,d=(240&r[2])>>>4,u=(12&r[2])>>>2,_=3!==(r[3]>>>6&3)?2:1,c=0,l=0;switch(o){case 0:c=this._mpegAudioV25SampleRateTable[u];break;case 2:c=this._mpegAudioV20SampleRateTable[u];break;case 3:c=this._mpegAudioV10SampleRateTable[u]}switch(h){case 1:34,d>>4,h=15&a;7===h?this._parseAVCVideoPacket(e,t+1,i-1,n,r,o):this._onError(m.a.CODEC_UNSUPPORTED,"Flv: Unsupported codec in video frame: "+h)}},e.prototype._parseAVCVideoPacket=function(e,t,i,n,r,a){if(i<4)s.a.w(this.TAG,"Flv: Invalid AVC packet, missing AVCPacketType or/and CompositionTime");else{var o=this._littleEndian,h=new DataView(e,t,i),d=h.getUint8(0),u=(16777215&h.getUint32(0,!o))<<8>>8;if(0===d)this._parseAVCDecoderConfigurationRecord(e,t+4,i-4);else if(1===d)this._parseAVCVideoData(e,t+4,i-4,n,r,a,u);else if(2!==d)return void this._onError(m.a.FORMAT_ERROR,"Flv: Invalid video packet type "+d)}},e.prototype._parseAVCDecoderConfigurationRecord=function(e,t,i){if(i<7)s.a.w(this.TAG,"Flv: Invalid AVCDecoderConfigurationRecord, lack of data!");else{var n=this._videoMetadata,r=this._videoTrack,a=this._littleEndian,o=new DataView(e,t,i);n?void 0!==n.avcc&&s.a.w(this.TAG,"Found another AVCDecoderConfigurationRecord!"):(!1===this._hasVideo&&!1===this._hasVideoFlagOverrided&&(this._hasVideo=!0,this._mediaInfo.hasVideo=!0),(n=this._videoMetadata={}).type="video",n.id=r.id,n.timescale=this._timescale,n.duration=this._duration);var h=o.getUint8(0),d=o.getUint8(1);o.getUint8(2),o.getUint8(3);if(1===h&&0!==d)if(this._naluLengthSize=1+(3&o.getUint8(4)),3===this._naluLengthSize||4===this._naluLengthSize){var u=31&o.getUint8(5);if(0!==u){u>1&&s.a.w(this.TAG,"Flv: Strange AVCDecoderConfigurationRecord: SPS Count = "+u);for(var _=6,c=0;c1&&s.a.w(this.TAG,"Flv: Strange AVCDecoderConfigurationRecord: PPS Count = "+L),_++;for(c=0;c=i){s.a.w(this.TAG,"Malformed Nalu near timestamp "+f+", offset = "+c+", dataSize = "+i);break}var m=d.getUint32(c,!h);if(3===l&&(m>>>=8),m>i-l)return void s.a.w(this.TAG,"Malformed Nalus near timestamp "+f+", NaluSize > DataSize!");var g=31&d.getUint8(c+l);5===g&&(p=!0);var v=new 
Uint8Array(e,t+c,l+m),y={type:g,data:v};u.push(y),_+=v.byteLength,c+=l+m}if(u.length){var b=this._videoTrack,E={units:u,length:_,isKeyframe:p,dts:f,cts:o,pts:f+o};p&&(E.fileposition=r),b.samples.push(E),b.length+=_}},e}(),y=function(){function e(){}return e.prototype.destroy=function(){this.onError=null,this.onMediaInfo=null,this.onMetaDataArrived=null,this.onTrackMetadata=null,this.onDataAvailable=null,this.onTimedID3Metadata=null,this.onPESPrivateData=null,this.onPESPrivateDataDescriptor=null},e}(),b=function(){this.program_pmt_pid={}};!function(e){e[e.kMPEG1Audio=3]="kMPEG1Audio",e[e.kMPEG2Audio=4]="kMPEG2Audio",e[e.kPESPrivateData=6]="kPESPrivateData",e[e.kADTSAAC=15]="kADTSAAC",e[e.kID3=21]="kID3",e[e.kH264=27]="kH264",e[e.kH265=36]="kH265"}(g||(g={}));var E,S=function(){this.pid_stream_type={},this.common_pids={h264:void 0,adts_aac:void 0},this.pes_private_data_pids={},this.timed_id3_pids={}},A=function(){},R=function(){this.slices=[],this.total_length=0,this.file_position=0};!function(e){e[e.kUnspecified=0]="kUnspecified",e[e.kSliceNonIDR=1]="kSliceNonIDR",e[e.kSliceDPA=2]="kSliceDPA",e[e.kSliceDPB=3]="kSliceDPB",e[e.kSliceDPC=4]="kSliceDPC",e[e.kSliceIDR=5]="kSliceIDR",e[e.kSliceSEI=6]="kSliceSEI",e[e.kSliceSPS=7]="kSliceSPS",e[e.kSlicePPS=8]="kSlicePPS",e[e.kSliceAUD=9]="kSliceAUD",e[e.kEndOfSequence=10]="kEndOfSequence",e[e.kEndOfStream=11]="kEndOfStream",e[e.kFiller=12]="kFiller",e[e.kSPSExt=13]="kSPSExt",e[e.kReserved0=14]="kReserved0"}(E||(E={}));var L,T,w=function(){},D=function(e){var t=e.data.byteLength;this.type=e.type,this.data=new Uint8Array(4+t),new DataView(this.data.buffer).setUint32(0,t),this.data.set(e.data,4)},k=function(){function e(e){this.TAG="H264AnnexBParser",this.current_startcode_offset_=0,this.eof_flag_=!1,this.data_=e,this.current_startcode_offset_=this.findNextStartCodeOffset(0),this.eof_flag_&&s.a.e(this.TAG,"Could not found H264 startcode until payload end!")}return e.prototype.findNextStartCodeOffset=function(e){for(var t=e,i=this.data_;;){if(t+3>=i.byteLength)return this.eof_flag_=!0,i.byteLength;var n=i[t+0]<<24|i[t+1]<<16|i[t+2]<<8|i[t+3],r=i[t+0]<<16|i[t+1]<<8|i[t+2];if(1===n||1===r)return t;t++}},e.prototype.readNextNaluPayload=function(){for(var e=this.data_,t=null;null==t&&!this.eof_flag_;){var i=this.current_startcode_offset_,n=31&e[i+=1===(e[i]<<24|e[i+1]<<16|e[i+2]<<8|e[i+3])?4:3],r=(128&e[i])>>>7,s=this.findNextStartCodeOffset(i);if(this.current_startcode_offset_=s,!(n>=E.kReserved0)&&0===r){var a=e.subarray(i,s);(t=new w).type=n,t.data=a}}return t},e}(),C=function(){function e(e,t,i){var n=8+e.byteLength+1+2+t.byteLength,r=!1;66!==e[3]&&77!==e[3]&&88!==e[3]&&(r=!0,n+=4);var s=this.data=new Uint8Array(n);s[0]=1,s[1]=e[1],s[2]=e[2],s[3]=e[3],s[4]=255,s[5]=225;var a=e.byteLength;s[6]=a>>>8,s[7]=255&a;var o=8;s.set(e,8),s[o+=a]=1;var h=t.byteLength;s[o+1]=h>>>8,s[o+2]=255&h,s.set(t,o+3),o+=3+h,r&&(s[o]=252|i.chroma_format_idc,s[o+1]=248|i.bit_depth_luma-8,s[o+2]=248|i.bit_depth_chroma-8,s[o+3]=0,o+=4)}return e.prototype.getData=function(){return 
this.data},e}();!function(e){e[e.kNull=0]="kNull",e[e.kAACMain=1]="kAACMain",e[e.kAAC_LC=2]="kAAC_LC",e[e.kAAC_SSR=3]="kAAC_SSR",e[e.kAAC_LTP=4]="kAAC_LTP",e[e.kAAC_SBR=5]="kAAC_SBR",e[e.kAAC_Scalable=6]="kAAC_Scalable",e[e.kLayer1=32]="kLayer1",e[e.kLayer2=33]="kLayer2",e[e.kLayer3=34]="kLayer3"}(L||(L={})),function(e){e[e.k96000Hz=0]="k96000Hz",e[e.k88200Hz=1]="k88200Hz",e[e.k64000Hz=2]="k64000Hz",e[e.k48000Hz=3]="k48000Hz",e[e.k44100Hz=4]="k44100Hz",e[e.k32000Hz=5]="k32000Hz",e[e.k24000Hz=6]="k24000Hz",e[e.k22050Hz=7]="k22050Hz",e[e.k16000Hz=8]="k16000Hz",e[e.k12000Hz=9]="k12000Hz",e[e.k11025Hz=10]="k11025Hz",e[e.k8000Hz=11]="k8000Hz",e[e.k7350Hz=12]="k7350Hz"}(T||(T={}));var I,O=[96e3,88200,64e3,48e3,44100,32e3,24e3,22050,16e3,12e3,11025,8e3,7350],P=function(){},M=function(){function e(e){this.TAG="AACADTSParser",this.data_=e,this.current_syncword_offset_=this.findNextSyncwordOffset(0),this.eof_flag_&&s.a.e(this.TAG,"Could not found ADTS syncword until payload end")}return e.prototype.findNextSyncwordOffset=function(e){for(var t=e,i=this.data_;;){if(t+7>=i.byteLength)return this.eof_flag_=!0,i.byteLength;if(4095===(i[t+0]<<8|i[t+1])>>>4)return t;t++}},e.prototype.readNextAACFrame=function(){for(var e=this.data_,t=null;null==t&&!this.eof_flag_;){var i=this.current_syncword_offset_,n=(8&e[i+1])>>>3,r=(6&e[i+1])>>>1,s=1&e[i+1],a=(192&e[i+2])>>>6,o=(60&e[i+2])>>>2,h=(1&e[i+2])<<2|(192&e[i+3])>>>6,d=(3&e[i+3])<<11|e[i+4]<<3|(224&e[i+5])>>>5;e[i+6];if(i+d>this.data_.byteLength){this.eof_flag_=!0,this.has_last_incomplete_data=!0;break}var u=1===s?7:9,_=d-u;i+=u;var c=this.findNextSyncwordOffset(i+_);if(this.current_syncword_offset_=c,(0===n||1===n)&&0===r){var l=e.subarray(i,i+_);(t=new P).audio_object_type=a+1,t.sampling_freq_index=o,t.sampling_frequency=O[o],t.channel_config=h,t.data=l}}return t},e.prototype.hasIncompleteData=function(){return this.has_last_incomplete_data},e.prototype.getIncompleteData=function(){return this.has_last_incomplete_data?this.data_.subarray(this.current_syncword_offset_):null},e}(),x=function(e){var t=null,i=e.audio_object_type,n=e.audio_object_type,r=e.sampling_freq_index,s=e.channel_config,a=0,o=navigator.userAgent.toLowerCase();-1!==o.indexOf("firefox")?r>=6?(n=5,t=new Array(4),a=r-3):(n=2,t=new Array(2),a=r):-1!==o.indexOf("android")?(n=2,t=new Array(2),a=r):(n=5,a=r,t=new Array(4),r>=6?a=r-3:1===s&&(n=2,t=new Array(2),a=r)),t[0]=n<<3,t[0]|=(15&r)>>>1,t[1]=(15&r)<<7,t[1]|=(15&s)<<3,5===n&&(t[1]|=(15&a)>>>1,t[2]=(1&a)<<7,t[2]|=8,t[3]=0),this.config=t,this.sampling_rate=O[r],this.channel_count=s,this.codec_mimetype="mp4a.40."+n,this.original_codec_mimetype="mp4a.40."+i},B=function(){},U=function(){},N=(I=function(e,t){return(I=Object.setPrototypeOf||{__proto__:[]}instanceof Array&&function(e,t){e.__proto__=t}||function(e,t){for(var i in t)t.hasOwnProperty(i)&&(e[i]=t[i])})(e,t)},function(e,t){function i(){this.constructor=e}I(e,t),e.prototype=null===t?Object.create(t):(i.prototype=t.prototype,new i)}),F=function(e){function t(t,i){var n=e.call(this)||this;return n.TAG="TSDemuxer",n.first_parse_=!0,n.media_info_=new o.a,n.timescale_=90,n.duration_=0,n.current_pmt_pid_=-1,n.program_pmt_map_={},n.pes_slice_queues_={},n.video_metadata_={sps:void 0,pps:void 0,sps_details:void 0},n.audio_metadata_={audio_object_type:void 0,sampling_freq_index:void 0,sampling_frequency:void 0,channel_config:void 0},n.aac_last_sample_pts_=void 
0,n.aac_last_incomplete_data_=null,n.has_video_=!1,n.has_audio_=!1,n.video_init_segment_dispatched_=!1,n.audio_init_segment_dispatched_=!1,n.video_metadata_changed_=!1,n.audio_metadata_changed_=!1,n.video_track_={type:"video",id:1,sequenceNumber:0,samples:[],length:0},n.audio_track_={type:"audio",id:2,sequenceNumber:0,samples:[],length:0},n.ts_packet_size_=t.ts_packet_size,n.sync_offset_=t.sync_offset,n.config_=i,n}return N(t,e),t.prototype.destroy=function(){this.media_info_=null,this.pes_slice_queues_=null,this.video_metadata_=null,this.audio_metadata_=null,this.aac_last_incomplete_data_=null,this.video_track_=null,this.audio_track_=null,e.prototype.destroy.call(this)},t.probe=function(e){var t=new Uint8Array(e),i=-1,n=188;if(t.byteLength<=3*n)return s.a.e("TSDemuxer","Probe data "+t.byteLength+" bytes is too few for judging MPEG-TS stream format!"),{match:!1};for(;-1===i;){for(var r=Math.min(1e3,t.byteLength-3*n),a=0;a=4?(s.a.v("TSDemuxer","ts_packet_size = 192, m2ts mode"),i-=4):204===n&&s.a.v("TSDemuxer","ts_packet_size = 204, RS encoded MPEG2-TS stream"),{match:!0,consumed:0,ts_packet_size:n,sync_offset:i})},t.prototype.bindDataSource=function(e){return e.onDataArrival=this.parseChunks.bind(this),this},t.prototype.resetMediaInfo=function(){this.media_info_=new o.a},t.prototype.parseChunks=function(e,t){if(!(this.onError&&this.onMediaInfo&&this.onTrackMetadata&&this.onDataAvailable))throw new _.a("onError & onMediaInfo & onTrackMetadata & onDataAvailable callback must be specified");var i=0;for(this.first_parse_&&(this.first_parse_=!1,i=this.sync_offset_);i+this.ts_packet_size_<=e.byteLength;){var n=t+i;192===this.ts_packet_size_&&(i+=4);var r=new Uint8Array(e,i,188),a=r[0];if(71!==a){s.a.e(this.TAG,"sync_byte = "+a+", not 0x47");break}var o=(64&r[1])>>>6,h=(r[1],(31&r[1])<<8|r[2]),d=(48&r[3])>>>4,u=15&r[3],c={},l=4;if(2==d||3==d){var f=r[4];if(5+f===188){i+=188,204===this.ts_packet_size_&&(i+=16);continue}f>0&&(c=this.parseAdaptationField(e,i+4,1+f)),l=5+f}if(1==d||3==d)if(0===h||h===this.current_pmt_pid_){if(o)l+=1+r[l];var p=188-l;0===h?this.parsePAT(e,i+l,p,{payload_unit_start_indicator:o,continuity_conunter:u}):this.parsePMT(e,i+l,p,{payload_unit_start_indicator:o,continuity_conunter:u})}else if(null!=this.pmt_&&null!=this.pmt_.pid_stream_type[h]){p=188-l;var m=this.pmt_.pid_stream_type[h];h!==this.pmt_.common_pids.h264&&h!==this.pmt_.common_pids.adts_aac&&!0!==this.pmt_.pes_private_data_pids[h]&&!0!==this.pmt_.timed_id3_pids[h]||this.handlePESSlice(e,i+l,p,{pid:h,stream_type:m,file_position:n,payload_unit_start_indicator:o,continuity_conunter:u,random_access_indicator:c.random_access_indicator})}i+=188,204===this.ts_packet_size_&&(i+=16)}return this.dispatchAudioVideoMediaSegment(),i},t.prototype.parseAdaptationField=function(e,t,i){var n=new Uint8Array(e,t,i),r=n[0];return r>0?r>183?(s.a.w(this.TAG,"Illegal adaptation_field_length: "+r),{}):{discontinuity_indicator:(128&n[1])>>>7,random_access_indicator:(64&n[1])>>>6,elementary_stream_priority_indicator:(32&n[1])>>>5}:{}},t.prototype.parsePAT=function(e,t,i,n){var r=new Uint8Array(e,t,i),a=r[0];if(0===a){var o=(15&r[1])<<8|r[2],h=(r[3],r[4],(62&r[5])>>>1),d=1&r[5],u=r[6],_=(r[7],null);if(1===d&&0===u)(_=new b).version_number=h;else if(null==(_=this.pat_))return;for(var c=o-5-4,l=-1,f=-1,p=8;p<8+c;p+=4){var m=r[p]<<8|r[p+1],g=(31&r[p+2])<<8|r[p+3];0===m?_.network_pid=g:(_.program_pmt_pid[m]=g,-1===l&&(l=m),-1===f&&(f=g))}1===d&&0===u&&(null==this.pat_&&s.a.v(this.TAG,"Parsed first PAT: 
"+JSON.stringify(_)),this.pat_=_,this.current_program_=l,this.current_pmt_pid_=f)}else s.a.e(this.TAG,"parsePAT: table_id "+a+" is not corresponded to PAT!")},t.prototype.parsePMT=function(e,t,i,n){var r=new Uint8Array(e,t,i),a=r[0];if(2===a){var o=(15&r[1])<<8|r[2],h=r[3]<<8|r[4],d=(62&r[5])>>>1,u=1&r[5],_=r[6],c=(r[7],null);if(1===u&&0===_)(c=new S).program_number=h,c.version_number=d,this.program_pmt_map_[h]=c;else if(null==(c=this.program_pmt_map_[h]))return;r[8],r[9];for(var l=(15&r[10])<<8|r[11],f=12+l,p=o-9-l-4,m=f;m0){var E=r.subarray(m+5,m+5+b);this.dispatchPESPrivateDataDescriptor(y,v,E)}}else v===g.kID3&&(c.timed_id3_pids[y]=!0);else c.common_pids.adts_aac=y;else c.common_pids.h264=y;m+=5+b}h===this.current_program_&&(null==this.pmt_&&s.a.v(this.TAG,"Parsed first PMT: "+JSON.stringify(c)),this.pmt_=c,c.common_pids.h264&&(this.has_video_=!0),c.common_pids.adts_aac&&(this.has_audio_=!0))}else s.a.e(this.TAG,"parsePMT: table_id "+a+" is not corresponded to PMT!")},t.prototype.handlePESSlice=function(e,t,i,n){var r=new Uint8Array(e,t,i),a=r[0]<<16|r[1]<<8|r[2];r[3],r[4],r[5];if(n.payload_unit_start_indicator){if(1!==a)return void s.a.e(this.TAG,"handlePESSlice: packet_start_code_prefix should be 1 but with value "+a);var o=this.pes_slice_queues_[n.pid];if(o){for(var h=new Uint8Array(o.total_length),d=0,u=0;d>>6,o=t[8],h=void 0,d=void 0;2!==a&&3!==a||(h=536870912*(14&t[9])+4194304*(255&t[10])+16384*(254&t[11])+128*(255&t[12])+(254&t[13])/2,d=3===a?536870912*(14&t[14])+4194304*(255&t[15])+16384*(254&t[16])+128*(255&t[17])+(254&t[18])/2:h);var u=9+o,_=void 0;if(0!==r){if(r<3+o)return void s.a.v(this.TAG,"Malformed PES: PES_packet_length < 3 + PES_header_data_length");_=r-3-o}else _=t.byteLength-u;var c=t.subarray(u,u+_);switch(e.stream_type){case g.kMPEG1Audio:case g.kMPEG2Audio:break;case g.kPESPrivateData:this.parsePESPrivateDataPayload(c,h,d,e.pid,n);break;case g.kADTSAAC:this.parseAACPayload(c,h);break;case g.kID3:this.parseTimedID3MetadataPayload(c,h,d,e.pid,n);break;case g.kH264:this.parseH264Payload(c,h,d,e.file_position,e.random_access_indicator);break;case g.kH265:}}else if((188===n||191===n||240===n||241===n||255===n||242===n||248===n)&&e.stream_type===g.kPESPrivateData){u=6,_=void 0;_=0!==r?r:t.byteLength-u;c=t.subarray(u,u+_);this.parsePESPrivateDataPayload(c,void 0,void 0,e.pid,n)}}else s.a.e(this.TAG,"parsePES: packet_start_code_prefix should be 1 but with value "+i)},t.prototype.parseH264Payload=function(e,t,i,n,r){for(var a=new k(e),o=null,h=[],d=0,u=!1;null!=(o=a.readNextNaluPayload());){var _=new D(o);if(_.type===E.kSliceSPS){var c=p.parseSPS(o.data);this.video_init_segment_dispatched_?!0===this.detectVideoMetadataChange(_,c)&&(s.a.v(this.TAG,"H264: Critical h264 metadata has been changed, attempt to re-generate InitSegment"),this.video_metadata_changed_=!0,this.video_metadata_={sps:_,pps:void 0,sps_details:c}):(this.video_metadata_.sps=_,this.video_metadata_.sps_details=c)}else _.type===E.kSlicePPS?this.video_init_segment_dispatched_&&!this.video_metadata_changed_||(this.video_metadata_.pps=_,this.video_metadata_.sps&&this.video_metadata_.pps&&(this.video_metadata_changed_&&this.dispatchVideoMediaSegment(),this.dispatchVideoInitSegment())):(_.type===E.kSliceIDR||_.type===E.kSliceNonIDR&&1===r)&&(u=!0);this.video_init_segment_dispatched_&&(h.push(_),d+=_.data.byteLength)}var l=Math.floor(t/this.timescale_),f=Math.floor(i/this.timescale_);if(h.length){var 
m=this.video_track_,g={units:h,length:d,isKeyframe:u,dts:f,pts:l,cts:l-f,file_position:n};m.samples.push(g),m.length+=d}},t.prototype.detectVideoMetadataChange=function(e,t){if(e.data.byteLength!==this.video_metadata_.sps.data.byteLength)return!0;if(t.codec_mimetype!==this.video_metadata_.sps_details.codec_mimetype)return s.a.v(this.TAG,"H264: Codec mimeType changed from "+this.video_metadata_.sps_details.codec_mimetype+" to "+t.codec_mimetype),!0;if(t.codec_size.width!==this.video_metadata_.sps_details.codec_size.width||t.codec_size.height!==this.video_metadata_.sps_details.codec_size.height){var i=this.video_metadata_.sps_details.codec_size,n=t.codec_size;return s.a.v(this.TAG,"H264: Coded Resolution changed from "+i.width+"x"+i.height+" to "+n.width+"x"+n.height),!0}return t.present_size.width!==this.video_metadata_.sps_details.present_size.width&&(s.a.v(this.TAG,"H264: Present resolution width changed from "+this.video_metadata_.sps_details.present_size.width+" to "+t.present_size.width),!0)},t.prototype.isInitSegmentDispatched=function(){return this.has_video_&&this.has_audio_?this.video_init_segment_dispatched_&&this.audio_init_segment_dispatched_:this.has_video_&&!this.has_audio_?this.video_init_segment_dispatched_:!(this.has_video_||!this.has_audio_)&&this.audio_init_segment_dispatched_},t.prototype.dispatchVideoInitSegment=function(){var e=this.video_metadata_.sps_details,t={type:"video"};t.id=this.video_track_.id,t.timescale=1e3,t.duration=this.duration_,t.codecWidth=e.codec_size.width,t.codecHeight=e.codec_size.height,t.presentWidth=e.present_size.width,t.presentHeight=e.present_size.height,t.profile=e.profile_string,t.level=e.level_string,t.bitDepth=e.bit_depth,t.chromaFormat=e.chroma_format,t.sarRatio=e.sar_ratio,t.frameRate=e.frame_rate;var i=t.frameRate.fps_den,n=t.frameRate.fps_num;t.refSampleDuration=i/n*1e3,t.codec=e.codec_mimetype;var r=this.video_metadata_.sps.data.subarray(4),a=this.video_metadata_.pps.data.subarray(4),o=new C(r,a,e);t.avcc=o.getData(),0==this.video_init_segment_dispatched_&&s.a.v(this.TAG,"Generated first AVCDecoderConfigurationRecord for mimeType: "+t.codec),this.onTrackMetadata("video",t),this.video_init_segment_dispatched_=!0,this.video_metadata_changed_=!1;var h=this.media_info_;h.hasVideo=!0,h.width=t.codecWidth,h.height=t.codecHeight,h.fps=t.frameRate.fps,h.profile=t.profile,h.level=t.level,h.refFrames=e.ref_frames,h.chromaFormat=e.chroma_format_string,h.sarNum=t.sarRatio.width,h.sarDen=t.sarRatio.height,h.videoCodec=t.codec,h.hasAudio&&h.audioCodec?h.mimeType='video/mp2t; codecs="'+h.videoCodec+","+h.audioCodec+'"':h.mimeType='video/mp2t; codecs="'+h.videoCodec+'"',h.isComplete()&&this.onMediaInfo(h)},t.prototype.dispatchVideoMediaSegment=function(){this.isInitSegmentDispatched()&&this.video_track_.length&&this.onDataAvailable(null,this.video_track_)},t.prototype.dispatchAudioMediaSegment=function(){this.isInitSegmentDispatched()&&this.audio_track_.length&&this.onDataAvailable(this.audio_track_,null)},t.prototype.dispatchAudioVideoMediaSegment=function(){this.isInitSegmentDispatched()&&(this.audio_track_.length||this.video_track_.length)&&this.onDataAvailable(this.audio_track_,this.video_track_)},t.prototype.parseAACPayload=function(e,t){if(!this.has_video_||this.video_init_segment_dispatched_){if(this.aac_last_incomplete_data_){var i=new Uint8Array(e.byteLength+this.aac_last_incomplete_data_.byteLength);i.set(this.aac_last_incomplete_data_,0),i.set(e,this.aac_last_incomplete_data_.byteLength),e=i}var 
n,r;if(null!=t)r=t/this.timescale_;else{if(null==this.aac_last_sample_pts_)return void s.a.w(this.TAG,"AAC: Unknown pts");n=1024/this.audio_metadata_.sampling_frequency*1e3,r=this.aac_last_sample_pts_+n}if(this.aac_last_incomplete_data_&&this.aac_last_sample_pts_){n=1024/this.audio_metadata_.sampling_frequency*1e3;var a=this.aac_last_sample_pts_+n;Math.abs(a-r)>1&&(s.a.w(this.TAG,"AAC: Detected pts overlapped, expected: "+a+"ms, PES pts: "+r+"ms"),r=a)}for(var o,h=new M(e),d=null,u=r;null!=(d=h.readNextAACFrame());){n=1024/d.sampling_frequency*1e3,0==this.audio_init_segment_dispatched_?(this.audio_metadata_.audio_object_type=d.audio_object_type,this.audio_metadata_.sampling_freq_index=d.sampling_freq_index,this.audio_metadata_.sampling_frequency=d.sampling_frequency,this.audio_metadata_.channel_config=d.channel_config,this.dispatchAudioInitSegment(d)):this.detectAudioMetadataChange(d)&&(this.dispatchAudioMediaSegment(),this.dispatchAudioInitSegment(d)),o=u;var _=Math.floor(u),c={unit:d.data,length:d.data.byteLength,pts:_,dts:_};this.audio_track_.samples.push(c),this.audio_track_.length+=d.data.byteLength,u+=n}h.hasIncompleteData()&&(this.aac_last_incomplete_data_=h.getIncompleteData()),o&&(this.aac_last_sample_pts_=o)}},t.prototype.detectAudioMetadataChange=function(e){return e.audio_object_type!==this.audio_metadata_.audio_object_type?(s.a.v(this.TAG,"AAC: AudioObjectType changed from "+this.audio_metadata_.audio_object_type+" to "+e.audio_object_type),!0):e.sampling_freq_index!==this.audio_metadata_.sampling_freq_index?(s.a.v(this.TAG,"AAC: SamplingFrequencyIndex changed from "+this.audio_metadata_.sampling_freq_index+" to "+e.sampling_freq_index),!0):e.channel_config!==this.audio_metadata_.channel_config&&(s.a.v(this.TAG,"AAC: Channel configuration changed from "+this.audio_metadata_.channel_config+" to "+e.channel_config),!0)},t.prototype.dispatchAudioInitSegment=function(e){var t=new x(e),i={type:"audio"};i.id=this.audio_track_.id,i.timescale=1e3,i.duration=this.duration_,i.audioSampleRate=t.sampling_rate,i.channelCount=t.channel_count,i.codec=t.codec_mimetype,i.originalCodec=t.original_codec_mimetype,i.config=t.config,i.refSampleDuration=1024/i.audioSampleRate*i.timescale,0==this.audio_init_segment_dispatched_&&s.a.v(this.TAG,"Generated first AudioSpecificConfig for mimeType: "+i.codec),this.onTrackMetadata("audio",i),this.audio_init_segment_dispatched_=!0,this.video_metadata_changed_=!1;var n=this.media_info_;n.hasAudio=!0,n.audioCodec=i.originalCodec,n.audioSampleRate=i.audioSampleRate,n.audioChannelCount=i.channelCount,n.hasVideo&&n.videoCodec?n.mimeType='video/mp2t; codecs="'+n.videoCodec+","+n.audioCodec+'"':n.mimeType='video/mp2t; codecs="'+n.audioCodec+'"',n.isComplete()&&this.onMediaInfo(n)},t.prototype.dispatchPESPrivateDataDescriptor=function(e,t,i){var n=new U;n.pid=e,n.stream_type=t,n.descriptor=i,this.onPESPrivateDataDescriptor&&this.onPESPrivateDataDescriptor(n)},t.prototype.parsePESPrivateDataPayload=function(e,t,i,n,r){var s=new B;if(s.pid=n,s.stream_id=r,s.len=e.byteLength,s.data=e,null!=t){var a=Math.floor(t/this.timescale_);s.pts=a}else s.nearest_pts=this.aac_last_sample_pts_;if(null!=i){var o=Math.floor(i/this.timescale_);s.dts=o}this.onPESPrivateData&&this.onPESPrivateData(s)},t.prototype.parseTimedID3MetadataPayload=function(e,t,i,n,r){var s=new B;if(s.pid=n,s.stream_id=r,s.len=e.byteLength,s.data=e,null!=t){var a=Math.floor(t/this.timescale_);s.pts=a}if(null!=i){var 
o=Math.floor(i/this.timescale_);s.dts=o}this.onTimedID3Metadata&&this.onTimedID3Metadata(s)},t}(y),G=function(){function e(){}return e.init=function(){for(var t in e.types={avc1:[],avcC:[],btrt:[],dinf:[],dref:[],esds:[],ftyp:[],hdlr:[],mdat:[],mdhd:[],mdia:[],mfhd:[],minf:[],moof:[],moov:[],mp4a:[],mvex:[],mvhd:[],sdtp:[],stbl:[],stco:[],stsc:[],stsd:[],stsz:[],stts:[],tfdt:[],tfhd:[],traf:[],trak:[],trun:[],trex:[],tkhd:[],vmhd:[],smhd:[],".mp3":[]},e.types)e.types.hasOwnProperty(t)&&(e.types[t]=[t.charCodeAt(0),t.charCodeAt(1),t.charCodeAt(2),t.charCodeAt(3)]);var i=e.constants={};i.FTYP=new Uint8Array([105,115,111,109,0,0,0,1,105,115,111,109,97,118,99,49]),i.STSD_PREFIX=new Uint8Array([0,0,0,0,0,0,0,1]),i.STTS=new Uint8Array([0,0,0,0,0,0,0,0]),i.STSC=i.STCO=i.STTS,i.STSZ=new Uint8Array([0,0,0,0,0,0,0,0,0,0,0,0]),i.HDLR_VIDEO=new Uint8Array([0,0,0,0,0,0,0,0,118,105,100,101,0,0,0,0,0,0,0,0,0,0,0,0,86,105,100,101,111,72,97,110,100,108,101,114,0]),i.HDLR_AUDIO=new Uint8Array([0,0,0,0,0,0,0,0,115,111,117,110,0,0,0,0,0,0,0,0,0,0,0,0,83,111,117,110,100,72,97,110,100,108,101,114,0]),i.DREF=new Uint8Array([0,0,0,0,0,0,0,1,0,0,0,12,117,114,108,32,0,0,0,1]),i.SMHD=new Uint8Array([0,0,0,0,0,0,0,0]),i.VMHD=new Uint8Array([0,0,0,1,0,0,0,0,0,0,0,0])},e.box=function(e){for(var t=8,i=null,n=Array.prototype.slice.call(arguments,1),r=n.length,s=0;s>>24&255,i[1]=t>>>16&255,i[2]=t>>>8&255,i[3]=255&t,i.set(e,4);var a=8;for(s=0;s>>24&255,t>>>16&255,t>>>8&255,255&t,i>>>24&255,i>>>16&255,i>>>8&255,255&i,0,1,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,64,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,255,255,255,255]))},e.trak=function(t){return e.box(e.types.trak,e.tkhd(t),e.mdia(t))},e.tkhd=function(t){var i=t.id,n=t.duration,r=t.presentWidth,s=t.presentHeight;return e.box(e.types.tkhd,new Uint8Array([0,0,0,7,0,0,0,0,0,0,0,0,i>>>24&255,i>>>16&255,i>>>8&255,255&i,0,0,0,0,n>>>24&255,n>>>16&255,n>>>8&255,255&n,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,64,0,0,0,r>>>8&255,255&r,0,0,s>>>8&255,255&s,0,0]))},e.mdia=function(t){return e.box(e.types.mdia,e.mdhd(t),e.hdlr(t),e.minf(t))},e.mdhd=function(t){var i=t.timescale,n=t.duration;return e.box(e.types.mdhd,new Uint8Array([0,0,0,0,0,0,0,0,0,0,0,0,i>>>24&255,i>>>16&255,i>>>8&255,255&i,n>>>24&255,n>>>16&255,n>>>8&255,255&n,85,196,0,0]))},e.hdlr=function(t){var i=null;return i="audio"===t.type?e.constants.HDLR_AUDIO:e.constants.HDLR_VIDEO,e.box(e.types.hdlr,i)},e.minf=function(t){var i=null;return i="audio"===t.type?e.box(e.types.smhd,e.constants.SMHD):e.box(e.types.vmhd,e.constants.VMHD),e.box(e.types.minf,i,e.dinf(),e.stbl(t))},e.dinf=function(){return e.box(e.types.dinf,e.box(e.types.dref,e.constants.DREF))},e.stbl=function(t){return e.box(e.types.stbl,e.stsd(t),e.box(e.types.stts,e.constants.STTS),e.box(e.types.stsc,e.constants.STSC),e.box(e.types.stsz,e.constants.STSZ),e.box(e.types.stco,e.constants.STCO))},e.stsd=function(t){return"audio"===t.type?"mp3"===t.codec?e.box(e.types.stsd,e.constants.STSD_PREFIX,e.mp3(t)):e.box(e.types.stsd,e.constants.STSD_PREFIX,e.mp4a(t)):e.box(e.types.stsd,e.constants.STSD_PREFIX,e.avc1(t))},e.mp3=function(t){var i=t.channelCount,n=t.audioSampleRate,r=new Uint8Array([0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,i,0,16,0,0,0,0,n>>>8&255,255&n,0,0]);return e.box(e.types[".mp3"],r)},e.mp4a=function(t){var i=t.channelCount,n=t.audioSampleRate,r=new Uint8Array([0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,i,0,16,0,0,0,0,n>>>8&255,255&n,0,0]);return 
e.box(e.types.mp4a,r,e.esds(t))},e.esds=function(t){var i=t.config||[],n=i.length,r=new Uint8Array([0,0,0,0,3,23+n,0,1,0,4,15+n,64,21,0,0,0,0,0,0,0,0,0,0,0,5].concat([n]).concat(i).concat([6,1,2]));return e.box(e.types.esds,r)},e.avc1=function(t){var i=t.avcc,n=t.codecWidth,r=t.codecHeight,s=new Uint8Array([0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,n>>>8&255,255&n,r>>>8&255,255&r,0,72,0,0,0,72,0,0,0,0,0,0,0,1,10,120,113,113,47,102,108,118,46,106,115,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,24,255,255]);return e.box(e.types.avc1,s,e.box(e.types.avcC,i))},e.mvex=function(t){return e.box(e.types.mvex,e.trex(t))},e.trex=function(t){var i=t.id,n=new Uint8Array([0,0,0,0,i>>>24&255,i>>>16&255,i>>>8&255,255&i,0,0,0,1,0,0,0,0,0,0,0,0,0,1,0,1]);return e.box(e.types.trex,n)},e.moof=function(t,i){return e.box(e.types.moof,e.mfhd(t.sequenceNumber),e.traf(t,i))},e.mfhd=function(t){var i=new Uint8Array([0,0,0,0,t>>>24&255,t>>>16&255,t>>>8&255,255&t]);return e.box(e.types.mfhd,i)},e.traf=function(t,i){var n=t.id,r=e.box(e.types.tfhd,new Uint8Array([0,0,0,0,n>>>24&255,n>>>16&255,n>>>8&255,255&n])),s=e.box(e.types.tfdt,new Uint8Array([0,0,0,0,i>>>24&255,i>>>16&255,i>>>8&255,255&i])),a=e.sdtp(t),o=e.trun(t,a.byteLength+16+16+8+16+8+8);return e.box(e.types.traf,r,s,o,a)},e.sdtp=function(t){for(var i=t.samples||[],n=i.length,r=new Uint8Array(4+n),s=0;s>>24&255,r>>>16&255,r>>>8&255,255&r,i>>>24&255,i>>>16&255,i>>>8&255,255&i],0);for(var o=0;o>>24&255,h>>>16&255,h>>>8&255,255&h,d>>>24&255,d>>>16&255,d>>>8&255,255&d,u.isLeading<<2|u.dependsOn,u.isDependedOn<<6|u.hasRedundancy<<4|u.isNonSync,0,0,_>>>24&255,_>>>16&255,_>>>8&255,255&_],12+16*o)}return e.box(e.types.trun,a)},e.mdat=function(t){return e.box(e.types.mdat,t)},e}();G.init();var V=G,j=function(){function e(){}return e.getSilentFrame=function(e,t){if("mp4a.40.2"===e){if(1===t)return new Uint8Array([0,200,0,128,35,128]);if(2===t)return new Uint8Array([33,0,73,144,2,25,0,35,128]);if(3===t)return new Uint8Array([0,200,0,128,32,132,1,38,64,8,100,0,142]);if(4===t)return new Uint8Array([0,200,0,128,32,132,1,38,64,8,100,0,128,44,128,8,2,56]);if(5===t)return new Uint8Array([0,200,0,128,32,132,1,38,64,8,100,0,130,48,4,153,0,33,144,2,56]);if(6===t)return new Uint8Array([0,200,0,128,32,132,1,38,64,8,100,0,130,48,4,153,0,33,144,2,0,178,0,32,8,224])}else{if(1===t)return new Uint8Array([1,64,34,128,163,78,230,128,186,8,0,0,0,28,6,241,193,10,90,90,90,90,90,90,90,90,90,90,90,90,90,90,90,90,90,90,90,90,90,90,90,90,90,90,90,90,90,90,90,90,90,90,90,90,90,90,90,90,94]);if(2===t)return new Uint8Array([1,64,34,128,163,94,230,128,186,8,0,0,0,0,149,0,6,241,161,10,90,90,90,90,90,90,90,90,90,90,90,90,90,90,90,90,90,90,90,90,90,90,90,90,90,90,90,90,90,90,90,90,90,90,90,90,90,90,94]);if(3===t)return new Uint8Array([1,64,34,128,163,94,230,128,186,8,0,0,0,0,149,0,6,241,161,10,90,90,90,90,90,90,90,90,90,90,90,90,90,90,90,90,90,90,90,90,90,90,90,90,90,90,90,90,90,90,90,90,90,90,90,90,90,90,94])}return null},e}(),z=i(7),H=function(){function e(e){this.TAG="MP4Remuxer",this._config=e,this._isLive=!0===e.isLive,this._dtsBase=-1,this._dtsBaseInited=!1,this._audioDtsBase=1/0,this._videoDtsBase=1/0,this._audioNextDts=void 0,this._videoNextDts=void 0,this._audioStashedLastSample=null,this._videoStashedLastSample=null,this._audioMeta=null,this._videoMeta=null,this._audioSegmentInfoList=new z.c("audio"),this._videoSegmentInfoList=new 
z.c("video"),this._onInitSegment=null,this._onMediaSegment=null,this._forceFirstIDR=!(!a.a.chrome||!(a.a.version.major<50||50===a.a.version.major&&a.a.version.build<2661)),this._fillSilentAfterSeek=a.a.msedge||a.a.msie,this._mp3UseMpegAudio=!a.a.firefox,this._fillAudioTimestampGap=this._config.fixAudioTimestampGap}return e.prototype.destroy=function(){this._dtsBase=-1,this._dtsBaseInited=!1,this._audioMeta=null,this._videoMeta=null,this._audioSegmentInfoList.clear(),this._audioSegmentInfoList=null,this._videoSegmentInfoList.clear(),this._videoSegmentInfoList=null,this._onInitSegment=null,this._onMediaSegment=null},e.prototype.bindDataSource=function(e){return e.onDataAvailable=this.remux.bind(this),e.onTrackMetadata=this._onTrackMetadataReceived.bind(this),this},Object.defineProperty(e.prototype,"onInitSegment",{get:function(){return this._onInitSegment},set:function(e){this._onInitSegment=e},enumerable:!1,configurable:!0}),Object.defineProperty(e.prototype,"onMediaSegment",{get:function(){return this._onMediaSegment},set:function(e){this._onMediaSegment=e},enumerable:!1,configurable:!0}),e.prototype.insertDiscontinuity=function(){this._audioNextDts=this._videoNextDts=void 0},e.prototype.seek=function(e){this._audioStashedLastSample=null,this._videoStashedLastSample=null,this._videoSegmentInfoList.clear(),this._audioSegmentInfoList.clear()},e.prototype.remux=function(e,t){if(!this._onMediaSegment)throw new _.a("MP4Remuxer: onMediaSegment callback must be specificed!");this._dtsBaseInited||this._calculateDtsBase(e,t),t&&this._remuxVideo(t),e&&this._remuxAudio(e)},e.prototype._onTrackMetadataReceived=function(e,t){var i=null,n="mp4",r=t.codec;if("audio"===e)this._audioMeta=t,"mp3"===t.codec&&this._mp3UseMpegAudio?(n="mpeg",r="",i=new Uint8Array):i=V.generateInitSegment(t);else{if("video"!==e)return;this._videoMeta=t,i=V.generateInitSegment(t)}if(!this._onInitSegment)throw new _.a("MP4Remuxer: onInitSegment callback must be specified!");this._onInitSegment(e,{type:e,data:i.buffer,codec:r,container:e+"/"+n,mediaDuration:t.duration})},e.prototype._calculateDtsBase=function(e,t){this._dtsBaseInited||(e&&e.samples&&e.samples.length&&(this._audioDtsBase=e.samples[0].dts),t&&t.samples&&t.samples.length&&(this._videoDtsBase=t.samples[0].dts),this._dtsBase=Math.min(this._audioDtsBase,this._videoDtsBase),this._dtsBaseInited=!0)},e.prototype.getTimestampBase=function(){return this._dtsBaseInited?this._dtsBase:0},e.prototype.flushStashedSamples=function(){var e=this._videoStashedLastSample,t=this._audioStashedLastSample,i={type:"video",id:1,sequenceNumber:0,samples:[],length:0};null!=e&&(i.samples.push(e),i.length=e.length);var n={type:"audio",id:2,sequenceNumber:0,samples:[],length:0};null!=t&&(n.samples.push(t),n.length=t.length),this._videoStashedLastSample=null,this._audioStashedLastSample=null,this._remuxVideo(i,!0),this._remuxAudio(n,!0)},e.prototype._remuxAudio=function(e,t){if(null!=this._audioMeta){var i,n=e,r=n.samples,o=void 0,h=-1,d=this._audioMeta.refSampleDuration,u="mp3"===this._audioMeta.codec&&this._mp3UseMpegAudio,_=this._dtsBaseInited&&void 0===this._audioNextDts,c=!1;if(r&&0!==r.length&&(1!==r.length||t)){var l=0,f=null,p=0;u?(l=0,p=n.length):(l=8,p=8+n.length);var m=null;if(r.length>1&&(p-=(m=r.pop()).length),null!=this._audioStashedLastSample){var g=this._audioStashedLastSample;this._audioStashedLastSample=null,r.unshift(g),p+=g.length}null!=m&&(this._audioStashedLastSample=m);var v=r[0].dts-this._dtsBase;if(this._audioNextDts)o=v-this._audioNextDts;else 
if(this._audioSegmentInfoList.isEmpty())o=0,this._fillSilentAfterSeek&&!this._videoSegmentInfoList.isEmpty()&&"mp3"!==this._audioMeta.originalCodec&&(c=!0);else{var y=this._audioSegmentInfoList.getLastSampleBefore(v);if(null!=y){var b=v-(y.originalDts+y.duration);b<=3&&(b=0),o=v-(y.dts+y.duration+b)}else o=0}if(c){var E=v-o,S=this._videoSegmentInfoList.getLastSegmentBefore(v);if(null!=S&&S.beginDts=3*d&&this._fillAudioTimestampGap&&!a.a.safari){k=!0;var P,M=Math.floor(o/d);s.a.w(this.TAG,"Large audio timestamp gap detected, may cause AV sync to drift. Silent frames will be generated to avoid unsync.\noriginalDts: "+D+" ms, curRefDts: "+O+" ms, dtsCorrection: "+Math.round(o)+" ms, generate: "+M+" frames"),A=Math.floor(O),I=Math.floor(O+d)-A,null==(P=j.getSilentFrame(this._audioMeta.originalCodec,this._audioMeta.channelCount))&&(s.a.w(this.TAG,"Unable to generate silent frame for "+this._audioMeta.originalCodec+" with "+this._audioMeta.channelCount+" channels, repeat last frame"),P=w),C=[];for(var x=0;x=1?L[L.length-1].duration:Math.floor(d);this._audioNextDts=A+I}-1===h&&(h=A),L.push({dts:A,pts:A,cts:0,unit:g.unit,size:g.unit.byteLength,duration:I,originalDts:D,flags:{isLeading:0,dependsOn:1,isDependedOn:0,hasRedundancy:0}}),k&&L.push.apply(L,C)}}if(0===L.length)return n.samples=[],void(n.length=0);u?f=new Uint8Array(p):((f=new Uint8Array(p))[0]=p>>>24&255,f[1]=p>>>16&255,f[2]=p>>>8&255,f[3]=255&p,f.set(V.types.mdat,4));for(T=0;T1&&(_-=(c=s.pop()).length),null!=this._videoStashedLastSample){var l=this._videoStashedLastSample;this._videoStashedLastSample=null,s.unshift(l),_+=l.length}null!=c&&(this._videoStashedLastSample=c);var f=s[0].dts-this._dtsBase;if(this._videoNextDts)a=f-this._videoNextDts;else if(this._videoSegmentInfoList.isEmpty())a=0;else{var p=this._videoSegmentInfoList.getLastSampleBefore(f);if(null!=p){var m=f-(p.originalDts+p.duration);m<=3&&(m=0),a=f-(p.dts+p.duration+m)}else a=0}for(var g=new z.b,v=[],y=0;y=1?v[v.length-1].duration:Math.floor(this._videoMeta.refSampleDuration);if(E){var T=new z.d(S,R,L,l.dts,!0);T.fileposition=l.fileposition,g.appendSyncPoint(T)}v.push({dts:S,pts:R,cts:A,units:l.units,size:l.length,isKeyframe:E,duration:L,originalDts:b,flags:{isLeading:0,dependsOn:E?2:1,isDependedOn:E?1:0,hasRedundancy:0,isNonSync:E?0:1}})}(u=new Uint8Array(_))[0]=_>>>24&255,u[1]=_>>>16&255,u[2]=_>>>8&255,u[3]=255&_,u.set(V.types.mdat,4);for(y=0;y0)this._demuxer.bindDataSource(this._ioctl),this._demuxer.timestampBase=this._mediaDataSource.segments[this._currentSegmentIndex].timestampBase,r=this._demuxer.parseChunks(e,t);else if((n=F.probe(e)).match){var a=this._demuxer=new F(n,this._config);this._remuxer||(this._remuxer=new H(this._config)),a.onError=this._onDemuxException.bind(this),a.onMediaInfo=this._onMediaInfo.bind(this),a.onMetaDataArrived=this._onMetaDataArrived.bind(this),a.onTimedID3Metadata=this._onTimedID3Metadata.bind(this),a.onPESPrivateDataDescriptor=this._onPESPrivateDataDescriptor.bind(this),a.onPESPrivateData=this._onPESPrivateData.bind(this),this._remuxer.bindDataSource(this._demuxer),this._demuxer.bindDataSource(this._ioctl),this._remuxer.onInitSegment=this._onRemuxerInitSegmentArrival.bind(this),this._remuxer.onMediaSegment=this._onRemuxerMediaSegmentArrival.bind(this),r=this._demuxer.parseChunks(e,t)}else if((n=v.probe(e)).match){this._demuxer=new v(n,this._config),this._remuxer||(this._remuxer=new H(this._config));var o=this._mediaDataSource;null==o.duration||isNaN(o.duration)||(this._demuxer.overridedDuration=o.duration),"boolean"==typeof 
o.hasAudio&&(this._demuxer.overridedHasAudio=o.hasAudio),"boolean"==typeof o.hasVideo&&(this._demuxer.overridedHasVideo=o.hasVideo),this._demuxer.timestampBase=o.segments[this._currentSegmentIndex].timestampBase,this._demuxer.onError=this._onDemuxException.bind(this),this._demuxer.onMediaInfo=this._onMediaInfo.bind(this),this._demuxer.onMetaDataArrived=this._onMetaDataArrived.bind(this),this._demuxer.onScriptDataArrived=this._onScriptDataArrived.bind(this),this._remuxer.bindDataSource(this._demuxer.bindDataSource(this._ioctl)),this._remuxer.onInitSegment=this._onRemuxerInitSegmentArrival.bind(this),this._remuxer.onMediaSegment=this._onRemuxerMediaSegmentArrival.bind(this),r=this._demuxer.parseChunks(e,t)}else n=null,s.a.e(this.TAG,"Non MPEG-TS/FLV, Unsupported media type!"),Promise.resolve().then((function(){i._internalAbort()})),this._emitter.emit(K.a.DEMUX_ERROR,m.a.FORMAT_UNSUPPORTED,"Non MPEG-TS/FLV, Unsupported media type!"),r=0;return r},e.prototype._onMediaInfo=function(e){var t=this;null==this._mediaInfo&&(this._mediaInfo=Object.assign({},e),this._mediaInfo.keyframesIndex=null,this._mediaInfo.segments=[],this._mediaInfo.segmentCount=this._mediaDataSource.segments.length,Object.setPrototypeOf(this._mediaInfo,o.a.prototype));var i=Object.assign({},e);Object.setPrototypeOf(i,o.a.prototype),this._mediaInfo.segments[this._currentSegmentIndex]=i,this._reportSegmentMediaInfo(this._currentSegmentIndex),null!=this._pendingSeekTime&&Promise.resolve().then((function(){var e=t._pendingSeekTime;t._pendingSeekTime=null,t.seek(e)}))},e.prototype._onMetaDataArrived=function(e){this._emitter.emit(K.a.METADATA_ARRIVED,e)},e.prototype._onScriptDataArrived=function(e){this._emitter.emit(K.a.SCRIPTDATA_ARRIVED,e)},e.prototype._onTimedID3Metadata=function(e){var t=this._remuxer.getTimestampBase();null!=e.pts&&(e.pts-=t),null!=e.dts&&(e.dts-=t),this._emitter.emit(K.a.TIMED_ID3_METADATA_ARRIVED,e)},e.prototype._onPESPrivateDataDescriptor=function(e){this._emitter.emit(K.a.PES_PRIVATE_DATA_DESCRIPTOR,e)},e.prototype._onPESPrivateData=function(e){var t=this._remuxer.getTimestampBase();null!=e.pts&&(e.pts-=t),null!=e.nearest_pts&&(e.nearest_pts-=t),null!=e.dts&&(e.dts-=t),this._emitter.emit(K.a.PES_PRIVATE_DATA_ARRIVED,e)},e.prototype._onIOSeeked=function(){this._remuxer.insertDiscontinuity()},e.prototype._onIOComplete=function(e){var t=e+1;t0&&i[0].originalDts===n&&(n=i[0].pts),this._emitter.emit(K.a.RECOMMEND_SEEKPOINT,n)}},e.prototype._enableStatisticsReporter=function(){null==this._statisticsReporter&&(this._statisticsReporter=self.setInterval(this._reportStatisticsInfo.bind(this),this._config.statisticsInfoReportInterval))},e.prototype._disableStatisticsReporter=function(){this._statisticsReporter&&(self.clearInterval(this._statisticsReporter),this._statisticsReporter=null)},e.prototype._reportSegmentMediaInfo=function(e){var t=this._mediaInfo.segments[e],i=Object.assign({},t);i.duration=this._mediaInfo.duration,i.segmentCount=this._mediaInfo.segmentCount,delete i.segments,delete i.keyframesIndex,this._emitter.emit(K.a.MEDIA_INFO,i)},e.prototype._reportStatisticsInfo=function(){var e={};e.url=this._ioctl.currentURL,e.hasRedirect=this._ioctl.hasRedirect,e.hasRedirect&&(e.redirectedURL=this._ioctl.currentRedirectedURL),e.speed=this._ioctl.currentSpeed,e.loaderType=this._ioctl.loaderType,e.currentSegmentIndex=this._currentSegmentIndex,e.totalSegmentCount=this._mediaDataSource.segments.length,this._emitter.emit(K.a.STATISTICS_INFO,e)},e}();t.a=W},function(e,t,i){"use strict";var 
n,r=i(0),s=function(){function e(){this._firstCheckpoint=0,this._lastCheckpoint=0,this._intervalBytes=0,this._totalBytes=0,this._lastSecondBytes=0,self.performance&&self.performance.now?this._now=self.performance.now.bind(self.performance):this._now=Date.now}return e.prototype.reset=function(){this._firstCheckpoint=this._lastCheckpoint=0,this._totalBytes=this._intervalBytes=0,this._lastSecondBytes=0},e.prototype.addBytes=function(e){0===this._firstCheckpoint?(this._firstCheckpoint=this._now(),this._lastCheckpoint=this._firstCheckpoint,this._intervalBytes+=e,this._totalBytes+=e):this._now()-this._lastCheckpoint<1e3?(this._intervalBytes+=e,this._totalBytes+=e):(this._lastSecondBytes=this._intervalBytes,this._intervalBytes=e,this._totalBytes+=e,this._lastCheckpoint=this._now())},Object.defineProperty(e.prototype,"currentKBps",{get:function(){this.addBytes(0);var e=(this._now()-this._lastCheckpoint)/1e3;return 0==e&&(e=1),this._intervalBytes/e/1024},enumerable:!1,configurable:!0}),Object.defineProperty(e.prototype,"lastSecondKBps",{get:function(){return this.addBytes(0),0!==this._lastSecondBytes?this._lastSecondBytes/1024:this._now()-this._lastCheckpoint>=500?this.currentKBps:0},enumerable:!1,configurable:!0}),Object.defineProperty(e.prototype,"averageKBps",{get:function(){var e=(this._now()-this._firstCheckpoint)/1e3;return this._totalBytes/e/1024},enumerable:!1,configurable:!0}),e}(),a=i(2),o=i(4),h=i(3),d=(n=function(e,t){return(n=Object.setPrototypeOf||{__proto__:[]}instanceof Array&&function(e,t){e.__proto__=t}||function(e,t){for(var i in t)t.hasOwnProperty(i)&&(e[i]=t[i])})(e,t)},function(e,t){function i(){this.constructor=e}n(e,t),e.prototype=null===t?Object.create(t):(i.prototype=t.prototype,new i)}),u=function(e){function t(t,i){var n=e.call(this,"fetch-stream-loader")||this;return n.TAG="FetchStreamLoader",n._seekHandler=t,n._config=i,n._needStash=!0,n._requestAbort=!1,n._abortController=null,n._contentLength=null,n._receivedLength=0,n}return d(t,e),t.isSupported=function(){try{var e=o.a.msedge&&o.a.version.minor>=15048,t=!o.a.msedge||e;return self.fetch&&self.ReadableStream&&t}catch(e){return!1}},t.prototype.destroy=function(){this.isWorking()&&this.abort(),e.prototype.destroy.call(this)},t.prototype.open=function(e,t){var i=this;this._dataSource=e,this._range=t;var n=e.url;this._config.reuseRedirectedURL&&null!=e.redirectedURL&&(n=e.redirectedURL);var r=this._seekHandler.getConfig(n,t),s=new self.Headers;if("object"==typeof r.headers){var o=r.headers;for(var d in o)o.hasOwnProperty(d)&&s.append(d,o[d])}var u={method:"GET",headers:s,mode:"cors",cache:"default",referrerPolicy:"no-referrer-when-downgrade"};if("object"==typeof this._config.headers)for(var d in this._config.headers)s.append(d,this._config.headers[d]);!1===e.cors&&(u.mode="same-origin"),e.withCredentials&&(u.credentials="include"),e.referrerPolicy&&(u.referrerPolicy=e.referrerPolicy),self.AbortController&&(this._abortController=new self.AbortController,u.signal=this._abortController.signal),this._status=a.c.kConnecting,self.fetch(r.url,u).then((function(e){if(i._requestAbort)return i._status=a.c.kIdle,void e.body.cancel();if(e.ok&&e.status>=200&&e.status<=299){if(e.url!==r.url&&i._onURLRedirect){var t=i._seekHandler.removeURLParameters(e.url);i._onURLRedirect(t)}var n=e.headers.get("Content-Length");return null!=n&&(i._contentLength=parseInt(n),0!==i._contentLength&&i._onContentLengthKnown&&i._onContentLengthKnown(i._contentLength)),i._pump.call(i,e.body.getReader())}if(i._status=a.c.kError,!i._onError)throw new 
h.d("FetchStreamLoader: Http code invalid, "+e.status+" "+e.statusText);i._onError(a.b.HTTP_STATUS_CODE_INVALID,{code:e.status,msg:e.statusText})})).catch((function(e){if(!i._abortController||!i._abortController.signal.aborted){if(i._status=a.c.kError,!i._onError)throw e;i._onError(a.b.EXCEPTION,{code:-1,msg:e.message})}}))},t.prototype.abort=function(){if(this._requestAbort=!0,(this._status!==a.c.kBuffering||!o.a.chrome)&&this._abortController)try{this._abortController.abort()}catch(e){}},t.prototype._pump=function(e){var t=this;return e.read().then((function(i){if(i.done)if(null!==t._contentLength&&t._receivedLength299)){if(this._status=a.c.kError,!this._onError)throw new h.d("MozChunkedLoader: Http code invalid, "+t.status+" "+t.statusText);this._onError(a.b.HTTP_STATUS_CODE_INVALID,{code:t.status,msg:t.statusText})}else this._status=a.c.kBuffering}},t.prototype._onProgress=function(e){if(this._status!==a.c.kError){null===this._contentLength&&null!==e.total&&0!==e.total&&(this._contentLength=e.total,this._onContentLengthKnown&&this._onContentLengthKnown(this._contentLength));var t=e.target.response,i=this._range.from+this._receivedLength;this._receivedLength+=t.byteLength,this._onDataArrival&&this._onDataArrival(t,i,this._receivedLength)}},t.prototype._onLoadEnd=function(e){!0!==this._requestAbort?this._status!==a.c.kError&&(this._status=a.c.kComplete,this._onComplete&&this._onComplete(this._range.from,this._range.from+this._receivedLength-1)):this._requestAbort=!1},t.prototype._onXhrError=function(e){this._status=a.c.kError;var t=0,i=null;if(this._contentLength&&e.loaded=this._contentLength&&(i=this._range.from+this._contentLength-1),this._currentRequestRange={from:t,to:i},this._internalOpen(this._dataSource,this._currentRequestRange)},t.prototype._internalOpen=function(e,t){this._lastTimeLoaded=0;var i=e.url;this._config.reuseRedirectedURL&&(null!=this._currentRedirectedURL?i=this._currentRedirectedURL:null!=e.redirectedURL&&(i=e.redirectedURL));var n=this._seekHandler.getConfig(i,t);this._currentRequestURL=n.url;var r=this._xhr=new XMLHttpRequest;if(r.open("GET",n.url,!0),r.responseType="arraybuffer",r.onreadystatechange=this._onReadyStateChange.bind(this),r.onprogress=this._onProgress.bind(this),r.onload=this._onLoad.bind(this),r.onerror=this._onXhrError.bind(this),e.withCredentials&&(r.withCredentials=!0),"object"==typeof n.headers){var s=n.headers;for(var a in s)s.hasOwnProperty(a)&&r.setRequestHeader(a,s[a])}if("object"==typeof this._config.headers){s=this._config.headers;for(var a in s)s.hasOwnProperty(a)&&r.setRequestHeader(a,s[a])}r.send()},t.prototype.abort=function(){this._requestAbort=!0,this._internalAbort(),this._status=a.c.kComplete},t.prototype._internalAbort=function(){this._xhr&&(this._xhr.onreadystatechange=null,this._xhr.onprogress=null,this._xhr.onload=null,this._xhr.onerror=null,this._xhr.abort(),this._xhr=null)},t.prototype._onReadyStateChange=function(e){var t=e.target;if(2===t.readyState){if(null!=t.responseURL){var i=this._seekHandler.removeURLParameters(t.responseURL);t.responseURL!==this._currentRequestURL&&i!==this._currentRedirectedURL&&(this._currentRedirectedURL=i,this._onURLRedirect&&this._onURLRedirect(i))}if(t.status>=200&&t.status<=299){if(this._waitForTotalLength)return;this._status=a.c.kBuffering}else{if(this._status=a.c.kError,!this._onError)throw new h.d("RangeLoader: Http code invalid, "+t.status+" 
"+t.statusText);this._onError(a.b.HTTP_STATUS_CODE_INVALID,{code:t.status,msg:t.statusText})}}},t.prototype._onProgress=function(e){if(this._status!==a.c.kError){if(null===this._contentLength){var t=!1;if(this._waitForTotalLength){this._waitForTotalLength=!1,this._totalLengthReceived=!0,t=!0;var i=e.total;this._internalAbort(),null!=i&0!==i&&(this._totalLength=i)}if(-1===this._range.to?this._contentLength=this._totalLength-this._range.from:this._contentLength=this._range.to-this._range.from+1,t)return void this._openSubRange();this._onContentLengthKnown&&this._onContentLengthKnown(this._contentLength)}var n=e.loaded-this._lastTimeLoaded;this._lastTimeLoaded=e.loaded,this._speedSampler.addBytes(n)}},t.prototype._normalizeSpeed=function(e){var t=this._chunkSizeKBList,i=t.length-1,n=0,r=0,s=i;if(e=t[n]&&e=3&&(t=this._speedSampler.currentKBps)),0!==t){var i=this._normalizeSpeed(t);this._currentSpeedNormalized!==i&&(this._currentSpeedNormalized=i,this._currentChunkSizeKB=i)}var n=e.target.response,r=this._range.from+this._receivedLength;this._receivedLength+=n.byteLength;var s=!1;null!=this._contentLength&&this._receivedLength0&&this._receivedLength0)for(var s=i.split("&"),a=0;a0;o[0]!==this._startName&&o[0]!==this._endName&&(h&&(r+="&"),r+=s[a])}return 0===r.length?t:t+"?"+r},e}(),y=function(){function e(e,t,i){this.TAG="IOController",this._config=t,this._extraData=i,this._stashInitialSize=65536,null!=t.stashInitialSize&&t.stashInitialSize>0&&(this._stashInitialSize=t.stashInitialSize),this._stashUsed=0,this._stashSize=this._stashInitialSize,this._bufferSize=3145728,this._stashBuffer=new ArrayBuffer(this._bufferSize),this._stashByteStart=0,this._enableStash=!0,!1===t.enableStashBuffer&&(this._enableStash=!1),this._loader=null,this._loaderClass=null,this._seekHandler=null,this._dataSource=e,this._isWebSocketURL=/wss?:\/\/(.+?)/.test(e.url),this._refTotalLength=e.filesize?e.filesize:null,this._totalLength=this._refTotalLength,this._fullRequestFlag=!1,this._currentRange=null,this._redirectedURL=null,this._speedNormalized=0,this._speedSampler=new s,this._speedNormalizeList=[32,64,96,128,192,256,384,512,768,1024,1536,2048,3072,4096],this._isEarlyEofReconnecting=!1,this._paused=!1,this._resumeFrom=0,this._onDataArrival=null,this._onSeeked=null,this._onError=null,this._onComplete=null,this._onRedirect=null,this._onRecoveredEarlyEof=null,this._selectSeekHandler(),this._selectLoader(),this._createLoader()}return e.prototype.destroy=function(){this._loader.isWorking()&&this._loader.abort(),this._loader.destroy(),this._loader=null,this._loaderClass=null,this._dataSource=null,this._stashBuffer=null,this._stashUsed=this._stashSize=this._bufferSize=this._stashByteStart=0,this._currentRange=null,this._speedSampler=null,this._isEarlyEofReconnecting=!1,this._onDataArrival=null,this._onSeeked=null,this._onError=null,this._onComplete=null,this._onRedirect=null,this._onRecoveredEarlyEof=null,this._extraData=null},e.prototype.isWorking=function(){return this._loader&&this._loader.isWorking()&&!this._paused},e.prototype.isPaused=function(){return this._paused},Object.defineProperty(e.prototype,"status",{get:function(){return this._loader.status},enumerable:!1,configurable:!0}),Object.defineProperty(e.prototype,"extraData",{get:function(){return this._extraData},set:function(e){this._extraData=e},enumerable:!1,configurable:!0}),Object.defineProperty(e.prototype,"onDataArrival",{get:function(){return 
this._onDataArrival},set:function(e){this._onDataArrival=e},enumerable:!1,configurable:!0}),Object.defineProperty(e.prototype,"onSeeked",{get:function(){return this._onSeeked},set:function(e){this._onSeeked=e},enumerable:!1,configurable:!0}),Object.defineProperty(e.prototype,"onError",{get:function(){return this._onError},set:function(e){this._onError=e},enumerable:!1,configurable:!0}),Object.defineProperty(e.prototype,"onComplete",{get:function(){return this._onComplete},set:function(e){this._onComplete=e},enumerable:!1,configurable:!0}),Object.defineProperty(e.prototype,"onRedirect",{get:function(){return this._onRedirect},set:function(e){this._onRedirect=e},enumerable:!1,configurable:!0}),Object.defineProperty(e.prototype,"onRecoveredEarlyEof",{get:function(){return this._onRecoveredEarlyEof},set:function(e){this._onRecoveredEarlyEof=e},enumerable:!1,configurable:!0}),Object.defineProperty(e.prototype,"currentURL",{get:function(){return this._dataSource.url},enumerable:!1,configurable:!0}),Object.defineProperty(e.prototype,"hasRedirect",{get:function(){return null!=this._redirectedURL||null!=this._dataSource.redirectedURL},enumerable:!1,configurable:!0}),Object.defineProperty(e.prototype,"currentRedirectedURL",{get:function(){return this._redirectedURL||this._dataSource.redirectedURL},enumerable:!1,configurable:!0}),Object.defineProperty(e.prototype,"currentSpeed",{get:function(){return this._loaderClass===f?this._loader.currentSpeed:this._speedSampler.lastSecondKBps},enumerable:!1,configurable:!0}),Object.defineProperty(e.prototype,"loaderType",{get:function(){return this._loader.type},enumerable:!1,configurable:!0}),e.prototype._selectSeekHandler=function(){var e=this._config;if("range"===e.seekType)this._seekHandler=new g(this._config.rangeLoadZeroStart);else if("param"===e.seekType){var t=e.seekParamStart||"bstart",i=e.seekParamEnd||"bend";this._seekHandler=new v(t,i)}else{if("custom"!==e.seekType)throw new h.b("Invalid seekType in config: "+e.seekType);if("function"!=typeof e.customSeekHandler)throw new h.b("Custom seekType specified in config but invalid customSeekHandler!");this._seekHandler=new e.customSeekHandler}},e.prototype._selectLoader=function(){if(null!=this._config.customLoader)this._loaderClass=this._config.customLoader;else if(this._isWebSocketURL)this._loaderClass=m;else if(u.isSupported())this._loaderClass=u;else if(c.isSupported())this._loaderClass=c;else{if(!f.isSupported())throw new h.d("Your browser doesn't support xhr with arraybuffer responseType!");this._loaderClass=f}},e.prototype._createLoader=function(){this._loader=new 
this._loaderClass(this._seekHandler,this._config),!1===this._loader.needStashBuffer&&(this._enableStash=!1),this._loader.onContentLengthKnown=this._onContentLengthKnown.bind(this),this._loader.onURLRedirect=this._onURLRedirect.bind(this),this._loader.onDataArrival=this._onLoaderChunkArrival.bind(this),this._loader.onComplete=this._onLoaderComplete.bind(this),this._loader.onError=this._onLoaderError.bind(this)},e.prototype.open=function(e){this._currentRange={from:0,to:-1},e&&(this._currentRange.from=e),this._speedSampler.reset(),e||(this._fullRequestFlag=!0),this._loader.open(this._dataSource,Object.assign({},this._currentRange))},e.prototype.abort=function(){this._loader.abort(),this._paused&&(this._paused=!1,this._resumeFrom=0)},e.prototype.pause=function(){this.isWorking()&&(this._loader.abort(),0!==this._stashUsed?(this._resumeFrom=this._stashByteStart,this._currentRange.to=this._stashByteStart-1):this._resumeFrom=this._currentRange.to+1,this._stashUsed=0,this._stashByteStart=0,this._paused=!0)},e.prototype.resume=function(){if(this._paused){this._paused=!1;var e=this._resumeFrom;this._resumeFrom=0,this._internalSeek(e,!0)}},e.prototype.seek=function(e){this._paused=!1,this._stashUsed=0,this._stashByteStart=0,this._internalSeek(e,!0)},e.prototype._internalSeek=function(e,t){this._loader.isWorking()&&this._loader.abort(),this._flushStashBuffer(t),this._loader.destroy(),this._loader=null;var i={from:e,to:-1};this._currentRange={from:i.from,to:-1},this._speedSampler.reset(),this._stashSize=this._stashInitialSize,this._createLoader(),this._loader.open(this._dataSource,i),this._onSeeked&&this._onSeeked()},e.prototype.updateUrl=function(e){if(!e||"string"!=typeof e||0===e.length)throw new h.b("Url must be a non-empty string!");this._dataSource.url=e},e.prototype._expandBuffer=function(e){for(var t=this._stashSize;t+10485760){var n=new Uint8Array(this._stashBuffer,0,this._stashUsed);new Uint8Array(i,0,t).set(n,0)}this._stashBuffer=i,this._bufferSize=t}},e.prototype._normalizeSpeed=function(e){var t=this._speedNormalizeList,i=t.length-1,n=0,r=0,s=i;if(e=t[n]&&e=512&&e<=1024?Math.floor(1.5*e):2*e)>8192&&(t=8192);var i=1024*t+1048576;this._bufferSize0){var s=this._stashBuffer.slice(0,this._stashUsed);if((d=this._dispatchChunks(s,this._stashByteStart))0){u=new Uint8Array(s,d);o.set(u,0),this._stashUsed=u.byteLength,this._stashByteStart+=d}}else this._stashUsed=0,this._stashByteStart+=d;this._stashUsed+e.byteLength>this._bufferSize&&(this._expandBuffer(this._stashUsed+e.byteLength),o=new Uint8Array(this._stashBuffer,0,this._bufferSize)),o.set(new Uint8Array(e),this._stashUsed),this._stashUsed+=e.byteLength}else{if((d=this._dispatchChunks(e,t))this._bufferSize&&(this._expandBuffer(a),o=new Uint8Array(this._stashBuffer,0,this._bufferSize)),o.set(new Uint8Array(e,d),0),this._stashUsed+=a,this._stashByteStart=t+d}}else if(0===this._stashUsed){var a;if((d=this._dispatchChunks(e,t))this._bufferSize&&this._expandBuffer(a),(o=new Uint8Array(this._stashBuffer,0,this._bufferSize)).set(new Uint8Array(e,d),0),this._stashUsed+=a,this._stashByteStart=t+d}else{var o,d;if(this._stashUsed+e.byteLength>this._bufferSize&&this._expandBuffer(this._stashUsed+e.byteLength),(o=new Uint8Array(this._stashBuffer,0,this._bufferSize)).set(new Uint8Array(e),this._stashUsed),this._stashUsed+=e.byteLength,(d=this._dispatchChunks(this._stashBuffer.slice(0,this._stashUsed),this._stashByteStart))0){var u=new 
Uint8Array(this._stashBuffer,d);o.set(u,0)}this._stashUsed-=d,this._stashByteStart+=d}}},e.prototype._flushStashBuffer=function(e){if(this._stashUsed>0){var t=this._stashBuffer.slice(0,this._stashUsed),i=this._dispatchChunks(t,this._stashByteStart),n=t.byteLength-i;if(i0){var s=new Uint8Array(this._stashBuffer,0,this._bufferSize),a=new Uint8Array(t,i);s.set(a,0),this._stashUsed=a.byteLength,this._stashByteStart+=i}return 0}r.a.w(this.TAG,n+" bytes unconsumed data remain when flush buffer, dropped")}return this._stashUsed=0,this._stashByteStart=0,n}return 0},e.prototype._onLoaderComplete=function(e,t){this._flushStashBuffer(!0),this._onComplete&&this._onComplete(this._extraData)},e.prototype._onLoaderError=function(e,t){switch(r.a.e(this.TAG,"Loader error, code = "+t.code+", msg = "+t.msg),this._flushStashBuffer(!1),this._isEarlyEofReconnecting&&(this._isEarlyEofReconnecting=!1,e=a.b.UNRECOVERABLE_EARLY_EOF),e){case a.b.EARLY_EOF:if(!this._config.isLive&&this._totalLength){var i=this._currentRange.to+1;return void(i0}),!1)}e.exports=function(e,t){t=t||{};var r={main:i.m},o=t.all?{main:Object.keys(r.main)}:function(e,t){for(var i={main:[t]},n={main:[]},r={main:{}};a(i);)for(var o=Object.keys(i),h=0;h1)for(var i=1;i0&&(n+=";codecs="+i.codec);var r=!1;if(_.a.v(this.TAG,"Received Initialization Segment, mimeType: "+n),this._lastInitSegments[i.type]=i,n!==this._mimeTypes[i.type]){if(this._mimeTypes[i.type])_.a.v(this.TAG,"Notice: "+i.type+" mimeType changed, origin: "+this._mimeTypes[i.type]+", target: "+n);else{r=!0;try{var s=this._sourceBuffers[i.type]=this._mediaSource.addSourceBuffer(n);s.addEventListener("error",this.e.onSourceBufferError),s.addEventListener("updateend",this.e.onSourceBufferUpdateEnd)}catch(e){return _.a.e(this.TAG,e.message),void this._emitter.emit(E.ERROR,{code:e.code,msg:e.message})}}this._mimeTypes[i.type]=n}t||this._pendingSegments[i.type].push(i),r||this._sourceBuffers[i.type]&&!this._sourceBuffers[i.type].updating&&this._doAppendSegments(),c.a.safari&&"audio/mpeg"===i.container&&i.mediaDuration>0&&(this._requireSetMediaDuration=!0,this._pendingMediaDuration=i.mediaDuration/1e3,this._updateMediaSourceDuration())},e.prototype.appendMediaSegment=function(e){var t=e;this._pendingSegments[t.type].push(t),this._config.autoCleanupSourceBuffer&&this._needCleanupSourceBuffer()&&this._doCleanupSourceBuffer();var i=this._sourceBuffers[t.type];!i||i.updating||this._hasPendingRemoveRanges()||this._doAppendSegments()},e.prototype.seek=function(e){for(var t in this._sourceBuffers)if(this._sourceBuffers[t]){var i=this._sourceBuffers[t];if("open"===this._mediaSource.readyState)try{i.abort()}catch(e){_.a.e(this.TAG,e.message)}this._idrList.clear();var n=this._pendingSegments[t];if(n.splice(0,n.length),"closed"!==this._mediaSource.readyState){for(var r=0;r=1&&e-n.start(0)>=this._config.autoCleanupMaxBackwardDuration)return!0}}return!1},e.prototype._doCleanupSourceBuffer=function(){var e=this._mediaElement.currentTime;for(var t in this._sourceBuffers){var i=this._sourceBuffers[t];if(i){for(var n=i.buffered,r=!1,s=0;s=this._config.autoCleanupMaxBackwardDuration){r=!0;var h=e-this._config.autoCleanupMinBackwardDuration;this._pendingRemoveRanges[t].push({start:a,end:h})}}else o0&&(isNaN(t)||i>t)&&(_.a.v(this.TAG,"Update MediaSource duration from "+t+" to "+i),this._mediaSource.duration=i),this._requireSetMediaDuration=!1,this._pendingMediaDuration=0}},e.prototype._doRemoveRanges=function(){for(var e in 
this._pendingRemoveRanges)if(this._sourceBuffers[e]&&!this._sourceBuffers[e].updating)for(var t=this._sourceBuffers[e],i=this._pendingRemoveRanges[e];i.length&&!t.updating;){var n=i.shift();t.remove(n.start,n.end)}},e.prototype._doAppendSegments=function(){var e=this._pendingSegments;for(var t in e)if(this._sourceBuffers[t]&&!this._sourceBuffers[t].updating&&e[t].length>0){var i=e[t].shift();if(i.timestampOffset){var n=this._sourceBuffers[t].timestampOffset,r=i.timestampOffset/1e3;Math.abs(n-r)>.1&&(_.a.v(this.TAG,"Update MPEG audio timestampOffset from "+n+" to "+r),this._sourceBuffers[t].timestampOffset=r),delete i.timestampOffset}if(!i.data||0===i.data.byteLength)continue;try{this._sourceBuffers[t].appendBuffer(i.data),this._isBufferFull=!1,"video"===t&&i.hasOwnProperty("info")&&this._idrList.appendArray(i.info.syncPoints)}catch(e){this._pendingSegments[t].unshift(i),22===e.code?(this._isBufferFull||this._emitter.emit(E.BUFFER_FULL),this._isBufferFull=!0):(_.a.e(this.TAG,e.message),this._emitter.emit(E.ERROR,{code:e.code,msg:e.message}))}}},e.prototype._onSourceOpen=function(){if(_.a.v(this.TAG,"MediaSource onSourceOpen"),this._mediaSource.removeEventListener("sourceopen",this.e.onSourceOpen),this._pendingSourceBufferInit.length>0)for(var e=this._pendingSourceBufferInit;e.length;){var t=e.shift();this.appendInitSegment(t,!0)}this._hasPendingSegments()&&this._doAppendSegments(),this._emitter.emit(E.SOURCE_OPEN)},e.prototype._onSourceEnded=function(){_.a.v(this.TAG,"MediaSource onSourceEnded")},e.prototype._onSourceClose=function(){_.a.v(this.TAG,"MediaSource onSourceClose"),this._mediaSource&&null!=this.e&&(this._mediaSource.removeEventListener("sourceopen",this.e.onSourceOpen),this._mediaSource.removeEventListener("sourceended",this.e.onSourceEnded),this._mediaSource.removeEventListener("sourceclose",this.e.onSourceClose))},e.prototype._hasPendingSegments=function(){var e=this._pendingSegments;return e.video.length>0||e.audio.length>0},e.prototype._hasPendingRemoveRanges=function(){var e=this._pendingRemoveRanges;return e.video.length>0||e.audio.length>0},e.prototype._onSourceBufferUpdateEnd=function(){this._requireSetMediaDuration?this._updateMediaSourceDuration():this._hasPendingRemoveRanges()?this._doRemoveRanges():this._hasPendingSegments()?this._doAppendSegments():this._hasPendingEos&&this.endOfStream(),this._emitter.emit(E.UPDATE_END)},e.prototype._onSourceBufferError=function(e){_.a.e(this.TAG,"SourceBuffer Error: "+e)},e}(),L=i(5),T={NETWORK_ERROR:"NetworkError",MEDIA_ERROR:"MediaError",OTHER_ERROR:"OtherError"},w={NETWORK_EXCEPTION:h.b.EXCEPTION,NETWORK_STATUS_CODE_INVALID:h.b.HTTP_STATUS_CODE_INVALID,NETWORK_TIMEOUT:h.b.CONNECTING_TIMEOUT,NETWORK_UNRECOVERABLE_EARLY_EOF:h.b.UNRECOVERABLE_EARLY_EOF,MEDIA_MSE_ERROR:"MediaMSEError",MEDIA_FORMAT_ERROR:L.a.FORMAT_ERROR,MEDIA_FORMAT_UNSUPPORTED:L.a.FORMAT_UNSUPPORTED,MEDIA_CODEC_UNSUPPORTED:L.a.CODEC_UNSUPPORTED},D=function(){function e(e,t){this.TAG="MSEPlayer",this._type="MSEPlayer",this._emitter=new u.a,this._config=a(),"object"==typeof t&&Object.assign(this._config,t);var i=e.type.toLowerCase();if("mse"!==i&&"mpegts"!==i&&"m2ts"!==i&&"flv"!==i)throw new A.b("MSEPlayer requires an mpegts/m2ts/flv MediaDataSource 
input!");!0===e.isLive&&(this._config.isLive=!0),this.e={onvLoadedMetadata:this._onvLoadedMetadata.bind(this),onvSeeking:this._onvSeeking.bind(this),onvCanPlay:this._onvCanPlay.bind(this),onvStalled:this._onvStalled.bind(this),onvProgress:this._onvProgress.bind(this)},self.performance&&self.performance.now?this._now=self.performance.now.bind(self.performance):this._now=Date.now,this._pendingSeekTime=null,this._requestSetTime=!1,this._seekpointRecord=null,this._progressChecker=null,this._mediaDataSource=e,this._mediaElement=null,this._msectl=null,this._transmuxer=null,this._mseSourceOpened=!1,this._hasPendingLoad=!1,this._receivedCanPlay=!1,this._mediaInfo=null,this._statisticsInfo=null;var n=c.a.chrome&&(c.a.version.major<50||50===c.a.version.major&&c.a.version.build<2661);this._alwaysSeekKeyframe=!!(n||c.a.msedge||c.a.msie),this._alwaysSeekKeyframe&&(this._config.accurateSeek=!1)}return e.prototype.destroy=function(){null!=this._progressChecker&&(window.clearInterval(this._progressChecker),this._progressChecker=null),this._transmuxer&&this.unload(),this._mediaElement&&this.detachMediaElement(),this.e=null,this._mediaDataSource=null,this._emitter.removeAllListeners(),this._emitter=null},e.prototype.on=function(e,t){var i=this;e===l.MEDIA_INFO?null!=this._mediaInfo&&Promise.resolve().then((function(){i._emitter.emit(l.MEDIA_INFO,i.mediaInfo)})):e===l.STATISTICS_INFO&&null!=this._statisticsInfo&&Promise.resolve().then((function(){i._emitter.emit(l.STATISTICS_INFO,i.statisticsInfo)})),this._emitter.addListener(e,t)},e.prototype.off=function(e,t){this._emitter.removeListener(e,t)},e.prototype.attachMediaElement=function(e){var t=this;if(this._mediaElement=e,e.addEventListener("loadedmetadata",this.e.onvLoadedMetadata),e.addEventListener("seeking",this.e.onvSeeking),e.addEventListener("canplay",this.e.onvCanPlay),e.addEventListener("stalled",this.e.onvStalled),e.addEventListener("progress",this.e.onvProgress),this._msectl=new R(this._config),this._msectl.on(E.UPDATE_END,this._onmseUpdateEnd.bind(this)),this._msectl.on(E.BUFFER_FULL,this._onmseBufferFull.bind(this)),this._msectl.on(E.SOURCE_OPEN,(function(){t._mseSourceOpened=!0,t._hasPendingLoad&&(t._hasPendingLoad=!1,t.load())})),this._msectl.on(E.ERROR,(function(e){t._emitter.emit(l.ERROR,T.MEDIA_ERROR,w.MEDIA_MSE_ERROR,e)})),this._msectl.attachMediaElement(e),null!=this._pendingSeekTime)try{e.currentTime=this._pendingSeekTime,this._pendingSeekTime=null}catch(e){}},e.prototype.detachMediaElement=function(){this._mediaElement&&(this._msectl.detachMediaElement(),this._mediaElement.removeEventListener("loadedmetadata",this.e.onvLoadedMetadata),this._mediaElement.removeEventListener("seeking",this.e.onvSeeking),this._mediaElement.removeEventListener("canplay",this.e.onvCanPlay),this._mediaElement.removeEventListener("stalled",this.e.onvStalled),this._mediaElement.removeEventListener("progress",this.e.onvProgress),this._mediaElement=null),this._msectl&&(this._msectl.destroy(),this._msectl=null)},e.prototype.load=function(){var e=this;if(!this._mediaElement)throw new A.a("HTMLMediaElement must be attached before load()!");if(this._transmuxer)throw new A.a("MSEPlayer.load() has been called, please call unload() first!");this._hasPendingLoad||(this._config.deferLoadAfterSourceOpen&&!1===this._mseSourceOpened?this._hasPendingLoad=!0:(this._mediaElement.readyState>0&&(this._requestSetTime=!0,this._mediaElement.currentTime=0),this._transmuxer=new 
b(this._mediaDataSource,this._config),this._transmuxer.on(v.a.INIT_SEGMENT,(function(t,i){e._msectl.appendInitSegment(i)})),this._transmuxer.on(v.a.MEDIA_SEGMENT,(function(t,i){if(e._msectl.appendMediaSegment(i),e._config.lazyLoad&&!e._config.isLive){var n=e._mediaElement.currentTime;i.info.endDts>=1e3*(n+e._config.lazyLoadMaxDuration)&&null==e._progressChecker&&(_.a.v(e.TAG,"Maximum buffering duration exceeded, suspend transmuxing task"),e._suspendTransmuxer())}})),this._transmuxer.on(v.a.LOADING_COMPLETE,(function(){e._msectl.endOfStream(),e._emitter.emit(l.LOADING_COMPLETE)})),this._transmuxer.on(v.a.RECOVERED_EARLY_EOF,(function(){e._emitter.emit(l.RECOVERED_EARLY_EOF)})),this._transmuxer.on(v.a.IO_ERROR,(function(t,i){e._emitter.emit(l.ERROR,T.NETWORK_ERROR,t,i)})),this._transmuxer.on(v.a.DEMUX_ERROR,(function(t,i){e._emitter.emit(l.ERROR,T.MEDIA_ERROR,t,{code:-1,msg:i})})),this._transmuxer.on(v.a.MEDIA_INFO,(function(t){e._mediaInfo=t,e._emitter.emit(l.MEDIA_INFO,Object.assign({},t))})),this._transmuxer.on(v.a.METADATA_ARRIVED,(function(t){e._emitter.emit(l.METADATA_ARRIVED,t)})),this._transmuxer.on(v.a.SCRIPTDATA_ARRIVED,(function(t){e._emitter.emit(l.SCRIPTDATA_ARRIVED,t)})),this._transmuxer.on(v.a.TIMED_ID3_METADATA_ARRIVED,(function(t){e._emitter.emit(l.TIMED_ID3_METADATA_ARRIVED,t)})),this._transmuxer.on(v.a.PES_PRIVATE_DATA_DESCRIPTOR,(function(t){e._emitter.emit(l.PES_PRIVATE_DATA_DESCRIPTOR,t)})),this._transmuxer.on(v.a.PES_PRIVATE_DATA_ARRIVED,(function(t){e._emitter.emit(l.PES_PRIVATE_DATA_ARRIVED,t)})),this._transmuxer.on(v.a.STATISTICS_INFO,(function(t){e._statisticsInfo=e._fillStatisticsInfo(t),e._emitter.emit(l.STATISTICS_INFO,Object.assign({},e._statisticsInfo))})),this._transmuxer.on(v.a.RECOMMEND_SEEKPOINT,(function(t){e._mediaElement&&!e._config.accurateSeek&&(e._requestSetTime=!0,e._mediaElement.currentTime=t/1e3)})),this._transmuxer.open()))},e.prototype.unload=function(){this._mediaElement&&this._mediaElement.pause(),this._msectl&&this._msectl.seek(0),this._transmuxer&&(this._transmuxer.close(),this._transmuxer.destroy(),this._transmuxer=null)},e.prototype.play=function(){return this._mediaElement.play()},e.prototype.pause=function(){this._mediaElement.pause()},Object.defineProperty(e.prototype,"type",{get:function(){return this._type},enumerable:!1,configurable:!0}),Object.defineProperty(e.prototype,"buffered",{get:function(){return this._mediaElement.buffered},enumerable:!1,configurable:!0}),Object.defineProperty(e.prototype,"duration",{get:function(){return this._mediaElement.duration},enumerable:!1,configurable:!0}),Object.defineProperty(e.prototype,"volume",{get:function(){return this._mediaElement.volume},set:function(e){this._mediaElement.volume=e},enumerable:!1,configurable:!0}),Object.defineProperty(e.prototype,"muted",{get:function(){return this._mediaElement.muted},set:function(e){this._mediaElement.muted=e},enumerable:!1,configurable:!0}),Object.defineProperty(e.prototype,"currentTime",{get:function(){return this._mediaElement?this._mediaElement.currentTime:0},set:function(e){this._mediaElement?this._internalSeek(e):this._pendingSeekTime=e},enumerable:!1,configurable:!0}),Object.defineProperty(e.prototype,"mediaInfo",{get:function(){return Object.assign({},this._mediaInfo)},enumerable:!1,configurable:!0}),Object.defineProperty(e.prototype,"statisticsInfo",{get:function(){return 
null==this._statisticsInfo&&(this._statisticsInfo={}),this._statisticsInfo=this._fillStatisticsInfo(this._statisticsInfo),Object.assign({},this._statisticsInfo)},enumerable:!1,configurable:!0}),e.prototype._fillStatisticsInfo=function(e){if(e.playerType=this._type,!(this._mediaElement instanceof HTMLVideoElement))return e;var t=!0,i=0,n=0;if(this._mediaElement.getVideoPlaybackQuality){var r=this._mediaElement.getVideoPlaybackQuality();i=r.totalVideoFrames,n=r.droppedVideoFrames}else null!=this._mediaElement.webkitDecodedFrameCount?(i=this._mediaElement.webkitDecodedFrameCount,n=this._mediaElement.webkitDroppedFrameCount):t=!1;return t&&(e.decodedFrames=i,e.droppedFrames=n),e},e.prototype._onmseUpdateEnd=function(){var e=this._mediaElement.buffered,t=this._mediaElement.currentTime;if(this._config.isLive&&this._config.liveBufferLatencyChasing&&e.length>0&&!this._mediaElement.paused){var i=e.end(e.length-1);if(i>this._config.liveBufferLatencyMaxLatency&&i-t>this._config.liveBufferLatencyMaxLatency){var n=i-this._config.liveBufferLatencyMinRemain;this.currentTime=n}}if(this._config.lazyLoad&&!this._config.isLive){for(var r=0,s=0;s=t+this._config.lazyLoadMaxDuration&&null==this._progressChecker&&(_.a.v(this.TAG,"Maximum buffering duration exceeded, suspend transmuxing task"),this._suspendTransmuxer())}},e.prototype._onmseBufferFull=function(){_.a.v(this.TAG,"MSE SourceBuffer is full, suspend transmuxing task"),null==this._progressChecker&&this._suspendTransmuxer()},e.prototype._suspendTransmuxer=function(){this._transmuxer&&(this._transmuxer.pause(),null==this._progressChecker&&(this._progressChecker=window.setInterval(this._checkProgressAndResume.bind(this),1e3)))},e.prototype._checkProgressAndResume=function(){for(var e=this._mediaElement.currentTime,t=this._mediaElement.buffered,i=!1,n=0;n=r&&e=s-this._config.lazyLoadRecoverDuration&&(i=!0);break}}i&&(window.clearInterval(this._progressChecker),this._progressChecker=null,i&&(_.a.v(this.TAG,"Continue loading from paused position"),this._transmuxer.resume()))},e.prototype._isTimepointBuffered=function(e){for(var t=this._mediaElement.buffered,i=0;i=n&&e0){var r=this._mediaElement.buffered.start(0);(r<1&&e0&&t.currentTime0){var n=i.start(0);if(n<1&&t0&&(this._mediaElement.currentTime=0),this._mediaElement.preload="auto",this._mediaElement.load(),this._statisticsReporter=window.setInterval(this._reportStatisticsInfo.bind(this),this._config.statisticsInfoReportInterval)},e.prototype.unload=function(){this._mediaElement&&(this._mediaElement.src="",this._mediaElement.removeAttribute("src")),null!=this._statisticsReporter&&(window.clearInterval(this._statisticsReporter),this._statisticsReporter=null)},e.prototype.play=function(){return this._mediaElement.play()},e.prototype.pause=function(){this._mediaElement.pause()},Object.defineProperty(e.prototype,"type",{get:function(){return this._type},enumerable:!1,configurable:!0}),Object.defineProperty(e.prototype,"buffered",{get:function(){return this._mediaElement.buffered},enumerable:!1,configurable:!0}),Object.defineProperty(e.prototype,"duration",{get:function(){return this._mediaElement.duration},enumerable:!1,configurable:!0}),Object.defineProperty(e.prototype,"volume",{get:function(){return this._mediaElement.volume},set:function(e){this._mediaElement.volume=e},enumerable:!1,configurable:!0}),Object.defineProperty(e.prototype,"muted",{get:function(){return 
this._mediaElement.muted},set:function(e){this._mediaElement.muted=e},enumerable:!1,configurable:!0}),Object.defineProperty(e.prototype,"currentTime",{get:function(){return this._mediaElement?this._mediaElement.currentTime:0},set:function(e){this._mediaElement?this._mediaElement.currentTime=e:this._pendingSeekTime=e},enumerable:!1,configurable:!0}),Object.defineProperty(e.prototype,"mediaInfo",{get:function(){var e={mimeType:(this._mediaElement instanceof HTMLAudioElement?"audio/":"video/")+this._mediaDataSource.type};return this._mediaElement&&(e.duration=Math.floor(1e3*this._mediaElement.duration),this._mediaElement instanceof HTMLVideoElement&&(e.width=this._mediaElement.videoWidth,e.height=this._mediaElement.videoHeight)),e},enumerable:!1,configurable:!0}),Object.defineProperty(e.prototype,"statisticsInfo",{get:function(){var e={playerType:this._type,url:this._mediaDataSource.url};if(!(this._mediaElement instanceof HTMLVideoElement))return e;var t=!0,i=0,n=0;if(this._mediaElement.getVideoPlaybackQuality){var r=this._mediaElement.getVideoPlaybackQuality();i=r.totalVideoFrames,n=r.droppedVideoFrames}else null!=this._mediaElement.webkitDecodedFrameCount?(i=this._mediaElement.webkitDecodedFrameCount,n=this._mediaElement.webkitDroppedFrameCount):t=!1;return t&&(e.decodedFrames=i,e.droppedFrames=n),e},enumerable:!1,configurable:!0}),e.prototype._onvLoadedMetadata=function(e){null!=this._pendingSeekTime&&(this._mediaElement.currentTime=this._pendingSeekTime,this._pendingSeekTime=null),this._emitter.emit(l.MEDIA_INFO,this.mediaInfo)},e.prototype._reportStatisticsInfo=function(){this._emitter.emit(l.STATISTICS_INFO,this.statisticsInfo)},e}();n.a.install();var C={createPlayer:function(e,t){var i=e;if(null==i||"object"!=typeof i)throw new A.b("MediaDataSource must be an javascript object!");if(!i.hasOwnProperty("type"))throw new A.b("MediaDataSource must has type field to indicate video file type!");switch(i.type){case"mse":case"mpegts":case"m2ts":case"flv":return new D(i,t);default:return new k(i,t)}},isSupported:function(){return o.supportMSEH264Playback()},getFeatureList:function(){return o.getFeatureList()}};C.BaseLoader=h.a,C.LoaderStatus=h.c,C.LoaderErrors=h.b,C.Events=l,C.ErrorTypes=T,C.ErrorDetails=w,C.MSEPlayer=D,C.NativePlayer=k,C.LoggingControl=m.a,Object.defineProperty(C,"version",{enumerable:!0,get:function(){return"1.6.7"}});t.default=C}])})); \ No newline at end of file diff --git a/spaces/DAMO-NLP-SG/Video-LLaMA/video_llama/processors/transforms_video.py b/spaces/DAMO-NLP-SG/Video-LLaMA/video_llama/processors/transforms_video.py deleted file mode 100644 index 1106f388c4091f919e0e9602fcb1363e9caec9a6..0000000000000000000000000000000000000000 --- a/spaces/DAMO-NLP-SG/Video-LLaMA/video_llama/processors/transforms_video.py +++ /dev/null @@ -1,179 +0,0 @@ -#!/usr/bin/env python3 -""" - Copyright (c) 2022, salesforce.com, inc. - All rights reserved. 
- SPDX-License-Identifier: BSD-3-Clause - For full license text, see the LICENSE file in the repo root or https://opensource.org/licenses/BSD-3-Clause -""" - - -import numbers -import random - -from torchvision.transforms import ( - RandomCrop, - RandomResizedCrop, -) - -import video_llama.processors.functional_video as F - - -__all__ = [ - "RandomCropVideo", - "RandomResizedCropVideo", - "CenterCropVideo", - "NormalizeVideo", - "ToTensorVideo", - "RandomHorizontalFlipVideo", -] - - -class RandomCropVideo(RandomCrop): - def __init__(self, size): - if isinstance(size, numbers.Number): - self.size = (int(size), int(size)) - else: - self.size = size - - def __call__(self, clip): - """ - Args: - clip (torch.tensor): Video clip to be cropped. Size is (C, T, H, W) - Returns: - torch.tensor: randomly cropped/resized video clip. - size is (C, T, OH, OW) - """ - i, j, h, w = self.get_params(clip, self.size) - return F.crop(clip, i, j, h, w) - - def __repr__(self) -> str: - return f"{self.__class__.__name__}(size={self.size})" - - -class RandomResizedCropVideo(RandomResizedCrop): - def __init__( - self, - size, - scale=(0.08, 1.0), - ratio=(3.0 / 4.0, 4.0 / 3.0), - interpolation_mode="bilinear", - ): - if isinstance(size, tuple): - if len(size) != 2: - raise ValueError( - f"size should be tuple (height, width), instead got {size}" - ) - self.size = size - else: - self.size = (size, size) - - self.interpolation_mode = interpolation_mode - self.scale = scale - self.ratio = ratio - - def __call__(self, clip): - """ - Args: - clip (torch.tensor): Video clip to be cropped. Size is (C, T, H, W) - Returns: - torch.tensor: randomly cropped/resized video clip. - size is (C, T, H, W) - """ - i, j, h, w = self.get_params(clip, self.scale, self.ratio) - return F.resized_crop(clip, i, j, h, w, self.size, self.interpolation_mode) - - def __repr__(self) -> str: - return f"{self.__class__.__name__}(size={self.size}, interpolation_mode={self.interpolation_mode}, scale={self.scale}, ratio={self.ratio})" - - -class CenterCropVideo: - def __init__(self, crop_size): - if isinstance(crop_size, numbers.Number): - self.crop_size = (int(crop_size), int(crop_size)) - else: - self.crop_size = crop_size - - def __call__(self, clip): - """ - Args: - clip (torch.tensor): Video clip to be cropped. Size is (C, T, H, W) - Returns: - torch.tensor: central cropping of video clip. Size is - (C, T, crop_size, crop_size) - """ - return F.center_crop(clip, self.crop_size) - - def __repr__(self) -> str: - return f"{self.__class__.__name__}(crop_size={self.crop_size})" - - -class NormalizeVideo: - """ - Normalize the video clip by mean subtraction and division by standard deviation - Args: - mean (3-tuple): pixel RGB mean - std (3-tuple): pixel RGB standard deviation - inplace (boolean): whether do in-place normalization - """ - - def __init__(self, mean, std, inplace=False): - self.mean = mean - self.std = std - self.inplace = inplace - - def __call__(self, clip): - """ - Args: - clip (torch.tensor): video clip to be normalized. 
Size is (C, T, H, W) - """ - return F.normalize(clip, self.mean, self.std, self.inplace) - - def __repr__(self) -> str: - return f"{self.__class__.__name__}(mean={self.mean}, std={self.std}, inplace={self.inplace})" - - -class ToTensorVideo: - """ - Convert tensor data type from uint8 to float, divide value by 255.0 and - permute the dimensions of clip tensor - """ - - def __init__(self): - pass - - def __call__(self, clip): - """ - Args: - clip (torch.tensor, dtype=torch.uint8): Size is (T, H, W, C) - Return: - clip (torch.tensor, dtype=torch.float): Size is (C, T, H, W) - """ - return F.to_tensor(clip) - - def __repr__(self) -> str: - return self.__class__.__name__ - - -class RandomHorizontalFlipVideo: - """ - Flip the video clip along the horizonal direction with a given probability - Args: - p (float): probability of the clip being flipped. Default value is 0.5 - """ - - def __init__(self, p=0.5): - self.p = p - - def __call__(self, clip): - """ - Args: - clip (torch.tensor): Size is (C, T, H, W) - Return: - clip (torch.tensor): Size is (C, T, H, W) - """ - if random.random() < self.p: - clip = F.hflip(clip) - return clip - - def __repr__(self) -> str: - return f"{self.__class__.__name__}(p={self.p})" diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ufoLib/validators.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ufoLib/validators.py deleted file mode 100644 index 01e3124fd38711fcb471c3b9e0f9ecca8a08c29e..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ufoLib/validators.py +++ /dev/null @@ -1,1186 +0,0 @@ -"""Various low level data validators.""" - -import calendar -from io import open -import fs.base -import fs.osfs - -from collections.abc import Mapping -from fontTools.ufoLib.utils import numberTypes - - -# ------- -# Generic -# ------- - - -def isDictEnough(value): - """ - Some objects will likely come in that aren't - dicts but are dict-ish enough. - """ - if isinstance(value, Mapping): - return True - for attr in ("keys", "values", "items"): - if not hasattr(value, attr): - return False - return True - - -def genericTypeValidator(value, typ): - """ - Generic. (Added at version 2.) - """ - return isinstance(value, typ) - - -def genericIntListValidator(values, validValues): - """ - Generic. (Added at version 2.) - """ - if not isinstance(values, (list, tuple)): - return False - valuesSet = set(values) - validValuesSet = set(validValues) - if valuesSet - validValuesSet: - return False - for value in values: - if not isinstance(value, int): - return False - return True - - -def genericNonNegativeIntValidator(value): - """ - Generic. (Added at version 3.) - """ - if not isinstance(value, int): - return False - if value < 0: - return False - return True - - -def genericNonNegativeNumberValidator(value): - """ - Generic. (Added at version 3.) - """ - if not isinstance(value, numberTypes): - return False - if value < 0: - return False - return True - - -def genericDictValidator(value, prototype): - """ - Generic. (Added at version 3.) 
- """ - # not a dict - if not isinstance(value, Mapping): - return False - # missing required keys - for key, (typ, required) in prototype.items(): - if not required: - continue - if key not in value: - return False - # unknown keys - for key in value.keys(): - if key not in prototype: - return False - # incorrect types - for key, v in value.items(): - prototypeType, required = prototype[key] - if v is None and not required: - continue - if not isinstance(v, prototypeType): - return False - return True - - -# -------------- -# fontinfo.plist -# -------------- - -# Data Validators - - -def fontInfoStyleMapStyleNameValidator(value): - """ - Version 2+. - """ - options = ["regular", "italic", "bold", "bold italic"] - return value in options - - -def fontInfoOpenTypeGaspRangeRecordsValidator(value): - """ - Version 3+. - """ - if not isinstance(value, list): - return False - if len(value) == 0: - return True - validBehaviors = [0, 1, 2, 3] - dictPrototype = dict(rangeMaxPPEM=(int, True), rangeGaspBehavior=(list, True)) - ppemOrder = [] - for rangeRecord in value: - if not genericDictValidator(rangeRecord, dictPrototype): - return False - ppem = rangeRecord["rangeMaxPPEM"] - behavior = rangeRecord["rangeGaspBehavior"] - ppemValidity = genericNonNegativeIntValidator(ppem) - if not ppemValidity: - return False - behaviorValidity = genericIntListValidator(behavior, validBehaviors) - if not behaviorValidity: - return False - ppemOrder.append(ppem) - if ppemOrder != sorted(ppemOrder): - return False - return True - - -def fontInfoOpenTypeHeadCreatedValidator(value): - """ - Version 2+. - """ - # format: 0000/00/00 00:00:00 - if not isinstance(value, str): - return False - # basic formatting - if not len(value) == 19: - return False - if value.count(" ") != 1: - return False - date, time = value.split(" ") - if date.count("/") != 2: - return False - if time.count(":") != 2: - return False - # date - year, month, day = date.split("/") - if len(year) != 4: - return False - if len(month) != 2: - return False - if len(day) != 2: - return False - try: - year = int(year) - month = int(month) - day = int(day) - except ValueError: - return False - if month < 1 or month > 12: - return False - monthMaxDay = calendar.monthrange(year, month)[1] - if day < 1 or day > monthMaxDay: - return False - # time - hour, minute, second = time.split(":") - if len(hour) != 2: - return False - if len(minute) != 2: - return False - if len(second) != 2: - return False - try: - hour = int(hour) - minute = int(minute) - second = int(second) - except ValueError: - return False - if hour < 0 or hour > 23: - return False - if minute < 0 or minute > 59: - return False - if second < 0 or second > 59: - return False - # fallback - return True - - -def fontInfoOpenTypeNameRecordsValidator(value): - """ - Version 3+. - """ - if not isinstance(value, list): - return False - dictPrototype = dict( - nameID=(int, True), - platformID=(int, True), - encodingID=(int, True), - languageID=(int, True), - string=(str, True), - ) - for nameRecord in value: - if not genericDictValidator(nameRecord, dictPrototype): - return False - return True - - -def fontInfoOpenTypeOS2WeightClassValidator(value): - """ - Version 2+. - """ - if not isinstance(value, int): - return False - if value < 0: - return False - return True - - -def fontInfoOpenTypeOS2WidthClassValidator(value): - """ - Version 2+. 
- """ - if not isinstance(value, int): - return False - if value < 1: - return False - if value > 9: - return False - return True - - -def fontInfoVersion2OpenTypeOS2PanoseValidator(values): - """ - Version 2. - """ - if not isinstance(values, (list, tuple)): - return False - if len(values) != 10: - return False - for value in values: - if not isinstance(value, int): - return False - # XXX further validation? - return True - - -def fontInfoVersion3OpenTypeOS2PanoseValidator(values): - """ - Version 3+. - """ - if not isinstance(values, (list, tuple)): - return False - if len(values) != 10: - return False - for value in values: - if not isinstance(value, int): - return False - if value < 0: - return False - # XXX further validation? - return True - - -def fontInfoOpenTypeOS2FamilyClassValidator(values): - """ - Version 2+. - """ - if not isinstance(values, (list, tuple)): - return False - if len(values) != 2: - return False - for value in values: - if not isinstance(value, int): - return False - classID, subclassID = values - if classID < 0 or classID > 14: - return False - if subclassID < 0 or subclassID > 15: - return False - return True - - -def fontInfoPostscriptBluesValidator(values): - """ - Version 2+. - """ - if not isinstance(values, (list, tuple)): - return False - if len(values) > 14: - return False - if len(values) % 2: - return False - for value in values: - if not isinstance(value, numberTypes): - return False - return True - - -def fontInfoPostscriptOtherBluesValidator(values): - """ - Version 2+. - """ - if not isinstance(values, (list, tuple)): - return False - if len(values) > 10: - return False - if len(values) % 2: - return False - for value in values: - if not isinstance(value, numberTypes): - return False - return True - - -def fontInfoPostscriptStemsValidator(values): - """ - Version 2+. - """ - if not isinstance(values, (list, tuple)): - return False - if len(values) > 12: - return False - for value in values: - if not isinstance(value, numberTypes): - return False - return True - - -def fontInfoPostscriptWindowsCharacterSetValidator(value): - """ - Version 2+. - """ - validValues = list(range(1, 21)) - if value not in validValues: - return False - return True - - -def fontInfoWOFFMetadataUniqueIDValidator(value): - """ - Version 3+. - """ - dictPrototype = dict(id=(str, True)) - if not genericDictValidator(value, dictPrototype): - return False - return True - - -def fontInfoWOFFMetadataVendorValidator(value): - """ - Version 3+. - """ - dictPrototype = { - "name": (str, True), - "url": (str, False), - "dir": (str, False), - "class": (str, False), - } - if not genericDictValidator(value, dictPrototype): - return False - if "dir" in value and value.get("dir") not in ("ltr", "rtl"): - return False - return True - - -def fontInfoWOFFMetadataCreditsValidator(value): - """ - Version 3+. - """ - dictPrototype = dict(credits=(list, True)) - if not genericDictValidator(value, dictPrototype): - return False - if not len(value["credits"]): - return False - dictPrototype = { - "name": (str, True), - "url": (str, False), - "role": (str, False), - "dir": (str, False), - "class": (str, False), - } - for credit in value["credits"]: - if not genericDictValidator(credit, dictPrototype): - return False - if "dir" in credit and credit.get("dir") not in ("ltr", "rtl"): - return False - return True - - -def fontInfoWOFFMetadataDescriptionValidator(value): - """ - Version 3+. 
- """ - dictPrototype = dict(url=(str, False), text=(list, True)) - if not genericDictValidator(value, dictPrototype): - return False - for text in value["text"]: - if not fontInfoWOFFMetadataTextValue(text): - return False - return True - - -def fontInfoWOFFMetadataLicenseValidator(value): - """ - Version 3+. - """ - dictPrototype = dict(url=(str, False), text=(list, False), id=(str, False)) - if not genericDictValidator(value, dictPrototype): - return False - if "text" in value: - for text in value["text"]: - if not fontInfoWOFFMetadataTextValue(text): - return False - return True - - -def fontInfoWOFFMetadataTrademarkValidator(value): - """ - Version 3+. - """ - dictPrototype = dict(text=(list, True)) - if not genericDictValidator(value, dictPrototype): - return False - for text in value["text"]: - if not fontInfoWOFFMetadataTextValue(text): - return False - return True - - -def fontInfoWOFFMetadataCopyrightValidator(value): - """ - Version 3+. - """ - dictPrototype = dict(text=(list, True)) - if not genericDictValidator(value, dictPrototype): - return False - for text in value["text"]: - if not fontInfoWOFFMetadataTextValue(text): - return False - return True - - -def fontInfoWOFFMetadataLicenseeValidator(value): - """ - Version 3+. - """ - dictPrototype = {"name": (str, True), "dir": (str, False), "class": (str, False)} - if not genericDictValidator(value, dictPrototype): - return False - if "dir" in value and value.get("dir") not in ("ltr", "rtl"): - return False - return True - - -def fontInfoWOFFMetadataTextValue(value): - """ - Version 3+. - """ - dictPrototype = { - "text": (str, True), - "language": (str, False), - "dir": (str, False), - "class": (str, False), - } - if not genericDictValidator(value, dictPrototype): - return False - if "dir" in value and value.get("dir") not in ("ltr", "rtl"): - return False - return True - - -def fontInfoWOFFMetadataExtensionsValidator(value): - """ - Version 3+. - """ - if not isinstance(value, list): - return False - if not value: - return False - for extension in value: - if not fontInfoWOFFMetadataExtensionValidator(extension): - return False - return True - - -def fontInfoWOFFMetadataExtensionValidator(value): - """ - Version 3+. - """ - dictPrototype = dict(names=(list, False), items=(list, True), id=(str, False)) - if not genericDictValidator(value, dictPrototype): - return False - if "names" in value: - for name in value["names"]: - if not fontInfoWOFFMetadataExtensionNameValidator(name): - return False - for item in value["items"]: - if not fontInfoWOFFMetadataExtensionItemValidator(item): - return False - return True - - -def fontInfoWOFFMetadataExtensionItemValidator(value): - """ - Version 3+. - """ - dictPrototype = dict(id=(str, False), names=(list, True), values=(list, True)) - if not genericDictValidator(value, dictPrototype): - return False - for name in value["names"]: - if not fontInfoWOFFMetadataExtensionNameValidator(name): - return False - for val in value["values"]: - if not fontInfoWOFFMetadataExtensionValueValidator(val): - return False - return True - - -def fontInfoWOFFMetadataExtensionNameValidator(value): - """ - Version 3+. - """ - dictPrototype = { - "text": (str, True), - "language": (str, False), - "dir": (str, False), - "class": (str, False), - } - if not genericDictValidator(value, dictPrototype): - return False - if "dir" in value and value.get("dir") not in ("ltr", "rtl"): - return False - return True - - -def fontInfoWOFFMetadataExtensionValueValidator(value): - """ - Version 3+. 
- """ - dictPrototype = { - "text": (str, True), - "language": (str, False), - "dir": (str, False), - "class": (str, False), - } - if not genericDictValidator(value, dictPrototype): - return False - if "dir" in value and value.get("dir") not in ("ltr", "rtl"): - return False - return True - - -# ---------- -# Guidelines -# ---------- - - -def guidelinesValidator(value, identifiers=None): - """ - Version 3+. - """ - if not isinstance(value, list): - return False - if identifiers is None: - identifiers = set() - for guide in value: - if not guidelineValidator(guide): - return False - identifier = guide.get("identifier") - if identifier is not None: - if identifier in identifiers: - return False - identifiers.add(identifier) - return True - - -_guidelineDictPrototype = dict( - x=((int, float), False), - y=((int, float), False), - angle=((int, float), False), - name=(str, False), - color=(str, False), - identifier=(str, False), -) - - -def guidelineValidator(value): - """ - Version 3+. - """ - if not genericDictValidator(value, _guidelineDictPrototype): - return False - x = value.get("x") - y = value.get("y") - angle = value.get("angle") - # x or y must be present - if x is None and y is None: - return False - # if x or y are None, angle must not be present - if x is None or y is None: - if angle is not None: - return False - # if x and y are defined, angle must be defined - if x is not None and y is not None and angle is None: - return False - # angle must be between 0 and 360 - if angle is not None: - if angle < 0: - return False - if angle > 360: - return False - # identifier must be 1 or more characters - identifier = value.get("identifier") - if identifier is not None and not identifierValidator(identifier): - return False - # color must follow the proper format - color = value.get("color") - if color is not None and not colorValidator(color): - return False - return True - - -# ------- -# Anchors -# ------- - - -def anchorsValidator(value, identifiers=None): - """ - Version 3+. - """ - if not isinstance(value, list): - return False - if identifiers is None: - identifiers = set() - for anchor in value: - if not anchorValidator(anchor): - return False - identifier = anchor.get("identifier") - if identifier is not None: - if identifier in identifiers: - return False - identifiers.add(identifier) - return True - - -_anchorDictPrototype = dict( - x=((int, float), False), - y=((int, float), False), - name=(str, False), - color=(str, False), - identifier=(str, False), -) - - -def anchorValidator(value): - """ - Version 3+. - """ - if not genericDictValidator(value, _anchorDictPrototype): - return False - x = value.get("x") - y = value.get("y") - # x and y must be present - if x is None or y is None: - return False - # identifier must be 1 or more characters - identifier = value.get("identifier") - if identifier is not None and not identifierValidator(identifier): - return False - # color must follow the proper format - color = value.get("color") - if color is not None and not colorValidator(color): - return False - return True - - -# ---------- -# Identifier -# ---------- - - -def identifierValidator(value): - """ - Version 3+. 
- - >>> identifierValidator("a") - True - >>> identifierValidator("") - False - >>> identifierValidator("a" * 101) - False - """ - validCharactersMin = 0x20 - validCharactersMax = 0x7E - if not isinstance(value, str): - return False - if not value: - return False - if len(value) > 100: - return False - for c in value: - c = ord(c) - if c < validCharactersMin or c > validCharactersMax: - return False - return True - - -# ----- -# Color -# ----- - - -def colorValidator(value): - """ - Version 3+. - - >>> colorValidator("0,0,0,0") - True - >>> colorValidator(".5,.5,.5,.5") - True - >>> colorValidator("0.5,0.5,0.5,0.5") - True - >>> colorValidator("1,1,1,1") - True - - >>> colorValidator("2,0,0,0") - False - >>> colorValidator("0,2,0,0") - False - >>> colorValidator("0,0,2,0") - False - >>> colorValidator("0,0,0,2") - False - - >>> colorValidator("1r,1,1,1") - False - >>> colorValidator("1,1g,1,1") - False - >>> colorValidator("1,1,1b,1") - False - >>> colorValidator("1,1,1,1a") - False - - >>> colorValidator("1 1 1 1") - False - >>> colorValidator("1 1,1,1") - False - >>> colorValidator("1,1 1,1") - False - >>> colorValidator("1,1,1 1") - False - - >>> colorValidator("1, 1, 1, 1") - True - """ - if not isinstance(value, str): - return False - parts = value.split(",") - if len(parts) != 4: - return False - for part in parts: - part = part.strip() - converted = False - try: - part = int(part) - converted = True - except ValueError: - pass - if not converted: - try: - part = float(part) - converted = True - except ValueError: - pass - if not converted: - return False - if part < 0: - return False - if part > 1: - return False - return True - - -# ----- -# image -# ----- - -pngSignature = b"\x89PNG\r\n\x1a\n" - -_imageDictPrototype = dict( - fileName=(str, True), - xScale=((int, float), False), - xyScale=((int, float), False), - yxScale=((int, float), False), - yScale=((int, float), False), - xOffset=((int, float), False), - yOffset=((int, float), False), - color=(str, False), -) - - -def imageValidator(value): - """ - Version 3+. - """ - if not genericDictValidator(value, _imageDictPrototype): - return False - # fileName must be one or more characters - if not value["fileName"]: - return False - # color must follow the proper format - color = value.get("color") - if color is not None and not colorValidator(color): - return False - return True - - -def pngValidator(path=None, data=None, fileObj=None): - """ - Version 3+. - - This checks the signature of the image data. - """ - assert path is not None or data is not None or fileObj is not None - if path is not None: - with open(path, "rb") as f: - signature = f.read(8) - elif data is not None: - signature = data[:8] - elif fileObj is not None: - pos = fileObj.tell() - signature = fileObj.read(8) - fileObj.seek(pos) - if signature != pngSignature: - return False, "Image does not begin with the PNG signature." - return True, None - - -# ------------------- -# layercontents.plist -# ------------------- - - -def layerContentsValidator(value, ufoPathOrFileSystem): - """ - Check the validity of layercontents.plist. - Version 3+. - """ - if isinstance(ufoPathOrFileSystem, fs.base.FS): - fileSystem = ufoPathOrFileSystem - else: - fileSystem = fs.osfs.OSFS(ufoPathOrFileSystem) - - bogusFileMessage = "layercontents.plist in not in the correct format." 
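    # A minimal sketch of the structure the checks below expect (hypothetical
    # layer names, not taken from any real UFO): a list of [layerName, directoryName]
    # pairs such as
    #     [["public.default", "glyphs"], ["Sketches", "glyphs.sketches"]]
    # where exactly one entry uses the required default "glyphs" directory and
    # every other directory name starts with "glyphs.".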
- # file isn't in the right format - if not isinstance(value, list): - return False, bogusFileMessage - # work through each entry - usedLayerNames = set() - usedDirectories = set() - contents = {} - for entry in value: - # layer entry in the incorrect format - if not isinstance(entry, list): - return False, bogusFileMessage - if not len(entry) == 2: - return False, bogusFileMessage - for i in entry: - if not isinstance(i, str): - return False, bogusFileMessage - layerName, directoryName = entry - # check directory naming - if directoryName != "glyphs": - if not directoryName.startswith("glyphs."): - return ( - False, - "Invalid directory name (%s) in layercontents.plist." - % directoryName, - ) - if len(layerName) == 0: - return False, "Empty layer name in layercontents.plist." - # directory doesn't exist - if not fileSystem.exists(directoryName): - return False, "A glyphset does not exist at %s." % directoryName - # default layer name - if layerName == "public.default" and directoryName != "glyphs": - return ( - False, - "The name public.default is being used by a layer that is not the default.", - ) - # check usage - if layerName in usedLayerNames: - return ( - False, - "The layer name %s is used by more than one layer." % layerName, - ) - usedLayerNames.add(layerName) - if directoryName in usedDirectories: - return ( - False, - "The directory %s is used by more than one layer." % directoryName, - ) - usedDirectories.add(directoryName) - # store - contents[layerName] = directoryName - # missing default layer - foundDefault = "glyphs" in contents.values() - if not foundDefault: - return False, "The required default glyph set is not in the UFO." - return True, None - - -# ------------ -# groups.plist -# ------------ - - -def groupsValidator(value): - """ - Check the validity of the groups. - Version 3+ (though it's backwards compatible with UFO 1 and UFO 2). - - >>> groups = {"A" : ["A", "A"], "A2" : ["A"]} - >>> groupsValidator(groups) - (True, None) - - >>> groups = {"" : ["A"]} - >>> valid, msg = groupsValidator(groups) - >>> valid - False - >>> print(msg) - A group has an empty name. - - >>> groups = {"public.awesome" : ["A"]} - >>> groupsValidator(groups) - (True, None) - - >>> groups = {"public.kern1." : ["A"]} - >>> valid, msg = groupsValidator(groups) - >>> valid - False - >>> print(msg) - The group data contains a kerning group with an incomplete name. - >>> groups = {"public.kern2." : ["A"]} - >>> valid, msg = groupsValidator(groups) - >>> valid - False - >>> print(msg) - The group data contains a kerning group with an incomplete name. - - >>> groups = {"public.kern1.A" : ["A"], "public.kern2.A" : ["A"]} - >>> groupsValidator(groups) - (True, None) - - >>> groups = {"public.kern1.A1" : ["A"], "public.kern1.A2" : ["A"]} - >>> valid, msg = groupsValidator(groups) - >>> valid - False - >>> print(msg) - The glyph "A" occurs in too many kerning groups. - """ - bogusFormatMessage = "The group data is not in the correct format." - if not isDictEnough(value): - return False, bogusFormatMessage - firstSideMapping = {} - secondSideMapping = {} - for groupName, glyphList in value.items(): - if not isinstance(groupName, (str)): - return False, bogusFormatMessage - if not isinstance(glyphList, (list, tuple)): - return False, bogusFormatMessage - if not groupName: - return False, "A group has an empty name." - if groupName.startswith("public."): - if not groupName.startswith("public.kern1.") and not groupName.startswith( - "public.kern2." - ): - # unknown public.* name. silently skip. 
- continue - else: - if len("public.kernN.") == len(groupName): - return ( - False, - "The group data contains a kerning group with an incomplete name.", - ) - if groupName.startswith("public.kern1."): - d = firstSideMapping - else: - d = secondSideMapping - for glyphName in glyphList: - if not isinstance(glyphName, str): - return ( - False, - "The group data %s contains an invalid member." % groupName, - ) - if glyphName in d: - return ( - False, - 'The glyph "%s" occurs in too many kerning groups.' % glyphName, - ) - d[glyphName] = groupName - return True, None - - -# ------------- -# kerning.plist -# ------------- - - -def kerningValidator(data): - """ - Check the validity of the kerning data structure. - Version 3+ (though it's backwards compatible with UFO 1 and UFO 2). - - >>> kerning = {"A" : {"B" : 100}} - >>> kerningValidator(kerning) - (True, None) - - >>> kerning = {"A" : ["B"]} - >>> valid, msg = kerningValidator(kerning) - >>> valid - False - >>> print(msg) - The kerning data is not in the correct format. - - >>> kerning = {"A" : {"B" : "100"}} - >>> valid, msg = kerningValidator(kerning) - >>> valid - False - >>> print(msg) - The kerning data is not in the correct format. - """ - bogusFormatMessage = "The kerning data is not in the correct format." - if not isinstance(data, Mapping): - return False, bogusFormatMessage - for first, secondDict in data.items(): - if not isinstance(first, str): - return False, bogusFormatMessage - elif not isinstance(secondDict, Mapping): - return False, bogusFormatMessage - for second, value in secondDict.items(): - if not isinstance(second, str): - return False, bogusFormatMessage - elif not isinstance(value, numberTypes): - return False, bogusFormatMessage - return True, None - - -# ------------- -# lib.plist/lib -# ------------- - -_bogusLibFormatMessage = "The lib data is not in the correct format: %s" - - -def fontLibValidator(value): - """ - Check the validity of the lib. - Version 3+ (though it's backwards compatible with UFO 1 and UFO 2). - - >>> lib = {"foo" : "bar"} - >>> fontLibValidator(lib) - (True, None) - - >>> lib = {"public.awesome" : "hello"} - >>> fontLibValidator(lib) - (True, None) - - >>> lib = {"public.glyphOrder" : ["A", "C", "B"]} - >>> fontLibValidator(lib) - (True, None) - - >>> lib = "hello" - >>> valid, msg = fontLibValidator(lib) - >>> valid - False - >>> print(msg) # doctest: +ELLIPSIS - The lib data is not in the correct format: expected a dictionary, ... - - >>> lib = {1: "hello"} - >>> valid, msg = fontLibValidator(lib) - >>> valid - False - >>> print(msg) - The lib key is not properly formatted: expected str, found int: 1 - - >>> lib = {"public.glyphOrder" : "hello"} - >>> valid, msg = fontLibValidator(lib) - >>> valid - False - >>> print(msg) # doctest: +ELLIPSIS - public.glyphOrder is not properly formatted: expected list or tuple,... - - >>> lib = {"public.glyphOrder" : ["A", 1, "B"]} - >>> valid, msg = fontLibValidator(lib) - >>> valid - False - >>> print(msg) # doctest: +ELLIPSIS - public.glyphOrder is not properly formatted: expected str,... 
- """ - if not isDictEnough(value): - reason = "expected a dictionary, found %s" % type(value).__name__ - return False, _bogusLibFormatMessage % reason - for key, value in value.items(): - if not isinstance(key, str): - return False, ( - "The lib key is not properly formatted: expected str, found %s: %r" - % (type(key).__name__, key) - ) - # public.glyphOrder - if key == "public.glyphOrder": - bogusGlyphOrderMessage = "public.glyphOrder is not properly formatted: %s" - if not isinstance(value, (list, tuple)): - reason = "expected list or tuple, found %s" % type(value).__name__ - return False, bogusGlyphOrderMessage % reason - for glyphName in value: - if not isinstance(glyphName, str): - reason = "expected str, found %s" % type(glyphName).__name__ - return False, bogusGlyphOrderMessage % reason - return True, None - - -# -------- -# GLIF lib -# -------- - - -def glyphLibValidator(value): - """ - Check the validity of the lib. - Version 3+ (though it's backwards compatible with UFO 1 and UFO 2). - - >>> lib = {"foo" : "bar"} - >>> glyphLibValidator(lib) - (True, None) - - >>> lib = {"public.awesome" : "hello"} - >>> glyphLibValidator(lib) - (True, None) - - >>> lib = {"public.markColor" : "1,0,0,0.5"} - >>> glyphLibValidator(lib) - (True, None) - - >>> lib = {"public.markColor" : 1} - >>> valid, msg = glyphLibValidator(lib) - >>> valid - False - >>> print(msg) - public.markColor is not properly formatted. - """ - if not isDictEnough(value): - reason = "expected a dictionary, found %s" % type(value).__name__ - return False, _bogusLibFormatMessage % reason - for key, value in value.items(): - if not isinstance(key, str): - reason = "key (%s) should be a string" % key - return False, _bogusLibFormatMessage % reason - # public.markColor - if key == "public.markColor": - if not colorValidator(value): - return False, "public.markColor is not properly formatted." - return True, None - - -if __name__ == "__main__": - import doctest - - doctest.testmod() diff --git a/spaces/Deep1994/t5-paraphrase/README.md b/spaces/Deep1994/t5-paraphrase/README.md deleted file mode 100644 index 7a7913335badea48163679f25af349404869f44f..0000000000000000000000000000000000000000 --- a/spaces/Deep1994/t5-paraphrase/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: T5 Paraphrase -emoji: 👁 -colorFrom: red -colorTo: yellow -sdk: streamlit -sdk_version: 1.2.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/Dinoking/Guccio-AI-Designer/netdissect/nethook.py b/spaces/Dinoking/Guccio-AI-Designer/netdissect/nethook.py deleted file mode 100644 index f36e84ee0cae2de2c3be247498408cf66db3ee8f..0000000000000000000000000000000000000000 --- a/spaces/Dinoking/Guccio-AI-Designer/netdissect/nethook.py +++ /dev/null @@ -1,266 +0,0 @@ -''' -Utilities for instrumenting a torch model. - -InstrumentedModel will wrap a pytorch model and allow hooking -arbitrary layers to monitor or modify their output directly. - -Modified by Erik Härkönen: -- 29.11.2019: Unhooking bugfix -- 25.01.2020: Offset edits, removed old API -''' - -import torch, numpy, types -from collections import OrderedDict - -class InstrumentedModel(torch.nn.Module): - ''' - A wrapper for hooking, probing and intervening in pytorch Modules. 
- Example usage: - - ``` - model = load_my_model() - with inst as InstrumentedModel(model): - inst.retain_layer(layername) - inst.edit_layer(layername, 0.5, target_features) - inst.edit_layer(layername, offset=offset_tensor) - inst(inputs) - original_features = inst.retained_layer(layername) - ``` - ''' - - def __init__(self, model): - super(InstrumentedModel, self).__init__() - self.model = model - self._retained = OrderedDict() - self._ablation = {} - self._replacement = {} - self._offset = {} - self._hooked_layer = {} - self._old_forward = {} - - def __enter__(self): - return self - - def __exit__(self, type, value, traceback): - self.close() - - def forward(self, *inputs, **kwargs): - return self.model(*inputs, **kwargs) - - def retain_layer(self, layername): - ''' - Pass a fully-qualified layer name (E.g., module.submodule.conv3) - to hook that layer and retain its output each time the model is run. - A pair (layername, aka) can be provided, and the aka will be used - as the key for the retained value instead of the layername. - ''' - self.retain_layers([layername]) - - def retain_layers(self, layernames): - ''' - Retains a list of a layers at once. - ''' - self.add_hooks(layernames) - for layername in layernames: - aka = layername - if not isinstance(aka, str): - layername, aka = layername - if aka not in self._retained: - self._retained[aka] = None - - def retained_features(self): - ''' - Returns a dict of all currently retained features. - ''' - return OrderedDict(self._retained) - - def retained_layer(self, aka=None, clear=False): - ''' - Retrieve retained data that was previously hooked by retain_layer. - Call this after the model is run. If clear is set, then the - retained value will return and also cleared. - ''' - if aka is None: - # Default to the first retained layer. - aka = next(self._retained.keys().__iter__()) - result = self._retained[aka] - if clear: - self._retained[aka] = None - return result - - def edit_layer(self, layername, ablation=None, replacement=None, offset=None): - ''' - Pass a fully-qualified layer name (E.g., module.submodule.conv3) - to hook that layer and modify its output each time the model is run. - The output of the layer will be modified to be a convex combination - of the replacement and x interpolated according to the ablation, i.e.: - `output = x * (1 - a) + (r * a)`. - Additionally or independently, an offset can be added to the output. - ''' - if not isinstance(layername, str): - layername, aka = layername - else: - aka = layername - - # The default ablation if a replacement is specified is 1.0. - if ablation is None and replacement is not None: - ablation = 1.0 - self.add_hooks([(layername, aka)]) - if ablation is not None: - self._ablation[aka] = ablation - if replacement is not None: - self._replacement[aka] = replacement - if offset is not None: - self._offset[aka] = offset - # If needed, could add an arbitrary postprocessing lambda here. - - def remove_edits(self, layername=None, remove_offset=True, remove_replacement=True): - ''' - Removes edits at the specified layer, or removes edits at all layers - if no layer name is specified. 
- ''' - if layername is None: - if remove_replacement: - self._ablation.clear() - self._replacement.clear() - if remove_offset: - self._offset.clear() - return - - if not isinstance(layername, str): - layername, aka = layername - else: - aka = layername - if remove_replacement and aka in self._ablation: - del self._ablation[aka] - if remove_replacement and aka in self._replacement: - del self._replacement[aka] - if remove_offset and aka in self._offset: - del self._offset[aka] - - def add_hooks(self, layernames): - ''' - Sets up a set of layers to be hooked. - - Usually not called directly: use edit_layer or retain_layer instead. - ''' - needed = set() - aka_map = {} - for name in layernames: - aka = name - if not isinstance(aka, str): - name, aka = name - if self._hooked_layer.get(aka, None) != name: - aka_map[name] = aka - needed.add(name) - if not needed: - return - for name, layer in self.model.named_modules(): - if name in aka_map: - needed.remove(name) - aka = aka_map[name] - self._hook_layer(layer, name, aka) - for name in needed: - raise ValueError('Layer %s not found in model' % name) - - def _hook_layer(self, layer, layername, aka): - ''' - Internal method to replace a forward method with a closure that - intercepts the call, and tracks the hook so that it can be reverted. - ''' - if aka in self._hooked_layer: - raise ValueError('Layer %s already hooked' % aka) - if layername in self._old_forward: - raise ValueError('Layer %s already hooked' % layername) - self._hooked_layer[aka] = layername - self._old_forward[layername] = (layer, aka, - layer.__dict__.get('forward', None)) - editor = self - original_forward = layer.forward - def new_forward(self, *inputs, **kwargs): - original_x = original_forward(*inputs, **kwargs) - x = editor._postprocess_forward(original_x, aka) - return x - layer.forward = types.MethodType(new_forward, layer) - - def _unhook_layer(self, aka): - ''' - Internal method to remove a hook, restoring the original forward method. - ''' - if aka not in self._hooked_layer: - return - layername = self._hooked_layer[aka] - layer, check, old_forward = self._old_forward[layername] - assert check == aka - if old_forward is None: - if 'forward' in layer.__dict__: - del layer.__dict__['forward'] - else: - layer.forward = old_forward - del self._old_forward[layername] - del self._hooked_layer[aka] - if aka in self._ablation: - del self._ablation[aka] - if aka in self._replacement: - del self._replacement[aka] - if aka in self._offset: - del self._offset[aka] - if aka in self._retained: - del self._retained[aka] - - def _postprocess_forward(self, x, aka): - ''' - The internal method called by the hooked layers after they are run. - ''' - # Retain output before edits, if desired. - if aka in self._retained: - self._retained[aka] = x.detach() - - # Apply replacement edit - a = make_matching_tensor(self._ablation, aka, x) - if a is not None: - x = x * (1 - a) - v = make_matching_tensor(self._replacement, aka, x) - if v is not None: - x += (v * a) - - # Apply offset edit - b = make_matching_tensor(self._offset, aka, x) - if b is not None: - x = x + b - - return x - - def close(self): - ''' - Unhooks all hooked layers in the model. - ''' - for aka in list(self._old_forward.keys()): - self._unhook_layer(aka) - assert len(self._old_forward) == 0 - - -def make_matching_tensor(valuedict, name, data): - ''' - Converts `valuedict[name]` to be a tensor with the same dtype, device, - and dimension count as `data`, and caches the converted tensor. 
- ''' - v = valuedict.get(name, None) - if v is None: - return None - if not isinstance(v, torch.Tensor): - # Accept non-torch data. - v = torch.from_numpy(numpy.array(v)) - valuedict[name] = v - if not v.device == data.device or not v.dtype == data.dtype: - # Ensure device and type matches. - assert not v.requires_grad, '%s wrong device or type' % (name) - v = v.to(device=data.device, dtype=data.dtype) - valuedict[name] = v - if len(v.shape) < len(data.shape): - # Ensure dimensions are unsqueezed as needed. - assert not v.requires_grad, '%s wrong dimensions' % (name) - v = v.view((1,) + tuple(v.shape) + - (1,) * (len(data.shape) - len(v.shape) - 1)) - valuedict[name] = v - return v diff --git a/spaces/DragGan/DragGan-Inversion/training/augment.py b/spaces/DragGan/DragGan-Inversion/training/augment.py deleted file mode 100644 index 8067f4e3fec058c9025edaa7a9a0442afe859ae5..0000000000000000000000000000000000000000 --- a/spaces/DragGan/DragGan-Inversion/training/augment.py +++ /dev/null @@ -1,562 +0,0 @@ -# Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. - -"""Augmentation pipeline from the paper -"Training Generative Adversarial Networks with Limited Data". -Matches the original implementation by Karras et al. at -https://github.com/NVlabs/stylegan2-ada/blob/main/training/augment.py""" - -import numpy as np -import scipy.signal -import torch -from torch_utils import persistence -from torch_utils import misc -from torch_utils.ops import upfirdn2d -from torch_utils.ops import grid_sample_gradfix -from torch_utils.ops import conv2d_gradfix - -# ---------------------------------------------------------------------------- -# Coefficients of various wavelet decomposition low-pass filters. 
- -wavelets = { - 'haar': [0.7071067811865476, 0.7071067811865476], - 'db1': [0.7071067811865476, 0.7071067811865476], - 'db2': [-0.12940952255092145, 0.22414386804185735, 0.836516303737469, 0.48296291314469025], - 'db3': [0.035226291882100656, -0.08544127388224149, -0.13501102001039084, 0.4598775021193313, 0.8068915093133388, 0.3326705529509569], - 'db4': [-0.010597401784997278, 0.032883011666982945, 0.030841381835986965, -0.18703481171888114, -0.02798376941698385, 0.6308807679295904, 0.7148465705525415, 0.23037781330885523], - 'db5': [0.003335725285001549, -0.012580751999015526, -0.006241490213011705, 0.07757149384006515, -0.03224486958502952, -0.24229488706619015, 0.13842814590110342, 0.7243085284385744, 0.6038292697974729, 0.160102397974125], - 'db6': [-0.00107730108499558, 0.004777257511010651, 0.0005538422009938016, -0.031582039318031156, 0.02752286553001629, 0.09750160558707936, -0.12976686756709563, -0.22626469396516913, 0.3152503517092432, 0.7511339080215775, 0.4946238903983854, 0.11154074335008017], - 'db7': [0.0003537138000010399, -0.0018016407039998328, 0.00042957797300470274, 0.012550998556013784, -0.01657454163101562, -0.03802993693503463, 0.0806126091510659, 0.07130921926705004, -0.22403618499416572, -0.14390600392910627, 0.4697822874053586, 0.7291320908465551, 0.39653931948230575, 0.07785205408506236], - 'db8': [-0.00011747678400228192, 0.0006754494059985568, -0.0003917403729959771, -0.00487035299301066, 0.008746094047015655, 0.013981027917015516, -0.04408825393106472, -0.01736930100202211, 0.128747426620186, 0.00047248457399797254, -0.2840155429624281, -0.015829105256023893, 0.5853546836548691, 0.6756307362980128, 0.3128715909144659, 0.05441584224308161], - 'sym2': [-0.12940952255092145, 0.22414386804185735, 0.836516303737469, 0.48296291314469025], - 'sym3': [0.035226291882100656, -0.08544127388224149, -0.13501102001039084, 0.4598775021193313, 0.8068915093133388, 0.3326705529509569], - 'sym4': [-0.07576571478927333, -0.02963552764599851, 0.49761866763201545, 0.8037387518059161, 0.29785779560527736, -0.09921954357684722, -0.012603967262037833, 0.0322231006040427], - 'sym5': [0.027333068345077982, 0.029519490925774643, -0.039134249302383094, 0.1993975339773936, 0.7234076904024206, 0.6339789634582119, 0.01660210576452232, -0.17532808990845047, -0.021101834024758855, 0.019538882735286728], - 'sym6': [0.015404109327027373, 0.0034907120842174702, -0.11799011114819057, -0.048311742585633, 0.4910559419267466, 0.787641141030194, 0.3379294217276218, -0.07263752278646252, -0.021060292512300564, 0.04472490177066578, 0.0017677118642428036, -0.007800708325034148], - 'sym7': [0.002681814568257878, -0.0010473848886829163, -0.01263630340325193, 0.03051551316596357, 0.0678926935013727, -0.049552834937127255, 0.017441255086855827, 0.5361019170917628, 0.767764317003164, 0.2886296317515146, -0.14004724044296152, -0.10780823770381774, 0.004010244871533663, 0.010268176708511255], - 'sym8': [-0.0033824159510061256, -0.0005421323317911481, 0.03169508781149298, 0.007607487324917605, -0.1432942383508097, -0.061273359067658524, 0.4813596512583722, 0.7771857517005235, 0.3644418948353314, -0.05194583810770904, -0.027219029917056003, 0.049137179673607506, 0.003808752013890615, -0.01495225833704823, -0.0003029205147213668, 0.0018899503327594609], -} - -# ---------------------------------------------------------------------------- -# Helpers for constructing transformation matrices. 
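# The helpers below build homogeneous 3x3 (2D) and 4x4 (3D) transform matrices
# and broadcast over a batch whenever an argument is a tensor. A minimal usage
# sketch with hypothetical values (not taken from the pipeline itself):
#
#   theta = torch.rand([8]) * np.pi                       # batch of 8 angles
#   G_inv = translate2d(0.25, 0.0) @ rotate2d_inv(theta)  # -> shape [8, 3, 3]
#
# The *_inv variants exist because the pipeline accumulates the inverse
# transform (G_inv @ pixel_out ==> pixel_in), as noted in forward() below.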
- - -def matrix(*rows, device=None): - assert all(len(row) == len(rows[0]) for row in rows) - elems = [x for row in rows for x in row] - ref = [x for x in elems if isinstance(x, torch.Tensor)] - if len(ref) == 0: - return misc.constant(np.asarray(rows), device=device) - assert device is None or device == ref[0].device - elems = [x if isinstance(x, torch.Tensor) else misc.constant( - x, shape=ref[0].shape, device=ref[0].device) for x in elems] - return torch.stack(elems, dim=-1).reshape(ref[0].shape + (len(rows), -1)) - - -def translate2d(tx, ty, **kwargs): - return matrix( - [1, 0, tx], - [0, 1, ty], - [0, 0, 1], - **kwargs) - - -def translate3d(tx, ty, tz, **kwargs): - return matrix( - [1, 0, 0, tx], - [0, 1, 0, ty], - [0, 0, 1, tz], - [0, 0, 0, 1], - **kwargs) - - -def scale2d(sx, sy, **kwargs): - return matrix( - [sx, 0, 0], - [0, sy, 0], - [0, 0, 1], - **kwargs) - - -def scale3d(sx, sy, sz, **kwargs): - return matrix( - [sx, 0, 0, 0], - [0, sy, 0, 0], - [0, 0, sz, 0], - [0, 0, 0, 1], - **kwargs) - - -def rotate2d(theta, **kwargs): - return matrix( - [torch.cos(theta), torch.sin(-theta), 0], - [torch.sin(theta), torch.cos(theta), 0], - [0, 0, 1], - **kwargs) - - -def rotate3d(v, theta, **kwargs): - vx = v[..., 0] - vy = v[..., 1] - vz = v[..., 2] - s = torch.sin(theta) - c = torch.cos(theta) - cc = 1 - c - return matrix( - [vx*vx*cc+c, vx*vy*cc-vz*s, vx*vz*cc+vy*s, 0], - [vy*vx*cc+vz*s, vy*vy*cc+c, vy*vz*cc-vx*s, 0], - [vz*vx*cc-vy*s, vz*vy*cc+vx*s, vz*vz*cc+c, 0], - [0, 0, 0, 1], - **kwargs) - - -def translate2d_inv(tx, ty, **kwargs): - return translate2d(-tx, -ty, **kwargs) - - -def scale2d_inv(sx, sy, **kwargs): - return scale2d(1 / sx, 1 / sy, **kwargs) - - -def rotate2d_inv(theta, **kwargs): - return rotate2d(-theta, **kwargs) - -# ---------------------------------------------------------------------------- -# Versatile image augmentation pipeline from the paper -# "Training Generative Adversarial Networks with Limited Data". -# -# All augmentations are disabled by default; individual augmentations can -# be enabled by setting their probability multipliers to 1. - - -@persistence.persistent_class -class AugmentPipe(torch.nn.Module): - def __init__(self, - xflip=0, rotate90=0, xint=0, xint_max=0.125, - scale=0, rotate=0, aniso=0, xfrac=0, scale_std=0.2, rotate_max=1, aniso_std=0.2, xfrac_std=0.125, - brightness=0, contrast=0, lumaflip=0, hue=0, saturation=0, brightness_std=0.2, contrast_std=0.5, hue_max=1, saturation_std=1, - imgfilter=0, imgfilter_bands=[1, 1, 1, 1], imgfilter_std=1, - noise=0, cutout=0, noise_std=0.1, cutout_size=0.5, - ): - super().__init__() - # Overall multiplier for augmentation probability. - self.register_buffer('p', torch.ones([])) - - # Pixel blitting. - # Probability multiplier for x-flip. - self.xflip = float(xflip) - # Probability multiplier for 90 degree rotations. - self.rotate90 = float(rotate90) - # Probability multiplier for integer translation. - self.xint = float(xint) - # Range of integer translation, relative to image dimensions. - self.xint_max = float(xint_max) - - # General geometric transformations. - # Probability multiplier for isotropic scaling. - self.scale = float(scale) - # Probability multiplier for arbitrary rotation. - self.rotate = float(rotate) - # Probability multiplier for anisotropic scaling. - self.aniso = float(aniso) - # Probability multiplier for fractional translation. - self.xfrac = float(xfrac) - # Log2 standard deviation of isotropic scaling. 
- self.scale_std = float(scale_std) - # Range of arbitrary rotation, 1 = full circle. - self.rotate_max = float(rotate_max) - # Log2 standard deviation of anisotropic scaling. - self.aniso_std = float(aniso_std) - # Standard deviation of frational translation, relative to image dimensions. - self.xfrac_std = float(xfrac_std) - - # Color transformations. - # Probability multiplier for brightness. - self.brightness = float(brightness) - # Probability multiplier for contrast. - self.contrast = float(contrast) - # Probability multiplier for luma flip. - self.lumaflip = float(lumaflip) - # Probability multiplier for hue rotation. - self.hue = float(hue) - # Probability multiplier for saturation. - self.saturation = float(saturation) - # Standard deviation of brightness. - self.brightness_std = float(brightness_std) - # Log2 standard deviation of contrast. - self.contrast_std = float(contrast_std) - # Range of hue rotation, 1 = full circle. - self.hue_max = float(hue_max) - # Log2 standard deviation of saturation. - self.saturation_std = float(saturation_std) - - # Image-space filtering. - # Probability multiplier for image-space filtering. - self.imgfilter = float(imgfilter) - # Probability multipliers for individual frequency bands. - self.imgfilter_bands = list(imgfilter_bands) - # Log2 standard deviation of image-space filter amplification. - self.imgfilter_std = float(imgfilter_std) - - # Image-space corruptions. - # Probability multiplier for additive RGB noise. - self.noise = float(noise) - # Probability multiplier for cutout. - self.cutout = float(cutout) - # Standard deviation of additive RGB noise. - self.noise_std = float(noise_std) - # Size of the cutout rectangle, relative to image dimensions. - self.cutout_size = float(cutout_size) - - # Setup orthogonal lowpass filter for geometric augmentations. - self.register_buffer( - 'Hz_geom', upfirdn2d.setup_filter(wavelets['sym6'])) - - # Construct filter bank for image-space filtering. - Hz_lo = np.asarray(wavelets['sym2']) # H(z) - Hz_hi = Hz_lo * ((-1) ** np.arange(Hz_lo.size)) # H(-z) - Hz_lo2 = np.convolve(Hz_lo, Hz_lo[::-1]) / 2 # H(z) * H(z^-1) / 2 - Hz_hi2 = np.convolve(Hz_hi, Hz_hi[::-1]) / 2 # H(-z) * H(-z^-1) / 2 - Hz_fbank = np.eye(4, 1) # Bandpass(H(z), b_i) - for i in range(1, Hz_fbank.shape[0]): - Hz_fbank = np.dstack([Hz_fbank, np.zeros_like(Hz_fbank)]).reshape( - Hz_fbank.shape[0], -1)[:, :-1] - Hz_fbank = scipy.signal.convolve(Hz_fbank, [Hz_lo2]) - Hz_fbank[i, (Hz_fbank.shape[1] - Hz_hi2.size) // - 2: (Hz_fbank.shape[1] + Hz_hi2.size) // 2] += Hz_hi2 - self.register_buffer('Hz_fbank', torch.as_tensor( - Hz_fbank, dtype=torch.float32)) - - def forward(self, images, debug_percentile=None): - assert isinstance(images, torch.Tensor) and images.ndim == 4 - batch_size, num_channels, height, width = images.shape - device = images.device - if debug_percentile is not None: - debug_percentile = torch.as_tensor( - debug_percentile, dtype=torch.float32, device=device) - - # ------------------------------------- - # Select parameters for pixel blitting. - # ------------------------------------- - - # Initialize inverse homogeneous 2D transform: G_inv @ pixel_out ==> pixel_in - I_3 = torch.eye(3, device=device) - G_inv = I_3 - - # Apply x-flip with probability (xflip * strength). 
- if self.xflip > 0: - i = torch.floor(torch.rand([batch_size], device=device) * 2) - i = torch.where(torch.rand( - [batch_size], device=device) < self.xflip * self.p, i, torch.zeros_like(i)) - if debug_percentile is not None: - i = torch.full_like(i, torch.floor(debug_percentile * 2)) - G_inv = G_inv @ scale2d_inv(1 - 2 * i, 1) - - # Apply 90 degree rotations with probability (rotate90 * strength). - if self.rotate90 > 0: - i = torch.floor(torch.rand([batch_size], device=device) * 4) - i = torch.where(torch.rand( - [batch_size], device=device) < self.rotate90 * self.p, i, torch.zeros_like(i)) - if debug_percentile is not None: - i = torch.full_like(i, torch.floor(debug_percentile * 4)) - G_inv = G_inv @ rotate2d_inv(-np.pi / 2 * i) - - # Apply integer translation with probability (xint * strength). - if self.xint > 0: - t = (torch.rand([batch_size, 2], device=device) - * 2 - 1) * self.xint_max - t = torch.where(torch.rand( - [batch_size, 1], device=device) < self.xint * self.p, t, torch.zeros_like(t)) - if debug_percentile is not None: - t = torch.full_like( - t, (debug_percentile * 2 - 1) * self.xint_max) - G_inv = G_inv @ translate2d_inv(torch.round( - t[:, 0] * width), torch.round(t[:, 1] * height)) - - # -------------------------------------------------------- - # Select parameters for general geometric transformations. - # -------------------------------------------------------- - - # Apply isotropic scaling with probability (scale * strength). - if self.scale > 0: - s = torch.exp2(torch.randn( - [batch_size], device=device) * self.scale_std) - s = torch.where(torch.rand( - [batch_size], device=device) < self.scale * self.p, s, torch.ones_like(s)) - if debug_percentile is not None: - s = torch.full_like(s, torch.exp2(torch.erfinv( - debug_percentile * 2 - 1) * self.scale_std)) - G_inv = G_inv @ scale2d_inv(s, s) - - # Apply pre-rotation with probability p_rot. - # P(pre OR post) = p - p_rot = 1 - torch.sqrt((1 - self.rotate * self.p).clamp(0, 1)) - if self.rotate > 0: - theta = (torch.rand([batch_size], device=device) - * 2 - 1) * np.pi * self.rotate_max - theta = torch.where(torch.rand( - [batch_size], device=device) < p_rot, theta, torch.zeros_like(theta)) - if debug_percentile is not None: - theta = torch.full_like( - theta, (debug_percentile * 2 - 1) * np.pi * self.rotate_max) - G_inv = G_inv @ rotate2d_inv(-theta) # Before anisotropic scaling. - - # Apply anisotropic scaling with probability (aniso * strength). - if self.aniso > 0: - s = torch.exp2(torch.randn( - [batch_size], device=device) * self.aniso_std) - s = torch.where(torch.rand( - [batch_size], device=device) < self.aniso * self.p, s, torch.ones_like(s)) - if debug_percentile is not None: - s = torch.full_like(s, torch.exp2(torch.erfinv( - debug_percentile * 2 - 1) * self.aniso_std)) - G_inv = G_inv @ scale2d_inv(s, 1 / s) - - # Apply post-rotation with probability p_rot. - if self.rotate > 0: - theta = (torch.rand([batch_size], device=device) - * 2 - 1) * np.pi * self.rotate_max - theta = torch.where(torch.rand( - [batch_size], device=device) < p_rot, theta, torch.zeros_like(theta)) - if debug_percentile is not None: - theta = torch.zeros_like(theta) - G_inv = G_inv @ rotate2d_inv(-theta) # After anisotropic scaling. - - # Apply fractional translation with probability (xfrac * strength). 
- if self.xfrac > 0: - t = torch.randn([batch_size, 2], device=device) * self.xfrac_std - t = torch.where(torch.rand( - [batch_size, 1], device=device) < self.xfrac * self.p, t, torch.zeros_like(t)) - if debug_percentile is not None: - t = torch.full_like(t, torch.erfinv( - debug_percentile * 2 - 1) * self.xfrac_std) - G_inv = G_inv @ translate2d_inv(t[:, 0] * width, t[:, 1] * height) - - # ---------------------------------- - # Execute geometric transformations. - # ---------------------------------- - - # Execute if the transform is not identity. - if G_inv is not I_3: - - # Calculate padding. - cx = (width - 1) / 2 - cy = (height - 1) / 2 - cp = matrix([-cx, -cy, 1], [cx, -cy, 1], [cx, cy, 1], - [-cx, cy, 1], device=device) # [idx, xyz] - cp = G_inv @ cp.t() # [batch, xyz, idx] - Hz_pad = self.Hz_geom.shape[0] // 4 - margin = cp[:, :2, :].permute( - 1, 0, 2).flatten(1) # [xy, batch * idx] - # [x0, y0, x1, y1] - margin = torch.cat([-margin, margin]).max(dim=1).values - margin = margin + \ - misc.constant([Hz_pad * 2 - cx, Hz_pad * 2 - cy] - * 2, device=device) - margin = margin.max(misc.constant([0, 0] * 2, device=device)) - margin = margin.min(misc.constant( - [width-1, height-1] * 2, device=device)) - mx0, my0, mx1, my1 = margin.ceil().to(torch.int32) - - # Pad image and adjust origin. - images = torch.nn.functional.pad( - input=images, pad=[mx0, mx1, my0, my1], mode='reflect') - G_inv = translate2d((mx0 - mx1) / 2, (my0 - my1) / 2) @ G_inv - - # Upsample. - images = upfirdn2d.upsample2d(x=images, f=self.Hz_geom, up=2) - G_inv = scale2d( - 2, 2, device=device) @ G_inv @ scale2d_inv(2, 2, device=device) - G_inv = translate2d(-0.5, -0.5, - device=device) @ G_inv @ translate2d_inv(-0.5, -0.5, device=device) - - # Execute transformation. - shape = [batch_size, num_channels, - (height + Hz_pad * 2) * 2, (width + Hz_pad * 2) * 2] - G_inv = scale2d(2 / images.shape[3], 2 / images.shape[2], device=device) @ G_inv @ scale2d_inv( - 2 / shape[3], 2 / shape[2], device=device) - grid = torch.nn.functional.affine_grid( - theta=G_inv[:, :2, :], size=shape, align_corners=False) - images = grid_sample_gradfix.grid_sample(images, grid) - - # Downsample and crop. - images = upfirdn2d.downsample2d( - x=images, f=self.Hz_geom, down=2, padding=-Hz_pad*2, flip_filter=True) - - # -------------------------------------------- - # Select parameters for color transformations. - # -------------------------------------------- - - # Initialize homogeneous 3D transformation matrix: C @ color_in ==> color_out - I_4 = torch.eye(4, device=device) - C = I_4 - - # Apply brightness with probability (brightness * strength). - if self.brightness > 0: - b = torch.randn([batch_size], device=device) * self.brightness_std - b = torch.where(torch.rand( - [batch_size], device=device) < self.brightness * self.p, b, torch.zeros_like(b)) - if debug_percentile is not None: - b = torch.full_like(b, torch.erfinv( - debug_percentile * 2 - 1) * self.brightness_std) - C = translate3d(b, b, b) @ C - - # Apply contrast with probability (contrast * strength). - if self.contrast > 0: - c = torch.exp2(torch.randn( - [batch_size], device=device) * self.contrast_std) - c = torch.where(torch.rand( - [batch_size], device=device) < self.contrast * self.p, c, torch.ones_like(c)) - if debug_percentile is not None: - c = torch.full_like(c, torch.exp2(torch.erfinv( - debug_percentile * 2 - 1) * self.contrast_std)) - C = scale3d(c, c, c) @ C - - # Apply luma flip with probability (lumaflip * strength). - # Luma axis. 
- v = misc.constant(np.asarray([1, 1, 1, 0]) / np.sqrt(3), device=device) - if self.lumaflip > 0: - i = torch.floor(torch.rand([batch_size, 1, 1], device=device) * 2) - i = torch.where(torch.rand( - [batch_size, 1, 1], device=device) < self.lumaflip * self.p, i, torch.zeros_like(i)) - if debug_percentile is not None: - i = torch.full_like(i, torch.floor(debug_percentile * 2)) - C = (I_4 - 2 * v.ger(v) * i) @ C # Householder reflection. - - # Apply hue rotation with probability (hue * strength). - if self.hue > 0 and num_channels > 1: - theta = (torch.rand([batch_size], device=device) - * 2 - 1) * np.pi * self.hue_max - theta = torch.where(torch.rand( - [batch_size], device=device) < self.hue * self.p, theta, torch.zeros_like(theta)) - if debug_percentile is not None: - theta = torch.full_like( - theta, (debug_percentile * 2 - 1) * np.pi * self.hue_max) - C = rotate3d(v, theta) @ C # Rotate around v. - - # Apply saturation with probability (saturation * strength). - if self.saturation > 0 and num_channels > 1: - s = torch.exp2(torch.randn( - [batch_size, 1, 1], device=device) * self.saturation_std) - s = torch.where(torch.rand( - [batch_size, 1, 1], device=device) < self.saturation * self.p, s, torch.ones_like(s)) - if debug_percentile is not None: - s = torch.full_like(s, torch.exp2(torch.erfinv( - debug_percentile * 2 - 1) * self.saturation_std)) - C = (v.ger(v) + (I_4 - v.ger(v)) * s) @ C - - # ------------------------------ - # Execute color transformations. - # ------------------------------ - - # Execute if the transform is not identity. - if C is not I_4: - images = images.reshape([batch_size, num_channels, height * width]) - if num_channels == 3: - images = C[:, :3, :3] @ images + C[:, :3, 3:] - elif num_channels == 1: - C = C[:, :3, :].mean(dim=1, keepdims=True) - images = images * \ - C[:, :, :3].sum(dim=2, keepdims=True) + C[:, :, 3:] - else: - raise ValueError( - 'Image must be RGB (3 channels) or L (1 channel)') - images = images.reshape([batch_size, num_channels, height, width]) - - # ---------------------- - # Image-space filtering. - # ---------------------- - - if self.imgfilter > 0: - num_bands = self.Hz_fbank.shape[0] - assert len(self.imgfilter_bands) == num_bands - # Expected power spectrum (1/f). - expected_power = misc.constant( - np.array([10, 1, 1, 1]) / 13, device=device) - - # Apply amplification for each band with probability (imgfilter * strength * band_strength). - # Global gain vector (identity). - g = torch.ones([batch_size, num_bands], device=device) - for i, band_strength in enumerate(self.imgfilter_bands): - t_i = torch.exp2(torch.randn( - [batch_size], device=device) * self.imgfilter_std) - t_i = torch.where(torch.rand( - [batch_size], device=device) < self.imgfilter * self.p * band_strength, t_i, torch.ones_like(t_i)) - if debug_percentile is not None: - t_i = torch.full_like(t_i, torch.exp2(torch.erfinv( - debug_percentile * 2 - 1) * self.imgfilter_std)) if band_strength > 0 else torch.ones_like(t_i) - # Temporary gain vector. - t = torch.ones([batch_size, num_bands], device=device) - # Replace i'th element. - t[:, i] = t_i - # Normalize power. - t = t / (expected_power * t.square() - ).sum(dim=-1, keepdims=True).sqrt() - # Accumulate into global gain. - g = g * t - - # Construct combined amplification filter. 
- # [batch, tap] - Hz_prime = g @ self.Hz_fbank - Hz_prime = Hz_prime.unsqueeze(1).repeat( - [1, num_channels, 1]) # [batch, channels, tap] - # [batch * channels, 1, tap] - Hz_prime = Hz_prime.reshape([batch_size * num_channels, 1, -1]) - - # Apply filter. - p = self.Hz_fbank.shape[1] // 2 - images = images.reshape( - [1, batch_size * num_channels, height, width]) - images = torch.nn.functional.pad( - input=images, pad=[p, p, p, p], mode='reflect') - images = conv2d_gradfix.conv2d( - input=images, weight=Hz_prime.unsqueeze(2), groups=batch_size*num_channels) - images = conv2d_gradfix.conv2d( - input=images, weight=Hz_prime.unsqueeze(3), groups=batch_size*num_channels) - images = images.reshape([batch_size, num_channels, height, width]) - - # ------------------------ - # Image-space corruptions. - # ------------------------ - - # Apply additive RGB noise with probability (noise * strength). - if self.noise > 0: - sigma = torch.randn([batch_size, 1, 1, 1], - device=device).abs() * self.noise_std - sigma = torch.where(torch.rand( - [batch_size, 1, 1, 1], device=device) < self.noise * self.p, sigma, torch.zeros_like(sigma)) - if debug_percentile is not None: - sigma = torch.full_like(sigma, torch.erfinv( - debug_percentile) * self.noise_std) - images = images + \ - torch.randn([batch_size, num_channels, height, - width], device=device) * sigma - - # Apply cutout with probability (cutout * strength). - if self.cutout > 0: - size = torch.full([batch_size, 2, 1, 1, 1], - self.cutout_size, device=device) - size = torch.where(torch.rand( - [batch_size, 1, 1, 1, 1], device=device) < self.cutout * self.p, size, torch.zeros_like(size)) - center = torch.rand([batch_size, 2, 1, 1, 1], device=device) - if debug_percentile is not None: - size = torch.full_like(size, self.cutout_size) - center = torch.full_like(center, debug_percentile) - coord_x = torch.arange(width, device=device).reshape([1, 1, 1, -1]) - coord_y = torch.arange( - height, device=device).reshape([1, 1, -1, 1]) - mask_x = (((coord_x + 0.5) / width - - center[:, 0]).abs() >= size[:, 0] / 2) - mask_y = (((coord_y + 0.5) / height - - center[:, 1]).abs() >= size[:, 1] / 2) - mask = torch.logical_or(mask_x, mask_y).to(torch.float32) - images = images * mask - - return images - -# ---------------------------------------------------------------------------- diff --git a/spaces/ESG-TFM-UV/ESG_API_BATCH/README.md b/spaces/ESG-TFM-UV/ESG_API_BATCH/README.md deleted file mode 100644 index e4696b946e4551e6e68b145592b4ab53eedf91a9..0000000000000000000000000000000000000000 --- a/spaces/ESG-TFM-UV/ESG_API_BATCH/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: ESG API BATCH -emoji: ⛓ -colorFrom: blue -colorTo: purple -sdk: gradio -sdk_version: 3.3 -python_version: 3.7.14 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Emmawang/audio_summarizer/app.py b/spaces/Emmawang/audio_summarizer/app.py deleted file mode 100644 index 8ade2c1fb8bbd8cea2e4b4a5fa1697c6c4fd23f4..0000000000000000000000000000000000000000 --- a/spaces/Emmawang/audio_summarizer/app.py +++ /dev/null @@ -1,38 +0,0 @@ -import gradio as gr -from transformers import pipeline -from gtts import gTTS - -def audio(text): - # Summarize the input text using the Hugging Face model - # Load the pre-trained summarization model from Hugging Face - summarizer = pipeline("summarization", model="facebook/bart-large-cnn") - summary = summarizer(text, do_sample=False)[0]["summary_text"] - # Convert 
the summary to audio using Google Text-to-Speech - tts = gTTS(summary) - tts.save("summary.mp3") - return "summary.mp3" - -def text_summary(text): - # Summarize the input text using the Hugging Face model - # Load the pre-trained summarization model from Hugging Face - summarizer = pipeline("summarization", model="facebook/bart-large-cnn") - summary = summarizer(text, do_sample=False)[0]["summary_text"] - return summary - -# using streamlit to create a web app to display the summary or play the audio - -import streamlit as st - -st.title("📌 Your Personal Audio Summary") -text = st.text_input("Enter text to summarize") - -#choose between text summary or audio summary -option = st.selectbox("Choose between text summary or audio summary", ("📃Text Summary", "🗣Audio Summary")) - -if st.button("Summarize"): - if option == "📃Text Summary": - summary = text_summary(text) - st.write(summary) - if option == "🗣Audio Summary": - file_path = audio(text) - st.audio(file_path) diff --git a/spaces/EronSamez/RVC_HFmeu/demucs/augment.py b/spaces/EronSamez/RVC_HFmeu/demucs/augment.py deleted file mode 100644 index bb36d3298d89470f306316322e7587187819c94b..0000000000000000000000000000000000000000 --- a/spaces/EronSamez/RVC_HFmeu/demucs/augment.py +++ /dev/null @@ -1,106 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import random -import torch as th -from torch import nn - - -class Shift(nn.Module): - """ - Randomly shift audio in time by up to `shift` samples. - """ - def __init__(self, shift=8192): - super().__init__() - self.shift = shift - - def forward(self, wav): - batch, sources, channels, time = wav.size() - length = time - self.shift - if self.shift > 0: - if not self.training: - wav = wav[..., :length] - else: - offsets = th.randint(self.shift, [batch, sources, 1, 1], device=wav.device) - offsets = offsets.expand(-1, -1, channels, -1) - indexes = th.arange(length, device=wav.device) - wav = wav.gather(3, indexes + offsets) - return wav - - -class FlipChannels(nn.Module): - """ - Flip left-right channels. - """ - def forward(self, wav): - batch, sources, channels, time = wav.size() - if self.training and wav.size(2) == 2: - left = th.randint(2, (batch, sources, 1, 1), device=wav.device) - left = left.expand(-1, -1, -1, time) - right = 1 - left - wav = th.cat([wav.gather(2, left), wav.gather(2, right)], dim=2) - return wav - - -class FlipSign(nn.Module): - """ - Random sign flip. - """ - def forward(self, wav): - batch, sources, channels, time = wav.size() - if self.training: - signs = th.randint(2, (batch, sources, 1, 1), device=wav.device, dtype=th.float32) - wav = wav * (2 * signs - 1) - return wav - - -class Remix(nn.Module): - """ - Shuffle sources to make new mixes. - """ - def __init__(self, group_size=4): - """ - Shuffle sources within one batch. - Each batch is divided into groups of size `group_size` and shuffling is done within - each group separatly. This allow to keep the same probability distribution no matter - the number of GPUs. Without this grouping, using more GPUs would lead to a higher - probability of keeping two sources from the same track together which can impact - performance. 
- """ - super().__init__() - self.group_size = group_size - - def forward(self, wav): - batch, streams, channels, time = wav.size() - device = wav.device - - if self.training: - group_size = self.group_size or batch - if batch % group_size != 0: - raise ValueError(f"Batch size {batch} must be divisible by group size {group_size}") - groups = batch // group_size - wav = wav.view(groups, group_size, streams, channels, time) - permutations = th.argsort(th.rand(groups, group_size, streams, 1, 1, device=device), - dim=1) - wav = wav.gather(1, permutations.expand(-1, -1, -1, channels, time)) - wav = wav.view(batch, streams, channels, time) - return wav - - -class Scale(nn.Module): - def __init__(self, proba=1., min=0.25, max=1.25): - super().__init__() - self.proba = proba - self.min = min - self.max = max - - def forward(self, wav): - batch, streams, channels, time = wav.size() - device = wav.device - if self.training and random.random() < self.proba: - scales = th.empty(batch, streams, 1, 1, device=device).uniform_(self.min, self.max) - wav *= scales - return wav diff --git a/spaces/EronSamez/RVC_HFmeu/infer/modules/vc/utils.py b/spaces/EronSamez/RVC_HFmeu/infer/modules/vc/utils.py deleted file mode 100644 index a1cb0ff84097d1c7eb82373ccf19db061f595096..0000000000000000000000000000000000000000 --- a/spaces/EronSamez/RVC_HFmeu/infer/modules/vc/utils.py +++ /dev/null @@ -1,42 +0,0 @@ -import os -import re -from fairseq import checkpoint_utils - - -def get_index_path_from_model(sid): - sid0strip = re.sub(r'\.pth|\.onnx$', '', sid) - sid0name = os.path.split(sid0strip)[-1] # Extract only the name, not the directory - - # Check if the sid0strip has the specific ending format _eXXX_sXXX - if re.match(r'.+_e\d+_s\d+$', sid0name): - base_model_name = sid0name.rsplit('_', 2)[0] - else: - base_model_name = sid0name - - return next( - ( - f - for f in [ - os.path.join(root, name) - for root, _, files in os.walk(os.getenv("index_root"), topdown=False) - for name in files - if name.endswith(".index") and "trained" not in name - ] - if base_model_name in f - ), - "", - ) - - -def load_hubert(config): - models, _, _ = checkpoint_utils.load_model_ensemble_and_task( - ["assets/hubert/hubert_base.pt"], - suffix="", - ) - hubert_model = models[0] - hubert_model = hubert_model.to(config.device) - if config.is_half: - hubert_model = hubert_model.half() - else: - hubert_model = hubert_model.float() - return hubert_model.eval() diff --git a/spaces/FFZG-cleopatra/latvian-twitter-sentiment-classifier/README.md b/spaces/FFZG-cleopatra/latvian-twitter-sentiment-classifier/README.md deleted file mode 100644 index 3e09032580c2d3c5f86a77a2bb941f7bb0c3efe8..0000000000000000000000000000000000000000 --- a/spaces/FFZG-cleopatra/latvian-twitter-sentiment-classifier/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Latvian Twitter Sentiment Classifier -emoji: ⚡ -colorFrom: indigo -colorTo: blue -sdk: gradio -sdk_version: 3.28.3 -python_version: 3.7 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/FlippFuzz/whisper-webui/tests/segments_test.py b/spaces/FlippFuzz/whisper-webui/tests/segments_test.py deleted file mode 100644 index d829f1c77f74b3c96513fe4965d532cf2d1dceb4..0000000000000000000000000000000000000000 --- a/spaces/FlippFuzz/whisper-webui/tests/segments_test.py +++ /dev/null @@ -1,48 +0,0 @@ -import sys -import unittest - -sys.path.append('../whisper-webui') - -from src.segments 
import merge_timestamps - -class TestSegments(unittest.TestCase): - def __init__(self, *args, **kwargs): - super(TestSegments, self).__init__(*args, **kwargs) - - def test_merge_segments(self): - segments = [ - {'start': 10.0, 'end': 20.0}, - {'start': 22.0, 'end': 27.0}, - {'start': 31.0, 'end': 35.0}, - {'start': 45.0, 'end': 60.0}, - {'start': 61.0, 'end': 65.0}, - {'start': 68.0, 'end': 98.0}, - {'start': 100.0, 'end': 102.0}, - {'start': 110.0, 'end': 112.0} - ] - - result = merge_timestamps(segments, merge_window=5, max_merge_size=30, padding_left=1, padding_right=1) - - self.assertListEqual(result, [ - {'start': 9.0, 'end': 36.0}, - {'start': 44.0, 'end': 66.0}, - {'start': 67.0, 'end': 99.0}, - {'start': 99.0, 'end': 103.0}, - {'start': 109.0, 'end': 113.0} - ]) - - def test_overlap_next(self): - segments = [ - {'start': 5.0, 'end': 39.182}, - {'start': 39.986, 'end': 40.814} - ] - - result = merge_timestamps(segments, merge_window=5, max_merge_size=30, padding_left=1, padding_right=1) - - self.assertListEqual(result, [ - {'start': 4.0, 'end': 39.584}, - {'start': 39.584, 'end': 41.814} - ]) - -if __name__ == '__main__': - unittest.main() \ No newline at end of file diff --git a/spaces/FrankZxShen/so-vits-svc-models-pcr/diffusion/wavenet.py b/spaces/FrankZxShen/so-vits-svc-models-pcr/diffusion/wavenet.py deleted file mode 100644 index 3d48c7eaaa0e8191b27a5d1890eb657cbcc0d143..0000000000000000000000000000000000000000 --- a/spaces/FrankZxShen/so-vits-svc-models-pcr/diffusion/wavenet.py +++ /dev/null @@ -1,108 +0,0 @@ -import math -from math import sqrt - -import torch -import torch.nn as nn -import torch.nn.functional as F -from torch.nn import Mish - - -class Conv1d(torch.nn.Conv1d): - def __init__(self, *args, **kwargs): - super().__init__(*args, **kwargs) - nn.init.kaiming_normal_(self.weight) - - -class SinusoidalPosEmb(nn.Module): - def __init__(self, dim): - super().__init__() - self.dim = dim - - def forward(self, x): - device = x.device - half_dim = self.dim // 2 - emb = math.log(10000) / (half_dim - 1) - emb = torch.exp(torch.arange(half_dim, device=device) * -emb) - emb = x[:, None] * emb[None, :] - emb = torch.cat((emb.sin(), emb.cos()), dim=-1) - return emb - - -class ResidualBlock(nn.Module): - def __init__(self, encoder_hidden, residual_channels, dilation): - super().__init__() - self.residual_channels = residual_channels - self.dilated_conv = nn.Conv1d( - residual_channels, - 2 * residual_channels, - kernel_size=3, - padding=dilation, - dilation=dilation - ) - self.diffusion_projection = nn.Linear(residual_channels, residual_channels) - self.conditioner_projection = nn.Conv1d(encoder_hidden, 2 * residual_channels, 1) - self.output_projection = nn.Conv1d(residual_channels, 2 * residual_channels, 1) - - def forward(self, x, conditioner, diffusion_step): - diffusion_step = self.diffusion_projection(diffusion_step).unsqueeze(-1) - conditioner = self.conditioner_projection(conditioner) - y = x + diffusion_step - - y = self.dilated_conv(y) + conditioner - - # Using torch.split instead of torch.chunk to avoid using onnx::Slice - gate, filter = torch.split(y, [self.residual_channels, self.residual_channels], dim=1) - y = torch.sigmoid(gate) * torch.tanh(filter) - - y = self.output_projection(y) - - # Using torch.split instead of torch.chunk to avoid using onnx::Slice - residual, skip = torch.split(y, [self.residual_channels, self.residual_channels], dim=1) - return (x + residual) / math.sqrt(2.0), skip - - -class WaveNet(nn.Module): - def __init__(self, in_dims=128, 
n_layers=20, n_chans=384, n_hidden=256): - super().__init__() - self.input_projection = Conv1d(in_dims, n_chans, 1) - self.diffusion_embedding = SinusoidalPosEmb(n_chans) - self.mlp = nn.Sequential( - nn.Linear(n_chans, n_chans * 4), - Mish(), - nn.Linear(n_chans * 4, n_chans) - ) - self.residual_layers = nn.ModuleList([ - ResidualBlock( - encoder_hidden=n_hidden, - residual_channels=n_chans, - dilation=1 - ) - for i in range(n_layers) - ]) - self.skip_projection = Conv1d(n_chans, n_chans, 1) - self.output_projection = Conv1d(n_chans, in_dims, 1) - nn.init.zeros_(self.output_projection.weight) - - def forward(self, spec, diffusion_step, cond): - """ - :param spec: [B, 1, M, T] - :param diffusion_step: [B, 1] - :param cond: [B, M, T] - :return: - """ - x = spec.squeeze(1) - x = self.input_projection(x) # [B, residual_channel, T] - - x = F.relu(x) - diffusion_step = self.diffusion_embedding(diffusion_step) - diffusion_step = self.mlp(diffusion_step) - skip = [] - for layer in self.residual_layers: - x, skip_connection = layer(x, cond, diffusion_step) - skip.append(skip_connection) - - x = torch.sum(torch.stack(skip), dim=0) / sqrt(len(self.residual_layers)) - x = self.skip_projection(x) - x = F.relu(x) - x = self.output_projection(x) # [B, mel_bins, T] - return x[:, None, :, :] diff --git a/spaces/FrankZxShen/so-vits-svc-models-pcr/modules/F0Predictor/crepe.py b/spaces/FrankZxShen/so-vits-svc-models-pcr/modules/F0Predictor/crepe.py deleted file mode 100644 index c6fb45c79bcd306202a2c0282b3d73a8074ced5d..0000000000000000000000000000000000000000 --- a/spaces/FrankZxShen/so-vits-svc-models-pcr/modules/F0Predictor/crepe.py +++ /dev/null @@ -1,340 +0,0 @@ -from typing import Optional,Union -try: - from typing import Literal -except Exception as e: - from typing_extensions import Literal -import numpy as np -import torch -import torchcrepe -from torch import nn -from torch.nn import functional as F -import scipy - -#from:https://github.com/fishaudio/fish-diffusion - -def repeat_expand( - content: Union[torch.Tensor, np.ndarray], target_len: int, mode: str = "nearest" -): - """Repeat content to target length. - This is a wrapper of torch.nn.functional.interpolate. - - Args: - content (torch.Tensor): tensor - target_len (int): target length - mode (str, optional): interpolation mode. Defaults to "nearest". - - Returns: - torch.Tensor: tensor - """ - - ndim = content.ndim - - if content.ndim == 1: - content = content[None, None] - elif content.ndim == 2: - content = content[None] - - assert content.ndim == 3 - - is_np = isinstance(content, np.ndarray) - if is_np: - content = torch.from_numpy(content) - - results = torch.nn.functional.interpolate(content, size=target_len, mode=mode) - - if is_np: - results = results.numpy() - - if ndim == 1: - return results[0, 0] - elif ndim == 2: - return results[0] - - -class BasePitchExtractor: - def __init__( - self, - hop_length: int = 512, - f0_min: float = 50.0, - f0_max: float = 1100.0, - keep_zeros: bool = True, - ): - """Base pitch extractor. - - Args: - hop_length (int, optional): Hop length. Defaults to 512. - f0_min (float, optional): Minimum f0. Defaults to 50.0. - f0_max (float, optional): Maximum f0. Defaults to 1100.0. - keep_zeros (bool, optional): Whether keep zeros in pitch. Defaults to True. 
- """ - - self.hop_length = hop_length - self.f0_min = f0_min - self.f0_max = f0_max - self.keep_zeros = keep_zeros - - def __call__(self, x, sampling_rate=44100, pad_to=None): - raise NotImplementedError("BasePitchExtractor is not callable.") - - def post_process(self, x, sampling_rate, f0, pad_to): - if isinstance(f0, np.ndarray): - f0 = torch.from_numpy(f0).float().to(x.device) - - if pad_to is None: - return f0 - - f0 = repeat_expand(f0, pad_to) - - if self.keep_zeros: - return f0 - - vuv_vector = torch.zeros_like(f0) - vuv_vector[f0 > 0.0] = 1.0 - vuv_vector[f0 <= 0.0] = 0.0 - - # 去掉0频率, 并线性插值 - nzindex = torch.nonzero(f0).squeeze() - f0 = torch.index_select(f0, dim=0, index=nzindex).cpu().numpy() - time_org = self.hop_length / sampling_rate * nzindex.cpu().numpy() - time_frame = np.arange(pad_to) * self.hop_length / sampling_rate - - if f0.shape[0] <= 0: - return torch.zeros(pad_to, dtype=torch.float, device=x.device),torch.zeros(pad_to, dtype=torch.float, device=x.device) - - if f0.shape[0] == 1: - return torch.ones(pad_to, dtype=torch.float, device=x.device) * f0[0],torch.ones(pad_to, dtype=torch.float, device=x.device) - - # 大概可以用 torch 重写? - f0 = np.interp(time_frame, time_org, f0, left=f0[0], right=f0[-1]) - vuv_vector = vuv_vector.cpu().numpy() - vuv_vector = np.ceil(scipy.ndimage.zoom(vuv_vector,pad_to/len(vuv_vector),order = 0)) - - return f0,vuv_vector - - -class MaskedAvgPool1d(nn.Module): - def __init__( - self, kernel_size: int, stride: Optional[int] = None, padding: Optional[int] = 0 - ): - """An implementation of mean pooling that supports masked values. - - Args: - kernel_size (int): The size of the median pooling window. - stride (int, optional): The stride of the median pooling window. Defaults to None. - padding (int, optional): The padding of the median pooling window. Defaults to 0. - """ - - super(MaskedAvgPool1d, self).__init__() - self.kernel_size = kernel_size - self.stride = stride or kernel_size - self.padding = padding - - def forward(self, x, mask=None): - ndim = x.dim() - if ndim == 2: - x = x.unsqueeze(1) - - assert ( - x.dim() == 3 - ), "Input tensor must have 2 or 3 dimensions (batch_size, channels, width)" - - # Apply the mask by setting masked elements to zero, or make NaNs zero - if mask is None: - mask = ~torch.isnan(x) - - # Ensure mask has the same shape as the input tensor - assert x.shape == mask.shape, "Input tensor and mask must have the same shape" - - masked_x = torch.where(mask, x, torch.zeros_like(x)) - # Create a ones kernel with the same number of channels as the input tensor - ones_kernel = torch.ones(x.size(1), 1, self.kernel_size, device=x.device) - - # Perform sum pooling - sum_pooled = nn.functional.conv1d( - masked_x, - ones_kernel, - stride=self.stride, - padding=self.padding, - groups=x.size(1), - ) - - # Count the non-masked (valid) elements in each pooling window - valid_count = nn.functional.conv1d( - mask.float(), - ones_kernel, - stride=self.stride, - padding=self.padding, - groups=x.size(1), - ) - valid_count = valid_count.clamp(min=1) # Avoid division by zero - - # Perform masked average pooling - avg_pooled = sum_pooled / valid_count - - # Fill zero values with NaNs - avg_pooled[avg_pooled == 0] = float("nan") - - if ndim == 2: - return avg_pooled.squeeze(1) - - return avg_pooled - - -class MaskedMedianPool1d(nn.Module): - def __init__( - self, kernel_size: int, stride: Optional[int] = None, padding: Optional[int] = 0 - ): - """An implementation of median pooling that supports masked values. 
- - This implementation is inspired by the median pooling implementation in - https://gist.github.com/rwightman/f2d3849281624be7c0f11c85c87c1598 - - Args: - kernel_size (int): The size of the median pooling window. - stride (int, optional): The stride of the median pooling window. Defaults to None. - padding (int, optional): The padding of the median pooling window. Defaults to 0. - """ - - super(MaskedMedianPool1d, self).__init__() - self.kernel_size = kernel_size - self.stride = stride or kernel_size - self.padding = padding - - def forward(self, x, mask=None): - ndim = x.dim() - if ndim == 2: - x = x.unsqueeze(1) - - assert ( - x.dim() == 3 - ), "Input tensor must have 2 or 3 dimensions (batch_size, channels, width)" - - if mask is None: - mask = ~torch.isnan(x) - - assert x.shape == mask.shape, "Input tensor and mask must have the same shape" - - masked_x = torch.where(mask, x, torch.zeros_like(x)) - - x = F.pad(masked_x, (self.padding, self.padding), mode="reflect") - mask = F.pad( - mask.float(), (self.padding, self.padding), mode="constant", value=0 - ) - - x = x.unfold(2, self.kernel_size, self.stride) - mask = mask.unfold(2, self.kernel_size, self.stride) - - x = x.contiguous().view(x.size()[:3] + (-1,)) - mask = mask.contiguous().view(mask.size()[:3] + (-1,)).to(x.device) - - # Combine the mask with the input tensor - #x_masked = torch.where(mask.bool(), x, torch.fill_(torch.zeros_like(x),float("inf"))) - x_masked = torch.where(mask.bool(), x, torch.FloatTensor([float("inf")]).to(x.device)) - - # Sort the masked tensor along the last dimension - x_sorted, _ = torch.sort(x_masked, dim=-1) - - # Compute the count of non-masked (valid) values - valid_count = mask.sum(dim=-1) - - # Calculate the index of the median value for each pooling window - median_idx = (torch.div((valid_count - 1), 2, rounding_mode='trunc')).clamp(min=0) - - # Gather the median values using the calculated indices - median_pooled = x_sorted.gather(-1, median_idx.unsqueeze(-1).long()).squeeze(-1) - - # Fill infinite values with NaNs - median_pooled[torch.isinf(median_pooled)] = float("nan") - - if ndim == 2: - return median_pooled.squeeze(1) - - return median_pooled - - -class CrepePitchExtractor(BasePitchExtractor): - def __init__( - self, - hop_length: int = 512, - f0_min: float = 50.0, - f0_max: float = 1100.0, - threshold: float = 0.05, - keep_zeros: bool = False, - device = None, - model: Literal["full", "tiny"] = "full", - use_fast_filters: bool = True, - decoder="viterbi" - ): - super().__init__(hop_length, f0_min, f0_max, keep_zeros) - if decoder == "viterbi": - self.decoder = torchcrepe.decode.viterbi - elif decoder == "argmax": - self.decoder = torchcrepe.decode.argmax - elif decoder == "weighted_argmax": - self.decoder = torchcrepe.decode.weighted_argmax - else: - raise "Unknown decoder" - self.threshold = threshold - self.model = model - self.use_fast_filters = use_fast_filters - self.hop_length = hop_length - if device is None: - self.dev = torch.device("cuda" if torch.cuda.is_available() else "cpu") - else: - self.dev = torch.device(device) - if self.use_fast_filters: - self.median_filter = MaskedMedianPool1d(3, 1, 1).to(device) - self.mean_filter = MaskedAvgPool1d(3, 1, 1).to(device) - - def __call__(self, x, sampling_rate=44100, pad_to=None): - """Extract pitch using crepe. - - - Args: - x (torch.Tensor): Audio signal, shape (1, T). - sampling_rate (int, optional): Sampling rate. Defaults to 44100. - pad_to (int, optional): Pad to length. Defaults to None. 
- - Returns: - torch.Tensor: Pitch, shape (T // hop_length,). - """ - - assert x.ndim == 2, f"Expected 2D tensor, got {x.ndim}D tensor." - assert x.shape[0] == 1, f"Expected 1 channel, got {x.shape[0]} channels." - - x = x.to(self.dev) - f0, pd = torchcrepe.predict( - x, - sampling_rate, - self.hop_length, - self.f0_min, - self.f0_max, - pad=True, - model=self.model, - batch_size=1024, - device=x.device, - return_periodicity=True, - decoder=self.decoder - ) - - # Filter, remove silence, set uv threshold, refer to the original warehouse readme - if self.use_fast_filters: - pd = self.median_filter(pd) - else: - pd = torchcrepe.filter.median(pd, 3) - - pd = torchcrepe.threshold.Silence(-60.0)(pd, x, sampling_rate, 512) - f0 = torchcrepe.threshold.At(self.threshold)(f0, pd) - - if self.use_fast_filters: - f0 = self.mean_filter(f0) - else: - f0 = torchcrepe.filter.mean(f0, 3) - - f0 = torch.where(torch.isnan(f0), torch.full_like(f0, 0), f0)[0] - - if torch.all(f0 == 0): - rtn = f0.cpu().numpy() if pad_to==None else np.zeros(pad_to) - return rtn,rtn - - return self.post_process(x, sampling_rate, f0, pad_to) diff --git a/spaces/GIZ/SDSN-demo/ver0.1 scripts/cleaning.py b/spaces/GIZ/SDSN-demo/ver0.1 scripts/cleaning.py deleted file mode 100644 index 6943522a54729cef9b272a1d7a8d6c75042f64ac..0000000000000000000000000000000000000000 --- a/spaces/GIZ/SDSN-demo/ver0.1 scripts/cleaning.py +++ /dev/null @@ -1,168 +0,0 @@ -import logging -import pandas as pd -import numpy as np -import string -import nltk -import spacy -import en_core_web_sm -import re -import streamlit as st - -from haystack.nodes import PreProcessor - -'''basic cleaning - suitable for transformer models''' -def basic(s,SDG = False): - """ - :param s: string to be processed - :return: processed string: see comments in the source code for more info - """ - # Text Lowercase - #s = s.lower() - # Remove punctuation - #translator = str.maketrans(' ', ' ', string.punctuation) - #s = s.translate(translator) - # Remove URLs - s = re.sub(r'^https?:\/\/.*[\r\n]*', ' ', s, flags=re.MULTILINE) - s = re.sub(r"http\S+", " ", s) - if SDG == True: - s = s.lower() - translator = str.maketrans(' ', ' ', string.punctuation) - s = s.translate(translator) - s = re.sub('\n', ' ', s) - s = re.sub("\'", " ", s) - s = re.sub(r'\d+', ' ', s) - s = re.sub(r'\W+', ' ', s) - - # Remove new line characters - #s = re.sub('\n', ' ', s) - - # Remove distracting single quotes - #s = re.sub("\'", " ", s) - # Remove all remaining numbers and non alphanumeric characters - #s = re.sub(r'\d+', ' ', s) - #s = re.sub(r'\W+', ' ', s) - - # define custom words to replace: - #s = re.sub(r'strengthenedstakeholder', 'strengthened stakeholder', s) - - return s.strip() - - -def preprocessingForSDG(document): - - """ - takes in haystack document object and splits it into paragraphs and applies simple cleaning. - - Returns cleaned list of haystack document objects. One paragraph per object. Also returns pandas df and - list that contains all text joined together. 
- """ - - preprocessor = PreProcessor( - clean_empty_lines=True, - clean_whitespace=True, - clean_header_footer=True, - split_by="word", - split_length=120, - split_respect_sentence_boundary=False, - #split_overlap=1 - ) - for i in document: - docs_processed = preprocessor.process([i]) - for item in docs_processed: - item.content = basic(item.content, SDG = True) - - with st.spinner("👑 document being splitted into paragraphs"): - logging.info("document has been splitted to {} paragraphs".format(len(docs_processed))) - - # create dataframe of text and list of all text - df = pd.DataFrame(docs_processed) - all_text = " ".join(df.content.to_list()) - par_list = df.content.to_list() - - return docs_processed, df, all_text, par_list - -def preprocessing(document): - - """ - takes in haystack document object and splits it into paragraphs and applies simple cleaning. - - Returns cleaned list of haystack document objects. One paragraph per object. Also returns pandas df and - list that contains all text joined together. - """ - - preprocessor = PreProcessor( - clean_empty_lines=True, - clean_whitespace=True, - clean_header_footer=True, - split_by="sentence", - split_length=3, - split_respect_sentence_boundary=False, - split_overlap=1 - ) - for i in document: - docs_processed = preprocessor.process([i]) - for item in docs_processed: - item.content = basic(item.content) - - with st.spinner("👑 document being splitted into paragraphs"): - logging.info("document has been splitted to {} paragraphs".format(len(docs_processed))) - - # create dataframe of text and list of all text - df = pd.DataFrame(docs_processed) - all_text = " ".join(df.content.to_list()) - par_list = df.content.to_list() - - return docs_processed, df, all_text, par_list - -'''processing with spacy - suitable for models such as tf-idf, word2vec''' -def spacy_clean(alpha:str, use_nlp:bool = True) -> str: - - """ - - Clean and tokenise a string using Spacy. Keeps only alphabetic characters, removes stopwords and - - filters out all but proper nouns, nounts, verbs and adjectives. - - Parameters - ---------- - alpha : str - - The input string. - - use_nlp : bool, default False - - Indicates whether Spacy needs to use NLP. Enable this when using this function on its own. - - Should be set to False if used inside nlp.pipeline - - Returns - ------- - ' '.join(beta) : a concatenated list of lemmatised tokens, i.e. a processed string - - Notes - ----- - Fails if alpha is an NA value. Performance decreases as len(alpha) gets large. - Use together with nlp.pipeline for batch processing. 
- - """ - - nlp = spacy.load("en_core_web_sm", disable=["parser", "ner", "textcat"]) - - if use_nlp: - - alpha = nlp(alpha) - - - - beta = [] - - for tok in alpha: - - if all([tok.is_alpha, not tok.is_stop, tok.pos_ in ['PROPN', 'NOUN', 'VERB', 'ADJ']]): - - beta.append(tok.lemma_) - - - text = ' '.join(beta) - text = text.lower() - return text \ No newline at end of file diff --git a/spaces/Gen-Sim/Gen-Sim/cliport/models/rn50_bert_lingunet.py b/spaces/Gen-Sim/Gen-Sim/cliport/models/rn50_bert_lingunet.py deleted file mode 100644 index 3b72b982b8351b48f856bc2ddbfdb846748cc0a1..0000000000000000000000000000000000000000 --- a/spaces/Gen-Sim/Gen-Sim/cliport/models/rn50_bert_lingunet.py +++ /dev/null @@ -1,79 +0,0 @@ -import torch.nn as nn -import torch.nn.functional as F - -import cliport.utils.utils as utils -from cliport.models.resnet import IdentityBlock, ConvBlock -from cliport.models.core.unet import Up - -from cliport.models.rn50_bert_lingunet_lat import RN50BertLingUNetLat - - -class RN50BertLingUNet(RN50BertLingUNetLat): - """ ImageNet RN50 & Bert with U-Net skip connections but without lateral connections """ - - def __init__(self, input_shape, output_dim, cfg, device, preprocess): - super().__init__(input_shape, output_dim, cfg, device, preprocess) - - def _build_decoder(self): - self.conv1 = nn.Sequential( - nn.Conv2d(self.input_dim, 1024, kernel_size=3, stride=1, padding=1, bias=False), - nn.ReLU(True) - ) - self.up1 = Up(2048, 1024 // self.up_factor, self.bilinear) - self.up2 = Up(1024, 512 // self.up_factor, self.bilinear) - self.up3 = Up(512, 256 // self.up_factor, self.bilinear) - - self.layer1 = nn.Sequential( - ConvBlock(128, [64, 64, 64], kernel_size=3, stride=1, batchnorm=self.batchnorm), - IdentityBlock(64, [64, 64, 64], kernel_size=3, stride=1, batchnorm=self.batchnorm), - nn.UpsamplingBilinear2d(scale_factor=2), - ) - - self.layer2 = nn.Sequential( - ConvBlock(64, [32, 32, 32], kernel_size=3, stride=1, batchnorm=self.batchnorm), - IdentityBlock(32, [32, 32, 32], kernel_size=3, stride=1, batchnorm=self.batchnorm), - nn.UpsamplingBilinear2d(scale_factor=2), - ) - - self.layer3 = nn.Sequential( - ConvBlock(32, [16, 16, 16], kernel_size=3, stride=1, batchnorm=self.batchnorm), - IdentityBlock(16, [16, 16, 16], kernel_size=3, stride=1, batchnorm=self.batchnorm), - nn.UpsamplingBilinear2d(scale_factor=2), - ) - - self.conv2 = nn.Sequential( - nn.Conv2d(16, self.output_dim, kernel_size=1) - ) - - def forward(self, x, l): - x = self.preprocess(x, dist='clip') - - in_type = x.dtype - in_shape = x.shape - x = x[:,:3] # select RGB - x, im = self.encode_image(x) - x = x.to(in_type) - - # encode language - l_enc, l_emb, l_mask = self.encode_text(l) - l_input = l_emb if 'word' in self.lang_fusion_type else l_enc - l_input = l_input.to(dtype=x.dtype) - - # encode image - assert x.shape[1] == self.input_dim - x = self.conv1(x) - - x = self.lang_fuser1(x, l_input, x2_mask=l_mask, x2_proj=self.lang_proj1) - x = self.up1(x, im[-2]) - - x = self.lang_fuser2(x, l_input, x2_mask=l_mask, x2_proj=self.lang_proj2) - x = self.up2(x, im[-3]) - - x = self.lang_fuser3(x, l_input, x2_mask=l_mask, x2_proj=self.lang_proj3) - x = self.up3(x, im[-4]) - - for layer in [self.layer1, self.layer2, self.layer3, self.conv2]: - x = layer(x) - - x = F.interpolate(x, size=(in_shape[-2], in_shape[-1]), mode='bilinear') - return x \ No newline at end of file diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/resnest/fcn_s101-d8_512x1024_80k_cityscapes.py 
b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/resnest/fcn_s101-d8_512x1024_80k_cityscapes.py deleted file mode 100644 index 33fa0252d8b4cc786f1297605c169ee6068195a4..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/resnest/fcn_s101-d8_512x1024_80k_cityscapes.py +++ /dev/null @@ -1,9 +0,0 @@ -_base_ = '../fcn/fcn_r101-d8_512x1024_80k_cityscapes.py' -model = dict( - pretrained='open-mmlab://resnest101', - backbone=dict( - type='ResNeSt', - stem_channels=128, - radix=2, - reduction_factor=4, - avg_down_stride=True)) diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/mmseg/models/utils/res_layer.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/mmseg/models/utils/res_layer.py deleted file mode 100644 index 2585ab551aea79252ef6b34b5faef476e9e1abaa..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_segmentation/mmseg/models/utils/res_layer.py +++ /dev/null @@ -1,94 +0,0 @@ -from mmcv.cnn import build_conv_layer, build_norm_layer -from torch import nn as nn - - -class ResLayer(nn.Sequential): - """ResLayer to build ResNet style backbone. - - Args: - block (nn.Module): block used to build ResLayer. - inplanes (int): inplanes of block. - planes (int): planes of block. - num_blocks (int): number of blocks. - stride (int): stride of the first block. Default: 1 - avg_down (bool): Use AvgPool instead of stride conv when - downsampling in the bottleneck. Default: False - conv_cfg (dict): dictionary to construct and config conv layer. - Default: None - norm_cfg (dict): dictionary to construct and config norm layer. - Default: dict(type='BN') - multi_grid (int | None): Multi grid dilation rates of last - stage. Default: None - contract_dilation (bool): Whether contract first dilation of each layer - Default: False - """ - - def __init__(self, - block, - inplanes, - planes, - num_blocks, - stride=1, - dilation=1, - avg_down=False, - conv_cfg=None, - norm_cfg=dict(type='BN'), - multi_grid=None, - contract_dilation=False, - **kwargs): - self.block = block - - downsample = None - if stride != 1 or inplanes != planes * block.expansion: - downsample = [] - conv_stride = stride - if avg_down: - conv_stride = 1 - downsample.append( - nn.AvgPool2d( - kernel_size=stride, - stride=stride, - ceil_mode=True, - count_include_pad=False)) - downsample.extend([ - build_conv_layer( - conv_cfg, - inplanes, - planes * block.expansion, - kernel_size=1, - stride=conv_stride, - bias=False), - build_norm_layer(norm_cfg, planes * block.expansion)[1] - ]) - downsample = nn.Sequential(*downsample) - - layers = [] - if multi_grid is None: - if dilation > 1 and contract_dilation: - first_dilation = dilation // 2 - else: - first_dilation = dilation - else: - first_dilation = multi_grid[0] - layers.append( - block( - inplanes=inplanes, - planes=planes, - stride=stride, - dilation=first_dilation, - downsample=downsample, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - **kwargs)) - inplanes = planes * block.expansion - for i in range(1, num_blocks): - layers.append( - block( - inplanes=inplanes, - planes=planes, - stride=1, - dilation=dilation if multi_grid is None else multi_grid[i], - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - **kwargs)) - super(ResLayer, self).__init__(*layers) diff --git a/spaces/Hallucinate/demo/ldm/modules/image_degradation/__init__.py b/spaces/Hallucinate/demo/ldm/modules/image_degradation/__init__.py deleted file mode 100644 index 
7836cada81f90ded99c58d5942eea4c3477f58fc..0000000000000000000000000000000000000000
--- a/spaces/Hallucinate/demo/ldm/modules/image_degradation/__init__.py
+++ /dev/null
@@ -1,2 +0,0 @@
-from ldm.modules.image_degradation.bsrgan import degradation_bsrgan_variant as degradation_fn_bsr
-from ldm.modules.image_degradation.bsrgan_light import degradation_bsrgan_variant as degradation_fn_bsr_light
diff --git a/spaces/HarlanHong/DaGAN/modules/dynamic_conv.py b/spaces/HarlanHong/DaGAN/modules/dynamic_conv.py
deleted file mode 100644
index 29c1032b6ba2d6a6adb6959eb112e5bb656a872c..0000000000000000000000000000000000000000
--- a/spaces/HarlanHong/DaGAN/modules/dynamic_conv.py
+++ /dev/null
@@ -1,382 +0,0 @@
-import torch
-import torch.nn as nn
-import torch.nn.functional as F
-import pdb
-
-class attention1d(nn.Module):
-    def __init__(self, in_planes, ratios, K, temperature, init_weight=True):
-        super(attention1d, self).__init__()
-        assert temperature%3==1
-        self.avgpool = nn.AdaptiveAvgPool1d(1)
-        if in_planes!=3:
-            hidden_planes = int(in_planes*ratios)+1
-        else:
-            hidden_planes = K
-        self.fc1 = nn.Conv1d(in_planes, hidden_planes, 1, bias=False)
-        # self.bn = nn.BatchNorm2d(hidden_planes)
-        self.fc2 = nn.Conv1d(hidden_planes, K, 1, bias=True)
-        self.temperature = temperature
-        if init_weight:
-            self._initialize_weights()
-
-    def _initialize_weights(self):
-        for m in self.modules():
-            if isinstance(m, nn.Conv1d):
-                nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')
-                if m.bias is not None:
-                    nn.init.constant_(m.bias, 0)
-            if isinstance(m, nn.BatchNorm2d):
-                nn.init.constant_(m.weight, 1)
-                nn.init.constant_(m.bias, 0)
-
-    def updata_temperature(self):
-        if self.temperature!=1:
-            self.temperature -=3
-            print('Change temperature to:', str(self.temperature))
-
-    def forward(self, x):
-        x = self.avgpool(x)
-        x = self.fc1(x)
-        x = F.relu(x)
-        x = self.fc2(x).view(x.size(0), -1)
-        return F.softmax(x/self.temperature, 1)
-
-
-class Dynamic_conv1d(nn.Module):
-    def __init__(self, in_planes, out_planes, kernel_size, ratio=0.25, stride=1, padding=0, dilation=1, groups=1, bias=True, K=4, temperature=34, init_weight=True):
-        super(Dynamic_conv1d, self).__init__()
-        assert in_planes%groups==0
-        self.in_planes = in_planes
-        self.out_planes = out_planes
-        self.kernel_size = kernel_size
-        self.stride = stride
-        self.padding = padding
-        self.dilation = dilation
-        self.groups = groups
-        self.bias = bias
-        self.K = K
-        self.attention = attention1d(in_planes, ratio, K, temperature)
-
-        self.weight = nn.Parameter(torch.randn(K, out_planes, in_planes//groups, kernel_size), requires_grad=True)
-        if bias:
-            self.bias = nn.Parameter(torch.Tensor(K, out_planes))
-        else:
-            self.bias = None
-        if init_weight:
-            self._initialize_weights()
-
-    # TODO: weight initialization
-    def _initialize_weights(self):
-        for i in range(self.K):
-            nn.init.kaiming_uniform_(self.weight[i])
-
-    def update_temperature(self):
-        self.attention.updata_temperature()
-
-    def forward(self, x):  # Treat the batch as a grouping dimension: grouped convolution uses a different kernel per group, so each sample gets its own dynamic kernel
-        softmax_attention = self.attention(x)
-        batch_size, in_planes, height = x.size()
-        x = x.view(1, -1, height, )  # Fold the batch into the channel dimension for grouped convolution
-        weight = self.weight.view(self.K, -1)
-
-        # Generate the dynamic convolution weights: batch_size sets of kernels, each one different
-        aggregate_weight = torch.mm(softmax_attention, weight).view(-1, self.in_planes, self.kernel_size,)
-        if self.bias is not None:
-            aggregate_bias = torch.mm(softmax_attention, self.bias).view(-1)
-            output = F.conv1d(x, weight=aggregate_weight, bias=aggregate_bias, stride=self.stride, padding=self.padding,
-                              dilation=self.dilation, groups=self.groups*batch_size)
-        else:
-            output = F.conv1d(x, weight=aggregate_weight, bias=None, stride=self.stride, padding=self.padding,
-                              dilation=self.dilation, groups=self.groups * batch_size)
-
-        output = output.view(batch_size, self.out_planes, output.size(-1))
-        return output
-
-
-class attention2d(nn.Module):
-    def __init__(self, in_planes, ratios, K, temperature, init_weight=True):
-        super(attention2d, self).__init__()
-        assert temperature%3==1
-        self.avgpool = nn.AdaptiveAvgPool2d(1)
-        if in_planes!=3:
-            hidden_planes = int(in_planes*ratios)+1
-        else:
-            hidden_planes = K
-        self.fc1 = nn.Conv2d(in_planes, hidden_planes, 1, bias=False)
-        # self.bn = nn.BatchNorm2d(hidden_planes)
-        self.fc2 = nn.Conv2d(hidden_planes, K, 1, bias=True)
-        self.temperature = temperature
-        if init_weight:
-            self._initialize_weights()
-
-    def _initialize_weights(self):
-        for m in self.modules():
-            if isinstance(m, nn.Conv2d):
-                nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')
-                if m.bias is not None:
-                    nn.init.constant_(m.bias, 0)
-            if isinstance(m, nn.BatchNorm2d):
-                nn.init.constant_(m.weight, 1)
-                nn.init.constant_(m.bias, 0)
-
-    def updata_temperature(self):
-        if self.temperature!=1:
-            self.temperature -=3
-            print('Change temperature to:', str(self.temperature))
-
-    def forward(self, x):
-        x = self.avgpool(x)
-        x = self.fc1(x)
-        x = F.relu(x)
-        x = self.fc2(x).view(x.size(0), -1)
-        return F.softmax(x/self.temperature, 1)
-
-
-class Dynamic_deepwise_conv2d(nn.Module):
-    def __init__(self, in_planes, out_planes, kernel_size, ratio=0.25, stride=1, padding=0, dilation=1, groups=1, bias=True, K=4, temperature=34, init_weight=True):
-        super(Dynamic_deepwise_conv2d, self).__init__()
-        assert in_planes%groups==0
-        self.in_planes = in_planes
-        self.out_planes = out_planes
-        self.kernel_size = kernel_size
-        self.stride = stride
-        self.padding = padding
-        self.dilation = dilation
-        self.groups = groups
-        self.bias = bias
-        self.K = K
-        self.attention = attention2d(in_planes, ratio, K, temperature)
-
-        self.weight = nn.Parameter(torch.randn(K, out_planes, in_planes//groups, kernel_size, kernel_size), requires_grad=True)
-        if bias:
-            self.bias = nn.Parameter(torch.Tensor(K, out_planes))
-        else:
-            self.bias = None
-        if init_weight:
-            self._initialize_weights()
-
-    # TODO: weight initialization
-    def _initialize_weights(self):
-        for i in range(self.K):
-            nn.init.kaiming_uniform_(self.weight[i])
-
-    def update_temperature(self):
-        self.attention.updata_temperature()
-
-    def forward(self, x, y):  # Treat the batch as a grouping dimension: grouped convolution uses a different kernel per group, so each sample gets its own dynamic kernel
-        softmax_attention = self.attention(x)
-        batch_size, in_planes, height, width = x.size()
-        y = y.view(1, -1, height, width)  # Fold the batch into the channel dimension for grouped convolution
-        weight = self.weight.view(self.K, -1)
-
-        # Generate the dynamic convolution weights: batch_size sets of kernels, each one different
-        aggregate_weight = torch.mm(softmax_attention, weight).view(-1, 1, self.kernel_size, self.kernel_size)
-        if self.bias is not None:
-            aggregate_bias = torch.mm(softmax_attention, self.bias).view(-1)
-            output = F.conv2d(y, weight=aggregate_weight, bias=aggregate_bias, stride=self.stride, padding=self.padding,
-                              dilation=self.dilation, groups=self.groups*batch_size)
-        else:
-            output = F.conv2d(y, weight=aggregate_weight, bias=None, stride=self.stride, padding=self.padding,
-                              dilation=self.dilation, groups=self.groups * batch_size)
-
-        output = output.view(batch_size, self.out_planes, output.size(-2), output.size(-1))
-        return output
-
-class Dynamic_conv2d(nn.Module):
-    def __init__(self, in_planes, out_planes, kernel_size, ratio=0.25, stride=1, padding=0, dilation=1, groups=1, bias=True, K=4, temperature=34, init_weight=True):
-        super(Dynamic_conv2d, self).__init__()
-        assert in_planes%groups==0
-        self.in_planes = in_planes
-        self.out_planes = out_planes
-        self.kernel_size = kernel_size
-        self.stride = stride
-        self.padding = padding
-        self.dilation = dilation
-        self.groups = groups
-        self.bias = bias
-        self.K = K
-        self.attention = attention2d(in_planes, ratio, K, temperature)
-
-        self.weight = nn.Parameter(torch.randn(K, out_planes, in_planes//groups, kernel_size, kernel_size), requires_grad=True)
-        if bias:
-            self.bias = nn.Parameter(torch.Tensor(K, out_planes))
-        else:
-            self.bias = None
-        if init_weight:
-            self._initialize_weights()
-
-    # TODO: weight initialization
-    def _initialize_weights(self):
-        for i in range(self.K):
-            nn.init.kaiming_uniform_(self.weight[i])
-
-    def update_temperature(self):
-        self.attention.updata_temperature()
-
-    def forward(self, x, y):  # Treat the batch as a grouping dimension: grouped convolution uses a different kernel per group, so each sample gets its own dynamic kernel
-        softmax_attention = self.attention(x)
-        batch_size, in_planes, height, width = x.size()
-        y = y.view(1, -1, height, width)  # Fold the batch into the channel dimension for grouped convolution
-        weight = self.weight.view(self.K, -1)
-
-        # Generate the dynamic convolution weights: batch_size sets of kernels, each one different
-        aggregate_weight = torch.mm(softmax_attention, weight).view(-1, self.in_planes, self.kernel_size, self.kernel_size)
-        if self.bias is not None:
-            aggregate_bias = torch.mm(softmax_attention, self.bias).view(-1)
-            output = F.conv2d(y, weight=aggregate_weight, bias=aggregate_bias, stride=self.stride, padding=self.padding,
-                              dilation=self.dilation, groups=self.groups*batch_size)
-        else:
-            output = F.conv2d(y, weight=aggregate_weight, bias=None, stride=self.stride, padding=self.padding,
-                              dilation=self.dilation, groups=self.groups * batch_size)
-
-        output = output.view(batch_size, self.out_planes, output.size(-2), output.size(-1))
-        return output
-
-
-class attention3d(nn.Module):
-    def __init__(self, in_planes, ratios, K, temperature):
-        super(attention3d, self).__init__()
-        assert temperature%3==1
-        self.avgpool = nn.AdaptiveAvgPool3d(1)
-        if in_planes != 3:
-            hidden_planes = int(in_planes * ratios)+1
-        else:
-            hidden_planes = K
-        self.fc1 = nn.Conv3d(in_planes, hidden_planes, 1, bias=False)
-        self.fc2 = nn.Conv3d(hidden_planes, K, 1, bias=False)
-        self.temperature = temperature
-
-    def updata_temperature(self):
-        if self.temperature!=1:
-            self.temperature -=3
-            print('Change temperature to:', str(self.temperature))
-
-    def forward(self, x):
-        x = self.avgpool(x)
-        x = self.fc1(x)
-        x = F.relu(x)
-        x = self.fc2(x).view(x.size(0), -1)
-        return F.softmax(x / self.temperature, 1)
-
-class Dynamic_conv3d(nn.Module):
-    def __init__(self, in_planes, out_planes, kernel_size, ratio=0.25, stride=1, padding=0, dilation=1, groups=1, bias=True, K=4, temperature=34):
-        super(Dynamic_conv3d, self).__init__()
-        assert in_planes%groups==0
-        self.in_planes = in_planes
-        self.out_planes = out_planes
-        self.kernel_size = kernel_size
-        self.stride = stride
-        self.padding = padding
-        self.dilation = dilation
-        self.groups = groups
-        self.bias = bias
-        self.K = K
-        self.attention = attention3d(in_planes, ratio, K, temperature)
-
-        self.weight = nn.Parameter(torch.randn(K, out_planes, in_planes//groups, kernel_size, kernel_size, kernel_size), requires_grad=True)
-        if bias:
-            self.bias = nn.Parameter(torch.Tensor(K, out_planes))
-        else:
-            self.bias = None
-
-    # TODO: weight initialization
-    # nn.init.kaiming_uniform_(self.weight, )
-
-    def update_temperature(self):
-        self.attention.updata_temperature()
-
-    def forward(self, 
x):#将batch视作维度变量,进行组卷积,因为组卷积的权重是不同的,动态卷积的权重也是不同的 - softmax_attention = self.attention(x) - batch_size, in_planes, depth, height, width = x.size() - x = x.view(1, -1, depth, height, width)# 变化成一个维度进行组卷积 - weight = self.weight.view(self.K, -1) - - # 动态卷积的权重的生成, 生成的是batch_size个卷积参数(每个参数不同) - aggregate_weight = torch.mm(softmax_attention, weight).view(-1, self.in_planes, self.kernel_size, self.kernel_size, self.kernel_size) - if self.bias is not None: - aggregate_bias = torch.mm(softmax_attention, self.bias).view(-1) - output = F.conv3d(x, weight=aggregate_weight, bias=aggregate_bias, stride=self.stride, padding=self.padding, - dilation=self.dilation, groups=self.groups*batch_size) - else: - output = F.conv3d(x, weight=aggregate_weight, bias=None, stride=self.stride, padding=self.padding, - dilation=self.dilation, groups=self.groups * batch_size) - - output = output.view(batch_size, self.out_planes, output.size(-3), output.size(-2), output.size(-1)) - return output - - - - -if __name__ == '__main__': - x = torch.randn(12, 256, 64, 64) - y = torch.randn(12, 256, 64, 64) - - model = Dynamic_conv2d(in_planes=256, out_planes=256, kernel_size=3, ratio=0.25, padding=1,groups=1) - x = x.to('cuda:0') - y = y.to('cuda:0') - model.to('cuda') - # model.attention.cuda() - print(model(x,y).shape) - # nn.Conv3d() - # print(model(x).shape) - # model.update_temperature() - # model.update_temperature() - # model.update_temperature() - # model.update_temperature() - # model.update_temperature() - # model.update_temperature() - # model.update_temperature() - # model.update_temperature() - # model.update_temperature() - # model.update_temperature() - # model.update_temperature() - # model.update_temperature() - # model.update_temperature() - # print(model(x).shape) - # print(model(x).shape) - # print(model(x).shape) - # print(model(x).shape) - # print(model(x).shape) - # print(model(x).shape) - # print(model(x).shape) - # print(model(x).shape) - # print(model(x).shape) - # print(model(x).shape) - # print(model(x).shape) - # print(model(x).shape) - # print(model(x).shape) - # print(model(x).shape) - # print(model(x).shape) - # print(model(x).shape) - # print(model(x).shape) - # print(model(x).shape) - # print(model(x).shape) - # print(model(x).shape) - # print(model(x).shape) - # print(model(x).shape) - # print(model(x).shape) - # print(model(x).shape) - # print(model(x).shape) - # print(model(x).shape) - # print(model(x).shape) - # print(model(x).shape) - # print(model(x).shape) - # print(model(x).shape) - # print(model(x).shape) - # print(model(x).shape) - # print(model(x).shape) - # print(model(x).shape) - # print(model(x).shape) - # print(model(x).shape) - # print(model(x).shape) - # print(model(x).shape) - diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/model_parallel/modules/multihead_attention.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/model_parallel/modules/multihead_attention.py deleted file mode 100644 index 8eb9d09dad37ab132295166d691873beec63eaf1..0000000000000000000000000000000000000000 --- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/model_parallel/modules/multihead_attention.py +++ /dev/null @@ -1,349 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -from typing import Dict, Optional, Tuple - -import torch -import torch.nn.functional as F -from fairseq import utils -from fairseq.incremental_decoding_utils import with_incremental_state -from fairseq.modules.fairseq_dropout import FairseqDropout -from torch import Tensor, nn - - -try: - from fairseq.model_parallel.megatron.mpu import ( - get_cuda_rng_tracker, - get_model_parallel_world_size, - ColumnParallelLinear, - RowParallelLinear, - ) - - has_megatron_submodule = True -except (ImportError, ModuleNotFoundError): - has_megatron_submodule = False - - -@with_incremental_state -class ModelParallelMultiheadAttention(nn.Module): - """Model parallel Multi-headed attention. - This performs the Multi-headed attention over multiple gpus. - - See "Megatron-LM: https://arxiv.org/pdf/1909.08053.pdf" for more details. - """ - - def __init__( - self, - embed_dim, - num_heads, - kdim=None, - vdim=None, - dropout=0.0, - bias=True, - self_attention=False, - encoder_decoder_attention=False, - ): - super().__init__() - if not has_megatron_submodule: - raise ImportError( - "\n\nPlease install the megatron submodule:" - "\n\n git submodule update --init " - "fairseq/model_parallel/megatron" - ) - self.embed_dim = embed_dim - self.kdim = kdim if kdim is not None else embed_dim - self.vdim = vdim if vdim is not None else embed_dim - self.qkv_same_dim = self.kdim == embed_dim and self.vdim == embed_dim - - self.model_parallel_size = get_model_parallel_world_size() - - self.num_heads_partition = num_heads // self.model_parallel_size - assert ( - self.num_heads_partition * self.model_parallel_size == num_heads - ), "Number of heads must be divisible by model parallel size" - - self.dropout_module = FairseqDropout( - dropout, module_name=self.__class__.__name__ - ) - self.head_dim = embed_dim // num_heads - assert ( - self.head_dim * num_heads == self.embed_dim - ), "embed_dim must be divisible by num_heads" - self.scaling = self.head_dim ** -0.5 - - self.self_attention = self_attention - self.encoder_decoder_attention = encoder_decoder_attention - - assert ( - not self.self_attention or self.qkv_same_dim - ), "Self-attention requires query, key and value to be of the same size" - - self.k_proj = ColumnParallelLinear( - self.kdim, embed_dim, bias=bias, gather_output=False - ) - self.v_proj = ColumnParallelLinear( - self.vdim, embed_dim, bias=bias, gather_output=False - ) - self.q_proj = ColumnParallelLinear( - embed_dim, embed_dim, bias=bias, gather_output=False - ) - self.out_proj = RowParallelLinear( - embed_dim, embed_dim, bias=bias, input_is_parallel=True - ) - - def forward( - self, - query, - key: Optional[Tensor], - value: Optional[Tensor], - key_padding_mask: Optional[Tensor] = None, - incremental_state: Optional[Dict[str, Dict[str, Optional[Tensor]]]] = None, - static_kv: bool = False, - attn_mask: Optional[Tensor] = None, - **unused_kwargs, - ) -> Tuple[Tensor, Optional[Tensor]]: - """Input shape: Time x Batch x Channel - - Args: - key_padding_mask (ByteTensor, optional): mask to exclude - keys that are pads, of shape `(batch, src_len)`, where - padding elements are indicated by 1s. - attn_mask (ByteTensor, optional): typically used to - implement causal attention, where the mask prevents the - attention from looking forward in time (default: None). 
- """ - tgt_len, bsz, embed_dim = query.size() - assert embed_dim == self.embed_dim - assert list(query.size()) == [tgt_len, bsz, embed_dim] - - is_tpu = query.device.type == "xla" - - if incremental_state is not None: - saved_state = self._get_input_buffer(incremental_state) - if saved_state is not None and "prev_key" in saved_state: - # previous time steps are cached - no need to recompute - # key and value if they are static - if static_kv: - assert self.encoder_decoder_attention and not self.self_attention - key = value = None - else: - saved_state = None - - if self.self_attention: - q = self.q_proj(query) - k = self.k_proj(query) - v = self.v_proj(query) - elif self.encoder_decoder_attention: - # encoder-decoder attention - q = self.q_proj(query) - if key is None: - assert value is None - k = v = None - else: - k = self.k_proj(key) - v = self.v_proj(key) - - else: - assert key is not None and value is not None - q = self.q_proj(query) - k = self.k_proj(key) - v = self.v_proj(value) - q *= self.scaling - - q = ( - q.contiguous() - .view(tgt_len, bsz * self.num_heads_partition, self.head_dim) - .transpose(0, 1) - ) - if k is not None: - k = ( - k.contiguous() - .view(-1, bsz * self.num_heads_partition, self.head_dim) - .transpose(0, 1) - ) - if v is not None: - v = ( - v.contiguous() - .view(-1, bsz * self.num_heads_partition, self.head_dim) - .transpose(0, 1) - ) - - if saved_state is not None: - # saved states are stored with shape (bsz, num_heads_partition, seq_len, head_dim) - if "prev_key" in saved_state: - _prev_key = saved_state["prev_key"] - assert _prev_key is not None - prev_key = _prev_key.view( - bsz * self.num_heads_partition, -1, self.head_dim - ) - if static_kv: - k = prev_key - else: - assert k is not None - k = torch.cat([prev_key, k], dim=1) - if "prev_value" in saved_state: - _prev_value = saved_state["prev_value"] - assert _prev_value is not None - prev_value = _prev_value.view( - bsz * self.num_heads_partition, -1, self.head_dim - ) - if static_kv: - v = prev_value - else: - assert v is not None - v = torch.cat([prev_value, v], dim=1) - prev_key_padding_mask: Optional[Tensor] = None - if "prev_key_padding_mask" in saved_state: - prev_key_padding_mask = saved_state["prev_key_padding_mask"] - assert k is not None and v is not None - key_padding_mask = ( - ModelParallelMultiheadAttention._append_prev_key_padding_mask( - key_padding_mask=key_padding_mask, - prev_key_padding_mask=prev_key_padding_mask, - batch_size=bsz, - src_len=k.size(1), - static_kv=static_kv, - ) - ) - - saved_state["prev_key"] = k.view( - bsz, self.num_heads_partition, -1, self.head_dim - ) - saved_state["prev_value"] = v.view( - bsz, self.num_heads_partition, -1, self.head_dim - ) - saved_state["prev_key_padding_mask"] = key_padding_mask - # In this branch incremental_state is never None - assert incremental_state is not None - incremental_state = self._set_input_buffer(incremental_state, saved_state) - assert k is not None - src_len = k.size(1) - - # This is part of a workaround to get around fork/join parallelism - # not supporting Optional types. 
- if key_padding_mask is not None and key_padding_mask.dim() == 0: - key_padding_mask = None - - if key_padding_mask is not None: - assert key_padding_mask.size(0) == bsz - assert key_padding_mask.size(1) == src_len - - attn_weights = torch.bmm(q, k.transpose(1, 2)) - - assert list(attn_weights.size()) == [ - bsz * self.num_heads_partition, - tgt_len, - src_len, - ] - - if attn_mask is not None: - attn_mask = attn_mask.unsqueeze(0) - attn_weights += attn_mask - - if key_padding_mask is not None: - # don't attend to padding symbols - attn_weights = attn_weights.view( - bsz, self.num_heads_partition, tgt_len, src_len - ) - if not is_tpu: - attn_weights = attn_weights.masked_fill( - key_padding_mask.unsqueeze(1).unsqueeze(2).to(torch.bool), - float("-inf"), - ) - else: - attn_weights = attn_weights.transpose(0, 2) - attn_weights = attn_weights.masked_fill(key_padding_mask, float("-inf")) - attn_weights = attn_weights.transpose(0, 2) - attn_weights = attn_weights.view( - bsz * self.num_heads_partition, tgt_len, src_len - ) - - attn_weights_float = utils.softmax(attn_weights, dim=-1) - attn_weights = attn_weights_float.type_as(attn_weights) - - with get_cuda_rng_tracker().fork(): - attn_probs = self.dropout_module(attn_weights) - - assert v is not None - attn = torch.bmm(attn_probs, v) - assert list(attn.size()) == [ - bsz * self.num_heads_partition, - tgt_len, - self.head_dim, - ] - embed_dim_partition = embed_dim // self.model_parallel_size - attn = attn.transpose(0, 1).contiguous().view(tgt_len, bsz, embed_dim_partition) - attn = self.out_proj(attn) - # return attn_weights None to keep the return type same as single gpu multihead attention - # This will be deprecated. - attn_weights: Optional[Tensor] = None - - return attn, attn_weights - - @staticmethod - def _append_prev_key_padding_mask( - key_padding_mask: Optional[Tensor], - prev_key_padding_mask: Optional[Tensor], - batch_size: int, - src_len: int, - static_kv: bool, - ) -> Optional[Tensor]: - # saved key padding masks have shape (bsz, seq_len) - if prev_key_padding_mask is not None and static_kv: - new_key_padding_mask = prev_key_padding_mask - elif prev_key_padding_mask is not None and key_padding_mask is not None: - new_key_padding_mask = torch.cat( - [prev_key_padding_mask.float(), key_padding_mask.float()], dim=1 - ) - # During incremental decoding, as the padding token enters and - # leaves the frame, there will be a time when prev or current - # is None - elif prev_key_padding_mask is not None: - - filler = torch.zeros(batch_size, src_len - prev_key_padding_mask.size(1)) - if prev_key_padding_mask.is_cuda: - filler = filler.cuda() - new_key_padding_mask = torch.cat( - [prev_key_padding_mask.float(), filler.float()], dim=1 - ) - elif key_padding_mask is not None: - filler = torch.zeros(batch_size, src_len - key_padding_mask.size(1)) - if key_padding_mask.is_cuda: - filler = filler.cuda() - new_key_padding_mask = torch.cat( - [filler.float(), key_padding_mask.float()], dim=1 - ) - else: - new_key_padding_mask = prev_key_padding_mask - return new_key_padding_mask - - def reorder_incremental_state( - self, incremental_state: Dict[str, Dict[str, Optional[Tensor]]], new_order - ): - """Reorder buffered internal state (for incremental generation).""" - input_buffer = self._get_input_buffer(incremental_state) - if input_buffer is not None: - for k in input_buffer.keys(): - if input_buffer[k] is not None: - input_buffer[k] = input_buffer[k].index_select(0, new_order) - incremental_state = self._set_input_buffer(incremental_state, 
input_buffer) - return incremental_state - - def _get_input_buffer( - self, incremental_state: Optional[Dict[str, Dict[str, Optional[Tensor]]]] - ) -> Dict[str, Optional[Tensor]]: - result = self.get_incremental_state(incremental_state, "attn_state") - if result is not None: - return result - else: - empty_result: Dict[str, Optional[Tensor]] = {} - return empty_result - - def _set_input_buffer( - self, - incremental_state: Dict[str, Dict[str, Optional[Tensor]]], - buffer: Dict[str, Optional[Tensor]], - ): - return self.set_incremental_state(incremental_state, "attn_state", buffer) diff --git a/spaces/HeyAxolotl/Bio/README.md b/spaces/HeyAxolotl/Bio/README.md deleted file mode 100644 index 06be4247d38e28958738444eae4fc62355ec1304..0000000000000000000000000000000000000000 --- a/spaces/HeyAxolotl/Bio/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: Bio -emoji: ⚡ -colorFrom: red -colorTo: yellow -sdk: static -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/ICML2022/OFA/fairseq/examples/hubert/measure_teacher_quality.py b/spaces/ICML2022/OFA/fairseq/examples/hubert/measure_teacher_quality.py deleted file mode 100644 index 92279b2214bb2ba4a99aea92098907ef4f55821b..0000000000000000000000000000000000000000 --- a/spaces/ICML2022/OFA/fairseq/examples/hubert/measure_teacher_quality.py +++ /dev/null @@ -1,241 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import numpy as np -import os.path as op -import re -from tabulate import tabulate -from collections import Counter - - -def comp_purity(p_xy, axis): - max_p = p_xy.max(axis=axis) - marg_p = p_xy.sum(axis=axis) - indv_pur = max_p / marg_p - aggr_pur = max_p.sum() - return indv_pur, aggr_pur - - -def comp_entropy(p): - return (-p * np.log(p + 1e-8)).sum() - - -def comp_norm_mutual_info(p_xy): - p_x = p_xy.sum(axis=1, keepdims=True) - p_y = p_xy.sum(axis=0, keepdims=True) - pmi = np.log(p_xy / np.matmul(p_x, p_y) + 1e-8) - mi = (p_xy * pmi).sum() - h_x = comp_entropy(p_x) - h_y = comp_entropy(p_y) - return mi, mi / h_x, mi / h_y, h_x, h_y - - -def pad(labs, n): - if n == 0: - return np.array(labs) - return np.concatenate([[labs[0]] * n, labs, [labs[-1]] * n]) - - -def comp_avg_seg_dur(labs_list): - n_frms = 0 - n_segs = 0 - for labs in labs_list: - labs = np.array(labs) - edges = np.zeros(len(labs)).astype(bool) - edges[0] = True - edges[1:] = labs[1:] != labs[:-1] - n_frms += len(edges) - n_segs += edges.astype(int).sum() - return n_frms / n_segs - - -def comp_joint_prob(uid2refs, uid2hyps): - """ - Args: - pad: padding for spliced-feature derived labels - """ - cnts = Counter() - skipped = [] - abs_frmdiff = 0 - for uid in uid2refs: - if uid not in uid2hyps: - skipped.append(uid) - continue - refs = uid2refs[uid] - hyps = uid2hyps[uid] - abs_frmdiff += abs(len(refs) - len(hyps)) - min_len = min(len(refs), len(hyps)) - refs = refs[:min_len] - hyps = hyps[:min_len] - cnts.update(zip(refs, hyps)) - tot = sum(cnts.values()) - - ref_set = sorted({ref for ref, _ in cnts.keys()}) - hyp_set = sorted({hyp for _, hyp in cnts.keys()}) - ref2pid = dict(zip(ref_set, range(len(ref_set)))) - hyp2lid = dict(zip(hyp_set, range(len(hyp_set)))) - # print(hyp_set) - p_xy = np.zeros((len(ref2pid), len(hyp2lid)), dtype=float) - for (ref, hyp), cnt in cnts.items(): - p_xy[ref2pid[ref], hyp2lid[hyp]] = cnt - p_xy /= p_xy.sum() - return 
p_xy, ref2pid, hyp2lid, tot, abs_frmdiff, skipped - - -def read_phn(tsv_path, rm_stress=True): - uid2phns = {} - with open(tsv_path) as f: - for line in f: - uid, phns = line.rstrip().split("\t") - phns = phns.split(",") - if rm_stress: - phns = [re.sub("[0-9]", "", phn) for phn in phns] - uid2phns[uid] = phns - return uid2phns - - -def read_lab(tsv_path, lab_path, pad_len=0, upsample=1): - """ - tsv is needed to retrieve the uids for the labels - """ - with open(tsv_path) as f: - f.readline() - uids = [op.splitext(op.basename(line.rstrip().split()[0]))[0] for line in f] - with open(lab_path) as f: - labs_list = [pad(line.rstrip().split(), pad_len).repeat(upsample) for line in f] - assert len(uids) == len(labs_list) - return dict(zip(uids, labs_list)) - - -def main_lab_lab( - tsv_dir, - lab_dir, - lab_name, - lab_sets, - ref_dir, - ref_name, - pad_len=0, - upsample=1, - verbose=False, -): - # assume tsv_dir is the same for both the reference and the hypotheses - tsv_dir = lab_dir if tsv_dir is None else tsv_dir - - uid2refs = {} - for s in lab_sets: - uid2refs.update(read_lab(f"{tsv_dir}/{s}.tsv", f"{ref_dir}/{s}.{ref_name}")) - - uid2hyps = {} - for s in lab_sets: - uid2hyps.update( - read_lab( - f"{tsv_dir}/{s}.tsv", f"{lab_dir}/{s}.{lab_name}", pad_len, upsample - ) - ) - _main(uid2refs, uid2hyps, verbose) - - -def main_phn_lab( - tsv_dir, - lab_dir, - lab_name, - lab_sets, - phn_dir, - phn_sets, - pad_len=0, - upsample=1, - verbose=False, -): - uid2refs = {} - for s in phn_sets: - uid2refs.update(read_phn(f"{phn_dir}/{s}.tsv")) - - uid2hyps = {} - tsv_dir = lab_dir if tsv_dir is None else tsv_dir - for s in lab_sets: - uid2hyps.update( - read_lab( - f"{tsv_dir}/{s}.tsv", f"{lab_dir}/{s}.{lab_name}", pad_len, upsample - ) - ) - _main(uid2refs, uid2hyps, verbose) - - -def _main(uid2refs, uid2hyps, verbose): - (p_xy, ref2pid, hyp2lid, tot, frmdiff, skipped) = comp_joint_prob( - uid2refs, uid2hyps - ) - ref_pur_by_hyp, ref_pur = comp_purity(p_xy, axis=0) - hyp_pur_by_ref, hyp_pur = comp_purity(p_xy, axis=1) - (mi, mi_norm_by_ref, mi_norm_by_hyp, h_ref, h_hyp) = comp_norm_mutual_info(p_xy) - outputs = { - "ref pur": ref_pur, - "hyp pur": hyp_pur, - "H(ref)": h_ref, - "H(hyp)": h_hyp, - "MI": mi, - "MI/H(ref)": mi_norm_by_ref, - "ref segL": comp_avg_seg_dur(uid2refs.values()), - "hyp segL": comp_avg_seg_dur(uid2hyps.values()), - "p_xy shape": p_xy.shape, - "frm tot": tot, - "frm diff": frmdiff, - "utt tot": len(uid2refs), - "utt miss": len(skipped), - } - print(tabulate([outputs.values()], outputs.keys(), floatfmt=".4f")) - - -if __name__ == "__main__": - """ - compute quality of labels with respect to phone or another labels if set - """ - import argparse - - parser = argparse.ArgumentParser() - parser.add_argument("tsv_dir") - parser.add_argument("lab_dir") - parser.add_argument("lab_name") - parser.add_argument("--lab_sets", default=["valid"], type=str, nargs="+") - parser.add_argument( - "--phn_dir", - default="/checkpoint/wnhsu/data/librispeech/960h/fa/raw_phn/phone_frame_align_v1", - ) - parser.add_argument( - "--phn_sets", default=["dev-clean", "dev-other"], type=str, nargs="+" - ) - parser.add_argument("--pad_len", default=0, type=int, help="padding for hypotheses") - parser.add_argument( - "--upsample", default=1, type=int, help="upsample factor for hypotheses" - ) - parser.add_argument("--ref_lab_dir", default="") - parser.add_argument("--ref_lab_name", default="") - parser.add_argument("--verbose", action="store_true") - args = parser.parse_args() - - if args.ref_lab_dir and 
args.ref_lab_name: - main_lab_lab( - args.tsv_dir, - args.lab_dir, - args.lab_name, - args.lab_sets, - args.ref_lab_dir, - args.ref_lab_name, - args.pad_len, - args.upsample, - args.verbose, - ) - else: - main_phn_lab( - args.tsv_dir, - args.lab_dir, - args.lab_name, - args.lab_sets, - args.phn_dir, - args.phn_sets, - args.pad_len, - args.upsample, - args.verbose, - ) diff --git a/spaces/ICML2022/OFA/fairseq/examples/pointer_generator/pointer_generator_src/transformer_pg.py b/spaces/ICML2022/OFA/fairseq/examples/pointer_generator/pointer_generator_src/transformer_pg.py deleted file mode 100644 index 4ccf30f4eb154f8fab1e285934fb973a2d1166cb..0000000000000000000000000000000000000000 --- a/spaces/ICML2022/OFA/fairseq/examples/pointer_generator/pointer_generator_src/transformer_pg.py +++ /dev/null @@ -1,518 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import logging -from typing import Any, Dict, Optional, List, Tuple - -import torch -import torch.nn as nn -from fairseq import utils -from fairseq.models import register_model, register_model_architecture -from fairseq.models.transformer import ( - DEFAULT_MAX_SOURCE_POSITIONS, - DEFAULT_MAX_TARGET_POSITIONS, - TransformerDecoder, - TransformerEncoder, - TransformerModel, - base_architecture, -) -from torch import Tensor - - -logger = logging.getLogger(__name__) - - -@register_model("transformer_pointer_generator") -class TransformerPointerGeneratorModel(TransformerModel): - """ - Transformer model from `"Attention Is All You Need" (Vaswani et al, 2017) - `_, augmented with a pointer-generator - network from `"Get To The Point: Summarization with Pointer-Generator - Networks" (See et al, 2017) `_. - - Args: - encoder (TransformerPointerGeneratorEncoder): the encoder - decoder (TransformerPointerGeneratorDecoder): the decoder - - The Transformer pointer-generator model provides the following named - architectures and command-line arguments: - - .. 
argparse:: - :ref: fairseq.models.transformer_pointer_generator_parser - :prog: - """ - - @staticmethod - def add_args(parser): - """Add model-specific arguments to the parser.""" - # fmt: off - TransformerModel.add_args(parser) - parser.add_argument('--alignment-heads', type=int, metavar='N', - help='number of attention heads to be used for ' - 'pointing') - parser.add_argument('--alignment-layer', type=int, metavar='I', - help='layer number to be used for pointing (0 ' - 'corresponding to the bottommost layer)') - parser.add_argument('--source-position-markers', type=int, metavar='N', - help='dictionary includes N additional items that ' - 'represent an OOV token at a particular input ' - 'position') - parser.add_argument('--force-generation', type=float, metavar='P', - default=None, - help='set the vocabulary distribution weight to P, ' - 'instead of predicting it from the input (1.0 ' - 'corresponding to generation, 0.0 to pointing)') - # fmt: on - - @classmethod - def build_model(cls, args, task): - """Build a new model instance.""" - - # make sure all arguments are present in older models - base_architecture(args) - - if args.encoder_layers_to_keep: - args.encoder_layers = len(args.encoder_layers_to_keep.split(",")) - if args.decoder_layers_to_keep: - args.decoder_layers = len(args.decoder_layers_to_keep.split(",")) - - if getattr(args, "max_source_positions", None) is None: - args.max_source_positions = DEFAULT_MAX_SOURCE_POSITIONS - if getattr(args, "max_target_positions", None) is None: - args.max_target_positions = DEFAULT_MAX_TARGET_POSITIONS - if getattr(args, "source_position_markers", None) is None: - args.source_position_markers = args.max_source_positions - - src_dict, tgt_dict = task.source_dictionary, task.target_dictionary - if src_dict != tgt_dict: - raise ValueError("Pointer-generator requires a joined dictionary") - - def build_embedding(dictionary, embed_dim, path=None): - # The dictionary may include additional items that can be used in - # place of the normal OOV token and that all map to the same - # embedding. Using a different token for each input position allows - # one to restore the word identities from the original source text. 
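            # Illustrative note with hypothetical sizes (not part of the original file): if
            # len(dictionary) == 50000 and --source-position-markers is 1000, then indices
            # 49000..49999 are the extra per-position OOV entries; num_embeddings below becomes
            # 49000, and the custom Embedding class defined later in this file maps any index
            # >= 49000 back to unk_idx, so all of these entries share the unknown-token embedding.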
- num_embeddings = len(dictionary) - args.source_position_markers - padding_idx = dictionary.pad() - unk_idx = dictionary.unk() - logger.info( - "dictionary indices from {0} to {1} will be mapped to {2}".format( - num_embeddings, len(dictionary) - 1, unk_idx - ) - ) - emb = Embedding(num_embeddings, embed_dim, padding_idx, unk_idx) - # if provided, load from preloaded dictionaries - if path: - embed_dict = utils.parse_embedding(path) - utils.load_embedding(embed_dict, dictionary, emb) - return emb - - if args.share_all_embeddings: - if args.encoder_embed_dim != args.decoder_embed_dim: - raise ValueError( - "--share-all-embeddings requires --encoder-embed-dim to match --decoder-embed-dim" - ) - if args.decoder_embed_path and ( - args.decoder_embed_path != args.encoder_embed_path - ): - raise ValueError( - "--share-all-embeddings not compatible with --decoder-embed-path" - ) - encoder_embed_tokens = build_embedding( - src_dict, args.encoder_embed_dim, args.encoder_embed_path - ) - decoder_embed_tokens = encoder_embed_tokens - args.share_decoder_input_output_embed = True - else: - encoder_embed_tokens = build_embedding( - src_dict, args.encoder_embed_dim, args.encoder_embed_path - ) - decoder_embed_tokens = build_embedding( - tgt_dict, args.decoder_embed_dim, args.decoder_embed_path - ) - - encoder = cls.build_encoder(args, src_dict, encoder_embed_tokens) - decoder = cls.build_decoder(args, tgt_dict, decoder_embed_tokens) - return cls(args, encoder, decoder) - - @classmethod - def build_encoder(cls, args, src_dict, embed_tokens): - return TransformerPointerGeneratorEncoder(args, src_dict, embed_tokens) - - @classmethod - def build_decoder(cls, args, tgt_dict, embed_tokens): - return TransformerPointerGeneratorDecoder(args, tgt_dict, embed_tokens) - - -class TransformerPointerGeneratorEncoder(TransformerEncoder): - """ - Transformer encoder consisting of *args.encoder_layers* layers. Each layer - is a :class:`TransformerEncoderLayer`. The pointer-generator variant adds - the source tokens to the encoder output as these are otherwise not passed - to the decoder. - """ - - def forward( - self, - src_tokens, - src_lengths: Optional[Tensor] = None, - return_all_hiddens: bool = False, - token_embeddings: Optional[Tensor] = None - ): - """ - Runs the `forward()` method of the parent Transformer class. Then adds - the source tokens into the encoder output tuple. - - While it might be more elegant that the model would pass the source - tokens to the `forward()` method of the decoder too, this would require - changes to `SequenceGenerator`. - - Args: - src_tokens (torch.LongTensor): tokens in the source language of - shape `(batch, src_len)` - src_lengths (torch.LongTensor): lengths of each source sentence of - shape `(batch)` - return_all_hiddens (bool, optional): also return all of the - intermediate hidden states (default: False). - token_embeddings (torch.Tensor, optional): precomputed embeddings - default `None` will recompute embeddings - - Returns: - namedtuple: - - **encoder_out** (Tensor): the last encoder layer's output of - shape `(src_len, batch, embed_dim)` - - **encoder_padding_mask** (ByteTensor): the positions of - padding elements of shape `(batch, src_len)` - - **encoder_embedding** (Tensor): the (scaled) embedding lookup - of shape `(batch, src_len, embed_dim)` - - **encoder_states** (List[Tensor]): all intermediate - hidden states of shape `(src_len, batch, embed_dim)`. - Only populated if *return_all_hiddens* is True. 
- - **src_tokens** (Tensor): input token ids of shape - `(batch, src_len)` - """ - encoder_out = self.forward_scriptable(src_tokens, - src_lengths, - return_all_hiddens, - token_embeddings) - - # The Pytorch Mobile lite interpreter does not supports returning NamedTuple in - # `forward` so we use a dictionary instead. - # TorchScript does not support mixed values so the values are all lists. - # The empty list is equivalent to None. - return { - "encoder_out": encoder_out["encoder_out"], # T x B x C - "encoder_padding_mask": encoder_out["encoder_padding_mask"], # B x T - "encoder_embedding": encoder_out["encoder_embedding"], # B x T x C - "encoder_states": encoder_out["encoder_states"], # List[T x B x C] - "src_tokens": [src_tokens], # B x T - "src_lengths": [], - } - - -class TransformerPointerGeneratorDecoder(TransformerDecoder): - """ - Transformer decoder consisting of *args.decoder_layers* layers. Each layer - is a :class:`TransformerDecoderLayer`. The pointer-generator variant mixes - the output probabilities with an attention distribution in the output layer. - - Args: - args (argparse.Namespace): parsed command-line arguments - dictionary (~fairseq.data.Dictionary): decoding dictionary - embed_tokens (torch.nn.Embedding): output embedding - """ - - def __init__(self, args, dictionary, embed_tokens): - super().__init__(args, dictionary, embed_tokens, no_encoder_attn=False) - - # In the pointer-generator model these arguments define the decoder - # layer and the number of attention heads that will be averaged to - # create the alignment for pointing. - self.alignment_heads = args.alignment_heads - self.alignment_layer = args.alignment_layer - - input_embed_dim = embed_tokens.embedding_dim - - # Generation probabilities / interpolation coefficients are predicted - # from the current decoder input embedding and the decoder output, which - # is the size of output_embed_dim. - p_gen_input_size = input_embed_dim + self.output_embed_dim - self.project_p_gens = nn.Linear(p_gen_input_size, 1) - nn.init.zeros_(self.project_p_gens.bias) - - # The dictionary may include a separate entry for an OOV token in each - # input position, so that their identity can be restored from the - # original source text. 
- self.num_types = len(dictionary) - self.num_oov_types = args.source_position_markers - self.num_embeddings = self.num_types - self.num_oov_types - self.force_p_gen = args.force_generation - - def forward( - self, - prev_output_tokens, - encoder_out: Optional[Dict[str, List[Tensor]]] = None, - incremental_state: Optional[Dict[str, Dict[str, Optional[Tensor]]]] = None, - features_only: bool = False, - alignment_layer: Optional[int] = 0, - alignment_heads: Optional[int] = 1, - src_lengths: Optional[Any] = None, - return_all_hiddens: bool = False, - ): - """ - Args: - prev_output_tokens (LongTensor): previous decoder outputs of shape - `(batch, tgt_len)`, for teacher forcing - encoder_out (optional): output from the encoder, used for - encoder-side attention - incremental_state (dict, optional): dictionary used for storing - state during :ref:`Incremental decoding` - features_only (bool, optional): only return features without - applying output layer (default: False) - alignment_layer (int, optional): 0-based index of the layer to be - used for pointing (default: 0) - alignment_heads (int, optional): number of attention heads to be - used for pointing (default: 1) - - Returns: - tuple: - - the decoder's output of shape `(batch, tgt_len, vocab)` - - a dictionary with any model-specific outputs - """ - # The normal Transformer model doesn't pass the alignment_layer and - # alignment_heads parameters correctly. We use our local variables. - x, extra = self.extract_features( - prev_output_tokens, - encoder_out=encoder_out, - incremental_state=incremental_state, - alignment_layer=self.alignment_layer, - alignment_heads=self.alignment_heads, - ) - if not features_only: - # Embedding the tokens again for generation probability prediction, - # so that we don't have to reimplement the whole extract_features() - # method. - if incremental_state is not None: - prev_output_tokens = prev_output_tokens[:, -1:] - prev_output_embed = self.embed_tokens(prev_output_tokens) - prev_output_embed *= self.embed_scale - predictors = torch.cat((prev_output_embed, x), 2) - p_gens = self.project_p_gens(predictors) - p_gens = torch.sigmoid(p_gens.float()) - # Torchscript complains if encoder_out or attn are None because - # `output_layer()` signature expects tensors instead - attn: Optional[Tensor] = extra["attn"][0] - assert encoder_out is not None - assert attn is not None - x = self.output_layer(x, attn, encoder_out["src_tokens"][0], p_gens) - return x, extra - - def output_layer( - self, - features: Tensor, - attn: Tensor, - src_tokens: Tensor, - p_gens: Tensor - ) -> Tensor: - """ - Project features to the vocabulary size and mix with the attention - distributions. - """ - if self.force_p_gen is not None: - p_gens = self.force_p_gen - - # project back to size of vocabulary - if self.adaptive_softmax is None: - logits = self.output_projection(features) - else: - logits = features - - batch_size = logits.shape[0] - output_length = logits.shape[1] - assert logits.shape[2] == self.num_embeddings - assert src_tokens.shape[0] == batch_size - src_length = src_tokens.shape[1] - - # The final output distribution will be a mixture of the normal output - # distribution (softmax of logits) and attention weights. 
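        # As a sketch of the mixture computed below (notation only, not original code):
        #     P(w) = p_gen * softmax(logits)[w] + (1 - p_gen) * sum_{i : src_tokens[i] == w} attn[i]
        # gen_dists holds the first term; attn_dists, built with scatter_add_, holds the second.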
- gen_dists = self.get_normalized_probs_scriptable( - (logits, None), log_probs=False, sample=None - ) - gen_dists = torch.mul(gen_dists, p_gens) - padding_size = (batch_size, output_length, self.num_oov_types) - padding = gen_dists.new_zeros(padding_size) - gen_dists = torch.cat((gen_dists, padding), 2) - assert gen_dists.shape[2] == self.num_types - - # Scatter attention distributions to distributions over the extended - # vocabulary in a tensor of shape [batch_size, output_length, - # vocab_size]. Each attention weight will be written into a location - # that is for other dimensions the same as in the index tensor, but for - # the third dimension it's the value of the index tensor (the token ID). - attn = torch.mul(attn.float(), 1 - p_gens) - index = src_tokens[:, None, :] - index = index.expand(batch_size, output_length, src_length) - attn_dists_size = (batch_size, output_length, self.num_types) - attn_dists = attn.new_zeros(attn_dists_size) - attn_dists.scatter_add_(2, index, attn.float()) - - # Final distributions, [batch_size, output_length, num_types]. - return gen_dists + attn_dists - - def get_normalized_probs( - self, - net_output: Tuple[Tensor, Optional[Dict[str, List[Optional[Tensor]]]]], - log_probs: bool, - sample: Optional[Dict[str, Tensor]] = None, - ): - """ - Get normalized probabilities (or log probs) from a net's output. - Pointer-generator network output is already normalized. - """ - probs = net_output[0] - # Make sure the probabilities are greater than zero when returning log - # probabilities. - return probs.clamp(1e-10, 1.0).log() if log_probs else probs - - -class Embedding(nn.Embedding): - r"""A simple lookup table that stores embeddings of a fixed dictionary and size. - This module is often used to store word embeddings and retrieve them using indices. - The input to the module is a list of indices, and the output is the corresponding - word embeddings. This subclass differs from the standard PyTorch Embedding class by - allowing additional vocabulary entries that will be mapped to the unknown token - embedding. - Args: - num_embeddings (int): size of the dictionary of embeddings - embedding_dim (int): the size of each embedding vector - padding_idx (int): Pads the output with the embedding vector at :attr:`padding_idx` - (initialized to zeros) whenever it encounters the index. - unk_idx (int): Maps all token indices that are greater than or equal to - num_embeddings to this index. - Attributes: - weight (Tensor): the learnable weights of the module of shape (num_embeddings, embedding_dim) - initialized from :math:`\mathcal{N}(0, 1)` - Shape: - - Input: :math:`(*)`, LongTensor of arbitrary shape containing the indices to extract - - Output: :math:`(*, H)`, where `*` is the input shape and :math:`H=\text{embedding\_dim}` - .. note:: - Keep in mind that only a limited number of optimizers support - sparse gradients: currently it's :class:`optim.SGD` (`CUDA` and `CPU`), - :class:`optim.SparseAdam` (`CUDA` and `CPU`) and :class:`optim.Adagrad` (`CPU`) - .. note:: - With :attr:`padding_idx` set, the embedding vector at - :attr:`padding_idx` is initialized to all zeros. However, note that this - vector can be modified afterwards, e.g., using a customized - initialization method, and thus changing the vector used to pad the - output. The gradient for this vector from :class:`~torch.nn.Embedding` - is always zero. 
- """ - __constants__ = ["unk_idx"] - - # Torchscript: Inheriting from Embedding class produces an error when exporting to Torchscript - # -> RuntimeError: Unable to cast Python instance to C++ type (compile in debug mode for details - # It's happening because max_norm attribute from nn.Embedding is None by default and it cannot be - # cast to a C++ type - def __init__( - self, - num_embeddings: int, - embedding_dim: int, - padding_idx: Optional[int], - unk_idx: int, - max_norm: Optional[float] = float("inf"), - ): - super().__init__(num_embeddings, embedding_dim, padding_idx=padding_idx, max_norm=max_norm) - self.unk_idx = unk_idx - nn.init.normal_(self.weight, mean=0, std=embedding_dim ** -0.5) - nn.init.constant_(self.weight[padding_idx], 0) - - def forward(self, input): - input = torch.where( - input >= self.num_embeddings, torch.ones_like(input) * self.unk_idx, input - ) - return nn.functional.embedding( - input, self.weight, self.padding_idx, self.max_norm, - self.norm_type, self.scale_grad_by_freq, self.sparse - ) - - -@register_model_architecture( - "transformer_pointer_generator", "transformer_pointer_generator" -) -def transformer_pointer_generator(args): - args.alignment_heads = getattr(args, "alignment_heads", 1) - args.alignment_layer = getattr(args, "alignment_layer", -1) - base_architecture(args) - if args.alignment_layer < 0: - args.alignment_layer = args.decoder_layers + args.alignment_layer - - -@register_model_architecture( - "transformer_pointer_generator", "transformer_pointer_generator_iwslt_de_en" -) -def transformer_pointer_generator_iwslt_de_en(args): - args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 512) - args.encoder_ffn_embed_dim = getattr(args, "encoder_ffn_embed_dim", 1024) - args.encoder_attention_heads = getattr(args, "encoder_attention_heads", 4) - args.encoder_layers = getattr(args, "encoder_layers", 6) - args.decoder_embed_dim = getattr(args, "decoder_embed_dim", 512) - args.decoder_ffn_embed_dim = getattr(args, "decoder_ffn_embed_dim", 1024) - args.decoder_attention_heads = getattr(args, "decoder_attention_heads", 4) - args.decoder_layers = getattr(args, "decoder_layers", 6) - transformer_pointer_generator(args) - - -@register_model_architecture( - "transformer_pointer_generator", "transformer_pointer_generator_wmt_en_de" -) -def transformer_pointer_generator_wmt_en_de(args): - transformer_pointer_generator(args) - - -# Transformer pointer-generator with the base Transformer parameters as used in -# the "Attention Is All You Need" paper (Vaswani et al., 2017) -@register_model_architecture( - "transformer_pointer_generator", - "transformer_pointer_generator_vaswani_wmt_en_de_big", -) -def transformer_pointer_generator_vaswani_wmt_en_de_big(args): - args.encoder_embed_dim = getattr(args, "encoder_embed_dim", 1024) - args.encoder_ffn_embed_dim = getattr(args, "encoder_ffn_embed_dim", 4096) - args.encoder_attention_heads = getattr(args, "encoder_attention_heads", 16) - args.encoder_normalize_before = getattr(args, "encoder_normalize_before", False) - args.decoder_embed_dim = getattr(args, "decoder_embed_dim", 1024) - args.decoder_ffn_embed_dim = getattr(args, "decoder_ffn_embed_dim", 4096) - args.decoder_attention_heads = getattr(args, "decoder_attention_heads", 16) - args.dropout = getattr(args, "dropout", 0.3) - transformer_pointer_generator(args) - - -@register_model_architecture( - "transformer_pointer_generator", - "transformer_pointer_generator_vaswani_wmt_en_fr_big", -) -def transformer_pointer_generator_vaswani_wmt_en_fr_big(args): - 
args.dropout = getattr(args, "dropout", 0.1) - transformer_pointer_generator_vaswani_wmt_en_de_big(args) - - -@register_model_architecture( - "transformer_pointer_generator", "transformer_pointer_generator_wmt_en_de_big" -) -def transformer_pointer_generator_wmt_en_de_big(args): - args.attention_dropout = getattr(args, "attention_dropout", 0.1) - transformer_pointer_generator_vaswani_wmt_en_de_big(args) - - -# default parameters used in tensor2tensor implementation -@register_model_architecture( - "transformer_pointer_generator", "transformer_pointer_generator_wmt_en_de_big_t2t" -) -def transformer_pointer_generator_wmt_en_de_big_t2t(args): - args.encoder_normalize_before = getattr(args, "encoder_normalize_before", True) - args.decoder_normalize_before = getattr(args, "decoder_normalize_before", True) - args.attention_dropout = getattr(args, "attention_dropout", 0.1) - args.activation_dropout = getattr(args, "activation_dropout", 0.1) - transformer_pointer_generator_vaswani_wmt_en_de_big(args) diff --git a/spaces/ICML2022/OFA/fairseq/examples/speech_recognition/models/w2l_conv_glu_enc.py b/spaces/ICML2022/OFA/fairseq/examples/speech_recognition/models/w2l_conv_glu_enc.py deleted file mode 100644 index 655a9b0d19d11e35511392a016f9d6b7d7aa2925..0000000000000000000000000000000000000000 --- a/spaces/ICML2022/OFA/fairseq/examples/speech_recognition/models/w2l_conv_glu_enc.py +++ /dev/null @@ -1,177 +0,0 @@ -#!/usr/bin/env python3 - -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import math - -import torch -import torch.nn as nn -import torch.nn.functional as F -from fairseq.models import ( - FairseqEncoder, - FairseqEncoderModel, - register_model, - register_model_architecture, -) -from fairseq.modules.fairseq_dropout import FairseqDropout - - -default_conv_enc_config = """[ - (400, 13, 170, 0.2), - (440, 14, 0, 0.214), - (484, 15, 0, 0.22898), - (532, 16, 0, 0.2450086), - (584, 17, 0, 0.262159202), - (642, 18, 0, 0.28051034614), - (706, 19, 0, 0.30014607037), - (776, 20, 0, 0.321156295296), - (852, 21, 0, 0.343637235966), - (936, 22, 0, 0.367691842484), - (1028, 23, 0, 0.393430271458), - (1130, 24, 0, 0.42097039046), - (1242, 25, 0, 0.450438317792), - (1366, 26, 0, 0.481969000038), - (1502, 27, 0, 0.51570683004), - (1652, 28, 0, 0.551806308143), - (1816, 29, 0, 0.590432749713), -]""" - - -@register_model("asr_w2l_conv_glu_encoder") -class W2lConvGluEncoderModel(FairseqEncoderModel): - def __init__(self, encoder): - super().__init__(encoder) - - @staticmethod - def add_args(parser): - """Add model-specific arguments to the parser.""" - parser.add_argument( - "--input-feat-per-channel", - type=int, - metavar="N", - help="encoder input dimension per input channel", - ) - parser.add_argument( - "--in-channels", - type=int, - metavar="N", - help="number of encoder input channels", - ) - parser.add_argument( - "--conv-enc-config", - type=str, - metavar="EXPR", - help=""" - an array of tuples each containing the configuration of one conv layer - [(out_channels, kernel_size, padding, dropout), ...] 
- """, - ) - - @classmethod - def build_model(cls, args, task): - """Build a new model instance.""" - conv_enc_config = getattr(args, "conv_enc_config", default_conv_enc_config) - encoder = W2lConvGluEncoder( - vocab_size=len(task.target_dictionary), - input_feat_per_channel=args.input_feat_per_channel, - in_channels=args.in_channels, - conv_enc_config=eval(conv_enc_config), - ) - return cls(encoder) - - def get_normalized_probs(self, net_output, log_probs, sample=None): - lprobs = super().get_normalized_probs(net_output, log_probs, sample) - lprobs.batch_first = False - return lprobs - - -class W2lConvGluEncoder(FairseqEncoder): - def __init__( - self, vocab_size, input_feat_per_channel, in_channels, conv_enc_config - ): - super().__init__(None) - - self.input_dim = input_feat_per_channel - if in_channels != 1: - raise ValueError("only 1 input channel is currently supported") - - self.conv_layers = nn.ModuleList() - self.linear_layers = nn.ModuleList() - self.dropouts = [] - cur_channels = input_feat_per_channel - - for out_channels, kernel_size, padding, dropout in conv_enc_config: - layer = nn.Conv1d(cur_channels, out_channels, kernel_size, padding=padding) - layer.weight.data.mul_(math.sqrt(3)) # match wav2letter init - self.conv_layers.append(nn.utils.weight_norm(layer)) - self.dropouts.append( - FairseqDropout(dropout, module_name=self.__class__.__name__) - ) - if out_channels % 2 != 0: - raise ValueError("odd # of out_channels is incompatible with GLU") - cur_channels = out_channels // 2 # halved by GLU - - for out_channels in [2 * cur_channels, vocab_size]: - layer = nn.Linear(cur_channels, out_channels) - layer.weight.data.mul_(math.sqrt(3)) - self.linear_layers.append(nn.utils.weight_norm(layer)) - cur_channels = out_channels // 2 - - def forward(self, src_tokens, src_lengths, **kwargs): - - """ - src_tokens: padded tensor (B, T, C * feat) - src_lengths: tensor of original lengths of input utterances (B,) - """ - B, T, _ = src_tokens.size() - x = src_tokens.transpose(1, 2).contiguous() # (B, feat, T) assuming C == 1 - - for layer_idx in range(len(self.conv_layers)): - x = self.conv_layers[layer_idx](x) - x = F.glu(x, dim=1) - x = self.dropouts[layer_idx](x) - - x = x.transpose(1, 2).contiguous() # (B, T, 908) - x = self.linear_layers[0](x) - x = F.glu(x, dim=2) - x = self.dropouts[-1](x) - x = self.linear_layers[1](x) - - assert x.size(0) == B - assert x.size(1) == T - - encoder_out = x.transpose(0, 1) # (T, B, vocab_size) - - # need to debug this -- find a simpler/elegant way in pytorch APIs - encoder_padding_mask = ( - torch.arange(T).view(1, T).expand(B, -1).to(x.device) - >= src_lengths.view(B, 1).expand(-1, T) - ).t() # (B x T) -> (T x B) - - return { - "encoder_out": encoder_out, # (T, B, vocab_size) - "encoder_padding_mask": encoder_padding_mask, # (T, B) - } - - def reorder_encoder_out(self, encoder_out, new_order): - encoder_out["encoder_out"] = encoder_out["encoder_out"].index_select( - 1, new_order - ) - encoder_out["encoder_padding_mask"] = encoder_out[ - "encoder_padding_mask" - ].index_select(1, new_order) - return encoder_out - - def max_positions(self): - """Maximum input length supported by the encoder.""" - return (1e6, 1e6) # an arbitrary large number - - -@register_model_architecture("asr_w2l_conv_glu_encoder", "w2l_conv_glu_enc") -def w2l_conv_glu_enc(args): - args.input_feat_per_channel = getattr(args, "input_feat_per_channel", 80) - args.in_channels = getattr(args, "in_channels", 1) - args.conv_enc_config = getattr(args, "conv_enc_config", 
default_conv_enc_config) diff --git a/spaces/ICML2022/OFA/fairseq/examples/speech_to_text/seg_mustc_data.py b/spaces/ICML2022/OFA/fairseq/examples/speech_to_text/seg_mustc_data.py deleted file mode 100644 index 1ee665d6399729afe17d790d872eff34de124900..0000000000000000000000000000000000000000 --- a/spaces/ICML2022/OFA/fairseq/examples/speech_to_text/seg_mustc_data.py +++ /dev/null @@ -1,54 +0,0 @@ -#!/usr/bin/env python3 -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import argparse -import logging -from pathlib import Path -import soundfile as sf -from examples.speech_to_text.prep_mustc_data import ( - MUSTC ) - -from tqdm import tqdm - -log = logging.getLogger(__name__) - - -def main(args): - root = Path(args.data_root).absolute() - lang = args.lang - split = args.split - - cur_root = root / f"en-{lang}" - assert cur_root.is_dir(), ( - f"{cur_root.as_posix()} does not exist. Skipped." - ) - - dataset = MUSTC(root.as_posix(), lang, split) - output = Path(args.output).absolute() - output.mkdir(exist_ok=True) - f_text = open(output / f"{split}.{lang}", "w") - f_wav_list = open(output / f"{split}.wav_list", "w") - for waveform, sample_rate, _, text, _, utt_id in tqdm(dataset): - sf.write( - output / f"{utt_id}.wav", - waveform.squeeze(0).numpy(), - samplerate=int(sample_rate) - ) - f_text.write(text + "\n") - f_wav_list.write(str(output / f"{utt_id}.wav") + "\n") - - -if __name__ == "__main__": - parser = argparse.ArgumentParser() - parser.add_argument("--data-root", "-d", required=True, type=str) - parser.add_argument("--task", required=True, type=str, choices=["asr", "st"]) - parser.add_argument("--lang", required=True, type=str) - parser.add_argument("--output", required=True, type=str) - parser.add_argument("--split", required=True, choices=MUSTC.SPLITS) - args = parser.parse_args() - - main(args) diff --git a/spaces/IISRFactCheck/claim_detection/README.md b/spaces/IISRFactCheck/claim_detection/README.md deleted file mode 100644 index afad159b0355c7d12d60028ed406903e787bd0fa..0000000000000000000000000000000000000000 --- a/spaces/IISRFactCheck/claim_detection/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Factcheck -emoji: 👀 -colorFrom: pink -colorTo: blue -sdk: gradio -sdk_version: 3.20.1 -app_file: code/app.py -pinned: false -license: unknown ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Ikaros521/VITS-fast-fine-tuning_nymph/README_ZH.md b/spaces/Ikaros521/VITS-fast-fine-tuning_nymph/README_ZH.md deleted file mode 100644 index 269546ccb91643bef62d872b39a90bf19a8393aa..0000000000000000000000000000000000000000 --- a/spaces/Ikaros521/VITS-fast-fine-tuning_nymph/README_ZH.md +++ /dev/null @@ -1,60 +0,0 @@ -English Documentation Please Click [here](https://github.com/Plachtaa/VITS-fast-fine-tuning/blob/main/README.md) -# VITS Fast Fine-tuning -This repository walks you through adding custom characters (even your own voice) to a pretrained VITS model; less than an hour of fine-tuning gives the model the following abilities: -1. Voice conversion between any two speakers included in the model -2. Text-to-speech synthesis in Chinese, Japanese and English with the voice of the character you added. - -The base models used in this project cover common anime-style male/female voices (from the Genshin Impact dataset) as well as common real-world male/female voices (from the VCTK dataset). They support Chinese, Japanese and English, so fine-tuning adapts quickly to a new voice. - -Feel free to try out the base models used for fine-tuning!
- -Chinese/Japanese/English: [![Hugging Face Spaces](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https://huggingface.co/spaces/Plachta/VITS-Umamusume-voice-synthesizer) Author: me - -Chinese/Japanese: [![Hugging Face Spaces](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https://huggingface.co/spaces/sayashi/vits-uma-genshin-honkai) Author: [SayaSS](https://github.com/SayaSS) - -### Currently supported tasks: -- [x] Clone a character's voice from 10 or more short audio clips -- [x] Clone a character's voice from 3+ minutes of long audio (each audio file must contain a single speaker) -- [x] Clone a character's voice from 3+ minutes of video (each video must contain a single speaker) -- [x] Clone a character's voice from a bilibili video link (the video must contain a single speaker) - -### Characters currently supported for voice conversion and Chinese/Japanese/English TTS -- [x] Any character (as long as you have voice samples for it) -(Note: voice conversion can only be performed between two speakers that already exist in the model) - - - -## Fine-tuning -It is recommended to use [Google Colab](https://colab.research.google.com/drive/1pn1xnFfdLK63gVXDwV4zCXfVeo8c-I-0?usp=sharing) -for the fine-tuning task, because some of VITS's environment dependencies in the multilingual setting are quite hard to configure. -### How long will it take in Google Colab? -1. Install dependencies (3 min) -2. Choose a pretrained model; see the [Colab notebook page](https://colab.research.google.com/drive/1pn1xnFfdLK63gVXDwV4zCXfVeo8c-I-0?usp=sharing) for the differences between them. -3. Upload the voices of the other characters you want to add; see [DATA.MD](https://github.com/Plachtaa/VITS-fast-fine-tuning/blob/main/DATA.MD) for details on how to upload them. -4. Run the fine-tuning; depending on the method chosen and the amount of data, this may take anywhere from 20 minutes to 2 hours. - -After fine-tuning you can download the fine-tuned model directly and run it locally later on (no GPU required). - -## Running inference locally -0. Remember to download the fine-tuned model and the config file! -1. Download the latest release package (on the right-hand side of the Github page). -2. Put the downloaded model and config file into the `inference` folder, named `G_latest.pth` and `finetune_speaker.json` respectively. -3. Once everything is ready, the file structure should look like this: -``` -inference -├───inference.exe -├───... -├───finetune_speaker.json -└───G_latest.pth -``` -4. Run `inference.exe`; a browser window will pop up automatically. Note that its path must not contain Chinese characters or spaces. - -## Using with MoeGoe -0. MoeGoe and other similar VITS inference UIs use a slightly different config format; the files to download are the model `G_latest.pth` and the config file `moegoe_config.json`. -1. Configure the paths as described on the [MoeGoe](https://github.com/CjangCjengh/MoeGoe) page and it is ready to use. -2. When entering sentences in MoeGoe, wrap them in the corresponding language tags so that synthesis works correctly ([JA] for Japanese, [ZH] for Chinese, [EN] for English), for example: -[JA]こんにちわ。[JA] -[ZH]你好![ZH] -[EN]Hello![EN] - diff --git a/spaces/Illumotion/Koboldcpp/scripts/qnt-all.sh b/spaces/Illumotion/Koboldcpp/scripts/qnt-all.sh deleted file mode 100644 index b4c2a159e2bf510c96f8e92dd6a4fac7ea07bdaa..0000000000000000000000000000000000000000 --- a/spaces/Illumotion/Koboldcpp/scripts/qnt-all.sh +++ /dev/null @@ -1,30 +0,0 @@ -#!/bin/bash - -qnt=(q8_0 q6_k q5_k q5_1 q5_0 q4_k q4_1 q4_0 q3_k q2_k) -args="" - -if [ -z "$1" ]; then - echo "usage: $0 [qnt] [args]" - echo "default: $0 \"${qnt[@]}\" \"${args}\"" - exit 1 -fi - -if [ ! -z "$2" ]; then - qnt=($2) -fi - -if [ !
-z "$3" ]; then - args="$3" -fi - -model="$1" -out="../tmp/results-${model}" - -set -o pipefail -set -e - -mkdir -p ${out} - -for q in ${qnt[@]}; do - time ./bin/quantize ../models/${model}/ggml-model-f16.gguf ../models/${model}/ggml-model-${q}.gguf ${q} 2>&1 ${args} | tee ${out}/qnt-${q}.txt -done diff --git a/spaces/Ilzhabimantara/rvc-Blue-archives/lib/infer_pack/modules.py b/spaces/Ilzhabimantara/rvc-Blue-archives/lib/infer_pack/modules.py deleted file mode 100644 index c83289df7c79a4810dacd15c050148544ba0b6a9..0000000000000000000000000000000000000000 --- a/spaces/Ilzhabimantara/rvc-Blue-archives/lib/infer_pack/modules.py +++ /dev/null @@ -1,522 +0,0 @@ -import copy -import math -import numpy as np -import scipy -import torch -from torch import nn -from torch.nn import functional as F - -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm - -from lib.infer_pack import commons -from lib.infer_pack.commons import init_weights, get_padding -from lib.infer_pack.transforms import piecewise_rational_quadratic_transform - - -LRELU_SLOPE = 0.1 - - -class LayerNorm(nn.Module): - def __init__(self, channels, eps=1e-5): - super().__init__() - self.channels = channels - self.eps = eps - - self.gamma = nn.Parameter(torch.ones(channels)) - self.beta = nn.Parameter(torch.zeros(channels)) - - def forward(self, x): - x = x.transpose(1, -1) - x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps) - return x.transpose(1, -1) - - -class ConvReluNorm(nn.Module): - def __init__( - self, - in_channels, - hidden_channels, - out_channels, - kernel_size, - n_layers, - p_dropout, - ): - super().__init__() - self.in_channels = in_channels - self.hidden_channels = hidden_channels - self.out_channels = out_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - assert n_layers > 1, "Number of layers should be larger than 0." 
- - self.conv_layers = nn.ModuleList() - self.norm_layers = nn.ModuleList() - self.conv_layers.append( - nn.Conv1d( - in_channels, hidden_channels, kernel_size, padding=kernel_size // 2 - ) - ) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.relu_drop = nn.Sequential(nn.ReLU(), nn.Dropout(p_dropout)) - for _ in range(n_layers - 1): - self.conv_layers.append( - nn.Conv1d( - hidden_channels, - hidden_channels, - kernel_size, - padding=kernel_size // 2, - ) - ) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.proj = nn.Conv1d(hidden_channels, out_channels, 1) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask): - x_org = x - for i in range(self.n_layers): - x = self.conv_layers[i](x * x_mask) - x = self.norm_layers[i](x) - x = self.relu_drop(x) - x = x_org + self.proj(x) - return x * x_mask - - -class DDSConv(nn.Module): - """ - Dialted and Depth-Separable Convolution - """ - - def __init__(self, channels, kernel_size, n_layers, p_dropout=0.0): - super().__init__() - self.channels = channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - - self.drop = nn.Dropout(p_dropout) - self.convs_sep = nn.ModuleList() - self.convs_1x1 = nn.ModuleList() - self.norms_1 = nn.ModuleList() - self.norms_2 = nn.ModuleList() - for i in range(n_layers): - dilation = kernel_size**i - padding = (kernel_size * dilation - dilation) // 2 - self.convs_sep.append( - nn.Conv1d( - channels, - channels, - kernel_size, - groups=channels, - dilation=dilation, - padding=padding, - ) - ) - self.convs_1x1.append(nn.Conv1d(channels, channels, 1)) - self.norms_1.append(LayerNorm(channels)) - self.norms_2.append(LayerNorm(channels)) - - def forward(self, x, x_mask, g=None): - if g is not None: - x = x + g - for i in range(self.n_layers): - y = self.convs_sep[i](x * x_mask) - y = self.norms_1[i](y) - y = F.gelu(y) - y = self.convs_1x1[i](y) - y = self.norms_2[i](y) - y = F.gelu(y) - y = self.drop(y) - x = x + y - return x * x_mask - - -class WN(torch.nn.Module): - def __init__( - self, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0, - p_dropout=0, - ): - super(WN, self).__init__() - assert kernel_size % 2 == 1 - self.hidden_channels = hidden_channels - self.kernel_size = (kernel_size,) - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - self.p_dropout = p_dropout - - self.in_layers = torch.nn.ModuleList() - self.res_skip_layers = torch.nn.ModuleList() - self.drop = nn.Dropout(p_dropout) - - if gin_channels != 0: - cond_layer = torch.nn.Conv1d( - gin_channels, 2 * hidden_channels * n_layers, 1 - ) - self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name="weight") - - for i in range(n_layers): - dilation = dilation_rate**i - padding = int((kernel_size * dilation - dilation) / 2) - in_layer = torch.nn.Conv1d( - hidden_channels, - 2 * hidden_channels, - kernel_size, - dilation=dilation, - padding=padding, - ) - in_layer = torch.nn.utils.weight_norm(in_layer, name="weight") - self.in_layers.append(in_layer) - - # last one is not necessary - if i < n_layers - 1: - res_skip_channels = 2 * hidden_channels - else: - res_skip_channels = hidden_channels - - res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1) - res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name="weight") - self.res_skip_layers.append(res_skip_layer) - - def forward(self, x, x_mask, g=None, **kwargs): - output = torch.zeros_like(x) - 
n_channels_tensor = torch.IntTensor([self.hidden_channels]) - - if g is not None: - g = self.cond_layer(g) - - for i in range(self.n_layers): - x_in = self.in_layers[i](x) - if g is not None: - cond_offset = i * 2 * self.hidden_channels - g_l = g[:, cond_offset : cond_offset + 2 * self.hidden_channels, :] - else: - g_l = torch.zeros_like(x_in) - - acts = commons.fused_add_tanh_sigmoid_multiply(x_in, g_l, n_channels_tensor) - acts = self.drop(acts) - - res_skip_acts = self.res_skip_layers[i](acts) - if i < self.n_layers - 1: - res_acts = res_skip_acts[:, : self.hidden_channels, :] - x = (x + res_acts) * x_mask - output = output + res_skip_acts[:, self.hidden_channels :, :] - else: - output = output + res_skip_acts - return output * x_mask - - def remove_weight_norm(self): - if self.gin_channels != 0: - torch.nn.utils.remove_weight_norm(self.cond_layer) - for l in self.in_layers: - torch.nn.utils.remove_weight_norm(l) - for l in self.res_skip_layers: - torch.nn.utils.remove_weight_norm(l) - - -class ResBlock1(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)): - super(ResBlock1, self).__init__() - self.convs1 = nn.ModuleList( - [ - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[2], - padding=get_padding(kernel_size, dilation[2]), - ) - ), - ] - ) - self.convs1.apply(init_weights) - - self.convs2 = nn.ModuleList( - [ - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=1, - padding=get_padding(kernel_size, 1), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=1, - padding=get_padding(kernel_size, 1), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=1, - padding=get_padding(kernel_size, 1), - ) - ), - ] - ) - self.convs2.apply(init_weights) - - def forward(self, x, x_mask=None): - for c1, c2 in zip(self.convs1, self.convs2): - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c1(xt) - xt = F.leaky_relu(xt, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c2(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs1: - remove_weight_norm(l) - for l in self.convs2: - remove_weight_norm(l) - - -class ResBlock2(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3)): - super(ResBlock2, self).__init__() - self.convs = nn.ModuleList( - [ - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]), - ) - ), - ] - ) - self.convs.apply(init_weights) - - def forward(self, x, x_mask=None): - for c in self.convs: - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs: - remove_weight_norm(l) - - -class Log(nn.Module): - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = 
torch.log(torch.clamp_min(x, 1e-5)) * x_mask - logdet = torch.sum(-y, [1, 2]) - return y, logdet - else: - x = torch.exp(x) * x_mask - return x - - -class Flip(nn.Module): - def forward(self, x, *args, reverse=False, **kwargs): - x = torch.flip(x, [1]) - if not reverse: - logdet = torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device) - return x, logdet - else: - return x - - -class ElementwiseAffine(nn.Module): - def __init__(self, channels): - super().__init__() - self.channels = channels - self.m = nn.Parameter(torch.zeros(channels, 1)) - self.logs = nn.Parameter(torch.zeros(channels, 1)) - - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = self.m + torch.exp(self.logs) * x - y = y * x_mask - logdet = torch.sum(self.logs * x_mask, [1, 2]) - return y, logdet - else: - x = (x - self.m) * torch.exp(-self.logs) * x_mask - return x - - -class ResidualCouplingLayer(nn.Module): - def __init__( - self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - p_dropout=0, - gin_channels=0, - mean_only=False, - ): - assert channels % 2 == 0, "channels should be divisible by 2" - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.half_channels = channels // 2 - self.mean_only = mean_only - - self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1) - self.enc = WN( - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - p_dropout=p_dropout, - gin_channels=gin_channels, - ) - self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1) - self.post.weight.data.zero_() - self.post.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels] * 2, 1) - h = self.pre(x0) * x_mask - h = self.enc(h, x_mask, g=g) - stats = self.post(h) * x_mask - if not self.mean_only: - m, logs = torch.split(stats, [self.half_channels] * 2, 1) - else: - m = stats - logs = torch.zeros_like(m) - - if not reverse: - x1 = m + x1 * torch.exp(logs) * x_mask - x = torch.cat([x0, x1], 1) - logdet = torch.sum(logs, [1, 2]) - return x, logdet - else: - x1 = (x1 - m) * torch.exp(-logs) * x_mask - x = torch.cat([x0, x1], 1) - return x - - def remove_weight_norm(self): - self.enc.remove_weight_norm() - - -class ConvFlow(nn.Module): - def __init__( - self, - in_channels, - filter_channels, - kernel_size, - n_layers, - num_bins=10, - tail_bound=5.0, - ): - super().__init__() - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.num_bins = num_bins - self.tail_bound = tail_bound - self.half_channels = in_channels // 2 - - self.pre = nn.Conv1d(self.half_channels, filter_channels, 1) - self.convs = DDSConv(filter_channels, kernel_size, n_layers, p_dropout=0.0) - self.proj = nn.Conv1d( - filter_channels, self.half_channels * (num_bins * 3 - 1), 1 - ) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels] * 2, 1) - h = self.pre(x0) - h = self.convs(h, x_mask, g=g) - h = self.proj(h) * x_mask - - b, c, t = x0.shape - h = h.reshape(b, c, -1, t).permute(0, 1, 3, 2) # [b, cx?, t] -> [b, c, t, ?] 
- - unnormalized_widths = h[..., : self.num_bins] / math.sqrt(self.filter_channels) - unnormalized_heights = h[..., self.num_bins : 2 * self.num_bins] / math.sqrt( - self.filter_channels - ) - unnormalized_derivatives = h[..., 2 * self.num_bins :] - - x1, logabsdet = piecewise_rational_quadratic_transform( - x1, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=reverse, - tails="linear", - tail_bound=self.tail_bound, - ) - - x = torch.cat([x0, x1], 1) * x_mask - logdet = torch.sum(logabsdet * x_mask, [1, 2]) - if not reverse: - return x, logdet - else: - return x diff --git a/spaces/JMalott/ai_architecture/dalle/models/stage2/layers.py b/spaces/JMalott/ai_architecture/dalle/models/stage2/layers.py deleted file mode 100644 index 459ddae9fd7970c05618181511d863cb637eb0b0..0000000000000000000000000000000000000000 --- a/spaces/JMalott/ai_architecture/dalle/models/stage2/layers.py +++ /dev/null @@ -1,140 +0,0 @@ -# ------------------------------------------------------------------------------------ -# minDALL-E -# Copyright (c) 2021 Kakao Brain Corp. All Rights Reserved. -# Licensed under the Apache License, Version 2.0 [see LICENSE for details] -# ------------------------------------------------------------------------------------ -# Modified from minGPT (https://github.com/karpathy/minGPT) -# Copyright (c) 2020 Andrej Karpathy. All Rights Reserved. -# ------------------------------------------------------------------------------------ - -import math -import torch -import torch.nn as nn -from torch.nn import functional as F - - -class GELU(nn.Module): - def __init__(self, use_approx=False): - super().__init__() - self.use_approx = use_approx - - def forward(self, x): - if self.use_approx: - return x * torch.sigmoid(1.702 * x) - else: - return F.gelu(x) - - -class MultiHeadSelfAttention(nn.Module): - - def __init__(self, - ctx_len: int, - embed_dim: int, - n_heads: int, - resid_pdrop: float, - attn_pdrop: float, - attn_bias: bool, - use_mask: bool = True): - super().__init__() - assert embed_dim % n_heads == 0 - - # key, query, value projections for all heads - self.key = nn.Linear(embed_dim, embed_dim, bias=attn_bias) - self.query = nn.Linear(embed_dim, embed_dim, bias=attn_bias) - self.value = nn.Linear(embed_dim, embed_dim, bias=attn_bias) - - # regularization - self.attn_drop = nn.Dropout(attn_pdrop) - self.resid_drop = nn.Dropout(resid_pdrop) - - # output projection - self.proj = nn.Linear(embed_dim, embed_dim, attn_bias) - - self.n_heads = n_heads - self.ctx_len = ctx_len - self.use_mask = use_mask - if self.use_mask: - self.register_buffer("mask", torch.ones(ctx_len, ctx_len), persistent=False) - self.mask = torch.tril(self.mask).view(1, ctx_len, ctx_len) - - def forward(self, x, use_cache=False, layer_past=None): - B, T, C = x.shape - x = x.transpose(0, 1).contiguous() # (B, T, C) -> (T, B, C) - - # calculate query, key, values for all heads in batch and move head forward to be the batch dim - k = self.key(x).view(T, B*self.n_heads, C//self.n_heads).transpose(0, 1) # (B*nh, T, hs) - q = self.query(x).view(T, B*self.n_heads, C//self.n_heads).transpose(0, 1) # (B*nh, T, hs) - v = self.value(x).view(T, B*self.n_heads, C//self.n_heads).transpose(0, 1) # (B*nh, T, hs) - - if use_cache: - present = torch.stack([k, v]) - - if layer_past is not None: - past_key, past_value = layer_past - k = torch.cat([past_key, k], dim=-2) - v = torch.cat([past_value, v], dim=-2) - - if use_cache and layer_past is not None: - # Tensor shape below: (B * nh, 1, hs) X (B * nh, 
hs, K) -> (B * nh, 1, K) - att = torch.bmm(q, (k.transpose(-2, -1)) * (1.0 / math.sqrt(k.size(-1)))) - att = F.softmax(att, dim=-1) - att = self.attn_drop(att) - y = torch.bmm(att, v) # (B*nh, 1, K) X (B*nh, K, hs) -> (B*nh, 1, hs) - else: - # Tensor shape below: (B * nh, T, hs) X (B * nh, hs, T) -> (B * nh, T, T) - att = torch.bmm(q, (k.transpose(-2, -1)) * (1.0 / math.sqrt(k.size(-1)))) - if self.use_mask: - mask = self.mask if T == self.ctx_len else self.mask[:, :T, :T] - att = att.masked_fill(mask == 0, float('-inf')) - att = F.softmax(att, dim=-1) - att = self.attn_drop(att) - y = torch.bmm(att, v) # (B*nh, T, T) X (B*nh, T, hs) -> (B*nh, T, hs) - y = y.transpose(0, 1).contiguous().view(T, B, C) # re-assemble all head outputs side by side - - # output projection - y = self.resid_drop(self.proj(y)) - if use_cache: - return y.transpose(0, 1).contiguous(), present # (T, B, C) -> (B, T, C) - else: - return y.transpose(0, 1).contiguous() # (T, B, C) -> (B, T, C) - - -class Block(nn.Module): - - def __init__(self, - ctx_len: int, - embed_dim: int, - n_heads: int, - mlp_bias: bool, - attn_bias: bool, - resid_pdrop: bool, - attn_pdrop: bool, - gelu_use_approx: bool): - super().__init__() - self.ln1 = nn.LayerNorm(embed_dim) - self.ln2 = nn.LayerNorm(embed_dim) - - self.attn = MultiHeadSelfAttention(ctx_len=ctx_len, - embed_dim=embed_dim, - n_heads=n_heads, - attn_pdrop=attn_pdrop, - resid_pdrop=resid_pdrop, - attn_bias=attn_bias, - use_mask=True) - self.mlp = nn.Sequential( - nn.Linear(embed_dim, 4 * embed_dim, bias=mlp_bias), - GELU(gelu_use_approx), - nn.Linear(4 * embed_dim, embed_dim, bias=mlp_bias), - nn.Dropout(resid_pdrop), - ) - - def forward(self, x): - x = x + self.attn(self.ln1(x)) - x = x + self.mlp(self.ln2(x)) - return x - - def sample(self, x, layer_past=None): - attn, present = self.attn(self.ln1(x), use_cache=True, layer_past=layer_past) - x = x + attn - x = x + self.mlp(self.ln2(x)) - return x, present diff --git a/spaces/JammyMachina/the-jam-machine-app/generate.py b/spaces/JammyMachina/the-jam-machine-app/generate.py deleted file mode 100644 index f95200755ea24297938349921900881ce832bf4b..0000000000000000000000000000000000000000 --- a/spaces/JammyMachina/the-jam-machine-app/generate.py +++ /dev/null @@ -1,436 +0,0 @@ -from generation_utils import * -import random - - -class GenerateMidiText: - """Generating music with Class - - LOGIC: - - FOR GENERATING FROM SCRATCH: - - self.generate_one_new_track() - it calls - - self.generate_until_track_end() - - FOR GENERATING NEW BARS: - - self.generate_one_more_bar() - it calls - - self.process_prompt_for_next_bar() - - self.generate_until_track_end()""" - - def __init__(self, model, tokenizer, piece_by_track=[]): - self.model = model - self.tokenizer = tokenizer - # default initialization - self.initialize_default_parameters() - self.initialize_dictionaries(piece_by_track) - - """Setters""" - - def initialize_default_parameters(self): - self.set_device() - self.set_attention_length() - self.generate_until = "TRACK_END" - self.set_force_sequence_lenth() - self.set_nb_bars_generated() - self.set_improvisation_level(0) - - def initialize_dictionaries(self, piece_by_track): - self.piece_by_track = piece_by_track - - def set_device(self, device="cpu"): - self.device = ("cpu",) - - def set_attention_length(self): - self.max_length = self.model.config.n_positions - print( - f"Attention length set to {self.max_length} -> 'model.config.n_positions'" - ) - - def set_force_sequence_lenth(self, force_sequence_length=True): - 
self.force_sequence_length = force_sequence_length - - def set_improvisation_level(self, improvisation_value): - self.no_repeat_ngram_size = improvisation_value - print("--------------------") - print(f"no_repeat_ngram_size set to {improvisation_value}") - print("--------------------") - - def reset_temperatures(self, track_id, temperature): - self.piece_by_track[track_id]["temperature"] = temperature - - def set_nb_bars_generated(self, n_bars=8): # default is a 8 bar model - self.model_n_bar = n_bars - - """ Generation Tools - Dictionnaries """ - - def initiate_track_dict(self, instr, density, temperature): - label = len(self.piece_by_track) - self.piece_by_track.append( - { - "label": f"track_{label}", - "instrument": instr, - "density": density, - "temperature": temperature, - "bars": [], - } - ) - - def update_track_dict__add_bars(self, bars, track_id): - """Add bars to the track dictionnary""" - for bar in self.striping_track_ends(bars).split("BAR_START "): - if bar == "": # happens is there is one bar only - continue - else: - if "TRACK_START" in bar: - self.piece_by_track[track_id]["bars"].append(bar) - else: - self.piece_by_track[track_id]["bars"].append("BAR_START " + bar) - - def get_all_instr_bars(self, track_id): - return self.piece_by_track[track_id]["bars"] - - def striping_track_ends(self, text): - if "TRACK_END" in text: - # first get rid of extra space if any - # then gets rid of "TRACK_END" - text = text.rstrip(" ").rstrip("TRACK_END") - return text - - def get_last_generated_track(self, piece): - """Get the last track from a piece written as a single long string""" - track = self.get_tracks_from_a_piece(piece)[-1] - return track - - def get_tracks_from_a_piece(self, piece): - """Get all the tracks from a piece written as a single long string""" - all_tracks = [ - "TRACK_START " + the_track + "TRACK_END " - for the_track in self.striping_track_ends(piece.split("TRACK_START ")[1::]) - ] - return all_tracks - - def get_piece_from_track_list(self, track_list): - piece = "PIECE_START " - for track in track_list: - piece += track - return piece - - def get_whole_track_from_bar_dict(self, track_id): - text = "" - for bar in self.piece_by_track[track_id]["bars"]: - text += bar - text += "TRACK_END " - return text - - @staticmethod - def get_newly_generated_text(input_prompt, full_piece): - return full_piece[len(input_prompt) :] - - def get_whole_piece_from_bar_dict(self): - text = "PIECE_START " - for track_id, _ in enumerate(self.piece_by_track): - text += self.get_whole_track_from_bar_dict(track_id) - return text - - def delete_one_track(self, track): - self.piece_by_track.pop(track) - - """Basic generation tools""" - - def tokenize_input_prompt(self, input_prompt, verbose=True): - """Tokenizing prompt - - Args: - - input_prompt (str): prompt to tokenize - - Returns: - - input_prompt_ids (torch.tensor): tokenized prompt - """ - if verbose: - print("Tokenizing input_prompt...") - - return self.tokenizer.encode(input_prompt, return_tensors="pt") - - def generate_sequence_of_token_ids( - self, - input_prompt_ids, - temperature, - verbose=True, - ): - """ - generate a sequence of token ids based on input_prompt_ids - The sequence length depends on the trained model (self.model_n_bar) - """ - generated_ids = self.model.generate( - input_prompt_ids, - max_length=self.max_length, - do_sample=True, - temperature=temperature, - no_repeat_ngram_size=self.no_repeat_ngram_size, # default = 0 - eos_token_id=self.tokenizer.encode(self.generate_until)[0], # good - ) - - if verbose: - 
print("Generating a token_id sequence...") - - return generated_ids - - def convert_ids_to_text(self, generated_ids, verbose=True): - """converts the token_ids to text""" - generated_text = self.tokenizer.decode(generated_ids[0]) - if verbose: - print("Converting token sequence to MidiText...") - return generated_text - - def generate_until_track_end( - self, - input_prompt="PIECE_START ", - instrument=None, - density=None, - temperature=None, - verbose=True, - expected_length=None, - ): - """generate until the TRACK_END token is reached - full_piece = input_prompt + generated""" - if expected_length is None: - expected_length = self.model_n_bar - - if instrument is not None: - input_prompt = f"{input_prompt}TRACK_START INST={str(instrument)} " - if density is not None: - input_prompt = f"{input_prompt}DENSITY={str(density)} " - - if instrument is None and density is not None: - print("Density cannot be defined without an input_prompt instrument #TOFIX") - - if temperature is None: - ValueError("Temperature must be defined") - - if verbose: - print("--------------------") - print( - f"Generating {instrument} - Density {density} - temperature {temperature}" - ) - bar_count_checks = False - failed = 0 - while not bar_count_checks: # regenerate until right length - input_prompt_ids = self.tokenize_input_prompt(input_prompt, verbose=verbose) - generated_tokens = self.generate_sequence_of_token_ids( - input_prompt_ids, temperature, verbose=verbose - ) - full_piece = self.convert_ids_to_text(generated_tokens, verbose=verbose) - generated = self.get_newly_generated_text(input_prompt, full_piece) - # bar_count_checks - bar_count_checks, bar_count = bar_count_check(generated, expected_length) - - if not self.force_sequence_length: - # set bar_count_checks to true to exist the while loop - bar_count_checks = True - - if not bar_count_checks and self.force_sequence_length: - # if the generated sequence is not the expected length - if failed > -1: # deactivated for speed - full_piece, bar_count_checks = forcing_bar_count( - input_prompt, - generated, - bar_count, - expected_length, - ) - else: - print('"--- Wrong length - Regenerating ---') - - if not bar_count_checks: - failed += 1 - - if failed > 2: - bar_count_checks = True # exit the while loop if failed too much - - return full_piece - - def generate_one_new_track( - self, - instrument, - density, - temperature, - input_prompt="PIECE_START ", - ): - self.initiate_track_dict(instrument, density, temperature) - full_piece = self.generate_until_track_end( - input_prompt=input_prompt, - instrument=instrument, - density=density, - temperature=temperature, - ) - - track = self.get_last_generated_track(full_piece) - self.update_track_dict__add_bars(track, -1) - full_piece = self.get_whole_piece_from_bar_dict() - return full_piece - - """ Piece generation - Basics """ - - def generate_piece(self, instrument_list, density_list, temperature_list): - """generate a sequence with mutiple tracks - - Args: - - inst_list sets the list of instruments and the the order of generation - - density and - - temperature are paired with inst_list - - Each track/intrument is generated based on a prompt which contains the previously generated track/instrument - - Returns: - 'generated_piece' which keeps track of the entire piece - """ - - generated_piece = "PIECE_START " - for instrument, density, temperature in zip( - instrument_list, density_list, temperature_list - ): - generated_piece = self.generate_one_new_track( - instrument, - density, - temperature, - 
input_prompt=generated_piece, - ) - - # generated_piece = self.get_whole_piece_from_bar_dict() - self.check_the_piece_for_errors() - return generated_piece - - """ Piece generation - Extra Bars """ - - def process_prompt_for_next_bar(self, track_idx, verbose=True): - """Processing the prompt for the model to generate one more bar only. - The prompt containts: - if not the first bar: the previous, already processed, bars of the track - the bar initialization (ex: "TRACK_START INST=DRUMS DENSITY=2 ") - the last (self.model_n_bar)-1 bars of the track - Args: - track_idx (int): the index of the track to be processed - - Returns: - the processed prompt for generating the next bar - """ - track = self.piece_by_track[track_idx] - # for bars which are not the bar to prolong - pre_promt = "PIECE_START " - for i, othertrack in enumerate(self.piece_by_track): - if i != track_idx: - len_diff = len(othertrack["bars"]) - len(track["bars"]) - if len_diff > 0: - if verbose: - print( - f"Adding bars - {len(track['bars'][-self.model_n_bar :])} selected from SIDE track: {i} for prompt" - ) - # if other bars are longer, it mean that this one should catch up - pre_promt += othertrack["bars"][0] - for bar in track["bars"][-self.model_n_bar :]: - pre_promt += bar - pre_promt += "TRACK_END " - elif ( - False - ): # len_diff <= 0: # THIS DOES NOT WORK - It just adds empty bars - # adding an empty bars at the end of the other tracks if they have not been processed yet - pre_promt += othertracks["bars"][0] - for bar in track["bars"][-(self.model_n_bar - 1) :]: - pre_promt += bar - for _ in range(abs(len_diff) + 1): - pre_promt += "BAR_START BAR_END " - pre_promt += "TRACK_END " - - # for the bar to prolong - # initialization e.g TRACK_START INST=DRUMS DENSITY=2 - processed_prompt = track["bars"][0] - if verbose: - print( - f"Adding bars - {len(track['bars'][-(self.model_n_bar - 1) :])} selected from MAIN track: {track_idx} for prompt" - ) - for bar in track["bars"][-(self.model_n_bar - 1) :]: - # adding the "last" bars of the track - processed_prompt += bar - - processed_prompt += "BAR_START " - - # making the preprompt short enought to avoid bug due to length of the prompt (model limitation) - pre_promt = self.force_prompt_length(pre_promt, 1500) - - print( - f"--- prompt length = {len((pre_promt + processed_prompt).split(' '))} ---" - ) - - return pre_promt + processed_prompt - - def force_prompt_length(self, prompt, expected_length): - """remove one instrument/track from the prompt it too long - Args: - prompt (str): the prompt to be processed - expected_length (int): the expected length of the prompt - Returns: - the truncated prompt""" - if len(prompt.split(" ")) < expected_length: - truncated_prompt = prompt - else: - tracks = self.get_tracks_from_a_piece(prompt) - selected_tracks = random.sample(tracks, len(tracks) - 1) - truncated_prompt = self.get_piece_from_track_list(selected_tracks) - print(f"Prompt too long - deleting one track") - - return truncated_prompt - - def generate_one_more_bar(self, track_index): - """Generate one more bar from the input_prompt""" - processed_prompt = self.process_prompt_for_next_bar(track_index) - - prompt_plus_bar = self.generate_until_track_end( - input_prompt=processed_prompt, - temperature=self.piece_by_track[track_index]["temperature"], - expected_length=1, - verbose=False, - ) - added_bar = self.get_newly_generated_bar(prompt_plus_bar) - self.update_track_dict__add_bars(added_bar, track_index) - - def get_newly_generated_bar(self, prompt_plus_bar): - return "BAR_START " 
+ self.striping_track_ends( - prompt_plus_bar.split("BAR_START ")[-1] - ) - - def generate_n_more_bars(self, n_bars, only_this_track=None, verbose=True): - """Generate n more bars from the input_prompt""" - if only_this_track is None: - only_this_track - - print(f"================== ") - print(f"Adding {n_bars} more bars to the piece ") - for bar_id in range(n_bars): - print(f"----- added bar #{bar_id+1} --") - for i, track in enumerate(self.piece_by_track): - if only_this_track is None or i == only_this_track: - print(f"--------- {track['label']}") - self.generate_one_more_bar(i) - self.check_the_piece_for_errors() - - def check_the_piece_for_errors(self, piece: str = None): - if piece is None: - piece = self.get_whole_piece_from_bar_dict() - errors = [] - errors.append( - [ - (token, id) - for id, token in enumerate(piece.split(" ")) - if token not in self.tokenizer.vocab or token == "UNK" - ] - ) - if len(errors) > 0: - # print(piece) - for er in errors: - er - print(f"Token not found in the piece at {er[0][1]}: {er[0][0]}") - print(piece.split(" ")[er[0][1] - 5 : er[0][1] + 5]) - - -if __name__ == "__main__": - pass diff --git a/spaces/JohnnyPittt/audio-styling/deepafx_st/models/baselines.py b/spaces/JohnnyPittt/audio-styling/deepafx_st/models/baselines.py deleted file mode 100644 index 806caca587e7bedc71251da58f5acfdba9492ad3..0000000000000000000000000000000000000000 --- a/spaces/JohnnyPittt/audio-styling/deepafx_st/models/baselines.py +++ /dev/null @@ -1,280 +0,0 @@ -import torch -import torchaudio -import scipy.signal -import numpy as np -import pyloudnorm as pyln -import matplotlib.pyplot as plt -from deepafx_st.processors.dsp.compressor import compressor - -from tqdm import tqdm - - -class BaselineEQ(torch.nn.Module): - def __init__( - self, - ntaps: int = 63, - n_fft: int = 65536, - sample_rate: float = 44100, - ): - super().__init__() - self.ntaps = ntaps - self.n_fft = n_fft - self.sample_rate = sample_rate - - # compute the target spectrum - # print("Computing target spectrum...") - # self.target_spec, self.sm_target_spec = self.analyze_speech_dataset(filepaths) - # self.plot_spectrum(self.target_spec, filename="targetEQ") - # self.plot_spectrum(self.sm_target_spec, filename="targetEQsm") - - def forward(self, x, y): - - bs, ch, s = x.size() - - x = x.view(bs * ch, -1) - y = y.view(bs * ch, -1) - - in_spec = self.get_average_spectrum(x) - ref_spec = self.get_average_spectrum(y) - - sm_in_spec = self.smooth_spectrum(in_spec) - sm_ref_spec = self.smooth_spectrum(ref_spec) - - # self.plot_spectrum(in_spec, filename="inSpec") - # self.plot_spectrum(sm_in_spec, filename="inSpecsm") - - # design inverse FIR filter to match target EQ - freqs = np.linspace(0, 1.0, num=(self.n_fft // 2) + 1) - response = sm_ref_spec / sm_in_spec - response[-1] = 0.0 # zero gain at nyquist - - b = scipy.signal.firwin2( - self.ntaps, - freqs * (self.sample_rate / 2), - response, - fs=self.sample_rate, - ) - - # scale the coefficients for less intense filter - # clearb *= 0.5 - - # apply the filter - x_filt = scipy.signal.lfilter(b, [1.0], x.numpy()) - x_filt = torch.tensor(x_filt.astype("float32")) - - if False: - # plot the filter response - w, h = scipy.signal.freqz(b, fs=self.sample_rate, worN=response.shape[-1]) - - fig, ax1 = plt.subplots() - ax1.set_title("Digital filter frequency response") - ax1.plot(w, 20 * np.log10(abs(h + 1e-8))) - ax1.plot(w, 20 * np.log10(abs(response + 1e-8))) - - ax1.set_xscale("log") - ax1.set_ylim([-12, 12]) - plt.grid(c="lightgray") - plt.savefig(f"inverse.png") - - 
x_filt_avg_spec = self.get_average_spectrum(x_filt) - sm_x_filt_avg_spec = self.smooth_spectrum(x_filt_avg_spec) - y_avg_spec = self.get_average_spectrum(y) - sm_y_avg_spec = self.smooth_spectrum(y_avg_spec) - compare = torch.stack( - [ - torch.tensor(sm_in_spec), - torch.tensor(sm_x_filt_avg_spec), - torch.tensor(sm_ref_spec), - torch.tensor(sm_y_avg_spec), - ] - ) - self.plot_multi_spectrum( - compare, - legend=["in", "out", "target curve", "actual target"], - filename="outSpec", - ) - - return x_filt - - def analyze_speech_dataset(self, filepaths, peak=-3.0): - avg_spec = [] - for filepath in tqdm(filepaths, ncols=80): - x, sr = torchaudio.load(filepath) - x /= x.abs().max() - x *= 10 ** (peak / 20.0) - avg_spec.append(self.get_average_spectrum(x)) - avg_specs = torch.stack(avg_spec) - - avg_spec = avg_specs.mean(dim=0).numpy() - avg_spec_std = avg_specs.std(dim=0).numpy() - - # self.plot_multi_spectrum(avg_specs, filename="allTargetEQs") - # self.plot_spectrum_stats(avg_spec, avg_spec_std, filename="targetEQstats") - - sm_avg_spec = self.smooth_spectrum(avg_spec) - - return avg_spec, sm_avg_spec - - def smooth_spectrum(self, H): - # apply Savgol filter for smoothed target curve - return scipy.signal.savgol_filter(H, 1025, 2) - - def get_average_spectrum(self, x): - - # x = x[:, : self.n_fft] - X = torch.stft(x, self.n_fft, return_complex=True, normalized=True) - # fft_size = self.next_power_of_2(x.shape[-1]) - # X = torch.fft.rfft(x, n=fft_size) - - X = X.abs() # convert to magnitude - X = X.mean(dim=-1).view(-1) # average across frames - - return X - - @staticmethod - def next_power_of_2(x): - return 1 if x == 0 else int(2 ** np.ceil(np.log2(x))) - - def plot_multi_spectrum(self, Hs, legend=[], filename=None): - - bin_width = (self.sample_rate / 2) / (self.n_fft // 2) - freqs = np.arange(0, (self.sample_rate / 2) + bin_width, step=bin_width) - - fig, ax1 = plt.subplots() - - for H in Hs: - ax1.plot( - freqs, - 20 * np.log10(abs(H) + 1e-8), - ) - - plt.legend(legend) - - # avg_spec = Hs.mean(dim=0).numpy() - # ax1.plot(freqs, 20 * np.log10(avg_spec), color="k", linewidth=2) - - ax1.set_xscale("log") - ax1.set_ylim([-80, 0]) - plt.grid(c="lightgray") - - if filename is not None: - plt.savefig(f"{filename}.png") - - def plot_spectrum_stats(self, H_mean, H_std, filename=None): - bin_width = (self.sample_rate / 2) / (self.n_fft // 2) - freqs = np.arange(0, (self.sample_rate / 2) + bin_width, step=bin_width) - - fig, ax1 = plt.subplots() - ax1.plot(freqs, 20 * np.log10(H_mean)) - ax1.plot( - freqs, - (20 * np.log10(H_mean)) + (20 * np.log10(H_std)), - linestyle="--", - color="k", - ) - ax1.plot( - freqs, - (20 * np.log10(H_mean)) - (20 * np.log10(H_std)), - linestyle="--", - color="k", - ) - - ax1.set_xscale("log") - ax1.set_ylim([-80, 0]) - plt.grid(c="lightgray") - - if filename is not None: - plt.savefig(f"{filename}.png") - - def plot_spectrum(self, H, legend=[], filename=None): - - bin_width = (self.sample_rate / 2) / (self.n_fft // 2) - freqs = np.arange(0, (self.sample_rate / 2) + bin_width, step=bin_width) - - fig, ax1 = plt.subplots() - ax1.plot(freqs, 20 * np.log10(H)) - ax1.set_xscale("log") - ax1.set_ylim([-80, 0]) - plt.grid(c="lightgray") - - plt.legend(legend) - - if filename is not None: - plt.savefig(f"{filename}.png") - - -class BaslineComp(torch.nn.Module): - def __init__( - self, - sample_rate: float = 44100, - ): - super().__init__() - self.sample_rate = sample_rate - self.meter = pyln.Meter(sample_rate) - - def forward(self, x, y): - - x_lufs = 
self.meter.integrated_loudness(x.view(-1).numpy()) - y_lufs = self.meter.integrated_loudness(y.view(-1).numpy()) - - delta_lufs = y_lufs - x_lufs - - threshold = 0.0 - x_comp = x - x_comp_new = x - while delta_lufs > 0.5 and threshold > -80.0: - x_comp = x_comp_new # use the last setting - x_comp_new = compressor( - x.view(-1).numpy(), - self.sample_rate, - threshold=threshold, - ratio=3, - attack_time=0.001, - release_time=0.05, - knee_dB=6.0, - makeup_gain_dB=0.0, - ) - x_comp_new = torch.tensor(x_comp_new) - x_comp_new /= x_comp_new.abs().max() - x_comp_new *= 10 ** (-12.0 / 20) - x_lufs = self.meter.integrated_loudness(x_comp_new.view(-1).numpy()) - delta_lufs = y_lufs - x_lufs - threshold -= 0.5 - - return x_comp.view(1, 1, -1) - - -class BaselineEQAndComp(torch.nn.Module): - def __init__( - self, - ntaps=63, - n_fft=65536, - sample_rate=44100, - block_size=1024, - plugin_config=None, - ): - super().__init__() - self.eq = BaselineEQ(ntaps, n_fft, sample_rate) - self.comp = BaslineComp(sample_rate) - - def forward(self, x, y): - - with torch.inference_mode(): - x /= x.abs().max() - y /= y.abs().max() - x *= 10 ** (-12.0 / 20) - y *= 10 ** (-12.0 / 20) - - x = self.eq(x, y) - - x /= x.abs().max() - y /= y.abs().max() - x *= 10 ** (-12.0 / 20) - y *= 10 ** (-12.0 / 20) - - x = self.comp(x, y) - - x /= x.abs().max() - x *= 10 ** (-12.0 / 20) - - return x diff --git a/spaces/KPCGD/bingo/src/lib/bots/bing/tts.ts b/spaces/KPCGD/bingo/src/lib/bots/bing/tts.ts deleted file mode 100644 index cd10b7d1d7581bf9cf46ff6755fcca550c558c9b..0000000000000000000000000000000000000000 --- a/spaces/KPCGD/bingo/src/lib/bots/bing/tts.ts +++ /dev/null @@ -1,82 +0,0 @@ -import { sleep } from './utils' - -const synth = window.speechSynthesis - -export class TTS { - currentText = '' - speakText = '' - private controller = new AbortController() - speaking = false - get isSpeaking() { - return this.speaking - } - finished = false - constructor() {} - abort = () => { - this.controller.abort() - } - - reset = () => { - this.speaking = false - this.finished = true - this.currentText = '' - this.speakText = '' - this.abort() - } - - speak = (text: string) => { - if (!synth || text?.trim()?.length < 2) { - return - } - this.currentText = text.replace(/[^\u4e00-\u9fa5_a-zA-Z0-9,。?,:;\.,:]+/g, '') - this.finished = false - this.loop() - } - - private async doSpeek() { - return new Promise((resolve) => { - const endIndex = this.finished ? this.currentText.length : - Math.max( - this.currentText.lastIndexOf('。'), - this.currentText.lastIndexOf(';'), - this.currentText.lastIndexOf('、'), - this.currentText.lastIndexOf('?'), - this.currentText.lastIndexOf('\n') - ) - const startIndex = this.speakText.length ? Math.max(0, this.currentText.lastIndexOf(this.speakText) + this.speakText.length) : 0 - - if (startIndex >= endIndex) { - return resolve(true) - } - const text = this.currentText.slice(startIndex, endIndex) - this.speakText = text - const utterThis = new SpeechSynthesisUtterance(text) - this.controller.signal.onabort = () => { - synth.cancel() - this.finished = true - resolve(false) - } - - utterThis.onend = function (event) { - resolve(true) - } - - utterThis.onerror = function (event) { - resolve(false) - } - - const voice = synth.getVoices().find(v => v.name.includes('Microsoft Yunxi Online')) ?? 
null - utterThis.voice = voice - synth.speak(utterThis) - }) - } - - private async loop() { - if (this.speaking) return - this.speaking = true - while(!this.finished) { - await Promise.all([sleep(1000), this.doSpeek()]) - } - this.speaking = false - } -} diff --git a/spaces/Kaludi/Virtual-AI-Career-Coach_App/README.md b/spaces/Kaludi/Virtual-AI-Career-Coach_App/README.md deleted file mode 100644 index d97817fae68d0ab157a4affc00cd5f66e3db57a5..0000000000000000000000000000000000000000 --- a/spaces/Kaludi/Virtual-AI-Career-Coach_App/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Virtual-AI-Career-Coach App -emoji: 💼 -colorFrom: indigo -colorTo: green -sdk: streamlit -sdk_version: 1.17.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/Kangarroar/ApplioRVC-Inference/Applio-RVC-Fork/utils/clonerepo_experimental.py b/spaces/Kangarroar/ApplioRVC-Inference/Applio-RVC-Fork/utils/clonerepo_experimental.py deleted file mode 100644 index b0ae02648c1307562cf48033908edcf2996db5e2..0000000000000000000000000000000000000000 --- a/spaces/Kangarroar/ApplioRVC-Inference/Applio-RVC-Fork/utils/clonerepo_experimental.py +++ /dev/null @@ -1,253 +0,0 @@ -import os -import subprocess -import shutil -from concurrent.futures import ThreadPoolExecutor, as_completed -from tqdm.notebook import tqdm -from pathlib import Path -import requests - -def run_script(): - def run_cmd(cmd): - process = subprocess.run(cmd, shell=True, check=True, text=True) - return process.stdout - - # Change the current directory to /content/ - os.chdir('/content/') - print("Changing dir to /content/") - - # Your function to edit the file - def edit_file(file_path): - temp_file_path = "/tmp/temp_file.py" - changes_made = False - with open(file_path, "r") as file, open(temp_file_path, "w") as temp_file: - previous_line = "" - second_previous_line = "" - for line in file: - new_line = line.replace("value=160", "value=128") - if new_line != line: - print("Replaced 'value=160' with 'value=128'") - changes_made = True - line = new_line - - new_line = line.replace("crepe hop length: 160", "crepe hop length: 128") - if new_line != line: - print("Replaced 'crepe hop length: 160' with 'crepe hop length: 128'") - changes_made = True - line = new_line - - new_line = line.replace("value=0.88", "value=0.75") - if new_line != line: - print("Replaced 'value=0.88' with 'value=0.75'") - changes_made = True - line = new_line - - if "label=i18n(\"输入源音量包络替换输出音量包络融合比例,越靠近1越使用输出包络\")" in previous_line and "value=1," in line: - new_line = line.replace("value=1,", "value=0.25,") - if new_line != line: - print("Replaced 'value=1,' with 'value=0.25,' based on the condition") - changes_made = True - line = new_line - - if "label=i18n(\"总训练轮数total_epoch\")" in previous_line and "value=20," in line: - new_line = line.replace("value=20,", "value=500,") - if new_line != line: - print("Replaced 'value=20,' with 'value=500,' based on the condition for DEFAULT EPOCH") - changes_made = True - line = new_line - - if 'choices=["pm", "harvest", "dio", "crepe", "crepe-tiny", "mangio-crepe", "mangio-crepe-tiny"], # Fork Feature. 
Add Crepe-Tiny' in previous_line: - if 'value="pm",' in line: - new_line = line.replace('value="pm",', 'value="mangio-crepe",') - if new_line != line: - print("Replaced 'value=\"pm\",' with 'value=\"mangio-crepe\",' based on the condition") - changes_made = True - line = new_line - - new_line = line.replace('label=i18n("输入训练文件夹路径"), value="E:\\\\语音音频+标注\\\\米津玄师\\\\src"', 'label=i18n("输入训练文件夹路径"), value="/content/dataset/"') - if new_line != line: - print("Replaced 'label=i18n(\"输入训练文件夹路径\"), value=\"E:\\\\语音音频+标注\\\\米津玄师\\\\src\"' with 'label=i18n(\"输入训练文件夹路径\"), value=\"/content/dataset/\"'") - changes_made = True - line = new_line - - if 'label=i18n("是否仅保存最新的ckpt文件以节省硬盘空间"),' in second_previous_line: - if 'value=i18n("否"),' in line: - new_line = line.replace('value=i18n("否"),', 'value=i18n("是"),') - if new_line != line: - print("Replaced 'value=i18n(\"否\"),' with 'value=i18n(\"是\"),' based on the condition for SAVE ONLY LATEST") - changes_made = True - line = new_line - - if 'label=i18n("是否在每次保存时间点将最终小模型保存至weights文件夹"),' in second_previous_line: - if 'value=i18n("否"),' in line: - new_line = line.replace('value=i18n("否"),', 'value=i18n("是"),') - if new_line != line: - print("Replaced 'value=i18n(\"否\"),' with 'value=i18n(\"是\"),' based on the condition for SAVE SMALL WEIGHTS") - changes_made = True - line = new_line - - temp_file.write(line) - second_previous_line = previous_line - previous_line = line - - # After finished, we replace the original file with the temp one - import shutil - shutil.move(temp_file_path, file_path) - - if changes_made: - print("Changes made and file saved successfully.") - else: - print("No changes were needed.") - - # Define the repo path - repo_path = '/content/Applio-RVC-Fork' - - def copy_all_files_in_directory(src_dir, dest_dir): - # Iterate over all files in source directory - for item in Path(src_dir).glob('*'): - if item.is_file(): - # Copy each file to destination directory - shutil.copy(item, dest_dir) - else: - # If it's a directory, make a new directory in the destination and copy the files recursively - new_dest = Path(dest_dir) / item.name - new_dest.mkdir(exist_ok=True) - copy_all_files_in_directory(str(item), str(new_dest)) - - def clone_and_copy_repo(repo_path): - # New repository link - new_repo_link = "https://github.com/IAHispano/Applio-RVC-Fork/" - # Temporary path to clone the repository - temp_repo_path = "/content/temp_Applio-RVC-Fork" - # New folder name - new_folder_name = "Applio-RVC-Fork" - - # Clone the latest code from the new repository to a temporary location - run_cmd(f"git clone {new_repo_link} {temp_repo_path}") - os.chdir(temp_repo_path) - - run_cmd(f"git checkout 3fa4dad3d8961e5ca2522e9e12c0b4ddb71ad402") - run_cmd(f"git checkout f9e606c279cb49420597519b0a83b92be81e42e4") - run_cmd(f"git checkout 9e305588844c5442d58add1061b29beeca89d679") - run_cmd(f"git checkout bf92dc1eb54b4f28d6396a4d1820a25896cc9af8") - run_cmd(f"git checkout c3810e197d3cb98039973b2f723edf967ecd9e61") - run_cmd(f"git checkout a33159efd134c2413b0afe26a76b7dc87926d2de") - run_cmd(f"git checkout 24e251fb62c662e39ac5cf9253cc65deb9be94ec") - run_cmd(f"git checkout ad5667d3017e93232dba85969cddac1322ba2902") - run_cmd(f"git checkout ce9715392cf52dd5a0e18e00d1b5e408f08dbf27") - run_cmd(f"git checkout 7c7da3f2ac68f3bd8f3ad5ca5c700f18ab9f90eb") - run_cmd(f"git checkout 4ac395eab101955e8960b50d772c26f592161764") - run_cmd(f"git checkout b15b358702294c7375761584e5276c811ffab5e8") - run_cmd(f"git checkout 1501793dc490982db9aca84a50647764caa66e51") - run_cmd(f"git 
checkout 21f7faf57219c75e6ba837062350391a803e9ae2") - run_cmd(f"git checkout b5eb689fbc409b49f065a431817f822f554cebe7") - run_cmd(f"git checkout 7e02fae1ebf24cb151bf6cbe787d06734aa65862") - run_cmd(f"git checkout 6aea5ea18ed0b9a1e03fa5d268d6bc3c616672a9") - run_cmd(f"git checkout f0f9b25717e59116473fb42bd7f9252cfc32b398") - run_cmd(f"git checkout b394de424088a81fc081224bc27338a8651ad3b2") - run_cmd(f"git checkout f1999406a88b80c965d2082340f5ea2bfa9ab67a") - run_cmd(f"git checkout d98a0fa8dc715308dfc73eac5c553b69c6ee072b") - run_cmd(f"git checkout d73267a415fb0eba98477afa43ef71ffd82a7157") - run_cmd(f"git checkout 1a03d01356ae79179e1fb8d8915dc9cc79925742") - run_cmd(f"git checkout 81497bb3115e92c754300c9b3992df428886a3e9") - run_cmd(f"git checkout c5af1f8edcf79cb70f065c0110e279e78e48caf9") - run_cmd(f"git checkout cdb3c90109387fa4dfa92f53c3864c71170ffc77") - - # Edit the file here, before copying - #edit_file(f"{temp_repo_path}/infer-web.py") - - # Copy all files from the cloned repository to the existing path - copy_all_files_in_directory(temp_repo_path, repo_path) - print(f"Copying all {new_folder_name} files from GitHub.") - - # Change working directory back to /content/ - os.chdir('/content/') - print("Changed path back to /content/") - - # Remove the temporary cloned repository - shutil.rmtree(temp_repo_path) - - # Call the function - clone_and_copy_repo(repo_path) - - # Download the credentials file for RVC archive sheet - os.makedirs('/content/Applio-RVC-Fork/stats/', exist_ok=True) - run_cmd("wget -q https://cdn.discordapp.com/attachments/945486970883285045/1114717554481569802/peppy-generator-388800-07722f17a188.json -O /content/Applio-RVC-Fork/stats/peppy-generator-388800-07722f17a188.json") - - # Forcefully delete any existing torchcrepe dependencies downloaded from an earlier run just in case - shutil.rmtree('/content/Applio-RVC-Fork/torchcrepe', ignore_errors=True) - shutil.rmtree('/content/torchcrepe', ignore_errors=True) - - # Download the torchcrepe folder from the maxrmorrison/torchcrepe repository - run_cmd("git clone https://github.com/maxrmorrison/torchcrepe.git") - shutil.move('/content/torchcrepe/torchcrepe', '/content/Applio-RVC-Fork/') - shutil.rmtree('/content/torchcrepe', ignore_errors=True) # Delete the torchcrepe repository folder - - # Change the current directory to /content/Applio-RVC-Fork - os.chdir('/content/Applio-RVC-Fork') - os.makedirs('pretrained', exist_ok=True) - os.makedirs('uvr5_weights', exist_ok=True) - -def download_file(url, filepath): - response = requests.get(url, stream=True) - response.raise_for_status() - - with open(filepath, "wb") as file: - for chunk in response.iter_content(chunk_size=8192): - if chunk: - file.write(chunk) - -def download_pretrained_models(): - pretrained_models = { - "pretrained": [ - "D40k.pth", - "G40k.pth", - "f0D40k.pth", - "f0G40k.pth" - ], - "pretrained_v2": [ - "D40k.pth", - "G40k.pth", - "f0D40k.pth", - "f0G40k.pth", - "f0G48k.pth", - "f0D48k.pth" - ], - "uvr5_weights": [ - "HP2-人声vocals+非人声instrumentals.pth", - "HP5-主旋律人声vocals+其他instrumentals.pth", - "VR-DeEchoNormal.pth", - "VR-DeEchoDeReverb.pth", - "VR-DeEchoAggressive.pth", - "HP5_only_main_vocal.pth", - "HP3_all_vocals.pth", - "HP2_all_vocals.pth" - ] - } - part2 = "I" - base_url = "https://huggingface.co/lj1995/VoiceConversionWebU" + part2 + "/resolve/main/" - base_path = "/content/Applio-RVC-Fork/" - base_pathm = base_path - - # Calculate total number of files to download - total_files = sum(len(files) for files in pretrained_models.values()) + 1 # +1 
for hubert_base.pt - - with tqdm(total=total_files, desc="Downloading files") as pbar: - for folder, models in pretrained_models.items(): - folder_path = os.path.join(base_path, folder) - os.makedirs(folder_path, exist_ok=True) - for model in models: - url = base_url + folder + "/" + model - filepath = os.path.join(folder_path, model) - download_file(url, filepath) - pbar.update() - - # Download hubert_base.pt to the base path - hubert_url = base_url + "hubert_base.pt" - hubert_filepath = os.path.join(base_pathm, "hubert_base.pt") - download_file(hubert_url, hubert_filepath) - pbar.update() -def clone_repository(run_download): - with ThreadPoolExecutor(max_workers=2) as executor: - executor.submit(run_script) - if run_download: - executor.submit(download_pretrained_models) diff --git a/spaces/KaygNas/cut-it/index.html b/spaces/KaygNas/cut-it/index.html deleted file mode 100644 index c9af1b7f49e90dac4361bfcb65cca5271283db3f..0000000000000000000000000000000000000000 --- a/spaces/KaygNas/cut-it/index.html +++ /dev/null @@ -1,31 +0,0 @@ - - - - - - - - - Cut It - - - - - - - - - \ No newline at end of file diff --git a/spaces/Kevin676/Clone-Your-Voice/utils/default_models.py b/spaces/Kevin676/Clone-Your-Voice/utils/default_models.py deleted file mode 100644 index a0fb9276e44c0bfd9cdb779c9f93599bab41f725..0000000000000000000000000000000000000000 --- a/spaces/Kevin676/Clone-Your-Voice/utils/default_models.py +++ /dev/null @@ -1,56 +0,0 @@ -import urllib.request -from pathlib import Path -from threading import Thread -from urllib.error import HTTPError - -from tqdm import tqdm - - -default_models = { - "encoder": ("https://drive.google.com/uc?export=download&id=1q8mEGwCkFy23KZsinbuvdKAQLqNKbYf1", 17090379), - "synthesizer": ("https://drive.google.com/u/0/uc?id=1EqFMIbvxffxtjiVrtykroF6_mUh-5Z3s&export=download&confirm=t", 370554559), - "vocoder": ("https://drive.google.com/uc?export=download&id=1cf2NO6FtI0jDuy8AV3Xgn6leO6dHjIgu", 53845290), -} - - -class DownloadProgressBar(tqdm): - def update_to(self, b=1, bsize=1, tsize=None): - if tsize is not None: - self.total = tsize - self.update(b * bsize - self.n) - - -def download(url: str, target: Path, bar_pos=0): - # Ensure the directory exists - target.parent.mkdir(exist_ok=True, parents=True) - - desc = f"Downloading {target.name}" - with DownloadProgressBar(unit="B", unit_scale=True, miniters=1, desc=desc, position=bar_pos, leave=False) as t: - try: - urllib.request.urlretrieve(url, filename=target, reporthook=t.update_to) - except HTTPError: - return - - -def ensure_default_models(models_dir: Path): - # Define download tasks - jobs = [] - for model_name, (url, size) in default_models.items(): - target_path = models_dir / "default" / f"{model_name}.pt" - if target_path.exists(): - if target_path.stat().st_size != size: - print(f"File {target_path} is not of expected size, redownloading...") - else: - continue - - thread = Thread(target=download, args=(url, target_path, len(jobs))) - thread.start() - jobs.append((thread, target_path, size)) - - # Run and join threads - for thread, target_path, size in jobs: - thread.join() - - assert target_path.exists() and target_path.stat().st_size == size, \ - f"Download for {target_path.name} failed. 
You may download models manually instead.\n" \ - f"https://drive.google.com/drive/folders/1fU6umc5uQAVR2udZdHX-lDgXYzTyqG_j" diff --git a/spaces/Kevin676/Real-Time-Voice-Cloning/vocoder/train.py b/spaces/Kevin676/Real-Time-Voice-Cloning/vocoder/train.py deleted file mode 100644 index 6dc2f892e1fc134b311e2c9ee42250a2d3713547..0000000000000000000000000000000000000000 --- a/spaces/Kevin676/Real-Time-Voice-Cloning/vocoder/train.py +++ /dev/null @@ -1,127 +0,0 @@ -from vocoder.models.fatchord_version import WaveRNN -from vocoder.vocoder_dataset import VocoderDataset, collate_vocoder -from vocoder.distribution import discretized_mix_logistic_loss -from vocoder.display import stream, simple_table -from vocoder.gen_wavernn import gen_testset -from torch.utils.data import DataLoader -from pathlib import Path -from torch import optim -import torch.nn.functional as F -import vocoder.hparams as hp -import numpy as np -import time -import torch -import platform - -def train(run_id: str, syn_dir: Path, voc_dir: Path, models_dir: Path, ground_truth: bool, - save_every: int, backup_every: int, force_restart: bool): - # Check to make sure the hop length is correctly factorised - assert np.cumprod(hp.voc_upsample_factors)[-1] == hp.hop_length - - # Instantiate the model - print("Initializing the model...") - model = WaveRNN( - rnn_dims=hp.voc_rnn_dims, - fc_dims=hp.voc_fc_dims, - bits=hp.bits, - pad=hp.voc_pad, - upsample_factors=hp.voc_upsample_factors, - feat_dims=hp.num_mels, - compute_dims=hp.voc_compute_dims, - res_out_dims=hp.voc_res_out_dims, - res_blocks=hp.voc_res_blocks, - hop_length=hp.hop_length, - sample_rate=hp.sample_rate, - mode=hp.voc_mode - ) - - if torch.cuda.is_available(): - model = model.cuda() - device = torch.device('cuda') - else: - device = torch.device('cpu') - - # Initialize the optimizer - optimizer = optim.Adam(model.parameters()) - for p in optimizer.param_groups: - p["lr"] = hp.voc_lr - loss_func = F.cross_entropy if model.mode == "RAW" else discretized_mix_logistic_loss - - # Load the weights - model_dir = models_dir.joinpath(run_id) - model_dir.mkdir(exist_ok=True) - weights_fpath = model_dir.joinpath(run_id + ".pt") - if force_restart or not weights_fpath.exists(): - print("\nStarting the training of WaveRNN from scratch\n") - model.save(weights_fpath, optimizer) - else: - print("\nLoading weights at %s" % weights_fpath) - model.load(weights_fpath, optimizer) - print("WaveRNN weights loaded from step %d" % model.step) - - # Initialize the dataset - metadata_fpath = syn_dir.joinpath("train.txt") if ground_truth else \ - voc_dir.joinpath("synthesized.txt") - mel_dir = syn_dir.joinpath("mels") if ground_truth else voc_dir.joinpath("mels_gta") - wav_dir = syn_dir.joinpath("audio") - dataset = VocoderDataset(metadata_fpath, mel_dir, wav_dir) - test_loader = DataLoader(dataset, - batch_size=1, - shuffle=True, - pin_memory=True) - - # Begin the training - simple_table([('Batch size', hp.voc_batch_size), - ('LR', hp.voc_lr), - ('Sequence Len', hp.voc_seq_len)]) - - for epoch in range(1, 350): - data_loader = DataLoader(dataset, - collate_fn=collate_vocoder, - batch_size=hp.voc_batch_size, - num_workers=2 if platform.system() != "Windows" else 0, - shuffle=True, - pin_memory=True) - start = time.time() - running_loss = 0. 
- - for i, (x, y, m) in enumerate(data_loader, 1): - if torch.cuda.is_available(): - x, m, y = x.cuda(), m.cuda(), y.cuda() - - # Forward pass - y_hat = model(x, m) - if model.mode == 'RAW': - y_hat = y_hat.transpose(1, 2).unsqueeze(-1) - elif model.mode == 'MOL': - y = y.float() - y = y.unsqueeze(-1) - - # Backward pass - loss = loss_func(y_hat, y) - optimizer.zero_grad() - loss.backward() - optimizer.step() - - running_loss += loss.item() - speed = i / (time.time() - start) - avg_loss = running_loss / i - - step = model.get_step() - k = step // 1000 - - if backup_every != 0 and step % backup_every == 0 : - model.checkpoint(model_dir, optimizer) - - if save_every != 0 and step % save_every == 0 : - model.save(weights_fpath, optimizer) - - msg = f"| Epoch: {epoch} ({i}/{len(data_loader)}) | " \ - f"Loss: {avg_loss:.4f} | {speed:.1f} " \ - f"steps/s | Step: {k}k | " - stream(msg) - - - gen_testset(model, test_loader, hp.voc_gen_at_checkpoint, hp.voc_gen_batched, - hp.voc_target, hp.voc_overlap, model_dir) - print("") diff --git a/spaces/KyanChen/RSPrompter/mmdet/structures/mask/utils.py b/spaces/KyanChen/RSPrompter/mmdet/structures/mask/utils.py deleted file mode 100644 index 6bd445e4fce1a312949f222d54d230a1a622d726..0000000000000000000000000000000000000000 --- a/spaces/KyanChen/RSPrompter/mmdet/structures/mask/utils.py +++ /dev/null @@ -1,77 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import numpy as np -import pycocotools.mask as mask_util -import torch -from mmengine.utils import slice_list - - -def split_combined_polys(polys, poly_lens, polys_per_mask): - """Split the combined 1-D polys into masks. - - A mask is represented as a list of polys, and a poly is represented as - a 1-D array. In dataset, all masks are concatenated into a single 1-D - tensor. Here we need to split the tensor into original representations. - - Args: - polys (list): a list (length = image num) of 1-D tensors - poly_lens (list): a list (length = image num) of poly length - polys_per_mask (list): a list (length = image num) of poly number - of each mask - - Returns: - list: a list (length = image num) of list (length = mask num) of \ - list (length = poly num) of numpy array. - """ - mask_polys_list = [] - for img_id in range(len(polys)): - polys_single = polys[img_id] - polys_lens_single = poly_lens[img_id].tolist() - polys_per_mask_single = polys_per_mask[img_id].tolist() - - split_polys = slice_list(polys_single, polys_lens_single) - mask_polys = slice_list(split_polys, polys_per_mask_single) - mask_polys_list.append(mask_polys) - return mask_polys_list - - -# TODO: move this function to more proper place -def encode_mask_results(mask_results): - """Encode bitmap mask to RLE code. - - Args: - mask_results (list): bitmap mask results. - - Returns: - list | tuple: RLE encoded mask. - """ - encoded_mask_results = [] - for mask in mask_results: - encoded_mask_results.append( - mask_util.encode( - np.array(mask[:, :, np.newaxis], order='F', - dtype='uint8'))[0]) # encoded with RLE - return encoded_mask_results - - -def mask2bbox(masks): - """Obtain tight bounding boxes of binary masks. - - Args: - masks (Tensor): Binary mask of shape (n, h, w). - - Returns: - Tensor: Bboxe with shape (n, 4) of \ - positive region in binary mask. 
- """ - N = masks.shape[0] - bboxes = masks.new_zeros((N, 4), dtype=torch.float32) - x_any = torch.any(masks, dim=1) - y_any = torch.any(masks, dim=2) - for i in range(N): - x = torch.where(x_any[i, :])[0] - y = torch.where(y_any[i, :])[0] - if len(x) > 0 and len(y) > 0: - bboxes[i, :] = bboxes.new_tensor( - [x[0], y[0], x[-1] + 1, y[-1] + 1]) - - return bboxes diff --git a/spaces/LEBEI/00002/wbc/guided_filter.py b/spaces/LEBEI/00002/wbc/guided_filter.py deleted file mode 100644 index fd019d145efc7f308cd96de90f4e7b648f6820b4..0000000000000000000000000000000000000000 --- a/spaces/LEBEI/00002/wbc/guided_filter.py +++ /dev/null @@ -1,87 +0,0 @@ -import tensorflow as tf -import numpy as np - - - - -def tf_box_filter(x, r): - k_size = int(2*r+1) - ch = x.get_shape().as_list()[-1] - weight = 1/(k_size**2) - box_kernel = weight*np.ones((k_size, k_size, ch, 1)) - box_kernel = np.array(box_kernel).astype(np.float32) - output = tf.nn.depthwise_conv2d(x, box_kernel, [1, 1, 1, 1], 'SAME') - return output - - - -def guided_filter(x, y, r, eps=1e-2): - - x_shape = tf.shape(x) - #y_shape = tf.shape(y) - - N = tf_box_filter(tf.ones((1, x_shape[1], x_shape[2], 1), dtype=x.dtype), r) - - mean_x = tf_box_filter(x, r) / N - mean_y = tf_box_filter(y, r) / N - cov_xy = tf_box_filter(x * y, r) / N - mean_x * mean_y - var_x = tf_box_filter(x * x, r) / N - mean_x * mean_x - - A = cov_xy / (var_x + eps) - b = mean_y - A * mean_x - - mean_A = tf_box_filter(A, r) / N - mean_b = tf_box_filter(b, r) / N - - output = mean_A * x + mean_b - - return output - - - -def fast_guided_filter(lr_x, lr_y, hr_x, r=1, eps=1e-8): - - #assert lr_x.shape.ndims == 4 and lr_y.shape.ndims == 4 and hr_x.shape.ndims == 4 - - lr_x_shape = tf.shape(lr_x) - #lr_y_shape = tf.shape(lr_y) - hr_x_shape = tf.shape(hr_x) - - N = tf_box_filter(tf.ones((1, lr_x_shape[1], lr_x_shape[2], 1), dtype=lr_x.dtype), r) - - mean_x = tf_box_filter(lr_x, r) / N - mean_y = tf_box_filter(lr_y, r) / N - cov_xy = tf_box_filter(lr_x * lr_y, r) / N - mean_x * mean_y - var_x = tf_box_filter(lr_x * lr_x, r) / N - mean_x * mean_x - - A = cov_xy / (var_x + eps) - b = mean_y - A * mean_x - - mean_A = tf.image.resize_images(A, hr_x_shape[1: 3]) - mean_b = tf.image.resize_images(b, hr_x_shape[1: 3]) - - output = mean_A * hr_x + mean_b - - return output - - -if __name__ == '__main__': - import cv2 - from tqdm import tqdm - - input_photo = tf.placeholder(tf.float32, [1, None, None, 3]) - #input_superpixel = tf.placeholder(tf.float32, [16, 256, 256, 3]) - output = guided_filter(input_photo, input_photo, 5, eps=1) - image = cv2.imread('output_figure1/cartoon2.jpg') - image = image/127.5 - 1 - image = np.expand_dims(image, axis=0) - - config = tf.ConfigProto() - config.gpu_options.allow_growth = True - sess = tf.Session(config=config) - sess.run(tf.global_variables_initializer()) - - out = sess.run(output, feed_dict={input_photo: image}) - out = (np.squeeze(out)+1)*127.5 - out = np.clip(out, 0, 255).astype(np.uint8) - cv2.imwrite('output_figure1/cartoon2_filter.jpg', out) diff --git a/spaces/Laihiujin/OneFormer/oneformer/__init__.py b/spaces/Laihiujin/OneFormer/oneformer/__init__.py deleted file mode 100644 index 39ebcd384f616ae2ba170407cee3267d461a5914..0000000000000000000000000000000000000000 --- a/spaces/Laihiujin/OneFormer/oneformer/__init__.py +++ /dev/null @@ -1,9 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -from . import data # register all new datasets -from . 
import modeling - -# config -from .config import * - -# models -from .oneformer_model import OneFormer \ No newline at end of file diff --git a/spaces/LaynzKunz/RCVAICOVER/src/webui.py b/spaces/LaynzKunz/RCVAICOVER/src/webui.py deleted file mode 100644 index 106997faf4bd830ee1dd74400c42c0371a4f9c27..0000000000000000000000000000000000000000 --- a/spaces/LaynzKunz/RCVAICOVER/src/webui.py +++ /dev/null @@ -1,322 +0,0 @@ -import json -import os -import shutil -import urllib.request -import zipfile -from argparse import ArgumentParser - -import gradio as gr - -from main import song_cover_pipeline - -BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__))) - -mdxnet_models_dir = os.path.join(BASE_DIR, 'mdxnet_models') -rvc_models_dir = os.path.join(BASE_DIR, 'rvc_models') -output_dir = os.path.join(BASE_DIR, 'song_output') - - -def get_current_models(models_dir): - models_list = os.listdir(models_dir) - items_to_remove = ['hubert_base.pt', 'MODELS.txt', 'public_models.json', 'rmvpe.pt'] - return [item for item in models_list if item not in items_to_remove] - - -def update_models_list(): - models_l = get_current_models(rvc_models_dir) - return gr.Dropdown.update(choices=models_l) - - -def load_public_models(): - models_table = [] - for model in public_models['voice_models']: - if not model['name'] in voice_models: - model = [model['name'], model['description'], model['credit'], model['url'], ', '.join(model['tags'])] - models_table.append(model) - - tags = list(public_models['tags'].keys()) - return gr.DataFrame.update(value=models_table), gr.CheckboxGroup.update(choices=tags) - - -def extract_zip(extraction_folder, zip_name): - os.makedirs(extraction_folder) - with zipfile.ZipFile(zip_name, 'r') as zip_ref: - zip_ref.extractall(extraction_folder) - os.remove(zip_name) - - index_filepath, model_filepath = None, None - for root, dirs, files in os.walk(extraction_folder): - for name in files: - if name.endswith('.index') and os.stat(os.path.join(root, name)).st_size > 1024 * 100: - index_filepath = os.path.join(root, name) - - if name.endswith('.pth') and os.stat(os.path.join(root, name)).st_size > 1024 * 1024 * 40: - model_filepath = os.path.join(root, name) - - if not model_filepath: - raise gr.Error(f'No .pth model file was found in the extracted zip. Please check {extraction_folder}.') - - # move model and index file to extraction folder - os.rename(model_filepath, os.path.join(extraction_folder, os.path.basename(model_filepath))) - if index_filepath: - os.rename(index_filepath, os.path.join(extraction_folder, os.path.basename(index_filepath))) - - # remove any unnecessary nested folders - for filepath in os.listdir(extraction_folder): - if os.path.isdir(os.path.join(extraction_folder, filepath)): - shutil.rmtree(os.path.join(extraction_folder, filepath)) - - -def download_online_model(url, dir_name, progress=gr.Progress()): - try: - progress(0, desc=f'[~] Downloading voice model with name {dir_name}...') - zip_name = url.split('/')[-1] - extraction_folder = os.path.join(rvc_models_dir, dir_name) - if os.path.exists(extraction_folder): - raise gr.Error(f'Voice model directory {dir_name} already exists! Choose a different name for your voice model.') - - if 'pixeldrain.com' in url: - url = f'https://pixeldrain.com/api/file/{zip_name}' - - urllib.request.urlretrieve(url, zip_name) - - progress(0.5, desc='[~] Extracting zip...') - extract_zip(extraction_folder, zip_name) - return f'[+] {dir_name} Model successfully downloaded!' 
- - except Exception as e: - raise gr.Error(str(e)) - - -def upload_local_model(zip_path, dir_name, progress=gr.Progress()): - try: - extraction_folder = os.path.join(rvc_models_dir, dir_name) - if os.path.exists(extraction_folder): - raise gr.Error(f'Voice model directory {dir_name} already exists! Choose a different name for your voice model.') - - zip_name = zip_path.name - progress(0.5, desc='[~] Extracting zip...') - extract_zip(extraction_folder, zip_name) - return f'[+] {dir_name} Model successfully uploaded!' - - except Exception as e: - raise gr.Error(str(e)) - - -def filter_models(tags, query): - models_table = [] - - # no filter - if len(tags) == 0 and len(query) == 0: - for model in public_models['voice_models']: - models_table.append([model['name'], model['description'], model['credit'], model['url'], model['tags']]) - - # filter based on tags and query - elif len(tags) > 0 and len(query) > 0: - for model in public_models['voice_models']: - if all(tag in model['tags'] for tag in tags): - model_attributes = f"{model['name']} {model['description']} {model['credit']} {' '.join(model['tags'])}".lower() - if query.lower() in model_attributes: - models_table.append([model['name'], model['description'], model['credit'], model['url'], model['tags']]) - - # filter based on only tags - elif len(tags) > 0: - for model in public_models['voice_models']: - if all(tag in model['tags'] for tag in tags): - models_table.append([model['name'], model['description'], model['credit'], model['url'], model['tags']]) - - # filter based on only query - else: - for model in public_models['voice_models']: - model_attributes = f"{model['name']} {model['description']} {model['credit']} {' '.join(model['tags'])}".lower() - if query.lower() in model_attributes: - models_table.append([model['name'], model['description'], model['credit'], model['url'], model['tags']]) - - return gr.DataFrame.update(value=models_table) - - -def pub_dl_autofill(pub_models, event: gr.SelectData): - return gr.Text.update(value=pub_models.loc[event.index[0], 'URL']), gr.Text.update(value=pub_models.loc[event.index[0], 'Model Name']) - - -def swap_visibility(): - return gr.update(visible=True), gr.update(visible=False), gr.update(value=''), gr.update(value=None) - - -def process_file_upload(file): - return file.name, gr.update(value=file.name) - - -def show_hop_slider(pitch_detection_algo): - if pitch_detection_algo == 'mangio-crepe': - return gr.update(visible=True) - else: - return gr.update(visible=False) - - -if __name__ == '__main__': - parser = ArgumentParser(description='Generate a AI cover song in the song_output/id directory.', add_help=True) - parser.add_argument("--share", action="store_true", dest="share_enabled", default=False, help="Enable sharing") - parser.add_argument("--listen", action="store_true", default=False, help="Make the WebUI reachable from your local network.") - parser.add_argument('--listen-host', type=str, help='The hostname that the server will use.') - parser.add_argument('--listen-port', type=int, help='The listening port that the server will use.') - args = parser.parse_args() - - voice_models = get_current_models(rvc_models_dir) - with open(os.path.join(rvc_models_dir, 'public_models.json'), encoding='utf8') as infile: - public_models = json.load(infile) - - with gr.Blocks(title='AICoverGenWebUI') as app: - - gr.Label('AICoverGen WebUI created with ❤️', show_label=False) - - # main tab - with gr.Tab("Generate"): - - with gr.Accordion('Main Options'): - with gr.Row(): - with gr.Column(): - 
rvc_model = gr.Dropdown(voice_models, label='Voice Models', info='Models folder "AICoverGen --> rvc_models". After new models are added into this folder, click the refresh button')
-                        ref_btn = gr.Button('Refresh Models 🔁', variant='primary')
-
-                    with gr.Column() as yt_link_col:
-                        song_input = gr.Text(label='Song input', info='Link to a song on YouTube or full path to a local file. For file upload, click the button below.')
-                        show_file_upload_button = gr.Button('Upload file instead')
-
-                    with gr.Column(visible=False) as file_upload_col:
-                        local_file = gr.File(label='Audio file')
-                        song_input_file = gr.UploadButton('Upload 📂', file_types=['audio'], variant='primary')
-                        show_yt_link_button = gr.Button('Paste YouTube link/Path to local file instead')
-                        song_input_file.upload(process_file_upload, inputs=[song_input_file], outputs=[local_file, song_input])
-
-                    with gr.Column():
-                        pitch = gr.Slider(-3, 3, value=0, step=1, label='Pitch Change (Vocals ONLY)', info='Generally, use 1 for male to female conversions and -1 for vice-versa. (Octaves)')
-                        pitch_all = gr.Slider(-12, 12, value=0, step=1, label='Overall Pitch Change', info='Changes pitch/key of vocals and instrumentals together. Altering this slightly reduces sound quality. (Semitones)')
-                    show_file_upload_button.click(swap_visibility, outputs=[file_upload_col, yt_link_col, song_input, local_file])
-                    show_yt_link_button.click(swap_visibility, outputs=[yt_link_col, file_upload_col, song_input, local_file])
-
-                with gr.Accordion('Voice conversion options', open=False):
-                    with gr.Row():
-                        index_rate = gr.Slider(0, 1, value=0.5, label='Index Rate', info="Controls how much of the AI voice's accent to keep in the vocals")
-                        filter_radius = gr.Slider(0, 7, value=3, step=1, label='Filter radius', info='If >=3: apply median filtering to the harvested pitch results. Can reduce breathiness')
-                        rms_mix_rate = gr.Slider(0, 1, value=0.25, label='RMS mix rate', info="Control how much to mimic the original vocal's loudness (0) or a fixed loudness (1)")
-                        protect = gr.Slider(0, 0.5, value=0.33, label='Protect rate', info='Protect voiceless consonants and breath sounds. Set to 0.5 to disable.')
-                        with gr.Column():
-                            f0_method = gr.Dropdown(['rmvpe', 'mangio-crepe'], value='rmvpe', label='Pitch detection algorithm', info='Best option is rmvpe (clarity in vocals), then mangio-crepe (smoother vocals)')
-                            crepe_hop_length = gr.Slider(32, 320, value=128, step=1, visible=False, label='Crepe hop length', info='Lower values lead to longer conversions and a higher risk of voice cracks, but better pitch accuracy.')
-                            f0_method.change(show_hop_slider, inputs=f0_method, outputs=crepe_hop_length)
-                    keep_files = gr.Checkbox(label='Keep intermediate files', info='Keep all audio files generated in the song_output/id directory, e.g. Isolated Vocals/Instrumentals.
Leave unchecked to save space') - - with gr.Accordion('Audio mixing options', open=False): - gr.Markdown('### Volume Change (decibels)') - with gr.Row(): - main_gain = gr.Slider(-20, 20, value=0, step=1, label='Main Vocals') - backup_gain = gr.Slider(-20, 20, value=0, step=1, label='Backup Vocals') - inst_gain = gr.Slider(-20, 20, value=0, step=1, label='Music') - - gr.Markdown('### Reverb Control on AI Vocals') - with gr.Row(): - reverb_rm_size = gr.Slider(0, 1, value=0.15, label='Room size', info='The larger the room, the longer the reverb time') - reverb_wet = gr.Slider(0, 1, value=0.2, label='Wetness level', info='Level of AI vocals with reverb') - reverb_dry = gr.Slider(0, 1, value=0.8, label='Dryness level', info='Level of AI vocals without reverb') - reverb_damping = gr.Slider(0, 1, value=0.7, label='Damping level', info='Absorption of high frequencies in the reverb') - - gr.Markdown('### Audio Output Format') - output_format = gr.Dropdown(['mp3', 'wav'], value='mp3', label='Output file type', info='mp3: small file size, decent quality. wav: Large file size, best quality') - - with gr.Row(): - clear_btn = gr.ClearButton(value='Clear', components=[song_input, rvc_model, keep_files, local_file]) - generate_btn = gr.Button("Generate", variant='primary') - ai_cover = gr.Audio(label='AI Cover', show_share_button=False) - - ref_btn.click(update_models_list, None, outputs=rvc_model) - is_webui = gr.Number(value=1, visible=False) - generate_btn.click(song_cover_pipeline, - inputs=[song_input, rvc_model, pitch, keep_files, is_webui, main_gain, backup_gain, - inst_gain, index_rate, filter_radius, rms_mix_rate, f0_method, crepe_hop_length, - protect, pitch_all, reverb_rm_size, reverb_wet, reverb_dry, reverb_damping, - output_format], - outputs=[ai_cover]) - clear_btn.click(lambda: [0, 0, 0, 0, 0.5, 3, 0.25, 0.33, 'rmvpe', 128, 0, 0.15, 0.2, 0.8, 0.7, 'mp3', None], - outputs=[pitch, main_gain, backup_gain, inst_gain, index_rate, filter_radius, rms_mix_rate, - protect, f0_method, crepe_hop_length, pitch_all, reverb_rm_size, reverb_wet, - reverb_dry, reverb_damping, output_format, ai_cover]) - - # Download tab - with gr.Tab('Download model'): - - with gr.Tab('From HuggingFace/Pixeldrain URL'): - with gr.Row(): - model_zip_link = gr.Text(label='Download link to model', info='Should be a zip file containing a .pth model file and an optional .index file.') - model_name = gr.Text(label='Name your model', info='Give your new model a unique name from your other voice models.') - - with gr.Row(): - download_btn = gr.Button('Download 🌐', variant='primary', scale=19) - dl_output_message = gr.Text(label='Output Message', interactive=False, scale=20) - - download_btn.click(download_online_model, inputs=[model_zip_link, model_name], outputs=dl_output_message) - - gr.Markdown('## Input Examples') - gr.Examples( - [ - ['https://huggingface.co/phant0m4r/LiSA/resolve/main/LiSA.zip', 'Lisa'], - ['https://pixeldrain.com/u/3tJmABXA', 'Gura'], - ['https://huggingface.co/Kit-Lemonfoot/kitlemonfoot_rvc_models/resolve/main/AZKi%20(Hybrid).zip', 'Azki'] - ], - [model_zip_link, model_name], - [], - download_online_model, - ) - - with gr.Tab('From Public Index'): - - gr.Markdown('## How to use') - gr.Markdown('- Click Initialize public models table') - gr.Markdown('- Filter models using tags or search bar') - gr.Markdown('- Select a row to autofill the download link and model name') - gr.Markdown('- Click Download') - - with gr.Row(): - pub_zip_link = gr.Text(label='Download link to model') - pub_model_name = 
gr.Text(label='Model name') - - with gr.Row(): - download_pub_btn = gr.Button('Download 🌐', variant='primary', scale=19) - pub_dl_output_message = gr.Text(label='Output Message', interactive=False, scale=20) - - filter_tags = gr.CheckboxGroup(value=[], label='Show voice models with tags', choices=[]) - search_query = gr.Text(label='Search') - load_public_models_button = gr.Button(value='Initialize public models table', variant='primary') - - public_models_table = gr.DataFrame(value=[], headers=['Model Name', 'Description', 'Credit', 'URL', 'Tags'], label='Available Public Models', interactive=False) - public_models_table.select(pub_dl_autofill, inputs=[public_models_table], outputs=[pub_zip_link, pub_model_name]) - load_public_models_button.click(load_public_models, outputs=[public_models_table, filter_tags]) - search_query.change(filter_models, inputs=[filter_tags, search_query], outputs=public_models_table) - filter_tags.change(filter_models, inputs=[filter_tags, search_query], outputs=public_models_table) - download_pub_btn.click(download_online_model, inputs=[pub_zip_link, pub_model_name], outputs=pub_dl_output_message) - - # Upload tab - with gr.Tab('Upload model'): - gr.Markdown('## Upload locally trained RVC v2 model and index file') - gr.Markdown('- Find model file (weights folder) and optional index file (logs/[name] folder)') - gr.Markdown('- Compress files into zip file') - gr.Markdown('- Upload zip file and give unique name for voice') - gr.Markdown('- Click Upload model') - - with gr.Row(): - with gr.Column(): - zip_file = gr.File(label='Zip file') - - local_model_name = gr.Text(label='Model name') - - with gr.Row(): - model_upload_button = gr.Button('Upload model', variant='primary', scale=19) - local_upload_output_message = gr.Text(label='Output Message', interactive=False, scale=20) - model_upload_button.click(upload_local_model, inputs=[zip_file, local_model_name], outputs=local_upload_output_message) - - app.launch( - share=args.share_enabled, - enable_queue=True, - server_name=None if not args.listen else (args.listen_host or '0.0.0.0'), - server_port=args.listen_port, - ) diff --git "a/spaces/Liu-LAB/GPT-academic/crazy_functions/\345\233\276\347\211\207\347\224\237\346\210\220.py" "b/spaces/Liu-LAB/GPT-academic/crazy_functions/\345\233\276\347\211\207\347\224\237\346\210\220.py" deleted file mode 100644 index a1cc2441e2d4ed4f461323bec7e617f936dec2c8..0000000000000000000000000000000000000000 --- "a/spaces/Liu-LAB/GPT-academic/crazy_functions/\345\233\276\347\211\207\347\224\237\346\210\220.py" +++ /dev/null @@ -1,69 +0,0 @@ -from toolbox import CatchException, update_ui, get_conf, select_api_key -from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive -import datetime - - -def gen_image(llm_kwargs, prompt, resolution="256x256"): - import requests, json, time, os - from request_llm.bridge_all import model_info - - proxies, = get_conf('proxies') - # Set up OpenAI API key and model - api_key = select_api_key(llm_kwargs['api_key'], llm_kwargs['llm_model']) - chat_endpoint = model_info[llm_kwargs['llm_model']]['endpoint'] - # 'https://api.openai.com/v1/chat/completions' - img_endpoint = chat_endpoint.replace('chat/completions','images/generations') - # # Generate the image - url = img_endpoint - headers = { - 'Authorization': f"Bearer {api_key}", - 'Content-Type': 'application/json' - } - data = { - 'prompt': prompt, - 'n': 1, - 'size': resolution, - 'response_format': 'url' - } - response = requests.post(url, headers=headers, json=data, proxies=proxies) - 
print(response.content)
-    try:
-        image_url = json.loads(response.content.decode('utf8'))['data'][0]['url']
-    except:
-        raise RuntimeError(response.content.decode())
-    # Save the generated image to a local file
-    r = requests.get(image_url, proxies=proxies)
-    file_path = 'gpt_log/image_gen/'
-    os.makedirs(file_path, exist_ok=True)
-    file_name = 'Image' + time.strftime("%Y-%m-%d-%H-%M-%S", time.localtime()) + '.png'
-    with open(file_path+file_name, 'wb+') as f: f.write(r.content)
-
-
-    return image_url, file_path+file_name
-
-
-
-@CatchException
-def 图片生成(prompt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port):
-    """
-    txt             Text entered by the user in the input box, e.g. a passage to be translated or a path containing files to be processed
-    llm_kwargs      GPT model parameters such as temperature and top_p; usually passed through unchanged
-    plugin_kwargs   Plugin parameters, currently unused
-    chatbot         Handle of the chat display box, used to show output to the user
-    history         Chat history, i.e. the preceding context
-    system_prompt   Silent system prompt passed to GPT
-    web_port        Port number the software is currently running on
-    """
-    history = []    # Clear the history to avoid overflowing the input
-    chatbot.append(("这是什么功能?", "[Local Message] 生成图像, 请先把模型切换至gpt-*或者api2d-*。如果中文效果不理想, 请尝试英文Prompt。正在处理中 ....."))
-    yield from update_ui(chatbot=chatbot, history=history) # Refresh the UI; requesting GPT takes a while, so update the interface promptly first
-    if ("advanced_arg" in plugin_kwargs) and (plugin_kwargs["advanced_arg"] == ""): plugin_kwargs.pop("advanced_arg")
-    resolution = plugin_kwargs.get("advanced_arg", '256x256')
-    image_url, image_path = gen_image(llm_kwargs, prompt, resolution)
-    chatbot.append([prompt,
-                    f'图像中转网址: 
<br/>`{image_url}`<br/>'+
-                    f'中转网址预览: <br/><div align="center"><img src="{image_url}"></div>'
-                    f'本地文件地址: <br/>`{image_path}`<br/>'+
-                    f'本地文件预览: <br/><div align="center"><img src="file={image_path}"></div>
    ' - ]) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 # 界面更新 diff --git a/spaces/LuxOAI/ChatGpt-Web/Dockerfile b/spaces/LuxOAI/ChatGpt-Web/Dockerfile deleted file mode 100644 index 7755b1a539ca8fd6207dea7258457fded41f6ff5..0000000000000000000000000000000000000000 --- a/spaces/LuxOAI/ChatGpt-Web/Dockerfile +++ /dev/null @@ -1,59 +0,0 @@ -FROM node:18-alpine AS base - -FROM base AS deps - -RUN apk add --no-cache libc6-compat - -WORKDIR /app - -COPY package.json yarn.lock ./ - -RUN yarn config set registry 'https://registry.npmmirror.com/' -RUN yarn install - -FROM base AS builder - -RUN apk update && apk add --no-cache git - -ENV OPENAI_API_KEY="" -ENV CODE="" - -WORKDIR /app -COPY --from=deps /app/node_modules ./node_modules -COPY . . - -RUN yarn build - -FROM base AS runner -WORKDIR /app - -RUN apk add proxychains-ng - -ENV PROXY_URL="" -ENV OPENAI_API_KEY="" -ENV CODE="" - -COPY --from=builder /app/public ./public -COPY --from=builder /app/.next/standalone ./ -COPY --from=builder /app/.next/static ./.next/static -COPY --from=builder /app/.next/server ./.next/server - -EXPOSE 3000 - -CMD if [ -n "$PROXY_URL" ]; then \ - protocol=$(echo $PROXY_URL | cut -d: -f1); \ - host=$(echo $PROXY_URL | cut -d/ -f3 | cut -d: -f1); \ - port=$(echo $PROXY_URL | cut -d: -f3); \ - conf=/etc/proxychains.conf; \ - echo "strict_chain" > $conf; \ - echo "proxy_dns" >> $conf; \ - echo "remote_dns_subnet 224" >> $conf; \ - echo "tcp_read_time_out 15000" >> $conf; \ - echo "tcp_connect_time_out 8000" >> $conf; \ - echo "[ProxyList]" >> $conf; \ - echo "$protocol $host $port" >> $conf; \ - cat /etc/proxychains.conf; \ - proxychains -f $conf node server.js; \ - else \ - node server.js; \ - fi diff --git a/spaces/LuxOAI/ChatGpt-Web/app/components/error.tsx b/spaces/LuxOAI/ChatGpt-Web/app/components/error.tsx deleted file mode 100644 index 0e01c41708a358b2d4848592bd48b61c94cb79ec..0000000000000000000000000000000000000000 --- a/spaces/LuxOAI/ChatGpt-Web/app/components/error.tsx +++ /dev/null @@ -1,73 +0,0 @@ -import React from "react"; -import { IconButton } from "./button"; -import GithubIcon from "../icons/github.svg"; -import ResetIcon from "../icons/reload.svg"; -import { ISSUE_URL } from "../constant"; -import Locale from "../locales"; -import { downloadAs } from "../utils"; - -interface IErrorBoundaryState { - hasError: boolean; - error: Error | null; - info: React.ErrorInfo | null; -} - -export class ErrorBoundary extends React.Component { - constructor(props: any) { - super(props); - this.state = { hasError: false, error: null, info: null }; - } - - componentDidCatch(error: Error, info: React.ErrorInfo) { - // Update state with error details - this.setState({ hasError: true, error, info }); - } - - clearAndSaveData() { - try { - downloadAs( - JSON.stringify(localStorage), - "chatgpt-next-web-snapshot.json", - ); - } finally { - localStorage.clear(); - location.reload(); - } - } - - render() { - if (this.state.hasError) { - // Render error message - return ( -
    -

    Oops, something went wrong!

    -
    -            {this.state.error?.toString()}
    -            {this.state.info?.componentStack}
    -          
    - -
    - - } - bordered - /> - - } - text="Clear All Data" - onClick={() => - confirm(Locale.Settings.Actions.ConfirmClearAll) && - this.clearAndSaveData() - } - bordered - /> -
    -
    - ); - } - // if no error occurred, render children - return this.props.children; - } -} diff --git a/spaces/MS19/TestSpaceFastAI/app.py b/spaces/MS19/TestSpaceFastAI/app.py deleted file mode 100644 index 7c536c12f93d97e8049abaaa2498f433dada3cf8..0000000000000000000000000000000000000000 --- a/spaces/MS19/TestSpaceFastAI/app.py +++ /dev/null @@ -1,23 +0,0 @@ -import gradio as gr -from fastai.vision.all import * - - -def is_cat(x): return x[0].isupper() - - -learn = load_learner('model_doggo_catto.pkl') - -categories = ('Dog', 'Cat') - - -def classify_image(img): - pred, ids, probs = learn.predict(img) - return dict(zip(categories, map(float, probs))) - - -image = gr.inputs.Image(shape=(192, 192)) -label = gr.outputs.Label() -examples = ['dog1.jpg', 'cat1.jpg', 'dunno1.jpg'] - -iface = gr.Interface(fn=classify_image, inputs=image, outputs=label, examples=examples) -iface.launch(inline=False) diff --git a/spaces/MrSinan/Reconstruction/mask_the_face.py b/spaces/MrSinan/Reconstruction/mask_the_face.py deleted file mode 100644 index c40c2439f6af704bbef3f6a4013f07a27b3b793f..0000000000000000000000000000000000000000 --- a/spaces/MrSinan/Reconstruction/mask_the_face.py +++ /dev/null @@ -1,132 +0,0 @@ -# Author: aqeelanwar -# Created: 27 April,2020, 10:22 PM -# Email: aqeel.anwar@gatech.edu - -import argparse -import dlib -from aux_functions import * -import numpy as np - -def maskThisImages(myImg): - - # Command-line input setup - parser = argparse.ArgumentParser( - description="MaskTheFace - Python code to mask faces dataset" - ) - parser.add_argument( - "--path", - type=str, - default="", - help="Path to either the folder containing images or the image itself", - ) - parser.add_argument( - "--mask_type", - type=str, - default="surgical", - choices=["surgical", "N95", "KN95", "cloth", "gas", "inpaint", "random", "all"], - help="Type of the mask to be applied. Available options: all, surgical_blue, surgical_green, N95, cloth", - ) - - parser.add_argument( - "--pattern", - type=str, - default="", - help="Type of the pattern. Available options in masks/textures", - ) - - parser.add_argument( - "--pattern_weight", - type=float, - default=0.5, - help="Weight of the pattern. Must be between 0 and 1", - ) - - parser.add_argument( - "--color", - type=str, - default="cyan", - help="Hex color value that need to be overlayed to the mask", - ) - - parser.add_argument( - "--color_weight", - type=float, - default=0.5, - help="Weight of the color intensity. 
Must be between 0 and 1", - ) - - parser.add_argument( - "--code", - type=str, - # default="cloth-masks/textures/check/check_4.jpg, cloth-#e54294, cloth-#ff0000, cloth, cloth-masks/textures/others/heart_1.png, cloth-masks/textures/fruits/pineapple.png, N95, surgical_blue, surgical_green", - default="", - help="Generate specific formats", - ) - - - parser.add_argument( - "--verbose", dest="verbose", action="store_true", help="Turn verbosity on" - ) - parser.add_argument( - "--write_original_image", - dest="write_original_image", - action="store_true", - help="If true, original image is also stored in the masked folder", - ) - parser.set_defaults(feature=False) - - args, unknown = parser.parse_known_args() - args.write_path = args.path + "_masked" - - # Set up dlib face detector and predictor - args.detector = dlib.get_frontal_face_detector() - path_to_dlib_model = "shape_predictor_68_face_landmarks.dat" - if not os.path.exists(path_to_dlib_model): - download_dlib_model() - - args.predictor = dlib.shape_predictor(path_to_dlib_model) - - # Extract data from code - mask_code = "".join(args.code.split()).split(",") - args.code_count = np.zeros(len(mask_code)) - args.mask_dict_of_dict = {} - - - for i, entry in enumerate(mask_code): - print - mask_dict = {} - mask_color = "" - mask_texture = "" - mask_type = entry.split("-")[0] - if len(entry.split("-")) == 2: - mask_variation = entry.split("-")[1] - if "#" in mask_variation: - mask_color = mask_variation - else: - mask_texture = mask_variation - mask_dict["type"] = mask_type - mask_dict["color"] = mask_color - mask_dict["texture"] = mask_texture - args.mask_dict_of_dict[i] = mask_dict - - # Check if path is file or directory or none - is_file=True - # Process if the path was a file - if is_file: - print("Masking image file") - image_path = args.path - write_path = args.path.rsplit(".")[0] - if True: - # Proceed if file is image - # masked_images, mask, mask_binary_array, original_image - masked_image, mask, mask_binary_array, original_image = mask_image( - myImg, args - ) - if len(masked_image)==0: - return masked_image - else: - img = masked_image[i] - return img - else: - print("Path is neither a valid file or a valid directory") - print("Processing Done") diff --git a/spaces/MrVicente/RA-BART/kgs_binding/conceptnet/__init__.py b/spaces/MrVicente/RA-BART/kgs_binding/conceptnet/__init__.py deleted file mode 100644 index b9742821a6f164200bc145e7a847382f08778303..0000000000000000000000000000000000000000 --- a/spaces/MrVicente/RA-BART/kgs_binding/conceptnet/__init__.py +++ /dev/null @@ -1 +0,0 @@ -from . 
import * \ No newline at end of file diff --git a/spaces/MuGeminorum/insecta/khandy/image/flip.py b/spaces/MuGeminorum/insecta/khandy/image/flip.py deleted file mode 100644 index 627f05906bffd58cec9933f7ec616634d765520a..0000000000000000000000000000000000000000 --- a/spaces/MuGeminorum/insecta/khandy/image/flip.py +++ /dev/null @@ -1,72 +0,0 @@ -import khandy -import numpy as np - - -def flip_image(image, direction='h', copy=True): - """ - References: - np.flipud, np.fliplr, np.flip - cv2.flip - tf.image.flip_up_down - tf.image.flip_left_right - """ - assert khandy.is_numpy_image(image) - assert direction in ['x', 'h', 'horizontal', - 'y', 'v', 'vertical', - 'o', 'b', 'both'] - if copy: - image = image.copy() - if direction in ['o', 'b', 'both', 'x', 'h', 'horizontal']: - image = np.fliplr(image) - if direction in ['o', 'b', 'both', 'y', 'v', 'vertical']: - image = np.flipud(image) - return image - - -def transpose_image(image, copy=True): - """Transpose image. - - References: - np.transpose - cv2.transpose - tf.image.transpose - """ - assert khandy.is_numpy_image(image) - if copy: - image = image.copy() - if image.ndim == 2: - transpose_axes = (1, 0) - else: - transpose_axes = (1, 0, 2) - image = np.transpose(image, transpose_axes) - return image - - -def rot90_image(image, n=1, copy=True): - """Rotate image counter-clockwise by 90 degrees. - - References: - np.rot90 - cv2.rotate - tf.image.rot90 - """ - assert khandy.is_numpy_image(image) - if copy: - image = image.copy() - if image.ndim == 2: - transpose_axes = (1, 0) - else: - transpose_axes = (1, 0, 2) - - n = n % 4 - if n == 0: - return image[:] - elif n == 1: - image = np.transpose(image, transpose_axes) - image = np.flipud(image) - elif n == 2: - image = np.fliplr(np.flipud(image)) - else: - image = np.transpose(image, transpose_axes) - image = np.fliplr(image) - return image diff --git a/spaces/NAACL2022/GlobEnc/src/modeling/modeling_bert.py b/spaces/NAACL2022/GlobEnc/src/modeling/modeling_bert.py deleted file mode 100644 index 12249b01b0a52fc02f5d4209c7163cc2861a0e26..0000000000000000000000000000000000000000 --- a/spaces/NAACL2022/GlobEnc/src/modeling/modeling_bert.py +++ /dev/null @@ -1,2073 +0,0 @@ -# coding=utf-8 -# Copyright 2018 The Google AI Language Team Authors and The HuggingFace Inc. team. -# Copyright (c) 2018, NVIDIA CORPORATION. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
-"""PyTorch BERT model.""" - -import math -import os -import warnings -from dataclasses import dataclass -from typing import List, Optional, Tuple, Union - -import torch -import torch.utils.checkpoint -from packaging import version -from torch import nn -from torch.nn import BCEWithLogitsLoss, CrossEntropyLoss, MSELoss - -from transformers.activations import ACT2FN -from transformers.modeling_outputs import ( - BaseModelOutputWithPastAndCrossAttentions, - BaseModelOutputWithPoolingAndCrossAttentions, - CausalLMOutputWithCrossAttentions, - MaskedLMOutput, - MultipleChoiceModelOutput, - NextSentencePredictorOutput, - QuestionAnsweringModelOutput, - SequenceClassifierOutput, - TokenClassifierOutput, -) -from transformers.modeling_utils import ( - PreTrainedModel, - apply_chunking_to_forward, - find_pruneable_heads_and_indices, - prune_linear_layer, -) -from transformers.utils import ( - ModelOutput, - add_code_sample_docstrings, - add_start_docstrings, - add_start_docstrings_to_model_forward, - logging, - replace_return_docstrings, -) -from transformers.models.bert.configuration_bert import BertConfig - -logger = logging.get_logger(__name__) - -_CHECKPOINT_FOR_DOC = "bert-base-uncased" -_CONFIG_FOR_DOC = "BertConfig" -_TOKENIZER_FOR_DOC = "BertTokenizer" - -BERT_PRETRAINED_MODEL_ARCHIVE_LIST = [ - "bert-base-uncased", - "bert-large-uncased", - "bert-base-cased", - "bert-large-cased", - "bert-base-multilingual-uncased", - "bert-base-multilingual-cased", - "bert-base-chinese", - "bert-base-german-cased", - "bert-large-uncased-whole-word-masking", - "bert-large-cased-whole-word-masking", - "bert-large-uncased-whole-word-masking-finetuned-squad", - "bert-large-cased-whole-word-masking-finetuned-squad", - "bert-base-cased-finetuned-mrpc", - "bert-base-german-dbmdz-cased", - "bert-base-german-dbmdz-uncased", - "cl-tohoku/bert-base-japanese", - "cl-tohoku/bert-base-japanese-whole-word-masking", - "cl-tohoku/bert-base-japanese-char", - "cl-tohoku/bert-base-japanese-char-whole-word-masking", - "TurkuNLP/bert-base-finnish-cased-v1", - "TurkuNLP/bert-base-finnish-uncased-v1", - "wietsedv/bert-base-dutch-cased", - # See all BERT models at https://huggingface.co/models?filter=bert -] - - -def load_tf_weights_in_bert(model, config, tf_checkpoint_path): - """Load tf checkpoints in a pytorch model.""" - try: - import re - - import numpy as np - import tensorflow as tf - except ImportError: - logger.error( - "Loading a TensorFlow model in PyTorch, requires TensorFlow to be installed. Please see " - "https://www.tensorflow.org/install/ for installation instructions." 
- ) - raise - tf_path = os.path.abspath(tf_checkpoint_path) - logger.info(f"Converting TensorFlow checkpoint from {tf_path}") - # Load weights from TF model - init_vars = tf.train.list_variables(tf_path) - names = [] - arrays = [] - for name, shape in init_vars: - logger.info(f"Loading TF weight {name} with shape {shape}") - array = tf.train.load_variable(tf_path, name) - names.append(name) - arrays.append(array) - - for name, array in zip(names, arrays): - name = name.split("/") - # adam_v and adam_m are variables used in AdamWeightDecayOptimizer to calculated m and v - # which are not required for using pretrained model - if any( - n in ["adam_v", "adam_m", "AdamWeightDecayOptimizer", "AdamWeightDecayOptimizer_1", "global_step"] - for n in name - ): - logger.info(f"Skipping {'/'.join(name)}") - continue - pointer = model - for m_name in name: - if re.fullmatch(r"[A-Za-z]+_\d+", m_name): - scope_names = re.split(r"_(\d+)", m_name) - else: - scope_names = [m_name] - if scope_names[0] == "kernel" or scope_names[0] == "gamma": - pointer = getattr(pointer, "weight") - elif scope_names[0] == "output_bias" or scope_names[0] == "beta": - pointer = getattr(pointer, "bias") - elif scope_names[0] == "output_weights": - pointer = getattr(pointer, "weight") - elif scope_names[0] == "squad": - pointer = getattr(pointer, "classifier") - else: - try: - pointer = getattr(pointer, scope_names[0]) - except AttributeError: - logger.info(f"Skipping {'/'.join(name)}") - continue - if len(scope_names) >= 2: - num = int(scope_names[1]) - pointer = pointer[num] - if m_name[-11:] == "_embeddings": - pointer = getattr(pointer, "weight") - elif m_name == "kernel": - array = np.transpose(array) - try: - if pointer.shape != array.shape: - raise ValueError(f"Pointer shape {pointer.shape} and array shape {array.shape} mismatched") - except AssertionError as e: - e.args += (pointer.shape, array.shape) - raise - logger.info(f"Initialize PyTorch weight {name}") - pointer.data = torch.from_numpy(array) - return model - - -class BertEmbeddings(nn.Module): - """Construct the embeddings from word, position and token_type embeddings.""" - - def __init__(self, config): - super().__init__() - self.word_embeddings = nn.Embedding(config.vocab_size, config.hidden_size, padding_idx=config.pad_token_id) - self.position_embeddings = nn.Embedding(config.max_position_embeddings, config.hidden_size) - self.token_type_embeddings = nn.Embedding(config.type_vocab_size, config.hidden_size) - - # self.LayerNorm is not snake-cased to stick with TensorFlow model variable name and be able to load - # any TensorFlow checkpoint file - self.LayerNorm = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps) - self.dropout = nn.Dropout(config.hidden_dropout_prob) - # position_ids (1, len position emb) is contiguous in memory and exported when serialized - self.position_embedding_type = getattr(config, "position_embedding_type", "absolute") - self.register_buffer("position_ids", torch.arange(config.max_position_embeddings).expand((1, -1))) - if version.parse(torch.__version__) > version.parse("1.6.0"): - self.register_buffer( - "token_type_ids", - torch.zeros(self.position_ids.size(), dtype=torch.long), - persistent=False, - ) - - def forward( - self, input_ids=None, token_type_ids=None, position_ids=None, inputs_embeds=None, past_key_values_length=0 - ): - if input_ids is not None: - input_shape = input_ids.size() - else: - input_shape = inputs_embeds.size()[:-1] - - seq_length = input_shape[1] - - if position_ids is None: - position_ids = 
self.position_ids[:, past_key_values_length: seq_length + past_key_values_length] - - # Setting the token_type_ids to the registered buffer in constructor where it is all zeros, which usually occurs - # when its auto-generated, registered buffer helps users when tracing the model without passing token_type_ids, solves - # issue #5664 - if token_type_ids is None: - if hasattr(self, "token_type_ids"): - buffered_token_type_ids = self.token_type_ids[:, :seq_length] - buffered_token_type_ids_expanded = buffered_token_type_ids.expand(input_shape[0], seq_length) - token_type_ids = buffered_token_type_ids_expanded - else: - token_type_ids = torch.zeros(input_shape, dtype=torch.long, device=self.position_ids.device) - - if inputs_embeds is None: - inputs_embeds = self.word_embeddings(input_ids) - token_type_embeddings = self.token_type_embeddings(token_type_ids) - - embeddings = inputs_embeds + token_type_embeddings - if self.position_embedding_type == "absolute": - position_embeddings = self.position_embeddings(position_ids) - embeddings += position_embeddings - embeddings = self.LayerNorm(embeddings) - embeddings = self.dropout(embeddings) - return embeddings - - -class BertSelfAttention(nn.Module): - def __init__(self, config, position_embedding_type=None): - super().__init__() - if config.hidden_size % config.num_attention_heads != 0 and not hasattr(config, "embedding_size"): - raise ValueError( - f"The hidden size ({config.hidden_size}) is not a multiple of the number of attention " - f"heads ({config.num_attention_heads})" - ) - - self.num_attention_heads = config.num_attention_heads - self.attention_head_size = int(config.hidden_size / config.num_attention_heads) - self.all_head_size = self.num_attention_heads * self.attention_head_size - - self.query = nn.Linear(config.hidden_size, self.all_head_size) - self.key = nn.Linear(config.hidden_size, self.all_head_size) - self.value = nn.Linear(config.hidden_size, self.all_head_size) - - self.dropout = nn.Dropout(config.attention_probs_dropout_prob) - self.position_embedding_type = position_embedding_type or getattr( - config, "position_embedding_type", "absolute" - ) - if self.position_embedding_type == "relative_key" or self.position_embedding_type == "relative_key_query": - self.max_position_embeddings = config.max_position_embeddings - self.distance_embedding = nn.Embedding(2 * config.max_position_embeddings - 1, self.attention_head_size) - - self.is_decoder = config.is_decoder - - def transpose_for_scores(self, x): - new_x_shape = x.size()[:-1] + (self.num_attention_heads, self.attention_head_size) - x = x.view(new_x_shape) - return x.permute(0, 2, 1, 3) - - def forward( - self, - hidden_states: torch.Tensor, - attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, - encoder_hidden_states: Optional[torch.FloatTensor] = None, - encoder_attention_mask: Optional[torch.FloatTensor] = None, - past_key_value: Optional[Tuple[Tuple[torch.FloatTensor]]] = None, - output_attentions: Optional[bool] = False, - output_norms: Optional[bool] = False, # added by Fayyaz / Modarressi - ) -> Tuple[torch.Tensor]: - mixed_query_layer = self.query(hidden_states) - - # If this is instantiated as a cross-attention module, the keys - # and values come from an encoder; the attention mask needs to be - # such that the encoder's padding tokens are not attended to. 
- is_cross_attention = encoder_hidden_states is not None - - if is_cross_attention and past_key_value is not None: - # reuse k,v, cross_attentions - key_layer = past_key_value[0] - value_layer = past_key_value[1] - attention_mask = encoder_attention_mask - elif is_cross_attention: - key_layer = self.transpose_for_scores(self.key(encoder_hidden_states)) - value_layer = self.transpose_for_scores(self.value(encoder_hidden_states)) - attention_mask = encoder_attention_mask - elif past_key_value is not None: - key_layer = self.transpose_for_scores(self.key(hidden_states)) - value_layer = self.transpose_for_scores(self.value(hidden_states)) - key_layer = torch.cat([past_key_value[0], key_layer], dim=2) - value_layer = torch.cat([past_key_value[1], value_layer], dim=2) - else: - key_layer = self.transpose_for_scores(self.key(hidden_states)) - value_layer = self.transpose_for_scores(self.value(hidden_states)) - - query_layer = self.transpose_for_scores(mixed_query_layer) - - if self.is_decoder: - # if cross_attention save Tuple(torch.Tensor, torch.Tensor) of all cross attention key/value_states. - # Further calls to cross_attention layer can then reuse all cross-attention - # key/value_states (first "if" case) - # if uni-directional self-attention (decoder) save Tuple(torch.Tensor, torch.Tensor) of - # all previous decoder key/value_states. Further calls to uni-directional self-attention - # can concat previous decoder key/value_states to current projected key/value_states (third "elif" case) - # if encoder bi-directional self-attention `past_key_value` is always `None` - past_key_value = (key_layer, value_layer) - - # Take the dot product between "query" and "key" to get the raw attention scores. - attention_scores = torch.matmul(query_layer, key_layer.transpose(-1, -2)) - - if self.position_embedding_type == "relative_key" or self.position_embedding_type == "relative_key_query": - seq_length = hidden_states.size()[1] - position_ids_l = torch.arange(seq_length, dtype=torch.long, device=hidden_states.device).view(-1, 1) - position_ids_r = torch.arange(seq_length, dtype=torch.long, device=hidden_states.device).view(1, -1) - distance = position_ids_l - position_ids_r - positional_embedding = self.distance_embedding(distance + self.max_position_embeddings - 1) - positional_embedding = positional_embedding.to(dtype=query_layer.dtype) # fp16 compatibility - - if self.position_embedding_type == "relative_key": - relative_position_scores = torch.einsum("bhld,lrd->bhlr", query_layer, positional_embedding) - attention_scores = attention_scores + relative_position_scores - elif self.position_embedding_type == "relative_key_query": - relative_position_scores_query = torch.einsum("bhld,lrd->bhlr", query_layer, positional_embedding) - relative_position_scores_key = torch.einsum("bhrd,lrd->bhlr", key_layer, positional_embedding) - attention_scores = attention_scores + relative_position_scores_query + relative_position_scores_key - - attention_scores = attention_scores / math.sqrt(self.attention_head_size) - if attention_mask is not None: - # Apply the attention mask is (precomputed for all layers in BertModel forward() function) - attention_scores = attention_scores + attention_mask - - # Normalize the attention scores to probabilities. - attention_probs = nn.functional.softmax(attention_scores, dim=-1) - - # This is actually dropping out entire tokens to attend to, which might - # seem a bit unusual, but is taken from the original Transformer paper. 
- attention_probs = self.dropout(attention_probs) - - # Mask heads if we want to - if head_mask is not None: - attention_probs = attention_probs * head_mask - - context_layer = torch.matmul(attention_probs, value_layer) - - context_layer = context_layer.permute(0, 2, 1, 3).contiguous() - new_context_layer_shape = context_layer.size()[:-2] + (self.all_head_size,) - context_layer = context_layer.view(new_context_layer_shape) - - # added by Fayyaz / Modarressi - # ------------------------------- - if output_norms: - outputs = (context_layer, attention_probs, value_layer) - return outputs - # ------------------------------- - - outputs = (context_layer, attention_probs) if output_attentions else (context_layer,) - - if self.is_decoder: - outputs = outputs + (past_key_value,) - return outputs - - -class BertSelfOutput(nn.Module): - def __init__(self, config): - super().__init__() - self.dense = nn.Linear(config.hidden_size, config.hidden_size) - self.LayerNorm = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps) - self.dropout = nn.Dropout(config.hidden_dropout_prob) - - def forward(self, hidden_states: torch.Tensor, input_tensor: torch.Tensor, - output_norms=False): # added by Fayyaz / Modarressi - hidden_states = self.dense(hidden_states) - hidden_states = self.dropout(hidden_states) - # hidden_states = self.LayerNorm(hidden_states + input_tensor) - pre_ln_states = hidden_states + input_tensor # added by Fayyaz / Modarressi - post_ln_states = self.LayerNorm(pre_ln_states) # added by Fayyaz / Modarressi - # added by Fayyaz / Modarressi - if output_norms: - return post_ln_states, pre_ln_states - else: - return post_ln_states - - -class BertNormOutput(nn.Module): # This class was added by Goro Kobayashi - def __init__(self, config): - super().__init__() - self.num_attention_heads = config.num_attention_heads - self.attention_head_size = int(config.hidden_size / config.num_attention_heads) - self.all_head_size = self.num_attention_heads * self.attention_head_size - - def forward(self, hidden_states, attention_probs, value_layer, dense, LayerNorm, pre_ln_states): - # Args: - # hidden_states: Representations from previous layer and inputs to self-attention. (batch, seq_length, all_head_size) - # attention_probs: Attention weights calculated in self-attention. (batch, num_heads, seq_length, seq_length) - # value_layer: Value vectors calculated in self-attention. (batch, num_heads, seq_length, head_size) - # dense: Dense layer in self-attention. nn.Linear(all_head_size, all_head_size) - # LayerNorm: nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps) - # pre_ln_states: Vectors just before LayerNorm (batch, seq_length, all_head_size) - - with torch.no_grad(): - # Make transformed vectors f(x) from Value vectors (value_layer) and weight matrix (dense). 
-            dense = dense.weight.view(self.all_head_size, self.num_attention_heads,
-                                      self.attention_head_size)  # W^o (768, 768)
-            transformed_layer = torch.einsum('bhsv,dhv->bhsd', value_layer, dense)  # V * W^o (z=(qk)v)
-
-            # Make weighted vectors αf(x) from transformed vectors (transformed_layer)
-            # and attention weights (attentions):
-            # (batch, num_heads, seq_length, seq_length, all_head_size)
-            weighted_layer = torch.einsum('bhks,bhsd->bhksd', attention_probs,
-                                          transformed_layer)  # attention_probs(Q*K^t) * V * W^o
-            weighted_norm = torch.norm(weighted_layer, dim=-1)  # norm of attended token representations
-
-            # Sum each weighted vectors αf(x) over all heads:
-            # (batch, seq_length, seq_length, all_head_size)
-            summed_weighted_layer = weighted_layer.sum(dim=1)  # sum over heads
-            summed_weighted_norm = torch.norm(summed_weighted_layer, dim=-1)  # norm of ||Σαf(x)||
-
-            """New from here on"""
-            # Make residual matrix (batch, seq_length, seq_length, all_head_size)
-            hidden_shape = hidden_states.size()  # (batch, seq_length, all_head_size)
-            device = hidden_states.device
-            residual = torch.einsum('sk,bsd->bskd', torch.eye(hidden_shape[1]).to(device),
-                                    hidden_states)  # diagonal representations (hidden states)
-
-            # Make matrix of summed weighted vector + residual vectors
-            residual_weighted_layer = summed_weighted_layer + residual
-            residual_weighted_norm = torch.norm(residual_weighted_layer, dim=-1)  # ||Σαf(x) + x||
-
-            # consider layernorm
-            ln_weight = LayerNorm.weight.data  # gamma
-            ln_eps = LayerNorm.eps
-
-            # Compute the mean and variance of pre_ln_states, the vectors actually fed into LayerNorm
-            mean = pre_ln_states.mean(-1, keepdim=True)  # (batch, seq_len, 1) m(y=Σy_j)
-            var = (pre_ln_states - mean).pow(2).mean(-1, keepdim=True).unsqueeze(dim=2)  # (batch, seq_len, 1, 1) s(y)
-
-            # Compute the mean of each vector inside the attention + residual sum
-            each_mean = residual_weighted_layer.mean(-1, keepdim=True)  # (batch, seq_len, seq_len, 1) m(y_j)
-
-            # Subtract its mean from each vector in the attention + residual sum and divide by the standard deviation
-            # (equivalent to applying the normalization part of LayerNorm to each vector inside the sum)
-            normalized_layer = torch.div(residual_weighted_layer - each_mean,
-                                         (var + ln_eps) ** (1 / 2))  # (batch, seq_len, seq_len, all_head_size)
-
-            # Then take the element-wise product of each vector with the LayerNorm weight
-            post_ln_layer = torch.einsum('bskd,d->bskd', normalized_layer,
-                                         ln_weight)  # (batch, seq_len, seq_len, all_head_size)
-            post_ln_norm = torch.norm(post_ln_layer, dim=-1)  # (batch, seq_len, seq_len)
-
-            # Mixing ratio for Attn-N
-            attn_preserving = torch.diagonal(summed_weighted_layer, dim1=1, dim2=2).permute(0, 2, 1)
-            attn_mixing = torch.sum(summed_weighted_layer, dim=2) - attn_preserving
-            attn_preserving_norm = torch.norm(attn_preserving, dim=-1)
-            attn_mixing_norm = torch.norm(attn_mixing, dim=-1)
-            attn_n_mixing_ratio = attn_mixing_norm / (attn_mixing_norm + attn_preserving_norm)
-
-            # Mixing ratio for AttnRes-N
-            before_ln_preserving = torch.diagonal(residual_weighted_layer, dim1=1, dim2=2).permute(0, 2, 1)
-            before_ln_mixing = torch.sum(residual_weighted_layer, dim=2) - before_ln_preserving
-            before_ln_preserving_norm = torch.norm(before_ln_preserving, dim=-1)
-            before_ln_mixing_norm = torch.norm(before_ln_mixing, dim=-1)
-            attnres_n_mixing_ratio = before_ln_mixing_norm / (before_ln_mixing_norm + before_ln_preserving_norm)
-
-            # Mixing ratio for AttnResLn-N
-            post_ln_preserving = torch.diagonal(post_ln_layer, dim1=1, dim2=2).permute(0, 2, 1)
-            post_ln_mixing = torch.sum(post_ln_layer, dim=2) - post_ln_preserving
-            post_ln_preserving_norm = torch.norm(post_ln_preserving, dim=-1)
-            post_ln_mixing_norm = torch.norm(post_ln_mixing,
dim=-1) - attnresln_n_mixing_ratio = post_ln_mixing_norm / (post_ln_mixing_norm + post_ln_preserving_norm) - - outputs = (weighted_norm, # ||αf(x)|| - summed_weighted_norm, # ||Σαf(x)|| - residual_weighted_norm, # ||Σαf(x) + x|| - post_ln_norm, # Norm of vectors after LayerNorm - post_ln_layer, - attn_n_mixing_ratio, # Mixing ratio for Attn-N - attnres_n_mixing_ratio, # Mixing ratio for AttnRes-N - attnresln_n_mixing_ratio, # Mixing ratio for AttnResLn-N - ) - return outputs - - -class BertAttention(nn.Module): - def __init__(self, config, position_embedding_type=None): - super().__init__() - self.self = BertSelfAttention(config, position_embedding_type=position_embedding_type) - self.output = BertSelfOutput(config) - self.pruned_heads = set() - self.norm = BertNormOutput(config) # added by Goro Kobayashi - - def prune_heads(self, heads): - if len(heads) == 0: - return - heads, index = find_pruneable_heads_and_indices( - heads, self.self.num_attention_heads, self.self.attention_head_size, self.pruned_heads - ) - - # Prune linear layers - self.self.query = prune_linear_layer(self.self.query, index) - self.self.key = prune_linear_layer(self.self.key, index) - self.self.value = prune_linear_layer(self.self.value, index) - self.output.dense = prune_linear_layer(self.output.dense, index, dim=1) - - # Update hyper params and store pruned heads - self.self.num_attention_heads = self.self.num_attention_heads - len(heads) - self.self.all_head_size = self.self.attention_head_size * self.self.num_attention_heads - self.pruned_heads = self.pruned_heads.union(heads) - - def forward( - self, - hidden_states: torch.Tensor, - attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, - encoder_hidden_states: Optional[torch.FloatTensor] = None, - encoder_attention_mask: Optional[torch.FloatTensor] = None, - past_key_value: Optional[Tuple[Tuple[torch.FloatTensor]]] = None, - output_attentions: Optional[bool] = False, - output_norms=False, # added by Goro Kobayashi - ) -> Tuple[torch.Tensor]: - self_outputs = self.self( - hidden_states, - attention_mask, - head_mask, - encoder_hidden_states, - encoder_attention_mask, - past_key_value, - output_attentions, - output_norms=output_norms, # added by Goro Kobayashi - ) - attention_output = self.output( - self_outputs[0], - hidden_states, - output_norms=output_norms, # added by Goro Kobayashi - ) - - # Added by Fayyaz / Modarressi - # ------------------------------- - if output_norms: - _, attention_probs, value_layer = self_outputs - attention_output, pre_ln_states = attention_output - norms_outputs = self.norm( - hidden_states, - attention_probs, - value_layer, - self.output.dense, - self.output.LayerNorm, - pre_ln_states, - ) - outputs = (attention_output, attention_probs,) + norms_outputs # add attentions and norms if we output them - """ - # outputs: - attention_output - attention_probs - transformed_norm - summed_weighted_norm - residual_weighted_norm - post_ln_norm - before_ln_mixing_ratio - post_ln_mixing_ratio - """ - return outputs - # ------------------------------- - - outputs = (attention_output,) + self_outputs[1:] # add attentions if we output them - return outputs - - -class BertIntermediate(nn.Module): - def __init__(self, config): - super().__init__() - self.dense = nn.Linear(config.hidden_size, config.intermediate_size) - if isinstance(config.hidden_act, str): - self.intermediate_act_fn = ACT2FN[config.hidden_act] - else: - self.intermediate_act_fn = config.hidden_act - - def forward(self, hidden_states: 
torch.Tensor) -> torch.Tensor: - hidden_states = self.dense(hidden_states) - hidden_states = self.intermediate_act_fn(hidden_states) - return hidden_states - - -class BertOutput(nn.Module): - def __init__(self, config): - super().__init__() - self.dense = nn.Linear(config.intermediate_size, config.hidden_size) - self.LayerNorm = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps) - self.dropout = nn.Dropout(config.hidden_dropout_prob) - - def forward(self, hidden_states: torch.Tensor, input_tensor: torch.Tensor): - hidden_states = self.dense(hidden_states) - hidden_states = self.dropout(hidden_states) - # hidden_states = self.LayerNorm(hidden_states + input_tensor) - # return hidden_states - # Added by Fayyaz / Modarressi - # ------------------------------- - pre_ln_states = hidden_states + input_tensor - hidden_states = self.LayerNorm(pre_ln_states) - return hidden_states, pre_ln_states - # ------------------------------- - - - -class BertLayer(nn.Module): - def __init__(self, config): - super().__init__() - self.chunk_size_feed_forward = config.chunk_size_feed_forward - self.seq_len_dim = 1 - self.attention = BertAttention(config) - self.is_decoder = config.is_decoder - self.add_cross_attention = config.add_cross_attention - if self.add_cross_attention: - if not self.is_decoder: - raise ValueError(f"{self} should be used as a decoder model if cross attention is added") - self.crossattention = BertAttention(config, position_embedding_type="absolute") - self.intermediate = BertIntermediate(config) - self.output = BertOutput(config) - - def forward( - self, - hidden_states: torch.Tensor, - attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, - encoder_hidden_states: Optional[torch.FloatTensor] = None, - encoder_attention_mask: Optional[torch.FloatTensor] = None, - past_key_value: Optional[Tuple[Tuple[torch.FloatTensor]]] = None, - output_attentions: Optional[bool] = False, - output_norms: Optional[bool] = False, # added by Goro Kobayashi - ) -> Tuple[torch.Tensor]: - # decoder uni-directional self-attention cached key/values tuple is at positions 1,2 - # self_attn_past_key_value = past_key_value[:2] if past_key_value is not None else None - # self_attention_outputs = self.attention( - # hidden_states, - # attention_mask, - # head_mask, - # output_attentions=output_attentions, - # past_key_value=self_attn_past_key_value, - # ) - self_attention_outputs = self.attention( - hidden_states, - attention_mask, - head_mask, - output_attentions=output_attentions, - output_norms=output_norms, - ) # changed by Goro Kobayashi - attention_output = self_attention_outputs[0] - - # if decoder, the last output is tuple of self-attn cache - if self.is_decoder: - outputs = self_attention_outputs[1:-1] - present_key_value = self_attention_outputs[-1] - else: - outputs = self_attention_outputs[1:] # add self attentions if we output attention weights - - cross_attn_present_key_value = None - if self.is_decoder and encoder_hidden_states is not None: - if not hasattr(self, "crossattention"): - raise ValueError( - f"If `encoder_hidden_states` are passed, {self} has to be instantiated with cross-attention layers by setting `config.add_cross_attention=True`" - ) - - # cross_attn cached key/values tuple is at positions 3,4 of past_key_value tuple - cross_attn_past_key_value = past_key_value[-2:] if past_key_value is not None else None - cross_attention_outputs = self.crossattention( - attention_output, - attention_mask, - head_mask, - encoder_hidden_states, - 
encoder_attention_mask, - cross_attn_past_key_value, - output_attentions, - ) - attention_output = cross_attention_outputs[0] - outputs = outputs + cross_attention_outputs[1:-1] # add cross attentions if we output attention weights - - # add cross-attn cache to positions 3,4 of present_key_value tuple - cross_attn_present_key_value = cross_attention_outputs[-1] - present_key_value = present_key_value + cross_attn_present_key_value - - # layer_output = apply_chunking_to_forward( - # self.feed_forward_chunk, self.chunk_size_feed_forward, self.seq_len_dim, attention_output - # ) - - # Added by Fayyaz / Modarressi - # ------------------------------- - intermediate_output = self.intermediate(attention_output) - layer_output, pre_ln2_states = self.output(intermediate_output, attention_output) - if output_norms: - post_ln_layer = outputs[5] - each_mean = post_ln_layer.mean(-1, keepdim=True) - - mean = pre_ln2_states.mean(-1, keepdim=True) - var = (pre_ln2_states - mean).pow(2).mean(-1, keepdim=True).unsqueeze(dim=2) - - normalized_layer = torch.div(post_ln_layer - each_mean, (var + self.output.LayerNorm.eps) ** (1 / 2)) - post_ln2_layer = torch.einsum('bskd,d->bskd', normalized_layer, self.output.LayerNorm.weight) - post_ln2_norm = torch.norm(post_ln2_layer, dim=-1) - - # N-ResOut mixing ratio - post_ln2_preserving = torch.diagonal(post_ln2_layer, dim1=1, dim2=2).permute(0, 2, 1) - post_ln2_mixing = torch.sum(post_ln2_layer, dim=2) - post_ln2_preserving - post_ln2_preserving_norm = torch.norm(post_ln2_preserving, dim=-1) - post_ln2_mixing_norm = torch.norm(post_ln2_mixing, dim=-1) - attnresln2_n_mixing_ratio = post_ln2_mixing_norm / (post_ln2_mixing_norm + post_ln2_preserving_norm) - - new_outputs = outputs[:5] + (post_ln2_norm,) + outputs[6:] + (attnresln2_n_mixing_ratio,) - return (layer_output,) + new_outputs - # ------------------------------- - outputs = (layer_output,) + outputs - - # if decoder, return the attn key/values as the last output - if self.is_decoder: - outputs = outputs + (present_key_value,) - - return outputs - - def feed_forward_chunk(self, attention_output): - intermediate_output = self.intermediate(attention_output) - layer_output = self.output(intermediate_output, attention_output) - return layer_output - - -class BertEncoder(nn.Module): - def __init__(self, config): - super().__init__() - self.config = config - self.layer = nn.ModuleList([BertLayer(config) for _ in range(config.num_hidden_layers)]) - self.gradient_checkpointing = False - - def forward( - self, - hidden_states: torch.Tensor, - attention_mask: Optional[torch.FloatTensor] = None, - head_mask: Optional[torch.FloatTensor] = None, - encoder_hidden_states: Optional[torch.FloatTensor] = None, - encoder_attention_mask: Optional[torch.FloatTensor] = None, - past_key_values: Optional[Tuple[Tuple[torch.FloatTensor]]] = None, - use_cache: Optional[bool] = None, - output_attentions: Optional[bool] = False, - output_hidden_states: Optional[bool] = False, - return_dict: Optional[bool] = True, - output_norms: Optional[bool] = False, # added by Goro Kobayashi - ) -> Union[Tuple[torch.Tensor], BaseModelOutputWithPastAndCrossAttentions]: - all_hidden_states = () if output_hidden_states else None - all_self_attentions = () if output_attentions else None - all_cross_attentions = () if output_attentions and self.config.add_cross_attention else None - - next_decoder_cache = () if use_cache else None - all_norms = () # added by Goro Kobayashi - for i, layer_module in enumerate(self.layer): - if output_hidden_states: - 
all_hidden_states = all_hidden_states + (hidden_states,) - - layer_head_mask = head_mask[i] if head_mask is not None else None - past_key_value = past_key_values[i] if past_key_values is not None else None - - if self.gradient_checkpointing and self.training: - - if use_cache: - logger.warning( - "`use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`..." - ) - use_cache = False - - def create_custom_forward(module): - def custom_forward(*inputs): - return module(*inputs, past_key_value, output_attentions) - - return custom_forward - - layer_outputs = torch.utils.checkpoint.checkpoint( - create_custom_forward(layer_module), - hidden_states, - attention_mask, - layer_head_mask, - encoder_hidden_states, - encoder_attention_mask, - ) - else: - layer_outputs = layer_module( - hidden_states, - attention_mask, - layer_head_mask, - encoder_hidden_states, - encoder_attention_mask, - past_key_value, - output_attentions, - output_norms, # added by Goro Kobayashi - ) - - hidden_states = layer_outputs[0] - if use_cache: - next_decoder_cache += (layer_outputs[-1],) - if output_attentions: - all_self_attentions = all_self_attentions + (layer_outputs[1],) - if self.config.add_cross_attention: - all_cross_attentions = all_cross_attentions + (layer_outputs[2],) - - # added by Goro Kobayashi - if output_norms: - all_norms = all_norms + (layer_outputs[2:],) - - if output_hidden_states: - all_hidden_states = all_hidden_states + (hidden_states,) - - if not return_dict: - return tuple( - v - for v in [ - hidden_states, - next_decoder_cache, - all_hidden_states, - all_self_attentions, - all_cross_attentions, - all_norms, # Added by Fayyaz / Modarressi - ] - if v is not None - ) - return BaseModelOutputWithPastAndCrossAttentions( - last_hidden_state=hidden_states, - past_key_values=next_decoder_cache, - hidden_states=all_hidden_states, - attentions=all_self_attentions, - cross_attentions=all_cross_attentions, - ) - - -class BertPooler(nn.Module): - def __init__(self, config): - super().__init__() - self.dense = nn.Linear(config.hidden_size, config.hidden_size) - self.activation = nn.Tanh() - - def forward(self, hidden_states: torch.Tensor) -> torch.Tensor: - # We "pool" the model by simply taking the hidden state corresponding - # to the first token. - first_token_tensor = hidden_states[:, 0] - pooled_output = self.dense(first_token_tensor) - pooled_output = self.activation(pooled_output) - return pooled_output - - -class BertPredictionHeadTransform(nn.Module): - def __init__(self, config): - super().__init__() - self.dense = nn.Linear(config.hidden_size, config.hidden_size) - if isinstance(config.hidden_act, str): - self.transform_act_fn = ACT2FN[config.hidden_act] - else: - self.transform_act_fn = config.hidden_act - self.LayerNorm = nn.LayerNorm(config.hidden_size, eps=config.layer_norm_eps) - - def forward(self, hidden_states: torch.Tensor) -> torch.Tensor: - hidden_states = self.dense(hidden_states) - hidden_states = self.transform_act_fn(hidden_states) - hidden_states = self.LayerNorm(hidden_states) - return hidden_states - - -class BertLMPredictionHead(nn.Module): - def __init__(self, config): - super().__init__() - self.transform = BertPredictionHeadTransform(config) - - # The output weights are the same as the input embeddings, but there is - # an output-only bias for each token. 
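        # The decoder weight itself is typically tied to the input word embeddings by the
        # base model's weight-tying step (see get_output_embeddings / set_output_embeddings
        # on the LM-head models further below); only the bias is an independent parameter.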
- self.decoder = nn.Linear(config.hidden_size, config.vocab_size, bias=False) - - self.bias = nn.Parameter(torch.zeros(config.vocab_size)) - - # Need a link between the two variables so that the bias is correctly resized with `resize_token_embeddings` - self.decoder.bias = self.bias - - def forward(self, hidden_states): - hidden_states = self.transform(hidden_states) - hidden_states = self.decoder(hidden_states) - return hidden_states - - -class BertOnlyMLMHead(nn.Module): - def __init__(self, config): - super().__init__() - self.predictions = BertLMPredictionHead(config) - - def forward(self, sequence_output: torch.Tensor) -> torch.Tensor: - prediction_scores = self.predictions(sequence_output) - return prediction_scores - - -class BertOnlyNSPHead(nn.Module): - def __init__(self, config): - super().__init__() - self.seq_relationship = nn.Linear(config.hidden_size, 2) - - def forward(self, pooled_output): - seq_relationship_score = self.seq_relationship(pooled_output) - return seq_relationship_score - - -class BertPreTrainingHeads(nn.Module): - def __init__(self, config): - super().__init__() - self.predictions = BertLMPredictionHead(config) - self.seq_relationship = nn.Linear(config.hidden_size, 2) - - def forward(self, sequence_output, pooled_output): - prediction_scores = self.predictions(sequence_output) - seq_relationship_score = self.seq_relationship(pooled_output) - return prediction_scores, seq_relationship_score - - -class BertPreTrainedModel(PreTrainedModel): - """ - An abstract class to handle weights initialization and a simple interface for downloading and loading pretrained - models. - """ - - config_class = BertConfig - load_tf_weights = load_tf_weights_in_bert - base_model_prefix = "bert" - supports_gradient_checkpointing = True - _keys_to_ignore_on_load_missing = [r"position_ids"] - - def _init_weights(self, module): - """Initialize the weights""" - if isinstance(module, nn.Linear): - # Slightly different from the TF version which uses truncated_normal for initialization - # cf https://github.com/pytorch/pytorch/pull/5617 - module.weight.data.normal_(mean=0.0, std=self.config.initializer_range) - if module.bias is not None: - module.bias.data.zero_() - elif isinstance(module, nn.Embedding): - module.weight.data.normal_(mean=0.0, std=self.config.initializer_range) - if module.padding_idx is not None: - module.weight.data[module.padding_idx].zero_() - elif isinstance(module, nn.LayerNorm): - module.bias.data.zero_() - module.weight.data.fill_(1.0) - - def _set_gradient_checkpointing(self, module, value=False): - if isinstance(module, BertEncoder): - module.gradient_checkpointing = value - - -@dataclass -class BertForPreTrainingOutput(ModelOutput): - """ - Output type of [`BertForPreTraining`]. - - Args: - loss (*optional*, returned when `labels` is provided, `torch.FloatTensor` of shape `(1,)`): - Total loss as the sum of the masked language modeling loss and the next sequence prediction - (classification) loss. - prediction_logits (`torch.FloatTensor` of shape `(batch_size, sequence_length, config.vocab_size)`): - Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax). - seq_relationship_logits (`torch.FloatTensor` of shape `(batch_size, 2)`): - Prediction scores of the next sequence prediction (classification) head (scores of True/False continuation - before SoftMax). 
- hidden_states (`tuple(torch.FloatTensor)`, *optional*, returned when `output_hidden_states=True` is passed or when `config.output_hidden_states=True`): - Tuple of `torch.FloatTensor` (one for the output of the embeddings + one for the output of each layer) of - shape `(batch_size, sequence_length, hidden_size)`. - - Hidden-states of the model at the output of each layer plus the initial embedding outputs. - attentions (`tuple(torch.FloatTensor)`, *optional*, returned when `output_attentions=True` is passed or when `config.output_attentions=True`): - Tuple of `torch.FloatTensor` (one for each layer) of shape `(batch_size, num_heads, sequence_length, - sequence_length)`. - - Attentions weights after the attention softmax, used to compute the weighted average in the self-attention - heads. - """ - - loss: Optional[torch.FloatTensor] = None - prediction_logits: torch.FloatTensor = None - seq_relationship_logits: torch.FloatTensor = None - hidden_states: Optional[Tuple[torch.FloatTensor]] = None - attentions: Optional[Tuple[torch.FloatTensor]] = None - - -BERT_START_DOCSTRING = r""" - - This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the - library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads - etc.) - - This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. - Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage - and behavior. - - Parameters: - config ([`BertConfig`]): Model configuration class with all the parameters of the model. - Initializing with a config file does not load the weights associated with the model, only the - configuration. Check out the [`~PreTrainedModel.from_pretrained`] method to load the model weights. -""" - -BERT_INPUTS_DOCSTRING = r""" - Args: - input_ids (`torch.LongTensor` of shape `({0})`): - Indices of input sequence tokens in the vocabulary. - - Indices can be obtained using [`BertTokenizer`]. See [`PreTrainedTokenizer.encode`] and - [`PreTrainedTokenizer.__call__`] for details. - - [What are input IDs?](../glossary#input-ids) - attention_mask (`torch.FloatTensor` of shape `({0})`, *optional*): - Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`: - - - 1 for tokens that are **not masked**, - - 0 for tokens that are **masked**. - - [What are attention masks?](../glossary#attention-mask) - token_type_ids (`torch.LongTensor` of shape `({0})`, *optional*): - Segment token indices to indicate first and second portions of the inputs. Indices are selected in `[0, - 1]`: - - - 0 corresponds to a *sentence A* token, - - 1 corresponds to a *sentence B* token. - - [What are token type IDs?](../glossary#token-type-ids) - position_ids (`torch.LongTensor` of shape `({0})`, *optional*): - Indices of positions of each input sequence tokens in the position embeddings. Selected in the range `[0, - config.max_position_embeddings - 1]`. - - [What are position IDs?](../glossary#position-ids) - head_mask (`torch.FloatTensor` of shape `(num_heads,)` or `(num_layers, num_heads)`, *optional*): - Mask to nullify selected heads of the self-attention modules. Mask values selected in `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. 
- - inputs_embeds (`torch.FloatTensor` of shape `({0}, hidden_size)`, *optional*): - Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This - is useful if you want more control over how to convert `input_ids` indices into associated vectors than the - model's internal embedding lookup matrix. - output_attentions (`bool`, *optional*): - Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned - tensors for more detail. - output_hidden_states (`bool`, *optional*): - Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for - more detail. - return_dict (`bool`, *optional*): - Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple. -""" - - -@add_start_docstrings( - "The bare Bert Model transformer outputting raw hidden-states without any specific head on top.", - BERT_START_DOCSTRING, -) -class BertModel(BertPreTrainedModel): - """ - - The model can behave as an encoder (with only self-attention) as well as a decoder, in which case a layer of - cross-attention is added between the self-attention layers, following the architecture described in [Attention is - all you need](https://arxiv.org/abs/1706.03762) by Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, - Llion Jones, Aidan N. Gomez, Lukasz Kaiser and Illia Polosukhin. - - To behave as an decoder the model needs to be initialized with the `is_decoder` argument of the configuration set - to `True`. To be used in a Seq2Seq model, the model needs to initialized with both `is_decoder` argument and - `add_cross_attention` set to `True`; an `encoder_hidden_states` is then expected as an input to the forward pass. - """ - - def __init__(self, config, add_pooling_layer=True): - super().__init__(config) - self.config = config - - self.embeddings = BertEmbeddings(config) - self.encoder = BertEncoder(config) - - self.pooler = BertPooler(config) if add_pooling_layer else None - - # Initialize weights and apply final processing - self.post_init() - - def get_input_embeddings(self): - return self.embeddings.word_embeddings - - def set_input_embeddings(self, value): - self.embeddings.word_embeddings = value - - def _prune_heads(self, heads_to_prune): - """ - Prunes heads of the model. 
heads_to_prune: dict of {layer_num: list of heads to prune in this layer} See base - class PreTrainedModel - """ - for layer, heads in heads_to_prune.items(): - self.encoder.layer[layer].attention.prune_heads(heads) - - @add_start_docstrings_to_model_forward(BERT_INPUTS_DOCSTRING.format("batch_size, sequence_length")) - @add_code_sample_docstrings( - processor_class=_TOKENIZER_FOR_DOC, - checkpoint=_CHECKPOINT_FOR_DOC, - output_type=BaseModelOutputWithPoolingAndCrossAttentions, - config_class=_CONFIG_FOR_DOC, - ) - def forward( - self, - input_ids: Optional[torch.Tensor] = None, - attention_mask: Optional[torch.Tensor] = None, - token_type_ids: Optional[torch.Tensor] = None, - position_ids: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, - inputs_embeds: Optional[torch.Tensor] = None, - encoder_hidden_states: Optional[torch.Tensor] = None, - encoder_attention_mask: Optional[torch.Tensor] = None, - past_key_values: Optional[List[torch.FloatTensor]] = None, - use_cache: Optional[bool] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - output_norms: Optional[bool] = None, # added by Goro Kobayashi - ) -> Union[Tuple[torch.Tensor], BaseModelOutputWithPoolingAndCrossAttentions]: - r""" - encoder_hidden_states (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*): - Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention if - the model is configured as a decoder. - encoder_attention_mask (`torch.FloatTensor` of shape `(batch_size, sequence_length)`, *optional*): - Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used in - the cross-attention if the model is configured as a decoder. Mask values selected in `[0, 1]`: - - - 1 for tokens that are **not masked**, - - 0 for tokens that are **masked**. - past_key_values (`tuple(tuple(torch.FloatTensor))` of length `config.n_layers` with each tuple having 4 tensors of shape `(batch_size, num_heads, sequence_length - 1, embed_size_per_head)`): - Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up decoding. - - If `past_key_values` are used, the user can optionally input only the last `decoder_input_ids` (those that - don't have their past key value states given to this model) of shape `(batch_size, 1)` instead of all - `decoder_input_ids` of shape `(batch_size, sequence_length)`. - use_cache (`bool`, *optional*): - If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding (see - `past_key_values`). 
- """ - output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions - output_hidden_states = ( - output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states - ) - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - if self.config.is_decoder: - use_cache = use_cache if use_cache is not None else self.config.use_cache - else: - use_cache = False - - if input_ids is not None and inputs_embeds is not None: - raise ValueError("You cannot specify both input_ids and inputs_embeds at the same time") - elif input_ids is not None: - input_shape = input_ids.size() - elif inputs_embeds is not None: - input_shape = inputs_embeds.size()[:-1] - else: - raise ValueError("You have to specify either input_ids or inputs_embeds") - - batch_size, seq_length = input_shape - device = input_ids.device if input_ids is not None else inputs_embeds.device - - # past_key_values_length - past_key_values_length = past_key_values[0][0].shape[2] if past_key_values is not None else 0 - - if attention_mask is None: - attention_mask = torch.ones(((batch_size, seq_length + past_key_values_length)), device=device) - - if token_type_ids is None: - if hasattr(self.embeddings, "token_type_ids"): - buffered_token_type_ids = self.embeddings.token_type_ids[:, :seq_length] - buffered_token_type_ids_expanded = buffered_token_type_ids.expand(batch_size, seq_length) - token_type_ids = buffered_token_type_ids_expanded - else: - token_type_ids = torch.zeros(input_shape, dtype=torch.long, device=device) - - # We can provide a self-attention mask of dimensions [batch_size, from_seq_length, to_seq_length] - # ourselves in which case we just need to make it broadcastable to all heads. 
- extended_attention_mask: torch.Tensor = self.get_extended_attention_mask(attention_mask, input_shape, device) - - # If a 2D or 3D attention mask is provided for the cross-attention - # we need to make broadcastable to [batch_size, num_heads, seq_length, seq_length] - if self.config.is_decoder and encoder_hidden_states is not None: - encoder_batch_size, encoder_sequence_length, _ = encoder_hidden_states.size() - encoder_hidden_shape = (encoder_batch_size, encoder_sequence_length) - if encoder_attention_mask is None: - encoder_attention_mask = torch.ones(encoder_hidden_shape, device=device) - encoder_extended_attention_mask = self.invert_attention_mask(encoder_attention_mask) - else: - encoder_extended_attention_mask = None - - # Prepare head mask if needed - # 1.0 in head_mask indicate we keep the head - # attention_probs has shape bsz x n_heads x N x N - # input head_mask has shape [num_heads] or [num_hidden_layers x num_heads] - # and head_mask is converted to shape [num_hidden_layers x batch x num_heads x seq_length x seq_length] - head_mask = self.get_head_mask(head_mask, self.config.num_hidden_layers) - - embedding_output = self.embeddings( - input_ids=input_ids, - position_ids=position_ids, - token_type_ids=token_type_ids, - inputs_embeds=inputs_embeds, - past_key_values_length=past_key_values_length, - ) - encoder_outputs = self.encoder( - embedding_output, - attention_mask=extended_attention_mask, - head_mask=head_mask, - encoder_hidden_states=encoder_hidden_states, - encoder_attention_mask=encoder_extended_attention_mask, - past_key_values=past_key_values, - use_cache=use_cache, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - output_norms=output_norms, # added by Goro Kobayashi - ) - sequence_output = encoder_outputs[0] - pooled_output = self.pooler(sequence_output) if self.pooler is not None else None - - if not return_dict: - return (sequence_output, pooled_output) + encoder_outputs[1:] - - return BaseModelOutputWithPoolingAndCrossAttentions( - last_hidden_state=sequence_output, - pooler_output=pooled_output, - past_key_values=encoder_outputs.past_key_values, - hidden_states=encoder_outputs.hidden_states, - attentions=encoder_outputs.attentions, - cross_attentions=encoder_outputs.cross_attentions, - ) - - -@add_start_docstrings( - """ - Bert Model with two heads on top as done during the pretraining: a `masked language modeling` head and a `next - sentence prediction (classification)` head. 
- """, - BERT_START_DOCSTRING, -) -class BertForPreTraining(BertPreTrainedModel): - def __init__(self, config): - super().__init__(config) - - self.bert = BertModel(config) - self.cls = BertPreTrainingHeads(config) - - # Initialize weights and apply final processing - self.post_init() - - def get_output_embeddings(self): - return self.cls.predictions.decoder - - def set_output_embeddings(self, new_embeddings): - self.cls.predictions.decoder = new_embeddings - - @add_start_docstrings_to_model_forward(BERT_INPUTS_DOCSTRING.format("batch_size, sequence_length")) - @replace_return_docstrings(output_type=BertForPreTrainingOutput, config_class=_CONFIG_FOR_DOC) - def forward( - self, - input_ids: Optional[torch.Tensor] = None, - attention_mask: Optional[torch.Tensor] = None, - token_type_ids: Optional[torch.Tensor] = None, - position_ids: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, - inputs_embeds: Optional[torch.Tensor] = None, - labels: Optional[torch.Tensor] = None, - next_sentence_label: Optional[torch.Tensor] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - ) -> Union[Tuple[torch.Tensor], BertForPreTrainingOutput]: - r""" - labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*): - Labels for computing the masked language modeling loss. Indices should be in `[-100, 0, ..., - config.vocab_size]` (see `input_ids` docstring) Tokens with indices set to `-100` are ignored (masked), - the loss is only computed for the tokens with labels in `[0, ..., config.vocab_size]` - next_sentence_label (`torch.LongTensor` of shape `(batch_size,)`, *optional*): - Labels for computing the next sequence prediction (classification) loss. Input should be a sequence - pair (see `input_ids` docstring) Indices should be in `[0, 1]`: - - - 0 indicates sequence B is a continuation of sequence A, - - 1 indicates sequence B is a random sequence. - kwargs (`Dict[str, any]`, optional, defaults to *{}*): - Used to hide legacy arguments that have been deprecated. 
- - Returns: - - Example: - - ```python - >>> from transformers import BertTokenizer, BertForPreTraining - >>> import torch - - >>> tokenizer = BertTokenizer.from_pretrained("bert-base-uncased") - >>> model = BertForPreTraining.from_pretrained("bert-base-uncased") - - >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") - >>> outputs = model(**inputs) - - >>> prediction_logits = outputs.prediction_logits - >>> seq_relationship_logits = outputs.seq_relationship_logits - ``` - """ - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - outputs = self.bert( - input_ids, - attention_mask=attention_mask, - token_type_ids=token_type_ids, - position_ids=position_ids, - head_mask=head_mask, - inputs_embeds=inputs_embeds, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - - sequence_output, pooled_output = outputs[:2] - prediction_scores, seq_relationship_score = self.cls(sequence_output, pooled_output) - - total_loss = None - if labels is not None and next_sentence_label is not None: - loss_fct = CrossEntropyLoss() - masked_lm_loss = loss_fct(prediction_scores.view(-1, self.config.vocab_size), labels.view(-1)) - next_sentence_loss = loss_fct(seq_relationship_score.view(-1, 2), next_sentence_label.view(-1)) - total_loss = masked_lm_loss + next_sentence_loss - - if not return_dict: - output = (prediction_scores, seq_relationship_score) + outputs[2:] - return ((total_loss,) + output) if total_loss is not None else output - - return BertForPreTrainingOutput( - loss=total_loss, - prediction_logits=prediction_scores, - seq_relationship_logits=seq_relationship_score, - hidden_states=outputs.hidden_states, - attentions=outputs.attentions, - ) - - -@add_start_docstrings( - """Bert Model with a `language modeling` head on top for CLM fine-tuning.""", BERT_START_DOCSTRING -) -class BertLMHeadModel(BertPreTrainedModel): - _keys_to_ignore_on_load_unexpected = [r"pooler"] - _keys_to_ignore_on_load_missing = [r"position_ids", r"predictions.decoder.bias"] - - def __init__(self, config): - super().__init__(config) - - if not config.is_decoder: - logger.warning("If you want to use `BertLMHeadModel` as a standalone, add `is_decoder=True.`") - - self.bert = BertModel(config, add_pooling_layer=False) - self.cls = BertOnlyMLMHead(config) - - # Initialize weights and apply final processing - self.post_init() - - def get_output_embeddings(self): - return self.cls.predictions.decoder - - def set_output_embeddings(self, new_embeddings): - self.cls.predictions.decoder = new_embeddings - - @add_start_docstrings_to_model_forward(BERT_INPUTS_DOCSTRING.format("batch_size, sequence_length")) - @replace_return_docstrings(output_type=CausalLMOutputWithCrossAttentions, config_class=_CONFIG_FOR_DOC) - def forward( - self, - input_ids: Optional[torch.Tensor] = None, - attention_mask: Optional[torch.Tensor] = None, - token_type_ids: Optional[torch.Tensor] = None, - position_ids: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, - inputs_embeds: Optional[torch.Tensor] = None, - encoder_hidden_states: Optional[torch.Tensor] = None, - encoder_attention_mask: Optional[torch.Tensor] = None, - labels: Optional[torch.Tensor] = None, - past_key_values: Optional[List[torch.Tensor]] = None, - use_cache: Optional[bool] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - ) -> Union[Tuple[torch.Tensor], 
CausalLMOutputWithCrossAttentions]: - r""" - encoder_hidden_states (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*): - Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention - if the model is configured as a decoder. - encoder_attention_mask (`torch.FloatTensor` of shape `(batch_size, sequence_length)`, *optional*): - Mask to avoid performing attention on the padding token indices of the encoder input. This mask is used - in the cross-attention if the model is configured as a decoder. Mask values selected in `[0, 1]`: - - - 1 for tokens that are **not masked**, - - 0 for tokens that are **masked**. - labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*): - Labels for computing the left-to-right language modeling loss (next word prediction). Indices should be - in `[-100, 0, ..., config.vocab_size]` (see `input_ids` docstring) Tokens with indices set to `-100` - are ignored (masked), the loss is only computed for the tokens with labels n `[0, ..., - config.vocab_size]` - past_key_values (`tuple(tuple(torch.FloatTensor))` of length `config.n_layers` with each tuple having 4 tensors of shape `(batch_size, num_heads, sequence_length - 1, embed_size_per_head)`): - Contains precomputed key and value hidden states of the attention blocks. Can be used to speed up - decoding. - - If `past_key_values` are used, the user can optionally input only the last `decoder_input_ids` (those - that don't have their past key value states given to this model) of shape `(batch_size, 1)` instead of - all `decoder_input_ids` of shape `(batch_size, sequence_length)`. - use_cache (`bool`, *optional*): - If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding - (see `past_key_values`). 
- - Returns: - - Example: - - ```python - >>> from transformers import BertTokenizer, BertLMHeadModel, BertConfig - >>> import torch - - >>> tokenizer = BertTokenizer.from_pretrained("bert-base-cased") - >>> config = BertConfig.from_pretrained("bert-base-cased") - >>> config.is_decoder = True - >>> model = BertLMHeadModel.from_pretrained("bert-base-cased", config=config) - - >>> inputs = tokenizer("Hello, my dog is cute", return_tensors="pt") - >>> outputs = model(**inputs) - - >>> prediction_logits = outputs.logits - ``` - """ - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - if labels is not None: - use_cache = False - - outputs = self.bert( - input_ids, - attention_mask=attention_mask, - token_type_ids=token_type_ids, - position_ids=position_ids, - head_mask=head_mask, - inputs_embeds=inputs_embeds, - encoder_hidden_states=encoder_hidden_states, - encoder_attention_mask=encoder_attention_mask, - past_key_values=past_key_values, - use_cache=use_cache, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - - sequence_output = outputs[0] - prediction_scores = self.cls(sequence_output) - - lm_loss = None - if labels is not None: - # we are doing next-token prediction; shift prediction scores and input ids by one - shifted_prediction_scores = prediction_scores[:, :-1, :].contiguous() - labels = labels[:, 1:].contiguous() - loss_fct = CrossEntropyLoss() - lm_loss = loss_fct(shifted_prediction_scores.view(-1, self.config.vocab_size), labels.view(-1)) - - if not return_dict: - output = (prediction_scores,) + outputs[2:] - return ((lm_loss,) + output) if lm_loss is not None else output - - return CausalLMOutputWithCrossAttentions( - loss=lm_loss, - logits=prediction_scores, - past_key_values=outputs.past_key_values, - hidden_states=outputs.hidden_states, - attentions=outputs.attentions, - cross_attentions=outputs.cross_attentions, - ) - - def prepare_inputs_for_generation(self, input_ids, past=None, attention_mask=None, **model_kwargs): - input_shape = input_ids.shape - # if model is used as a decoder in encoder-decoder model, the decoder attention mask is created on the fly - if attention_mask is None: - attention_mask = input_ids.new_ones(input_shape) - - # cut decoder_input_ids if past is used - if past is not None: - input_ids = input_ids[:, -1:] - - return {"input_ids": input_ids, "attention_mask": attention_mask, "past_key_values": past} - - def _reorder_cache(self, past, beam_idx): - reordered_past = () - for layer_past in past: - reordered_past += (tuple(past_state.index_select(0, beam_idx) for past_state in layer_past),) - return reordered_past - - -@add_start_docstrings("""Bert Model with a `language modeling` head on top.""", BERT_START_DOCSTRING) -class BertForMaskedLM(BertPreTrainedModel): - _keys_to_ignore_on_load_unexpected = [r"pooler"] - _keys_to_ignore_on_load_missing = [r"position_ids", r"predictions.decoder.bias"] - - def __init__(self, config): - super().__init__(config) - - if config.is_decoder: - logger.warning( - "If you want to use `BertForMaskedLM` make sure `config.is_decoder=False` for " - "bi-directional self-attention." 
- ) - - self.bert = BertModel(config, add_pooling_layer=False) - self.cls = BertOnlyMLMHead(config) - - # Initialize weights and apply final processing - self.post_init() - - def get_output_embeddings(self): - return self.cls.predictions.decoder - - def set_output_embeddings(self, new_embeddings): - self.cls.predictions.decoder = new_embeddings - - @add_start_docstrings_to_model_forward(BERT_INPUTS_DOCSTRING.format("batch_size, sequence_length")) - @add_code_sample_docstrings( - processor_class=_TOKENIZER_FOR_DOC, - checkpoint=_CHECKPOINT_FOR_DOC, - output_type=MaskedLMOutput, - config_class=_CONFIG_FOR_DOC, - ) - def forward( - self, - input_ids: Optional[torch.Tensor] = None, - attention_mask: Optional[torch.Tensor] = None, - token_type_ids: Optional[torch.Tensor] = None, - position_ids: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, - inputs_embeds: Optional[torch.Tensor] = None, - encoder_hidden_states: Optional[torch.Tensor] = None, - encoder_attention_mask: Optional[torch.Tensor] = None, - labels: Optional[torch.Tensor] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - ) -> Union[Tuple[torch.Tensor], MaskedLMOutput]: - r""" - labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*): - Labels for computing the masked language modeling loss. Indices should be in `[-100, 0, ..., - config.vocab_size]` (see `input_ids` docstring) Tokens with indices set to `-100` are ignored (masked), the - loss is only computed for the tokens with labels in `[0, ..., config.vocab_size]` - """ - - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - outputs = self.bert( - input_ids, - attention_mask=attention_mask, - token_type_ids=token_type_ids, - position_ids=position_ids, - head_mask=head_mask, - inputs_embeds=inputs_embeds, - encoder_hidden_states=encoder_hidden_states, - encoder_attention_mask=encoder_attention_mask, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - - sequence_output = outputs[0] - prediction_scores = self.cls(sequence_output) - - masked_lm_loss = None - if labels is not None: - loss_fct = CrossEntropyLoss() # -100 index = padding token - masked_lm_loss = loss_fct(prediction_scores.view(-1, self.config.vocab_size), labels.view(-1)) - - if not return_dict: - output = (prediction_scores,) + outputs[2:] - return ((masked_lm_loss,) + output) if masked_lm_loss is not None else output - - return MaskedLMOutput( - loss=masked_lm_loss, - logits=prediction_scores, - hidden_states=outputs.hidden_states, - attentions=outputs.attentions, - ) - - def prepare_inputs_for_generation(self, input_ids, attention_mask=None, **model_kwargs): - input_shape = input_ids.shape - effective_batch_size = input_shape[0] - - # add a dummy token - if self.config.pad_token_id is None: - raise ValueError("The PAD token should be defined for generation") - - attention_mask = torch.cat([attention_mask, attention_mask.new_zeros((attention_mask.shape[0], 1))], dim=-1) - dummy_token = torch.full( - (effective_batch_size, 1), self.config.pad_token_id, dtype=torch.long, device=input_ids.device - ) - input_ids = torch.cat([input_ids, dummy_token], dim=1) - - return {"input_ids": input_ids, "attention_mask": attention_mask} - - -@add_start_docstrings( - """Bert Model with a `next sentence prediction (classification)` head on top.""", - BERT_START_DOCSTRING, -) -class 
BertForNextSentencePrediction(BertPreTrainedModel): - def __init__(self, config): - super().__init__(config) - - self.bert = BertModel(config) - self.cls = BertOnlyNSPHead(config) - - # Initialize weights and apply final processing - self.post_init() - - @add_start_docstrings_to_model_forward(BERT_INPUTS_DOCSTRING.format("batch_size, sequence_length")) - @replace_return_docstrings(output_type=NextSentencePredictorOutput, config_class=_CONFIG_FOR_DOC) - def forward( - self, - input_ids: Optional[torch.Tensor] = None, - attention_mask: Optional[torch.Tensor] = None, - token_type_ids: Optional[torch.Tensor] = None, - position_ids: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, - inputs_embeds: Optional[torch.Tensor] = None, - labels: Optional[torch.Tensor] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - **kwargs, - ) -> Union[Tuple[torch.Tensor], NextSentencePredictorOutput]: - r""" - labels (`torch.LongTensor` of shape `(batch_size,)`, *optional*): - Labels for computing the next sequence prediction (classification) loss. Input should be a sequence pair - (see `input_ids` docstring). Indices should be in `[0, 1]`: - - - 0 indicates sequence B is a continuation of sequence A, - - 1 indicates sequence B is a random sequence. - - Returns: - - Example: - - ```python - >>> from transformers import BertTokenizer, BertForNextSentencePrediction - >>> import torch - - >>> tokenizer = BertTokenizer.from_pretrained("bert-base-uncased") - >>> model = BertForNextSentencePrediction.from_pretrained("bert-base-uncased") - - >>> prompt = "In Italy, pizza served in formal settings, such as at a restaurant, is presented unsliced." - >>> next_sentence = "The sky is blue due to the shorter wavelength of blue light." - >>> encoding = tokenizer(prompt, next_sentence, return_tensors="pt") - - >>> outputs = model(**encoding, labels=torch.LongTensor([1])) - >>> logits = outputs.logits - >>> assert logits[0, 0] < logits[0, 1] # next sentence was random - ``` - """ - - if "next_sentence_label" in kwargs: - warnings.warn( - "The `next_sentence_label` argument is deprecated and will be removed in a future version, use `labels` instead.", - FutureWarning, - ) - labels = kwargs.pop("next_sentence_label") - - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - outputs = self.bert( - input_ids, - attention_mask=attention_mask, - token_type_ids=token_type_ids, - position_ids=position_ids, - head_mask=head_mask, - inputs_embeds=inputs_embeds, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - - pooled_output = outputs[1] - - seq_relationship_scores = self.cls(pooled_output) - - next_sentence_loss = None - if labels is not None: - loss_fct = CrossEntropyLoss() - next_sentence_loss = loss_fct(seq_relationship_scores.view(-1, 2), labels.view(-1)) - - if not return_dict: - output = (seq_relationship_scores,) + outputs[2:] - return ((next_sentence_loss,) + output) if next_sentence_loss is not None else output - - return NextSentencePredictorOutput( - loss=next_sentence_loss, - logits=seq_relationship_scores, - hidden_states=outputs.hidden_states, - attentions=outputs.attentions, - ) - - -@add_start_docstrings( - """ - Bert Model transformer with a sequence classification/regression head on top (a linear layer on top of the pooled - output) e.g. for GLUE tasks. 
- """, - BERT_START_DOCSTRING, -) -class BertForSequenceClassification(BertPreTrainedModel): - def __init__(self, config): - super().__init__(config) - self.num_labels = config.num_labels - self.config = config - - self.bert = BertModel(config) - classifier_dropout = ( - config.classifier_dropout if config.classifier_dropout is not None else config.hidden_dropout_prob - ) - self.dropout = nn.Dropout(classifier_dropout) - self.classifier = nn.Linear(config.hidden_size, config.num_labels) - - # Initialize weights and apply final processing - self.post_init() - - @add_start_docstrings_to_model_forward(BERT_INPUTS_DOCSTRING.format("batch_size, sequence_length")) - @add_code_sample_docstrings( - processor_class=_TOKENIZER_FOR_DOC, - checkpoint=_CHECKPOINT_FOR_DOC, - output_type=SequenceClassifierOutput, - config_class=_CONFIG_FOR_DOC, - ) - def forward( - self, - input_ids: Optional[torch.Tensor] = None, - attention_mask: Optional[torch.Tensor] = None, - token_type_ids: Optional[torch.Tensor] = None, - position_ids: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, - inputs_embeds: Optional[torch.Tensor] = None, - labels: Optional[torch.Tensor] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - output_norms: Optional[bool] = None, # added by Fayyaz / Modarressi - ) -> Union[Tuple[torch.Tensor], SequenceClassifierOutput]: - r""" - labels (`torch.LongTensor` of shape `(batch_size,)`, *optional*): - Labels for computing the sequence classification/regression loss. Indices should be in `[0, ..., - config.num_labels - 1]`. If `config.num_labels == 1` a regression loss is computed (Mean-Square loss), If - `config.num_labels > 1` a classification loss is computed (Cross-Entropy). 
- """ - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - outputs = self.bert( - input_ids, - attention_mask=attention_mask, - token_type_ids=token_type_ids, - position_ids=position_ids, - head_mask=head_mask, - inputs_embeds=inputs_embeds, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - output_norms=output_norms, # added by Fayyaz / Modarressi - ) - - pooled_output = outputs[1] - - pooled_output = self.dropout(pooled_output) - logits = self.classifier(pooled_output) - - loss = None - if labels is not None: - if self.config.problem_type is None: - if self.num_labels == 1: - self.config.problem_type = "regression" - elif self.num_labels > 1 and (labels.dtype == torch.long or labels.dtype == torch.int): - self.config.problem_type = "single_label_classification" - else: - self.config.problem_type = "multi_label_classification" - - if self.config.problem_type == "regression": - loss_fct = MSELoss() - if self.num_labels == 1: - loss = loss_fct(logits.squeeze(), labels.squeeze()) - else: - loss = loss_fct(logits, labels) - elif self.config.problem_type == "single_label_classification": - loss_fct = CrossEntropyLoss() - loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1)) - elif self.config.problem_type == "multi_label_classification": - loss_fct = BCEWithLogitsLoss() - loss = loss_fct(logits, labels) - if not return_dict: - output = (logits,) + outputs[2:] - return ((loss,) + output) if loss is not None else output # (loss), logits, (hidden_states), (attentions) - - return SequenceClassifierOutput( - loss=loss, - logits=logits, - hidden_states=outputs.hidden_states, - attentions=outputs.attentions, - ) - - -@add_start_docstrings( - """ - Bert Model with a multiple choice classification head on top (a linear layer on top of the pooled output and a - softmax) e.g. for RocStories/SWAG tasks. - """, - BERT_START_DOCSTRING, -) -class BertForMultipleChoice(BertPreTrainedModel): - def __init__(self, config): - super().__init__(config) - - self.bert = BertModel(config) - classifier_dropout = ( - config.classifier_dropout if config.classifier_dropout is not None else config.hidden_dropout_prob - ) - self.dropout = nn.Dropout(classifier_dropout) - self.classifier = nn.Linear(config.hidden_size, 1) - - # Initialize weights and apply final processing - self.post_init() - - @add_start_docstrings_to_model_forward(BERT_INPUTS_DOCSTRING.format("batch_size, num_choices, sequence_length")) - @add_code_sample_docstrings( - processor_class=_TOKENIZER_FOR_DOC, - checkpoint=_CHECKPOINT_FOR_DOC, - output_type=MultipleChoiceModelOutput, - config_class=_CONFIG_FOR_DOC, - ) - def forward( - self, - input_ids: Optional[torch.Tensor] = None, - attention_mask: Optional[torch.Tensor] = None, - token_type_ids: Optional[torch.Tensor] = None, - position_ids: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, - inputs_embeds: Optional[torch.Tensor] = None, - labels: Optional[torch.Tensor] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - ) -> Union[Tuple[torch.Tensor], MultipleChoiceModelOutput]: - r""" - labels (`torch.LongTensor` of shape `(batch_size,)`, *optional*): - Labels for computing the multiple choice classification loss. Indices should be in `[0, ..., - num_choices-1]` where `num_choices` is the size of the second dimension of the input tensors. 
(See - `input_ids` above) - """ - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - num_choices = input_ids.shape[1] if input_ids is not None else inputs_embeds.shape[1] - - input_ids = input_ids.view(-1, input_ids.size(-1)) if input_ids is not None else None - attention_mask = attention_mask.view(-1, attention_mask.size(-1)) if attention_mask is not None else None - token_type_ids = token_type_ids.view(-1, token_type_ids.size(-1)) if token_type_ids is not None else None - position_ids = position_ids.view(-1, position_ids.size(-1)) if position_ids is not None else None - inputs_embeds = ( - inputs_embeds.view(-1, inputs_embeds.size(-2), inputs_embeds.size(-1)) - if inputs_embeds is not None - else None - ) - - outputs = self.bert( - input_ids, - attention_mask=attention_mask, - token_type_ids=token_type_ids, - position_ids=position_ids, - head_mask=head_mask, - inputs_embeds=inputs_embeds, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - - pooled_output = outputs[1] - - pooled_output = self.dropout(pooled_output) - logits = self.classifier(pooled_output) - reshaped_logits = logits.view(-1, num_choices) - - loss = None - if labels is not None: - loss_fct = CrossEntropyLoss() - loss = loss_fct(reshaped_logits, labels) - - if not return_dict: - output = (reshaped_logits,) + outputs[2:] - return ((loss,) + output) if loss is not None else output - - return MultipleChoiceModelOutput( - loss=loss, - logits=reshaped_logits, - hidden_states=outputs.hidden_states, - attentions=outputs.attentions, - ) - - -@add_start_docstrings( - """ - Bert Model with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for - Named-Entity-Recognition (NER) tasks. - """, - BERT_START_DOCSTRING, -) -class BertForTokenClassification(BertPreTrainedModel): - _keys_to_ignore_on_load_unexpected = [r"pooler"] - - def __init__(self, config): - super().__init__(config) - self.num_labels = config.num_labels - - self.bert = BertModel(config, add_pooling_layer=False) - classifier_dropout = ( - config.classifier_dropout if config.classifier_dropout is not None else config.hidden_dropout_prob - ) - self.dropout = nn.Dropout(classifier_dropout) - self.classifier = nn.Linear(config.hidden_size, config.num_labels) - - # Initialize weights and apply final processing - self.post_init() - - @add_start_docstrings_to_model_forward(BERT_INPUTS_DOCSTRING.format("batch_size, sequence_length")) - @add_code_sample_docstrings( - processor_class=_TOKENIZER_FOR_DOC, - checkpoint=_CHECKPOINT_FOR_DOC, - output_type=TokenClassifierOutput, - config_class=_CONFIG_FOR_DOC, - ) - def forward( - self, - input_ids: Optional[torch.Tensor] = None, - attention_mask: Optional[torch.Tensor] = None, - token_type_ids: Optional[torch.Tensor] = None, - position_ids: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, - inputs_embeds: Optional[torch.Tensor] = None, - labels: Optional[torch.Tensor] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - ) -> Union[Tuple[torch.Tensor], TokenClassifierOutput]: - r""" - labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*): - Labels for computing the token classification loss. Indices should be in `[0, ..., config.num_labels - 1]`. 
- """ - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - outputs = self.bert( - input_ids, - attention_mask=attention_mask, - token_type_ids=token_type_ids, - position_ids=position_ids, - head_mask=head_mask, - inputs_embeds=inputs_embeds, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - - sequence_output = outputs[0] - - sequence_output = self.dropout(sequence_output) - logits = self.classifier(sequence_output) - - loss = None - if labels is not None: - loss_fct = CrossEntropyLoss() - loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1)) - - if not return_dict: - output = (logits,) + outputs[2:] - return ((loss,) + output) if loss is not None else output - - return TokenClassifierOutput( - loss=loss, - logits=logits, - hidden_states=outputs.hidden_states, - attentions=outputs.attentions, - ) - - -@add_start_docstrings( - """ - Bert Model with a span classification head on top for extractive question-answering tasks like SQuAD (a linear - layers on top of the hidden-states output to compute `span start logits` and `span end logits`). - """, - BERT_START_DOCSTRING, -) -class BertForQuestionAnswering(BertPreTrainedModel): - _keys_to_ignore_on_load_unexpected = [r"pooler"] - - def __init__(self, config): - super().__init__(config) - self.num_labels = config.num_labels - - self.bert = BertModel(config, add_pooling_layer=False) - self.qa_outputs = nn.Linear(config.hidden_size, config.num_labels) - - # Initialize weights and apply final processing - self.post_init() - - @add_start_docstrings_to_model_forward(BERT_INPUTS_DOCSTRING.format("batch_size, sequence_length")) - @add_code_sample_docstrings( - processor_class=_TOKENIZER_FOR_DOC, - checkpoint=_CHECKPOINT_FOR_DOC, - output_type=QuestionAnsweringModelOutput, - config_class=_CONFIG_FOR_DOC, - ) - def forward( - self, - input_ids: Optional[torch.Tensor] = None, - attention_mask: Optional[torch.Tensor] = None, - token_type_ids: Optional[torch.Tensor] = None, - position_ids: Optional[torch.Tensor] = None, - head_mask: Optional[torch.Tensor] = None, - inputs_embeds: Optional[torch.Tensor] = None, - start_positions: Optional[torch.Tensor] = None, - end_positions: Optional[torch.Tensor] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - ) -> Union[Tuple[torch.Tensor], QuestionAnsweringModelOutput]: - r""" - start_positions (`torch.LongTensor` of shape `(batch_size,)`, *optional*): - Labels for position (index) of the start of the labelled span for computing the token classification loss. - Positions are clamped to the length of the sequence (`sequence_length`). Position outside of the sequence - are not taken into account for computing the loss. - end_positions (`torch.LongTensor` of shape `(batch_size,)`, *optional*): - Labels for position (index) of the end of the labelled span for computing the token classification loss. - Positions are clamped to the length of the sequence (`sequence_length`). Position outside of the sequence - are not taken into account for computing the loss. 
- """ - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - outputs = self.bert( - input_ids, - attention_mask=attention_mask, - token_type_ids=token_type_ids, - position_ids=position_ids, - head_mask=head_mask, - inputs_embeds=inputs_embeds, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - - sequence_output = outputs[0] - - logits = self.qa_outputs(sequence_output) - start_logits, end_logits = logits.split(1, dim=-1) - start_logits = start_logits.squeeze(-1).contiguous() - end_logits = end_logits.squeeze(-1).contiguous() - - total_loss = None - if start_positions is not None and end_positions is not None: - # If we are on multi-GPU, split add a dimension - if len(start_positions.size()) > 1: - start_positions = start_positions.squeeze(-1) - if len(end_positions.size()) > 1: - end_positions = end_positions.squeeze(-1) - # sometimes the start/end positions are outside our model inputs, we ignore these terms - ignored_index = start_logits.size(1) - start_positions = start_positions.clamp(0, ignored_index) - end_positions = end_positions.clamp(0, ignored_index) - - loss_fct = CrossEntropyLoss(ignore_index=ignored_index) - start_loss = loss_fct(start_logits, start_positions) - end_loss = loss_fct(end_logits, end_positions) - total_loss = (start_loss + end_loss) / 2 - - if not return_dict: - output = (start_logits, end_logits) + outputs[2:] - return ((total_loss,) + output) if total_loss is not None else output - - return QuestionAnsweringModelOutput( - loss=total_loss, - start_logits=start_logits, - end_logits=end_logits, - hidden_states=outputs.hidden_states, - attentions=outputs.attentions, - ) diff --git a/spaces/NCTCMumbai/NCTC/models/official/nlp/modeling/layers/dense_einsum_test.py b/spaces/NCTCMumbai/NCTC/models/official/nlp/modeling/layers/dense_einsum_test.py deleted file mode 100644 index 57a60fe52fa835c09df228274d42ed7eb8f39595..0000000000000000000000000000000000000000 --- a/spaces/NCTCMumbai/NCTC/models/official/nlp/modeling/layers/dense_einsum_test.py +++ /dev/null @@ -1,123 +0,0 @@ -# Copyright 2019 The TensorFlow Authors. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -# ============================================================================== -"""Tests for Keras-based einsum layer.""" - -from __future__ import absolute_import -from __future__ import division -from __future__ import print_function - -import numpy as np -import tensorflow as tf - -from tensorflow.python.keras import keras_parameterized # pylint: disable=g-direct-tensorflow-import -from official.nlp.modeling.layers import dense_einsum - - -# This decorator runs the test in V1, V2-Eager, and V2-Functional mode. It -# guarantees forward compatibility of this code for the V2 switchover. 
-@keras_parameterized.run_all_keras_modes -class DenseEinsumLayer(keras_parameterized.TestCase): - - def test_3D_einsum_with_two_bound_dimensions(self): - test_layer = dense_einsum.DenseEinsum( - output_shape=(64,), num_summed_dimensions=2) - # Create a 4-dimensional input (the first dimension is implicit). - input_tensor = tf.keras.Input(shape=(None, 40, 80)) - _ = test_layer(input_tensor) - self.assertEqual(test_layer._einsum_string, "abcd,cde->abe") - self.assertEqual(test_layer._kernel_shape, (40, 80, 64)) - - def test_3D_einsum_with_one_bound_dimensions(self): - test_layer = dense_einsum.DenseEinsum( - output_shape=(64, 32), num_summed_dimensions=1) - # Create a 3-dimensional input (the first dimension is implicit). - input_tensor = tf.keras.Input(shape=(None, 80)) - _ = test_layer(input_tensor) - self.assertEqual(test_layer._einsum_string, "abc,cde->abde") - self.assertEqual(test_layer._kernel_shape, (80, 64, 32)) - - def test_2D_einsum_with_one_bound_dimensions(self): - test_layer = dense_einsum.DenseEinsum( - output_shape=(64,), num_summed_dimensions=1) - # Create a 3-dimensional input (the first dimension is implicit). - input_tensor = tf.keras.Input(shape=(None, 80)) - _ = test_layer(input_tensor) - self.assertEqual(test_layer._einsum_string, "abc,cd->abd") - self.assertEqual(test_layer._kernel_shape, (80, 64)) - - def test_bias_term_can_be_disabled(self): - # A layer created using the bias should have two weights. - test_layer = dense_einsum.DenseEinsum( - output_shape=64, num_summed_dimensions=1, use_bias=True) - input_tensor = tf.keras.Input(shape=(None, 80)) - _ = test_layer(input_tensor) - self.assertEqual(2, len(test_layer.get_weights())) - - # A layer created without the bias should have only one weight. - test_layer = dense_einsum.DenseEinsum( - output_shape=64, num_summed_dimensions=1, use_bias=False) - input_tensor = tf.keras.Input(shape=(None, 80)) - _ = test_layer(input_tensor) - self.assertEqual(1, len(test_layer.get_weights())) - - def test_activation(self): - # Create a model that does not use an activation. - no_activation_layer = dense_einsum.DenseEinsum( - output_shape=64, num_summed_dimensions=1, activation=None) - input_tensor = tf.keras.Input(shape=(None, 80)) - output_tensor = no_activation_layer(input_tensor) - no_activation_model = tf.keras.Model(input_tensor, output_tensor) - - # Create a model that uses a softmax activation. - activation_layer = dense_einsum.DenseEinsum( - output_shape=64, num_summed_dimensions=1, activation="softmax") - input_tensor = tf.keras.Input(shape=(None, 80)) - output_tensor = activation_layer(input_tensor) - activation_model = tf.keras.Model(input_tensor, output_tensor) - - # Make sure the models' weights are identical. - activation_model.set_weights(no_activation_model.get_weights()) - - # Predict using each model on the same input data. The output should be - # different, since one is using a softmax - even though the models' weights - # are the same. - input_values = 10 * np.random.random_sample((10, 4, 80)) - non_activated_data = no_activation_model.predict(input_values) - activated_data = activation_model.predict(input_values) - self.assertNotAllClose(activated_data, non_activated_data) - - def test_non_iterable_output_shape(self): - test_layer = dense_einsum.DenseEinsum( - output_shape=64, num_summed_dimensions=1) - # Create a 3-dimensional input (the first dimension is implicit). 
- input_tensor = tf.keras.Input(shape=(None, 80)) - _ = test_layer(input_tensor) - self.assertEqual(test_layer._einsum_string, "abc,cd->abd") - self.assertEqual(test_layer._kernel_shape, (80, 64)) - - def test_with_explicit_initializer(self): - test_layer = dense_einsum.DenseEinsum( - output_shape=(64,), - num_summed_dimensions=2, - kernel_initializer=tf.keras.initializers.TruncatedNormal(stddev=0.02)) - # Create a 4-dimensional input (the first dimension is implicit). - input_tensor = tf.keras.Input(shape=(None, 40, 80)) - _ = test_layer(input_tensor) - self.assertEqual(test_layer._einsum_string, "abcd,cde->abe") - self.assertEqual(test_layer._kernel_shape, (40, 80, 64)) - - -if __name__ == "__main__": - tf.test.main() diff --git a/spaces/NN520/AI/src/lib/hooks/use-bing.ts b/spaces/NN520/AI/src/lib/hooks/use-bing.ts deleted file mode 100644 index dcdb1667ced0cba299b0825c0e91c4732411308c..0000000000000000000000000000000000000000 --- a/spaces/NN520/AI/src/lib/hooks/use-bing.ts +++ /dev/null @@ -1,173 +0,0 @@ -'use client' - -import { useState, useCallback, useEffect, useMemo } from 'react' -import { useAtom, useAtomValue } from 'jotai' -import { chatFamily, bingConversationStyleAtom, GreetMessages, hashAtom, voiceAtom } from '@/state' -import { setConversationMessages } from './chat-history' -import { ChatMessageModel, BotId, FileItem } from '@/lib/bots/bing/types' -import { nanoid } from '../utils' -import { TTS } from '../bots/bing/tts' - -export function useBing(botId: BotId = 'bing') { - const chatAtom = useMemo(() => chatFamily({ botId, page: 'singleton' }), [botId]) - const [enableTTS] = useAtom(voiceAtom) - const speaker = useMemo(() => new TTS(), []) - const [hash, setHash] = useAtom(hashAtom) - const bingConversationStyle = useAtomValue(bingConversationStyleAtom) - const [chatState, setChatState] = useAtom(chatAtom) - const [input, setInput] = useState('') - const [attachmentList, setAttachmentList] = useState([]) - - const updateMessage = useCallback( - (messageId: string, updater: (message: ChatMessageModel) => void) => { - setChatState((draft) => { - const message = draft.messages.find((m) => m.id === messageId) - if (message) { - updater(message) - } - }) - }, - [setChatState], - ) - - const sendMessage = useCallback( - async (input: string, options = {}) => { - const botMessageId = nanoid() - const imageUrl = attachmentList?.[0]?.status === 'loaded' ? attachmentList[0].url : undefined - setChatState((draft) => { - const text = imageUrl ? `${input}\n\n![image](${imageUrl})` : input - draft.messages.push({ id: nanoid(), text, author: 'user' }, { id: botMessageId, text: '', author: 'bot' }) - setAttachmentList([]) - }) - const abortController = new AbortController() - setChatState((draft) => { - draft.generatingMessageId = botMessageId - draft.abortController = abortController - }) - speaker.reset() - await chatState.bot.sendMessage({ - prompt: input, - imageUrl: /\?bcid=([^&]+)/.test(imageUrl ?? '') ? 
`https://www.bing.com/images/blob?bcid=${RegExp.$1}` : imageUrl, - options: { - ...options, - bingConversationStyle, - }, - signal: abortController.signal, - onEvent(event) { - if (event.type === 'UPDATE_ANSWER') { - updateMessage(botMessageId, (message) => { - if (event.data.text.length > message.text.length) { - message.text = event.data.text - } - - if (event.data.spokenText && enableTTS) { - speaker.speak(event.data.spokenText) - } - - message.throttling = event.data.throttling || message.throttling - message.sourceAttributions = event.data.sourceAttributions || message.sourceAttributions - message.suggestedResponses = event.data.suggestedResponses || message.suggestedResponses - }) - } else if (event.type === 'ERROR') { - updateMessage(botMessageId, (message) => { - message.error = event.error - }) - setChatState((draft) => { - draft.abortController = undefined - draft.generatingMessageId = '' - }) - } else if (event.type === 'DONE') { - setChatState((draft) => { - draft.abortController = undefined - draft.generatingMessageId = '' - }) - } - }, - }) - }, - [botId, attachmentList, chatState.bot, setChatState, updateMessage], - ) - - const uploadImage = useCallback(async (imgUrl: string) => { - setAttachmentList([{ url: imgUrl, status: 'loading' }]) - const response = await chatState.bot.uploadImage(imgUrl, bingConversationStyle) - if (response?.blobId) { - setAttachmentList([{ url: `/api/blob?bcid=${response.blobId}`, status: 'loaded' }]) - } else { - setAttachmentList([{ url: imgUrl, status: 'error' }]) - } - }, [chatState.bot]) - - const resetConversation = useCallback(() => { - chatState.bot.resetConversation() - speaker.abort() - setChatState((draft) => { - draft.abortController = undefined - draft.generatingMessageId = '' - draft.messages = [{ author: 'bot', text: GreetMessages[Math.floor(GreetMessages.length * Math.random())], id: nanoid() }] - draft.conversationId = nanoid() - }) - }, [chatState.bot, setChatState]) - - const stopGenerating = useCallback(() => { - chatState.abortController?.abort() - if (chatState.generatingMessageId) { - updateMessage(chatState.generatingMessageId, (message) => { - if (!message.text && !message.error) { - message.text = 'Cancelled' - } - }) - } - setChatState((draft) => { - draft.generatingMessageId = '' - }) - }, [chatState.abortController, chatState.generatingMessageId, setChatState, updateMessage]) - - useEffect(() => { - if (chatState.messages.length) { - setConversationMessages(botId, chatState.conversationId, chatState.messages) - } - }, [botId, chatState.conversationId, chatState.messages]) - - useEffect(() => { - if (hash === 'reset') { - resetConversation() - setHash('') - } - }, [hash, setHash]) - - const chat = useMemo( - () => ({ - botId, - bot: chatState.bot, - isSpeaking: speaker.isSpeaking, - messages: chatState.messages, - sendMessage, - setInput, - input, - resetConversation, - generating: !!chatState.generatingMessageId, - stopGenerating, - uploadImage, - setAttachmentList, - attachmentList, - }), - [ - botId, - bingConversationStyle, - chatState.bot, - chatState.generatingMessageId, - chatState.messages, - speaker.isSpeaking, - setInput, - input, - setAttachmentList, - attachmentList, - resetConversation, - sendMessage, - stopGenerating, - ], - ) - - return chat -} diff --git a/spaces/NS11890/demo-app/app.py b/spaces/NS11890/demo-app/app.py deleted file mode 100644 index edce3843b2765433cf73dcdbd7b576ff6521c060..0000000000000000000000000000000000000000 --- a/spaces/NS11890/demo-app/app.py +++ /dev/null @@ -1,9 +0,0 @@ -from 
transformers import pipeline - -pipe = pipeline('text-generation') - -user_input = input("Enter some text: ") -if user_input: - output = pipe(user_input, max_length=50, num_return_sequences=1, do_sample=True) - generated_text = output[0]['generated_text'] - print(f"Generated Text: {generated_text}") diff --git a/spaces/Neo-Salvatore/translate-locale/app.py b/spaces/Neo-Salvatore/translate-locale/app.py deleted file mode 100644 index 1bace059627cf702bf356f1a8289f3ef7c82598d..0000000000000000000000000000000000000000 --- a/spaces/Neo-Salvatore/translate-locale/app.py +++ /dev/null @@ -1,82 +0,0 @@ -import streamlit as st -import asyncio -from utils import ( - parse_docx, - parse_pdf, - parse_txt, - parse_csv, - text_to_docs, - parse_any, - trans_docs -) -from openai.error import OpenAIError - -st.title('Translate-Locale') - -def set_openai_api_key(api_key: str): - st.session_state["OPENAI_API_KEY"] = api_key - -def set_target_language(language: str): - st.session_state['LANGUAGE'] = language - -def clear_submit(): - st.session_state["submit"] = False - -index = None -doc = None - -user_secret = st.text_input( - "OpenAI API Key", - type="password", - placeholder="Paste your OpenAI API key here (sk-...)", - help="You can get your API key from https://platform.openai.com/account/api-keys.", - value=st.session_state.get("OPENAI_API_KEY", ""), - ) -if user_secret: - set_openai_api_key(user_secret) - -language = st.selectbox( - 'Select the language you want to translate', - ('English', 'Japanese', 'Simplified Chinese', 'Traditional Chinese'), -) -if language: - set_target_language(language) -else: - st.session_state['LANGUAGE'] = 'English' - -uploaded_file = st.file_uploader( - "Upload a pdf, docx, or txt file", - type=["pdf", "docx", "txt"], - help="Scanned documents are not supported yet!", -) - -button = st.button("Start translation") -if button: - if uploaded_file is not None: - if uploaded_file.name.endswith(".pdf"): - doc = parse_pdf(uploaded_file) - elif uploaded_file.name.endswith(".docx"): - doc = parse_docx(uploaded_file) - # elif uploaded_file.name.endswith(".csv"): - # doc = parse_csv(uploaded_file) - elif uploaded_file.name.endswith(".txt"): - doc = parse_txt(uploaded_file) - # else: - # doc = parse_any(uploaded_file) - else: - st.error("File type not supported") - #doc = None - text = text_to_docs(doc) - # st.write(text) - try: - with st.spinner("Translate file... 
This may take a while⏳"): - index = trans_docs(text, language) - if index: - st.download_button('Download translated text', index) - st.session_state["api_key_configured"] = True - except OpenAIError as e: - st.error(e._message) - else: - st.warning('Please upload the file first.') - - diff --git a/spaces/OAOA/DifFace/datapipe/prepare/face/degradation_split.py b/spaces/OAOA/DifFace/datapipe/prepare/face/degradation_split.py deleted file mode 100644 index eaa5d1abbaabaa09c917792b15ba5ce2cfb1db4c..0000000000000000000000000000000000000000 --- a/spaces/OAOA/DifFace/datapipe/prepare/face/degradation_split.py +++ /dev/null @@ -1,72 +0,0 @@ -#!/usr/bin/env python -# -*- coding:utf-8 -*- -# Power by Zongsheng Yue 2022-07-16 12:11:42 - -import sys -from pathlib import Path -sys.path.append(str(Path(__file__).resolve().parents[3])) - -import os -import math -import torch -import random -import argparse -import numpy as np -from einops import rearrange - -from utils import util_image -from utils import util_common - -from datapipe.face_degradation_testing import face_degradation - -parser = argparse.ArgumentParser() -parser.add_argument("--lq_dir", type=str, default='', help="floder for the lq image") -parser.add_argument("--source_txt", type=str, default='', help="ffhq or celeba") -parser.add_argument("--prefix", type=str, default='celeba512', help="Data type") -parser.add_argument("--seed", type=int, default=10000, help="Random seed") -args = parser.parse_args() - -qf_list = [30, 40, 50, 60, 70] # quality factor for jpeg compression -sf_list = [4, 8, 16, 24, 30] # scale factor for upser-resolution -nf_list = [1, 5, 10, 15, 20] # noise level for gaussian noise -sig_list = [2, 4, 6, 8, 10, 12, 14] # sigma for gaussian kernel -theta_list = [x*math.pi for x in [0, 0.25, 0.5, 0.75]] # angle for gaussian kernel -num_val = len(qf_list) * len(sf_list) * len(nf_list) * len(sig_list) * len(theta_list) - -# setting seed -random.seed(args.seed) -np.random.seed(args.seed) -torch.manual_seed(args.seed) - -files_path = util_common.readline_txt(args.source_txt) -assert num_val <= len(files_path) -print(f'Number of images in validation: {num_val}') - -save_dir = Path(args.lq_dir).parent / (Path(args.lq_dir).stem+'_split') -if not save_dir.exists(): - save_dir.mkdir() - -for sf_target in sf_list: - num_iters = 0 - - num_sf = 0 - file_path = save_dir / f"{args.prefix}_val_sf{sf_target}.txt" - if file_path.exists(): - file_path.unlink() - with open(file_path, mode='w') as ff: - for qf in qf_list: - for sf in sf_list: - for nf in nf_list: - for sig_x in sig_list: - for theta in theta_list: - - im_name = Path(files_path[num_iters]).name - im_path = str(Path(args.lq_dir).parent / im_name) - if sf == sf_target: - ff.write(im_path+'\n') - num_sf += 1 - - num_iters += 1 - - print(f'{num_sf} images for sf: {sf_target}') - diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/speech_synthesis/docs/common_voice_example.md b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/speech_synthesis/docs/common_voice_example.md deleted file mode 100644 index 40e841b284a7e34b458b286eb0bb60e33c0601da..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/speech_synthesis/docs/common_voice_example.md +++ /dev/null @@ -1,56 +0,0 @@ -[[Back]](..) - -# Common Voice - -[Common Voice](https://commonvoice.mozilla.org/en/datasets) is a public domain speech corpus with 11.2K hours of read -speech in 76 languages (the latest version 7.0). 
We provide examples for building -[Transformer](https://arxiv.org/abs/1809.08895) models on this dataset. - - -## Data preparation -[Download](https://commonvoice.mozilla.org/en/datasets) and unpack Common Voice v4 to a path `${DATA_ROOT}/${LANG_ID}`. -Create splits and generate audio manifests with -```bash -python -m examples.speech_synthesis.preprocessing.get_common_voice_audio_manifest \ - --data-root ${DATA_ROOT} \ - --lang ${LANG_ID} \ - --output-manifest-root ${AUDIO_MANIFEST_ROOT} --convert-to-wav -``` - -Then, extract log-Mel spectrograms, generate feature manifest and create data configuration YAML with -```bash -python -m examples.speech_synthesis.preprocessing.get_feature_manifest \ - --audio-manifest-root ${AUDIO_MANIFEST_ROOT} \ - --output-root ${FEATURE_MANIFEST_ROOT} \ - --ipa-vocab --lang ${LANG_ID} -``` -where we use phoneme inputs (`--ipa-vocab`) as example. - -To denoise audio and trim leading/trailing silence using signal processing based VAD, run -```bash -for SPLIT in dev test train; do - python -m examples.speech_synthesis.preprocessing.denoise_and_vad_audio \ - --audio-manifest ${AUDIO_MANIFEST_ROOT}/${SPLIT}.audio.tsv \ - --output-dir ${PROCESSED_DATA_ROOT} \ - --denoise --vad --vad-agg-level 2 -done -``` - - -## Training -(Please refer to [the LJSpeech example](../docs/ljspeech_example.md#transformer).) - - -## Inference -(Please refer to [the LJSpeech example](../docs/ljspeech_example.md#inference).) - -## Automatic Evaluation -(Please refer to [the LJSpeech example](../docs/ljspeech_example.md#automatic-evaluation).) - -## Results - -| Language | Speakers | --arch | Params | Test MCD | Model | -|---|---|---|---|---|---| -| English | 200 | tts_transformer | 54M | 3.8 | [Download](https://dl.fbaipublicfiles.com/fairseq/s2/cv4_en200_transformer_phn.tar) | - -[[Back]](..) diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/modules/quantization/pq/utils.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/modules/quantization/pq/utils.py deleted file mode 100644 index 14c015b7c19aae65812e864cf1d95ef3d39de606..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/modules/quantization/pq/utils.py +++ /dev/null @@ -1,374 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import logging -import re -from operator import attrgetter, itemgetter -import torch -import numpy as np -import torch.distributed as dist -import torch.nn as nn - -from .modules import PQConv2d, PQEmbedding, PQLinear -from .pq import PQ - - -def quantize_model_( - model, - size_tracker, - layers_to_quantize, - block_sizes_config, - n_centroids_config, - step=0, - n_iter=15, - eps=1e-6, - max_tentatives=100, - remove_weights=False, - verbose=True, - state_dict=None, -): - """ - Quantize a model in-place by stages. All the targeted - layers are replaced by their quantized counterpart, - and the model is ready for the finetuning of the - centroids in a standard training loop (no modifications - required). Note that we do not quantize biases. 
- - Args: - - model: a nn.Module - - size_tracker: useful for tracking quatization statistics - - layers_to_quantize: a list containing regexps for - filtering the layers to quantize at each stage according - to their name (as in model.named_parameters()) - - block_sizes_config: dict like - { - 'Conv2d': ('kernel_size', {'(3, 3)': 9, '(1, 1)': 4}), - 'Linear': ('in_features', {'*': 8}) - } - For instance, all conv2d layers with kernel size 3x3 have - a block size of 9 and all Linear layers are quantized with - a block size of 8, irrespective of their size. - - n_centroids_config: dict like - { - 'Conv2d': ('kernel_size', {'*': 256}), - 'Linear': ('in_features', {'*': 256}) - } - For instance, all conv2d layers are quantized with 256 centroids - - step: the layers to quantize inplace corresponding - to layers_to_quantize[step] - """ - - quantized_layers = get_layers(model, layers_to_quantize[step], remove_weights=remove_weights) - - for layer in quantized_layers: - - # book-keeping - is_master_process = (not dist.is_initialized()) or ( - dist.is_initialized() and dist.get_rank() == 0 - ) - verbose = verbose and is_master_process - - # get block size and centroids - module = attrgetter(layer)(model) - block_size = get_param(module, layer, block_sizes_config) - n_centroids = get_param(module, layer, n_centroids_config) - if verbose: - logging.info( - f"Quantizing layer {layer} with block size {block_size} and {n_centroids} centroids" - ) - - # quantize layer - weight = module.weight.data.clone() - is_bias = "bias" in [x[0] for x in module.named_parameters()] - bias = module.bias.data.clone() if is_bias else None - quantizer = PQ( - weight, - block_size, - n_centroids=n_centroids, - n_iter=n_iter, - eps=eps, - max_tentatives=max_tentatives, - verbose=verbose, - ) - - # quantization performed on all GPUs with same seed - quantizer.encode() - centroids = quantizer.centroids.contiguous() - assignments = quantizer.assignments.contiguous() - - # If n_iter = 0 and state_dict is provided, then - # we initialize random assignments and centroids to - # random values of the appropriate dimensions - # because the quantized model parameters will - # overwritten by the state_dict later on. - if n_iter == 0 and state_dict: - # Initialize random centroids of the correct size - centroids = torch.rand(centroids.size()) - centroids.cuda() - # Get counts and assignment keys from layer in loaded checkpoint. - counts_key = layer+"."+"counts" - assignment_key = layer+"."+"assignments" - # Get number of different bins to include. - counts = list(state_dict[counts_key].shape)[0] - print(layer) - print(state_dict[counts_key]) - print(counts) - # Initialize random assignments of the correct size - # with an appropriate number of bins. 
- num_assignments = list(state_dict[assignment_key].shape)[0] - num_extra = num_assignments - counts - print(num_assignments) - print(num_extra) - assignments_bins = torch.arange(counts) - assignments_rand = torch.randint(0, counts-1, (num_extra, )) - assignments = torch.cat((assignments_bins, assignments_rand), 0) - # assignments = assignments.type(torch.IntTensor) - assignments.cuda() - print("assignments") - print(assignments) - - # broadcast results to make sure weights are up-to-date - if dist.is_initialized(): - dist.broadcast(centroids, 0) - dist.broadcast(assignments, 0) - - # instantiate the quantized counterpart - if isinstance(module, nn.Linear): - out_features, in_features = map( - lambda k: module.__dict__[k], ["out_features", "in_features"] - ) - quantized_module = PQLinear( - centroids, assignments, bias, in_features, out_features - ) - elif isinstance(module, nn.Embedding): - num_embeddings, embedding_dim = map( - lambda k: module.__dict__[k], ["num_embeddings", "embedding_dim"] - ) - quantized_module = PQEmbedding( - centroids, assignments, num_embeddings, embedding_dim - ) - elif isinstance(module, nn.Conv2d): - out_channels, in_channels, kernel_size = map( - lambda k: module.__dict__[k], - ["out_channels", "in_channels", "kernel_size"], - ) - stride, padding, dilation, groups, padding_mode = map( - lambda k: module.__dict__[k], - ["stride", "padding", "dilation", "groups", "padding_mode"], - ) - - quantized_module = PQConv2d( - centroids, - assignments, - bias, - in_channels, - out_channels, - kernel_size, - stride=stride, - padding=padding, - dilation=dilation, - groups=groups, - padding_mode=padding_mode, - ) - else: - raise ValueError(f"Module {module} not yet supported for quantization") - - # replace layer by its quantized counterpart - attrsetter(layer)(model, quantized_module) - - # update statistics - size_tracker.update(weight, block_size, n_centroids) - - # return name of quantized layers - return quantized_layers - - -def get_layers(model, filter_regexp, remove_weights=False): - """ - Filters out the layers according to a regexp. Note that - we omit biases. - - Args: - - model: a nn.Module - - filter_regexp: a regexp to filter the layers to keep - according to their name in model.named_parameters(). - For instance, the regexp: - - down_layers\\.[123456]\\.(conv[12]|identity\\.conv)) - - is keeping blocks down_layers from 1 to 6, and inside - each block is keeping conv1, conv2 and identity.conv. - - Remarks: - - We add (module\\.)? at the beginning of the regexp to - account for the possible use of nn.parallel.DataParallel - """ - - # get all parameter names - all_layers = map(itemgetter(0), model.named_parameters()) - - # remove biases - all_layers = filter(lambda x: "bias" not in x, all_layers) - - # remove .weight in all other names (or .weight_orig is spectral norm) - all_layers = map(lambda x: x.replace(".weight_orig", ""), all_layers) - # remove weights indicates whether the weights extension should be removed, in addition to - # weight_orig and weight extension on names - if remove_weights: - all_layers = map(lambda x: x.replace(".weights", ""), all_layers) - all_layers = map(lambda x: x.replace(".weight", ""), all_layers) - - # return filtered layers - filter_regexp = "(module\\.)?" + "(" + filter_regexp + ")" - r = re.compile(filter_regexp) - - return list(filter(r.match, all_layers)) - - -def get_param(module, layer_name, param_config): - """ - Given a quantization configuration, get the right parameter - for the module to be quantized. 
- - Args: - - module: a nn.Module - - layer_name: the name of the layer - - param_config: a dict like - { - 'Conv2d': ('kernel_size', {'(3, 3)': 9, '(1, 1)': 4}), - 'Linear': ('in_features', {'*': 8}) - } - For instance, all conv2d layers with kernel size 3x3 have - a block size of 9 and all Linear layers are quantized with - a block size of 8, irrespective of their size. - - Remarks: - - if 'fuzzy_name' is passed as a parameter, layers whose layer_name - include 'fuzzy_name' will be assigned the given parameter. - In the following example, conv.expand layers will have a block - size of 9 while conv.reduce will have a block size of 4 and all - other layers will have a block size of 2. - { - 'Conv2d': ('fuzzy_name', {'expand': 9, 'reduce': 4, '*': 2}), - 'Linear': ('fuzzy_name', {'classifier': 8, 'projection': 4}) - } - - """ - - layer_type = module.__class__.__name__ - - if layer_type not in param_config: - raise KeyError(f"Layer type {layer_type} not in config for layer {module}") - - feature, params = param_config[module.__class__.__name__] - - if feature != "fuzzy_name": - feature_value = str(getattr(module, feature)) - if feature_value not in params: - if "*" in params: - feature_value = "*" - else: - raise KeyError( - f"{feature}={feature_value} not in config for layer {module}" - ) - else: - feature_values = [name for name in params if name in layer_name] - if len(feature_values) == 0: - if "*" in params: - feature_value = "*" - else: - raise KeyError(f"name={layer_name} not in config for {module}") - else: - feature_value = feature_values[0] - - return params[feature_value] - - -class SizeTracker(object): - """ - Class to keep track of the compressed network size with iPQ. - - Args: - - model: a nn.Module - - Remarks: - - The compressed size is the sum of three components - for each layer in the network: - (1) Storing the centroids given by iPQ in fp16 - (2) Storing the assignments of the blocks in int8 - (3) Storing all non-compressed elements such as biases - - This cost in only valid if we use 256 centroids (then - indexing can indeed by done with int8). - """ - - def __init__(self, model): - self.model = model - self.size_non_compressed_model = self.compute_size() - self.size_non_quantized = self.size_non_compressed_model - self.size_index = 0 - self.size_centroids = 0 - self.n_quantized_layers = 0 - - def compute_size(self): - """ - Computes the size of the model (in MB). - """ - - res = 0 - for _, p in self.model.named_parameters(): - res += p.numel() - return res * 4 / 1024 / 1024 - - def update(self, W, block_size, n_centroids): - """ - Updates the running statistics when quantizing a new layer. - """ - - # bits per weights - bits_per_weight = np.log2(n_centroids) / block_size - self.n_quantized_layers += 1 - - # size of indexing the subvectors of size block_size (in MB) - size_index_layer = bits_per_weight * W.numel() / 8 / 1024 / 1024 - self.size_index += size_index_layer - - # size of the centroids stored in float16 (in MB) - size_centroids_layer = n_centroids * block_size * 2 / 1024 / 1024 - self.size_centroids += size_centroids_layer - - # size of non-compressed layers, e.g. 
LayerNorms or biases (in MB) - size_uncompressed_layer = W.numel() * 4 / 1024 / 1024 - self.size_non_quantized -= size_uncompressed_layer - - def __repr__(self): - size_compressed = ( - self.size_index + self.size_centroids + self.size_non_quantized - ) - compression_ratio = self.size_non_compressed_model / size_compressed # NOQA - return ( - f"Non-compressed model size: {self.size_non_compressed_model:.2f} MB. " - f"After quantizing {self.n_quantized_layers} layers, size " - f"(indexing + centroids + other): {self.size_index:.2f} MB + " - f"{self.size_centroids:.2f} MB + {self.size_non_quantized:.2f} MB = " - f"{size_compressed:.2f} MB, compression ratio: {compression_ratio:.2f}x" - ) - - -def attrsetter(*items): - def resolve_attr(obj, attr): - attrs = attr.split(".") - head = attrs[:-1] - tail = attrs[-1] - - for name in head: - obj = getattr(obj, name) - return obj, tail - - def g(obj, val): - for attr in items: - resolved_obj, resolved_attr = resolve_attr(obj, attr) - setattr(resolved_obj, resolved_attr, val) - - return g diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/modules/quant_noise.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/modules/quant_noise.py deleted file mode 100644 index d777dfbb6c1bf6a9b769dfdaec35d5ef084c8a8b..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/modules/quant_noise.py +++ /dev/null @@ -1,107 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import torch -import torch.nn as nn - - -def quant_noise(module, p, block_size): - """ - Wraps modules and applies quantization noise to the weights for - subsequent quantization with Iterative Product Quantization as - described in "Training with Quantization Noise for Extreme Model Compression" - - Args: - - module: nn.Module - - p: amount of Quantization Noise - - block_size: size of the blocks for subsequent quantization with iPQ - - Remarks: - - Module weights must have the right sizes wrt the block size - - Only Linear, Embedding and Conv2d modules are supported for the moment - - For more detail on how to quantize by blocks with convolutional weights, - see "And the Bit Goes Down: Revisiting the Quantization of Neural Networks" - - We implement the simplest form of noise here as stated in the paper - which consists in randomly dropping blocks - """ - - # if no quantization noise, don't register hook - if p <= 0: - return module - - # supported modules - assert isinstance(module, (nn.Linear, nn.Embedding, nn.Conv2d)) - - # test whether module.weight has the right sizes wrt block_size - is_conv = module.weight.ndim == 4 - - # 2D matrix - if not is_conv: - assert ( - module.weight.size(1) % block_size == 0 - ), "Input features must be a multiple of block sizes" - - # 4D matrix - else: - # 1x1 convolutions - if module.kernel_size == (1, 1): - assert ( - module.in_channels % block_size == 0 - ), "Input channels must be a multiple of block sizes" - # regular convolutions - else: - k = module.kernel_size[0] * module.kernel_size[1] - assert k % block_size == 0, "Kernel size must be a multiple of block size" - - def _forward_pre_hook(mod, input): - # no noise for evaluation - if mod.training: - if not is_conv: - # gather weight and sizes - weight = mod.weight - in_features = weight.size(1) - out_features = weight.size(0) - - # split weight matrix into blocks and randomly drop selected blocks - mask = torch.zeros( 
- in_features // block_size * out_features, device=weight.device - ) - mask.bernoulli_(p) - mask = mask.repeat_interleave(block_size, -1).view(-1, in_features) - - else: - # gather weight and sizes - weight = mod.weight - in_channels = mod.in_channels - out_channels = mod.out_channels - - # split weight matrix into blocks and randomly drop selected blocks - if mod.kernel_size == (1, 1): - mask = torch.zeros( - int(in_channels // block_size * out_channels), - device=weight.device, - ) - mask.bernoulli_(p) - mask = mask.repeat_interleave(block_size, -1).view(-1, in_channels) - else: - mask = torch.zeros( - weight.size(0), weight.size(1), device=weight.device - ) - mask.bernoulli_(p) - mask = ( - mask.unsqueeze(2) - .unsqueeze(3) - .repeat(1, 1, mod.kernel_size[0], mod.kernel_size[1]) - ) - - # scale weights and apply mask - mask = mask.to( - torch.bool - ) # x.bool() is not currently supported in TorchScript - s = 1 / (1 - p) - mod.weight.data = s * weight.masked_fill(mask, 0) - - module.register_forward_pre_hook(_forward_pre_hook) - return module diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/simultaneous_translation/docs/enja-waitk.md b/spaces/OFA-Sys/OFA-vqa/fairseq/examples/simultaneous_translation/docs/enja-waitk.md deleted file mode 100644 index fb9d82576f80b4405564a99774fc98ac2fe6ad3b..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/fairseq/examples/simultaneous_translation/docs/enja-waitk.md +++ /dev/null @@ -1,106 +0,0 @@ -# An example of English to Japaneses Simultaneous Translation System - -This is an example of training and evaluating a transformer *wait-k* English to Japanese simultaneous text-to-text translation model. - -## Data Preparation -This section introduces the data preparation for training and evaluation. -If you only want to evaluate the model, please jump to [Inference & Evaluation](#inference-&-evaluation) - -For illustration, we only use the following subsets of the available data from [WMT20 news translation task](http://www.statmt.org/wmt20/translation-task.html), which results in 7,815,391 sentence pairs. -- News Commentary v16 -- Wiki Titles v3 -- WikiMatrix V1 -- Japanese-English Subtitle Corpus -- The Kyoto Free Translation Task Corpus - -We use WMT20 development data as development set. Training `transformer_vaswani_wmt_en_de_big` model on such amount of data will result in 17.3 BLEU with greedy search and 19.7 with beam (10) search. Notice that a better performance can be achieved with the full WMT training data. - -We use [sentencepiece](https://github.com/google/sentencepiece) toolkit to tokenize the data with a vocabulary size of 32000. -Additionally, we filtered out the sentences longer than 200 words after tokenization. -Assuming the tokenized text data is saved at `${DATA_DIR}`, -we prepare the data binary with the following command. - -```bash -fairseq-preprocess \ - --source-lang en --target-lang ja \ - --trainpref ${DATA_DIR}/train \ - --validpref ${DATA_DIR}/dev \ - --testpref ${DATA_DIR}/test \ - --destdir ${WMT20_ENJA_DATA_BIN} \ - --nwordstgt 32000 --nwordssrc 32000 \ - --workers 20 -``` - -## Simultaneous Translation Model Training -To train a wait-k `(k=10)` model. 
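-As a quick reminder of what the lagging `k` controls (this is the standard wait-k reading policy, assumed here as background rather than taken from this document): the decoder emits the t-th target token only after the first min(k + t - 1, |x|) tokens of a length-|x| source sentence have been read. A minimal sketch of that schedule, with a hypothetical helper name:
-
-```python
-def waitk_read_length(t: int, k: int, src_len: int) -> int:
-    # Source tokens that must be read before emitting target token t under wait-k;
-    # once the whole source has been read, the decoder simply keeps writing.
-    return min(k + t - 1, src_len)
-```
-
-The `--waitk-lagging 10` flag in the command below sets k = 10.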
-```bash
-fairseq-train ${WMT20_ENJA_DATA_BIN} \
-    --save-dir ${SAVE_DIR} \
-    --simul-type waitk \
-    --waitk-lagging 10 \
-    --max-epoch 70 \
-    --arch transformer_monotonic_vaswani_wmt_en_de_big \
-    --optimizer adam \
-    --adam-betas '(0.9, 0.98)' \
-    --lr-scheduler inverse_sqrt \
-    --warmup-init-lr 1e-07 \
-    --warmup-updates 4000 \
-    --lr 0.0005 \
-    --stop-min-lr 1e-09 \
-    --clip-norm 10.0 \
-    --dropout 0.3 \
-    --weight-decay 0.0 \
-    --criterion label_smoothed_cross_entropy \
-    --label-smoothing 0.1 \
-    --max-tokens 3584
-```
-This command is for training on 8 GPUs. Equivalently, the model can be trained on one GPU with `--update-freq 8`.
-
-## Inference & Evaluation
-First of all, install [SimulEval](https://github.com/facebookresearch/SimulEval) for evaluation.
-
-```bash
-git clone https://github.com/facebookresearch/SimulEval.git
-cd SimulEval
-pip install -e .
-```
-
-The following command runs the evaluation.
-Assuming the source and target reference files are `${SRC_FILE}` and `${TGT_FILE}`, and the sentencepiece model file for English is saved at `${SRC_SPM_PATH}`.
-
-
-```bash
-simuleval \
-    --source ${SRC_FILE} \
-    --target ${TGT_FILE} \
-    --data-bin ${WMT20_ENJA_DATA_BIN} \
-    --sacrebleu-tokenizer ja-mecab \
-    --eval-latency-unit char \
-    --no-space \
-    --src-splitter-type sentencepiecemodel \
-    --src-splitter-path ${SRC_SPM_PATH} \
-    --agent ${FAIRSEQ}/examples/simultaneous_translation/agents/simul_trans_text_agent_enja.py \
-    --model-path ${SAVE_DIR}/${CHECKPOINT_FILENAME} \
-    --output ${OUTPUT} \
-    --scores
-```
-
-The `--data-bin` should be the same as in the previous sections if you prepared the data from scratch.
-If you only want to run evaluation, a prepared data directory can be found [here](https://dl.fbaipublicfiles.com/simultaneous_translation/wmt20_enja_medium_databin.tgz) and a pretrained checkpoint (wait-k=10 model) can be downloaded from [here](https://dl.fbaipublicfiles.com/simultaneous_translation/wmt20_enja_medium_wait10_ckpt.pt).
-
-The output should look like this:
-```bash
-{
-    "Quality": {
-        "BLEU": 11.442253287568398
-    },
-    "Latency": {
-        "AL": 8.6587861866951,
-        "AP": 0.7863304776251316,
-        "DAL": 9.477850951194764
-    }
-}
-```
-The latency is evaluated in characters (`--eval-latency-unit char`) on the target side. The translation quality is evaluated with `sacrebleu` using the `MeCab` tokenizer (`--sacrebleu-tokenizer ja-mecab`). `--no-space` indicates that no space is added when merging the predicted words.
-
-If the `--output ${OUTPUT}` option is used, the detailed log and scores will be stored under the `${OUTPUT}` directory.
diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/criterions/composite_loss.py b/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/criterions/composite_loss.py
deleted file mode 100644
index 98e835fa6e4c0bcad062df9c519701bf795c98be..0000000000000000000000000000000000000000
--- a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/criterions/composite_loss.py
+++ /dev/null
@@ -1,100 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
- -from fairseq import utils -from fairseq.criterions import LegacyFairseqCriterion, register_criterion -from torch import nn - - -@register_criterion("composite_loss") -class CompositeLoss(LegacyFairseqCriterion): - """This is a composite loss that, given a list of model outputs and a list of targets, - computes an average of losses for each output-target pair""" - - def __init__(self, args, task): - super().__init__(args, task) - self.underlying_criterion = args.underlying_criterion - - @staticmethod - def add_args(parser): - """Add criterion-specific arguments to the parser.""" - # fmt: off - parser.add_argument('--underlying-criterion', type=str, metavar='VAL', required=True, - help='underlying criterion to use for the composite loss') - # fmt: on - - @staticmethod - def build_underlying_criterion(args, task): - saved_criterion = args.criterion - args.criterion = args.underlying_criterion - assert saved_criterion != args.underlying_criterion - underlying_criterion = task.build_criterion(args) - args.criterion = saved_criterion - return underlying_criterion - - @classmethod - def build_criterion(cls, args, task): - underlying_criterion = CompositeLoss.build_underlying_criterion(args, task) - - class FakeModel(nn.Module): - def __init__(self, model, net_out, target): - super().__init__() - self.model = model - self.net_out = net_out - self.target = target - - def forward(self, **unused): - return self.net_out - - def get_normalized_probs(self, net_output, log_probs, sample=None): - return self.model.get_normalized_probs( - net_output, log_probs, sample=sample - ) - - def get_targets(self, *unused): - return self.target - - @property - def decoder(self): - return self.model.decoder - - class _CompositeLoss(LegacyFairseqCriterion): - def __init__(self, args, task, underlying_criterion): - super().__init__(args, task) - self.underlying_criterion = underlying_criterion - - def forward(self, model, sample, reduce=True): - net_outputs = model(**sample["net_input"]) - targets = sample["target"] - - bsz = targets[0].size(0) - loss = net_outputs[0][0].new(1 if reduce else bsz).float().zero_() - - sample_size = 0 - logging_output = {} - for o, t in zip(net_outputs[0], targets): - m = FakeModel(model, (o, net_outputs[1]), t) - sample["target"] = t - l, ss, logging_output = self.underlying_criterion(m, sample, reduce) - loss += l - sample_size += ss - - loss.div_(len(targets)) - sample_size /= len(targets) - - logging_output["loss"] = utils.item(loss.data) if reduce else loss.data - return loss, sample_size, logging_output - - @staticmethod - def aggregate_logging_outputs(logging_outputs): - return underlying_criterion.__class__.aggregate_logging_outputs( - logging_outputs - ) - - @staticmethod - def reduce_metrics(logging_outputs) -> None: - underlying_criterion.__class__.reduce_metrics(logging_outputs) - - return _CompositeLoss(args, task, underlying_criterion) diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/distributed/utils.py b/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/distributed/utils.py deleted file mode 100644 index dbf318e7035603c1294eb45af7e98097df36289d..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/distributed/utils.py +++ /dev/null @@ -1,826 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -import io -import logging -import os -import pickle -import random -import socket -import struct -import subprocess -import warnings -from argparse import Namespace -from collections import OrderedDict -from dataclasses import dataclass -from typing import Any, Dict, List, Mapping, Optional -import sys -import time - -import torch -import torch.distributed as dist -from fairseq.dataclass.configs import DistributedTrainingConfig, FairseqConfig -from omegaconf import open_dict - -try: - import torch_xla.core.xla_model as xm -except ImportError: - xm = None - - -# Flag to indicate if we're using Megatron -# NOTE: this is a temporary hack until we move away from Megatron's model parallel init -_USE_MEGATRON = False - -# Whether to use XLA ops (e.g., on TPUs) instead of CUDA ops. -_USE_XLA = False - - -logger = logging.getLogger(__name__) - - -def is_master(cfg: DistributedTrainingConfig): - return cfg.distributed_rank == 0 - - -def infer_init_method(cfg: DistributedTrainingConfig, force_distributed=False): - if cfg.distributed_init_method is not None or cfg.tpu: - return - - num_pipelines_per_node = None - if cfg.pipeline_model_parallel: - num_pipeline_devices, num_pipelines_per_node = _pipeline_parallel_pre_init(cfg) - - if all( - key in os.environ - for key in ["MASTER_ADDR", "MASTER_PORT", "WORLD_SIZE", "RANK"] - ): - # support torch.distributed.launch - _infer_torch_distributed_launch_init(cfg) - elif cfg.distributed_port > 0: - # we can determine the init method automatically for Slurm - _infer_slurm_init(cfg, num_pipelines_per_node) - elif cfg.distributed_world_size > 1 or force_distributed: - # fallback for single node with multiple GPUs - _infer_single_node_init(cfg) - - if cfg.pipeline_model_parallel: - _pipeline_parallel_post_init(cfg, num_pipeline_devices, num_pipelines_per_node) - elif not cfg.distributed_no_spawn: - with open_dict(cfg): - cfg.distributed_num_procs = min( - torch.cuda.device_count(), cfg.distributed_world_size - ) - - -def _infer_torch_distributed_launch_init(cfg: DistributedTrainingConfig): - cfg.distributed_init_method = "env://" - cfg.distributed_world_size = int(os.environ["WORLD_SIZE"]) - cfg.distributed_rank = int(os.environ["RANK"]) - # processes are created by torch.distributed.launch - cfg.distributed_no_spawn = True - - -def _infer_slurm_init(cfg: DistributedTrainingConfig, num_pipelines_per_node): - node_list = os.environ.get("SLURM_STEP_NODELIST") - if node_list is None: - node_list = os.environ.get("SLURM_JOB_NODELIST") - if node_list is not None: - try: - hostnames = subprocess.check_output( - ["scontrol", "show", "hostnames", node_list] - ) - cfg.distributed_init_method = "tcp://{host}:{port}".format( - host=hostnames.split()[0].decode("utf-8"), - port=cfg.distributed_port, - ) - nnodes = int(os.environ.get("SLURM_NNODES")) - ntasks_per_node = os.environ.get("SLURM_NTASKS_PER_NODE") - if ntasks_per_node is not None: - ntasks_per_node = int(ntasks_per_node) - else: - ntasks = int(os.environ.get("SLURM_NTASKS")) - nnodes = int(os.environ.get("SLURM_NNODES")) - assert ntasks % nnodes == 0 - ntasks_per_node = int(ntasks / nnodes) - if ntasks_per_node == 1: - gpus_per_node = torch.cuda.device_count() - node_id = int(os.environ.get("SLURM_NODEID")) - cfg.distributed_rank = node_id * gpus_per_node - cfg.distributed_world_size = nnodes * gpus_per_node - elif cfg.pipeline_model_parallel: - assert ntasks_per_node == num_pipelines_per_node, ( - "SLURM --ntasks-per-node must match number of pipelines per " - "node (={})".format(num_pipelines_per_node) - ) - 
cfg.distributed_no_spawn = True - # For 4-way MP on nodes with 8 GPUs, ranks will be [0, 1] on - # the first node, [1, 2] on the second node, etc. This - # matches torch.distributed.launch. - node_id = int(os.environ.get("SLURM_NODEID")) - local_id = int(os.environ.get("SLURM_LOCALID")) - cfg.distributed_rank = node_id * num_pipelines_per_node + local_id - # In the above example, device_id will always be in [0, 1], - # which also matches torch.distributed.launch. - cfg.device_id = local_id - # We also want to set distributed_world_size to be the total - # number of pipelines across all nodes. - cfg.distributed_world_size = nnodes * num_pipelines_per_node - else: - assert ntasks_per_node == cfg.distributed_world_size // nnodes - cfg.distributed_no_spawn = True - cfg.distributed_rank = int(os.environ.get("SLURM_PROCID")) - cfg.device_id = int(os.environ.get("SLURM_LOCALID")) - except subprocess.CalledProcessError as e: # scontrol failed - raise e - except FileNotFoundError: # Slurm is not installed - pass - - -def _infer_single_node_init(cfg: DistributedTrainingConfig): - assert ( - cfg.distributed_world_size <= torch.cuda.device_count() - ), f"world size is {cfg.distributed_world_size} but have {torch.cuda.device_count()} available devices" - port = random.randint(10000, 20000) - cfg.distributed_init_method = "tcp://localhost:{port}".format(port=port) - - -def _pipeline_parallel_pre_init(cfg: DistributedTrainingConfig): - from fairseq import utils - - balance_exists = ( - cfg.pipeline_balance is not None - or cfg.pipeline_encoder_balance is not None - or cfg.pipeline_decoder_balance is not None - ) - devices_exist = ( - cfg.pipeline_devices is not None - or cfg.pipeline_encoder_devices is not None - or cfg.pipeline_decoder_devices is not None - ) - if not balance_exists: - raise ValueError( - "--pipeline-balance is currently required for pipeline model parallelism" - ) - if not devices_exist: - raise ValueError( - "--pipeline-devices is currently required for pipeline model parallelism" - ) - - cfg.pipeline_balance = utils.eval_str_list(cfg.pipeline_balance, type=int) - if cfg.pipeline_devices is not None: - cfg.pipeline_devices = utils.eval_str_list(cfg.pipeline_devices, type=int) - num_pipeline_devices = len(set(cfg.pipeline_devices)) - else: - cfg.pipeline_encoder_devices = utils.eval_str_list( - cfg.pipeline_encoder_devices, type=int - ) - cfg.pipeline_decoder_devices = utils.eval_str_list( - cfg.pipeline_decoder_devices, type=int - ) - num_pipeline_devices = len( - set(cfg.pipeline_encoder_devices + cfg.pipeline_decoder_devices) - ) - gpus_per_node = torch.cuda.device_count() - assert ( - gpus_per_node >= num_pipeline_devices - and gpus_per_node % num_pipeline_devices == 0 - ), ( - "the number of unique device IDs in --pipeline-devices must evenly divide " - "the number of GPUs per node (multi-node pipelining is not yet supported)" - ) - num_pipelines_per_node = gpus_per_node // num_pipeline_devices - return num_pipeline_devices, num_pipelines_per_node - - -def _pipeline_parallel_post_init( - cfg: DistributedTrainingConfig, num_pipeline_devices, num_pipelines_per_node -): - if not cfg.distributed_no_spawn: - # When distributed_no_spawn is False, we expect distributed_rank and - # distributed_world_size to be based on the total number of GPUs, so - # we need to correct them to be based on the number of pipelines. 
- assert cfg.distributed_world_size % num_pipeline_devices == 0 - cfg.distributed_world_size = ( - cfg.distributed_world_size // num_pipeline_devices - ) - # In the case of 4-way MP on nodes with 8 GPUs, we want - # distributed_rank to be the starting GPU index for each pipeline - # i.e., 0, 2, ... - gpus_per_node = torch.cuda.device_count() - assert cfg.distributed_rank % gpus_per_node == 0 - assert cfg.distributed_rank % num_pipeline_devices == 0 - - with open_dict(cfg): - cfg.distributed_rank = cfg.distributed_rank // num_pipeline_devices - # launch one process per pipeline - cfg.distributed_num_procs = num_pipelines_per_node - - # if we have 4-way MP on a node with 8 GPUs, we want device_ids to be 0 - # and 4, indicating the starting device IDs for each pipeline - cfg.device_id *= num_pipeline_devices - - if cfg.device_id > 0: - # if there's multiple pipelines on a node (e.g., 4-way MP on an 8 - # GPU node), we need to adjust pipeline_devices accordingly - logger.debug( - "setting CUDA device={} on rank {}".format( - cfg.device_id, cfg.distributed_rank - ) - ) - torch.cuda.set_device(cfg.device_id) - with open_dict(cfg): - cfg.pipeline_devices = [cfg.device_id + d for d in cfg.pipeline_devices] - logger.info( - "setting pipeline_devices={} on rank {}".format( - cfg.pipeline_devices, cfg.distributed_rank - ) - ) - - -def distributed_init(cfg: FairseqConfig): - if isinstance(cfg, Namespace): - from fairseq.dataclass.utils import convert_namespace_to_omegaconf - - cfg = convert_namespace_to_omegaconf(cfg) - - if not cfg.common.tpu: - if torch.distributed.is_available() and torch.distributed.is_initialized(): - warnings.warn( - "Distributed is already initialized, cannot initialize twice!" - ) - else: - logger.info( - "distributed init (rank {}): {}".format( - cfg.distributed_training.distributed_rank, - cfg.distributed_training.distributed_init_method, - ) - ) - logger.info('Start init') - max_time_wait = 600 - for i in range(max_time_wait): - try: - dist.init_process_group( - backend=cfg.distributed_training.distributed_backend, - init_method=cfg.distributed_training.distributed_init_method, - world_size=cfg.distributed_training.distributed_world_size, - rank=cfg.distributed_training.distributed_rank, - ) - logger.info( - "initialized host {} as rank {}".format( - socket.gethostname(), - cfg.distributed_training.distributed_rank, - ) - ) - if torch.distributed.is_initialized(): - print("single-machine distributed training is initialized.") - break - except ValueError: - # This is caused by TCPStore failure. 
- print('Retry: {}, with value error {}'.format( - i + 1, sys.exc_info()[0])) - time.sleep(5) - if i == max_time_wait - 1: - print('k8s resource wait too long time') - exit(-1) - except Exception: - print('Retry: {}, with value error {}'.format( - i + 1, sys.exc_info()[0])) - exit(-1) - # perform a dummy all-reduce to initialize the NCCL communicator - if torch.cuda.is_available(): - dist.all_reduce(torch.zeros(1).cuda()) - - cfg.distributed_training.distributed_rank = torch.distributed.get_rank() - else: - assert xm.xrt_world_size() == cfg.distributed_training.distributed_world_size - global _USE_XLA - _USE_XLA = True - cfg.distributed_training.device_id = xm.get_local_ordinal() - cfg.distributed_training.distributed_rank = xm.get_ordinal() - xm.rendezvous("distributed_init") # wait for all workers - - if is_master(cfg.distributed_training): - logging.getLogger().setLevel(logging.INFO) - else: - logging.getLogger().setLevel(logging.WARNING) - - if cfg.common.model_parallel_size > 1: - try: - from fairseq.model_parallel.megatron.mpu import ( - initialize_model_parallel, - model_parallel_cuda_manual_seed, - ) - except ImportError: - raise ImportError( - "\n\nPlease install the megatron submodule:" - "\n\n git submodule update --init " - "fairseq/model_parallel/megatron" - ) - global _USE_MEGATRON - _USE_MEGATRON = True - initialize_model_parallel(cfg.common.model_parallel_size) - model_parallel_cuda_manual_seed(cfg.common.seed) - model_part_number = get_model_parallel_rank() - cfg.checkpoint.checkpoint_suffix += "-model_part-{0}".format(model_part_number) - - if hasattr(cfg, "model") and getattr(cfg.model, "base_layers", 0) > 0: - cfg.checkpoint.checkpoint_suffix = f"-rank-{cfg.distributed_training.distributed_rank}" - - return cfg.distributed_training.distributed_rank - - -def distributed_main(i, main, cfg: FairseqConfig, kwargs): - cfg.distributed_training.device_id = i - if torch.cuda.is_available() and not cfg.common.cpu and not cfg.common.tpu: - torch.cuda.set_device(cfg.distributed_training.device_id) - if cfg.distributed_training.distributed_rank is None: # torch.multiprocessing.spawn - cfg.distributed_training.distributed_rank = kwargs.pop("start_rank", 0) + i - - cfg.distributed_training.distributed_rank = distributed_init(cfg) - - after_distributed_init_fn = kwargs.pop("after_distributed_init_fn", None) - if after_distributed_init_fn: - cfg = after_distributed_init_fn(cfg) - - main(cfg, **kwargs) - - if torch.distributed.is_initialized(): - torch.distributed.barrier(get_global_group()) - - -def call_main(cfg: FairseqConfig, main, **kwargs): - if cfg.distributed_training.distributed_init_method is None: - infer_init_method(cfg.distributed_training) - - if cfg.distributed_training.distributed_init_method is not None: - # distributed training - if not cfg.distributed_training.distributed_no_spawn: - start_rank = cfg.distributed_training.distributed_rank - cfg.distributed_training.distributed_rank = None # assign automatically - kwargs["start_rank"] = start_rank - torch.multiprocessing.spawn( - fn=distributed_main, - args=(main, cfg, kwargs), - nprocs=min( - torch.cuda.device_count(), - cfg.distributed_training.distributed_world_size, - ), - join=True, - ) - else: - distributed_main(cfg.distributed_training.device_id, main, cfg, kwargs) - elif cfg.common.tpu and cfg.distributed_training.distributed_world_size > 1: - import torch_xla.distributed.xla_multiprocessing as xmp - - torch.multiprocessing.set_sharing_strategy("file_system") - xmp.spawn( - fn=distributed_main, - args=(main, 
cfg, kwargs), - # tpu-comment: - # 8 devices in one TPU VM, is the max processes to be spawned. - # The rest is driven by xm.distributed.xla_dist - nprocs=min(cfg.distributed_training.distributed_world_size, 8), - ) - else: - # single GPU main - main(cfg, **kwargs) - - -def use_xla(): - global _USE_XLA - return _USE_XLA - - -def new_groups(grouped_ranks: List[List[int]]): - if use_xla(): - return ("tpu", grouped_ranks) - else: - groups = [dist.new_group(g) for g in grouped_ranks] - my_group_idx = _find_my_group_index(grouped_ranks) - return groups[my_group_idx] - - -def _find_my_group_index(grouped_ranks): - my_rank = get_global_rank() - for i, group in enumerate(grouped_ranks): - if my_rank in group: - return i - raise RuntimeError - - -def _find_my_group(grouped_ranks): - index = _find_my_group_index(grouped_ranks) - return grouped_ranks[index] - - -def get_rank(group): - if use_xla(): - assert group[0] == "tpu" - my_group = _find_my_group(group[1]) - return my_group.index(get_global_rank()) - else: - return dist.get_rank(group=group) - - -def get_world_size(group): - if use_xla(): - assert group[0] == "tpu" - my_group = _find_my_group(group[1]) - return len(my_group) - elif torch.distributed.is_initialized(): - return dist.get_world_size(group=group) - else: - return 1 - - -def get_global_group(): - if use_xla(): - return new_groups([list(range(get_global_world_size()))]) - elif torch.distributed.is_initialized(): - if not hasattr(get_global_group, "_global_group"): - # ideally we could use torch.distributed.group.WORLD, but it seems - # to cause random NCCL hangs in some cases - get_global_group._global_group = dist.new_group() - return get_global_group._global_group - else: - return None - - -def get_global_rank(): - if use_xla(): - return xm.get_ordinal() - elif torch.distributed.is_initialized(): - return torch.distributed.get_rank() - else: - return 0 - - -def get_global_world_size(): - if use_xla(): - return xm.xrt_world_size() - elif torch.distributed.is_initialized(): - return torch.distributed.get_world_size() - else: - return 1 - - -def get_data_parallel_group(): - """Get the data parallel group the caller rank belongs to.""" - global _USE_MEGATRON - if _USE_MEGATRON: - from fairseq.model_parallel.megatron import mpu - - return mpu.get_data_parallel_group() - else: - return get_global_group() - - -def get_data_parallel_rank(): - """Return my rank for the data parallel group.""" - return get_rank(get_data_parallel_group()) - - -def get_data_parallel_world_size(): - """Return world size for the data parallel group.""" - return get_world_size(get_data_parallel_group()) - - -def get_model_parallel_group(): - global _USE_MEGATRON - if _USE_MEGATRON: - from fairseq.model_parallel.megatron import mpu - - return mpu.get_model_parallel_group() - else: - return None - - -def get_model_parallel_rank(): - """Return my rank for the model parallel group.""" - return get_rank(get_model_parallel_group()) - - -def get_model_parallel_world_size(): - """Return world size for the model parallel group.""" - return get_world_size(get_model_parallel_group()) - - -def all_reduce(tensor, group, op="sum"): - if use_xla(): - assert isinstance(group, tuple) and group[0] == "tpu" - tensor = [tensor] # wrap in a list to make xm.all_reduce in-place - return xm.all_reduce(op, tensor, groups=group[1])[0] - else: - if op == "sum": - op = dist.ReduceOp.SUM - elif op == "max": - op = dist.ReduceOp.MAX - else: - raise NotImplementedError - dist.all_reduce(tensor, op=op, group=group) - return tensor - - -def 
broadcast(tensor, src, group): - if use_xla(): - # XLA doesn't support broadcast, hack it with all_reduce - if get_rank(group) != src: - tensor.zero_() - all_reduce(tensor, group) - else: - dist.broadcast(tensor, src=src, group=group) - - -def all_to_all(tensor, group): - """Perform an all-to-all operation on a 1D Tensor.""" - assert tensor.dim() == 1 - split_count = get_world_size(group=group) - assert tensor.numel() % split_count == 0 - if use_xla(): - assert isinstance(group, tuple) and group[0] == "tpu" - return xm.all_to_all( - tensor, - split_dimension=0, - concat_dimension=0, - split_count=split_count, - groups=group[1], - ) - else: - output = torch.zeros_like(tensor) - dist.all_to_all_single(output, tensor, group=group) - return output - - -def all_gather(tensor, group, return_tensor=False): - """Perform an all-gather operation.""" - if use_xla(): - result = xm.all_gather(tensor, groups=group[1]) - world_size = get_world_size(group=group) - result = result.view(world_size, *tensor.size()) - if return_tensor: - return result - else: - return [result[i] for i in range(world_size)] - else: - world_size = get_world_size(group=group) - rank = get_rank(group=group) - tensor_list = [ - tensor if i == rank else torch.empty_like(tensor) for i in range(world_size) - ] - dist.all_gather(tensor_list, tensor, group=group) - if return_tensor: - return torch.stack(tensor_list, dim=0) - else: - return tensor_list - - -def all_gather_list(data, group=None, max_size=16384): - """Gathers arbitrary data from all nodes into a list. - - Similar to :func:`~torch.distributed.all_gather` but for arbitrary Python - data. Note that *data* must be picklable and any CUDA tensors will be moved - to CPU and returned on CPU as well. - - Args: - data (Any): data from the local worker to be gathered on other workers - group: group of the collective - max_size (int, optional): maximum size of the data to be gathered - across workers - """ - from fairseq import utils - - if group is None: - group = get_global_group() - torch.distributed.barrier(group=group) - rank = get_rank(group=group) - world_size = get_world_size(group=group) - - buffer_size = max_size * world_size - if ( - not hasattr(all_gather_list, "_buffer") - or all_gather_list._buffer.numel() < buffer_size - ): - all_gather_list._buffer = torch.cuda.ByteTensor(buffer_size) - all_gather_list._cpu_buffer = torch.ByteTensor(max_size).pin_memory() - buffer = all_gather_list._buffer - buffer.zero_() - cpu_buffer = all_gather_list._cpu_buffer - - data = utils.move_to_cpu(data) - enc = pickle.dumps(data) - enc_size = len(enc) - header_size = 4 # size of header that contains the length of the encoded data - size = header_size + enc_size - if size > max_size: - raise ValueError( - "encoded data size ({}) exceeds max_size ({})".format(size, max_size) - ) - - header = struct.pack(">I", enc_size) - cpu_buffer[:size] = torch.ByteTensor(list(header + enc)) - start = rank * max_size - buffer[start : start + size].copy_(cpu_buffer[:size]) - - all_reduce(buffer, group=group) - - buffer = buffer.cpu() - try: - result = [] - for i in range(world_size): - out_buffer = buffer[i * max_size : (i + 1) * max_size] - (enc_size,) = struct.unpack(">I", bytes(out_buffer[:header_size].tolist())) - if enc_size > 0: - result.append( - pickle.loads( - bytes(out_buffer[header_size : header_size + enc_size].tolist()) - ) - ) - return result - except pickle.UnpicklingError: - raise Exception( - "Unable to unpickle data from other workers. 
all_gather_list requires all " - "workers to enter the function together, so this error usually indicates " - "that the workers have fallen out of sync somehow. Workers can fall out of " - "sync if one of them runs out of memory, or if there are other conditions " - "in your training script that can cause one worker to finish an epoch " - "while other workers are still iterating over their portions of the data. " - "Try rerunning with --ddp-backend=legacy_ddp and see if that helps." - ) - - -def all_reduce_dict(data: Mapping[str, Any], device, group) -> Dict[str, Any]: - """ - AllReduce a dictionary of values across workers. We separately - reduce items that are already on the device and items on CPU for - better performance. - - Args: - data (Mapping[str, Any]): dictionary of data to all-reduce, but - cannot be a nested dictionary - device (torch.device): device for the reduction - group: group of the collective - """ - data_keys = list(data.keys()) - - # We want to separately reduce items that are already on the - # device and items on CPU for performance reasons. - cpu_data = OrderedDict() - device_data = OrderedDict() - for k in data_keys: - t = data[k] - if not torch.is_tensor(t): - cpu_data[k] = torch.tensor(t, dtype=torch.double) - elif t.device.type != device.type: - cpu_data[k] = t.to(dtype=torch.double) - else: - device_data[k] = t.to(dtype=torch.double) - - def _all_reduce_dict(data: OrderedDict): - if len(data) == 0: - return data - buf = torch.cat([t.view(-1) for t in data.values()]).to(device=device) - all_reduce(buf, group=group) - split_buf = torch.split(buf, [t.numel() for t in data.values()]) - reduced_data = [t.view_as(orig) for t, orig in zip(split_buf, data.values())] - return OrderedDict(zip(data.keys(), reduced_data)) - - cpu_data = _all_reduce_dict(cpu_data) - device_data = _all_reduce_dict(device_data) - - def get_from_stack(key): - if key in cpu_data: - return cpu_data[key] - elif key in device_data: - return device_data[key] - raise KeyError - - return OrderedDict([(key, get_from_stack(key)) for key in data_keys]) - - -def broadcast_tensors( - tensors: Optional[List[torch.Tensor]], - src_rank: int, - group: object, - dist_device: Optional[torch.device] = None, -) -> List[torch.Tensor]: - """ - Broadcasts a list of tensors without other (non-src) ranks needing to know - the dtypes/shapes of the tensors. 
- """ - if dist_device is None: - if torch.distributed.get_backend(group) == "nccl": - dist_device = torch.device("cuda") - else: - dist_device = torch.device("cpu") - - # share metadata first to simplify transfer - is_src_rank = (get_rank(group) == src_rank) - if is_src_rank: - metadata = [ - {"size": t.size(), "dtype": t.dtype, "device": t.device} for t in tensors - ] - metadata = _broadcast_object_slow(metadata, src_rank, group, dist_device) - else: - metadata = _broadcast_object_slow(None, src_rank, group, dist_device) - - out_tensors = [] - for i, meta in enumerate(metadata): - if is_src_rank: - tensor = tensors[i] - broadcast(tensors[i].to(dist_device), src=src_rank, group=group) - else: - tensor = torch.zeros( - [meta["size"].numel()], dtype=meta["dtype"], device=dist_device - ) - broadcast(tensor, src=src_rank, group=group) - tensor = tensor.view(meta["size"]).to(meta["device"]) - out_tensors.append(tensor) - return out_tensors - - -def broadcast_object( - obj: Any, - src_rank: int, - group: object, - dist_device: Optional[torch.device] = None, -) -> Any: - """Broadcast an arbitrary Python object to other workers.""" - if dist_device is None: - if torch.distributed.get_backend(group) == "nccl": - dist_device = torch.device("cuda") - else: - dist_device = torch.device("cpu") - - if get_rank(group) == src_rank: - # split the tensors from the non-tensors so we can broadcast them - # directly, avoiding unnecessary serialization/deserialization - tensors = [] - obj = _split_tensors_from_obj(obj, tensors) - obj = _broadcast_object_slow(obj, src_rank, group, dist_device) - tensors = broadcast_tensors(tensors, src_rank, group, dist_device) - else: - obj = _broadcast_object_slow(None, src_rank, group, dist_device) - tensors = broadcast_tensors(None, src_rank, group, dist_device) - return _put_tensors_in_obj(obj, tensors) - - -def _broadcast_object_slow( - obj: Any, src_rank: int, group: object, dist_device: torch.device, -) -> Any: - if get_rank(group) == src_rank: - # Emit data - buffer = io.BytesIO() - torch.save(obj, buffer) - buffer = torch.ByteTensor(buffer.getbuffer()).to(dist_device) - length = torch.LongTensor([len(buffer)]).to(dist_device) - broadcast(length, src=src_rank, group=group) - broadcast(buffer, src=src_rank, group=group) - else: - # Fetch from the source - length = torch.LongTensor([0]).to(dist_device) - broadcast(length, src=src_rank, group=group) - buffer = torch.ByteTensor(int(length.item())).to(dist_device) - broadcast(buffer, src=src_rank, group=group) - buffer = io.BytesIO(buffer.cpu().numpy()) - obj = torch.load(buffer, map_location="cpu") - return obj - - -@dataclass(frozen=True) -class _TensorPlaceholder: - index: int - - -def _split_tensors_from_obj(obj: Any, tensors: List[torch.Tensor]) -> Any: - if torch.is_tensor(obj): - placeholder = _TensorPlaceholder(index=len(tensors)) - tensors.append(obj) - return placeholder - elif isinstance(obj, dict): - return {k: _split_tensors_from_obj(v, tensors) for k, v in obj.items()} - elif isinstance(obj, list): - return [_split_tensors_from_obj(v, tensors) for v in obj] - elif isinstance(obj, tuple): - return tuple(_split_tensors_from_obj(v, tensors) for v in obj) - elif isinstance(obj, set): - return {_split_tensors_from_obj(v, tensors) for v in obj} - else: - return obj - - -def _put_tensors_in_obj(obj: Any, tensors: List[torch.Tensor]) -> Any: - if isinstance(obj, _TensorPlaceholder): - return tensors[obj.index] - elif isinstance(obj, dict): - return {k: _put_tensors_in_obj(v, tensors) for k, v in obj.items()} - elif 
isinstance(obj, list): - return [_put_tensors_in_obj(v, tensors) for v in obj] - elif isinstance(obj, tuple): - return tuple(_put_tensors_in_obj(v, tensors) for v in obj) - elif isinstance(obj, set): - return {_put_tensors_in_obj(v, tensors) for v in obj} - else: - return obj diff --git a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmcv/ops/correlation.py b/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmcv/ops/correlation.py deleted file mode 100644 index 3d0b79c301b29915dfaf4d2b1846c59be73127d3..0000000000000000000000000000000000000000 --- a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmcv/ops/correlation.py +++ /dev/null @@ -1,196 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -from torch import Tensor, nn -from torch.autograd import Function -from torch.autograd.function import once_differentiable -from torch.nn.modules.utils import _pair - -from ..utils import ext_loader - -ext_module = ext_loader.load_ext( - '_ext', ['correlation_forward', 'correlation_backward']) - - -class CorrelationFunction(Function): - - @staticmethod - def forward(ctx, - input1, - input2, - kernel_size=1, - max_displacement=1, - stride=1, - padding=1, - dilation=1, - dilation_patch=1): - - ctx.save_for_backward(input1, input2) - - kH, kW = ctx.kernel_size = _pair(kernel_size) - patch_size = max_displacement * 2 + 1 - ctx.patch_size = patch_size - dH, dW = ctx.stride = _pair(stride) - padH, padW = ctx.padding = _pair(padding) - dilationH, dilationW = ctx.dilation = _pair(dilation) - dilation_patchH, dilation_patchW = ctx.dilation_patch = _pair( - dilation_patch) - - output_size = CorrelationFunction._output_size(ctx, input1) - - output = input1.new_zeros(output_size) - - ext_module.correlation_forward( - input1, - input2, - output, - kH=kH, - kW=kW, - patchH=patch_size, - patchW=patch_size, - padH=padH, - padW=padW, - dilationH=dilationH, - dilationW=dilationW, - dilation_patchH=dilation_patchH, - dilation_patchW=dilation_patchW, - dH=dH, - dW=dW) - - return output - - @staticmethod - @once_differentiable - def backward(ctx, grad_output): - input1, input2 = ctx.saved_tensors - - kH, kW = ctx.kernel_size - patch_size = ctx.patch_size - padH, padW = ctx.padding - dilationH, dilationW = ctx.dilation - dilation_patchH, dilation_patchW = ctx.dilation_patch - dH, dW = ctx.stride - grad_input1 = torch.zeros_like(input1) - grad_input2 = torch.zeros_like(input2) - - ext_module.correlation_backward( - grad_output, - input1, - input2, - grad_input1, - grad_input2, - kH=kH, - kW=kW, - patchH=patch_size, - patchW=patch_size, - padH=padH, - padW=padW, - dilationH=dilationH, - dilationW=dilationW, - dilation_patchH=dilation_patchH, - dilation_patchW=dilation_patchW, - dH=dH, - dW=dW) - return grad_input1, grad_input2, None, None, None, None, None, None - - @staticmethod - def _output_size(ctx, input1): - iH, iW = input1.size(2), input1.size(3) - batch_size = input1.size(0) - kH, kW = ctx.kernel_size - patch_size = ctx.patch_size - dH, dW = ctx.stride - padH, padW = ctx.padding - dilationH, dilationW = ctx.dilation - dilatedKH = (kH - 1) * dilationH + 1 - dilatedKW = (kW - 1) * dilationW + 1 - - oH = int((iH + 2 * padH - dilatedKH) / dH + 1) - oW = int((iW + 2 * padW - dilatedKW) / dW + 1) - - output_size = (batch_size, patch_size, patch_size, oH, oW) - return output_size - - -class Correlation(nn.Module): - r"""Correlation operator - - This correlation operator works for optical flow correlation computation. 
- - There are two batched tensors with shape :math:`(N, C, H, W)`, - and the correlation output's shape is :math:`(N, max\_displacement \times - 2 + 1, max\_displacement * 2 + 1, H_{out}, W_{out})` - - where - - .. math:: - H_{out} = \left\lfloor\frac{H_{in} + 2 \times padding - - dilation \times (kernel\_size - 1) - 1} - {stride} + 1\right\rfloor - - .. math:: - W_{out} = \left\lfloor\frac{W_{in} + 2 \times padding - dilation - \times (kernel\_size - 1) - 1} - {stride} + 1\right\rfloor - - the correlation item :math:`(N_i, dy, dx)` is formed by taking the sliding - window convolution between input1 and shifted input2, - - .. math:: - Corr(N_i, dx, dy) = - \sum_{c=0}^{C-1} - input1(N_i, c) \star - \mathcal{S}(input2(N_i, c), dy, dx) - - where :math:`\star` is the valid 2d sliding window convolution operator, - and :math:`\mathcal{S}` means shifting the input features (auto-complete - zero marginal), and :math:`dx, dy` are shifting distance, :math:`dx, dy \in - [-max\_displacement \times dilation\_patch, max\_displacement \times - dilation\_patch]`. - - Args: - kernel_size (int): The size of sliding window i.e. local neighborhood - representing the center points and involved in correlation - computation. Defaults to 1. - max_displacement (int): The radius for computing correlation volume, - but the actual working space can be dilated by dilation_patch. - Defaults to 1. - stride (int): The stride of the sliding blocks in the input spatial - dimensions. Defaults to 1. - padding (int): Zero padding added to all four sides of the input1. - Defaults to 0. - dilation (int): The spacing of local neighborhood that will involved - in correlation. Defaults to 1. - dilation_patch (int): The spacing between position need to compute - correlation. Defaults to 1. - """ - - def __init__(self, - kernel_size: int = 1, - max_displacement: int = 1, - stride: int = 1, - padding: int = 0, - dilation: int = 1, - dilation_patch: int = 1) -> None: - super().__init__() - self.kernel_size = kernel_size - self.max_displacement = max_displacement - self.stride = stride - self.padding = padding - self.dilation = dilation - self.dilation_patch = dilation_patch - - def forward(self, input1: Tensor, input2: Tensor) -> Tensor: - return CorrelationFunction.apply(input1, input2, self.kernel_size, - self.max_displacement, self.stride, - self.padding, self.dilation, - self.dilation_patch) - - def __repr__(self) -> str: - s = self.__class__.__name__ - s += f'(kernel_size={self.kernel_size}, ' - s += f'max_displacement={self.max_displacement}, ' - s += f'stride={self.stride}, ' - s += f'padding={self.padding}, ' - s += f'dilation={self.dilation}, ' - s += f'dilation_patch={self.dilation_patch})' - return s diff --git a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmcv_custom/checkpoint.py b/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmcv_custom/checkpoint.py deleted file mode 100644 index 19b87fef0a52d31babcdb3edb8f3089b6420173f..0000000000000000000000000000000000000000 --- a/spaces/PAIR/Text2Video-Zero/annotator/uniformer/mmcv_custom/checkpoint.py +++ /dev/null @@ -1,500 +0,0 @@ -# Copyright (c) Open-MMLab. All rights reserved. 
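Before the checkpoint utilities below, a minimal, self-contained sketch of the output-size arithmetic described in the `Correlation` docstring above; the helper name `correlation_output_size` is illustrative and not part of mmcv.

```python
# Hypothetical helper (not part of mmcv): reproduces the H_out/W_out formulas
# from the Correlation docstring and the (patch, patch, H_out, W_out) volume shape.
def correlation_output_size(h_in, w_in, kernel_size=1, max_displacement=1,
                            stride=1, padding=0, dilation=1):
    patch_size = 2 * max_displacement + 1
    dilated_k = (kernel_size - 1) * dilation + 1
    h_out = (h_in + 2 * padding - dilated_k) // stride + 1
    w_out = (w_in + 2 * padding - dilated_k) // stride + 1
    return patch_size, patch_size, h_out, w_out

# Two (N, C, 64, 64) inputs with the defaults give a per-sample volume of (3, 3, 64, 64).
print(correlation_output_size(64, 64))  # (3, 3, 64, 64)
```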
-import io -import os -import os.path as osp -import pkgutil -import time -import warnings -from collections import OrderedDict -from importlib import import_module -from tempfile import TemporaryDirectory - -import torch -import torchvision -from torch.optim import Optimizer -from torch.utils import model_zoo -from torch.nn import functional as F - -import annotator.uniformer.mmcv as mmcv -from annotator.uniformer.mmcv.fileio import FileClient -from annotator.uniformer.mmcv.fileio import load as load_file -from annotator.uniformer.mmcv.parallel import is_module_wrapper -from annotator.uniformer.mmcv.utils import mkdir_or_exist -from annotator.uniformer.mmcv.runner import get_dist_info - -ENV_MMCV_HOME = 'MMCV_HOME' -ENV_XDG_CACHE_HOME = 'XDG_CACHE_HOME' -DEFAULT_CACHE_DIR = '~/.cache' - - -def _get_mmcv_home(): - mmcv_home = os.path.expanduser( - os.getenv( - ENV_MMCV_HOME, - os.path.join( - os.getenv(ENV_XDG_CACHE_HOME, DEFAULT_CACHE_DIR), 'mmcv'))) - - mkdir_or_exist(mmcv_home) - return mmcv_home - - -def load_state_dict(module, state_dict, strict=False, logger=None): - """Load state_dict to a module. - - This method is modified from :meth:`torch.nn.Module.load_state_dict`. - Default value for ``strict`` is set to ``False`` and the message for - param mismatch will be shown even if strict is False. - - Args: - module (Module): Module that receives the state_dict. - state_dict (OrderedDict): Weights. - strict (bool): whether to strictly enforce that the keys - in :attr:`state_dict` match the keys returned by this module's - :meth:`~torch.nn.Module.state_dict` function. Default: ``False``. - logger (:obj:`logging.Logger`, optional): Logger to log the error - message. If not specified, print function will be used. - """ - unexpected_keys = [] - all_missing_keys = [] - err_msg = [] - - metadata = getattr(state_dict, '_metadata', None) - state_dict = state_dict.copy() - if metadata is not None: - state_dict._metadata = metadata - - # use _load_from_state_dict to enable checkpoint version control - def load(module, prefix=''): - # recursively check parallel module in case that the model has a - # complicated structure, e.g., nn.Module(nn.Module(DDP)) - if is_module_wrapper(module): - module = module.module - local_metadata = {} if metadata is None else metadata.get( - prefix[:-1], {}) - module._load_from_state_dict(state_dict, prefix, local_metadata, True, - all_missing_keys, unexpected_keys, - err_msg) - for name, child in module._modules.items(): - if child is not None: - load(child, prefix + name + '.') - - load(module) - load = None # break load->load reference cycle - - # ignore "num_batches_tracked" of BN layers - missing_keys = [ - key for key in all_missing_keys if 'num_batches_tracked' not in key - ] - - if unexpected_keys: - err_msg.append('unexpected key in source ' - f'state_dict: {", ".join(unexpected_keys)}\n') - if missing_keys: - err_msg.append( - f'missing keys in source state_dict: {", ".join(missing_keys)}\n') - - rank, _ = get_dist_info() - if len(err_msg) > 0 and rank == 0: - err_msg.insert( - 0, 'The model and loaded state dict do not match exactly\n') - err_msg = '\n'.join(err_msg) - if strict: - raise RuntimeError(err_msg) - elif logger is not None: - logger.warning(err_msg) - else: - print(err_msg) - - -def load_url_dist(url, model_dir=None): - """In distributed setting, this function only download checkpoint at local - rank 0.""" - rank, world_size = get_dist_info() - rank = int(os.environ.get('LOCAL_RANK', rank)) - if rank == 0: - checkpoint = 
model_zoo.load_url(url, model_dir=model_dir) - if world_size > 1: - torch.distributed.barrier() - if rank > 0: - checkpoint = model_zoo.load_url(url, model_dir=model_dir) - return checkpoint - - -def load_pavimodel_dist(model_path, map_location=None): - """In distributed setting, this function only download checkpoint at local - rank 0.""" - try: - from pavi import modelcloud - except ImportError: - raise ImportError( - 'Please install pavi to load checkpoint from modelcloud.') - rank, world_size = get_dist_info() - rank = int(os.environ.get('LOCAL_RANK', rank)) - if rank == 0: - model = modelcloud.get(model_path) - with TemporaryDirectory() as tmp_dir: - downloaded_file = osp.join(tmp_dir, model.name) - model.download(downloaded_file) - checkpoint = torch.load(downloaded_file, map_location=map_location) - if world_size > 1: - torch.distributed.barrier() - if rank > 0: - model = modelcloud.get(model_path) - with TemporaryDirectory() as tmp_dir: - downloaded_file = osp.join(tmp_dir, model.name) - model.download(downloaded_file) - checkpoint = torch.load( - downloaded_file, map_location=map_location) - return checkpoint - - -def load_fileclient_dist(filename, backend, map_location): - """In distributed setting, this function only download checkpoint at local - rank 0.""" - rank, world_size = get_dist_info() - rank = int(os.environ.get('LOCAL_RANK', rank)) - allowed_backends = ['ceph'] - if backend not in allowed_backends: - raise ValueError(f'Load from Backend {backend} is not supported.') - if rank == 0: - fileclient = FileClient(backend=backend) - buffer = io.BytesIO(fileclient.get(filename)) - checkpoint = torch.load(buffer, map_location=map_location) - if world_size > 1: - torch.distributed.barrier() - if rank > 0: - fileclient = FileClient(backend=backend) - buffer = io.BytesIO(fileclient.get(filename)) - checkpoint = torch.load(buffer, map_location=map_location) - return checkpoint - - -def get_torchvision_models(): - model_urls = dict() - for _, name, ispkg in pkgutil.walk_packages(torchvision.models.__path__): - if ispkg: - continue - _zoo = import_module(f'torchvision.models.{name}') - if hasattr(_zoo, 'model_urls'): - _urls = getattr(_zoo, 'model_urls') - model_urls.update(_urls) - return model_urls - - -def get_external_models(): - mmcv_home = _get_mmcv_home() - default_json_path = osp.join(mmcv.__path__[0], 'model_zoo/open_mmlab.json') - default_urls = load_file(default_json_path) - assert isinstance(default_urls, dict) - external_json_path = osp.join(mmcv_home, 'open_mmlab.json') - if osp.exists(external_json_path): - external_urls = load_file(external_json_path) - assert isinstance(external_urls, dict) - default_urls.update(external_urls) - - return default_urls - - -def get_mmcls_models(): - mmcls_json_path = osp.join(mmcv.__path__[0], 'model_zoo/mmcls.json') - mmcls_urls = load_file(mmcls_json_path) - - return mmcls_urls - - -def get_deprecated_model_names(): - deprecate_json_path = osp.join(mmcv.__path__[0], - 'model_zoo/deprecated.json') - deprecate_urls = load_file(deprecate_json_path) - assert isinstance(deprecate_urls, dict) - - return deprecate_urls - - -def _process_mmcls_checkpoint(checkpoint): - state_dict = checkpoint['state_dict'] - new_state_dict = OrderedDict() - for k, v in state_dict.items(): - if k.startswith('backbone.'): - new_state_dict[k[9:]] = v - new_checkpoint = dict(state_dict=new_state_dict) - - return new_checkpoint - - -def _load_checkpoint(filename, map_location=None): - """Load checkpoint from somewhere (modelzoo, file, url). 
- - Args: - filename (str): Accept local filepath, URL, ``torchvision://xxx``, - ``open-mmlab://xxx``. Please refer to ``docs/model_zoo.md`` for - details. - map_location (str | None): Same as :func:`torch.load`. Default: None. - - Returns: - dict | OrderedDict: The loaded checkpoint. It can be either an - OrderedDict storing model weights or a dict containing other - information, which depends on the checkpoint. - """ - if filename.startswith('modelzoo://'): - warnings.warn('The URL scheme of "modelzoo://" is deprecated, please ' - 'use "torchvision://" instead') - model_urls = get_torchvision_models() - model_name = filename[11:] - checkpoint = load_url_dist(model_urls[model_name]) - elif filename.startswith('torchvision://'): - model_urls = get_torchvision_models() - model_name = filename[14:] - checkpoint = load_url_dist(model_urls[model_name]) - elif filename.startswith('open-mmlab://'): - model_urls = get_external_models() - model_name = filename[13:] - deprecated_urls = get_deprecated_model_names() - if model_name in deprecated_urls: - warnings.warn(f'open-mmlab://{model_name} is deprecated in favor ' - f'of open-mmlab://{deprecated_urls[model_name]}') - model_name = deprecated_urls[model_name] - model_url = model_urls[model_name] - # check if is url - if model_url.startswith(('http://', 'https://')): - checkpoint = load_url_dist(model_url) - else: - filename = osp.join(_get_mmcv_home(), model_url) - if not osp.isfile(filename): - raise IOError(f'{filename} is not a checkpoint file') - checkpoint = torch.load(filename, map_location=map_location) - elif filename.startswith('mmcls://'): - model_urls = get_mmcls_models() - model_name = filename[8:] - checkpoint = load_url_dist(model_urls[model_name]) - checkpoint = _process_mmcls_checkpoint(checkpoint) - elif filename.startswith(('http://', 'https://')): - checkpoint = load_url_dist(filename) - elif filename.startswith('pavi://'): - model_path = filename[7:] - checkpoint = load_pavimodel_dist(model_path, map_location=map_location) - elif filename.startswith('s3://'): - checkpoint = load_fileclient_dist( - filename, backend='ceph', map_location=map_location) - else: - if not osp.isfile(filename): - raise IOError(f'{filename} is not a checkpoint file') - checkpoint = torch.load(filename, map_location=map_location) - return checkpoint - - -def load_checkpoint(model, - filename, - map_location='cpu', - strict=False, - logger=None): - """Load checkpoint from a file or URI. - - Args: - model (Module): Module to load checkpoint. - filename (str): Accept local filepath, URL, ``torchvision://xxx``, - ``open-mmlab://xxx``. Please refer to ``docs/model_zoo.md`` for - details. - map_location (str): Same as :func:`torch.load`. - strict (bool): Whether to allow different params for the model and - checkpoint. - logger (:mod:`logging.Logger` or None): The logger for error message. - - Returns: - dict or OrderedDict: The loaded checkpoint. 
- """ - checkpoint = _load_checkpoint(filename, map_location) - # OrderedDict is a subclass of dict - if not isinstance(checkpoint, dict): - raise RuntimeError( - f'No state_dict found in checkpoint file {filename}') - # get state_dict from checkpoint - if 'state_dict' in checkpoint: - state_dict = checkpoint['state_dict'] - elif 'model' in checkpoint: - state_dict = checkpoint['model'] - else: - state_dict = checkpoint - # strip prefix of state_dict - if list(state_dict.keys())[0].startswith('module.'): - state_dict = {k[7:]: v for k, v in state_dict.items()} - - # for MoBY, load model of online branch - if sorted(list(state_dict.keys()))[0].startswith('encoder'): - state_dict = {k.replace('encoder.', ''): v for k, v in state_dict.items() if k.startswith('encoder.')} - - # reshape absolute position embedding - if state_dict.get('absolute_pos_embed') is not None: - absolute_pos_embed = state_dict['absolute_pos_embed'] - N1, L, C1 = absolute_pos_embed.size() - N2, C2, H, W = model.absolute_pos_embed.size() - if N1 != N2 or C1 != C2 or L != H*W: - logger.warning("Error in loading absolute_pos_embed, pass") - else: - state_dict['absolute_pos_embed'] = absolute_pos_embed.view(N2, H, W, C2).permute(0, 3, 1, 2) - - # interpolate position bias table if needed - relative_position_bias_table_keys = [k for k in state_dict.keys() if "relative_position_bias_table" in k] - for table_key in relative_position_bias_table_keys: - table_pretrained = state_dict[table_key] - table_current = model.state_dict()[table_key] - L1, nH1 = table_pretrained.size() - L2, nH2 = table_current.size() - if nH1 != nH2: - logger.warning(f"Error in loading {table_key}, pass") - else: - if L1 != L2: - S1 = int(L1 ** 0.5) - S2 = int(L2 ** 0.5) - table_pretrained_resized = F.interpolate( - table_pretrained.permute(1, 0).view(1, nH1, S1, S1), - size=(S2, S2), mode='bicubic') - state_dict[table_key] = table_pretrained_resized.view(nH2, L2).permute(1, 0) - - # load state_dict - load_state_dict(model, state_dict, strict, logger) - return checkpoint - - -def weights_to_cpu(state_dict): - """Copy a model state_dict to cpu. - - Args: - state_dict (OrderedDict): Model weights on GPU. - - Returns: - OrderedDict: Model weights on GPU. - """ - state_dict_cpu = OrderedDict() - for key, val in state_dict.items(): - state_dict_cpu[key] = val.cpu() - return state_dict_cpu - - -def _save_to_state_dict(module, destination, prefix, keep_vars): - """Saves module state to `destination` dictionary. - - This method is modified from :meth:`torch.nn.Module._save_to_state_dict`. - - Args: - module (nn.Module): The module to generate state_dict. - destination (dict): A dict where state will be stored. - prefix (str): The prefix for parameters and buffers used in this - module. - """ - for name, param in module._parameters.items(): - if param is not None: - destination[prefix + name] = param if keep_vars else param.detach() - for name, buf in module._buffers.items(): - # remove check of _non_persistent_buffers_set to allow nn.BatchNorm2d - if buf is not None: - destination[prefix + name] = buf if keep_vars else buf.detach() - - -def get_state_dict(module, destination=None, prefix='', keep_vars=False): - """Returns a dictionary containing a whole state of the module. - - Both parameters and persistent buffers (e.g. running averages) are - included. Keys are corresponding parameter and buffer names. 
- - This method is modified from :meth:`torch.nn.Module.state_dict` to - recursively check parallel module in case that the model has a complicated - structure, e.g., nn.Module(nn.Module(DDP)). - - Args: - module (nn.Module): The module to generate state_dict. - destination (OrderedDict): Returned dict for the state of the - module. - prefix (str): Prefix of the key. - keep_vars (bool): Whether to keep the variable property of the - parameters. Default: False. - - Returns: - dict: A dictionary containing a whole state of the module. - """ - # recursively check parallel module in case that the model has a - # complicated structure, e.g., nn.Module(nn.Module(DDP)) - if is_module_wrapper(module): - module = module.module - - # below is the same as torch.nn.Module.state_dict() - if destination is None: - destination = OrderedDict() - destination._metadata = OrderedDict() - destination._metadata[prefix[:-1]] = local_metadata = dict( - version=module._version) - _save_to_state_dict(module, destination, prefix, keep_vars) - for name, child in module._modules.items(): - if child is not None: - get_state_dict( - child, destination, prefix + name + '.', keep_vars=keep_vars) - for hook in module._state_dict_hooks.values(): - hook_result = hook(module, destination, prefix, local_metadata) - if hook_result is not None: - destination = hook_result - return destination - - -def save_checkpoint(model, filename, optimizer=None, meta=None): - """Save checkpoint to file. - - The checkpoint will have 3 fields: ``meta``, ``state_dict`` and - ``optimizer``. By default ``meta`` will contain version and time info. - - Args: - model (Module): Module whose params are to be saved. - filename (str): Checkpoint filename. - optimizer (:obj:`Optimizer`, optional): Optimizer to be saved. - meta (dict, optional): Metadata to be saved in checkpoint. 
- """ - if meta is None: - meta = {} - elif not isinstance(meta, dict): - raise TypeError(f'meta must be a dict or None, but got {type(meta)}') - meta.update(mmcv_version=mmcv.__version__, time=time.asctime()) - - if is_module_wrapper(model): - model = model.module - - if hasattr(model, 'CLASSES') and model.CLASSES is not None: - # save class name to the meta - meta.update(CLASSES=model.CLASSES) - - checkpoint = { - 'meta': meta, - 'state_dict': weights_to_cpu(get_state_dict(model)) - } - # save optimizer state dict in the checkpoint - if isinstance(optimizer, Optimizer): - checkpoint['optimizer'] = optimizer.state_dict() - elif isinstance(optimizer, dict): - checkpoint['optimizer'] = {} - for name, optim in optimizer.items(): - checkpoint['optimizer'][name] = optim.state_dict() - - if filename.startswith('pavi://'): - try: - from pavi import modelcloud - from pavi.exception import NodeNotFoundError - except ImportError: - raise ImportError( - 'Please install pavi to load checkpoint from modelcloud.') - model_path = filename[7:] - root = modelcloud.Folder() - model_dir, model_name = osp.split(model_path) - try: - model = modelcloud.get(model_dir) - except NodeNotFoundError: - model = root.create_training_model(model_dir) - with TemporaryDirectory() as tmp_dir: - checkpoint_file = osp.join(tmp_dir, model_name) - with open(checkpoint_file, 'wb') as f: - torch.save(checkpoint, f) - f.flush() - model.create_file(checkpoint_file, name=model_name) - else: - mmcv.mkdir_or_exist(osp.dirname(filename)) - # immediately flush buffer - with open(filename, 'wb') as f: - torch.save(checkpoint, f) - f.flush() \ No newline at end of file diff --git a/spaces/PatrickTyBrown/LoanDocumentClassifier/app.py b/spaces/PatrickTyBrown/LoanDocumentClassifier/app.py deleted file mode 100644 index 3e966010b21071025a3e43e910e40094c69f994a..0000000000000000000000000000000000000000 --- a/spaces/PatrickTyBrown/LoanDocumentClassifier/app.py +++ /dev/null @@ -1,72 +0,0 @@ -import gradio -import cv2 -from sklearn.naive_bayes import BernoulliNB -import pickle -import numpy as np - -# multiclass_model = pickle.load(open('models/MulticlassModel_200x200', 'rb')) -ensemble_model = pickle.load(open('EnsembleModels_200x200', 'rb')) - -examples = ['images/test2.jpg','images/test4.jpg','images/test6.jpg', - "images/Incom.jpg", "images/DLC.jpg", 'images/EHD.jpg', - 'images/IDR.jpg','images/PPD.jpg','images/PSLF.jpg' ,'images/SCD.jpg', - 'images/TLF.jpg'] - -def preprocess(img): - img = cv2.resize(img, (200,200)) - img = cv2.adaptiveThreshold(cv2.cvtColor(img, cv2.COLOR_BGR2GRAY),255,cv2.ADAPTIVE_THRESH_GAUSSIAN_C,cv2.THRESH_BINARY,11,2) - img = np.reshape(img, (1,200*200))/255 - return img - -def predict(img): - img = preprocess(img) - categories = { - "Inco": 2, - "Teac": 1, - "Cons": 0, - "Publ": 4, - "Econ": 3, - "Reaf": 5} - - proba = np.zeros((6)) - for key in categories.keys(): - proba[categories[key]] = ensemble_model[key].predict_proba(img)[:,0] - - return proba - -def generate_results(proba): - categories = [ - "DLC", - "TLF", - "IDR", - "EHD", - "PLSF", - "REA", - "UNKNOWN"] - - scores = [0,0,0,0,0,0,0] - - choice = np.where(proba == np.amin(proba))[0] - - if len(choice)>1: - choice = 6 - scores[int(choice)] = 1 - - results = dict(zip(categories, scores)) - return results - -def inference(img): - proba = predict(img) - results = generate_results(proba) - return results - -demo = gradio.Interface( - fn=inference, - inputs=gradio.Image(), - outputs=gradio.Label(), - title='Document Classification', - description='Loan Document 
Classification Using A Naive Bayes Classifier Ensemble', - article='The purpose of this demo was to provide a simple baseline for the classification of document images. View the complete write up here https://github.com/PatrickTyBrown/document_classification/blob/main/project_writeup.pdf\n\n\nLinkedin: https://www.linkedin.com/in/patrick-ty-brown/\nGithub: https://github.com/PatrickTyBrown/document_classification\nPortfolio: https://sites.google.com/view/patrick-brown/home', - examples=examples) - -demo.launch() diff --git a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/language/cps/rotate-loops.go b/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/language/cps/rotate-loops.go deleted file mode 100644 index 6064feadead21fdccbaf3c54b3bf0223b36fa5c3..0000000000000000000000000000000000000000 Binary files a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/language/cps/rotate-loops.go and /dev/null differ diff --git a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/lilypond/2.24.2/ccache/lily/framework-cairo.go b/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/lilypond/2.24.2/ccache/lily/framework-cairo.go deleted file mode 100644 index 6c3b72cb2f399fe3a144b9785dfa915a133344a9..0000000000000000000000000000000000000000 Binary files a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/lilypond/2.24.2/ccache/lily/framework-cairo.go and /dev/null differ diff --git a/spaces/PeepDaSlan9/AutoGPT/autogpt/json_utils/utilities.py b/spaces/PeepDaSlan9/AutoGPT/autogpt/json_utils/utilities.py deleted file mode 100644 index eb9bb687750460fed2f4547b67e41f8e8c877a41..0000000000000000000000000000000000000000 --- a/spaces/PeepDaSlan9/AutoGPT/autogpt/json_utils/utilities.py +++ /dev/null @@ -1,54 +0,0 @@ -"""Utilities for the json_fixes package.""" -import json -import re - -from jsonschema import Draft7Validator - -from autogpt.config import Config -from autogpt.logs import logger - -CFG = Config() - - -def extract_char_position(error_message: str) -> int: - """Extract the character position from the JSONDecodeError message. - - Args: - error_message (str): The error message from the JSONDecodeError - exception. - - Returns: - int: The character position. 
- """ - - char_pattern = re.compile(r"\(char (\d+)\)") - if match := char_pattern.search(error_message): - return int(match[1]) - else: - raise ValueError("Character position not found in the error message.") - - -def validate_json(json_object: object, schema_name: object) -> object: - """ - :type schema_name: object - :param schema_name: - :type json_object: object - """ - with open(f"autogpt/json_utils/{schema_name}.json", "r") as f: - schema = json.load(f) - validator = Draft7Validator(schema) - - if errors := sorted(validator.iter_errors(json_object), key=lambda e: e.path): - logger.error("The JSON object is invalid.") - if CFG.debug_mode: - logger.error( - json.dumps(json_object, indent=4) - ) # Replace 'json_object' with the variable containing the JSON data - logger.error("The following issues were found:") - - for error in errors: - logger.error(f"Error: {error.message}") - elif CFG.debug_mode: - print("The JSON object is valid.") - - return json_object diff --git a/spaces/Pfs2021Funny/Text-to-Music-ExtendedVersion/constants.py b/spaces/Pfs2021Funny/Text-to-Music-ExtendedVersion/constants.py deleted file mode 100644 index f88bf0231fe0ad3652f09cee1d32223f66ae4d73..0000000000000000000000000000000000000000 --- a/spaces/Pfs2021Funny/Text-to-Music-ExtendedVersion/constants.py +++ /dev/null @@ -1,7 +0,0 @@ -import numpy as np - -MUBERT_TAGS_STRING = 'tribal,action,kids,neo-classic,run 130,pumped,jazz / funk,ethnic,dubtechno,reggae,acid jazz,liquidfunk,funk,witch house,tech house,underground,artists,mystical,disco,sensorium,r&b,agender,psychedelic trance / psytrance,peaceful,run 140,piano,run 160,setting,meditation,christmas,ambient,horror,cinematic,electro house,idm,bass,minimal,underscore,drums,glitchy,beautiful,technology,tribal house,country pop,jazz & funk,documentary,space,classical,valentines,chillstep,experimental,trap,new jack swing,drama,post-rock,tense,corporate,neutral,happy,analog,funky,spiritual,sberzvuk special,chill hop,dramatic,catchy,holidays,fitness 90,optimistic,orchestra,acid techno,energizing,romantic,minimal house,breaks,hyper pop,warm up,dreamy,dark,urban,microfunk,dub,nu disco,vogue,keys,hardcore,aggressive,indie,electro funk,beauty,relaxing,trance,pop,hiphop,soft,acoustic,chillrave / ethno-house,deep techno,angry,dance,fun,dubstep,tropical,latin pop,heroic,world music,inspirational,uplifting,atmosphere,art,epic,advertising,chillout,scary,spooky,slow ballad,saxophone,summer,erotic,jazzy,energy 100,kara mar,xmas,atmospheric,indie pop,hip-hop,yoga,reggaeton,lounge,travel,running,folk,chillrave & ethno-house,detective,darkambient,chill,fantasy,minimal techno,special,night,tropical house,downtempo,lullaby,meditative,upbeat,glitch hop,fitness,neurofunk,sexual,indie rock,future pop,jazz,cyberpunk,melancholic,happy hardcore,family / kids,synths,electric guitar,comedy,psychedelic trance & psytrance,edm,psychedelic rock,calm,zen,bells,podcast,melodic house,ethnic percussion,nature,heavy,bassline,indie dance,techno,drumnbass,synth pop,vaporwave,sad,8-bit,chillgressive,deep,orchestral,futuristic,hardtechno,nostalgic,big room,sci-fi,tutorial,joyful,pads,minimal 170,drill,ethnic 108,amusing,sleepy ambient,psychill,italo disco,lofi,house,acoustic guitar,bassline house,rock,k-pop,synthwave,deep house,electronica,gabber,nightlife,sport & fitness,road trip,celebration,electro,disco house,electronic,logo,news' -MUBERT_TAGS = np.array(MUBERT_TAGS_STRING.split(',')) -MUBERT_LICENSE = "ttmmubertlicense#f0acYBenRcfeFpNT4wpYGaTQIyDI4mJGv5MfIhBFz97NXDwDNFHmMRsBSzmGsJwbTpP1A6i07AXcIeAHo5" 
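The `validate_json` helper in `autogpt/json_utils/utilities.py` above uses jsonschema's `Draft7Validator.iter_errors` to collect every violation instead of stopping at the first failure. A minimal sketch of that pattern, assuming an inline schema and example payload (both illustrative only):

```python
import json
from jsonschema import Draft7Validator

# Illustrative schema, not the one shipped in autogpt/json_utils/.
schema = {
    "type": "object",
    "properties": {"command": {"type": "string"}, "args": {"type": "object"}},
    "required": ["command"],
}

candidate = json.loads('{"args": {"query": "hello"}}')  # missing "command"

validator = Draft7Validator(schema)
errors = sorted(validator.iter_errors(candidate), key=str)
if errors:
    for error in errors:
        print(f"Error: {error.message}")  # e.g. 'command' is a required property
else:
    print("The JSON object is valid.")
```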
-MUBERT_MODE = "loop" -MUBERT_TOKEN = "4951f6428e83172a4f39de05d5b3ab10d58560b8" diff --git a/spaces/Pie31415/control-animation/annotator/midas/midas/__init__.py b/spaces/Pie31415/control-animation/annotator/midas/midas/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/RamAnanth1/co_chat_voice/README.md b/spaces/RamAnanth1/co_chat_voice/README.md deleted file mode 100644 index 0a01d2f84d84dcbcbb4a6795b364f05c544bd5a1..0000000000000000000000000000000000000000 --- a/spaces/RamAnanth1/co_chat_voice/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: CatGPT Voice -emoji: 📊 -colorFrom: blue -colorTo: blue -sdk: gradio -sdk_version: 3.12.0 -app_file: app.py -pinned: false -duplicated_from: RamAnanth1/chatGPT_voice ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/typing_extensions.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/typing_extensions.py deleted file mode 100644 index ef42417c208e93c55d704728d3e88dfe46250d92..0000000000000000000000000000000000000000 --- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/typing_extensions.py +++ /dev/null @@ -1,2209 +0,0 @@ -import abc -import collections -import collections.abc -import functools -import operator -import sys -import types as _types -import typing - - -__all__ = [ - # Super-special typing primitives. - 'Any', - 'ClassVar', - 'Concatenate', - 'Final', - 'LiteralString', - 'ParamSpec', - 'ParamSpecArgs', - 'ParamSpecKwargs', - 'Self', - 'Type', - 'TypeVar', - 'TypeVarTuple', - 'Unpack', - - # ABCs (from collections.abc). - 'Awaitable', - 'AsyncIterator', - 'AsyncIterable', - 'Coroutine', - 'AsyncGenerator', - 'AsyncContextManager', - 'ChainMap', - - # Concrete collection types. - 'ContextManager', - 'Counter', - 'Deque', - 'DefaultDict', - 'NamedTuple', - 'OrderedDict', - 'TypedDict', - - # Structural checks, a.k.a. protocols. - 'SupportsIndex', - - # One-off things. - 'Annotated', - 'assert_never', - 'assert_type', - 'clear_overloads', - 'dataclass_transform', - 'get_overloads', - 'final', - 'get_args', - 'get_origin', - 'get_type_hints', - 'IntVar', - 'is_typeddict', - 'Literal', - 'NewType', - 'overload', - 'override', - 'Protocol', - 'reveal_type', - 'runtime', - 'runtime_checkable', - 'Text', - 'TypeAlias', - 'TypeGuard', - 'TYPE_CHECKING', - 'Never', - 'NoReturn', - 'Required', - 'NotRequired', -] - -# for backward compatibility -PEP_560 = True -GenericMeta = type - -# The functions below are modified copies of typing internal helpers. -# They are needed by _ProtocolMeta and they provide support for PEP 646. - -_marker = object() - - -def _check_generic(cls, parameters, elen=_marker): - """Check correct count for parameters of a generic cls (internal helper). - This gives a nice error message in case of count mismatch. 
- """ - if not elen: - raise TypeError(f"{cls} is not a generic class") - if elen is _marker: - if not hasattr(cls, "__parameters__") or not cls.__parameters__: - raise TypeError(f"{cls} is not a generic class") - elen = len(cls.__parameters__) - alen = len(parameters) - if alen != elen: - if hasattr(cls, "__parameters__"): - parameters = [p for p in cls.__parameters__ if not _is_unpack(p)] - num_tv_tuples = sum(isinstance(p, TypeVarTuple) for p in parameters) - if (num_tv_tuples > 0) and (alen >= elen - num_tv_tuples): - return - raise TypeError(f"Too {'many' if alen > elen else 'few'} parameters for {cls};" - f" actual {alen}, expected {elen}") - - -if sys.version_info >= (3, 10): - def _should_collect_from_parameters(t): - return isinstance( - t, (typing._GenericAlias, _types.GenericAlias, _types.UnionType) - ) -elif sys.version_info >= (3, 9): - def _should_collect_from_parameters(t): - return isinstance(t, (typing._GenericAlias, _types.GenericAlias)) -else: - def _should_collect_from_parameters(t): - return isinstance(t, typing._GenericAlias) and not t._special - - -def _collect_type_vars(types, typevar_types=None): - """Collect all type variable contained in types in order of - first appearance (lexicographic order). For example:: - - _collect_type_vars((T, List[S, T])) == (T, S) - """ - if typevar_types is None: - typevar_types = typing.TypeVar - tvars = [] - for t in types: - if ( - isinstance(t, typevar_types) and - t not in tvars and - not _is_unpack(t) - ): - tvars.append(t) - if _should_collect_from_parameters(t): - tvars.extend([t for t in t.__parameters__ if t not in tvars]) - return tuple(tvars) - - -NoReturn = typing.NoReturn - -# Some unconstrained type variables. These are used by the container types. -# (These are not for export.) -T = typing.TypeVar('T') # Any type. -KT = typing.TypeVar('KT') # Key type. -VT = typing.TypeVar('VT') # Value type. -T_co = typing.TypeVar('T_co', covariant=True) # Any type covariant containers. -T_contra = typing.TypeVar('T_contra', contravariant=True) # Ditto contravariant. - - -if sys.version_info >= (3, 11): - from typing import Any -else: - - class _AnyMeta(type): - def __instancecheck__(self, obj): - if self is Any: - raise TypeError("typing_extensions.Any cannot be used with isinstance()") - return super().__instancecheck__(obj) - - def __repr__(self): - if self is Any: - return "typing_extensions.Any" - return super().__repr__() - - class Any(metaclass=_AnyMeta): - """Special type indicating an unconstrained type. - - Any is compatible with every type. - - Any assumed to have all methods. - - All values assumed to be instances of Any. - Note that all the above statements are true from the point of view of - static type checkers. At runtime, Any should not be used with instance - checks. - """ - def __new__(cls, *args, **kwargs): - if cls is Any: - raise TypeError("Any cannot be instantiated") - return super().__new__(cls, *args, **kwargs) - - -ClassVar = typing.ClassVar - -# On older versions of typing there is an internal class named "Final". -# 3.8+ -if hasattr(typing, 'Final') and sys.version_info[:2] >= (3, 7): - Final = typing.Final -# 3.7 -else: - class _FinalForm(typing._SpecialForm, _root=True): - - def __repr__(self): - return 'typing_extensions.' 
+ self._name - - def __getitem__(self, parameters): - item = typing._type_check(parameters, - f'{self._name} accepts only a single type.') - return typing._GenericAlias(self, (item,)) - - Final = _FinalForm('Final', - doc="""A special typing construct to indicate that a name - cannot be re-assigned or overridden in a subclass. - For example: - - MAX_SIZE: Final = 9000 - MAX_SIZE += 1 # Error reported by type checker - - class Connection: - TIMEOUT: Final[int] = 10 - class FastConnector(Connection): - TIMEOUT = 1 # Error reported by type checker - - There is no runtime checking of these properties.""") - -if sys.version_info >= (3, 11): - final = typing.final -else: - # @final exists in 3.8+, but we backport it for all versions - # before 3.11 to keep support for the __final__ attribute. - # See https://bugs.python.org/issue46342 - def final(f): - """This decorator can be used to indicate to type checkers that - the decorated method cannot be overridden, and decorated class - cannot be subclassed. For example: - - class Base: - @final - def done(self) -> None: - ... - class Sub(Base): - def done(self) -> None: # Error reported by type checker - ... - @final - class Leaf: - ... - class Other(Leaf): # Error reported by type checker - ... - - There is no runtime checking of these properties. The decorator - sets the ``__final__`` attribute to ``True`` on the decorated object - to allow runtime introspection. - """ - try: - f.__final__ = True - except (AttributeError, TypeError): - # Skip the attribute silently if it is not writable. - # AttributeError happens if the object has __slots__ or a - # read-only property, TypeError if it's a builtin class. - pass - return f - - -def IntVar(name): - return typing.TypeVar(name) - - -# 3.8+: -if hasattr(typing, 'Literal'): - Literal = typing.Literal -# 3.7: -else: - class _LiteralForm(typing._SpecialForm, _root=True): - - def __repr__(self): - return 'typing_extensions.' + self._name - - def __getitem__(self, parameters): - return typing._GenericAlias(self, parameters) - - Literal = _LiteralForm('Literal', - doc="""A type that can be used to indicate to type checkers - that the corresponding value has a value literally equivalent - to the provided parameter. For example: - - var: Literal[4] = 4 - - The type checker understands that 'var' is literally equal to - the value 4 and no other value. - - Literal[...] cannot be subclassed. There is no runtime - checking verifying that the parameter is actually a value - instead of a type.""") - - -_overload_dummy = typing._overload_dummy # noqa - - -if hasattr(typing, "get_overloads"): # 3.11+ - overload = typing.overload - get_overloads = typing.get_overloads - clear_overloads = typing.clear_overloads -else: - # {module: {qualname: {firstlineno: func}}} - _overload_registry = collections.defaultdict( - functools.partial(collections.defaultdict, dict) - ) - - def overload(func): - """Decorator for overloaded functions/methods. - - In a stub file, place two or more stub definitions for the same - function in a row, each decorated with @overload. For example: - - @overload - def utf8(value: None) -> None: ... - @overload - def utf8(value: bytes) -> bytes: ... - @overload - def utf8(value: str) -> bytes: ... - - In a non-stub file (i.e. a regular .py file), do the same but - follow it with an implementation. The implementation should *not* - be decorated with @overload. For example: - - @overload - def utf8(value: None) -> None: ... - @overload - def utf8(value: bytes) -> bytes: ... 
- @overload - def utf8(value: str) -> bytes: ... - def utf8(value): - # implementation goes here - - The overloads for a function can be retrieved at runtime using the - get_overloads() function. - """ - # classmethod and staticmethod - f = getattr(func, "__func__", func) - try: - _overload_registry[f.__module__][f.__qualname__][ - f.__code__.co_firstlineno - ] = func - except AttributeError: - # Not a normal function; ignore. - pass - return _overload_dummy - - def get_overloads(func): - """Return all defined overloads for *func* as a sequence.""" - # classmethod and staticmethod - f = getattr(func, "__func__", func) - if f.__module__ not in _overload_registry: - return [] - mod_dict = _overload_registry[f.__module__] - if f.__qualname__ not in mod_dict: - return [] - return list(mod_dict[f.__qualname__].values()) - - def clear_overloads(): - """Clear all overloads in the registry.""" - _overload_registry.clear() - - -# This is not a real generic class. Don't use outside annotations. -Type = typing.Type - -# Various ABCs mimicking those in collections.abc. -# A few are simply re-exported for completeness. - - -Awaitable = typing.Awaitable -Coroutine = typing.Coroutine -AsyncIterable = typing.AsyncIterable -AsyncIterator = typing.AsyncIterator -Deque = typing.Deque -ContextManager = typing.ContextManager -AsyncContextManager = typing.AsyncContextManager -DefaultDict = typing.DefaultDict - -# 3.7.2+ -if hasattr(typing, 'OrderedDict'): - OrderedDict = typing.OrderedDict -# 3.7.0-3.7.2 -else: - OrderedDict = typing._alias(collections.OrderedDict, (KT, VT)) - -Counter = typing.Counter -ChainMap = typing.ChainMap -AsyncGenerator = typing.AsyncGenerator -NewType = typing.NewType -Text = typing.Text -TYPE_CHECKING = typing.TYPE_CHECKING - - -_PROTO_WHITELIST = ['Callable', 'Awaitable', - 'Iterable', 'Iterator', 'AsyncIterable', 'AsyncIterator', - 'Hashable', 'Sized', 'Container', 'Collection', 'Reversible', - 'ContextManager', 'AsyncContextManager'] - - -def _get_protocol_attrs(cls): - attrs = set() - for base in cls.__mro__[:-1]: # without object - if base.__name__ in ('Protocol', 'Generic'): - continue - annotations = getattr(base, '__annotations__', {}) - for attr in list(base.__dict__.keys()) + list(annotations.keys()): - if (not attr.startswith('_abc_') and attr not in ( - '__abstractmethods__', '__annotations__', '__weakref__', - '_is_protocol', '_is_runtime_protocol', '__dict__', - '__args__', '__slots__', - '__next_in_mro__', '__parameters__', '__origin__', - '__orig_bases__', '__extra__', '__tree_hash__', - '__doc__', '__subclasshook__', '__init__', '__new__', - '__module__', '_MutableMapping__marker', '_gorg')): - attrs.add(attr) - return attrs - - -def _is_callable_members_only(cls): - return all(callable(getattr(cls, attr, None)) for attr in _get_protocol_attrs(cls)) - - -def _maybe_adjust_parameters(cls): - """Helper function used in Protocol.__init_subclass__ and _TypedDictMeta.__new__. - - The contents of this function are very similar - to logic found in typing.Generic.__init_subclass__ - on the CPython main branch. - """ - tvars = [] - if '__orig_bases__' in cls.__dict__: - tvars = typing._collect_type_vars(cls.__orig_bases__) - # Look for Generic[T1, ..., Tn] or Protocol[T1, ..., Tn]. - # If found, tvars must be a subset of it. - # If not found, tvars is it. - # Also check for and reject plain Generic, - # and reject multiple Generic[...] and/or Protocol[...]. 
- gvars = None - for base in cls.__orig_bases__: - if (isinstance(base, typing._GenericAlias) and - base.__origin__ in (typing.Generic, Protocol)): - # for error messages - the_base = base.__origin__.__name__ - if gvars is not None: - raise TypeError( - "Cannot inherit from Generic[...]" - " and/or Protocol[...] multiple types.") - gvars = base.__parameters__ - if gvars is None: - gvars = tvars - else: - tvarset = set(tvars) - gvarset = set(gvars) - if not tvarset <= gvarset: - s_vars = ', '.join(str(t) for t in tvars if t not in gvarset) - s_args = ', '.join(str(g) for g in gvars) - raise TypeError(f"Some type variables ({s_vars}) are" - f" not listed in {the_base}[{s_args}]") - tvars = gvars - cls.__parameters__ = tuple(tvars) - - -# 3.8+ -if hasattr(typing, 'Protocol'): - Protocol = typing.Protocol -# 3.7 -else: - - def _no_init(self, *args, **kwargs): - if type(self)._is_protocol: - raise TypeError('Protocols cannot be instantiated') - - class _ProtocolMeta(abc.ABCMeta): # noqa: B024 - # This metaclass is a bit unfortunate and exists only because of the lack - # of __instancehook__. - def __instancecheck__(cls, instance): - # We need this method for situations where attributes are - # assigned in __init__. - if ((not getattr(cls, '_is_protocol', False) or - _is_callable_members_only(cls)) and - issubclass(instance.__class__, cls)): - return True - if cls._is_protocol: - if all(hasattr(instance, attr) and - (not callable(getattr(cls, attr, None)) or - getattr(instance, attr) is not None) - for attr in _get_protocol_attrs(cls)): - return True - return super().__instancecheck__(instance) - - class Protocol(metaclass=_ProtocolMeta): - # There is quite a lot of overlapping code with typing.Generic. - # Unfortunately it is hard to avoid this while these live in two different - # modules. The duplicated code will be removed when Protocol is moved to typing. - """Base class for protocol classes. Protocol classes are defined as:: - - class Proto(Protocol): - def meth(self) -> int: - ... - - Such classes are primarily used with static type checkers that recognize - structural subtyping (static duck-typing), for example:: - - class C: - def meth(self) -> int: - return 0 - - def func(x: Proto) -> int: - return x.meth() - - func(C()) # Passes static type check - - See PEP 544 for details. Protocol classes decorated with - @typing_extensions.runtime act as simple-minded runtime protocol that checks - only the presence of given attributes, ignoring their type signatures. - - Protocol classes can be generic, they are defined as:: - - class GenProto(Protocol[T]): - def meth(self) -> T: - ... - """ - __slots__ = () - _is_protocol = True - - def __new__(cls, *args, **kwds): - if cls is Protocol: - raise TypeError("Type Protocol cannot be instantiated; " - "it can only be used as a base class") - return super().__new__(cls) - - @typing._tp_cache - def __class_getitem__(cls, params): - if not isinstance(params, tuple): - params = (params,) - if not params and cls is not typing.Tuple: - raise TypeError( - f"Parameter list to {cls.__qualname__}[...] cannot be empty") - msg = "Parameters to generic types must be types." - params = tuple(typing._type_check(p, msg) for p in params) # noqa - if cls is Protocol: - # Generic can only be subscripted with unique type variables. - if not all(isinstance(p, typing.TypeVar) for p in params): - i = 0 - while isinstance(params[i], typing.TypeVar): - i += 1 - raise TypeError( - "Parameters to Protocol[...] must all be type variables." 
- f" Parameter {i + 1} is {params[i]}") - if len(set(params)) != len(params): - raise TypeError( - "Parameters to Protocol[...] must all be unique") - else: - # Subscripting a regular Generic subclass. - _check_generic(cls, params, len(cls.__parameters__)) - return typing._GenericAlias(cls, params) - - def __init_subclass__(cls, *args, **kwargs): - if '__orig_bases__' in cls.__dict__: - error = typing.Generic in cls.__orig_bases__ - else: - error = typing.Generic in cls.__bases__ - if error: - raise TypeError("Cannot inherit from plain Generic") - _maybe_adjust_parameters(cls) - - # Determine if this is a protocol or a concrete subclass. - if not cls.__dict__.get('_is_protocol', None): - cls._is_protocol = any(b is Protocol for b in cls.__bases__) - - # Set (or override) the protocol subclass hook. - def _proto_hook(other): - if not cls.__dict__.get('_is_protocol', None): - return NotImplemented - if not getattr(cls, '_is_runtime_protocol', False): - if sys._getframe(2).f_globals['__name__'] in ['abc', 'functools']: - return NotImplemented - raise TypeError("Instance and class checks can only be used with" - " @runtime protocols") - if not _is_callable_members_only(cls): - if sys._getframe(2).f_globals['__name__'] in ['abc', 'functools']: - return NotImplemented - raise TypeError("Protocols with non-method members" - " don't support issubclass()") - if not isinstance(other, type): - # Same error as for issubclass(1, int) - raise TypeError('issubclass() arg 1 must be a class') - for attr in _get_protocol_attrs(cls): - for base in other.__mro__: - if attr in base.__dict__: - if base.__dict__[attr] is None: - return NotImplemented - break - annotations = getattr(base, '__annotations__', {}) - if (isinstance(annotations, typing.Mapping) and - attr in annotations and - isinstance(other, _ProtocolMeta) and - other._is_protocol): - break - else: - return NotImplemented - return True - if '__subclasshook__' not in cls.__dict__: - cls.__subclasshook__ = _proto_hook - - # We have nothing more to do for non-protocols. - if not cls._is_protocol: - return - - # Check consistency of bases. - for base in cls.__bases__: - if not (base in (object, typing.Generic) or - base.__module__ == 'collections.abc' and - base.__name__ in _PROTO_WHITELIST or - isinstance(base, _ProtocolMeta) and base._is_protocol): - raise TypeError('Protocols can only inherit from other' - f' protocols, got {repr(base)}') - cls.__init__ = _no_init - - -# 3.8+ -if hasattr(typing, 'runtime_checkable'): - runtime_checkable = typing.runtime_checkable -# 3.7 -else: - def runtime_checkable(cls): - """Mark a protocol class as a runtime protocol, so that it - can be used with isinstance() and issubclass(). Raise TypeError - if applied to a non-protocol class. - - This allows a simple-minded structural check very similar to the - one-offs in collections.abc such as Hashable. - """ - if not isinstance(cls, _ProtocolMeta) or not cls._is_protocol: - raise TypeError('@runtime_checkable can be only applied to protocol classes,' - f' got {cls!r}') - cls._is_runtime_protocol = True - return cls - - -# Exists for backwards compatibility. 
-runtime = runtime_checkable - - -# 3.8+ -if hasattr(typing, 'SupportsIndex'): - SupportsIndex = typing.SupportsIndex -# 3.7 -else: - @runtime_checkable - class SupportsIndex(Protocol): - __slots__ = () - - @abc.abstractmethod - def __index__(self) -> int: - pass - - -if hasattr(typing, "Required"): - # The standard library TypedDict in Python 3.8 does not store runtime information - # about which (if any) keys are optional. See https://bugs.python.org/issue38834 - # The standard library TypedDict in Python 3.9.0/1 does not honour the "total" - # keyword with old-style TypedDict(). See https://bugs.python.org/issue42059 - # The standard library TypedDict below Python 3.11 does not store runtime - # information about optional and required keys when using Required or NotRequired. - # Generic TypedDicts are also impossible using typing.TypedDict on Python <3.11. - TypedDict = typing.TypedDict - _TypedDictMeta = typing._TypedDictMeta - is_typeddict = typing.is_typeddict -else: - def _check_fails(cls, other): - try: - if sys._getframe(1).f_globals['__name__'] not in ['abc', - 'functools', - 'typing']: - # Typed dicts are only for static structural subtyping. - raise TypeError('TypedDict does not support instance and class checks') - except (AttributeError, ValueError): - pass - return False - - def _dict_new(*args, **kwargs): - if not args: - raise TypeError('TypedDict.__new__(): not enough arguments') - _, args = args[0], args[1:] # allow the "cls" keyword be passed - return dict(*args, **kwargs) - - _dict_new.__text_signature__ = '($cls, _typename, _fields=None, /, **kwargs)' - - def _typeddict_new(*args, total=True, **kwargs): - if not args: - raise TypeError('TypedDict.__new__(): not enough arguments') - _, args = args[0], args[1:] # allow the "cls" keyword be passed - if args: - typename, args = args[0], args[1:] # allow the "_typename" keyword be passed - elif '_typename' in kwargs: - typename = kwargs.pop('_typename') - import warnings - warnings.warn("Passing '_typename' as keyword argument is deprecated", - DeprecationWarning, stacklevel=2) - else: - raise TypeError("TypedDict.__new__() missing 1 required positional " - "argument: '_typename'") - if args: - try: - fields, = args # allow the "_fields" keyword be passed - except ValueError: - raise TypeError('TypedDict.__new__() takes from 2 to 3 ' - f'positional arguments but {len(args) + 2} ' - 'were given') - elif '_fields' in kwargs and len(kwargs) == 1: - fields = kwargs.pop('_fields') - import warnings - warnings.warn("Passing '_fields' as keyword argument is deprecated", - DeprecationWarning, stacklevel=2) - else: - fields = None - - if fields is None: - fields = kwargs - elif kwargs: - raise TypeError("TypedDict takes either a dict or keyword arguments," - " but not both") - - ns = {'__annotations__': dict(fields)} - try: - # Setting correct module is necessary to make typed dict classes pickleable. - ns['__module__'] = sys._getframe(1).f_globals.get('__name__', '__main__') - except (AttributeError, ValueError): - pass - - return _TypedDictMeta(typename, (), ns, total=total) - - _typeddict_new.__text_signature__ = ('($cls, _typename, _fields=None,' - ' /, *, total=True, **kwargs)') - - class _TypedDictMeta(type): - def __init__(cls, name, bases, ns, total=True): - super().__init__(name, bases, ns) - - def __new__(cls, name, bases, ns, total=True): - # Create new typed dict class object. - # This method is called directly when TypedDict is subclassed, - # or via _typeddict_new when TypedDict is instantiated. 
This way - # TypedDict supports all three syntaxes described in its docstring. - # Subclasses and instances of TypedDict return actual dictionaries - # via _dict_new. - ns['__new__'] = _typeddict_new if name == 'TypedDict' else _dict_new - # Don't insert typing.Generic into __bases__ here, - # or Generic.__init_subclass__ will raise TypeError - # in the super().__new__() call. - # Instead, monkey-patch __bases__ onto the class after it's been created. - tp_dict = super().__new__(cls, name, (dict,), ns) - - if any(issubclass(base, typing.Generic) for base in bases): - tp_dict.__bases__ = (typing.Generic, dict) - _maybe_adjust_parameters(tp_dict) - - annotations = {} - own_annotations = ns.get('__annotations__', {}) - msg = "TypedDict('Name', {f0: t0, f1: t1, ...}); each t must be a type" - own_annotations = { - n: typing._type_check(tp, msg) for n, tp in own_annotations.items() - } - required_keys = set() - optional_keys = set() - - for base in bases: - annotations.update(base.__dict__.get('__annotations__', {})) - required_keys.update(base.__dict__.get('__required_keys__', ())) - optional_keys.update(base.__dict__.get('__optional_keys__', ())) - - annotations.update(own_annotations) - for annotation_key, annotation_type in own_annotations.items(): - annotation_origin = get_origin(annotation_type) - if annotation_origin is Annotated: - annotation_args = get_args(annotation_type) - if annotation_args: - annotation_type = annotation_args[0] - annotation_origin = get_origin(annotation_type) - - if annotation_origin is Required: - required_keys.add(annotation_key) - elif annotation_origin is NotRequired: - optional_keys.add(annotation_key) - elif total: - required_keys.add(annotation_key) - else: - optional_keys.add(annotation_key) - - tp_dict.__annotations__ = annotations - tp_dict.__required_keys__ = frozenset(required_keys) - tp_dict.__optional_keys__ = frozenset(optional_keys) - if not hasattr(tp_dict, '__total__'): - tp_dict.__total__ = total - return tp_dict - - __instancecheck__ = __subclasscheck__ = _check_fails - - TypedDict = _TypedDictMeta('TypedDict', (dict,), {}) - TypedDict.__module__ = __name__ - TypedDict.__doc__ = \ - """A simple typed name space. At runtime it is equivalent to a plain dict. - - TypedDict creates a dictionary type that expects all of its - instances to have a certain set of keys, with each key - associated with a value of a consistent type. This expectation - is not checked at runtime but is only enforced by type checkers. - Usage:: - - class Point2D(TypedDict): - x: int - y: int - label: str - - a: Point2D = {'x': 1, 'y': 2, 'label': 'good'} # OK - b: Point2D = {'z': 3, 'label': 'bad'} # Fails type check - - assert Point2D(x=1, y=2, label='first') == dict(x=1, y=2, label='first') - - The type info can be accessed via the Point2D.__annotations__ dict, and - the Point2D.__required_keys__ and Point2D.__optional_keys__ frozensets. 
- TypedDict supports two additional equivalent forms:: - - Point2D = TypedDict('Point2D', x=int, y=int, label=str) - Point2D = TypedDict('Point2D', {'x': int, 'y': int, 'label': str}) - - The class syntax is only supported in Python 3.6+, while two other - syntax forms work for Python 2.7 and 3.2+ - """ - - if hasattr(typing, "_TypedDictMeta"): - _TYPEDDICT_TYPES = (typing._TypedDictMeta, _TypedDictMeta) - else: - _TYPEDDICT_TYPES = (_TypedDictMeta,) - - def is_typeddict(tp): - """Check if an annotation is a TypedDict class - - For example:: - class Film(TypedDict): - title: str - year: int - - is_typeddict(Film) # => True - is_typeddict(Union[list, str]) # => False - """ - return isinstance(tp, tuple(_TYPEDDICT_TYPES)) - - -if hasattr(typing, "assert_type"): - assert_type = typing.assert_type - -else: - def assert_type(__val, __typ): - """Assert (to the type checker) that the value is of the given type. - - When the type checker encounters a call to assert_type(), it - emits an error if the value is not of the specified type:: - - def greet(name: str) -> None: - assert_type(name, str) # ok - assert_type(name, int) # type checker error - - At runtime this returns the first argument unchanged and otherwise - does nothing. - """ - return __val - - -if hasattr(typing, "Required"): - get_type_hints = typing.get_type_hints -else: - import functools - import types - - # replaces _strip_annotations() - def _strip_extras(t): - """Strips Annotated, Required and NotRequired from a given type.""" - if isinstance(t, _AnnotatedAlias): - return _strip_extras(t.__origin__) - if hasattr(t, "__origin__") and t.__origin__ in (Required, NotRequired): - return _strip_extras(t.__args__[0]) - if isinstance(t, typing._GenericAlias): - stripped_args = tuple(_strip_extras(a) for a in t.__args__) - if stripped_args == t.__args__: - return t - return t.copy_with(stripped_args) - if hasattr(types, "GenericAlias") and isinstance(t, types.GenericAlias): - stripped_args = tuple(_strip_extras(a) for a in t.__args__) - if stripped_args == t.__args__: - return t - return types.GenericAlias(t.__origin__, stripped_args) - if hasattr(types, "UnionType") and isinstance(t, types.UnionType): - stripped_args = tuple(_strip_extras(a) for a in t.__args__) - if stripped_args == t.__args__: - return t - return functools.reduce(operator.or_, stripped_args) - - return t - - def get_type_hints(obj, globalns=None, localns=None, include_extras=False): - """Return type hints for an object. - - This is often the same as obj.__annotations__, but it handles - forward references encoded as string literals, adds Optional[t] if a - default value equal to None is set and recursively replaces all - 'Annotated[T, ...]', 'Required[T]' or 'NotRequired[T]' with 'T' - (unless 'include_extras=True'). - - The argument may be a module, class, method, or function. The annotations - are returned as a dictionary. For classes, annotations include also - inherited members. - - TypeError is raised if the argument is not of a type that can contain - annotations, and an empty dictionary is returned if no annotations are - present. - - BEWARE -- the behavior of globalns and localns is counterintuitive - (unless you are familiar with how eval() and exec() work). The - search order is locals first, then globals. - - - If no dict arguments are passed, an attempt is made to use the - globals from obj (or the respective module's globals for classes), - and these are also used as the locals. If the object does not appear - to have globals, an empty dictionary is used. 
- - - If one dict argument is passed, it is used for both globals and - locals. - - - If two dict arguments are passed, they specify globals and - locals, respectively. - """ - if hasattr(typing, "Annotated"): - hint = typing.get_type_hints( - obj, globalns=globalns, localns=localns, include_extras=True - ) - else: - hint = typing.get_type_hints(obj, globalns=globalns, localns=localns) - if include_extras: - return hint - return {k: _strip_extras(t) for k, t in hint.items()} - - -# Python 3.9+ has PEP 593 (Annotated) -if hasattr(typing, 'Annotated'): - Annotated = typing.Annotated - # Not exported and not a public API, but needed for get_origin() and get_args() - # to work. - _AnnotatedAlias = typing._AnnotatedAlias -# 3.7-3.8 -else: - class _AnnotatedAlias(typing._GenericAlias, _root=True): - """Runtime representation of an annotated type. - - At its core 'Annotated[t, dec1, dec2, ...]' is an alias for the type 't' - with extra annotations. The alias behaves like a normal typing alias, - instantiating is the same as instantiating the underlying type, binding - it to types is also the same. - """ - def __init__(self, origin, metadata): - if isinstance(origin, _AnnotatedAlias): - metadata = origin.__metadata__ + metadata - origin = origin.__origin__ - super().__init__(origin, origin) - self.__metadata__ = metadata - - def copy_with(self, params): - assert len(params) == 1 - new_type = params[0] - return _AnnotatedAlias(new_type, self.__metadata__) - - def __repr__(self): - return (f"typing_extensions.Annotated[{typing._type_repr(self.__origin__)}, " - f"{', '.join(repr(a) for a in self.__metadata__)}]") - - def __reduce__(self): - return operator.getitem, ( - Annotated, (self.__origin__,) + self.__metadata__ - ) - - def __eq__(self, other): - if not isinstance(other, _AnnotatedAlias): - return NotImplemented - if self.__origin__ != other.__origin__: - return False - return self.__metadata__ == other.__metadata__ - - def __hash__(self): - return hash((self.__origin__, self.__metadata__)) - - class Annotated: - """Add context specific metadata to a type. - - Example: Annotated[int, runtime_check.Unsigned] indicates to the - hypothetical runtime_check module that this type is an unsigned int. - Every other consumer of this type can ignore this metadata and treat - this type as int. - - The first argument to Annotated must be a valid type (and will be in - the __origin__ field), the remaining arguments are kept as a tuple in - the __extra__ field. - - Details: - - - It's an error to call `Annotated` with less than two arguments. - - Nested Annotated are flattened:: - - Annotated[Annotated[T, Ann1, Ann2], Ann3] == Annotated[T, Ann1, Ann2, Ann3] - - - Instantiating an annotated type is equivalent to instantiating the - underlying type:: - - Annotated[C, Ann1](5) == C(5) - - - Annotated can be used as a generic type alias:: - - Optimized = Annotated[T, runtime.Optimize()] - Optimized[int] == Annotated[int, runtime.Optimize()] - - OptimizedList = Annotated[List[T], runtime.Optimize()] - OptimizedList[int] == Annotated[List[int], runtime.Optimize()] - """ - - __slots__ = () - - def __new__(cls, *args, **kwargs): - raise TypeError("Type Annotated cannot be instantiated.") - - @typing._tp_cache - def __class_getitem__(cls, params): - if not isinstance(params, tuple) or len(params) < 2: - raise TypeError("Annotated[...] 
should be used " - "with at least two arguments (a type and an " - "annotation).") - allowed_special_forms = (ClassVar, Final) - if get_origin(params[0]) in allowed_special_forms: - origin = params[0] - else: - msg = "Annotated[t, ...]: t must be a type." - origin = typing._type_check(params[0], msg) - metadata = tuple(params[1:]) - return _AnnotatedAlias(origin, metadata) - - def __init_subclass__(cls, *args, **kwargs): - raise TypeError( - f"Cannot subclass {cls.__module__}.Annotated" - ) - -# Python 3.8 has get_origin() and get_args() but those implementations aren't -# Annotated-aware, so we can't use those. Python 3.9's versions don't support -# ParamSpecArgs and ParamSpecKwargs, so only Python 3.10's versions will do. -if sys.version_info[:2] >= (3, 10): - get_origin = typing.get_origin - get_args = typing.get_args -# 3.7-3.9 -else: - try: - # 3.9+ - from typing import _BaseGenericAlias - except ImportError: - _BaseGenericAlias = typing._GenericAlias - try: - # 3.9+ - from typing import GenericAlias as _typing_GenericAlias - except ImportError: - _typing_GenericAlias = typing._GenericAlias - - def get_origin(tp): - """Get the unsubscripted version of a type. - - This supports generic types, Callable, Tuple, Union, Literal, Final, ClassVar - and Annotated. Return None for unsupported types. Examples:: - - get_origin(Literal[42]) is Literal - get_origin(int) is None - get_origin(ClassVar[int]) is ClassVar - get_origin(Generic) is Generic - get_origin(Generic[T]) is Generic - get_origin(Union[T, int]) is Union - get_origin(List[Tuple[T, T]][int]) == list - get_origin(P.args) is P - """ - if isinstance(tp, _AnnotatedAlias): - return Annotated - if isinstance(tp, (typing._GenericAlias, _typing_GenericAlias, _BaseGenericAlias, - ParamSpecArgs, ParamSpecKwargs)): - return tp.__origin__ - if tp is typing.Generic: - return typing.Generic - return None - - def get_args(tp): - """Get type arguments with all substitutions performed. - - For unions, basic simplifications used by Union constructor are performed. - Examples:: - get_args(Dict[str, int]) == (str, int) - get_args(int) == () - get_args(Union[int, Union[T, int], str][int]) == (int, str) - get_args(Union[int, Tuple[T, int]][str]) == (int, Tuple[str, int]) - get_args(Callable[[], T][int]) == ([], int) - """ - if isinstance(tp, _AnnotatedAlias): - return (tp.__origin__,) + tp.__metadata__ - if isinstance(tp, (typing._GenericAlias, _typing_GenericAlias)): - if getattr(tp, "_special", False): - return () - res = tp.__args__ - if get_origin(tp) is collections.abc.Callable and res[0] is not Ellipsis: - res = (list(res[:-1]), res[-1]) - return res - return () - - -# 3.10+ -if hasattr(typing, 'TypeAlias'): - TypeAlias = typing.TypeAlias -# 3.9 -elif sys.version_info[:2] >= (3, 9): - class _TypeAliasForm(typing._SpecialForm, _root=True): - def __repr__(self): - return 'typing_extensions.' + self._name - - @_TypeAliasForm - def TypeAlias(self, parameters): - """Special marker indicating that an assignment should - be recognized as a proper type alias definition by type - checkers. - - For example:: - - Predicate: TypeAlias = Callable[..., bool] - - It's invalid when used anywhere except as in the example above. - """ - raise TypeError(f"{self} is not subscriptable") -# 3.7-3.8 -else: - class _TypeAliasForm(typing._SpecialForm, _root=True): - def __repr__(self): - return 'typing_extensions.' 
+ self._name - - TypeAlias = _TypeAliasForm('TypeAlias', - doc="""Special marker indicating that an assignment should - be recognized as a proper type alias definition by type - checkers. - - For example:: - - Predicate: TypeAlias = Callable[..., bool] - - It's invalid when used anywhere except as in the example - above.""") - - -class _DefaultMixin: - """Mixin for TypeVarLike defaults.""" - - __slots__ = () - - def __init__(self, default): - if isinstance(default, (tuple, list)): - self.__default__ = tuple((typing._type_check(d, "Default must be a type") - for d in default)) - elif default: - self.__default__ = typing._type_check(default, "Default must be a type") - else: - self.__default__ = None - - -# Add default and infer_variance parameters from PEP 696 and 695 -class TypeVar(typing.TypeVar, _DefaultMixin, _root=True): - """Type variable.""" - - __module__ = 'typing' - - def __init__(self, name, *constraints, bound=None, - covariant=False, contravariant=False, - default=None, infer_variance=False): - super().__init__(name, *constraints, bound=bound, covariant=covariant, - contravariant=contravariant) - _DefaultMixin.__init__(self, default) - self.__infer_variance__ = infer_variance - - # for pickling: - try: - def_mod = sys._getframe(1).f_globals.get('__name__', '__main__') - except (AttributeError, ValueError): - def_mod = None - if def_mod != 'typing_extensions': - self.__module__ = def_mod - - -# Python 3.10+ has PEP 612 -if hasattr(typing, 'ParamSpecArgs'): - ParamSpecArgs = typing.ParamSpecArgs - ParamSpecKwargs = typing.ParamSpecKwargs -# 3.7-3.9 -else: - class _Immutable: - """Mixin to indicate that object should not be copied.""" - __slots__ = () - - def __copy__(self): - return self - - def __deepcopy__(self, memo): - return self - - class ParamSpecArgs(_Immutable): - """The args for a ParamSpec object. - - Given a ParamSpec object P, P.args is an instance of ParamSpecArgs. - - ParamSpecArgs objects have a reference back to their ParamSpec: - - P.args.__origin__ is P - - This type is meant for runtime introspection and has no special meaning to - static type checkers. - """ - def __init__(self, origin): - self.__origin__ = origin - - def __repr__(self): - return f"{self.__origin__.__name__}.args" - - def __eq__(self, other): - if not isinstance(other, ParamSpecArgs): - return NotImplemented - return self.__origin__ == other.__origin__ - - class ParamSpecKwargs(_Immutable): - """The kwargs for a ParamSpec object. - - Given a ParamSpec object P, P.kwargs is an instance of ParamSpecKwargs. - - ParamSpecKwargs objects have a reference back to their ParamSpec: - - P.kwargs.__origin__ is P - - This type is meant for runtime introspection and has no special meaning to - static type checkers. 
- """ - def __init__(self, origin): - self.__origin__ = origin - - def __repr__(self): - return f"{self.__origin__.__name__}.kwargs" - - def __eq__(self, other): - if not isinstance(other, ParamSpecKwargs): - return NotImplemented - return self.__origin__ == other.__origin__ - -# 3.10+ -if hasattr(typing, 'ParamSpec'): - - # Add default Parameter - PEP 696 - class ParamSpec(typing.ParamSpec, _DefaultMixin, _root=True): - """Parameter specification variable.""" - - __module__ = 'typing' - - def __init__(self, name, *, bound=None, covariant=False, contravariant=False, - default=None): - super().__init__(name, bound=bound, covariant=covariant, - contravariant=contravariant) - _DefaultMixin.__init__(self, default) - - # for pickling: - try: - def_mod = sys._getframe(1).f_globals.get('__name__', '__main__') - except (AttributeError, ValueError): - def_mod = None - if def_mod != 'typing_extensions': - self.__module__ = def_mod - -# 3.7-3.9 -else: - - # Inherits from list as a workaround for Callable checks in Python < 3.9.2. - class ParamSpec(list, _DefaultMixin): - """Parameter specification variable. - - Usage:: - - P = ParamSpec('P') - - Parameter specification variables exist primarily for the benefit of static - type checkers. They are used to forward the parameter types of one - callable to another callable, a pattern commonly found in higher order - functions and decorators. They are only valid when used in ``Concatenate``, - or s the first argument to ``Callable``. In Python 3.10 and higher, - they are also supported in user-defined Generics at runtime. - See class Generic for more information on generic types. An - example for annotating a decorator:: - - T = TypeVar('T') - P = ParamSpec('P') - - def add_logging(f: Callable[P, T]) -> Callable[P, T]: - '''A type-safe decorator to add logging to a function.''' - def inner(*args: P.args, **kwargs: P.kwargs) -> T: - logging.info(f'{f.__name__} was called') - return f(*args, **kwargs) - return inner - - @add_logging - def add_two(x: float, y: float) -> float: - '''Add two numbers together.''' - return x + y - - Parameter specification variables defined with covariant=True or - contravariant=True can be used to declare covariant or contravariant - generic types. These keyword arguments are valid, but their actual semantics - are yet to be decided. See PEP 612 for details. - - Parameter specification variables can be introspected. e.g.: - - P.__name__ == 'T' - P.__bound__ == None - P.__covariant__ == False - P.__contravariant__ == False - - Note that only parameter specification variables defined in global scope can - be pickled. - """ - - # Trick Generic __parameters__. 
- __class__ = typing.TypeVar - - @property - def args(self): - return ParamSpecArgs(self) - - @property - def kwargs(self): - return ParamSpecKwargs(self) - - def __init__(self, name, *, bound=None, covariant=False, contravariant=False, - default=None): - super().__init__([self]) - self.__name__ = name - self.__covariant__ = bool(covariant) - self.__contravariant__ = bool(contravariant) - if bound: - self.__bound__ = typing._type_check(bound, 'Bound must be a type.') - else: - self.__bound__ = None - _DefaultMixin.__init__(self, default) - - # for pickling: - try: - def_mod = sys._getframe(1).f_globals.get('__name__', '__main__') - except (AttributeError, ValueError): - def_mod = None - if def_mod != 'typing_extensions': - self.__module__ = def_mod - - def __repr__(self): - if self.__covariant__: - prefix = '+' - elif self.__contravariant__: - prefix = '-' - else: - prefix = '~' - return prefix + self.__name__ - - def __hash__(self): - return object.__hash__(self) - - def __eq__(self, other): - return self is other - - def __reduce__(self): - return self.__name__ - - # Hack to get typing._type_check to pass. - def __call__(self, *args, **kwargs): - pass - - -# 3.7-3.9 -if not hasattr(typing, 'Concatenate'): - # Inherits from list as a workaround for Callable checks in Python < 3.9.2. - class _ConcatenateGenericAlias(list): - - # Trick Generic into looking into this for __parameters__. - __class__ = typing._GenericAlias - - # Flag in 3.8. - _special = False - - def __init__(self, origin, args): - super().__init__(args) - self.__origin__ = origin - self.__args__ = args - - def __repr__(self): - _type_repr = typing._type_repr - return (f'{_type_repr(self.__origin__)}' - f'[{", ".join(_type_repr(arg) for arg in self.__args__)}]') - - def __hash__(self): - return hash((self.__origin__, self.__args__)) - - # Hack to get typing._type_check to pass in Generic. - def __call__(self, *args, **kwargs): - pass - - @property - def __parameters__(self): - return tuple( - tp for tp in self.__args__ if isinstance(tp, (typing.TypeVar, ParamSpec)) - ) - - -# 3.7-3.9 -@typing._tp_cache -def _concatenate_getitem(self, parameters): - if parameters == (): - raise TypeError("Cannot take a Concatenate of no types.") - if not isinstance(parameters, tuple): - parameters = (parameters,) - if not isinstance(parameters[-1], ParamSpec): - raise TypeError("The last parameter to Concatenate should be a " - "ParamSpec variable.") - msg = "Concatenate[arg, ...]: each arg must be a type." - parameters = tuple(typing._type_check(p, msg) for p in parameters) - return _ConcatenateGenericAlias(self, parameters) - - -# 3.10+ -if hasattr(typing, 'Concatenate'): - Concatenate = typing.Concatenate - _ConcatenateGenericAlias = typing._ConcatenateGenericAlias # noqa -# 3.9 -elif sys.version_info[:2] >= (3, 9): - @_TypeAliasForm - def Concatenate(self, parameters): - """Used in conjunction with ``ParamSpec`` and ``Callable`` to represent a - higher order function which adds, removes or transforms parameters of a - callable. - - For example:: - - Callable[Concatenate[int, P], int] - - See PEP 612 for detailed information. - """ - return _concatenate_getitem(self, parameters) -# 3.7-8 -else: - class _ConcatenateForm(typing._SpecialForm, _root=True): - def __repr__(self): - return 'typing_extensions.' 
+ self._name - - def __getitem__(self, parameters): - return _concatenate_getitem(self, parameters) - - Concatenate = _ConcatenateForm( - 'Concatenate', - doc="""Used in conjunction with ``ParamSpec`` and ``Callable`` to represent a - higher order function which adds, removes or transforms parameters of a - callable. - - For example:: - - Callable[Concatenate[int, P], int] - - See PEP 612 for detailed information. - """) - -# 3.10+ -if hasattr(typing, 'TypeGuard'): - TypeGuard = typing.TypeGuard -# 3.9 -elif sys.version_info[:2] >= (3, 9): - class _TypeGuardForm(typing._SpecialForm, _root=True): - def __repr__(self): - return 'typing_extensions.' + self._name - - @_TypeGuardForm - def TypeGuard(self, parameters): - """Special typing form used to annotate the return type of a user-defined - type guard function. ``TypeGuard`` only accepts a single type argument. - At runtime, functions marked this way should return a boolean. - - ``TypeGuard`` aims to benefit *type narrowing* -- a technique used by static - type checkers to determine a more precise type of an expression within a - program's code flow. Usually type narrowing is done by analyzing - conditional code flow and applying the narrowing to a block of code. The - conditional expression here is sometimes referred to as a "type guard". - - Sometimes it would be convenient to use a user-defined boolean function - as a type guard. Such a function should use ``TypeGuard[...]`` as its - return type to alert static type checkers to this intention. - - Using ``-> TypeGuard`` tells the static type checker that for a given - function: - - 1. The return value is a boolean. - 2. If the return value is ``True``, the type of its argument - is the type inside ``TypeGuard``. - - For example:: - - def is_str(val: Union[str, float]): - # "isinstance" type guard - if isinstance(val, str): - # Type of ``val`` is narrowed to ``str`` - ... - else: - # Else, type of ``val`` is narrowed to ``float``. - ... - - Strict type narrowing is not enforced -- ``TypeB`` need not be a narrower - form of ``TypeA`` (it can even be a wider form) and this may lead to - type-unsafe results. The main reason is to allow for things like - narrowing ``List[object]`` to ``List[str]`` even though the latter is not - a subtype of the former, since ``List`` is invariant. The responsibility of - writing type-safe type guards is left to the user. - - ``TypeGuard`` also works with type variables. For more information, see - PEP 647 (User-Defined Type Guards). - """ - item = typing._type_check(parameters, f'{self} accepts only a single type.') - return typing._GenericAlias(self, (item,)) -# 3.7-3.8 -else: - class _TypeGuardForm(typing._SpecialForm, _root=True): - - def __repr__(self): - return 'typing_extensions.' + self._name - - def __getitem__(self, parameters): - item = typing._type_check(parameters, - f'{self._name} accepts only a single type') - return typing._GenericAlias(self, (item,)) - - TypeGuard = _TypeGuardForm( - 'TypeGuard', - doc="""Special typing form used to annotate the return type of a user-defined - type guard function. ``TypeGuard`` only accepts a single type argument. - At runtime, functions marked this way should return a boolean. - - ``TypeGuard`` aims to benefit *type narrowing* -- a technique used by static - type checkers to determine a more precise type of an expression within a - program's code flow. Usually type narrowing is done by analyzing - conditional code flow and applying the narrowing to a block of code. 
The - conditional expression here is sometimes referred to as a "type guard". - - Sometimes it would be convenient to use a user-defined boolean function - as a type guard. Such a function should use ``TypeGuard[...]`` as its - return type to alert static type checkers to this intention. - - Using ``-> TypeGuard`` tells the static type checker that for a given - function: - - 1. The return value is a boolean. - 2. If the return value is ``True``, the type of its argument - is the type inside ``TypeGuard``. - - For example:: - - def is_str(val: Union[str, float]): - # "isinstance" type guard - if isinstance(val, str): - # Type of ``val`` is narrowed to ``str`` - ... - else: - # Else, type of ``val`` is narrowed to ``float``. - ... - - Strict type narrowing is not enforced -- ``TypeB`` need not be a narrower - form of ``TypeA`` (it can even be a wider form) and this may lead to - type-unsafe results. The main reason is to allow for things like - narrowing ``List[object]`` to ``List[str]`` even though the latter is not - a subtype of the former, since ``List`` is invariant. The responsibility of - writing type-safe type guards is left to the user. - - ``TypeGuard`` also works with type variables. For more information, see - PEP 647 (User-Defined Type Guards). - """) - - -# Vendored from cpython typing._SpecialFrom -class _SpecialForm(typing._Final, _root=True): - __slots__ = ('_name', '__doc__', '_getitem') - - def __init__(self, getitem): - self._getitem = getitem - self._name = getitem.__name__ - self.__doc__ = getitem.__doc__ - - def __getattr__(self, item): - if item in {'__name__', '__qualname__'}: - return self._name - - raise AttributeError(item) - - def __mro_entries__(self, bases): - raise TypeError(f"Cannot subclass {self!r}") - - def __repr__(self): - return f'typing_extensions.{self._name}' - - def __reduce__(self): - return self._name - - def __call__(self, *args, **kwds): - raise TypeError(f"Cannot instantiate {self!r}") - - def __or__(self, other): - return typing.Union[self, other] - - def __ror__(self, other): - return typing.Union[other, self] - - def __instancecheck__(self, obj): - raise TypeError(f"{self} cannot be used with isinstance()") - - def __subclasscheck__(self, cls): - raise TypeError(f"{self} cannot be used with issubclass()") - - @typing._tp_cache - def __getitem__(self, parameters): - return self._getitem(self, parameters) - - -if hasattr(typing, "LiteralString"): - LiteralString = typing.LiteralString -else: - @_SpecialForm - def LiteralString(self, params): - """Represents an arbitrary literal string. - - Example:: - - from typing_extensions import LiteralString - - def query(sql: LiteralString) -> ...: - ... - - query("SELECT * FROM table") # ok - query(f"SELECT * FROM {input()}") # not ok - - See PEP 675 for details. - - """ - raise TypeError(f"{self} is not subscriptable") - - -if hasattr(typing, "Self"): - Self = typing.Self -else: - @_SpecialForm - def Self(self, params): - """Used to spell the type of "self" in classes. - - Example:: - - from typing import Self - - class ReturnsSelf: - def parse(self, data: bytes) -> Self: - ... - return self - - """ - - raise TypeError(f"{self} is not subscriptable") - - -if hasattr(typing, "Never"): - Never = typing.Never -else: - @_SpecialForm - def Never(self, params): - """The bottom type, a type that has no members. 
- - This can be used to define a function that should never be - called, or a function that never returns:: - - from typing_extensions import Never - - def never_call_me(arg: Never) -> None: - pass - - def int_or_str(arg: int | str) -> None: - never_call_me(arg) # type checker error - match arg: - case int(): - print("It's an int") - case str(): - print("It's a str") - case _: - never_call_me(arg) # ok, arg is of type Never - - """ - - raise TypeError(f"{self} is not subscriptable") - - -if hasattr(typing, 'Required'): - Required = typing.Required - NotRequired = typing.NotRequired -elif sys.version_info[:2] >= (3, 9): - class _ExtensionsSpecialForm(typing._SpecialForm, _root=True): - def __repr__(self): - return 'typing_extensions.' + self._name - - @_ExtensionsSpecialForm - def Required(self, parameters): - """A special typing construct to mark a key of a total=False TypedDict - as required. For example: - - class Movie(TypedDict, total=False): - title: Required[str] - year: int - - m = Movie( - title='The Matrix', # typechecker error if key is omitted - year=1999, - ) - - There is no runtime checking that a required key is actually provided - when instantiating a related TypedDict. - """ - item = typing._type_check(parameters, f'{self._name} accepts only a single type.') - return typing._GenericAlias(self, (item,)) - - @_ExtensionsSpecialForm - def NotRequired(self, parameters): - """A special typing construct to mark a key of a TypedDict as - potentially missing. For example: - - class Movie(TypedDict): - title: str - year: NotRequired[int] - - m = Movie( - title='The Matrix', # typechecker error if key is omitted - year=1999, - ) - """ - item = typing._type_check(parameters, f'{self._name} accepts only a single type.') - return typing._GenericAlias(self, (item,)) - -else: - class _RequiredForm(typing._SpecialForm, _root=True): - def __repr__(self): - return 'typing_extensions.' + self._name - - def __getitem__(self, parameters): - item = typing._type_check(parameters, - f'{self._name} accepts only a single type.') - return typing._GenericAlias(self, (item,)) - - Required = _RequiredForm( - 'Required', - doc="""A special typing construct to mark a key of a total=False TypedDict - as required. For example: - - class Movie(TypedDict, total=False): - title: Required[str] - year: int - - m = Movie( - title='The Matrix', # typechecker error if key is omitted - year=1999, - ) - - There is no runtime checking that a required key is actually provided - when instantiating a related TypedDict. - """) - NotRequired = _RequiredForm( - 'NotRequired', - doc="""A special typing construct to mark a key of a TypedDict as - potentially missing. For example: - - class Movie(TypedDict): - title: str - year: NotRequired[int] - - m = Movie( - title='The Matrix', # typechecker error if key is omitted - year=1999, - ) - """) - - -if hasattr(typing, "Unpack"): # 3.11+ - Unpack = typing.Unpack -elif sys.version_info[:2] >= (3, 9): - class _UnpackSpecialForm(typing._SpecialForm, _root=True): - def __repr__(self): - return 'typing_extensions.' + self._name - - class _UnpackAlias(typing._GenericAlias, _root=True): - __class__ = typing.TypeVar - - @_UnpackSpecialForm - def Unpack(self, parameters): - """A special typing construct to unpack a variadic type. For example: - - Shape = TypeVarTuple('Shape') - Batch = NewType('Batch', int) - - def add_batch_axis( - x: Array[Unpack[Shape]] - ) -> Array[Batch, Unpack[Shape]]: ... 
- - """ - item = typing._type_check(parameters, f'{self._name} accepts only a single type.') - return _UnpackAlias(self, (item,)) - - def _is_unpack(obj): - return isinstance(obj, _UnpackAlias) - -else: - class _UnpackAlias(typing._GenericAlias, _root=True): - __class__ = typing.TypeVar - - class _UnpackForm(typing._SpecialForm, _root=True): - def __repr__(self): - return 'typing_extensions.' + self._name - - def __getitem__(self, parameters): - item = typing._type_check(parameters, - f'{self._name} accepts only a single type.') - return _UnpackAlias(self, (item,)) - - Unpack = _UnpackForm( - 'Unpack', - doc="""A special typing construct to unpack a variadic type. For example: - - Shape = TypeVarTuple('Shape') - Batch = NewType('Batch', int) - - def add_batch_axis( - x: Array[Unpack[Shape]] - ) -> Array[Batch, Unpack[Shape]]: ... - - """) - - def _is_unpack(obj): - return isinstance(obj, _UnpackAlias) - - -if hasattr(typing, "TypeVarTuple"): # 3.11+ - - # Add default Parameter - PEP 696 - class TypeVarTuple(typing.TypeVarTuple, _DefaultMixin, _root=True): - """Type variable tuple.""" - - def __init__(self, name, *, default=None): - super().__init__(name) - _DefaultMixin.__init__(self, default) - - # for pickling: - try: - def_mod = sys._getframe(1).f_globals.get('__name__', '__main__') - except (AttributeError, ValueError): - def_mod = None - if def_mod != 'typing_extensions': - self.__module__ = def_mod - -else: - class TypeVarTuple(_DefaultMixin): - """Type variable tuple. - - Usage:: - - Ts = TypeVarTuple('Ts') - - In the same way that a normal type variable is a stand-in for a single - type such as ``int``, a type variable *tuple* is a stand-in for a *tuple* - type such as ``Tuple[int, str]``. - - Type variable tuples can be used in ``Generic`` declarations. - Consider the following example:: - - class Array(Generic[*Ts]): ... - - The ``Ts`` type variable tuple here behaves like ``tuple[T1, T2]``, - where ``T1`` and ``T2`` are type variables. To use these type variables - as type parameters of ``Array``, we must *unpack* the type variable tuple using - the star operator: ``*Ts``. The signature of ``Array`` then behaves - as if we had simply written ``class Array(Generic[T1, T2]): ...``. - In contrast to ``Generic[T1, T2]``, however, ``Generic[*Shape]`` allows - us to parameterise the class with an *arbitrary* number of type parameters. - - Type variable tuples can be used anywhere a normal ``TypeVar`` can. - This includes class definitions, as shown above, as well as function - signatures and variable annotations:: - - class Array(Generic[*Ts]): - - def __init__(self, shape: Tuple[*Ts]): - self._shape: Tuple[*Ts] = shape - - def get_shape(self) -> Tuple[*Ts]: - return self._shape - - shape = (Height(480), Width(640)) - x: Array[Height, Width] = Array(shape) - y = abs(x) # Inferred type is Array[Height, Width] - z = x + x # ... is Array[Height, Width] - x.get_shape() # ... is tuple[Height, Width] - - """ - - # Trick Generic __parameters__. 
- __class__ = typing.TypeVar - - def __iter__(self): - yield self.__unpacked__ - - def __init__(self, name, *, default=None): - self.__name__ = name - _DefaultMixin.__init__(self, default) - - # for pickling: - try: - def_mod = sys._getframe(1).f_globals.get('__name__', '__main__') - except (AttributeError, ValueError): - def_mod = None - if def_mod != 'typing_extensions': - self.__module__ = def_mod - - self.__unpacked__ = Unpack[self] - - def __repr__(self): - return self.__name__ - - def __hash__(self): - return object.__hash__(self) - - def __eq__(self, other): - return self is other - - def __reduce__(self): - return self.__name__ - - def __init_subclass__(self, *args, **kwds): - if '_root' not in kwds: - raise TypeError("Cannot subclass special typing classes") - - -if hasattr(typing, "reveal_type"): - reveal_type = typing.reveal_type -else: - def reveal_type(__obj: T) -> T: - """Reveal the inferred type of a variable. - - When a static type checker encounters a call to ``reveal_type()``, - it will emit the inferred type of the argument:: - - x: int = 1 - reveal_type(x) - - Running a static type checker (e.g., ``mypy``) on this example - will produce output similar to 'Revealed type is "builtins.int"'. - - At runtime, the function prints the runtime type of the - argument and returns it unchanged. - - """ - print(f"Runtime type is {type(__obj).__name__!r}", file=sys.stderr) - return __obj - - -if hasattr(typing, "assert_never"): - assert_never = typing.assert_never -else: - def assert_never(__arg: Never) -> Never: - """Assert to the type checker that a line of code is unreachable. - - Example:: - - def int_or_str(arg: int | str) -> None: - match arg: - case int(): - print("It's an int") - case str(): - print("It's a str") - case _: - assert_never(arg) - - If a type checker finds that a call to assert_never() is - reachable, it will emit an error. - - At runtime, this throws an exception when called. - - """ - raise AssertionError("Expected code to be unreachable") - - -if hasattr(typing, 'dataclass_transform'): - dataclass_transform = typing.dataclass_transform -else: - def dataclass_transform( - *, - eq_default: bool = True, - order_default: bool = False, - kw_only_default: bool = False, - field_specifiers: typing.Tuple[ - typing.Union[typing.Type[typing.Any], typing.Callable[..., typing.Any]], - ... - ] = (), - **kwargs: typing.Any, - ) -> typing.Callable[[T], T]: - """Decorator that marks a function, class, or metaclass as providing - dataclass-like behavior. - - Example: - - from typing_extensions import dataclass_transform - - _T = TypeVar("_T") - - # Used on a decorator function - @dataclass_transform() - def create_model(cls: type[_T]) -> type[_T]: - ... - return cls - - @create_model - class CustomerModel: - id: int - name: str - - # Used on a base class - @dataclass_transform() - class ModelBase: ... - - class CustomerModel(ModelBase): - id: int - name: str - - # Used on a metaclass - @dataclass_transform() - class ModelMeta(type): ... - - class ModelBase(metaclass=ModelMeta): ... - - class CustomerModel(ModelBase): - id: int - name: str - - Each of the ``CustomerModel`` classes defined in this example will now - behave similarly to a dataclass created with the ``@dataclasses.dataclass`` - decorator. For example, the type checker will synthesize an ``__init__`` - method. - - The arguments to this decorator can be used to customize this behavior: - - ``eq_default`` indicates whether the ``eq`` parameter is assumed to be - True or False if it is omitted by the caller. 
- - ``order_default`` indicates whether the ``order`` parameter is - assumed to be True or False if it is omitted by the caller. - - ``kw_only_default`` indicates whether the ``kw_only`` parameter is - assumed to be True or False if it is omitted by the caller. - - ``field_specifiers`` specifies a static list of supported classes - or functions that describe fields, similar to ``dataclasses.field()``. - - At runtime, this decorator records its arguments in the - ``__dataclass_transform__`` attribute on the decorated object. - - See PEP 681 for details. - - """ - def decorator(cls_or_fn): - cls_or_fn.__dataclass_transform__ = { - "eq_default": eq_default, - "order_default": order_default, - "kw_only_default": kw_only_default, - "field_specifiers": field_specifiers, - "kwargs": kwargs, - } - return cls_or_fn - return decorator - - -if hasattr(typing, "override"): - override = typing.override -else: - _F = typing.TypeVar("_F", bound=typing.Callable[..., typing.Any]) - - def override(__arg: _F) -> _F: - """Indicate that a method is intended to override a method in a base class. - - Usage: - - class Base: - def method(self) -> None: ... - pass - - class Child(Base): - @override - def method(self) -> None: - super().method() - - When this decorator is applied to a method, the type checker will - validate that it overrides a method with the same name on a base class. - This helps prevent bugs that may occur when a base class is changed - without an equivalent change to a child class. - - See PEP 698 for details. - - """ - return __arg - - -# We have to do some monkey patching to deal with the dual nature of -# Unpack/TypeVarTuple: -# - We want Unpack to be a kind of TypeVar so it gets accepted in -# Generic[Unpack[Ts]] -# - We want it to *not* be treated as a TypeVar for the purposes of -# counting generic parameters, so that when we subscript a generic, -# the runtime doesn't try to substitute the Unpack with the subscripted type. -if not hasattr(typing, "TypeVarTuple"): - typing._collect_type_vars = _collect_type_vars - typing._check_generic = _check_generic - - -# Backport typing.NamedTuple as it exists in Python 3.11. -# In 3.11, the ability to define generic `NamedTuple`s was supported. -# This was explicitly disallowed in 3.9-3.10, and only half-worked in <=3.8. 
-if sys.version_info >= (3, 11): - NamedTuple = typing.NamedTuple -else: - def _caller(): - try: - return sys._getframe(2).f_globals.get('__name__', '__main__') - except (AttributeError, ValueError): # For platforms without _getframe() - return None - - def _make_nmtuple(name, types, module, defaults=()): - fields = [n for n, t in types] - annotations = {n: typing._type_check(t, f"field {n} annotation must be a type") - for n, t in types} - nm_tpl = collections.namedtuple(name, fields, - defaults=defaults, module=module) - nm_tpl.__annotations__ = nm_tpl.__new__.__annotations__ = annotations - # The `_field_types` attribute was removed in 3.9; - # in earlier versions, it is the same as the `__annotations__` attribute - if sys.version_info < (3, 9): - nm_tpl._field_types = annotations - return nm_tpl - - _prohibited_namedtuple_fields = typing._prohibited - _special_namedtuple_fields = frozenset({'__module__', '__name__', '__annotations__'}) - - class _NamedTupleMeta(type): - def __new__(cls, typename, bases, ns): - assert _NamedTuple in bases - for base in bases: - if base is not _NamedTuple and base is not typing.Generic: - raise TypeError( - 'can only inherit from a NamedTuple type and Generic') - bases = tuple(tuple if base is _NamedTuple else base for base in bases) - types = ns.get('__annotations__', {}) - default_names = [] - for field_name in types: - if field_name in ns: - default_names.append(field_name) - elif default_names: - raise TypeError(f"Non-default namedtuple field {field_name} " - f"cannot follow default field" - f"{'s' if len(default_names) > 1 else ''} " - f"{', '.join(default_names)}") - nm_tpl = _make_nmtuple( - typename, types.items(), - defaults=[ns[n] for n in default_names], - module=ns['__module__'] - ) - nm_tpl.__bases__ = bases - if typing.Generic in bases: - class_getitem = typing.Generic.__class_getitem__.__func__ - nm_tpl.__class_getitem__ = classmethod(class_getitem) - # update from user namespace without overriding special namedtuple attributes - for key in ns: - if key in _prohibited_namedtuple_fields: - raise AttributeError("Cannot overwrite NamedTuple attribute " + key) - elif key not in _special_namedtuple_fields and key not in nm_tpl._fields: - setattr(nm_tpl, key, ns[key]) - if typing.Generic in bases: - nm_tpl.__init_subclass__() - return nm_tpl - - def NamedTuple(__typename, __fields=None, **kwargs): - if __fields is None: - __fields = kwargs.items() - elif kwargs: - raise TypeError("Either list of fields or keywords" - " can be provided to NamedTuple, not both") - return _make_nmtuple(__typename, __fields, module=_caller()) - - NamedTuple.__doc__ = typing.NamedTuple.__doc__ - _NamedTuple = type.__new__(_NamedTupleMeta, 'NamedTuple', (), {}) - - # On 3.8+, alter the signature so that it matches typing.NamedTuple. - # The signature of typing.NamedTuple on >=3.8 is invalid syntax in Python 3.7, - # so just leave the signature as it is on 3.7. 
- if sys.version_info >= (3, 8): - NamedTuple.__text_signature__ = '(typename, fields=None, /, **kwargs)' - - def _namedtuple_mro_entries(bases): - assert NamedTuple in bases - return (_NamedTuple,) - - NamedTuple.__mro_entries__ = _namedtuple_mro_entries diff --git a/spaces/Realcat/image-matching-webui/third_party/Roma/roma/models/transformer/layers/dino_head.py b/spaces/Realcat/image-matching-webui/third_party/Roma/roma/models/transformer/layers/dino_head.py deleted file mode 100644 index 1147dd3a3c046aee8d427b42b1055f38a218275b..0000000000000000000000000000000000000000 --- a/spaces/Realcat/image-matching-webui/third_party/Roma/roma/models/transformer/layers/dino_head.py +++ /dev/null @@ -1,68 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import torch -import torch.nn as nn -from torch.nn.init import trunc_normal_ -from torch.nn.utils import weight_norm - - -class DINOHead(nn.Module): - def __init__( - self, - in_dim, - out_dim, - use_bn=False, - nlayers=3, - hidden_dim=2048, - bottleneck_dim=256, - mlp_bias=True, - ): - super().__init__() - nlayers = max(nlayers, 1) - self.mlp = _build_mlp( - nlayers, - in_dim, - bottleneck_dim, - hidden_dim=hidden_dim, - use_bn=use_bn, - bias=mlp_bias, - ) - self.apply(self._init_weights) - self.last_layer = weight_norm(nn.Linear(bottleneck_dim, out_dim, bias=False)) - self.last_layer.weight_g.data.fill_(1) - - def _init_weights(self, m): - if isinstance(m, nn.Linear): - trunc_normal_(m.weight, std=0.02) - if isinstance(m, nn.Linear) and m.bias is not None: - nn.init.constant_(m.bias, 0) - - def forward(self, x): - x = self.mlp(x) - eps = 1e-6 if x.dtype == torch.float16 else 1e-12 - x = nn.functional.normalize(x, dim=-1, p=2, eps=eps) - x = self.last_layer(x) - return x - - -def _build_mlp( - nlayers, in_dim, bottleneck_dim, hidden_dim=None, use_bn=False, bias=True -): - if nlayers == 1: - return nn.Linear(in_dim, bottleneck_dim, bias=bias) - else: - layers = [nn.Linear(in_dim, hidden_dim, bias=bias)] - if use_bn: - layers.append(nn.BatchNorm1d(hidden_dim)) - layers.append(nn.GELU()) - for _ in range(nlayers - 2): - layers.append(nn.Linear(hidden_dim, hidden_dim, bias=bias)) - if use_bn: - layers.append(nn.BatchNorm1d(hidden_dim)) - layers.append(nn.GELU()) - layers.append(nn.Linear(hidden_dim, bottleneck_dim, bias=bias)) - return nn.Sequential(*layers) diff --git a/spaces/Rehman1603/YouTubeToTextInVariousLanguage/app.py b/spaces/Rehman1603/YouTubeToTextInVariousLanguage/app.py deleted file mode 100644 index 25fb0394981cadc246f026f0eec10fcc1288896a..0000000000000000000000000000000000000000 --- a/spaces/Rehman1603/YouTubeToTextInVariousLanguage/app.py +++ /dev/null @@ -1,11 +0,0 @@ -import gradio as gr -from YouTubeDataExtraction import Video_To_Text - -interface=gr.Interface(fn=Video_To_Text,inputs=[gr.inputs.Textbox(placeholder="Enter YouTube Video Link",label="YouTube Video Link"),gr.inputs.Radio(["Urdu","German","Hindi"],type="value",label="Select any one Language")], - outputs=[gr.outputs.Textbox(label="Result in English"),gr.outputs.Textbox(label="Result in Choose language")], - examples=[ - ["https://www.youtube.com/watch?v=fLeJJPxua3E","Urdu"] - ], - enable_queu=True) - -interface.launch(debug=True) \ No newline at end of file diff --git a/spaces/Ritori/TTS_Yui/hifi-gan/models.py b/spaces/Ritori/TTS_Yui/hifi-gan/models.py deleted file mode 100644 index 
cc06acfffc495e6642889bc56891ea777f745183..0000000000000000000000000000000000000000 --- a/spaces/Ritori/TTS_Yui/hifi-gan/models.py +++ /dev/null @@ -1,283 +0,0 @@ -import torch -import torch.nn.functional as F -import torch.nn as nn -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm -from hifiutils import init_weights, get_padding - -LRELU_SLOPE = 0.1 - - -class ResBlock1(torch.nn.Module): - def __init__(self, h, channels, kernel_size=3, dilation=(1, 3, 5)): - super(ResBlock1, self).__init__() - self.h = h - self.convs1 = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[2], - padding=get_padding(kernel_size, dilation[2]))) - ]) - self.convs1.apply(init_weights) - - self.convs2 = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))) - ]) - self.convs2.apply(init_weights) - - def forward(self, x): - for c1, c2 in zip(self.convs1, self.convs2): - xt = F.leaky_relu(x, LRELU_SLOPE) - xt = c1(xt) - xt = F.leaky_relu(xt, LRELU_SLOPE) - xt = c2(xt) - x = xt + x - return x - - def remove_weight_norm(self): - for l in self.convs1: - remove_weight_norm(l) - for l in self.convs2: - remove_weight_norm(l) - - -class ResBlock2(torch.nn.Module): - def __init__(self, h, channels, kernel_size=3, dilation=(1, 3)): - super(ResBlock2, self).__init__() - self.h = h - self.convs = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]))) - ]) - self.convs.apply(init_weights) - - def forward(self, x): - for c in self.convs: - xt = F.leaky_relu(x, LRELU_SLOPE) - xt = c(xt) - x = xt + x - return x - - def remove_weight_norm(self): - for l in self.convs: - remove_weight_norm(l) - - -class Generator(torch.nn.Module): - def __init__(self, h): - super(Generator, self).__init__() - self.h = h - self.num_kernels = len(h.resblock_kernel_sizes) - self.num_upsamples = len(h.upsample_rates) - self.conv_pre = weight_norm(Conv1d(80, h.upsample_initial_channel, 7, 1, padding=3)) - resblock = ResBlock1 if h.resblock == '1' else ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(h.upsample_rates, h.upsample_kernel_sizes)): - self.ups.append(weight_norm( - ConvTranspose1d(h.upsample_initial_channel//(2**i), h.upsample_initial_channel//(2**(i+1)), - k, u, padding=(k-u)//2))) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = h.upsample_initial_channel//(2**(i+1)) - for j, (k, d) in enumerate(zip(h.resblock_kernel_sizes, h.resblock_dilation_sizes)): - self.resblocks.append(resblock(h, ch, k, d)) - - self.conv_post = weight_norm(Conv1d(ch, 1, 7, 1, padding=3)) - self.ups.apply(init_weights) - self.conv_post.apply(init_weights) - - def forward(self, x): - x = self.conv_pre(x) - for i in 
range(self.num_upsamples): - x = F.leaky_relu(x, LRELU_SLOPE) - x = self.ups[i](x) - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i*self.num_kernels+j](x) - else: - xs += self.resblocks[i*self.num_kernels+j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - - return x - - def remove_weight_norm(self): - print('Removing weight norm...') - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - remove_weight_norm(self.conv_pre) - remove_weight_norm(self.conv_post) - - -class DiscriminatorP(torch.nn.Module): - def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False): - super(DiscriminatorP, self).__init__() - self.period = period - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv2d(1, 32, (kernel_size, 1), (stride, 1), padding=(get_padding(5, 1), 0))), - norm_f(Conv2d(32, 128, (kernel_size, 1), (stride, 1), padding=(get_padding(5, 1), 0))), - norm_f(Conv2d(128, 512, (kernel_size, 1), (stride, 1), padding=(get_padding(5, 1), 0))), - norm_f(Conv2d(512, 1024, (kernel_size, 1), (stride, 1), padding=(get_padding(5, 1), 0))), - norm_f(Conv2d(1024, 1024, (kernel_size, 1), 1, padding=(2, 0))), - ]) - self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0))) - - def forward(self, x): - fmap = [] - - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), "reflect") - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class MultiPeriodDiscriminator(torch.nn.Module): - def __init__(self): - super(MultiPeriodDiscriminator, self).__init__() - self.discriminators = nn.ModuleList([ - DiscriminatorP(2), - DiscriminatorP(3), - DiscriminatorP(5), - DiscriminatorP(7), - DiscriminatorP(11), - ]) - - def forward(self, y, y_hat): - y_d_rs = [] - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - y_d_rs.append(y_d_r) - fmap_rs.append(fmap_r) - y_d_gs.append(y_d_g) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -class DiscriminatorS(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(DiscriminatorS, self).__init__() - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv1d(1, 128, 15, 1, padding=7)), - norm_f(Conv1d(128, 128, 41, 2, groups=4, padding=20)), - norm_f(Conv1d(128, 256, 41, 2, groups=16, padding=20)), - norm_f(Conv1d(256, 512, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(512, 1024, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(1024, 1024, 41, 1, groups=16, padding=20)), - norm_f(Conv1d(1024, 1024, 5, 1, padding=2)), - ]) - self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1)) - - def forward(self, x): - fmap = [] - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class MultiScaleDiscriminator(torch.nn.Module): - def __init__(self): - super(MultiScaleDiscriminator, self).__init__() - self.discriminators = nn.ModuleList([ - DiscriminatorS(use_spectral_norm=True), - 
DiscriminatorS(), - DiscriminatorS(), - ]) - self.meanpools = nn.ModuleList([ - AvgPool1d(4, 2, padding=2), - AvgPool1d(4, 2, padding=2) - ]) - - def forward(self, y, y_hat): - y_d_rs = [] - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - if i != 0: - y = self.meanpools[i-1](y) - y_hat = self.meanpools[i-1](y_hat) - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - y_d_rs.append(y_d_r) - fmap_rs.append(fmap_r) - y_d_gs.append(y_d_g) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -def feature_loss(fmap_r, fmap_g): - loss = 0 - for dr, dg in zip(fmap_r, fmap_g): - for rl, gl in zip(dr, dg): - loss += torch.mean(torch.abs(rl - gl)) - - return loss*2 - - -def discriminator_loss(disc_real_outputs, disc_generated_outputs): - loss = 0 - r_losses = [] - g_losses = [] - for dr, dg in zip(disc_real_outputs, disc_generated_outputs): - r_loss = torch.mean((1-dr)**2) - g_loss = torch.mean(dg**2) - loss += (r_loss + g_loss) - r_losses.append(r_loss.item()) - g_losses.append(g_loss.item()) - - return loss, r_losses, g_losses - - -def generator_loss(disc_outputs): - loss = 0 - gen_losses = [] - for dg in disc_outputs: - l = torch.mean((1-dg)**2) - gen_losses.append(l) - loss += l - - return loss, gen_losses - diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/exp/upernet_global_small/run.sh b/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/exp/upernet_global_small/run.sh deleted file mode 100644 index 9fb22edfa7a32624ea08a63fe7d720c40db3b696..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/exp/upernet_global_small/run.sh +++ /dev/null @@ -1,10 +0,0 @@ -#!/usr/bin/env bash - -work_path=$(dirname $0) -PYTHONPATH="$(dirname $0)/../../":$PYTHONPATH \ -python -m torch.distributed.launch --nproc_per_node=8 \ - tools/train.py ${work_path}/config.py \ - --launcher pytorch \ - --options model.backbone.pretrained_path='your_model_path/uniformer_small_in1k.pth' \ - --work-dir ${work_path}/ckpt \ - 2>&1 | tee -a ${work_path}/log.txt diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/runner/hooks/optimizer.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/runner/hooks/optimizer.py deleted file mode 100644 index 4ef3e9ff8f9c6926e32bdf027612267b64ed80df..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/runner/hooks/optimizer.py +++ /dev/null @@ -1,508 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import copy -from collections import defaultdict -from itertools import chain - -from torch.nn.utils import clip_grad - -from annotator.uniformer.mmcv.utils import TORCH_VERSION, _BatchNorm, digit_version -from ..dist_utils import allreduce_grads -from ..fp16_utils import LossScaler, wrap_fp16_model -from .hook import HOOKS, Hook - -try: - # If PyTorch version >= 1.6.0, torch.cuda.amp.GradScaler would be imported - # and used; otherwise, auto fp16 will adopt mmcv's implementation. 
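# (Descriptive note on the surrounding code, not part of the original file: the
# `except ImportError: pass` below intentionally swallows the failure on
# PyTorch < 1.6 so this module stays importable there; on such versions the
# LossScaler-based Fp16OptimizerHook defined in the `else` branch further down
# is registered instead of the GradScaler-based one.)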
- from torch.cuda.amp import GradScaler -except ImportError: - pass - - -@HOOKS.register_module() -class OptimizerHook(Hook): - - def __init__(self, grad_clip=None): - self.grad_clip = grad_clip - - def clip_grads(self, params): - params = list( - filter(lambda p: p.requires_grad and p.grad is not None, params)) - if len(params) > 0: - return clip_grad.clip_grad_norm_(params, **self.grad_clip) - - def after_train_iter(self, runner): - runner.optimizer.zero_grad() - runner.outputs['loss'].backward() - if self.grad_clip is not None: - grad_norm = self.clip_grads(runner.model.parameters()) - if grad_norm is not None: - # Add grad norm to the logger - runner.log_buffer.update({'grad_norm': float(grad_norm)}, - runner.outputs['num_samples']) - runner.optimizer.step() - - -@HOOKS.register_module() -class GradientCumulativeOptimizerHook(OptimizerHook): - """Optimizer Hook implements multi-iters gradient cumulating. - - Args: - cumulative_iters (int, optional): Num of gradient cumulative iters. - The optimizer will step every `cumulative_iters` iters. - Defaults to 1. - - Examples: - >>> # Use cumulative_iters to simulate a large batch size - >>> # It is helpful when the hardware cannot handle a large batch size. - >>> loader = DataLoader(data, batch_size=64) - >>> optim_hook = GradientCumulativeOptimizerHook(cumulative_iters=4) - >>> # almost equals to - >>> loader = DataLoader(data, batch_size=256) - >>> optim_hook = OptimizerHook() - """ - - def __init__(self, cumulative_iters=1, **kwargs): - super(GradientCumulativeOptimizerHook, self).__init__(**kwargs) - - assert isinstance(cumulative_iters, int) and cumulative_iters > 0, \ - f'cumulative_iters only accepts positive int, but got ' \ - f'{type(cumulative_iters)} instead.' - - self.cumulative_iters = cumulative_iters - self.divisible_iters = 0 - self.remainder_iters = 0 - self.initialized = False - - def has_batch_norm(self, module): - if isinstance(module, _BatchNorm): - return True - for m in module.children(): - if self.has_batch_norm(m): - return True - return False - - def _init(self, runner): - if runner.iter % self.cumulative_iters != 0: - runner.logger.warning( - 'Resume iter number is not divisible by cumulative_iters in ' - 'GradientCumulativeOptimizerHook, which means the gradient of ' - 'some iters is lost and the result may be influenced slightly.' 
- ) - - if self.has_batch_norm(runner.model) and self.cumulative_iters > 1: - runner.logger.warning( - 'GradientCumulativeOptimizerHook may slightly decrease ' - 'performance if the model has BatchNorm layers.') - - residual_iters = runner.max_iters - runner.iter - - self.divisible_iters = ( - residual_iters // self.cumulative_iters * self.cumulative_iters) - self.remainder_iters = residual_iters - self.divisible_iters - - self.initialized = True - - def after_train_iter(self, runner): - if not self.initialized: - self._init(runner) - - if runner.iter < self.divisible_iters: - loss_factor = self.cumulative_iters - else: - loss_factor = self.remainder_iters - loss = runner.outputs['loss'] - loss = loss / loss_factor - loss.backward() - - if (self.every_n_iters(runner, self.cumulative_iters) - or self.is_last_iter(runner)): - - if self.grad_clip is not None: - grad_norm = self.clip_grads(runner.model.parameters()) - if grad_norm is not None: - # Add grad norm to the logger - runner.log_buffer.update({'grad_norm': float(grad_norm)}, - runner.outputs['num_samples']) - runner.optimizer.step() - runner.optimizer.zero_grad() - - -if (TORCH_VERSION != 'parrots' - and digit_version(TORCH_VERSION) >= digit_version('1.6.0')): - - @HOOKS.register_module() - class Fp16OptimizerHook(OptimizerHook): - """FP16 optimizer hook (using PyTorch's implementation). - - If you are using PyTorch >= 1.6, torch.cuda.amp is used as the backend, - to take care of the optimization procedure. - - Args: - loss_scale (float | str | dict): Scale factor configuration. - If loss_scale is a float, static loss scaling will be used with - the specified scale. If loss_scale is a string, it must be - 'dynamic', then dynamic loss scaling will be used. - It can also be a dict containing arguments of GradScalar. - Defaults to 512. For Pytorch >= 1.6, mmcv uses official - implementation of GradScaler. If you use a dict version of - loss_scale to create GradScaler, please refer to: - https://pytorch.org/docs/stable/amp.html#torch.cuda.amp.GradScaler - for the parameters. - - Examples: - >>> loss_scale = dict( - ... init_scale=65536.0, - ... growth_factor=2.0, - ... backoff_factor=0.5, - ... growth_interval=2000 - ... 
) - >>> optimizer_hook = Fp16OptimizerHook(loss_scale=loss_scale) - """ - - def __init__(self, - grad_clip=None, - coalesce=True, - bucket_size_mb=-1, - loss_scale=512., - distributed=True): - self.grad_clip = grad_clip - self.coalesce = coalesce - self.bucket_size_mb = bucket_size_mb - self.distributed = distributed - self._scale_update_param = None - if loss_scale == 'dynamic': - self.loss_scaler = GradScaler() - elif isinstance(loss_scale, float): - self._scale_update_param = loss_scale - self.loss_scaler = GradScaler(init_scale=loss_scale) - elif isinstance(loss_scale, dict): - self.loss_scaler = GradScaler(**loss_scale) - else: - raise ValueError('loss_scale must be of type float, dict, or ' - f'"dynamic", got {loss_scale}') - - def before_run(self, runner): - """Preparing steps before Mixed Precision Training.""" - # wrap model mode to fp16 - wrap_fp16_model(runner.model) - # resume from state dict - if 'fp16' in runner.meta and 'loss_scaler' in runner.meta['fp16']: - scaler_state_dict = runner.meta['fp16']['loss_scaler'] - self.loss_scaler.load_state_dict(scaler_state_dict) - - def copy_grads_to_fp32(self, fp16_net, fp32_weights): - """Copy gradients from fp16 model to fp32 weight copy.""" - for fp32_param, fp16_param in zip(fp32_weights, - fp16_net.parameters()): - if fp16_param.grad is not None: - if fp32_param.grad is None: - fp32_param.grad = fp32_param.data.new( - fp32_param.size()) - fp32_param.grad.copy_(fp16_param.grad) - - def copy_params_to_fp16(self, fp16_net, fp32_weights): - """Copy updated params from fp32 weight copy to fp16 model.""" - for fp16_param, fp32_param in zip(fp16_net.parameters(), - fp32_weights): - fp16_param.data.copy_(fp32_param.data) - - def after_train_iter(self, runner): - """Backward optimization steps for Mixed Precision Training. For - dynamic loss scaling, please refer to - https://pytorch.org/docs/stable/amp.html#torch.cuda.amp.GradScaler. - - 1. Scale the loss by a scale factor. - 2. Backward the loss to obtain the gradients. - 3. Unscale the optimizer’s gradient tensors. - 4. Call optimizer.step() and update scale factor. - 5. Save loss_scaler state_dict for resume purpose. - """ - # clear grads of last iteration - runner.model.zero_grad() - runner.optimizer.zero_grad() - - self.loss_scaler.scale(runner.outputs['loss']).backward() - self.loss_scaler.unscale_(runner.optimizer) - # grad clip - if self.grad_clip is not None: - grad_norm = self.clip_grads(runner.model.parameters()) - if grad_norm is not None: - # Add grad norm to the logger - runner.log_buffer.update({'grad_norm': float(grad_norm)}, - runner.outputs['num_samples']) - # backward and update scaler - self.loss_scaler.step(runner.optimizer) - self.loss_scaler.update(self._scale_update_param) - - # save state_dict of loss_scaler - runner.meta.setdefault( - 'fp16', {})['loss_scaler'] = self.loss_scaler.state_dict() - - @HOOKS.register_module() - class GradientCumulativeFp16OptimizerHook(GradientCumulativeOptimizerHook, - Fp16OptimizerHook): - """Fp16 optimizer Hook (using PyTorch's implementation) implements - multi-iters gradient cumulating. - - If you are using PyTorch >= 1.6, torch.cuda.amp is used as the backend, - to take care of the optimization procedure. 
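A sketch of typical usage, assuming the enclosing project builds its optimizer
hook from an ``optimizer_config`` dict in the mmdet/mmseg style (the values
below are illustrative only, not taken from this repository):

>>> optimizer_config = dict(
...     type='GradientCumulativeFp16OptimizerHook',
...     cumulative_iters=4,   # accumulate gradients over 4 iterations
...     loss_scale=512.)      # static scale; 'dynamic' uses GradScaler defaults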
- """ - - def __init__(self, *args, **kwargs): - super(GradientCumulativeFp16OptimizerHook, - self).__init__(*args, **kwargs) - - def after_train_iter(self, runner): - if not self.initialized: - self._init(runner) - - if runner.iter < self.divisible_iters: - loss_factor = self.cumulative_iters - else: - loss_factor = self.remainder_iters - loss = runner.outputs['loss'] - loss = loss / loss_factor - - self.loss_scaler.scale(loss).backward() - - if (self.every_n_iters(runner, self.cumulative_iters) - or self.is_last_iter(runner)): - - # copy fp16 grads in the model to fp32 params in the optimizer - self.loss_scaler.unscale_(runner.optimizer) - - if self.grad_clip is not None: - grad_norm = self.clip_grads(runner.model.parameters()) - if grad_norm is not None: - # Add grad norm to the logger - runner.log_buffer.update( - {'grad_norm': float(grad_norm)}, - runner.outputs['num_samples']) - - # backward and update scaler - self.loss_scaler.step(runner.optimizer) - self.loss_scaler.update(self._scale_update_param) - - # save state_dict of loss_scaler - runner.meta.setdefault( - 'fp16', {})['loss_scaler'] = self.loss_scaler.state_dict() - - # clear grads - runner.model.zero_grad() - runner.optimizer.zero_grad() - -else: - - @HOOKS.register_module() - class Fp16OptimizerHook(OptimizerHook): - """FP16 optimizer hook (mmcv's implementation). - - The steps of fp16 optimizer is as follows. - 1. Scale the loss value. - 2. BP in the fp16 model. - 2. Copy gradients from fp16 model to fp32 weights. - 3. Update fp32 weights. - 4. Copy updated parameters from fp32 weights to fp16 model. - - Refer to https://arxiv.org/abs/1710.03740 for more details. - - Args: - loss_scale (float | str | dict): Scale factor configuration. - If loss_scale is a float, static loss scaling will be used with - the specified scale. If loss_scale is a string, it must be - 'dynamic', then dynamic loss scaling will be used. - It can also be a dict containing arguments of LossScaler. - Defaults to 512. - """ - - def __init__(self, - grad_clip=None, - coalesce=True, - bucket_size_mb=-1, - loss_scale=512., - distributed=True): - self.grad_clip = grad_clip - self.coalesce = coalesce - self.bucket_size_mb = bucket_size_mb - self.distributed = distributed - if loss_scale == 'dynamic': - self.loss_scaler = LossScaler(mode='dynamic') - elif isinstance(loss_scale, float): - self.loss_scaler = LossScaler( - init_scale=loss_scale, mode='static') - elif isinstance(loss_scale, dict): - self.loss_scaler = LossScaler(**loss_scale) - else: - raise ValueError('loss_scale must be of type float, dict, or ' - f'"dynamic", got {loss_scale}') - - def before_run(self, runner): - """Preparing steps before Mixed Precision Training. - - 1. Make a master copy of fp32 weights for optimization. - 2. Convert the main model from fp32 to fp16. 
- """ - # keep a copy of fp32 weights - old_groups = runner.optimizer.param_groups - runner.optimizer.param_groups = copy.deepcopy( - runner.optimizer.param_groups) - state = defaultdict(dict) - p_map = { - old_p: p - for old_p, p in zip( - chain(*(g['params'] for g in old_groups)), - chain(*(g['params'] - for g in runner.optimizer.param_groups))) - } - for k, v in runner.optimizer.state.items(): - state[p_map[k]] = v - runner.optimizer.state = state - # convert model to fp16 - wrap_fp16_model(runner.model) - # resume from state dict - if 'fp16' in runner.meta and 'loss_scaler' in runner.meta['fp16']: - scaler_state_dict = runner.meta['fp16']['loss_scaler'] - self.loss_scaler.load_state_dict(scaler_state_dict) - - def copy_grads_to_fp32(self, fp16_net, fp32_weights): - """Copy gradients from fp16 model to fp32 weight copy.""" - for fp32_param, fp16_param in zip(fp32_weights, - fp16_net.parameters()): - if fp16_param.grad is not None: - if fp32_param.grad is None: - fp32_param.grad = fp32_param.data.new( - fp32_param.size()) - fp32_param.grad.copy_(fp16_param.grad) - - def copy_params_to_fp16(self, fp16_net, fp32_weights): - """Copy updated params from fp32 weight copy to fp16 model.""" - for fp16_param, fp32_param in zip(fp16_net.parameters(), - fp32_weights): - fp16_param.data.copy_(fp32_param.data) - - def after_train_iter(self, runner): - """Backward optimization steps for Mixed Precision Training. For - dynamic loss scaling, please refer `loss_scalar.py` - - 1. Scale the loss by a scale factor. - 2. Backward the loss to obtain the gradients (fp16). - 3. Copy gradients from the model to the fp32 weight copy. - 4. Scale the gradients back and update the fp32 weight copy. - 5. Copy back the params from fp32 weight copy to the fp16 model. - 6. Save loss_scaler state_dict for resume purpose. 
- """ - # clear grads of last iteration - runner.model.zero_grad() - runner.optimizer.zero_grad() - # scale the loss value - scaled_loss = runner.outputs['loss'] * self.loss_scaler.loss_scale - scaled_loss.backward() - # copy fp16 grads in the model to fp32 params in the optimizer - - fp32_weights = [] - for param_group in runner.optimizer.param_groups: - fp32_weights += param_group['params'] - self.copy_grads_to_fp32(runner.model, fp32_weights) - # allreduce grads - if self.distributed: - allreduce_grads(fp32_weights, self.coalesce, - self.bucket_size_mb) - - has_overflow = self.loss_scaler.has_overflow(fp32_weights) - # if has overflow, skip this iteration - if not has_overflow: - # scale the gradients back - for param in fp32_weights: - if param.grad is not None: - param.grad.div_(self.loss_scaler.loss_scale) - if self.grad_clip is not None: - grad_norm = self.clip_grads(fp32_weights) - if grad_norm is not None: - # Add grad norm to the logger - runner.log_buffer.update( - {'grad_norm': float(grad_norm)}, - runner.outputs['num_samples']) - # update fp32 params - runner.optimizer.step() - # copy fp32 params to the fp16 model - self.copy_params_to_fp16(runner.model, fp32_weights) - self.loss_scaler.update_scale(has_overflow) - if has_overflow: - runner.logger.warning('Check overflow, downscale loss scale ' - f'to {self.loss_scaler.cur_scale}') - - # save state_dict of loss_scaler - runner.meta.setdefault( - 'fp16', {})['loss_scaler'] = self.loss_scaler.state_dict() - - @HOOKS.register_module() - class GradientCumulativeFp16OptimizerHook(GradientCumulativeOptimizerHook, - Fp16OptimizerHook): - """Fp16 optimizer Hook (using mmcv implementation) implements multi- - iters gradient cumulating.""" - - def __init__(self, *args, **kwargs): - super(GradientCumulativeFp16OptimizerHook, - self).__init__(*args, **kwargs) - - def after_train_iter(self, runner): - if not self.initialized: - self._init(runner) - - if runner.iter < self.divisible_iters: - loss_factor = self.cumulative_iters - else: - loss_factor = self.remainder_iters - - loss = runner.outputs['loss'] - loss = loss / loss_factor - - # scale the loss value - scaled_loss = loss * self.loss_scaler.loss_scale - scaled_loss.backward() - - if (self.every_n_iters(runner, self.cumulative_iters) - or self.is_last_iter(runner)): - - # copy fp16 grads in the model to fp32 params in the optimizer - fp32_weights = [] - for param_group in runner.optimizer.param_groups: - fp32_weights += param_group['params'] - self.copy_grads_to_fp32(runner.model, fp32_weights) - # allreduce grads - if self.distributed: - allreduce_grads(fp32_weights, self.coalesce, - self.bucket_size_mb) - - has_overflow = self.loss_scaler.has_overflow(fp32_weights) - # if has overflow, skip this iteration - if not has_overflow: - # scale the gradients back - for param in fp32_weights: - if param.grad is not None: - param.grad.div_(self.loss_scaler.loss_scale) - if self.grad_clip is not None: - grad_norm = self.clip_grads(fp32_weights) - if grad_norm is not None: - # Add grad norm to the logger - runner.log_buffer.update( - {'grad_norm': float(grad_norm)}, - runner.outputs['num_samples']) - # update fp32 params - runner.optimizer.step() - # copy fp32 params to the fp16 model - self.copy_params_to_fp16(runner.model, fp32_weights) - else: - runner.logger.warning( - 'Check overflow, downscale loss scale ' - f'to {self.loss_scaler.cur_scale}') - - self.loss_scaler.update_scale(has_overflow) - - # save state_dict of loss_scaler - runner.meta.setdefault( - 'fp16', {})['loss_scaler'] = 
self.loss_scaler.state_dict() - - # clear grads - runner.model.zero_grad() - runner.optimizer.zero_grad() diff --git a/spaces/Salesforce/EDICT/my_half_diffusers/pipelines/__init__.py b/spaces/Salesforce/EDICT/my_half_diffusers/pipelines/__init__.py deleted file mode 100644 index 3e2aeb4fb2b7f1315adb3a2ddea6aec42e806779..0000000000000000000000000000000000000000 --- a/spaces/Salesforce/EDICT/my_half_diffusers/pipelines/__init__.py +++ /dev/null @@ -1,19 +0,0 @@ -from ..utils import is_onnx_available, is_transformers_available -from .ddim import DDIMPipeline -from .ddpm import DDPMPipeline -from .latent_diffusion_uncond import LDMPipeline -from .pndm import PNDMPipeline -from .score_sde_ve import ScoreSdeVePipeline -from .stochastic_karras_ve import KarrasVePipeline - - -if is_transformers_available(): - from .latent_diffusion import LDMTextToImagePipeline - from .stable_diffusion import ( - StableDiffusionImg2ImgPipeline, - StableDiffusionInpaintPipeline, - StableDiffusionPipeline, - ) - -if is_transformers_available() and is_onnx_available(): - from .stable_diffusion import StableDiffusionOnnxPipeline diff --git a/spaces/Sarst/VITS-Umamusume-voice-synthesizer2/ONNXVITS_utils.py b/spaces/Sarst/VITS-Umamusume-voice-synthesizer2/ONNXVITS_utils.py deleted file mode 100644 index b634ce380421571e6e07fb45dd59717b3f63115c..0000000000000000000000000000000000000000 --- a/spaces/Sarst/VITS-Umamusume-voice-synthesizer2/ONNXVITS_utils.py +++ /dev/null @@ -1,19 +0,0 @@ -import torch -import numpy as np -import random -import onnxruntime as ort -def set_random_seed(seed=0): - ort.set_seed(seed) - torch.manual_seed(seed) - torch.cuda.manual_seed(seed) - torch.backends.cudnn.deterministic = True - random.seed(seed) - np.random.seed(seed) - -def runonnx(model_path, **kwargs): - ort_session = ort.InferenceSession(model_path) - outputs = ort_session.run( - None, - kwargs - ) - return outputs \ No newline at end of file diff --git a/spaces/SpacesExamples/fastapi_dummy/README.md b/spaces/SpacesExamples/fastapi_dummy/README.md deleted file mode 100644 index 2ee620359c9ba804d4b5deb03fdbabf67e596c1f..0000000000000000000000000000000000000000 --- a/spaces/SpacesExamples/fastapi_dummy/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: Fastapi Dummy -emoji: 🐢 -colorFrom: purple -colorTo: blue -sdk: docker -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/StarbucksCN/starbucks_doc/app.py b/spaces/StarbucksCN/starbucks_doc/app.py deleted file mode 100644 index bdcb84b21291bbf59c4dcb816ea147d2ddf68d7d..0000000000000000000000000000000000000000 --- a/spaces/StarbucksCN/starbucks_doc/app.py +++ /dev/null @@ -1,44 +0,0 @@ -import logging -import sys - -import streamlit as st -from dotenv import load_dotenv - -from faq.manager_factory import FAQRobotManagerFactory, FAQRobotRevision - -logging.basicConfig( - stream=sys.stdout, level=logging.INFO -) # logging.DEBUG for more verbose output -# logging.getLogger().addHandler(logging.StreamHandler(stream=sys.stdout)) - -# # Sidebar contents -with st.sidebar: - st.title("🤗💬 LLM Chat App") - st.markdown( - """ - ## About - This app is an LLM-powered chatbot built using: - - [Streamlit](https://streamlit.io/) - - [LangChain](https://python.langchain.com/) - """ - ) - # add_vertical_space(5) - st.write("Made by Nick") - - -def main() -> None: - st.header("星巴克门店伙伴小蜜 💬") - - robot_manager = FAQRobotManagerFactory.get_or_create( - FAQRobotRevision.SIMPLE_OPENAI_VERSION_0 - ) - robot = 
robot_manager.get_robot() - query = st.text_input("请输入你的问题:") - if query: - response = robot.ask(question=query) - st.write(response) - - -if __name__ == "__main__": - load_dotenv() - main() diff --git a/spaces/SuYuanS/AudioCraft_Plus/audiocraft/models/builders.py b/spaces/SuYuanS/AudioCraft_Plus/audiocraft/models/builders.py deleted file mode 100644 index 038bf99c3d0fbbb86005683d5a2a1b4edcac4298..0000000000000000000000000000000000000000 --- a/spaces/SuYuanS/AudioCraft_Plus/audiocraft/models/builders.py +++ /dev/null @@ -1,252 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -""" -All the functions to build the relevant models and modules -from the Hydra config. -""" - -import typing as tp - -import audiocraft -import omegaconf -import torch - -from .encodec import CompressionModel, EncodecModel -from .lm import LMModel -from ..modules.codebooks_patterns import ( - CodebooksPatternProvider, - DelayedPatternProvider, - MusicLMPattern, - ParallelPatternProvider, - UnrolledPatternProvider, - VALLEPattern, -) -from ..modules.conditioners import ( - BaseConditioner, - ChromaStemConditioner, - CLAPEmbeddingConditioner, - ConditionFuser, - ConditioningProvider, - LUTConditioner, - T5Conditioner, -) -from .unet import DiffusionUnet -from .. import quantization as qt -from ..utils.utils import dict_from_config -from ..modules.diffusion_schedule import MultiBandProcessor, SampleProcessor - - -def get_quantizer(quantizer: str, cfg: omegaconf.DictConfig, dimension: int) -> qt.BaseQuantizer: - klass = { - 'no_quant': qt.DummyQuantizer, - 'rvq': qt.ResidualVectorQuantizer - }[quantizer] - kwargs = dict_from_config(getattr(cfg, quantizer)) - if quantizer != 'no_quant': - kwargs['dimension'] = dimension - return klass(**kwargs) - - -def get_encodec_autoencoder(encoder_name: str, cfg: omegaconf.DictConfig): - if encoder_name == 'seanet': - kwargs = dict_from_config(getattr(cfg, 'seanet')) - encoder_override_kwargs = kwargs.pop('encoder') - decoder_override_kwargs = kwargs.pop('decoder') - encoder_kwargs = {**kwargs, **encoder_override_kwargs} - decoder_kwargs = {**kwargs, **decoder_override_kwargs} - encoder = audiocraft.modules.SEANetEncoder(**encoder_kwargs) - decoder = audiocraft.modules.SEANetDecoder(**decoder_kwargs) - return encoder, decoder - else: - raise KeyError(f"Unexpected compression model {cfg.compression_model}") - - -def get_compression_model(cfg: omegaconf.DictConfig) -> CompressionModel: - """Instantiate a compression model.""" - if cfg.compression_model == 'encodec': - kwargs = dict_from_config(getattr(cfg, 'encodec')) - encoder_name = kwargs.pop('autoencoder') - quantizer_name = kwargs.pop('quantizer') - encoder, decoder = get_encodec_autoencoder(encoder_name, cfg) - quantizer = get_quantizer(quantizer_name, cfg, encoder.dimension) - frame_rate = kwargs['sample_rate'] // encoder.hop_length - renormalize = kwargs.pop('renormalize', False) - # deprecated params - kwargs.pop('renorm', None) - return EncodecModel(encoder, decoder, quantizer, - frame_rate=frame_rate, renormalize=renormalize, **kwargs).to(cfg.device) - else: - raise KeyError(f"Unexpected compression model {cfg.compression_model}") - - -def get_lm_model(cfg: omegaconf.DictConfig) -> LMModel: - """Instantiate a transformer LM.""" - if cfg.lm_model == 'transformer_lm': - kwargs = dict_from_config(getattr(cfg, 'transformer_lm')) - n_q = kwargs['n_q'] - q_modeling = 
kwargs.pop('q_modeling', None) - codebooks_pattern_cfg = getattr(cfg, 'codebooks_pattern') - attribute_dropout = dict_from_config(getattr(cfg, 'attribute_dropout')) - cls_free_guidance = dict_from_config(getattr(cfg, 'classifier_free_guidance')) - cfg_prob, cfg_coef = cls_free_guidance['training_dropout'], cls_free_guidance['inference_coef'] - fuser = get_condition_fuser(cfg) - condition_provider = get_conditioner_provider(kwargs["dim"], cfg).to(cfg.device) - if len(fuser.fuse2cond['cross']) > 0: # enforce cross-att programmatically - kwargs['cross_attention'] = True - if codebooks_pattern_cfg.modeling is None: - assert q_modeling is not None, \ - "LM model should either have a codebook pattern defined or transformer_lm.q_modeling" - codebooks_pattern_cfg = omegaconf.OmegaConf.create( - {'modeling': q_modeling, 'delay': {'delays': list(range(n_q))}} - ) - pattern_provider = get_codebooks_pattern_provider(n_q, codebooks_pattern_cfg) - return LMModel( - pattern_provider=pattern_provider, - condition_provider=condition_provider, - fuser=fuser, - cfg_dropout=cfg_prob, - cfg_coef=cfg_coef, - attribute_dropout=attribute_dropout, - dtype=getattr(torch, cfg.dtype), - device=cfg.device, - **kwargs - ).to(cfg.device) - else: - raise KeyError(f"Unexpected LM model {cfg.lm_model}") - - -def get_conditioner_provider(output_dim: int, cfg: omegaconf.DictConfig) -> ConditioningProvider: - """Instantiate a conditioning model.""" - device = cfg.device - duration = cfg.dataset.segment_duration - cfg = getattr(cfg, 'conditioners') - dict_cfg = {} if cfg is None else dict_from_config(cfg) - conditioners: tp.Dict[str, BaseConditioner] = {} - condition_provider_args = dict_cfg.pop('args', {}) - condition_provider_args.pop('merge_text_conditions_p', None) - condition_provider_args.pop('drop_desc_p', None) - - for cond, cond_cfg in dict_cfg.items(): - model_type = cond_cfg['model'] - model_args = cond_cfg[model_type] - if model_type == 't5': - conditioners[str(cond)] = T5Conditioner(output_dim=output_dim, device=device, **model_args) - elif model_type == 'lut': - conditioners[str(cond)] = LUTConditioner(output_dim=output_dim, **model_args) - elif model_type == 'chroma_stem': - conditioners[str(cond)] = ChromaStemConditioner( - output_dim=output_dim, - duration=duration, - device=device, - **model_args - ) - elif model_type == 'clap': - conditioners[str(cond)] = CLAPEmbeddingConditioner( - output_dim=output_dim, - device=device, - **model_args - ) - else: - raise ValueError(f"Unrecognized conditioning model: {model_type}") - conditioner = ConditioningProvider(conditioners, device=device, **condition_provider_args) - return conditioner - - -def get_condition_fuser(cfg: omegaconf.DictConfig) -> ConditionFuser: - """Instantiate a condition fuser object.""" - fuser_cfg = getattr(cfg, 'fuser') - fuser_methods = ['sum', 'cross', 'prepend', 'input_interpolate'] - fuse2cond = {k: fuser_cfg[k] for k in fuser_methods} - kwargs = {k: v for k, v in fuser_cfg.items() if k not in fuser_methods} - fuser = ConditionFuser(fuse2cond=fuse2cond, **kwargs) - return fuser - - -def get_codebooks_pattern_provider(n_q: int, cfg: omegaconf.DictConfig) -> CodebooksPatternProvider: - """Instantiate a codebooks pattern provider object.""" - pattern_providers = { - 'parallel': ParallelPatternProvider, - 'delay': DelayedPatternProvider, - 'unroll': UnrolledPatternProvider, - 'valle': VALLEPattern, - 'musiclm': MusicLMPattern, - } - name = cfg.modeling - kwargs = dict_from_config(cfg.get(name)) if hasattr(cfg, name) else {} - klass = 
pattern_providers[name] - return klass(n_q, **kwargs) - - -def get_debug_compression_model(device='cpu', sample_rate: int = 32000): - """Instantiate a debug compression model to be used for unit tests.""" - assert sample_rate in [16000, 32000], "unsupported sample rate for debug compression model" - model_ratios = { - 16000: [10, 8, 8], # 25 Hz at 16kHz - 32000: [10, 8, 16] # 25 Hz at 32kHz - } - ratios: tp.List[int] = model_ratios[sample_rate] - frame_rate = 25 - seanet_kwargs: dict = { - 'n_filters': 4, - 'n_residual_layers': 1, - 'dimension': 32, - 'ratios': ratios, - } - print(seanet_kwargs) - encoder = audiocraft.modules.SEANetEncoder(**seanet_kwargs) - decoder = audiocraft.modules.SEANetDecoder(**seanet_kwargs) - quantizer = qt.ResidualVectorQuantizer(dimension=32, bins=400, n_q=4) - init_x = torch.randn(8, 32, 128) - quantizer(init_x, 1) # initialize kmeans etc. - compression_model = EncodecModel( - encoder, decoder, quantizer, - frame_rate=frame_rate, sample_rate=sample_rate, channels=1).to(device) - return compression_model.eval() - - -def get_diffusion_model(cfg: omegaconf.DictConfig): - # TODO Find a way to infer the channels from dset - channels = cfg.channels - num_steps = cfg.schedule.num_steps - return DiffusionUnet( - chin=channels, num_steps=num_steps, **cfg.diffusion_unet) - - -def get_processor(cfg, sample_rate: int = 24000): - sample_processor = SampleProcessor() - if cfg.use: - kw = dict(cfg) - kw.pop('use') - kw.pop('name') - if cfg.name == "multi_band_processor": - sample_processor = MultiBandProcessor(sample_rate=sample_rate, **kw) - return sample_processor - - -def get_debug_lm_model(device='cpu'): - """Instantiate a debug LM to be used for unit tests.""" - pattern = DelayedPatternProvider(n_q=4) - dim = 16 - providers = { - 'description': LUTConditioner(n_bins=128, dim=dim, output_dim=dim, tokenizer="whitespace"), - } - condition_provider = ConditioningProvider(providers) - fuser = ConditionFuser( - {'cross': ['description'], 'prepend': [], - 'sum': [], 'input_interpolate': []}) - lm = LMModel( - pattern, condition_provider, fuser, - n_q=4, card=400, dim=dim, num_heads=4, custom=True, num_layers=2, - cross_attention=True, causal=True) - return lm.to(device).eval() - - -def get_wrapped_compression_model( - compression_model: CompressionModel, - cfg: omegaconf.DictConfig) -> CompressionModel: - # more to come. - return compression_model diff --git a/spaces/Sumit7864/Image-Enhancer/docs/anime_comparisons_CN.md b/spaces/Sumit7864/Image-Enhancer/docs/anime_comparisons_CN.md deleted file mode 100644 index 43ba58344ed9554d5b30e2815d1b7d4ab8bc503f..0000000000000000000000000000000000000000 --- a/spaces/Sumit7864/Image-Enhancer/docs/anime_comparisons_CN.md +++ /dev/null @@ -1,68 +0,0 @@ -# 动漫视频模型比较 - -[English](anime_comparisons.md) **|** [简体中文](anime_comparisons_CN.md) - -## 更新 - -- 2022/04/24: 发布 **AnimeVideo-v3**. 主要做了以下更新: - - **更自然** - - **更少瑕疵** - - **颜色保持得更好** - - **更好的纹理恢复** - - **虚化背景处理** - -## 比较 - -我们将 RealESRGAN-AnimeVideo-v3 与以下方法进行了比较。我们的 RealESRGAN-AnimeVideo-v3 可以以更快的推理速度获得更好的结果。 - -- [waifu2x](https://github.com/nihui/waifu2x-ncnn-vulkan). 超参数: `tile=0`, `noiselevel=2` -- [Real-CUGAN](https://github.com/bilibili/ailab/tree/main/Real-CUGAN): 我们使用了[20220227](https://github.com/bilibili/ailab/releases/tag/Real-CUGAN-add-faster-low-memory-mode)版本, 超参: `cache_mode=0`, `tile=0`, `alpha=1`. 
-- 我们的 RealESRGAN-AnimeVideo-v3 - -## 结果 - -您可能需要**放大**以比较详细信息, 或者**单击图像**以查看完整尺寸。 请注意下面表格的图片是从原图里裁剪patch并且resize后的结果,您可以从 -[Google Drive](https://drive.google.com/drive/folders/1bc_Hje1Nqop9NDkUvci2VACSjL7HZMRp?usp=sharing) 里下载原始的输入和输出。 - -**更自然的结果,更好的虚化背景恢复** - -| 输入 | waifu2x | Real-CUGAN | RealESRGAN
    AnimeVideo-v3 | -| :---: | :---: | :---: | :---: | -|![157083983-bec52c67-9a5e-4eed-afef-01fe6cd2af85_patch](https://user-images.githubusercontent.com/11482921/164452769-5d8cb4f8-1708-42d2-b941-f44a6f136feb.png) | ![](https://user-images.githubusercontent.com/11482921/164452767-c825cdec-f721-4ff1-aef1-fec41f146c4c.png) | ![](https://user-images.githubusercontent.com/11482921/164452755-3be50895-e3d4-432d-a7b9-9085c2a8e771.png) | ![](https://user-images.githubusercontent.com/11482921/164452771-be300656-379a-4323-a755-df8025a8c451.png) | -|![a0010_patch](https://user-images.githubusercontent.com/11482921/164454047-22eeb493-3fa9-4142-9fc2-6f2a1c074cd5.png) | ![](https://user-images.githubusercontent.com/11482921/164454046-d5e79f8f-00a0-4b55-bc39-295d0d69747a.png) | ![](https://user-images.githubusercontent.com/11482921/164454040-87886b11-9d08-48bd-862f-0d4aed72eb19.png) | ![](https://user-images.githubusercontent.com/11482921/164454055-73dc9f02-286e-4d5c-8f70-c13742e08f42.png) | -|![00000044_patch](https://user-images.githubusercontent.com/11482921/164451232-bacf64fc-e55a-44db-afbb-6b31ab0f8973.png) | ![](https://user-images.githubusercontent.com/11482921/164451318-f309b61a-75b8-4b74-b5f3-595725f1cf0b.png) | ![](https://user-images.githubusercontent.com/11482921/164451348-994f8a35-adbe-4a4b-9c61-feaa294af06a.png) | ![](https://user-images.githubusercontent.com/11482921/164451361-9b7d376e-6f75-4648-b752-542b44845d1c.png) | - -**更少瑕疵,更好的细节纹理** - -| 输入 | waifu2x | Real-CUGAN | RealESRGAN
    AnimeVideo-v3 | -| :---: | :---: | :---: | :---: | -|![00000053_patch](https://user-images.githubusercontent.com/11482921/164448411-148a7e5c-cfcd-4504-8bc7-e318eb883bb6.png) | ![](https://user-images.githubusercontent.com/11482921/164448633-dfc15224-b6d2-4403-a3c9-4bb819979364.png) | ![](https://user-images.githubusercontent.com/11482921/164448771-0d359509-5293-4d4c-8e3c-86a2a314ea88.png) | ![](https://user-images.githubusercontent.com/11482921/164448848-1a4ff99e-075b-4458-9db7-2c89e8160aa0.png) | -|![Disney_v4_22_018514_s2_patch](https://user-images.githubusercontent.com/11482921/164451898-83311cdf-bd3e-450f-b9f6-34d7fea3ab79.png) | ![](https://user-images.githubusercontent.com/11482921/164451894-6c56521c-6561-40d6-a3a5-8dde2c167b8a.png) | ![](https://user-images.githubusercontent.com/11482921/164451888-af9b47e3-39dc-4f3e-b0d7-d372d8191e2a.png) | ![](https://user-images.githubusercontent.com/11482921/164451901-31ca4dd4-9847-4baa-8cde-ad50f4053dcf.png) | -|![Japan_v2_0_007261_s2_patch](https://user-images.githubusercontent.com/11482921/164454578-73c77392-77de-49c5-b03c-c36631723192.png) | ![](https://user-images.githubusercontent.com/11482921/164454574-b1ede5f0-4520-4eaa-8f59-086751a34e62.png) | ![](https://user-images.githubusercontent.com/11482921/164454567-4cb3fdd8-6a2d-4016-85b2-a305a8ff80e4.png) | ![](https://user-images.githubusercontent.com/11482921/164454583-7f243f20-eca3-4500-ac43-eb058a4a101a.png) | -|![huluxiongdi_2_patch](https://user-images.githubusercontent.com/11482921/164453482-0726c842-337e-40ec-bf6c-f902ee956a8b.png) | ![](https://user-images.githubusercontent.com/11482921/164453480-71d5e091-5bfa-4c77-9c57-4e37f66ca0a3.png) | ![](https://user-images.githubusercontent.com/11482921/164453468-c295d3c9-3661-45f0-9ecd-406a1877f76e.png) | ![](https://user-images.githubusercontent.com/11482921/164453486-3091887c-587c-450e-b6fe-905cb518d57e.png) | - -**其他更好的结果** - -| 输入 | waifu2x | Real-CUGAN | RealESRGAN
    AnimeVideo-v3 | -| :---: | :---: | :---: | :---: | -|![Japan_v2_1_128525_s1_patch](https://user-images.githubusercontent.com/11482921/164454933-67697f7c-b6ef-47dc-bfca-822a78af8acf.png) | ![](https://user-images.githubusercontent.com/11482921/164454931-9450de7c-f0b3-4638-9c1e-0668e0c41ef0.png) | ![](https://user-images.githubusercontent.com/11482921/164454926-ed746976-786d-41c5-8a83-7693cd774c3a.png) | ![](https://user-images.githubusercontent.com/11482921/164454936-8abdf0f0-fb30-40eb-8281-3b46c0bcb9ae.png) | -|![tianshuqitan_2_patch](https://user-images.githubusercontent.com/11482921/164456948-807c1476-90b6-4507-81da-cb986d01600c.png) | ![](https://user-images.githubusercontent.com/11482921/164456943-25e89de9-d7e5-4f61-a2e1-96786af6ae9e.png) | ![](https://user-images.githubusercontent.com/11482921/164456954-b468c447-59f5-4594-9693-3683e44ba3e6.png) | ![](https://user-images.githubusercontent.com/11482921/164456957-640f910c-3b04-407c-ac20-044d72e19735.png) | -|![00000051_patch](https://user-images.githubusercontent.com/11482921/164456044-e9a6b3fa-b24e-4eb7-acf9-1f7746551b1e.png) ![00000051_patch](https://user-images.githubusercontent.com/11482921/164456421-b67245b0-767d-4250-9105-80bbe507ecfc.png) | ![](https://user-images.githubusercontent.com/11482921/164456040-85763cf2-cb28-4ba3-abb6-1dbb48c55713.png) ![](https://user-images.githubusercontent.com/11482921/164456419-59cf342e-bc1e-4044-868c-e1090abad313.png) | ![](https://user-images.githubusercontent.com/11482921/164456031-4244bb7b-8649-4e01-86f4-40c2099c5afd.png) ![](https://user-images.githubusercontent.com/11482921/164456411-b6afcbe9-c054-448d-a6df-96d3ba3047f8.png) | ![](https://user-images.githubusercontent.com/11482921/164456035-12e270be-fd52-46d4-b18a-3d3b680731fe.png) ![](https://user-images.githubusercontent.com/11482921/164456417-dcaa8b62-f497-427d-b2d2-f390f1200fb9.png) | -|![00000099_patch](https://user-images.githubusercontent.com/11482921/164455312-6411b6e1-5823-4131-a4b0-a6be8a9ae89f.png) | ![](https://user-images.githubusercontent.com/11482921/164455310-f2b99646-3a22-47a4-805b-dc451ac86ddb.png) | ![](https://user-images.githubusercontent.com/11482921/164455294-35471b42-2826-4451-b7ec-6de01344954c.png) | ![](https://user-images.githubusercontent.com/11482921/164455305-fa4c9758-564a-4081-8b4e-f11057a0404d.png) | -|![00000016_patch](https://user-images.githubusercontent.com/11482921/164455672-447353c9-2da2-4fcb-ba4a-7dd6b94c19c1.png) | ![](https://user-images.githubusercontent.com/11482921/164455669-df384631-baaa-42f8-9150-40f658471558.png) | ![](https://user-images.githubusercontent.com/11482921/164455657-68006bf0-138d-4981-aaca-8aa927d2f78a.png) | ![](https://user-images.githubusercontent.com/11482921/164455664-0342b93e-a62a-4b36-a90e-7118f3f1e45d.png) | - -## 推理速度比较 - -### PyTorch - -请注意,我们只报告了**模型推理**的时间, 而忽略了读写硬盘的时间. 
- -| GPU | 输入尺寸 | waifu2x | Real-CUGAN | RealESRGAN-AnimeVideo-v3 -| :---: | :---: | :---: | :---: | :---: | -| V100 | 1921 x 1080 | - | 3.4 fps | **10.0** fps | -| V100 | 1280 x 720 | - | 7.2 fps | **22.6** fps | -| V100 | 640 x 480 | - | 24.4 fps | **65.9** fps | - -### ncnn - -- [ ] TODO diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/attr/_compat.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/attr/_compat.py deleted file mode 100644 index c3bf5e33ba4f9eeff3e41d9516fd847ecea4deb8..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/attr/_compat.py +++ /dev/null @@ -1,185 +0,0 @@ -# SPDX-License-Identifier: MIT - - -import inspect -import platform -import sys -import threading -import types -import warnings - -from collections.abc import Mapping, Sequence # noqa -from typing import _GenericAlias - - -PYPY = platform.python_implementation() == "PyPy" -PY_3_9_PLUS = sys.version_info[:2] >= (3, 9) -PY310 = sys.version_info[:2] >= (3, 10) -PY_3_12_PLUS = sys.version_info[:2] >= (3, 12) - - -def just_warn(*args, **kw): - warnings.warn( - "Running interpreter doesn't sufficiently support code object " - "introspection. Some features like bare super() or accessing " - "__class__ will not work with slotted classes.", - RuntimeWarning, - stacklevel=2, - ) - - -class _AnnotationExtractor: - """ - Extract type annotations from a callable, returning None whenever there - is none. - """ - - __slots__ = ["sig"] - - def __init__(self, callable): - try: - self.sig = inspect.signature(callable) - except (ValueError, TypeError): # inspect failed - self.sig = None - - def get_first_param_type(self): - """ - Return the type annotation of the first argument if it's not empty. - """ - if not self.sig: - return None - - params = list(self.sig.parameters.values()) - if params and params[0].annotation is not inspect.Parameter.empty: - return params[0].annotation - - return None - - def get_return_type(self): - """ - Return the return type if it's not empty. - """ - if ( - self.sig - and self.sig.return_annotation is not inspect.Signature.empty - ): - return self.sig.return_annotation - - return None - - -def make_set_closure_cell(): - """Return a function of two arguments (cell, value) which sets - the value stored in the closure cell `cell` to `value`. - """ - # pypy makes this easy. (It also supports the logic below, but - # why not do the easy/fast thing?) - if PYPY: - - def set_closure_cell(cell, value): - cell.__setstate__((value,)) - - return set_closure_cell - - # Otherwise gotta do it the hard way. - - try: - if sys.version_info >= (3, 8): - - def set_closure_cell(cell, value): - cell.cell_contents = value - - else: - # Create a function that will set its first cellvar to `value`. - def set_first_cellvar_to(value): - x = value - return - - # This function will be eliminated as dead code, but - # not before its reference to `x` forces `x` to be - # represented as a closure cell rather than a local. - def force_x_to_be_a_cell(): # pragma: no cover - return x - - # Extract the code object and make sure our assumptions about - # the closure behavior are correct. - co = set_first_cellvar_to.__code__ - if co.co_cellvars != ("x",) or co.co_freevars != (): - raise AssertionError # pragma: no cover - - # Convert this code object to a code object that sets the - # function's first _freevar_ (not cellvar) to the argument. 
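# (types.CodeType takes all of these fields positionally, which is why the list
# below is assembled in exactly this order, with co_cellvars and co_freevars
# deliberately swapped; on Python 3.8+ the simpler `cell.cell_contents = value`
# branch above is taken instead and none of this machinery is needed.)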
- args = [co.co_argcount] - args.append(co.co_kwonlyargcount) - args.extend( - [ - co.co_nlocals, - co.co_stacksize, - co.co_flags, - co.co_code, - co.co_consts, - co.co_names, - co.co_varnames, - co.co_filename, - co.co_name, - co.co_firstlineno, - co.co_lnotab, - # These two arguments are reversed: - co.co_cellvars, - co.co_freevars, - ] - ) - set_first_freevar_code = types.CodeType(*args) - - def set_closure_cell(cell, value): - # Create a function using the set_first_freevar_code, - # whose first closure cell is `cell`. Calling it will - # change the value of that cell. - setter = types.FunctionType( - set_first_freevar_code, {}, "setter", (), (cell,) - ) - # And call it to set the cell. - setter(value) - - # Make sure it works on this interpreter: - def make_func_with_cell(): - x = None - - def func(): - return x # pragma: no cover - - return func - - cell = make_func_with_cell().__closure__[0] - set_closure_cell(cell, 100) - if cell.cell_contents != 100: - raise AssertionError # pragma: no cover - - except Exception: - return just_warn - else: - return set_closure_cell - - -set_closure_cell = make_set_closure_cell() - -# Thread-local global to track attrs instances which are already being repr'd. -# This is needed because there is no other (thread-safe) way to pass info -# about the instances that are already being repr'd through the call stack -# in order to ensure we don't perform infinite recursion. -# -# For instance, if an instance contains a dict which contains that instance, -# we need to know that we're already repr'ing the outside instance from within -# the dict's repr() call. -# -# This lives here rather than in _make.py so that the functions in _make.py -# don't have a direct reference to the thread-local in their globals dict. -# If they have such a reference, it breaks cloudpickle. 
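# A rough sketch of how a __repr__ can lean on this thread-local to cut off
# recursive cycles (the attribute name "working_set" is illustrative, not
# necessarily the one the generated code uses):
#
#     working = getattr(repr_context, "working_set", None)
#     if working is None:
#         working = repr_context.working_set = set()
#     if id(self) in working:
#         return "..."              # already being repr'd further up the stack
#     working.add(id(self))
#     try:
#         pass                      # build the actual repr string here
#     finally:
#         working.discard(id(self))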
-repr_context = threading.local() - - -def get_generic_base(cl): - """If this is a generic class (A[str]), return the generic base for it.""" - if cl.__class__ is _GenericAlias: - return cl.__origin__ - return None diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/chromadb/segment/impl/vector/local_hnsw.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/chromadb/segment/impl/vector/local_hnsw.py deleted file mode 100644 index d8bf8823cdbca4c9a0aa42f98842243ab29f39d7..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/chromadb/segment/impl/vector/local_hnsw.py +++ /dev/null @@ -1,363 +0,0 @@ -from overrides import override -from typing import Optional, Sequence, Dict, Set, List, Callable, Union, cast -from uuid import UUID -from chromadb.segment import VectorReader -from chromadb.ingest import Consumer -from chromadb.config import Component, System, Settings -from chromadb.types import ( - EmbeddingRecord, - VectorEmbeddingRecord, - VectorQuery, - VectorQueryResult, - SeqId, - Segment, - Metadata, - Operation, - Vector, -) -from chromadb.errors import InvalidDimensionException -import re -import multiprocessing -import hnswlib -from threading import Lock -import logging - -logger = logging.getLogger(__name__) - -DEFAULT_CAPACITY = 1000 - -Validator = Callable[[Union[str, int, float]], bool] - -param_validators: Dict[str, Validator] = { - "hnsw:space": lambda p: bool(re.match(r"^(l2|cosine|ip)$", str(p))), - "hnsw:construction_ef": lambda p: isinstance(p, int), - "hnsw:search_ef": lambda p: isinstance(p, int), - "hnsw:M": lambda p: isinstance(p, int), - "hnsw:num_threads": lambda p: isinstance(p, int), - "hnsw:resize_factor": lambda p: isinstance(p, (int, float)), -} - - -class HnswParams: - space: str - construction_ef: int - search_ef: int - M: int - num_threads: int - resize_factor: float - - def __init__(self, metadata: Metadata): - metadata = metadata or {} - - for param, value in metadata.items(): - if param.startswith("hnsw:"): - if param not in param_validators: - raise ValueError(f"Unknown HNSW parameter: {param}") - if not param_validators[param](value): - raise ValueError( - f"Invalid value for HNSW parameter: {param} = {value}" - ) - - self.space = str(metadata.get("hnsw:space", "l2")) - self.construction_ef = int(metadata.get("hnsw:construction_ef", 100)) - self.search_ef = int(metadata.get("hnsw:search_ef", 10)) - self.M = int(metadata.get("hnsw:M", 16)) - self.num_threads = int( - metadata.get("hnsw:num_threads", multiprocessing.cpu_count()) - ) - self.resize_factor = float(metadata.get("hnsw:resize_factor", 1.2)) - - -class Batch: - """Used to model the set of changes as an atomic operation""" - - labels: List[Optional[int]] - vectors: List[Vector] - seq_ids: List[SeqId] - ids: List[str] - delete_labels: List[int] - delete_ids: List[str] - add_count: int - delete_count: int - - def __init__(self) -> None: - self.labels = [] - self.vectors = [] - self.seq_ids = [] - self.ids = [] - self.delete_labels = [] - self.delete_ids = [] - self.add_count = 0 - self.delete_count = 0 - - def add(self, label: Optional[int], record: EmbeddingRecord) -> None: - self.labels.append(label) - self.vectors.append(cast(Vector, record["embedding"])) - self.seq_ids.append(record["seq_id"]) - self.ids.append(record["id"]) - if not label: - self.add_count += 1 - - def delete(self, label: int, id: str) -> None: - self.delete_labels.append(label) - self.delete_ids.append(id) - self.delete_count += 1 - - -class 
LocalHnswSegment(Component, VectorReader): - _id: UUID - _consumer: Consumer - _topic: Optional[str] - _subscription: UUID - _settings: Settings - _params: HnswParams - - _index: Optional[hnswlib.Index] - _dimensionality: Optional[int] - _elements: int - _max_seq_id: SeqId - - _lock: Lock - - _id_to_label: Dict[str, int] - _label_to_id: Dict[int, str] - _id_to_seq_id: Dict[str, SeqId] - - def __init__(self, system: System, segment: Segment): - self._consumer = system.instance(Consumer) - self._id = segment["id"] - self._topic = segment["topic"] - self._settings = system.settings - self._params = HnswParams(segment["metadata"] or {}) - - self._index = None - self._dimensionality = None - self._total_elements_added = 0 - self._max_seq_id = self._consumer.min_seqid() - - self._id_to_seq_id = {} - self._id_to_label = {} - self._label_to_id = {} - - self._lock = Lock() - super().__init__(system) - - @override - def start(self) -> None: - super().start() - if self._topic: - seq_id = self.max_seqid() - self._subscription = self._consumer.subscribe( - self._topic, self._write_records, start=seq_id - ) - - @override - def stop(self) -> None: - super().stop() - if self._subscription: - self._consumer.unsubscribe(self._subscription) - - @override - def get_vectors( - self, ids: Optional[Sequence[str]] = None - ) -> Sequence[VectorEmbeddingRecord]: - if ids is None: - labels = list(self._label_to_id.keys()) - else: - labels = [] - for id in ids: - if id in self._id_to_label: - labels.append(self._id_to_label[id]) - - results = [] - if self._index is not None: - vectors = cast(Sequence[Vector], self._index.get_items(labels)) - - for label, vector in zip(labels, vectors): - id = self._label_to_id[label] - seq_id = self._id_to_seq_id[id] - results.append( - VectorEmbeddingRecord(id=id, seq_id=seq_id, embedding=vector) - ) - - return results - - @override - def query_vectors( - self, query: VectorQuery - ) -> Sequence[Sequence[VectorQueryResult]]: - if self._index is None: - return [[] for _ in range(len(query["vectors"]))] - - k = query["k"] - size = len(self._id_to_label) - - if k > size: - logger.warning( - f"Number of requested results {k} is greater than number of elements in index {size}, updating n_results = {size}" - ) - k = size - - labels: Set[int] = set() - ids = query["allowed_ids"] - if ids is not None: - labels = {self._id_to_label[id] for id in ids} - if len(labels) < k: - k = len(labels) - - def filter_function(label: int) -> bool: - return label in labels - - query_vectors = query["vectors"] - - result_labels, distances = self._index.knn_query( - query_vectors, k=k, filter=filter_function if ids else None - ) - - distances = cast(List[List[float]], distances) - result_labels = cast(List[List[int]], result_labels) - - all_results: List[List[VectorQueryResult]] = [] - for result_i in range(len(result_labels)): - results: List[VectorQueryResult] = [] - for label, distance in zip(result_labels[result_i], distances[result_i]): - id = self._label_to_id[label] - seq_id = self._id_to_seq_id[id] - results.append( - VectorQueryResult(id=id, seq_id=seq_id, distance=distance) - ) - all_results.append(results) - - return all_results - - @override - def max_seqid(self) -> SeqId: - return self._max_seq_id - - @override - def count(self) -> int: - return len(self._id_to_label) - - def _init_index(self, dimensionality: int) -> None: - # more comments available at the source: https://github.com/nmslib/hnswlib - - index = hnswlib.Index( - space=self._params.space, dim=dimensionality - ) # possible options 
are l2, cosine or ip - index.init_index( - max_elements=DEFAULT_CAPACITY, - ef_construction=self._params.construction_ef, - M=self._params.M, - ) - index.set_ef(self._params.search_ef) - index.set_num_threads(self._params.num_threads) - - self._index = index - self._dimensionality = dimensionality - - def _ensure_index(self, n: int, dim: int) -> None: - """Create or resize the index as necessary to accomodate N new records""" - if not self._index: - self._dimensionality = dim - self._init_index(dim) - else: - if dim != self._dimensionality: - raise InvalidDimensionException( - f"Dimensionality of ({dim}) does not match index" - + f"dimensionality ({self._dimensionality})" - ) - - index = cast(hnswlib.Index, self._index) - - if (self._total_elements_added + n) > index.get_max_elements(): - new_size = int( - (self._total_elements_added + n) * self._params.resize_factor - ) - index.resize_index(max(new_size, DEFAULT_CAPACITY)) - - def _apply_batch(self, batch: Batch) -> None: - """Apply a batch of changes, as atomically as possible.""" - - if batch.delete_ids: - index = cast(hnswlib.Index, self._index) - for i in range(len(batch.delete_ids)): - label = batch.delete_labels[i] - id = batch.delete_ids[i] - - index.mark_deleted(label) - del self._id_to_label[id] - del self._label_to_id[label] - del self._id_to_seq_id[id] - - if batch.ids: - self._ensure_index(batch.add_count, len(batch.vectors[0])) - - next_label = self._total_elements_added + 1 - for i in range(len(batch.labels)): - if batch.labels[i] is None: - batch.labels[i] = next_label - next_label += 1 - - labels = cast(List[int], batch.labels) - - index = cast(hnswlib.Index, self._index) - - # First, update the index - index.add_items(batch.vectors, labels) - - # If that succeeds, update the mappings - for id, label, seq_id in zip(batch.ids, labels, batch.seq_ids): - self._id_to_seq_id[id] = seq_id - self._id_to_label[id] = label - self._label_to_id[label] = id - - # If that succeeds, update the total count - self._total_elements_added += batch.add_count - - # If that succeeds, finally the seq ID - self._max_seq_id = max(self._max_seq_id, max(batch.seq_ids)) - - def _write_records(self, records: Sequence[EmbeddingRecord]) -> None: - """Add a batch of embeddings to the index""" - if not self._running: - raise RuntimeError("Cannot add embeddings to stopped component") - - # Avoid all sorts of potential problems by ensuring single-threaded access - with self._lock: - batch = Batch() - - for record in records: - self._max_seq_id = max(self._max_seq_id, record["seq_id"]) - id = record["id"] - op = record["operation"] - label = self._id_to_label.get(id, None) - - if op == Operation.DELETE: - if label: - batch.delete(label, id) - else: - logger.warning(f"Delete of nonexisting embedding ID: {id}") - - elif op == Operation.UPDATE: - if record["embedding"] is not None: - if label is not None: - batch.add(label, record) - else: - logger.warning( - f"Update of nonexisting embedding ID: {record['id']}" - ) - elif op == Operation.ADD: - if not label: - batch.add(label, record) - else: - logger.warning(f"Add of existing embedding ID: {id}") - elif op == Operation.UPSERT: - batch.add(label, record) - - self._apply_batch(batch) - - -# TODO: Implement this as a performance improvement, if rebuilding the -# index on startup is too slow. But test this first. 
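# (A guess at what that could look like, not a statement of the actual design:
# hnswlib's Index.save_index()/load_index() could persist the ANN structure,
# together with the _id_to_label/_label_to_id/_id_to_seq_id maps and
# _max_seq_id, so that on startup only records newer than the persisted seq id
# would need to be replayed from the consumer.)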
-class PersistentLocalHnswSegment(LocalHnswSegment): - pass diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/clickhouse_connect/datatypes/string.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/clickhouse_connect/datatypes/string.py deleted file mode 100644 index 36ce575e5aa35502203a390450dfc00b4511ed98..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/clickhouse_connect/datatypes/string.py +++ /dev/null @@ -1,120 +0,0 @@ -from typing import Sequence, MutableSequence, Union, Collection - -from clickhouse_connect.driver.ctypes import data_conv - -from clickhouse_connect.datatypes.base import ClickHouseType, TypeDef -from clickhouse_connect.driver.insert import InsertContext -from clickhouse_connect.driver.query import QueryContext -from clickhouse_connect.driver.types import ByteSource -from clickhouse_connect.driver.options import np, pd - - -class String(ClickHouseType): - valid_formats = 'bytes', 'native' - - def _active_encoding(self, ctx): - if self.read_format(ctx) == 'bytes': - return None - if ctx.encoding: - return ctx.encoding - return self.encoding - - def _data_size(self, sample: Collection) -> int: - if len(sample) == 0: - return 0 - total = 0 - for x in sample: - if x: - total += len(x) - return total // len(sample) + 1 - - def _read_column_binary(self, source: ByteSource, num_rows: int, ctx: QueryContext): - return source.read_str_col(num_rows, self._active_encoding(ctx)) - - def _read_nullable_column(self, source: ByteSource, num_rows: int, ctx: QueryContext) -> Sequence: - return source.read_str_col(num_rows, self._active_encoding(ctx), True, self._active_null(ctx)) - - def _finalize_column(self, column: Sequence, ctx: QueryContext) -> Sequence: - if ctx.use_extended_dtypes and self.read_format(ctx) == 'native': - return pd.array(column, dtype=pd.StringDtype()) - if ctx.use_numpy and ctx.max_str_len: - return np.array(column, dtype=f' 0 or pad_w > 0: - x = F.pad(x, [ - pad_w // 2, pad_w - pad_w // 2, pad_h // 2, pad_h - pad_h // 2 - ]) - return F.conv2d(x, self.weight, self.bias, self.stride, self.padding, - self.dilation, self.groups) diff --git a/spaces/TNR-5/semantic-image-search.img/postcss.config.js b/spaces/TNR-5/semantic-image-search.img/postcss.config.js deleted file mode 100644 index 33ad091d26d8a9dc95ebdf616e217d985ec215b8..0000000000000000000000000000000000000000 --- a/spaces/TNR-5/semantic-image-search.img/postcss.config.js +++ /dev/null @@ -1,6 +0,0 @@ -module.exports = { - plugins: { - tailwindcss: {}, - autoprefixer: {}, - }, -} diff --git a/spaces/TRI-ML/risk_biased_prediction/tests/risk_biased/models/test_interaction_encoder.py b/spaces/TRI-ML/risk_biased_prediction/tests/risk_biased/models/test_interaction_encoder.py deleted file mode 100644 index 7e8094b21e9c2f9cb02633c9b9afd2db3fa41f58..0000000000000000000000000000000000000000 --- a/spaces/TRI-ML/risk_biased_prediction/tests/risk_biased/models/test_interaction_encoder.py +++ /dev/null @@ -1,250 +0,0 @@ -import os -import pytest - -import torch -import torch.nn as nn -from mmcv import Config - -from risk_biased.models.cvae_encoders import ( - CVAEEncoder, - BiasedEncoderNN, - FutureEncoderNN, - InferenceEncoderNN, -) -from risk_biased.models.latent_distributions import ( - GaussianLatentDistribution, - QuantizedDistributionCreator, -) - -from risk_biased.models.cvae_params import CVAEParams - - -@pytest.fixture(scope="module") -def params(): - torch.manual_seed(0) - working_dir = 
os.path.dirname(os.path.realpath(__file__)) - config_path = os.path.join( - working_dir, "..", "..", "..", "risk_biased", "config", "learning_config.py" - ) - waymo_config_path = os.path.join( - working_dir, "..", "..", "..", "risk_biased", "config", "waymo_config.py" - ) - paths = [config_path, waymo_config_path] - if isinstance(paths, str): - cfg = Config.fromfile(paths) - else: - cfg = Config.fromfile(paths[0]) - for path in paths[1:]: - c = Config.fromfile(path) - cfg.update(c) - cfg.batch_size = 4 - cfg.state_dim = 5 - cfg.dynamic_state_dim = 5 - cfg.map_state_dim = 2 - cfg.num_steps = 3 - cfg.num_steps_future = 4 - cfg.latent_dim = 2 - cfg.hidden_dim = 64 - cfg.device = "cpu" - cfg.sequence_encoder_type = "LSTM" - cfg.sequence_decoder_type = "MLP" - return cfg - - -@pytest.mark.parametrize( - "num_agents, num_map_objects, type, interaction_nn_class", - [ - (4, 5, "MLP", BiasedEncoderNN), - (2, 4, "LSTM", BiasedEncoderNN), - (3, 2, "maskedLSTM", BiasedEncoderNN), - (4, 5, "MLP", FutureEncoderNN), - (2, 4, "LSTM", FutureEncoderNN), - (3, 2, "maskedLSTM", FutureEncoderNN), - (4, 5, "MLP", InferenceEncoderNN), - (2, 4, "LSTM", InferenceEncoderNN), - (3, 2, "maskedLSTM", InferenceEncoderNN), - ], -) -def test_attention_encoder_nn( - params, - num_agents: int, - num_map_objects: int, - type: str, - interaction_nn_class: nn.Module, -): - params.sequence_encoder_type = type - cvae_params = CVAEParams.from_config(params) - if interaction_nn_class == BiasedEncoderNN: - model = interaction_nn_class( - cvae_params, - num_steps=cvae_params.num_steps, - latent_dim=2 * cvae_params.latent_dim, - ) - elif interaction_nn_class == FutureEncoderNN: - model = interaction_nn_class( - cvae_params, - num_steps=cvae_params.num_steps + cvae_params.num_steps_future, - latent_dim=2 * cvae_params.latent_dim, - ) - else: - model = interaction_nn_class( - cvae_params, - num_steps=cvae_params.num_steps, - latent_dim=2 * cvae_params.latent_dim, - ) - assert model.latent_dim == 2 * params.latent_dim - assert model.hidden_dim == params.hidden_dim - - x = torch.rand(params.batch_size, num_agents, params.num_steps, params.state_dim) - offset = x[:, :, -1, :] - x = x - offset.unsqueeze(-2) - mask_x = torch.rand(params.batch_size, num_agents, params.num_steps) > 0.1 - encoded_absolute = torch.rand(params.batch_size, num_agents, params.hidden_dim) - encoded_map = torch.rand(params.batch_size, num_map_objects, params.hidden_dim) - mask_map = torch.rand(params.batch_size, num_map_objects) > 0.1 - if interaction_nn_class == FutureEncoderNN: - y = torch.rand( - params.batch_size, num_agents, params.num_steps_future, params.state_dim - ) - y = y - offset.unsqueeze(-2) - y_ego = y[:, 0:1] - mask_y = torch.rand(params.batch_size, num_agents, params.num_steps_future) - else: - y = None - y_ego = None - mask_y = None - x_ego = x[:, 0:1] - if interaction_nn_class == BiasedEncoderNN: - risk_level = torch.rand(params.batch_size, num_agents) - else: - risk_level = None - - output = model( - x, - mask_x, - encoded_absolute, - encoded_map, - mask_map, - y=y, - mask_y=mask_y, - x_ego=x_ego, - y_ego=y_ego, - offset=offset, - risk_level=risk_level, - ) - # check shape - assert output.shape == (params.batch_size, num_agents, 2 * params.latent_dim) - - -@pytest.mark.parametrize( - "num_agents, num_map_objects, type, interaction_nn_class, latent_distribution_class", - [ - (2, 8, "MLP", BiasedEncoderNN, GaussianLatentDistribution), - (7, 5, "LSTM", BiasedEncoderNN, GaussianLatentDistribution), - (2, 10, "maskedLSTM", BiasedEncoderNN, 
QuantizedDistributionCreator), - (2, 8, "MLP", FutureEncoderNN, GaussianLatentDistribution), - (7, 5, "LSTM", FutureEncoderNN, QuantizedDistributionCreator), - (2, 10, "maskedLSTM", FutureEncoderNN, GaussianLatentDistribution), - (2, 8, "MLP", InferenceEncoderNN, QuantizedDistributionCreator), - (7, 5, "LSTM", InferenceEncoderNN, GaussianLatentDistribution), - (2, 10, "maskedLSTM", InferenceEncoderNN, GaussianLatentDistribution), - ], -) -# TODO: Add test for QuantizedDistributionCreator -def test_attention_cvae_encoder( - params, - num_agents: int, - num_map_objects: int, - type: str, - interaction_nn_class, - latent_distribution_class, -): - params.sequence_encoder_type = type - if interaction_nn_class == FutureEncoderNN: - risk_level = None - y = torch.rand( - params.batch_size, num_agents, params.num_steps_future, params.state_dim - ) - mask_y = torch.rand(params.batch_size, num_agents, params.num_steps_future) - else: - risk_level = torch.rand(params.batch_size, num_agents) - y = None - mask_y = None - - if interaction_nn_class == BiasedEncoderNN: - model = interaction_nn_class( - CVAEParams.from_config(params), - num_steps=params.num_steps, - latent_dim=2 * params.latent_dim, - ) - elif interaction_nn_class == FutureEncoderNN: - model = interaction_nn_class( - CVAEParams.from_config(params), - num_steps=params.num_steps + params.num_steps_future, - latent_dim=2 * params.latent_dim, - ) - else: - model = interaction_nn_class( - CVAEParams.from_config(params), - num_steps=params.num_steps, - latent_dim=2 * params.latent_dim, - ) - - encoder = CVAEEncoder(model, GaussianLatentDistribution) - # check latent_dim - assert encoder.latent_dim == 2 * params.latent_dim - - x = torch.rand(params.batch_size, num_agents, params.num_steps, params.state_dim) - offset = x[:, :, -1, :] - x = x - offset.unsqueeze(-2) - if y is not None: - y = y - offset.unsqueeze(-2) - x_ego = x[:, 0:1] - y_ego = y[:, 0:1] - else: - x_ego = x[:, 0:1] - y_ego = None - mask_x = torch.rand(params.batch_size, num_agents, params.num_steps) > 0.1 - encoded_absolute = torch.rand(params.batch_size, num_agents, params.hidden_dim) - encoded_map = torch.rand(params.batch_size, num_map_objects, params.hidden_dim) - mask_map = torch.rand(params.batch_size, num_map_objects) > 0.1 - - latent_distribution = encoder( - x=x, - mask_x=mask_x, - encoded_absolute=encoded_absolute, - encoded_map=encoded_map, - mask_map=mask_map, - y=y, - mask_y=mask_y, - x_ego=x_ego, - y_ego=y_ego, - offset=offset, - risk_level=risk_level, - ) - latent_mean = latent_distribution.mu - latent_log_std = latent_distribution.logvar - # check shape - assert ( - latent_mean.shape - == latent_log_std.shape - == (params.batch_size, num_agents, params.latent_dim) - ) - - latent_sample_1, weights = latent_distribution.sample() - # check shape when n_samples = 0 - assert latent_sample_1.shape == latent_mean.shape - assert latent_sample_1.shape[:-1] == weights.shape - - latent_sample_2, weights = latent_distribution.sample(n_samples=2) - # check shape when n_samples = 2 - assert latent_sample_2.shape == ( - params.batch_size, - num_agents, - 2, - params.latent_dim, - ) - - latent_sample_3, weights = latent_distribution.sample() - # make sure sampling is non-deterministic - assert not torch.allclose(latent_sample_1, latent_sample_3) diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/_distutils/command/build_py.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/_distutils/command/build_py.py 
deleted file mode 100644 index d9df95922f3e388ef62da386334549d7d07310ff..0000000000000000000000000000000000000000 --- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/_distutils/command/build_py.py +++ /dev/null @@ -1,406 +0,0 @@ -"""distutils.command.build_py - -Implements the Distutils 'build_py' command.""" - -import os -import importlib.util -import sys -import glob - -from ..core import Command -from ..errors import DistutilsOptionError, DistutilsFileError -from ..util import convert_path -from distutils._log import log - - -class build_py(Command): - description = "\"build\" pure Python modules (copy to build directory)" - - user_options = [ - ('build-lib=', 'd', "directory to \"build\" (copy) to"), - ('compile', 'c', "compile .py to .pyc"), - ('no-compile', None, "don't compile .py files [default]"), - ( - 'optimize=', - 'O', - "also compile with optimization: -O1 for \"python -O\", " - "-O2 for \"python -OO\", and -O0 to disable [default: -O0]", - ), - ('force', 'f', "forcibly build everything (ignore file timestamps)"), - ] - - boolean_options = ['compile', 'force'] - negative_opt = {'no-compile': 'compile'} - - def initialize_options(self): - self.build_lib = None - self.py_modules = None - self.package = None - self.package_data = None - self.package_dir = None - self.compile = 0 - self.optimize = 0 - self.force = None - - def finalize_options(self): - self.set_undefined_options( - 'build', ('build_lib', 'build_lib'), ('force', 'force') - ) - - # Get the distribution options that are aliases for build_py - # options -- list of packages and list of modules. - self.packages = self.distribution.packages - self.py_modules = self.distribution.py_modules - self.package_data = self.distribution.package_data - self.package_dir = {} - if self.distribution.package_dir: - for name, path in self.distribution.package_dir.items(): - self.package_dir[name] = convert_path(path) - self.data_files = self.get_data_files() - - # Ick, copied straight from install_lib.py (fancy_getopt needs a - # type system! Hell, *everything* needs a type system!!!) - if not isinstance(self.optimize, int): - try: - self.optimize = int(self.optimize) - assert 0 <= self.optimize <= 2 - except (ValueError, AssertionError): - raise DistutilsOptionError("optimize must be 0, 1, or 2") - - def run(self): - # XXX copy_file by default preserves atime and mtime. IMHO this is - # the right thing to do, but perhaps it should be an option -- in - # particular, a site administrator might want installed files to - # reflect the time of installation rather than the last - # modification time before the installed release. - - # XXX copy_file by default preserves mode, which appears to be the - # wrong thing to do: if a file is read-only in the working - # directory, we want it to be installed read/write so that the next - # installation of the same module distribution can overwrite it - # without problems. (This might be a Unix-specific issue.) Thus - # we turn off 'preserve_mode' when copying to the build directory, - # since the build directory is supposed to be exactly what the - # installation will look like (ie. we preserve mode when - # installing). - - # Two options control which modules will be installed: 'packages' - # and 'py_modules'. The former lets us work with whole packages, not - # specifying individual modules at all; the latter is for - # specifying modules one-at-a-time. 
- - if self.py_modules: - self.build_modules() - if self.packages: - self.build_packages() - self.build_package_data() - - self.byte_compile(self.get_outputs(include_bytecode=0)) - - def get_data_files(self): - """Generate list of '(package,src_dir,build_dir,filenames)' tuples""" - data = [] - if not self.packages: - return data - for package in self.packages: - # Locate package source directory - src_dir = self.get_package_dir(package) - - # Compute package build directory - build_dir = os.path.join(*([self.build_lib] + package.split('.'))) - - # Length of path to strip from found files - plen = 0 - if src_dir: - plen = len(src_dir) + 1 - - # Strip directory from globbed filenames - filenames = [file[plen:] for file in self.find_data_files(package, src_dir)] - data.append((package, src_dir, build_dir, filenames)) - return data - - def find_data_files(self, package, src_dir): - """Return filenames for package's data files in 'src_dir'""" - globs = self.package_data.get('', []) + self.package_data.get(package, []) - files = [] - for pattern in globs: - # Each pattern has to be converted to a platform-specific path - filelist = glob.glob( - os.path.join(glob.escape(src_dir), convert_path(pattern)) - ) - # Files that match more than one pattern are only added once - files.extend( - [fn for fn in filelist if fn not in files and os.path.isfile(fn)] - ) - return files - - def build_package_data(self): - """Copy data files into build directory""" - for package, src_dir, build_dir, filenames in self.data_files: - for filename in filenames: - target = os.path.join(build_dir, filename) - self.mkpath(os.path.dirname(target)) - self.copy_file( - os.path.join(src_dir, filename), target, preserve_mode=False - ) - - def get_package_dir(self, package): - """Return the directory, relative to the top of the source - distribution, where package 'package' should be found - (at least according to the 'package_dir' option, if any).""" - path = package.split('.') - - if not self.package_dir: - if path: - return os.path.join(*path) - else: - return '' - else: - tail = [] - while path: - try: - pdir = self.package_dir['.'.join(path)] - except KeyError: - tail.insert(0, path[-1]) - del path[-1] - else: - tail.insert(0, pdir) - return os.path.join(*tail) - else: - # Oops, got all the way through 'path' without finding a - # match in package_dir. If package_dir defines a directory - # for the root (nameless) package, then fallback on it; - # otherwise, we might as well have not consulted - # package_dir at all, as we just use the directory implied - # by 'tail' (which should be the same as the original value - # of 'path' at this point). - pdir = self.package_dir.get('') - if pdir is not None: - tail.insert(0, pdir) - - if tail: - return os.path.join(*tail) - else: - return '' - - def check_package(self, package, package_dir): - # Empty dir name means current directory, which we can probably - # assume exists. Also, os.path.exists and isdir don't know about - # my "empty string means current dir" convention, so we have to - # circumvent them. - if package_dir != "": - if not os.path.exists(package_dir): - raise DistutilsFileError( - "package directory '%s' does not exist" % package_dir - ) - if not os.path.isdir(package_dir): - raise DistutilsFileError( - "supposed package directory '%s' exists, " - "but is not a directory" % package_dir - ) - - # Directories without __init__.py are namespace packages (PEP 420). 
- if package: - init_py = os.path.join(package_dir, "__init__.py") - if os.path.isfile(init_py): - return init_py - - # Either not in a package at all (__init__.py not expected), or - # __init__.py doesn't exist -- so don't return the filename. - return None - - def check_module(self, module, module_file): - if not os.path.isfile(module_file): - log.warning("file %s (for module %s) not found", module_file, module) - return False - else: - return True - - def find_package_modules(self, package, package_dir): - self.check_package(package, package_dir) - module_files = glob.glob(os.path.join(glob.escape(package_dir), "*.py")) - modules = [] - setup_script = os.path.abspath(self.distribution.script_name) - - for f in module_files: - abs_f = os.path.abspath(f) - if abs_f != setup_script: - module = os.path.splitext(os.path.basename(f))[0] - modules.append((package, module, f)) - else: - self.debug_print("excluding %s" % setup_script) - return modules - - def find_modules(self): - """Finds individually-specified Python modules, ie. those listed by - module name in 'self.py_modules'. Returns a list of tuples (package, - module_base, filename): 'package' is a tuple of the path through - package-space to the module; 'module_base' is the bare (no - packages, no dots) module name, and 'filename' is the path to the - ".py" file (relative to the distribution root) that implements the - module. - """ - # Map package names to tuples of useful info about the package: - # (package_dir, checked) - # package_dir - the directory where we'll find source files for - # this package - # checked - true if we have checked that the package directory - # is valid (exists, contains __init__.py, ... ?) - packages = {} - - # List of (package, module, filename) tuples to return - modules = [] - - # We treat modules-in-packages almost the same as toplevel modules, - # just the "package" for a toplevel is empty (either an empty - # string or empty list, depending on context). Differences: - # - don't check for __init__.py in directory for empty package - for module in self.py_modules: - path = module.split('.') - package = '.'.join(path[0:-1]) - module_base = path[-1] - - try: - (package_dir, checked) = packages[package] - except KeyError: - package_dir = self.get_package_dir(package) - checked = 0 - - if not checked: - init_py = self.check_package(package, package_dir) - packages[package] = (package_dir, 1) - if init_py: - modules.append((package, "__init__", init_py)) - - # XXX perhaps we should also check for just .pyc files - # (so greedy closed-source bastards can distribute Python - # modules too) - module_file = os.path.join(package_dir, module_base + ".py") - if not self.check_module(module, module_file): - continue - - modules.append((package, module_base, module_file)) - - return modules - - def find_all_modules(self): - """Compute the list of all modules that will be built, whether - they are specified one-module-at-a-time ('self.py_modules') or - by whole packages ('self.packages'). 
Return a list of tuples - (package, module, module_file), just like 'find_modules()' and - 'find_package_modules()' do.""" - modules = [] - if self.py_modules: - modules.extend(self.find_modules()) - if self.packages: - for package in self.packages: - package_dir = self.get_package_dir(package) - m = self.find_package_modules(package, package_dir) - modules.extend(m) - return modules - - def get_source_files(self): - return [module[-1] for module in self.find_all_modules()] - - def get_module_outfile(self, build_dir, package, module): - outfile_path = [build_dir] + list(package) + [module + ".py"] - return os.path.join(*outfile_path) - - def get_outputs(self, include_bytecode=1): - modules = self.find_all_modules() - outputs = [] - for package, module, module_file in modules: - package = package.split('.') - filename = self.get_module_outfile(self.build_lib, package, module) - outputs.append(filename) - if include_bytecode: - if self.compile: - outputs.append( - importlib.util.cache_from_source(filename, optimization='') - ) - if self.optimize > 0: - outputs.append( - importlib.util.cache_from_source( - filename, optimization=self.optimize - ) - ) - - outputs += [ - os.path.join(build_dir, filename) - for package, src_dir, build_dir, filenames in self.data_files - for filename in filenames - ] - - return outputs - - def build_module(self, module, module_file, package): - if isinstance(package, str): - package = package.split('.') - elif not isinstance(package, (list, tuple)): - raise TypeError( - "'package' must be a string (dot-separated), list, or tuple" - ) - - # Now put the module source file into the "build" area -- this is - # easy, we just copy it somewhere under self.build_lib (the build - # directory for Python source). - outfile = self.get_module_outfile(self.build_lib, package, module) - dir = os.path.dirname(outfile) - self.mkpath(dir) - return self.copy_file(module_file, outfile, preserve_mode=0) - - def build_modules(self): - modules = self.find_modules() - for package, module, module_file in modules: - # Now "build" the module -- ie. copy the source file to - # self.build_lib (the build directory for Python source). - # (Actually, it gets copied to the directory for this package - # under self.build_lib.) - self.build_module(module, module_file, package) - - def build_packages(self): - for package in self.packages: - # Get list of (package, module, module_file) tuples based on - # scanning the package directory. 'package' is only included - # in the tuple so that 'find_modules()' and - # 'find_package_tuples()' have a consistent interface; it's - # ignored here (apart from a sanity check). Also, 'module' is - # the *unqualified* module name (ie. no dots, no package -- we - # already know its package!), and 'module_file' is the path to - # the .py file, relative to the current directory - # (ie. including 'package_dir'). - package_dir = self.get_package_dir(package) - modules = self.find_package_modules(package, package_dir) - - # Now loop over the modules we found, "building" each one (just - # copy it to self.build_lib). 
- for package_, module, module_file in modules: - assert package == package_ - self.build_module(module, module_file, package) - - def byte_compile(self, files): - if sys.dont_write_bytecode: - self.warn('byte-compiling is disabled, skipping.') - return - - from ..util import byte_compile - - prefix = self.build_lib - if prefix[-1] != os.sep: - prefix = prefix + os.sep - - # XXX this code is essentially the same as the 'byte_compile() - # method of the "install_lib" command, except for the determination - # of the 'prefix' string. Hmmm. - if self.compile: - byte_compile( - files, optimize=0, force=self.force, prefix=prefix, dry_run=self.dry_run - ) - if self.optimize > 0: - byte_compile( - files, - optimize=self.optimize, - force=self.force, - prefix=prefix, - dry_run=self.dry_run, - ) diff --git a/spaces/TangibleAI/mathtext/mathtext/api_scaling.py b/spaces/TangibleAI/mathtext/mathtext/api_scaling.py deleted file mode 100644 index cf688d3c3c2468347887f5b0fee620554ed44ca6..0000000000000000000000000000000000000000 --- a/spaces/TangibleAI/mathtext/mathtext/api_scaling.py +++ /dev/null @@ -1,96 +0,0 @@ -"""https://zetcode.com/python/concurrent-http-requests/""" - -import asyncio -import random -import time -import pandas as pd -import httpx -from os.path import exists - -NUMBER_OF_CALLS = 1 - -headers = {"Content-Type": "application/json; charset=utf-8"} - -# base_url = "https://tangibleai-mathtext.hf.space/run/{endpoint}" -base_url = "http://localhost:7860/run/{endpoint}" - -data_list_1 = { - "endpoint": "text2int", - "test_data": [ - "one hundred forty five", - "twenty thousand nine hundred fifty", - "one hundred forty five", - "nine hundred eighty three", - "five million", - ] -} - -data_list_2 = { - "endpoint": "text2int-preprocessed", - "test_data": [ - "one hundred forty five", - "twenty thousand nine hundred fifty", - "one hundred forty five", - "nine hundred eighty three", - "five million", - ] -} -data_list_3 = { - "endpoint": "sentiment-analysis", - "test_data": [ - "Totally agree", - "I like it", - "No more", - "I am not sure", - "Never", - ] -} - - -# async call to endpoint -async def call_api(url, data, call_number, number_of_calls): - json = {"data": [data]} - async with httpx.AsyncClient() as client: - start = time.perf_counter() # Used perf_counter for more precise result. 
- response = await client.post(url=url, headers=headers, json=json, timeout=30) - end = time.perf_counter() - return { - "endpoint": url.split("/")[-1], - "test data": data, - "status code": response.status_code, - "response": response.json().get("data"), - "call number": call_number, - "number of calls": number_of_calls, - "start": start.__round__(4), - "end": end.__round__(4), - "delay": (end - start).__round__(4) - } - - -data_lists = [data_list_1, data_list_2, data_list_3] - -results = [] - - -async def main(number_of_calls): - for data_list in data_lists: - calls = [] - for call_number in range(1, number_of_calls + 1): - url = base_url.format(endpoint=data_list["endpoint"]) - data = random.choice(data_list["test_data"]) - calls.append(call_api(url, data, call_number, number_of_calls)) - r = await asyncio.gather(*calls) - results.extend(r) - - - -start = time.perf_counter() -asyncio.run(main(NUMBER_OF_CALLS)) -end = time.perf_counter() -print(end-start) -df = pd.DataFrame(results) - -if exists("call_history.csv"): - df.to_csv(path_or_buf="call_history.csv", mode="a", header=False, index=False) -else: - df.to_csv(path_or_buf="call_history.csv", mode="w", header=True, index=False) diff --git a/spaces/TheTimeTraveller/StableDiffusion/app.py b/spaces/TheTimeTraveller/StableDiffusion/app.py deleted file mode 100644 index 1a4bb98f10059cbb7a5031b951d5ab497fa80b4e..0000000000000000000000000000000000000000 --- a/spaces/TheTimeTraveller/StableDiffusion/app.py +++ /dev/null @@ -1,102 +0,0 @@ -import streamlit as st -import time -import torch -import os -from torch import autocast -from diffusers import StableDiffusionPipeline -from datasets import load_dataset -from PIL import Image -import re - -st.title("Text-to-Image generation using Stable Diffusion") -st.subheader("Text Prompt") -text_prompt = st.text_area('Enter here:', height=100) - -sl1, sl2, sl3, sl4 = st.columns(4) - -num_samples = sl1.slider('Number of Images', 1, 4, 1) -num_steps = sl2.slider('Diffusion steps', 10, 150, 10) -scale = sl3.slider('Configuration scale', 0, 20, 10) -seed = sl4.number_input("Enter seed", 0, 25000, 47, 1) - - -model_id = "CompVis/stable-diffusion-v1-4" -device = "cuda" - -auth_token = os.environ.get("StableDiffusion") or True - -pipe = StableDiffusionPipeline.from_pretrained( - model_id, use_auth_token=auth_token, revision="fp16", torch_dtype=torch.float16) -pipe = pipe.to(device) -word_list_dataset = load_dataset( - "stabilityai/word-list", data_files="list.txt", use_auth_token=auth_token) -word_list = word_list_dataset["train"]['text'] - - -def infer(prompt, samples, steps, scale, seed): - for filter in word_list: - if re.search(rf"\b{filter}\b", prompt): - raise Exception( - "Unsafe content found. 
Please try again with different prompts.") - - generator = torch.Generator(device=device).manual_seed(seed) - with autocast("cuda"): - images_list = pipe( - [prompt] * samples, - num_inference_steps=steps, - guidance_scale=scale, - generator=generator, - ) - images = [] - safe_image = Image.open(r"unsafe.png") - for i, image in enumerate(images_list["sample"]): - if (images_list["nsfw_content_detected"][i]): - images.append(safe_image) - else: - images.append(image) - return images - - -def check_and_infer(): - - if len(text_prompt) < 5: - st.write("Prompt too small, enter some more detail") - st.experimental_rerun() - else: - with st.spinner('Wait for it...'): - generated_images = infer( - text_prompt, num_samples, num_steps, scale, seed) - for image in generated_images: - st.image(image, caption=text_prompt) - st.success('Image generated!') - st.balloons() - - -button_clicked = st.button( - "Generate Image", on_click=check_and_infer, disabled=False) - -st.markdown("""---""") - -col1, col2, col3 = st.columns([1, 6, 1]) - -with col1: - col1.write("") - -with col2: - placeholder = col2.empty() - - placeholder.image("pl2.png") - -with col3: - col1.write("") - - -for image in []: - st.image(image, caption=text_prompt) - -st.markdown("""---""") - -st.text("Number of Images: Number of samples(Images) to generate") -st.text("Diffusion steps: How many steps to spend generating (diffusing) your image.") -st.text("Configuration scale: Scale adjusts how close the image will be to your prompt. Higher values keep your image closer to your prompt.") -st.text("Enter seed: Seed value to use for the model.") diff --git a/spaces/Tuyet3005/Sentiment_Analysis_using_BERT/README.md b/spaces/Tuyet3005/Sentiment_Analysis_using_BERT/README.md deleted file mode 100644 index a65fa6a454271ef9206106514bd368f1006b38a5..0000000000000000000000000000000000000000 --- a/spaces/Tuyet3005/Sentiment_Analysis_using_BERT/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Sentiment Detection Using Bert -emoji: 🐨 -colorFrom: gray -colorTo: gray -sdk: streamlit -sdk_version: 1.21.0 -app_file: streamlit_app.py/Homepage.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/UltraMarkoBR/SoftHunter/style.css b/spaces/UltraMarkoBR/SoftHunter/style.css deleted file mode 100644 index 114adf441e9032febb46bc056b2a8bb651075f0d..0000000000000000000000000000000000000000 --- a/spaces/UltraMarkoBR/SoftHunter/style.css +++ /dev/null @@ -1,28 +0,0 @@ -body { - padding: 2rem; - font-family: -apple-system, BlinkMacSystemFont, "Arial", sans-serif; -} - -h1 { - font-size: 16px; - margin-top: 0; -} - -p { - color: rgb(107, 114, 128); - font-size: 15px; - margin-bottom: 10px; - margin-top: 5px; -} - -.card { - max-width: 620px; - margin: 0 auto; - padding: 16px; - border: 1px solid lightgray; - border-radius: 16px; -} - -.card p:last-child { - margin-bottom: 0; -} diff --git a/spaces/Vegecken/sovits4dzl/inference/infer_tool.py b/spaces/Vegecken/sovits4dzl/inference/infer_tool.py deleted file mode 100644 index dbaff46f4f6eb792808e0a0cbb37fb86cb8372e2..0000000000000000000000000000000000000000 --- a/spaces/Vegecken/sovits4dzl/inference/infer_tool.py +++ /dev/null @@ -1,233 +0,0 @@ -import hashlib -import io -import json -import logging -import os -import time -from pathlib import Path -from inference import slicer - -import librosa -import numpy as np -# import onnxruntime -import parselmouth -import soundfile -import torch -import torchaudio - -import cluster 
-from hubert import hubert_model -import utils -from models import SynthesizerTrn - -logging.getLogger('matplotlib').setLevel(logging.WARNING) - - -def read_temp(file_name): - if not os.path.exists(file_name): - with open(file_name, "w") as f: - f.write(json.dumps({"info": "temp_dict"})) - return {} - else: - try: - with open(file_name, "r") as f: - data = f.read() - data_dict = json.loads(data) - if os.path.getsize(file_name) > 50 * 1024 * 1024: - f_name = file_name.replace("\\", "/").split("/")[-1] - print(f"clean {f_name}") - for wav_hash in list(data_dict.keys()): - if int(time.time()) - int(data_dict[wav_hash]["time"]) > 14 * 24 * 3600: - del data_dict[wav_hash] - except Exception as e: - print(e) - print(f"{file_name} error,auto rebuild file") - data_dict = {"info": "temp_dict"} - return data_dict - - -def write_temp(file_name, data): - with open(file_name, "w") as f: - f.write(json.dumps(data)) - - -def timeit(func): - def run(*args, **kwargs): - t = time.time() - res = func(*args, **kwargs) - print('executing \'%s\' costed %.3fs' % (func.__name__, time.time() - t)) - return res - - return run - - -def format_wav(audio_path): - if Path(audio_path).suffix == '.wav': - return - raw_audio, raw_sample_rate = librosa.load(audio_path, mono=True, sr=None) - soundfile.write(Path(audio_path).with_suffix(".wav"), raw_audio, raw_sample_rate) - - -def get_end_file(dir_path, end): - file_lists = [] - for root, dirs, files in os.walk(dir_path): - files = [f for f in files if f[0] != '.'] - dirs[:] = [d for d in dirs if d[0] != '.'] - for f_file in files: - if f_file.endswith(end): - file_lists.append(os.path.join(root, f_file).replace("\\", "/")) - return file_lists - - -def get_md5(content): - return hashlib.new("md5", content).hexdigest() - -def fill_a_to_b(a, b): - if len(a) < len(b): - for _ in range(0, len(b) - len(a)): - a.append(a[0]) - -def mkdir(paths: list): - for path in paths: - if not os.path.exists(path): - os.mkdir(path) - - -class Svc(object): - def __init__(self, net_g_path, config_path, - device=None, - cluster_model_path="logs/44k/kmeans_10000.pt"): - self.net_g_path = net_g_path - if device is None: - self.dev = torch.device("cuda" if torch.cuda.is_available() else "cpu") - else: - self.dev = torch.device(device) - self.net_g_ms = None - self.hps_ms = utils.get_hparams_from_file(config_path) - self.target_sample = self.hps_ms.data.sampling_rate - self.hop_size = self.hps_ms.data.hop_length - self.spk2id = self.hps_ms.spk - # 加载hubert - self.hubert_model = utils.get_hubert_model().to(self.dev) - self.load_model() - if os.path.exists(cluster_model_path): - self.cluster_model = cluster.get_cluster_model(cluster_model_path) - - def load_model(self): - # 获取模型配置 - self.net_g_ms = SynthesizerTrn( - self.hps_ms.data.filter_length // 2 + 1, - self.hps_ms.train.segment_size // self.hps_ms.data.hop_length, - **self.hps_ms.model) - _ = utils.load_checkpoint(self.net_g_path, self.net_g_ms, None) - if "half" in self.net_g_path and torch.cuda.is_available(): - _ = self.net_g_ms.half().eval().to(self.dev) - else: - _ = self.net_g_ms.eval().to(self.dev) - - - - def get_unit_f0(self, in_path, tran, cluster_infer_ratio, speaker): - - wav, sr = librosa.load(in_path, sr=self.target_sample) - - f0 = utils.compute_f0_parselmouth(wav, sampling_rate=self.target_sample, hop_length=self.hop_size) - f0, uv = utils.interpolate_f0(f0) - f0 = torch.FloatTensor(f0) - uv = torch.FloatTensor(uv) - f0 = f0 * 2 ** (tran / 12) - f0 = f0.unsqueeze(0).to(self.dev) - uv = uv.unsqueeze(0).to(self.dev) - - wav16k = 
librosa.resample(wav, orig_sr=self.target_sample, target_sr=16000) - wav16k = torch.from_numpy(wav16k).to(self.dev) - c = utils.get_hubert_content(self.hubert_model, wav_16k_tensor=wav16k) - c = utils.repeat_expand_2d(c.squeeze(0), f0.shape[1]) - - if cluster_infer_ratio !=0: - cluster_c = cluster.get_cluster_center_result(self.cluster_model, c.numpy().T, speaker).T - cluster_c = torch.FloatTensor(cluster_c) - c = cluster_infer_ratio * cluster_c + (1 - cluster_infer_ratio) * c - - c = c.unsqueeze(0) - return c, f0, uv - - def infer(self, speaker, tran, raw_path, - cluster_infer_ratio=0, - auto_predict_f0=False, - noice_scale=0.4): - speaker_id = self.spk2id[speaker] - sid = torch.LongTensor([int(speaker_id)]).to(self.dev).unsqueeze(0) - c, f0, uv = self.get_unit_f0(raw_path, tran, cluster_infer_ratio, speaker) - if "half" in self.net_g_path and torch.cuda.is_available(): - c = c.half() - with torch.no_grad(): - start = time.time() - audio = self.net_g_ms.infer(c, f0=f0, g=sid, uv=uv, predict_f0=auto_predict_f0, noice_scale=noice_scale)[0,0].data.float() - use_time = time.time() - start - print("vits use time:{}".format(use_time)) - return audio, audio.shape[-1] - - def slice_inference(self,raw_audio_path, spk, tran, slice_db,cluster_infer_ratio, auto_predict_f0,noice_scale, pad_seconds=0.5): - wav_path = raw_audio_path - chunks = slicer.cut(wav_path, db_thresh=slice_db) - audio_data, audio_sr = slicer.chunks2audio(wav_path, chunks) - - audio = [] - for (slice_tag, data) in audio_data: - print(f'#=====segment start, {round(len(data) / audio_sr, 3)}s======') - # padd - pad_len = int(audio_sr * pad_seconds) - data = np.concatenate([np.zeros([pad_len]), data, np.zeros([pad_len])]) - length = int(np.ceil(len(data) / audio_sr * self.target_sample)) - raw_path = io.BytesIO() - soundfile.write(raw_path, data, audio_sr, format="wav") - raw_path.seek(0) - if slice_tag: - print('jump empty segment') - _audio = np.zeros(length) - else: - out_audio, out_sr = self.infer(spk, tran, raw_path, - cluster_infer_ratio=cluster_infer_ratio, - auto_predict_f0=auto_predict_f0, - noice_scale=noice_scale - ) - _audio = out_audio.cpu().numpy() - - pad_len = int(self.target_sample * pad_seconds) - _audio = _audio[pad_len:-pad_len] - audio.extend(list(_audio)) - return np.array(audio) - - -class RealTimeVC: - def __init__(self): - self.last_chunk = None - self.last_o = None - self.chunk_len = 16000 # 区块长度 - self.pre_len = 3840 # 交叉淡化长度,640的倍数 - - """输入输出都是1维numpy 音频波形数组""" - - def process(self, svc_model, speaker_id, f_pitch_change, input_wav_path): - import maad - audio, sr = torchaudio.load(input_wav_path) - audio = audio.cpu().numpy()[0] - temp_wav = io.BytesIO() - if self.last_chunk is None: - input_wav_path.seek(0) - audio, sr = svc_model.infer(speaker_id, f_pitch_change, input_wav_path) - audio = audio.cpu().numpy() - self.last_chunk = audio[-self.pre_len:] - self.last_o = audio - return audio[-self.chunk_len:] - else: - audio = np.concatenate([self.last_chunk, audio]) - soundfile.write(temp_wav, audio, sr, format="wav") - temp_wav.seek(0) - audio, sr = svc_model.infer(speaker_id, f_pitch_change, temp_wav) - audio = audio.cpu().numpy() - ret = maad.util.crossfade(self.last_o, audio, self.pre_len) - self.last_chunk = audio[-self.pre_len:] - self.last_o = audio - return ret[self.chunk_len:2 * self.chunk_len] diff --git a/spaces/Vladislawoo/booktoread/README.md b/spaces/Vladislawoo/booktoread/README.md deleted file mode 100644 index f3662c8b2ad6357eb10a93e814bbecd36a2ada52..0000000000000000000000000000000000000000 
--- a/spaces/Vladislawoo/booktoread/README.md +++ /dev/null @@ -1,25 +0,0 @@ ---- -title: Booktoread -emoji: 📉 -colorFrom: purple -colorTo: yellow -sdk: streamlit -sdk_version: 1.26.0 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference - -## Умный поиск книг - -## 🦸‍♂️Команда -1. [Антон Яблоков](https://github.com/AntNikYab) -2. [Владислав Филиппов](https://github.com/Vlad1slawoo) - -## 🎯 Задача -собрать выборку из не менее, чем 5000 аннотаций c [сайта](https://www.biblio-globus.ru/category?cid=182&pagenumber=1) и построить систему поиска наиболее подходящих под пользовательский запрос книг. - -## Как пользоваться - diff --git a/spaces/Vrk/SeeFood/FetchRecipe.py b/spaces/Vrk/SeeFood/FetchRecipe.py deleted file mode 100644 index be3a73c86d1920920d6f3d5ae0c97066c877e1c9..0000000000000000000000000000000000000000 --- a/spaces/Vrk/SeeFood/FetchRecipe.py +++ /dev/null @@ -1,16 +0,0 @@ -import requests -import json - -url = "https://yummly2.p.rapidapi.com/feeds/auto-complete" - -querystring = {"q":"chicken soup"} - -headers = { - 'x-rapidapi-host': "yummly2.p.rapidapi.com", - 'x-rapidapi-key': "f6f6823b91msh9e92fed91d5356ap136f5djsn494d8f582fb3" - } - -response = requests.request("GET", url, headers=headers, params=querystring) -json_data = json.loads(response.text) - -print(json_data) \ No newline at end of file diff --git a/spaces/XzJosh/Aatrox-Bert-VITS2/text/symbols.py b/spaces/XzJosh/Aatrox-Bert-VITS2/text/symbols.py deleted file mode 100644 index 9dfae4e633829f20c4fd767b1c7a9198911ed801..0000000000000000000000000000000000000000 --- a/spaces/XzJosh/Aatrox-Bert-VITS2/text/symbols.py +++ /dev/null @@ -1,51 +0,0 @@ -punctuation = ['!', '?', '…', ",", ".", "'", '-'] -pu_symbols = punctuation + ["SP", "UNK"] -pad = '_' - -# chinese -zh_symbols = ['E', 'En', 'a', 'ai', 'an', 'ang', 'ao', 'b', 'c', 'ch', 'd', 'e', 'ei', 'en', 'eng', 'er', 'f', 'g', 'h', - 'i', 'i0', 'ia', 'ian', 'iang', 'iao', 'ie', 'in', 'ing', 'iong', 'ir', 'iu', 'j', 'k', 'l', 'm', 'n', 'o', - 'ong', - 'ou', 'p', 'q', 'r', 's', 'sh', 't', 'u', 'ua', 'uai', 'uan', 'uang', 'ui', 'un', 'uo', 'v', 'van', 've', 'vn', - 'w', 'x', 'y', 'z', 'zh', - "AA", "EE", "OO"] -num_zh_tones = 6 - -# japanese -ja_symbols = ['I', 'N', 'U', 'a', 'b', 'by', 'ch', 'cl', 'd', 'dy', 'e', 'f', 'g', 'gy', 'h', 'hy', 'i', 'j', 'k', 'ky', - 'm', 'my', 'n', 'ny', 'o', 'p', 'py', 'r', 'ry', 's', 'sh', 't', 'ts', 'u', 'V', 'w', 'y', 'z'] -num_ja_tones = 1 - -# English -en_symbols = ['aa', 'ae', 'ah', 'ao', 'aw', 'ay', 'b', 'ch', 'd', 'dh', 'eh', 'er', 'ey', 'f', 'g', 'hh', 'ih', 'iy', - 'jh', 'k', 'l', 'm', 'n', 'ng', 'ow', 'oy', 'p', 'r', 's', - 'sh', 't', 'th', 'uh', 'uw', 'V', 'w', 'y', 'z', 'zh'] -num_en_tones = 4 - -# combine all symbols -normal_symbols = sorted(set(zh_symbols + ja_symbols + en_symbols)) -symbols = [pad] + normal_symbols + pu_symbols -sil_phonemes_ids = [symbols.index(i) for i in pu_symbols] - -# combine all tones -num_tones = num_zh_tones + num_ja_tones + num_en_tones - -# language maps -language_id_map = { - 'ZH': 0, - "JA": 1, - "EN": 2 -} -num_languages = len(language_id_map.keys()) - -language_tone_start_map = { - 'ZH': 0, - "JA": num_zh_tones, - "EN": num_zh_tones + num_ja_tones -} - -if __name__ == '__main__': - a = set(zh_symbols) - b = set(en_symbols) - print(sorted(a&b)) - diff --git a/spaces/XzJosh/LAPLACE-Bert-VITS2/server.py b/spaces/XzJosh/LAPLACE-Bert-VITS2/server.py deleted file mode 100644 index 
c736ca4f95fec853950eef6654ef79856beffc0a..0000000000000000000000000000000000000000 --- a/spaces/XzJosh/LAPLACE-Bert-VITS2/server.py +++ /dev/null @@ -1,123 +0,0 @@ -from flask import Flask, request, Response -from io import BytesIO -import torch -from av import open as avopen - -import commons -import utils -from models import SynthesizerTrn -from text.symbols import symbols -from text import cleaned_text_to_sequence, get_bert -from text.cleaner import clean_text -from scipy.io import wavfile - -# Flask Init -app = Flask(__name__) -app.config['JSON_AS_ASCII'] = False -def get_text(text, language_str, hps): - norm_text, phone, tone, word2ph = clean_text(text, language_str) - print([f"{p}{t}" for p, t in zip(phone, tone)]) - phone, tone, language = cleaned_text_to_sequence(phone, tone, language_str) - - if hps.data.add_blank: - phone = commons.intersperse(phone, 0) - tone = commons.intersperse(tone, 0) - language = commons.intersperse(language, 0) - for i in range(len(word2ph)): - word2ph[i] = word2ph[i] * 2 - word2ph[0] += 1 - bert = get_bert(norm_text, word2ph, language_str) - - assert bert.shape[-1] == len(phone) - - phone = torch.LongTensor(phone) - tone = torch.LongTensor(tone) - language = torch.LongTensor(language) - - return bert, phone, tone, language - -def infer(text, sdp_ratio, noise_scale, noise_scale_w,length_scale,sid): - bert, phones, tones, lang_ids = get_text(text,"ZH", hps,) - with torch.no_grad(): - x_tst=phones.to(dev).unsqueeze(0) - tones=tones.to(dev).unsqueeze(0) - lang_ids=lang_ids.to(dev).unsqueeze(0) - bert = bert.to(dev).unsqueeze(0) - x_tst_lengths = torch.LongTensor([phones.size(0)]).to(dev) - speakers = torch.LongTensor([hps.data.spk2id[sid]]).to(dev) - audio = net_g.infer(x_tst, x_tst_lengths, speakers, tones, lang_ids,bert, sdp_ratio=sdp_ratio - , noise_scale=noise_scale, noise_scale_w=noise_scale_w, length_scale=length_scale)[0][0,0].data.cpu().float().numpy() - return audio - -def replace_punctuation(text, i=2): - punctuation = ",。?!" 
- for char in punctuation: - text = text.replace(char, char * i) - return text - -def wav2(i, o, format): - inp = avopen(i, 'rb') - out = avopen(o, 'wb', format=format) - if format == "ogg": format = "libvorbis" - - ostream = out.add_stream(format) - - for frame in inp.decode(audio=0): - for p in ostream.encode(frame): out.mux(p) - - for p in ostream.encode(None): out.mux(p) - - out.close() - inp.close() - -# Load Generator -hps = utils.get_hparams_from_file("./configs/config.json") - -dev='cuda' -net_g = SynthesizerTrn( - len(symbols), - hps.data.filter_length // 2 + 1, - hps.train.segment_size // hps.data.hop_length, - n_speakers=hps.data.n_speakers, - **hps.model).to(dev) -_ = net_g.eval() - -_ = utils.load_checkpoint("logs/G_649000.pth", net_g, None,skip_optimizer=True) - -@app.route("/",methods=['GET','POST']) -def main(): - if request.method == 'GET': - try: - speaker = request.args.get('speaker') - text = request.args.get('text').replace("/n","") - sdp_ratio = float(request.args.get("sdp_ratio", 0.2)) - noise = float(request.args.get("noise", 0.5)) - noisew = float(request.args.get("noisew", 0.6)) - length = float(request.args.get("length", 1.2)) - if length >= 2: - return "Too big length" - if len(text) >=200: - return "Too long text" - fmt = request.args.get("format", "wav") - if None in (speaker, text): - return "Missing Parameter" - if fmt not in ("mp3", "wav", "ogg"): - return "Invalid Format" - except: - return "Invalid Parameter" - - with torch.no_grad(): - audio = infer(text, sdp_ratio=sdp_ratio, noise_scale=noise, noise_scale_w=noisew, length_scale=length, sid=speaker) - - with BytesIO() as wav: - wavfile.write(wav, hps.data.sampling_rate, audio) - torch.cuda.empty_cache() - if fmt == "wav": - return Response(wav.getvalue(), mimetype="audio/wav") - wav.seek(0, 0) - with BytesIO() as ofp: - wav2(wav, ofp, fmt) - return Response( - ofp.getvalue(), - mimetype="audio/mpeg" if fmt == "mp3" else "audio/ogg" - ) diff --git a/spaces/Yabo/ControlVideo/GITHUB_README.md b/spaces/Yabo/ControlVideo/GITHUB_README.md deleted file mode 100644 index 08048e53ca39be3c595d77be7315dea021e94e26..0000000000000000000000000000000000000000 --- a/spaces/Yabo/ControlVideo/GITHUB_README.md +++ /dev/null @@ -1,162 +0,0 @@ -# ControlVideo - -Official pytorch implementation of "ControlVideo: Training-free Controllable Text-to-Video Generation" - -[![arXiv](https://img.shields.io/badge/arXiv-2305.13077-b31b1b.svg)](https://arxiv.org/abs/2305.13077) -![visitors](https://visitor-badge.laobi.icu/badge?page_id=YBYBZhang/ControlVideo) -[![Replicate](https://replicate.com/cjwbw/controlvideo/badge)](https://replicate.com/cjwbw/controlvideo) - -

    - -
    -ControlVideo adapts ControlNet to the video counterpart without any finetuning, aiming to directly inherit its high-quality and consistent generation -

    - -## News -* [07/11/2023] Support [ControlNet 1.1](https://github.com/lllyasviel/ControlNet-v1-1-nightly) based version! -* [05/28/2023] Thanks [chenxwh](https://github.com/chenxwh), add a [Replicate demo](https://replicate.com/cjwbw/controlvideo)! -* [05/25/2023] Code [ControlVideo](https://github.com/YBYBZhang/ControlVideo/) released! -* [05/23/2023] Paper [ControlVideo](https://arxiv.org/abs/2305.13077) released! - -## Setup - -### 1. Download Weights -All pre-trained weights are downloaded to `checkpoints/` directory, including the pre-trained weights of [Stable Diffusion v1.5](https://huggingface.co/runwayml/stable-diffusion-v1-5), ControlNet 1.0 conditioned on [canny edges](https://huggingface.co/lllyasviel/sd-controlnet-canny), [depth maps](https://huggingface.co/lllyasviel/sd-controlnet-depth), [human poses](https://huggingface.co/lllyasviel/sd-controlnet-openpose), and ControlNet 1.1 in [here](https://huggingface.co/lllyasviel). -The `flownet.pkl` is the weights of [RIFE](https://github.com/megvii-research/ECCV2022-RIFE). -The final file tree likes: - -```none -checkpoints -├── stable-diffusion-v1-5 -├── sd-controlnet-canny -├── sd-controlnet-depth -├── sd-controlnet-openpose -├── ... -├── flownet.pkl -``` -### 2. Requirements - -```shell -conda create -n controlvideo python=3.10 -conda activate controlvideo -pip install -r requirements.txt -``` -Note: `xformers` is recommended to save memory and running time. `controlnet-aux` is updated to version 0.0.6. - -## Inference - -To perform text-to-video generation, just run this command in `inference.sh`: -```bash -python inference.py \ - --prompt "A striking mallard floats effortlessly on the sparkling pond." \ - --condition "depth" \ - --video_path "data/mallard-water.mp4" \ - --output_path "outputs/" \ - --video_length 15 \ - --smoother_steps 19 20 \ - --width 512 \ - --height 512 \ - --frame_rate 2 \ - --version v10 \ - # --is_long_video -``` -where `--video_length` is the length of synthesized video, `--condition` represents the type of structure sequence, -`--smoother_steps` determines at which timesteps to perform smoothing, `--version` selects the version of ControlNet (e.g., `v10` or `v11`), and `--is_long_video` denotes whether to enable efficient long-video synthesis. - -## Visualizations - -### ControlVideo on depth maps - - - - - - - - - - - - - - - - - - - - - - -
    "A charming flamingo gracefully wanders in the calm and serene water, its delicate neck curving into an elegant shape.""A striking mallard floats effortlessly on the sparkling pond.""A gigantic yellow jeep slowly turns on a wide, smooth road in the city."
    "A sleek boat glides effortlessly through the shimmering river, van gogh style.""A majestic sailing boat cruises along the vast, azure sea.""A contented cow ambles across the dewy, verdant pasture."
- -### ControlVideo on canny edges -
    "A young man riding a sleek, black motorbike through the winding mountain roads.""A white swan movingon the lake, cartoon style.""A dusty old jeep was making its way down the winding forest road, creaking and groaning with each bump and turn."
    "A shiny red jeep smoothly turns on a narrow, winding road in the mountains.""A majestic camel gracefully strides across the scorching desert sands.""A fit man is leisurely hiking through a lush and verdant forest."
- - -### ControlVideo on human poses -
    "James bond moonwalk on the beach, animation style.""Goku in a mountain range, surreal style.""Hulk is jumping on the street, cartoon style.""A robot dances on a road, animation style."
- -### Long video generation -
    "A steamship on the ocean, at sunset, sketch style.""Hulk is dancing on the beach, cartoon style."
    - -## Citation -If you make use of our work, please cite our paper. -```bibtex -@article{zhang2023controlvideo, - title={ControlVideo: Training-free Controllable Text-to-Video Generation}, - author={Zhang, Yabo and Wei, Yuxiang and Jiang, Dongsheng and Zhang, Xiaopeng and Zuo, Wangmeng and Tian, Qi}, - journal={arXiv preprint arXiv:2305.13077}, - year={2023} -} -``` - -## Acknowledgement -This work repository borrows heavily from [Diffusers](https://github.com/huggingface/diffusers), [ControlNet](https://github.com/lllyasviel/ControlNet), [Tune-A-Video](https://github.com/showlab/Tune-A-Video), and [RIFE](https://github.com/megvii-research/ECCV2022-RIFE). - -There are also many interesting works on video generation: [Tune-A-Video](https://github.com/showlab/Tune-A-Video), [Text2Video-Zero](https://github.com/Picsart-AI-Research/Text2Video-Zero), [Follow-Your-Pose](https://github.com/mayuelala/FollowYourPose), [Control-A-Video](https://github.com/Weifeng-Chen/control-a-video), et al. diff --git a/spaces/YuxinJ/Scenimefy/Scenimefy/utils/__init__.py b/spaces/YuxinJ/Scenimefy/Scenimefy/utils/__init__.py deleted file mode 100644 index f8c8e6dd84323081212602564b9b324f8826a943..0000000000000000000000000000000000000000 --- a/spaces/YuxinJ/Scenimefy/Scenimefy/utils/__init__.py +++ /dev/null @@ -1,4 +0,0 @@ -""" -This package includes a miscellaneous collection of useful helper functions. -""" -from Scenimefy.utils import * diff --git a/spaces/abdabbas/breast_cancer/README.md b/spaces/abdabbas/breast_cancer/README.md deleted file mode 100644 index 697db44c2e5d95d4e4de5ba141238df910f09180..0000000000000000000000000000000000000000 --- a/spaces/abdabbas/breast_cancer/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Breast Cancer -emoji: 📉 -colorFrom: red -colorTo: yellow -sdk: gradio -sdk_version: 3.1.7 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/abhi3940/test/app.py b/spaces/abhi3940/test/app.py deleted file mode 100644 index 7710a6cf4c68b944225327957ca82aa8ac510aab..0000000000000000000000000000000000000000 --- a/spaces/abhi3940/test/app.py +++ /dev/null @@ -1,178 +0,0 @@ -import streamlit as st -from PIL import Image -import torch -from diffusers import DiffusionPipeline -from diffusers.utils import load_image -from streamlit_image_select import image_select -import time -import io - - -pipeline = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-xl-refiner-1.0") - -def main(): - st.title("Welcome to SurgiLook.ai!") - st.write("With this tool, you can easily visualize the results of a surgical procedure before it happens") - - # Add an image input field - uploaded_file = st.file_uploader("Choose an image...", type="jpg") - # Add padding - st.write("") - st.write("") - - # Add padding - st.write("") - st.write("") - - # Add a section to select an image - - img = image_select( - label="Or use a Demo Image insted", - images=[ - - Image.open("chin.jpg"), - Image.open("facelift.jpg"), - Image.open("nose.jpg"), - - ], - use_container_width=False, - captions=["chin implant", "face lift", "nose adjustment", ], - return_value="index" -) - - # Add padding - st.write("") - st.write("") - - # Add a section with multiple options - st.header("Select an option:") - option = st.selectbox("", ("Face Lift", "Nose correction", "Chin implant")) - st.write("You selected:", option) - - # Add padding - st.write("") - st.write("") - - # Add a horizontal bar with images for each option - 
strg="" - if option == "Face Lift": - strg="Face Lift" - col1, col2, col3 = st.columns(3) - with col1: - st.image("fl1.jpg") - with col2: - st.image("fl2.jpg") - with col3: - st.image("fl3.jpg") - elif option == "Nose correction": - strg="Nose correction" - col1, col2, col3 = st.columns(3) - with col1: - st.image("nose1.jpg") - with col2: - st.image("nose3.jpg") - with col3: - st.image("nose4.jpg") - elif option == "Chin implant": - strg="Chin implant" - col1, col2, col3 = st.columns(3) - with col1: - st.image("chin1.jpg") - with col2: - st.image("chin2.jpg") - with col3: - st.image("chin3.jpg") - - - # Add padding - st.write("") - st.write("") - # Add a text input field - text_input = st.text_input("Enter some text...") - - # Add a generate button - if st.button("Generate"): - # Check if an image was uploaded - if uploaded_file is not None: - image_bytes = uploaded_file.read() - pil_image = Image.open(io.BytesIO(image_bytes)) - init_image = load_image(pil_image).convert("RGB") - prompt = f"generate image of how this person would look after {strg} also use this additional information{text_input}" - image = pipeline(prompt, image=init_image).images - st.image(image, caption="Uploaded Image", use_column_width=True) - else: - time.sleep(4) - if img==0: - st.image('chinop.jpg',width=300) - elif img==1: - st.image('faceliftop.jpg',width=300) - elif img==2: - st.image('noseop.jpg',width=300) - - # Add padding - st.write("") - st.write("") - st.write("") - st.write("") - st.write("") - st.write("") - st.write("") - st.write("") - st.write("") - - # Add a section with instructions and images - col1, col2 = st.columns(2) - with col1: - st.header("How to use this website") - st.write("Here are some steps to get you started:") - st.write("1. Upload an image using the image input field above.") - st.write("2. select the type of cosmetic sugery you want to perform.") - st.write("3. Add some text to describe any specific details") - st.write("4. 
Click the generate button to generate output.") - with col2: - st.image("step1.jpeg") - - st.write("") - st.write("") - - col1, col2 = st.columns(2) - with col2: - st.header("Transforming the cosmatic surgeriy experiance") - st.write("Our SugiLook AI model generates realistic before and after images of cosmetic surgery.") - st.write("Generate Precise image by providing detailed feedback") - st.write("Feel more confident on your decision") - st.write("Assist doctors in communicating the potential results of the procedure.") - - with col1: - st.image("step2.jpeg") - # Set up the footer container - - # Add a footer - st.write("") - st.write("") - st.write("") - st.write("") - st.write("") - st.write("") - st.write("") - st.write("") - st.write("") - st.write("") - st.write("") - st.write("") - - - - - footer_container = st.container() - with footer_container: - st.write("Terms of Use | Privacy Policy | Contact Us") - st.write("Language: English | Spanish | French | German") - - - - - - -if __name__ == "__main__": - main() \ No newline at end of file diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmseg/datasets/pipelines/__init__.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmseg/datasets/pipelines/__init__.py deleted file mode 100644 index 8b9046b07bb4ddea7a707a392b42e72db7c9df67..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmseg/datasets/pipelines/__init__.py +++ /dev/null @@ -1,16 +0,0 @@ -from .compose import Compose -from .formating import (Collect, ImageToTensor, ToDataContainer, ToTensor, - Transpose, to_tensor) -from .loading import LoadAnnotations, LoadImageFromFile -from .test_time_aug import MultiScaleFlipAug -from .transforms import (CLAHE, AdjustGamma, Normalize, Pad, - PhotoMetricDistortion, RandomCrop, RandomFlip, - RandomRotate, Rerange, Resize, RGB2Gray, SegRescale) - -__all__ = [ - 'Compose', 'to_tensor', 'ToTensor', 'ImageToTensor', 'ToDataContainer', - 'Transpose', 'Collect', 'LoadAnnotations', 'LoadImageFromFile', - 'MultiScaleFlipAug', 'Resize', 'RandomFlip', 'Pad', 'RandomCrop', - 'Normalize', 'SegRescale', 'PhotoMetricDistortion', 'RandomRotate', - 'AdjustGamma', 'CLAHE', 'Rerange', 'RGB2Gray' -] diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer_base/configs/_base_/models/ann_r50-d8.py b/spaces/abhishek/sketch-to-image/annotator/uniformer_base/configs/_base_/models/ann_r50-d8.py deleted file mode 100644 index a2cb653827e44e6015b3b83bc578003e614a6aa1..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer_base/configs/_base_/models/ann_r50-d8.py +++ /dev/null @@ -1,46 +0,0 @@ -# model settings -norm_cfg = dict(type='SyncBN', requires_grad=True) -model = dict( - type='EncoderDecoder', - pretrained='open-mmlab://resnet50_v1c', - backbone=dict( - type='ResNetV1c', - depth=50, - num_stages=4, - out_indices=(0, 1, 2, 3), - dilations=(1, 1, 2, 4), - strides=(1, 2, 1, 1), - norm_cfg=norm_cfg, - norm_eval=False, - style='pytorch', - contract_dilation=True), - decode_head=dict( - type='ANNHead', - in_channels=[1024, 2048], - in_index=[2, 3], - channels=512, - project_channels=256, - query_scales=(1, ), - key_pool_scales=(1, 3, 6, 8), - dropout_ratio=0.1, - num_classes=19, - norm_cfg=norm_cfg, - align_corners=False, - loss_decode=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0)), - auxiliary_head=dict( - type='FCNHead', - in_channels=1024, - in_index=2, - channels=256, - num_convs=1, - concat_input=False, - 
dropout_ratio=0.1, - num_classes=19, - norm_cfg=norm_cfg, - align_corners=False, - loss_decode=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.4)), - # model training and testing settings - train_cfg=dict(), - test_cfg=dict(mode='whole')) diff --git a/spaces/abrar-lohia/text-2-character-anim/VQTrans/dataset/dataset_tokenize.py b/spaces/abrar-lohia/text-2-character-anim/VQTrans/dataset/dataset_tokenize.py deleted file mode 100644 index 641a02a75f2cfaadea45851cad2a95b39bfa1eae..0000000000000000000000000000000000000000 --- a/spaces/abrar-lohia/text-2-character-anim/VQTrans/dataset/dataset_tokenize.py +++ /dev/null @@ -1,117 +0,0 @@ -import torch -from torch.utils import data -import numpy as np -from os.path import join as pjoin -import random -import codecs as cs -from tqdm import tqdm - - - -class VQMotionDataset(data.Dataset): - def __init__(self, dataset_name, feat_bias = 5, window_size = 64, unit_length = 8): - self.window_size = window_size - self.unit_length = unit_length - self.feat_bias = feat_bias - - self.dataset_name = dataset_name - min_motion_len = 40 if dataset_name =='t2m' else 24 - - if dataset_name == 't2m': - self.data_root = './dataset/HumanML3D' - self.motion_dir = pjoin(self.data_root, 'new_joint_vecs') - self.text_dir = pjoin(self.data_root, 'texts') - self.joints_num = 22 - radius = 4 - fps = 20 - self.max_motion_length = 196 - dim_pose = 263 - self.meta_dir = 'checkpoints/t2m/VQVAEV3_CB1024_CMT_H1024_NRES3/meta' - #kinematic_chain = paramUtil.t2m_kinematic_chain - elif dataset_name == 'kit': - self.data_root = './dataset/KIT-ML' - self.motion_dir = pjoin(self.data_root, 'new_joint_vecs') - self.text_dir = pjoin(self.data_root, 'texts') - self.joints_num = 21 - radius = 240 * 8 - fps = 12.5 - dim_pose = 251 - self.max_motion_length = 196 - self.meta_dir = 'checkpoints/kit/VQVAEV3_CB1024_CMT_H1024_NRES3/meta' - #kinematic_chain = paramUtil.kit_kinematic_chain - - joints_num = self.joints_num - - mean = np.load(pjoin(self.meta_dir, 'mean.npy')) - std = np.load(pjoin(self.meta_dir, 'std.npy')) - - split_file = pjoin(self.data_root, 'train.txt') - - data_dict = {} - id_list = [] - with cs.open(split_file, 'r') as f: - for line in f.readlines(): - id_list.append(line.strip()) - - new_name_list = [] - length_list = [] - for name in tqdm(id_list): - try: - motion = np.load(pjoin(self.motion_dir, name + '.npy')) - if (len(motion)) < min_motion_len or (len(motion) >= 200): - continue - - data_dict[name] = {'motion': motion, - 'length': len(motion), - 'name': name} - new_name_list.append(name) - length_list.append(len(motion)) - except: - # Some motion may not exist in KIT dataset - pass - - - self.mean = mean - self.std = std - self.length_arr = np.array(length_list) - self.data_dict = data_dict - self.name_list = new_name_list - - def inv_transform(self, data): - return data * self.std + self.mean - - def __len__(self): - return len(self.data_dict) - - def __getitem__(self, item): - name = self.name_list[item] - data = self.data_dict[name] - motion, m_length = data['motion'], data['length'] - - m_length = (m_length // self.unit_length) * self.unit_length - - idx = random.randint(0, len(motion) - m_length) - motion = motion[idx:idx+m_length] - - "Z Normalization" - motion = (motion - self.mean) / self.std - - return motion, name - -def DATALoader(dataset_name, - batch_size = 1, - num_workers = 8, unit_length = 4) : - - train_loader = torch.utils.data.DataLoader(VQMotionDataset(dataset_name, unit_length=unit_length), - batch_size, - shuffle=True, - 
num_workers=num_workers, - #collate_fn=collate_fn, - drop_last = True) - - return train_loader - -def cycle(iterable): - while True: - for x in iterable: - yield x diff --git a/spaces/adimmer/semi-supervised-wrappers/app.py b/spaces/adimmer/semi-supervised-wrappers/app.py deleted file mode 100644 index dd17b9106f3dec3615cf3bc2bbb7267c54c61bef..0000000000000000000000000000000000000000 --- a/spaces/adimmer/semi-supervised-wrappers/app.py +++ /dev/null @@ -1,343 +0,0 @@ -# Import the required python libraries -import datetime -import gradio -import numpy -import torchvision -import sklearn, sklearn.datasets, sklearn.ensemble, sklearn.metrics, sklearn.model_selection, \ - sklearn.naive_bayes, sklearn.semi_supervised - -# Download the Iris Dataset -iris = sklearn.datasets.load_iris() - -# Split the Iris data into 80% train and 20% test subsets -iris_train_features, iris_test_features, iris_train_labels, iris_test_labels = \ - sklearn.model_selection.train_test_split(iris.data, iris.target, test_size = 0.2, random_state = 42) - -print("iris_train_features.shape:", iris_train_features.shape) -print("iris_test_features.shape: ", iris_test_features.shape) -print("iris_train_labels.shape: ", iris_train_labels.shape) -print("iris_test_labels.shape: ", iris_test_labels.shape) - -# Download the Wine Dataset -wine = sklearn.datasets.load_wine() - -# Split the Wine data into 80% train and 20% test subsets -wine_train_features, wine_test_features, wine_train_labels, wine_test_labels = \ - sklearn.model_selection.train_test_split(wine.data, wine.target, test_size = 0.2, random_state = 42) - -print("wine_train_features.shape:", wine_train_features.shape) -print("wine_test_features.shape: ", wine_test_features.shape) -print("wine_train_labels.shape: ", wine_train_labels.shape) -print("wine_test_labels.shape: ", wine_test_labels.shape) - -# Download the MNIST Digits Dataset -digits = sklearn.datasets.load_digits() - -# Split the MNIST digits data into 80% train and 20% test subsets -digits_train_features, digits_test_features, digits_train_labels, digits_test_labels = \ - sklearn.model_selection.train_test_split(digits.images, digits.target, test_size = 0.2, random_state = 42) - -print("digits_train_features.shape:", digits_train_features.shape) -print("digits_test_features.shape: ", digits_test_features.shape) -print("digits_train_labels.shape: ", digits_train_labels.shape) -print("digits_test_labels.shape: ", digits_test_labels.shape) - -# Download the Fashion-MNIST Training Dataset -# fashion_train_raw = torchvision.datasets.FashionMNIST("./data/fashion", train = True, download = True) -# print(fashion_train_raw) - -# Download the Fashion-MNIST Test Dataset -# fashion_test_raw = torchvision.datasets.FashionMNIST("./data/fashion", train = False, download = True) -# print(fashion_test_raw) - -# Split the image from the labels for the datasets -# fashion_train_features = [fashion_train_raw[i][0] for i in range(len(fashion_train_raw))] -# fashion_train_labels = [fashion_train_raw[i][1] for i in range(len(fashion_train_raw))] - -# fashion_test_features = [fashion_test_raw[i][0] for i in range(len(fashion_test_raw))] -# fashion_test_labels = [fashion_test_raw[i][1] for i in range(len(fashion_test_raw))] - -# print("len(fashion_train_features):", len(fashion_train_features)) -# print("len(fashion_train_labels): ", len(fashion_train_labels)) -# print("len(fashion_test_features): ", len(fashion_test_features)) -# print("len(fashion_test_labels): ", len(fashion_test_labels)) - -# Convert the selected training and 
test data into numpy arrays. - -# fashion_train_features_numpy = numpy.array([ -# numpy.array(image) for image in fashion_train_features -# ]) -# fashion_test_features_numpy = numpy.array([ -# numpy.array(image) for image in fashion_test_features -# ]) - -# print("fashion_train_features_numpy.shape:", fashion_train_features_numpy.shape) -# print("fashion_test_features_numpy.shape: ", fashion_test_features_numpy.shape) - -# fashion_train_labels_numpy = numpy.array(fashion_train_labels) -# fashion_test_labels_numpy = numpy.array(fashion_test_labels) - -# print("fashion_train_labels_numpy.shape: ", fashion_train_labels_numpy.shape) -# print("fashion_test_labels_numpy.shape: ", fashion_test_labels_numpy.shape) - -# Download the CIFAR-10 Training Dataset -# cifar10_train_raw = torchvision.datasets.CIFAR10("./data/cifar10", train = True, download = True) -# print(cifar10_train_raw) - -# Download the CIFAR-10 Test Dataset -# cifar10_test_raw = torchvision.datasets.CIFAR10("./data/cifar10", train = False, download = True) -# print(cifar10_test_raw) - -# Split the image from the labels for the datasets -# cifar10_train_features = [cifar10_train_raw[i][0] for i in range(len(cifar10_train_raw))] -# cifar10_train_labels = [cifar10_train_raw[i][1] for i in range(len(cifar10_train_raw))] - -# cifar10_test_features = [cifar10_test_raw[i][0] for i in range(len(cifar10_test_raw))] -# cifar10_test_labels = [cifar10_test_raw[i][1] for i in range(len(cifar10_test_raw))] - -# print("len(cifar10_train_features):", len(cifar10_train_features)) -# print("len(cifar10_train_labels): ", len(cifar10_train_labels)) -# print("len(cifar10_test_features): ", len(cifar10_test_features)) -# print("len(cifar10_test_labels): ", len(cifar10_test_labels)) - -# Convert the selected training and test data into numpy arrays. 
- -# cifar10_train_features_numpy = numpy.array([ -# numpy.array(image) for image in cifar10_train_features -# ]) -# cifar10_test_features_numpy = numpy.array([ -# numpy.array(image) for image in cifar10_test_features -# ]) - -# print("cifar10_train_features_numpy.shape:", cifar10_train_features_numpy.shape) -# print("cifar10_test_features_numpy.shape: ", cifar10_test_features_numpy.shape) - -# cifar10_train_labels_numpy = numpy.array(cifar10_train_labels) -# cifar10_test_labels_numpy = numpy.array(cifar10_test_labels) - -# print("cifar10_train_labels_numpy.shape: ", cifar10_train_labels_numpy.shape) -# print("cifar10_test_labels_numpy.shape: ", cifar10_test_labels_numpy.shape) - -# Download the CIFAR-100 Training Dataset -# cifar100_train_raw = torchvision.datasets.CIFAR100("./data/cifar100", train = True, download = True) -# print(cifar100_train_raw) - -# Download the CIFAR-100 Test Dataset -# cifar100_test_raw = torchvision.datasets.CIFAR100("./data/cifar100", train = False, download = True) -# print(cifar100_test_raw) - -# Split the image from the labels for the datasets -# cifar100_train_features = [cifar100_train_raw[i][0] for i in range(len(cifar100_train_raw))] -# cifar100_train_labels = [cifar100_train_raw[i][1] for i in range(len(cifar100_train_raw))] - -# cifar100_test_features = [cifar100_test_raw[i][0] for i in range(len(cifar100_test_raw))] -# cifar100_test_labels = [cifar100_test_raw[i][1] for i in range(len(cifar100_test_raw))] - -# print("len(cifar100_train_features):", len(cifar100_train_features)) -# print("len(cifar100_train_labels): ", len(cifar100_train_labels)) -# print("len(cifar100_test_features): ", len(cifar100_test_features)) -# print("len(cifar100_test_labels): ", len(cifar100_test_labels)) - -# Convert the selected training and test data into numpy arrays. 
- -# cifar100_train_features_numpy = numpy.array([ -# numpy.array(image) for image in cifar100_train_features -# ]) -# cifar100_test_features_numpy = numpy.array([ -# numpy.array(image) for image in cifar100_test_features -# ]) - -# print("cifar100_train_features_numpy.shape:", cifar100_train_features_numpy.shape) -# print("cifar100_test_features_numpy.shape: ", cifar100_test_features_numpy.shape) - -# cifar100_train_labels_numpy = numpy.array(cifar100_train_labels) -# cifar100_test_labels_numpy = numpy.array(cifar100_test_labels) - -# print("cifar100_train_labels_numpy.shape: ", cifar100_train_labels_numpy.shape) -# print("cifar100_test_labels_numpy.shape: ", cifar100_test_labels_numpy.shape) - -# Define a method to obscure the labels of some of the test data -def obscure_labels(original_labels, percent_labeled): - percent_unlabeled = 100 - percent_labeled - obscure_threshold = percent_unlabeled // 10 - new_array = original_labels.copy() - for i in range(len(new_array)): - if (i % 10 < obscure_threshold): - new_array[i] = -1 - return new_array - -# Define a method to remove data from the dataset for baseline comparison -def remove_data(original_data, percent_labeled): - percent_unlabeled = 100 - percent_labeled - keep_threshold = percent_unlabeled // 10 - new_array = [] - for i in range(len(original_data)): - if (i % 10 >= keep_threshold): - new_array.append(original_data[i]) - return new_array - -# Define a method to return a Gaussian Native Bayes model -def get_gaussian_naive_bayes_model(): - return sklearn.naive_bayes.GaussianNB() - -# Define a method to return a Random Forest Classifier model -def get_random_forest_classifier_model(): - return sklearn.ensemble.RandomForestClassifier(n_estimators = 15, max_depth = 15, random_state = 42) - -# Define a method to return a Support Vector Classifier model -def get_support_vector_classifier_model(): - return sklearn.svm.SVC(probability = True, random_state = 42) - -# Define a method to return a Self Training Classifier model -def get_self_training_classifier_model(base_model, criterion, threshold, k_best): - return sklearn.semi_supervised.SelfTrainingClassifier( - base_model, - criterion = criterion, - threshold = threshold, - k_best = k_best - ) - -# Define a function to handle standardized training and accuracy metrics -def run( - dataset, - percent_labeled, - base_model_name, - semi_supervised_wrapper_criterion, - semi_supervised_wrapper_threshold, - semi_supervised_wrapper_k_best_fraction, - baseline = False -): - # Declare input translations - features_train_map = { - "Iris": iris_train_features, - "Wine": wine_train_features, - "MNIST Digits": digits_train_features, - # "Fashion MNIST": fashion_train_features_numpy, - # "CIFAR-10": cifar10_train_features_numpy, - # "CIFAR-100": cifar100_train_features_numpy - } - features_test_map = { - "Iris": iris_test_features, - "Wine": wine_test_features, - "MNIST Digits": digits_test_features, - # "Fashion MNIST": fashion_test_features_numpy, - # "CIFAR-10": cifar10_test_features_numpy, - # "CIFAR-100": cifar100_test_features_numpy - } - labels_train_map = { - "Iris": iris_train_labels, - "Wine": wine_train_labels, - "MNIST Digits": digits_train_labels, - # "Fashion MNIST": fashion_train_labels_numpy, - #"CIFAR-10": cifar10_train_labels_numpy, - # "CIFAR-100": cifar100_train_labels_numpy - } - labels_test_map = { - "Iris": iris_test_labels, - "Wine": wine_test_labels, - "MNIST Digits": digits_test_labels, - # "Fashion MNIST": fashion_test_labels_numpy, - # "CIFAR-10": cifar10_test_labels_numpy, - 
# "CIFAR-100": cifar100_test_labels_numpy - } - base_model_constructor_map = { - "Gaussian Native Bayes": get_gaussian_naive_bayes_model, - "Random Forest Classifier": get_random_forest_classifier_model, - "Support Vector Classifier": get_support_vector_classifier_model - } - - # Perform Input Translation - features_train_raw = features_train_map[dataset] - features_test_raw = features_test_map[dataset] - labels_train_raw = labels_train_map[dataset] - labels_test_raw = labels_test_map[dataset] - base_model_constructor = base_model_constructor_map[base_model_name] - - # Define the dataset - features_train = numpy.array([image.flatten() for image in remove_data(features_train_raw, percent_labeled)]) \ - if baseline else numpy.array([image.flatten() for image in features_train_raw]) - features_validate = numpy.array([image.flatten() for image in features_train_raw]) - features_test = numpy.array([image.flatten() for image in features_test_raw]) - - labels_train = numpy.array(remove_data(labels_train_raw, percent_labeled)) if baseline else \ - numpy.array(obscure_labels(labels_train_raw, percent_labeled)) - labels_validate = labels_train_raw - labels_test = labels_test_raw - - # Define the model - model = base_model_constructor() if baseline else get_self_training_classifier_model( - base_model_constructor(), - semi_supervised_wrapper_criterion.lower().replace(" ", "_"), - semi_supervised_wrapper_threshold, - int((features_validate.shape[0] - features_train.shape[0]) * \ - semi_supervised_wrapper_k_best_fraction) - ) - - # Train the models - train_start = datetime.datetime.now().replace(microsecond=0) - model.fit(features_train, labels_train) - train_end = datetime.datetime.now().replace(microsecond=0) - train_duration = train_end - train_start - - # Determine Model Accuracy - predictions_train = model.predict(features_validate) - train_accuracy = sklearn.metrics.accuracy_score(predictions_train, labels_validate) - predictions_test = model.predict(features_test) - test_accuracy = sklearn.metrics.accuracy_score(predictions_test, labels_test) - - return train_duration, train_accuracy, test_accuracy - -# Define a function to handle standardized training and accuracy metrics for the Gradio App -def run_gradio( - dataset, - percent_labeled, - base_model_name, - semi_supervised_wrapper_criterion, - semi_supervised_wrapper_threshold, - semi_supervised_wrapper_k_best_fraction -): - baseline = run( - dataset, - percent_labeled, - base_model_name, - semi_supervised_wrapper_criterion, - semi_supervised_wrapper_threshold, - semi_supervised_wrapper_k_best_fraction, - baseline = True - ) - semi_supervised = run( - dataset, - percent_labeled, - base_model_name, - semi_supervised_wrapper_criterion, - semi_supervised_wrapper_threshold, - semi_supervised_wrapper_k_best_fraction, - baseline = False - ) - - return [str(baseline[0]), str(semi_supervised[0])], \ - [str(round(baseline[1], 4)), str(round(semi_supervised[1], 4))], \ - [str(round(baseline[2], 4)), str(round(semi_supervised[2], 4))] - -# Declare and Launch the Gradio App -iface = gradio.Interface( - fn = run_gradio, - inputs = [ - gradio.inputs.Dropdown(["Iris", "Wine", "MNIST Digits"]), #, "Fashion MNIST", "CIFAR-10", "CIFAR-100"]), - gradio.inputs.Slider(minimum = 10, maximum = 100, step = 10, default = 100), - gradio.inputs.Dropdown([ - "Gaussian Native Bayes", - "Random Forest Classifier", - "Support Vector Classifier" - ]), - gradio.inputs.Dropdown(["Threshold", "K Best"]), - gradio.inputs.Slider(minimum = .5, maximum = .95, step = .05, default = 
.75), - gradio.inputs.Slider(minimum = .01, maximum = .25, step = .01, default = .01), - ], - outputs = [ - gradio.outputs.Dataframe(headers = ["Base Model", "Semi-Supervised Model"], label = "Training Duration"), - gradio.outputs.Dataframe(headers = ["Base Model", "Semi-Supervised Model"], label = "Training Accuracy"), - gradio.outputs.Dataframe(headers = ["Base Model", "Semi-Supervised Model"], label = "Test Accuracy") - ] -) -iface.launch() diff --git a/spaces/aiotedu/aiotchat/app.py b/spaces/aiotedu/aiotchat/app.py deleted file mode 100644 index b47b21ac48b173e8af669263f3f1de51bb896ecb..0000000000000000000000000000000000000000 --- a/spaces/aiotedu/aiotchat/app.py +++ /dev/null @@ -1,198 +0,0 @@ -import numpy as np -import gradio as gr -# print("loading asr and tts success!") -import soundfile -import azure.cognitiveservices.speech as speechsdk -import openai -import os -openai.api_key = os.environ.get("OPENAI_API_KEY") -speech_key = os.environ.get("SPEECH_KEY") - -def ms_tts(text, filename): - speech_config = speechsdk.SpeechConfig(subscription=speech_key, region='eastasia') - audio_config = speechsdk.audio.AudioOutputConfig(filename = filename) - - # The language of the voice that speaks. - # speech_config.speech_synthesis_voice_name='zh-CN-XiaochenNeural' - speech_config.speech_synthesis_voice_name='zh-CN-XiaomengNeural' - - speech_synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config, audio_config=audio_config) - - speech_synthesis_result = speech_synthesizer.speak_text_async(text).get() - -def ms_asr(filename): - # This example requires environment variables named "SPEECH_KEY" and "SPEECH_REGION" - speech_config = speechsdk.SpeechConfig(subscription=speech_key, region="eastus") - speech_config.speech_recognition_language="zh-CN" - - audio_config = speechsdk.audio.AudioConfig(filename=filename) - speech_recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config, audio_config=audio_config) - - # print("Speak into your microphone.") - speech_recognition_result = speech_recognizer.recognize_once_async().get() - - if speech_recognition_result.reason == speechsdk.ResultReason.RecognizedSpeech: - print("Recognized: {}".format(speech_recognition_result.text)) - elif speech_recognition_result.reason == speechsdk.ResultReason.NoMatch: - print("No speech could be recognized: {}".format(speech_recognition_result.no_match_details)) - elif speech_recognition_result.reason == speechsdk.ResultReason.Canceled: - cancellation_details = speech_recognition_result.cancellation_details - print("Speech Recognition canceled: {}".format(cancellation_details.reason)) - if cancellation_details.reason == speechsdk.CancellationReason.Error: - print("Error details: {}".format(cancellation_details.error_details)) - print("Did you set the speech resource key and region values?") - - return speech_recognition_result.text - -class Conversation: - def __init__(self, prompt, num_of_round): - self.prompt = prompt - self.num_of_round = num_of_round - self.messages = [] - self.messages.append({"role": "system", "content": self.prompt}) - - def ask(self, question): - try: - self.messages.append( {"role": "user", "content": question}) - response = openai.ChatCompletion.create( - model="gpt-3.5-turbo", - messages=self.messages, - # temperature=0.5, - temperature=0, - max_tokens=2048, - top_p=1, - ) - except Exception as e: - print(e) - return e - - message = response["choices"][0]["message"]["content"] - self.messages.append({"role": "assistant", "content": message}) - - if len(self.messages) > 
self.num_of_round*2 + 1: - del self.messages[1:3] - return message - - -# prompt = """Say English""" -# prompt = """You are an conversation bot. You can speak English and Chinese. You were developed by the AIOT CTO organization.""" -# prompt = """You are an conversation bot. You can speak English and Chinese. You were developed by the AIOT CTO organization. -# 旨在帮助用户回答与金融相关的问题。 -# 说明 -# - 主要回答与金融相关的问题。 -# - 可回答一般性问题。 -# - 如果你不确定答案,你可以说“我不知道”或“我不确定" -# 金融的定义如下: -# - 金融是以货币本身为经营目的,通过货币融通使货币增值的经济活动,包括以银行为中心的间接投融资和以投资银行为中心的直接投融资两种形式。 -# - 金融的内容概括为货币的发行与回笼,存款的吸收与付出,贷款的发放与回收,金银、外汇的买卖,有价证券的发行与转让,保险、信托、国内、国际的货币结算等。 -# - 从事金融活动的机构主要有银行、信托投资公司、保险公司、证券公司、投资基金,还有信用合作社、财务公司、金融资产管理公司、邮政储蓄机构、金融租赁公司以及证券、金银、外汇交易所等。 -# """ -# prompt = """You are an conversation bot. You can speak English and Chinese. You were developed by the AIOT CTO organization. -# 旨在帮助用户回答与知识产权相关的问题。 -# 说明 -# - 主要回答与知识产权相关的问题。 -# - 可回答一般性问题。 -# - 如果你不确定答案,你可以说“我不知道”或“我不确定" -# 知识产权的定义如下: -# - 知识产权是指人们就其智力劳动成果所依法享有的专有权利,通常是国家赋予创造者对其智力成果在一定时期内享有的专有权或独占权。其本质上是一种无形财产权,它的客体是智力成果或者知识产品。知识产权属于民事权利,受国家法律保护。 -# - 知识产权包括: -# 1. 专利 -# (1)发明专利:审查严格、含金量高,发明分为产品发明(如机器、仪器、设备和用具等)和方法发明(制造方法)两大类 -# (2)实用新型专利:实用新型是指对产品的形状、构造或者其结合所提出的适于实用的新的技术方案。低成本、研制周期短。 -# (3)外观专利:即视觉的新事物,是企业的无形资产 -# 2. 软件著作权:计算机软件著作权是指软件的开发者或者其他权利人依据有关著作权法律的规定,对于软件作品所享有的各项专有权利。分为软件著作权个人登记和企业登记,如果企业要申报双软认证或高新技术企业认定,需保证著作权是企业登记的状态。 -# 3. 作品著作权(版权):指文学、艺术、科学作品的作者对其作品享有的权利(包括财产权、人身权)。版权是知识产权的一种类型,它是由自然科学、社会科学以及文学、音乐、戏剧、绘画、雕塑、摄影、图片和电影摄影等方面的作品组成。 -# 4. 集成电路布图设计:指集成电路中至少有一个是有源元件的两个以上元件和部分或者全部互连线路的三维配置,或者为制造集成电路而准备的上述三维配置。集成电路布图设计实质上是一种图形设计,但它并非是工业品外观设计,不能适用专利法保护。从专利的的取得程序,专利申请审批的时间过长,成本较高,不利于技术的推广和应用。 -# 5. 商标:商标是用来区别一个经营者的品牌或服务和其他经营者的商品或服务的标记。我国商标法规定,经商标局核准注册的商标,包括商品商标、服务商标和集体商标、证明商标,商标注册人享有商标专用权,受法律保护,如果是驰名商标,将会获得跨类别的商标专用权法律保护。 -# """ -prompt = """You are an educational conversation bot. Your name is Xiaomeng. You can speak English and Chinese. You were developed by the AIOT CTO organization. - 你是一个教育对话机器人。你叫小萌。你可以说英文和中文。你是由智能物联首席技术官组织开发的。 - 你的任务包括: - - 英语口语对话 - - 知识问答 - - 讲故事 - - 知识问答任务要求: - - 回答必须保证是正确的 - - 回答需要采用逐步推理的方式 - - 如果你不确定答案,你可以说“我不知道”或“我不确定" - - 合规要求: - - 用户为年龄大于3岁且小于15岁的儿童 - - 知识问答任务中回答的内容和形式必须适合年龄大于3岁且小于15岁的儿童 - - 英语口语对话任务中回答的内容和形式必须适合大于3岁且小于15岁的儿童 - - 讲故事任务中回答的内容和形式必须适合年龄大于3岁且小于15岁的儿童 - - 全部对话的内容和形式必须适合年龄大于3岁且小于15岁的儿童 - - 需要对全部对话进行分析,当检测到用户不是年龄大于3岁且小于15岁的儿童时,回答”我猜和我对话的一定是一位机灵可爱的小朋友^_^ I guess I'm talking to a clever and cute kid ^_^” - - 语言要求: - - 只支持用中文或英文回答 - - 当用户用中文提问时,用中文回答 - - 当用户用英文提问时,用英文回答 - - 当用户用中文和英文混合语言提问时,可以用中文回答,也可以用英文回答,还可以用中文和英文混合语言回答 - - 当用户用其它语言提问时,回答“抱歉,我只说中文或英文!Sorry, I only speak Chinese or English!” - - 当用户要求用中文回答时,用中文回答 - - 当用户要求用英文回答时,用英文回答 - - 当用户要求回答的语言既不是中文也不是英文时,回答“抱歉,我只说中文或英文!Sorry, I only speak Chinese or English!” - - 回答长度要求: - - 知识问答任务中回答长度小于50字 - - 英语口语对话任务中回答长度小于20字 - - 讲故事任务中回答长度小于100字 - """ - - -conv = Conversation(prompt, 20) - -def predict(input, history=[]): - history.append(input) - response = conv.ask(input) - history.append(response) - responses = [(u,b) for u,b in zip(history[::2], history[1::2])] - return response, responses, history - -def main(audio, history=[]): - - s,y = audio - - print(s) - assert s in [48000, 16000] - if s == 48000: # Optional resample to 16000 - y = (y / max(np.max(y), 1) * 32767)[::3].astype("int16") - soundfile.write("./input.wav",y,16000) - - # wav_res = ms_asr("./input.wav") - wav_res = "hello!" 
- print("You said : ", wav_res) - # res = requests.post(url='http://10.10.239.164:8088', - # headers={"Content-Type": "application/json"}, - # json={"prompt":wav_res,"history": historyList.hlist}) - - answer, his_list, history = predict(wav_res, history) - - print("answer: ", answer) - # print("history: ", history) - # if len(history)>=10: - # historyList.hlist = history[1:] - # else: - # historyList.hlist = history - - - # ms_tts(text=answer, filename="./output.wav") - path1="./output.wav" - print("historyList.hlist: ", historyList.hlist) - return his_list, history, path1 - - -with gr.Blocks() as demo: - state = gr.State([]) - with gr.Row(): - with gr.Column(scale=4): - txt = gr.Chatbot(label="ChatBox") - out_voice = gr.Audio(label="audio") - with gr.Column(scale=4): - mic = gr.Mic(label="input") - button = gr.Button("Generate") - button.click(main, [mic, state], [txt, state, out_voice]) -# demo.queue().launch(server_name = "0.0.0.0", server_port=8080) -demo.queue().launch() \ No newline at end of file diff --git a/spaces/akhaliq/Mask2Former/mask2former/__init__.py b/spaces/akhaliq/Mask2Former/mask2former/__init__.py deleted file mode 100644 index 9b405c83bd2e8fa186a556a7db450af86c28c79b..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/Mask2Former/mask2former/__init__.py +++ /dev/null @@ -1,26 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -from . import data # register all new datasets -from . import modeling - -# config -from .config import add_maskformer2_config - -# dataset loading -from .data.dataset_mappers.coco_instance_new_baseline_dataset_mapper import COCOInstanceNewBaselineDatasetMapper -from .data.dataset_mappers.coco_panoptic_new_baseline_dataset_mapper import COCOPanopticNewBaselineDatasetMapper -from .data.dataset_mappers.mask_former_instance_dataset_mapper import ( - MaskFormerInstanceDatasetMapper, -) -from .data.dataset_mappers.mask_former_panoptic_dataset_mapper import ( - MaskFormerPanopticDatasetMapper, -) -from .data.dataset_mappers.mask_former_semantic_dataset_mapper import ( - MaskFormerSemanticDatasetMapper, -) - -# models -from .maskformer_model import MaskFormer -from .test_time_augmentation import SemanticSegmentorWithTTA - -# evaluation -from .evaluation.instance_evaluation import InstanceSegEvaluator diff --git a/spaces/akhaliq/Real-Time-Voice-Cloning/encoder/data_objects/speaker.py b/spaces/akhaliq/Real-Time-Voice-Cloning/encoder/data_objects/speaker.py deleted file mode 100644 index 494e882fe34fc38dcc793ab8c74a6cc2376bb7b5..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/Real-Time-Voice-Cloning/encoder/data_objects/speaker.py +++ /dev/null @@ -1,40 +0,0 @@ -from encoder.data_objects.random_cycler import RandomCycler -from encoder.data_objects.utterance import Utterance -from pathlib import Path - -# Contains the set of utterances of a single speaker -class Speaker: - def __init__(self, root: Path): - self.root = root - self.name = root.name - self.utterances = None - self.utterance_cycler = None - - def _load_utterances(self): - with self.root.joinpath("_sources.txt").open("r") as sources_file: - sources = [l.split(",") for l in sources_file] - sources = {frames_fname: wave_fpath for frames_fname, wave_fpath in sources} - self.utterances = [Utterance(self.root.joinpath(f), w) for f, w in sources.items()] - self.utterance_cycler = RandomCycler(self.utterances) - - def random_partial(self, count, n_frames): - """ - Samples a batch of unique partial utterances from the disk in a way that all - utterances come up at 
least once every two cycles and in a random order every time. - - :param count: The number of partial utterances to sample from the set of utterances from - that speaker. Utterances are guaranteed not to be repeated if is not larger than - the number of utterances available. - :param n_frames: The number of frames in the partial utterance. - :return: A list of tuples (utterance, frames, range) where utterance is an Utterance, - frames are the frames of the partial utterances and range is the range of the partial - utterance with regard to the complete utterance. - """ - if self.utterances is None: - self._load_utterances() - - utterances = self.utterance_cycler.sample(count) - - a = [(u,) + u.random_partial(n_frames) for u in utterances] - - return a diff --git a/spaces/akhaliq/Real-Time-Voice-Cloning/synthesizer/train.py b/spaces/akhaliq/Real-Time-Voice-Cloning/synthesizer/train.py deleted file mode 100644 index a136cf9b38538ca7dc428adf209c0cbb40e890d7..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/Real-Time-Voice-Cloning/synthesizer/train.py +++ /dev/null @@ -1,269 +0,0 @@ -import torch -import torch.nn.functional as F -from torch import optim -from torch.utils.data import DataLoader -from synthesizer import audio -from synthesizer.models.tacotron import Tacotron -from synthesizer.synthesizer_dataset import SynthesizerDataset, collate_synthesizer -from synthesizer.utils import ValueWindow, data_parallel_workaround -from synthesizer.utils.plot import plot_spectrogram -from synthesizer.utils.symbols import symbols -from synthesizer.utils.text import sequence_to_text -from vocoder.display import * -from datetime import datetime -import numpy as np -from pathlib import Path -import sys -import time -import platform - - -def np_now(x: torch.Tensor): return x.detach().cpu().numpy() - -def time_string(): - return datetime.now().strftime("%Y-%m-%d %H:%M") - -def train(run_id: str, syn_dir: str, models_dir: str, save_every: int, - backup_every: int, force_restart:bool, hparams): - - syn_dir = Path(syn_dir) - models_dir = Path(models_dir) - models_dir.mkdir(exist_ok=True) - - model_dir = models_dir.joinpath(run_id) - plot_dir = model_dir.joinpath("plots") - wav_dir = model_dir.joinpath("wavs") - mel_output_dir = model_dir.joinpath("mel-spectrograms") - meta_folder = model_dir.joinpath("metas") - model_dir.mkdir(exist_ok=True) - plot_dir.mkdir(exist_ok=True) - wav_dir.mkdir(exist_ok=True) - mel_output_dir.mkdir(exist_ok=True) - meta_folder.mkdir(exist_ok=True) - - weights_fpath = model_dir.joinpath(run_id).with_suffix(".pt") - metadata_fpath = syn_dir.joinpath("train.txt") - - print("Checkpoint path: {}".format(weights_fpath)) - print("Loading training data from: {}".format(metadata_fpath)) - print("Using model: Tacotron") - - # Book keeping - step = 0 - time_window = ValueWindow(100) - loss_window = ValueWindow(100) - - - # From WaveRNN/train_tacotron.py - if torch.cuda.is_available(): - device = torch.device("cuda") - - for session in hparams.tts_schedule: - _, _, _, batch_size = session - if batch_size % torch.cuda.device_count() != 0: - raise ValueError("`batch_size` must be evenly divisible by n_gpus!") - else: - device = torch.device("cpu") - print("Using device:", device) - - # Instantiate Tacotron Model - print("\nInitialising Tacotron Model...\n") - model = Tacotron(embed_dims=hparams.tts_embed_dims, - num_chars=len(symbols), - encoder_dims=hparams.tts_encoder_dims, - decoder_dims=hparams.tts_decoder_dims, - n_mels=hparams.num_mels, - fft_bins=hparams.num_mels, - 
postnet_dims=hparams.tts_postnet_dims, - encoder_K=hparams.tts_encoder_K, - lstm_dims=hparams.tts_lstm_dims, - postnet_K=hparams.tts_postnet_K, - num_highways=hparams.tts_num_highways, - dropout=hparams.tts_dropout, - stop_threshold=hparams.tts_stop_threshold, - speaker_embedding_size=hparams.speaker_embedding_size).to(device) - - # Initialize the optimizer - optimizer = optim.Adam(model.parameters()) - - # Load the weights - if force_restart or not weights_fpath.exists(): - print("\nStarting the training of Tacotron from scratch\n") - model.save(weights_fpath) - - # Embeddings metadata - char_embedding_fpath = meta_folder.joinpath("CharacterEmbeddings.tsv") - with open(char_embedding_fpath, "w", encoding="utf-8") as f: - for symbol in symbols: - if symbol == " ": - symbol = "\\s" # For visual purposes, swap space with \s - - f.write("{}\n".format(symbol)) - - else: - print("\nLoading weights at %s" % weights_fpath) - model.load(weights_fpath, optimizer) - print("Tacotron weights loaded from step %d" % model.step) - - # Initialize the dataset - metadata_fpath = syn_dir.joinpath("train.txt") - mel_dir = syn_dir.joinpath("mels") - embed_dir = syn_dir.joinpath("embeds") - dataset = SynthesizerDataset(metadata_fpath, mel_dir, embed_dir, hparams) - test_loader = DataLoader(dataset, - batch_size=1, - shuffle=True, - pin_memory=True) - - for i, session in enumerate(hparams.tts_schedule): - current_step = model.get_step() - - r, lr, max_step, batch_size = session - - training_steps = max_step - current_step - - # Do we need to change to the next session? - if current_step >= max_step: - # Are there no further sessions than the current one? - if i == len(hparams.tts_schedule) - 1: - # We have completed training. Save the model and exit - model.save(weights_fpath, optimizer) - break - else: - # There is a following session, go to it - continue - - model.r = r - - # Begin the training - simple_table([(f"Steps with r={r}", str(training_steps // 1000) + "k Steps"), - ("Batch Size", batch_size), - ("Learning Rate", lr), - ("Outputs/Step (r)", model.r)]) - - for p in optimizer.param_groups: - p["lr"] = lr - - data_loader = DataLoader(dataset, - collate_fn=lambda batch: collate_synthesizer(batch, r, hparams), - batch_size=batch_size, - num_workers=2 if platform.system() != "Windows" else 0, - shuffle=True, - pin_memory=True) - - total_iters = len(dataset) - steps_per_epoch = np.ceil(total_iters / batch_size).astype(np.int32) - epochs = np.ceil(training_steps / steps_per_epoch).astype(np.int32) - - for epoch in range(1, epochs+1): - for i, (texts, mels, embeds, idx) in enumerate(data_loader, 1): - start_time = time.time() - - # Generate stop tokens for training - stop = torch.ones(mels.shape[0], mels.shape[2]) - for j, k in enumerate(idx): - stop[j, :int(dataset.metadata[k][4])-1] = 0 - - texts = texts.to(device) - mels = mels.to(device) - embeds = embeds.to(device) - stop = stop.to(device) - - # Forward pass - # Parallelize model onto GPUS using workaround due to python bug - if device.type == "cuda" and torch.cuda.device_count() > 1: - m1_hat, m2_hat, attention, stop_pred = data_parallel_workaround(model, texts, - mels, embeds) - else: - m1_hat, m2_hat, attention, stop_pred = model(texts, mels, embeds) - - # Backward pass - m1_loss = F.mse_loss(m1_hat, mels) + F.l1_loss(m1_hat, mels) - m2_loss = F.mse_loss(m2_hat, mels) - stop_loss = F.binary_cross_entropy(stop_pred, stop) - - loss = m1_loss + m2_loss + stop_loss - - optimizer.zero_grad() - loss.backward() - - if hparams.tts_clip_grad_norm is not None: - 
grad_norm = torch.nn.utils.clip_grad_norm_(model.parameters(), hparams.tts_clip_grad_norm) - if np.isnan(grad_norm.cpu()): - print("grad_norm was NaN!") - - optimizer.step() - - time_window.append(time.time() - start_time) - loss_window.append(loss.item()) - - step = model.get_step() - k = step // 1000 - - msg = f"| Epoch: {epoch}/{epochs} ({i}/{steps_per_epoch}) | Loss: {loss_window.average:#.4} | {1./time_window.average:#.2} steps/s | Step: {k}k | " - stream(msg) - - # Backup or save model as appropriate - if backup_every != 0 and step % backup_every == 0 : - backup_fpath = Path("{}/{}_{}k.pt".format(str(weights_fpath.parent), run_id, k)) - model.save(backup_fpath, optimizer) - - if save_every != 0 and step % save_every == 0 : - # Must save latest optimizer state to ensure that resuming training - # doesn't produce artifacts - model.save(weights_fpath, optimizer) - - # Evaluate model to generate samples - epoch_eval = hparams.tts_eval_interval == -1 and i == steps_per_epoch # If epoch is done - step_eval = hparams.tts_eval_interval > 0 and step % hparams.tts_eval_interval == 0 # Every N steps - if epoch_eval or step_eval: - for sample_idx in range(hparams.tts_eval_num_samples): - # At most, generate samples equal to number in the batch - if sample_idx + 1 <= len(texts): - # Remove padding from mels using frame length in metadata - mel_length = int(dataset.metadata[idx[sample_idx]][4]) - mel_prediction = np_now(m2_hat[sample_idx]).T[:mel_length] - target_spectrogram = np_now(mels[sample_idx]).T[:mel_length] - attention_len = mel_length // model.r - - eval_model(attention=np_now(attention[sample_idx][:, :attention_len]), - mel_prediction=mel_prediction, - target_spectrogram=target_spectrogram, - input_seq=np_now(texts[sample_idx]), - step=step, - plot_dir=plot_dir, - mel_output_dir=mel_output_dir, - wav_dir=wav_dir, - sample_num=sample_idx + 1, - loss=loss, - hparams=hparams) - - # Break out of loop to update training schedule - if step >= max_step: - break - - # Add line break after every epoch - print("") - -def eval_model(attention, mel_prediction, target_spectrogram, input_seq, step, - plot_dir, mel_output_dir, wav_dir, sample_num, loss, hparams): - # Save some results for evaluation - attention_path = str(plot_dir.joinpath("attention_step_{}_sample_{}".format(step, sample_num))) - save_attention(attention, attention_path) - - # save predicted mel spectrogram to disk (debug) - mel_output_fpath = mel_output_dir.joinpath("mel-prediction-step-{}_sample_{}.npy".format(step, sample_num)) - np.save(str(mel_output_fpath), mel_prediction, allow_pickle=False) - - # save griffin lim inverted wav for debug (mel -> wav) - wav = audio.inv_mel_spectrogram(mel_prediction.T, hparams) - wav_fpath = wav_dir.joinpath("step-{}-wave-from-mel_sample_{}.wav".format(step, sample_num)) - audio.save_wav(wav, str(wav_fpath), sr=hparams.sample_rate) - - # save real and predicted mel-spectrogram plot to disk (control purposes) - spec_fpath = plot_dir.joinpath("step-{}-mel-spectrogram_sample_{}.png".format(step, sample_num)) - title_str = "{}, {}, step={}, loss={:.5f}".format("Tacotron", time_string(), step, loss) - plot_spectrogram(mel_prediction, str(spec_fpath), title=title_str, - target_spectrogram=target_spectrogram, - max_len=target_spectrogram.size // hparams.num_mels) - print("Input at step {}: {}".format(step, sequence_to_text(input_seq))) diff --git a/spaces/akhaliq/deeplab2/data/utils/__init__.py b/spaces/akhaliq/deeplab2/data/utils/__init__.py deleted file mode 100644 index 
35e4ce02ff422f3aa84ab644b88d65b13e0cbc03..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/deeplab2/data/utils/__init__.py +++ /dev/null @@ -1,15 +0,0 @@ -# coding=utf-8 -# Copyright 2021 The Deeplab2 Authors. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - diff --git a/spaces/alc15492/MSemoji850NEW/app.py b/spaces/alc15492/MSemoji850NEW/app.py deleted file mode 100644 index e04d7dfcda8aca9c317688576fe5a741b3339182..0000000000000000000000000000000000000000 --- a/spaces/alc15492/MSemoji850NEW/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/alc15492/MSemoji850").launch() \ No newline at end of file diff --git a/spaces/allknowingroger/Image-Models-Test143/README.md b/spaces/allknowingroger/Image-Models-Test143/README.md deleted file mode 100644 index a3a43bf672ca727d8113068aed4ea790c9de9309..0000000000000000000000000000000000000000 --- a/spaces/allknowingroger/Image-Models-Test143/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: More Image Models -emoji: 😻 -colorFrom: red -colorTo: gray -sdk: gradio -sdk_version: 3.23.0 -app_file: app.py -duplicated_from: allknowingroger/Image-Models-Test142 ---- - - \ No newline at end of file diff --git a/spaces/allknowingroger/Llama_v2/index.html b/spaces/allknowingroger/Llama_v2/index.html deleted file mode 100644 index ff2f5b868a523124b8262f11a1b6f0ac3209d79b..0000000000000000000000000000000000000000 --- a/spaces/allknowingroger/Llama_v2/index.html +++ /dev/null @@ -1,19 +0,0 @@ - - - - - - Llama v2 - - - - - - - - diff --git a/spaces/alphunt/diffdock-alphunt-demo/esm/esm/__init__.py b/spaces/alphunt/diffdock-alphunt-demo/esm/esm/__init__.py deleted file mode 100644 index 907081d4111c66358b51322add5b261bffcdf5b8..0000000000000000000000000000000000000000 --- a/spaces/alphunt/diffdock-alphunt-demo/esm/esm/__init__.py +++ /dev/null @@ -1,12 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from .version import version as __version__ # noqa - -from .data import Alphabet, BatchConverter, FastaBatchedDataset # noqa -from .model.esm1 import ProteinBertModel # noqa -from .model.esm2 import ESM2 # noqa -from .model.msa_transformer import MSATransformer #noqa -from . 
import pretrained # noqa diff --git a/spaces/amankishore/sjc/sd1/ldm/modules/losses/__init__.py b/spaces/amankishore/sjc/sd1/ldm/modules/losses/__init__.py deleted file mode 100644 index 876d7c5bd6e3245ee77feb4c482b7a8143604ad5..0000000000000000000000000000000000000000 --- a/spaces/amankishore/sjc/sd1/ldm/modules/losses/__init__.py +++ /dev/null @@ -1 +0,0 @@ -from ldm.modules.losses.contperceptual import LPIPSWithDiscriminator \ No newline at end of file diff --git a/spaces/aodianyun/ChatGLM-6B/app.py b/spaces/aodianyun/ChatGLM-6B/app.py deleted file mode 100644 index 765d91fdb06a8e5ae1a8490a1ae7f27db7a4309b..0000000000000000000000000000000000000000 --- a/spaces/aodianyun/ChatGLM-6B/app.py +++ /dev/null @@ -1,33 +0,0 @@ -import os -from transformers import AutoModel, AutoTokenizer -import gradio as gr - -use_cpu = os.environ.get("USE_CPU", "all") -tokenizer = AutoTokenizer.from_pretrained("./THUDM/chatglm-6b", trust_remote_code=True) -if not use_cpu: - model = AutoModel.from_pretrained("./THUDM/chatglm-6b", trust_remote_code=True).half().cuda() -else: - model = AutoModel.from_pretrained("./THUDM/chatglm-6b", trust_remote_code=True).bfloat16() -model = model.eval() - -def predict(input, history=None): - if history is None: - history = [] - response, history = model.chat(tokenizer, input, history) - return history, history - - -with gr.Blocks() as demo: - gr.Markdown('''## ChatGLM-6B - unofficial demo - Unnoficial demo of the [ChatGLM-6B](https://github.com/THUDM/ChatGLM-6B/blob/main/README_en.md) model, trained on 1T tokens of English and Chinese - ''') - state = gr.State([]) - chatbot = gr.Chatbot([], elem_id="chatbot").style(height=400) - with gr.Row(): - with gr.Column(scale=4): - txt = gr.Textbox(show_label=False, placeholder="Enter text and press enter").style(container=False) - with gr.Column(scale=1): - button = gr.Button("Generate") - txt.submit(predict, [txt, state], [chatbot, state]) - button.click(predict, [txt, state], [chatbot, state]) -demo.queue().launch() diff --git a/spaces/aodianyun/stable-diffusion-webui/modules/sd_hijack_clip.py b/spaces/aodianyun/stable-diffusion-webui/modules/sd_hijack_clip.py deleted file mode 100644 index ba55fb98e54c3cf82a135f588efa9f7210c6fee3..0000000000000000000000000000000000000000 --- a/spaces/aodianyun/stable-diffusion-webui/modules/sd_hijack_clip.py +++ /dev/null @@ -1,317 +0,0 @@ -import math -from collections import namedtuple - -import torch - -from modules import prompt_parser, devices, sd_hijack -from modules.shared import opts - - -class PromptChunk: - """ - This object contains token ids, weight (multipliers:1.4) and textual inversion embedding info for a chunk of prompt. - If a prompt is short, it is represented by one PromptChunk, otherwise, multiple are necessary. - Each PromptChunk contains an exact amount of tokens - 77, which includes one for start and end token, - so just 75 tokens from prompt. - """ - - def __init__(self): - self.tokens = [] - self.multipliers = [] - self.fixes = [] - - -PromptChunkFix = namedtuple('PromptChunkFix', ['offset', 'embedding']) -"""An object of this type is a marker showing that textual inversion embedding's vectors have to placed at offset in the prompt -chunk. Thos objects are found in PromptChunk.fixes and, are placed into FrozenCLIPEmbedderWithCustomWordsBase.hijack.fixes, and finally -are applied by sd_hijack.EmbeddingsWithFixes's forward function.""" - - -class FrozenCLIPEmbedderWithCustomWordsBase(torch.nn.Module): - """A pytorch module that is a wrapper for FrozenCLIPEmbedder module. 
it enhances FrozenCLIPEmbedder, making it possible to - have unlimited prompt length and assign weights to tokens in prompt. - """ - - def __init__(self, wrapped, hijack): - super().__init__() - - self.wrapped = wrapped - """Original FrozenCLIPEmbedder module; can also be FrozenOpenCLIPEmbedder or xlmr.BertSeriesModelWithTransformation, - depending on model.""" - - self.hijack: sd_hijack.StableDiffusionModelHijack = hijack - self.chunk_length = 75 - - def empty_chunk(self): - """creates an empty PromptChunk and returns it""" - - chunk = PromptChunk() - chunk.tokens = [self.id_start] + [self.id_end] * (self.chunk_length + 1) - chunk.multipliers = [1.0] * (self.chunk_length + 2) - return chunk - - def get_target_prompt_token_count(self, token_count): - """returns the maximum number of tokens a prompt of a known length can have before it requires one more PromptChunk to be represented""" - - return math.ceil(max(token_count, 1) / self.chunk_length) * self.chunk_length - - def tokenize(self, texts): - """Converts a batch of texts into a batch of token ids""" - - raise NotImplementedError - - def encode_with_transformers(self, tokens): - """ - converts a batch of token ids (in python lists) into a single tensor with numeric respresentation of those tokens; - All python lists with tokens are assumed to have same length, usually 77. - if input is a list with B elements and each element has T tokens, expected output shape is (B, T, C), where C depends on - model - can be 768 and 1024. - Among other things, this call will read self.hijack.fixes, apply it to its inputs, and clear it (setting it to None). - """ - - raise NotImplementedError - - def encode_embedding_init_text(self, init_text, nvpt): - """Converts text into a tensor with this text's tokens' embeddings. Note that those are embeddings before they are passed through - transformers. nvpt is used as a maximum length in tokens. If text produces less teokens than nvpt, only this many is returned.""" - - raise NotImplementedError - - def tokenize_line(self, line): - """ - this transforms a single prompt into a list of PromptChunk objects - as many as needed to - represent the prompt. - Returns the list and the total number of tokens in the prompt. 
- """ - - if opts.enable_emphasis: - parsed = prompt_parser.parse_prompt_attention(line) - else: - parsed = [[line, 1.0]] - - tokenized = self.tokenize([text for text, _ in parsed]) - - chunks = [] - chunk = PromptChunk() - token_count = 0 - last_comma = -1 - - def next_chunk(is_last=False): - """puts current chunk into the list of results and produces the next one - empty; - if is_last is true, tokens tokens at the end won't add to token_count""" - nonlocal token_count - nonlocal last_comma - nonlocal chunk - - if is_last: - token_count += len(chunk.tokens) - else: - token_count += self.chunk_length - - to_add = self.chunk_length - len(chunk.tokens) - if to_add > 0: - chunk.tokens += [self.id_end] * to_add - chunk.multipliers += [1.0] * to_add - - chunk.tokens = [self.id_start] + chunk.tokens + [self.id_end] - chunk.multipliers = [1.0] + chunk.multipliers + [1.0] - - last_comma = -1 - chunks.append(chunk) - chunk = PromptChunk() - - for tokens, (text, weight) in zip(tokenized, parsed): - if text == 'BREAK' and weight == -1: - next_chunk() - continue - - position = 0 - while position < len(tokens): - token = tokens[position] - - if token == self.comma_token: - last_comma = len(chunk.tokens) - - # this is when we are at the end of alloted 75 tokens for the current chunk, and the current token is not a comma. opts.comma_padding_backtrack - # is a setting that specifies that if there is a comma nearby, the text after the comma should be moved out of this chunk and into the next. - elif opts.comma_padding_backtrack != 0 and len(chunk.tokens) == self.chunk_length and last_comma != -1 and len(chunk.tokens) - last_comma <= opts.comma_padding_backtrack: - break_location = last_comma + 1 - - reloc_tokens = chunk.tokens[break_location:] - reloc_mults = chunk.multipliers[break_location:] - - chunk.tokens = chunk.tokens[:break_location] - chunk.multipliers = chunk.multipliers[:break_location] - - next_chunk() - chunk.tokens = reloc_tokens - chunk.multipliers = reloc_mults - - if len(chunk.tokens) == self.chunk_length: - next_chunk() - - embedding, embedding_length_in_tokens = self.hijack.embedding_db.find_embedding_at_position(tokens, position) - if embedding is None: - chunk.tokens.append(token) - chunk.multipliers.append(weight) - position += 1 - continue - - emb_len = int(embedding.vec.shape[0]) - if len(chunk.tokens) + emb_len > self.chunk_length: - next_chunk() - - chunk.fixes.append(PromptChunkFix(len(chunk.tokens), embedding)) - - chunk.tokens += [0] * emb_len - chunk.multipliers += [weight] * emb_len - position += embedding_length_in_tokens - - if len(chunk.tokens) > 0 or len(chunks) == 0: - next_chunk(is_last=True) - - return chunks, token_count - - def process_texts(self, texts): - """ - Accepts a list of texts and calls tokenize_line() on each, with cache. Returns the list of results and maximum - length, in tokens, of all texts. - """ - - token_count = 0 - - cache = {} - batch_chunks = [] - for line in texts: - if line in cache: - chunks = cache[line] - else: - chunks, current_token_count = self.tokenize_line(line) - token_count = max(current_token_count, token_count) - - cache[line] = chunks - - batch_chunks.append(chunks) - - return batch_chunks, token_count - - def forward(self, texts): - """ - Accepts an array of texts; Passes texts through transformers network to create a tensor with numerical representation of those texts. 
- Returns a tensor with shape of (B, T, C), where B is length of the array; T is length, in tokens, of texts (including padding) - T will - be a multiple of 77; and C is dimensionality of each token - for SD1 it's 768, and for SD2 it's 1024. - An example shape returned by this function can be: (2, 77, 768). - Webui usually sends just one text at a time through this function - the only time when texts is an array with more than one elemenet - is when you do prompt editing: "a picture of a [cat:dog:0.4] eating ice cream" - """ - - if opts.use_old_emphasis_implementation: - import modules.sd_hijack_clip_old - return modules.sd_hijack_clip_old.forward_old(self, texts) - - batch_chunks, token_count = self.process_texts(texts) - - used_embeddings = {} - chunk_count = max([len(x) for x in batch_chunks]) - - zs = [] - for i in range(chunk_count): - batch_chunk = [chunks[i] if i < len(chunks) else self.empty_chunk() for chunks in batch_chunks] - - tokens = [x.tokens for x in batch_chunk] - multipliers = [x.multipliers for x in batch_chunk] - self.hijack.fixes = [x.fixes for x in batch_chunk] - - for fixes in self.hijack.fixes: - for position, embedding in fixes: - used_embeddings[embedding.name] = embedding - - z = self.process_tokens(tokens, multipliers) - zs.append(z) - - if len(used_embeddings) > 0: - embeddings_list = ", ".join([f'{name} [{embedding.checksum()}]' for name, embedding in used_embeddings.items()]) - self.hijack.comments.append(f"Used embeddings: {embeddings_list}") - - return torch.hstack(zs) - - def process_tokens(self, remade_batch_tokens, batch_multipliers): - """ - sends one single prompt chunk to be encoded by transformers neural network. - remade_batch_tokens is a batch of tokens - a list, where every element is a list of tokens; usually - there are exactly 77 tokens in the list. batch_multipliers is the same but for multipliers instead of tokens. - Multipliers are used to give more or less weight to the outputs of transformers network. Each multiplier - corresponds to one token. - """ - tokens = torch.asarray(remade_batch_tokens).to(devices.device) - - # this is for SD2: SD1 uses the same token for padding and end of text, while SD2 uses different ones. 
- if self.id_end != self.id_pad: - for batch_pos in range(len(remade_batch_tokens)): - index = remade_batch_tokens[batch_pos].index(self.id_end) - tokens[batch_pos, index+1:tokens.shape[1]] = self.id_pad - - z = self.encode_with_transformers(tokens) - - # restoring original mean is likely not correct, but it seems to work well to prevent artifacts that happen otherwise - batch_multipliers = torch.asarray(batch_multipliers).to(devices.device) - original_mean = z.mean() - z = z * batch_multipliers.reshape(batch_multipliers.shape + (1,)).expand(z.shape) - new_mean = z.mean() - z = z * (original_mean / new_mean) - - return z - - -class FrozenCLIPEmbedderWithCustomWords(FrozenCLIPEmbedderWithCustomWordsBase): - def __init__(self, wrapped, hijack): - super().__init__(wrapped, hijack) - self.tokenizer = wrapped.tokenizer - - vocab = self.tokenizer.get_vocab() - - self.comma_token = vocab.get(',', None) - - self.token_mults = {} - tokens_with_parens = [(k, v) for k, v in vocab.items() if '(' in k or ')' in k or '[' in k or ']' in k] - for text, ident in tokens_with_parens: - mult = 1.0 - for c in text: - if c == '[': - mult /= 1.1 - if c == ']': - mult *= 1.1 - if c == '(': - mult *= 1.1 - if c == ')': - mult /= 1.1 - - if mult != 1.0: - self.token_mults[ident] = mult - - self.id_start = self.wrapped.tokenizer.bos_token_id - self.id_end = self.wrapped.tokenizer.eos_token_id - self.id_pad = self.id_end - - def tokenize(self, texts): - tokenized = self.wrapped.tokenizer(texts, truncation=False, add_special_tokens=False)["input_ids"] - - return tokenized - - def encode_with_transformers(self, tokens): - outputs = self.wrapped.transformer(input_ids=tokens, output_hidden_states=-opts.CLIP_stop_at_last_layers) - - if opts.CLIP_stop_at_last_layers > 1: - z = outputs.hidden_states[-opts.CLIP_stop_at_last_layers] - z = self.wrapped.transformer.text_model.final_layer_norm(z) - else: - z = outputs.last_hidden_state - - return z - - def encode_embedding_init_text(self, init_text, nvpt): - embedding_layer = self.wrapped.transformer.text_model.embeddings - ids = self.wrapped.tokenizer(init_text, max_length=nvpt, return_tensors="pt", add_special_tokens=False)["input_ids"] - embedded = embedding_layer.token_embedding.wrapped(ids.to(embedding_layer.token_embedding.wrapped.weight.device)).squeeze(0) - - return embedded diff --git a/spaces/artificialguybr/OPENHERMES-V2.5-DEMO/README.md b/spaces/artificialguybr/OPENHERMES-V2.5-DEMO/README.md deleted file mode 100644 index 93578b9cb72391b073cbda617ff0fa10c0fb3dfb..0000000000000000000000000000000000000000 --- a/spaces/artificialguybr/OPENHERMES-V2.5-DEMO/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: OPENHERMES 2 -emoji: 📚 -colorFrom: gray -colorTo: yellow -sdk: gradio -sdk_version: 3.48.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/artificialguybr/video-dubbing/TTS/tests/zoo_tests/__init__.py b/spaces/artificialguybr/video-dubbing/TTS/tests/zoo_tests/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/PIL/WmfImagePlugin.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/PIL/WmfImagePlugin.py deleted file mode 100644 index 2f54cdebbeacaa29cdb478f2ad92784a873b5822..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/PIL/WmfImagePlugin.py +++ 
/dev/null @@ -1,177 +0,0 @@ -# -# The Python Imaging Library -# $Id$ -# -# WMF stub codec -# -# history: -# 1996-12-14 fl Created -# 2004-02-22 fl Turned into a stub driver -# 2004-02-23 fl Added EMF support -# -# Copyright (c) Secret Labs AB 1997-2004. All rights reserved. -# Copyright (c) Fredrik Lundh 1996. -# -# See the README file for information on usage and redistribution. -# -# WMF/EMF reference documentation: -# https://winprotocoldoc.blob.core.windows.net/productionwindowsarchives/MS-WMF/[MS-WMF].pdf -# http://wvware.sourceforge.net/caolan/index.html -# http://wvware.sourceforge.net/caolan/ora-wmf.html - -from . import Image, ImageFile -from ._binary import i16le as word -from ._binary import si16le as short -from ._binary import si32le as _long - -_handler = None - - -def register_handler(handler): - """ - Install application-specific WMF image handler. - - :param handler: Handler object. - """ - global _handler - _handler = handler - - -if hasattr(Image.core, "drawwmf"): - # install default handler (windows only) - - class WmfHandler: - def open(self, im): - im.mode = "RGB" - self.bbox = im.info["wmf_bbox"] - - def load(self, im): - im.fp.seek(0) # rewind - return Image.frombytes( - "RGB", - im.size, - Image.core.drawwmf(im.fp.read(), im.size, self.bbox), - "raw", - "BGR", - (im.size[0] * 3 + 3) & -4, - -1, - ) - - register_handler(WmfHandler()) - -# -# -------------------------------------------------------------------- -# Read WMF file - - -def _accept(prefix): - return ( - prefix[:6] == b"\xd7\xcd\xc6\x9a\x00\x00" or prefix[:4] == b"\x01\x00\x00\x00" - ) - - -## -# Image plugin for Windows metafiles. - - -class WmfStubImageFile(ImageFile.StubImageFile): - - format = "WMF" - format_description = "Windows Metafile" - - def _open(self): - self._inch = None - - # check placable header - s = self.fp.read(80) - - if s[:6] == b"\xd7\xcd\xc6\x9a\x00\x00": - - # placeable windows metafile - - # get units per inch - self._inch = word(s, 14) - - # get bounding box - x0 = short(s, 6) - y0 = short(s, 8) - x1 = short(s, 10) - y1 = short(s, 12) - - # normalize size to 72 dots per inch - self.info["dpi"] = 72 - size = ( - (x1 - x0) * self.info["dpi"] // self._inch, - (y1 - y0) * self.info["dpi"] // self._inch, - ) - - self.info["wmf_bbox"] = x0, y0, x1, y1 - - # sanity check (standard metafile header) - if s[22:26] != b"\x01\x00\t\x00": - raise SyntaxError("Unsupported WMF file format") - - elif s[:4] == b"\x01\x00\x00\x00" and s[40:44] == b" EMF": - # enhanced metafile - - # get bounding box - x0 = _long(s, 8) - y0 = _long(s, 12) - x1 = _long(s, 16) - y1 = _long(s, 20) - - # get frame (in 0.01 millimeter units) - frame = _long(s, 24), _long(s, 28), _long(s, 32), _long(s, 36) - - size = x1 - x0, y1 - y0 - - # calculate dots per inch from bbox and frame - xdpi = 2540.0 * (x1 - y0) / (frame[2] - frame[0]) - ydpi = 2540.0 * (y1 - y0) / (frame[3] - frame[1]) - - self.info["wmf_bbox"] = x0, y0, x1, y1 - - if xdpi == ydpi: - self.info["dpi"] = xdpi - else: - self.info["dpi"] = xdpi, ydpi - - else: - raise SyntaxError("Unsupported file format") - - self.mode = "RGB" - self._size = size - - loader = self._load() - if loader: - loader.open(self) - - def _load(self): - return _handler - - def load(self, dpi=None): - if dpi is not None and self._inch is not None: - self.info["dpi"] = dpi - x0, y0, x1, y1 = self.info["wmf_bbox"] - self._size = ( - (x1 - x0) * self.info["dpi"] // self._inch, - (y1 - y0) * self.info["dpi"] // self._inch, - ) - return super().load() - - -def _save(im, fp, filename): - if 
_handler is None or not hasattr(_handler, "save"): - raise OSError("WMF save handler not installed") - _handler.save(im, fp, filename) - - -# -# -------------------------------------------------------------------- -# Registry stuff - - -Image.register_open(WmfStubImageFile.format, WmfStubImageFile, _accept) -Image.register_save(WmfStubImageFile.format, _save) - -Image.register_extensions(WmfStubImageFile.format, [".wmf", ".emf"]) diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/antlr4/atn/ParserATNSimulator.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/antlr4/atn/ParserATNSimulator.py deleted file mode 100644 index 9948f4be30b5d21d0484e45e21954951f64adb55..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/antlr4/atn/ParserATNSimulator.py +++ /dev/null @@ -1,1646 +0,0 @@ -# -# Copyright (c) 2012-2017 The ANTLR Project. All rights reserved. -# Use of this file is governed by the BSD 3-clause license that -# can be found in the LICENSE.txt file in the project root. -# - -# -# The embodiment of the adaptive LL(*), ALL(*), parsing strategy. -# -#

    -# The basic complexity of the adaptive strategy makes it harder to understand. -# We begin with ATN simulation to build paths in a DFA. Subsequent prediction -# requests go through the DFA first. If they reach a state without an edge for -# the current symbol, the algorithm fails over to the ATN simulation to -# complete the DFA path for the current input (until it finds a conflict state -# or uniquely predicting state).
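As an illustrative sketch of the DFA-first, ATN-fallback flow described in the comment above (all names here are hypothetical, not the runtime's API), a cached edge is the fast path and a missing edge triggers simulation whose result extends the DFA:

def predict_next_state(dfa_edges, state, symbol, simulate_atn):
    # dfa_edges: dict mapping (state, symbol) -> target state
    # simulate_atn: stand-in callback for the slow ATN simulation
    target = dfa_edges.get((state, symbol))
    if target is None:                         # no cached DFA edge yet
        target = simulate_atn(state, symbol)   # fail over to ATN simulation
        dfa_edges[(state, symbol)] = target    # cache the edge for next time
    return target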

    -# -#

    -# All of that is done without using the outer context because we want to create -# a DFA that is not dependent upon the rule invocation stack when we do a -# prediction. One DFA works in all contexts. We avoid using context not -# necessarily because it's slower, although it can be, but because of the DFA -# caching problem. The closure routine only considers the rule invocation stack -# created during prediction beginning in the decision rule. For example, if -# prediction occurs without invoking another rule's ATN, there are no context -# stacks in the configurations. When lack of context leads to a conflict, we -# don't know if it's an ambiguity or a weakness in the strong LL(*) parsing -# strategy (versus full LL(*)).

    -# -#

    -# When SLL yields a configuration set with conflict, we rewind the input and -# retry the ATN simulation, this time using full outer context without adding -# to the DFA. Configuration context stacks will be the full invocation stacks -# from the start rule. If we get a conflict using full context, then we can -# definitively say we have a true ambiguity for that input sequence. If we -# don't get a conflict, it implies that the decision is sensitive to the outer -# context. (It is not context-sensitive in the sense of context-sensitive -# grammars.)

    -# -#

    -# The next time we reach this DFA state with an SLL conflict, through DFA -# simulation, we will again retry the ATN simulation using full context mode. -# This is slow because we can't save the results and have to "interpret" the -# ATN each time we get that input.

    -# -#

    -# CACHING FULL CONTEXT PREDICTIONS

    -# -#

    -# We could cache results from full context to predicted alternative easily and -# that saves a lot of time but doesn't work in presence of predicates. The set -# of visible predicates from the ATN start state changes depending on the -# context, because closure can fall off the end of a rule. I tried to cache -# tuples (stack context, semantic context, predicted alt) but it was slower -# than interpreting and much more complicated. Also required a huge amount of -# memory. The goal is not to create the world's fastest parser anyway. I'd like -# to keep this algorithm simple. By launching multiple threads, we can improve -# the speed of parsing across a large number of files.

    -# -#

    -# There is no strict ordering between the amount of input used by SLL vs LL, -# which makes it really hard to build a cache for full context. Let's say that -# we have input A B C that leads to an SLL conflict with full context X. That -# implies that using X we might only use A B but we could also use A B C D to -# resolve conflict. Input A B C D could predict alternative 1 in one position -# in the input and A B C E could predict alternative 2 in another position in -# input. The conflicting SLL configurations could still be non-unique in the -# full context prediction, which would lead us to requiring more input than the -# original A B C. To make a prediction cache work, we have to track the exact -# input used during the previous prediction. That amounts to a cache that maps -# X to a specific DFA for that context.

    -# -#

    -# Something should be done for left-recursive expression predictions. They are -# likely LL(1) + pred eval. Easier to do the whole SLL unless error and retry -# with full LL thing Sam does.

    -# -#

    -# AVOIDING FULL CONTEXT PREDICTION

    -# -#

    -# We avoid doing full context retry when the outer context is empty, we did not -# dip into the outer context by falling off the end of the decision state rule, -# or when we force SLL mode.

    -# -#

    -# As an example of the not dip into outer context case, consider as super -# constructor calls versus function calls. One grammar might look like -# this:

    -# -#
    -# ctorBody
    -#   : '{' superCall? stat* '}'
    -#   ;
    -# 
    -# -#

    -# Or, you might see something like

    -# -#
    -# stat
    -#   : superCall ';'
    -#   | expression ';'
    -#   | ...
    -#   ;
    -# 
    -# -#

    -# In both cases I believe that no closure operations will dip into the outer -# context. In the first case ctorBody in the worst case will stop at the '}'. -# In the 2nd case it should stop at the ';'. Both cases should stay within the -# entry rule and not dip into the outer context.

    -# -#

    -# PREDICATES

    -# -#

    -# Predicates are always evaluated if present, in both SLL and LL. SLL and -# LL simulation deal with predicates differently. SLL collects predicates as -# it performs closure operations, like ANTLR v3 did. It delays predicate -# evaluation until it reaches an accept state. This allows us to cache the SLL -# ATN simulation whereas, if we had evaluated predicates on-the-fly during -# closure, the DFA state configuration sets would be different and we couldn't -# build up a suitable DFA.

    -# -#

    -# When building a DFA accept state during ATN simulation, we evaluate any -# predicates and return the sole semantically valid alternative. If there is -# more than 1 alternative, we report an ambiguity. If there are 0 alternatives, -# we throw an exception. Alternatives without predicates act like they have -# true predicates. The simple way to think about it is to strip away all -# alternatives with false predicates and choose the minimum alternative that -# remains.
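A minimal sketch of the resolution rule described above, using hypothetical names rather than the simulator's API: alternatives without a predicate behave as if predicated true, alternatives whose predicate evaluates to false are stripped, and the minimum remaining alternative wins; an empty result corresponds to the no-viable-alternative error.

def resolve_predicated_accept_state(pred_alt_pairs, outer_ctx):
    # pred_alt_pairs: iterable of (predicate_or_None, alt_number) pairs
    viable = [alt for pred, alt in pred_alt_pairs
              if pred is None or pred(outer_ctx)]    # missing predicate == always true
    if not viable:
        raise RuntimeError("no viable alternative")  # mirrors NoViableAltException
    return min(viable)                               # choose the minimum surviving alt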

    -# -#

    -# When we start in the DFA and reach an accept state that's predicated, we test -# those and return the minimum semantically viable alternative. If no -# alternatives are viable, we throw an exception.

    -# -#

    -# During full LL ATN simulation, closure always evaluates predicates and -# on-the-fly. This is crucial to reducing the configuration set size during -# closure. It hits a landmine when parsing with the Java grammar, for example, -# without this on-the-fly evaluation.

    -# -#

    -# SHARING DFA

    -# -#

    -# All instances of the same parser share the same decision DFAs through a -# static field. Each instance gets its own ATN simulator but they share the -# same {@link #decisionToDFA} field. They also share a -# {@link PredictionContextCache} object that makes sure that all -# {@link PredictionContext} objects are shared among the DFA states. This makes -# a big size difference.

    -# -#

    -# THREAD SAFETY

    -# -#

    -# The {@link ParserATNSimulator} locks on the {@link #decisionToDFA} field when -# it adds a new DFA object to that array. {@link #addDFAEdge} -# locks on the DFA for the current decision when setting the -# {@link DFAState#edges} field. {@link #addDFAState} locks on -# the DFA for the current decision when looking up a DFA state to see if it -# already exists. We must make sure that all requests to add DFA states that -# are equivalent result in the same shared DFA object. This is because lots of -# threads will be trying to update the DFA at once. The -# {@link #addDFAState} method also locks inside the DFA lock -# but this time on the shared context cache when it rebuilds the -# configurations' {@link PredictionContext} objects using cached -# subgraphs/nodes. No other locking occurs, even during DFA simulation. This is -# safe as long as we can guarantee that all threads referencing -# {@code s.edge[t]} get the same physical target {@link DFAState}, or -# {@code null}. Once into the DFA, the DFA simulation does not reference the -# {@link DFA#states} map. It follows the {@link DFAState#edges} field to new -# targets. The DFA simulator will either find {@link DFAState#edges} to be -# {@code null}, to be non-{@code null} and {@code dfa.edges[t]} null, or -# {@code dfa.edges[t]} to be non-null. The -# {@link #addDFAEdge} method could be racing to set the field -# but in either case the DFA simulator works; if {@code null}, and requests ATN -# simulation. It could also race trying to get {@code dfa.edges[t]}, but either -# way it will work because it's not doing a test and set operation.

    -# -#

    -# Starting with SLL then failing over to combined SLL/LL (Two-Stage -# Parsing)

    -# -#

    -# Sam pointed out that if SLL does not give a syntax error, then there is no -# point in doing full LL, which is slower. We only have to try LL if we get a -# syntax error. For maximum speed, Sam starts the parser set to pure SLL -# mode with the {@link BailErrorStrategy}:

    -# -#
    -# parser.{@link Parser#getInterpreter() getInterpreter()}.{@link #setPredictionMode setPredictionMode}{@code (}{@link PredictionMode#SLL}{@code )};
    -# parser.{@link Parser#setErrorHandler setErrorHandler}(new {@link BailErrorStrategy}());
    -# 
    -# -#

    -# If it does not get a syntax error, then we're done. If it does get a syntax -# error, we need to retry with the combined SLL/LL strategy.
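A hedged sketch of this two-stage strategy against the antlr4 Python runtime: the Java-style accessors quoted above do not exist in Python, where _interp and _errHandler are the parser's attributes; MyGrammarLexer, MyGrammarParser and the prog() entry rule are hypothetical generated classes.

from antlr4 import InputStream, CommonTokenStream
from antlr4.atn.PredictionMode import PredictionMode
from antlr4.error.ErrorStrategy import BailErrorStrategy, DefaultErrorStrategy
from antlr4.error.Errors import ParseCancellationException

def two_stage_parse(text):
    tokens = CommonTokenStream(MyGrammarLexer(InputStream(text)))
    parser = MyGrammarParser(tokens)
    parser._interp.predictionMode = PredictionMode.SLL   # stage 1: fast SLL
    parser._errHandler = BailErrorStrategy()             # bail on the first error
    try:
        return parser.prog()
    except ParseCancellationException:
        tokens.seek(0)                                    # rewind the token stream
        parser.reset()
        parser._errHandler = DefaultErrorStrategy()       # normal error reporting
        parser._interp.predictionMode = PredictionMode.LL # stage 2: full LL
        return parser.prog()

If the SLL stage bails out with ParseCancellationException, the same parser is rewound and rerun in full LL mode; erroneous input therefore takes two passes, as noted further below.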

    -# -#

    -# The reason this works is as follows. If there are no SLL conflicts, then the -# grammar is SLL (at least for that input set). If there is an SLL conflict, -# the full LL analysis must yield a set of viable alternatives which is a -# subset of the alternatives reported by SLL. If the LL set is a singleton, -# then the grammar is LL but not SLL. If the LL set is the same size as the SLL -# set, the decision is SLL. If the LL set has size > 1, then that decision -# is truly ambiguous on the current input. If the LL set is smaller, then the -# SLL conflict resolution might choose an alternative that the full LL would -# rule out as a possibility based upon better context information. If that's -# the case, then the SLL parse will definitely get an error because the full LL -# analysis says it's not viable. If SLL conflict resolution chooses an -# alternative within the LL set, them both SLL and LL would choose the same -# alternative because they both choose the minimum of multiple conflicting -# alternatives.

    -# -#

    -# Let's say we have a set of SLL conflicting alternatives {@code {1, 2, 3}} and -# a smaller LL set called s. If s is {@code {2, 3}}, then SLL -# parsing will get an error because SLL will pursue alternative 1. If -# s is {@code {1, 2}} or {@code {1, 3}} then both SLL and LL will -# choose the same alternative because alternative one is the minimum of either -# set. If s is {@code {2}} or {@code {3}} then SLL will get a syntax -# error. If s is {@code {1}} then SLL will succeed.

    -# -#

    -# Of course, if the input is invalid, then we will get an error for sure in -# both SLL and LL parsing. Erroneous input will therefore require 2 passes over -# the input.

    -# -import sys -from antlr4 import DFA -from antlr4.PredictionContext import PredictionContextCache, PredictionContext, SingletonPredictionContext, \ - PredictionContextFromRuleContext -from antlr4.BufferedTokenStream import TokenStream -from antlr4.Parser import Parser -from antlr4.ParserRuleContext import ParserRuleContext -from antlr4.RuleContext import RuleContext -from antlr4.Token import Token -from antlr4.Utils import str_list -from antlr4.atn.ATN import ATN -from antlr4.atn.ATNConfig import ATNConfig -from antlr4.atn.ATNConfigSet import ATNConfigSet -from antlr4.atn.ATNSimulator import ATNSimulator -from antlr4.atn.ATNState import StarLoopEntryState, DecisionState, RuleStopState, ATNState -from antlr4.atn.PredictionMode import PredictionMode -from antlr4.atn.SemanticContext import SemanticContext, AND, andContext, orContext -from antlr4.atn.Transition import Transition, RuleTransition, ActionTransition, PrecedencePredicateTransition, \ - PredicateTransition, AtomTransition, SetTransition, NotSetTransition -from antlr4.dfa.DFAState import DFAState, PredPrediction -from antlr4.error.Errors import NoViableAltException - - -class ParserATNSimulator(ATNSimulator): - - debug = False - debug_list_atn_decisions = False - dfa_debug = False - retry_debug = False - - - def __init__(self, parser:Parser, atn:ATN, decisionToDFA:list, sharedContextCache:PredictionContextCache): - super().__init__(atn, sharedContextCache) - self.parser = parser - self.decisionToDFA = decisionToDFA - # SLL, LL, or LL + exact ambig detection?# - self.predictionMode = PredictionMode.LL - # LAME globals to avoid parameters!!!!! I need these down deep in predTransition - self._input = None - self._startIndex = 0 - self._outerContext = None - self._dfa = None - # Each prediction operation uses a cache for merge of prediction contexts. - # Don't keep around as it wastes huge amounts of memory. DoubleKeyMap - # isn't synchronized but we're ok since two threads shouldn't reuse same - # parser/atnsim object because it can only handle one input at a time. - # This maps graphs a and b to merged result c. (a,b)→c. We can avoid - # the merge if we ever see a and b again. Note that (b,a)→c should - # also be examined during cache lookup. - # - self.mergeCache = None - - - def reset(self): - pass - - def adaptivePredict(self, input:TokenStream, decision:int, outerContext:ParserRuleContext): - if ParserATNSimulator.debug or ParserATNSimulator.debug_list_atn_decisions: - print("adaptivePredict decision " + str(decision) + - " exec LA(1)==" + self.getLookaheadName(input) + - " line " + str(input.LT(1).line) + ":" + - str(input.LT(1).column)) - self._input = input - self._startIndex = input.index - self._outerContext = outerContext - - dfa = self.decisionToDFA[decision] - self._dfa = dfa - m = input.mark() - index = input.index - - # Now we are certain to have a specific decision's DFA - # But, do we still need an initial state? - try: - if dfa.precedenceDfa: - # the start state for a precedence DFA depends on the current - # parser precedence, and is provided by a DFA method. 
- s0 = dfa.getPrecedenceStartState(self.parser.getPrecedence()) - else: - # the start state for a "regular" DFA is just s0 - s0 = dfa.s0 - - if s0 is None: - if outerContext is None: - outerContext = ParserRuleContext.EMPTY - if ParserATNSimulator.debug or ParserATNSimulator.debug_list_atn_decisions: - print("predictATN decision " + str(dfa.decision) + - " exec LA(1)==" + self.getLookaheadName(input) + - ", outerContext=" + outerContext.toString(self.parser.literalNames, None)) - - fullCtx = False - s0_closure = self.computeStartState(dfa.atnStartState, ParserRuleContext.EMPTY, fullCtx) - - if dfa.precedenceDfa: - # If this is a precedence DFA, we use applyPrecedenceFilter - # to convert the computed start state to a precedence start - # state. We then use DFA.setPrecedenceStartState to set the - # appropriate start state for the precedence level rather - # than simply setting DFA.s0. - # - dfa.s0.configs = s0_closure # not used for prediction but useful to know start configs anyway - s0_closure = self.applyPrecedenceFilter(s0_closure) - s0 = self.addDFAState(dfa, DFAState(configs=s0_closure)) - dfa.setPrecedenceStartState(self.parser.getPrecedence(), s0) - else: - s0 = self.addDFAState(dfa, DFAState(configs=s0_closure)) - dfa.s0 = s0 - - alt = self.execATN(dfa, s0, input, index, outerContext) - if ParserATNSimulator.debug: - print("DFA after predictATN: " + dfa.toString(self.parser.literalNames)) - return alt - finally: - self._dfa = None - self.mergeCache = None # wack cache after each prediction - input.seek(index) - input.release(m) - - # Performs ATN simulation to compute a predicted alternative based - # upon the remaining input, but also updates the DFA cache to avoid - # having to traverse the ATN again for the same input sequence. - - # There are some key conditions we're looking for after computing a new - # set of ATN configs (proposed DFA state): - # if the set is empty, there is no viable alternative for current symbol - # does the state uniquely predict an alternative? - # does the state have a conflict that would prevent us from - # putting it on the work list? - - # We also have some key operations to do: - # add an edge from previous DFA state to potentially new DFA state, D, - # upon current symbol but only if adding to work list, which means in all - # cases except no viable alternative (and possibly non-greedy decisions?) 
- # collecting predicates and adding semantic context to DFA accept states - # adding rule context to context-sensitive DFA accept states - # consuming an input symbol - # reporting a conflict - # reporting an ambiguity - # reporting a context sensitivity - # reporting insufficient predicates - - # cover these cases: - # dead end - # single alt - # single alt + preds - # conflict - # conflict + preds - # - def execATN(self, dfa:DFA, s0:DFAState, input:TokenStream, startIndex:int, outerContext:ParserRuleContext ): - if ParserATNSimulator.debug or ParserATNSimulator.debug_list_atn_decisions: - print("execATN decision " + str(dfa.decision) + - " exec LA(1)==" + self.getLookaheadName(input) + - " line " + str(input.LT(1).line) + ":" + str(input.LT(1).column)) - - previousD = s0 - - if ParserATNSimulator.debug: - print("s0 = " + str(s0)) - - t = input.LA(1) - - while True: # while more work - D = self.getExistingTargetState(previousD, t) - if D is None: - D = self.computeTargetState(dfa, previousD, t) - if D is self.ERROR: - # if any configs in previous dipped into outer context, that - # means that input up to t actually finished entry rule - # at least for SLL decision. Full LL doesn't dip into outer - # so don't need special case. - # We will get an error no matter what so delay until after - # decision; better error message. Also, no reachable target - # ATN states in SLL implies LL will also get nowhere. - # If conflict in states that dip out, choose min since we - # will get error no matter what. - e = self.noViableAlt(input, outerContext, previousD.configs, startIndex) - input.seek(startIndex) - alt = self.getSynValidOrSemInvalidAltThatFinishedDecisionEntryRule(previousD.configs, outerContext) - if alt!=ATN.INVALID_ALT_NUMBER: - return alt - raise e - - if D.requiresFullContext and self.predictionMode != PredictionMode.SLL: - # IF PREDS, MIGHT RESOLVE TO SINGLE ALT => SLL (or syntax error) - conflictingAlts = D.configs.conflictingAlts - if D.predicates is not None: - if ParserATNSimulator.debug: - print("DFA state has preds in DFA sim LL failover") - conflictIndex = input.index - if conflictIndex != startIndex: - input.seek(startIndex) - - conflictingAlts = self.evalSemanticContext(D.predicates, outerContext, True) - if len(conflictingAlts)==1: - if ParserATNSimulator.debug: - print("Full LL avoided") - return min(conflictingAlts) - - if conflictIndex != startIndex: - # restore the index so reporting the fallback to full - # context occurs with the index at the correct spot - input.seek(conflictIndex) - - if ParserATNSimulator.dfa_debug: - print("ctx sensitive state " + str(outerContext) +" in " + str(D)) - fullCtx = True - s0_closure = self.computeStartState(dfa.atnStartState, outerContext, fullCtx) - self.reportAttemptingFullContext(dfa, conflictingAlts, D.configs, startIndex, input.index) - alt = self.execATNWithFullContext(dfa, D, s0_closure, input, startIndex, outerContext) - return alt - - if D.isAcceptState: - if D.predicates is None: - return D.prediction - - stopIndex = input.index - input.seek(startIndex) - alts = self.evalSemanticContext(D.predicates, outerContext, True) - if len(alts)==0: - raise self.noViableAlt(input, outerContext, D.configs, startIndex) - elif len(alts)==1: - return min(alts) - else: - # report ambiguity after predicate evaluation to make sure the correct - # set of ambig alts is reported. 
- self.reportAmbiguity(dfa, D, startIndex, stopIndex, False, alts, D.configs) - return min(alts) - - previousD = D - - if t != Token.EOF: - input.consume() - t = input.LA(1) - - # - # Get an existing target state for an edge in the DFA. If the target state - # for the edge has not yet been computed or is otherwise not available, - # this method returns {@code null}. - # - # @param previousD The current DFA state - # @param t The next input symbol - # @return The existing target DFA state for the given input symbol - # {@code t}, or {@code null} if the target state for this edge is not - # already cached - # - def getExistingTargetState(self, previousD:DFAState, t:int): - edges = previousD.edges - if edges is None or t + 1 < 0 or t + 1 >= len(edges): - return None - else: - return edges[t + 1] - - # - # Compute a target state for an edge in the DFA, and attempt to add the - # computed state and corresponding edge to the DFA. - # - # @param dfa The DFA - # @param previousD The current DFA state - # @param t The next input symbol - # - # @return The computed target DFA state for the given input symbol - # {@code t}. If {@code t} does not lead to a valid DFA state, this method - # returns {@link #ERROR}. - # - def computeTargetState(self, dfa:DFA, previousD:DFAState, t:int): - reach = self.computeReachSet(previousD.configs, t, False) - if reach is None: - self.addDFAEdge(dfa, previousD, t, self.ERROR) - return self.ERROR - - # create new target state; we'll add to DFA after it's complete - D = DFAState(configs=reach) - - predictedAlt = self.getUniqueAlt(reach) - - if ParserATNSimulator.debug: - altSubSets = PredictionMode.getConflictingAltSubsets(reach) - print("SLL altSubSets=" + str(altSubSets) + ", configs=" + str(reach) + - ", predict=" + str(predictedAlt) + ", allSubsetsConflict=" + - str(PredictionMode.allSubsetsConflict(altSubSets)) + ", conflictingAlts=" + - str(self.getConflictingAlts(reach))) - - if predictedAlt!=ATN.INVALID_ALT_NUMBER: - # NO CONFLICT, UNIQUELY PREDICTED ALT - D.isAcceptState = True - D.configs.uniqueAlt = predictedAlt - D.prediction = predictedAlt - elif PredictionMode.hasSLLConflictTerminatingPrediction(self.predictionMode, reach): - # MORE THAN ONE VIABLE ALTERNATIVE - D.configs.conflictingAlts = self.getConflictingAlts(reach) - D.requiresFullContext = True - # in SLL-only mode, we will stop at this state and return the minimum alt - D.isAcceptState = True - D.prediction = min(D.configs.conflictingAlts) - - if D.isAcceptState and D.configs.hasSemanticContext: - self.predicateDFAState(D, self.atn.getDecisionState(dfa.decision)) - if D.predicates is not None: - D.prediction = ATN.INVALID_ALT_NUMBER - - # all adds to dfa are done after we've created full D state - D = self.addDFAEdge(dfa, previousD, t, D) - return D - - def predicateDFAState(self, dfaState:DFAState, decisionState:DecisionState): - # We need to test all predicates, even in DFA states that - # uniquely predict alternative. 
- nalts = len(decisionState.transitions) - # Update DFA so reach becomes accept state with (predicate,alt) - # pairs if preds found for conflicting alts - altsToCollectPredsFrom = self.getConflictingAltsOrUniqueAlt(dfaState.configs) - altToPred = self.getPredsForAmbigAlts(altsToCollectPredsFrom, dfaState.configs, nalts) - if altToPred is not None: - dfaState.predicates = self.getPredicatePredictions(altsToCollectPredsFrom, altToPred) - dfaState.prediction = ATN.INVALID_ALT_NUMBER # make sure we use preds - else: - # There are preds in configs but they might go away - # when OR'd together like {p}? || NONE == NONE. If neither - # alt has preds, resolve to min alt - dfaState.prediction = min(altsToCollectPredsFrom) - - # comes back with reach.uniqueAlt set to a valid alt - def execATNWithFullContext(self, dfa:DFA, D:DFAState, # how far we got before failing over - s0:ATNConfigSet, - input:TokenStream, - startIndex:int, - outerContext:ParserRuleContext): - if ParserATNSimulator.debug or ParserATNSimulator.debug_list_atn_decisions: - print("execATNWithFullContext", str(s0)) - fullCtx = True - foundExactAmbig = False - reach = None - previous = s0 - input.seek(startIndex) - t = input.LA(1) - predictedAlt = -1 - while (True): # while more work - reach = self.computeReachSet(previous, t, fullCtx) - if reach is None: - # if any configs in previous dipped into outer context, that - # means that input up to t actually finished entry rule - # at least for LL decision. Full LL doesn't dip into outer - # so don't need special case. - # We will get an error no matter what so delay until after - # decision; better error message. Also, no reachable target - # ATN states in SLL implies LL will also get nowhere. - # If conflict in states that dip out, choose min since we - # will get error no matter what. - e = self.noViableAlt(input, outerContext, previous, startIndex) - input.seek(startIndex) - alt = self.getSynValidOrSemInvalidAltThatFinishedDecisionEntryRule(previous, outerContext) - if alt!=ATN.INVALID_ALT_NUMBER: - return alt - else: - raise e - - altSubSets = PredictionMode.getConflictingAltSubsets(reach) - if ParserATNSimulator.debug: - print("LL altSubSets=" + str(altSubSets) + ", predict=" + - str(PredictionMode.getUniqueAlt(altSubSets)) + ", resolvesToJustOneViableAlt=" + - str(PredictionMode.resolvesToJustOneViableAlt(altSubSets))) - - reach.uniqueAlt = self.getUniqueAlt(reach) - # unique prediction? - if reach.uniqueAlt!=ATN.INVALID_ALT_NUMBER: - predictedAlt = reach.uniqueAlt - break - elif self.predictionMode is not PredictionMode.LL_EXACT_AMBIG_DETECTION: - predictedAlt = PredictionMode.resolvesToJustOneViableAlt(altSubSets) - if predictedAlt != ATN.INVALID_ALT_NUMBER: - break - else: - # In exact ambiguity mode, we never try to terminate early. - # Just keeps scarfing until we know what the conflict is - if PredictionMode.allSubsetsConflict(altSubSets) and PredictionMode.allSubsetsEqual(altSubSets): - foundExactAmbig = True - predictedAlt = PredictionMode.getSingleViableAlt(altSubSets) - break - # else there are multiple non-conflicting subsets or - # we're not sure what the ambiguity is yet. - # So, keep going. - - previous = reach - if t != Token.EOF: - input.consume() - t = input.LA(1) - - # If the configuration set uniquely predicts an alternative, - # without conflict, then we know that it's a full LL decision - # not SLL. 
- if reach.uniqueAlt != ATN.INVALID_ALT_NUMBER : - self.reportContextSensitivity(dfa, predictedAlt, reach, startIndex, input.index) - return predictedAlt - - # We do not check predicates here because we have checked them - # on-the-fly when doing full context prediction. - - # - # In non-exact ambiguity detection mode, we might actually be able to - # detect an exact ambiguity, but I'm not going to spend the cycles - # needed to check. We only emit ambiguity warnings in exact ambiguity - # mode. - # - # For example, we might know that we have conflicting configurations. - # But, that does not mean that there is no way forward without a - # conflict. It's possible to have nonconflicting alt subsets as in: - - # altSubSets=[{1, 2}, {1, 2}, {1}, {1, 2}] - - # from - # - # [(17,1,[5 $]), (13,1,[5 10 $]), (21,1,[5 10 $]), (11,1,[$]), - # (13,2,[5 10 $]), (21,2,[5 10 $]), (11,2,[$])] - # - # In this case, (17,1,[5 $]) indicates there is some next sequence that - # would resolve this without conflict to alternative 1. Any other viable - # next sequence, however, is associated with a conflict. We stop - # looking for input because no amount of further lookahead will alter - # the fact that we should predict alternative 1. We just can't say for - # sure that there is an ambiguity without looking further. - - self.reportAmbiguity(dfa, D, startIndex, input.index, foundExactAmbig, None, reach) - - return predictedAlt - - def computeReachSet(self, closure:ATNConfigSet, t:int, fullCtx:bool): - if ParserATNSimulator.debug: - print("in computeReachSet, starting closure: " + str(closure)) - - if self.mergeCache is None: - self.mergeCache = dict() - - intermediate = ATNConfigSet(fullCtx) - - # Configurations already in a rule stop state indicate reaching the end - # of the decision rule (local context) or end of the start rule (full - # context). Once reached, these configurations are never updated by a - # closure operation, so they are handled separately for the performance - # advantage of having a smaller intermediate set when calling closure. - # - # For full-context reach operations, separate handling is required to - # ensure that the alternative matching the longest overall sequence is - # chosen when multiple such configurations can match the input. - - skippedStopStates = None - - # First figure out where we can reach on input t - for c in closure: - if ParserATNSimulator.debug: - print("testing " + self.getTokenName(t) + " at " + str(c)) - - if isinstance(c.state, RuleStopState): - if fullCtx or t == Token.EOF: - if skippedStopStates is None: - skippedStopStates = list() - skippedStopStates.append(c) - continue - - for trans in c.state.transitions: - target = self.getReachableTarget(trans, t) - if target is not None: - intermediate.add(ATNConfig(state=target, config=c), self.mergeCache) - - # Now figure out where the reach operation can take us... - - reach = None - - # This block optimizes the reach operation for intermediate sets which - # trivially indicate a termination state for the overall - # adaptivePredict operation. - # - # The conditions assume that intermediate - # contains all configurations relevant to the reach set, but this - # condition is not true when one or more configurations have been - # withheld in skippedStopStates, or when the current symbol is EOF. - # - if skippedStopStates is None and t!=Token.EOF: - if len(intermediate)==1: - # Don't pursue the closure if there is just one state. 
- # It can only have one alternative; just add to result - # Also don't pursue the closure if there is unique alternative - # among the configurations. - reach = intermediate - elif self.getUniqueAlt(intermediate)!=ATN.INVALID_ALT_NUMBER: - # Also don't pursue the closure if there is unique alternative - # among the configurations. - reach = intermediate - - # If the reach set could not be trivially determined, perform a closure - # operation on the intermediate set to compute its initial value. - # - if reach is None: - reach = ATNConfigSet(fullCtx) - closureBusy = set() - treatEofAsEpsilon = t == Token.EOF - for c in intermediate: - self.closure(c, reach, closureBusy, False, fullCtx, treatEofAsEpsilon) - - if t == Token.EOF: - # After consuming EOF no additional input is possible, so we are - # only interested in configurations which reached the end of the - # decision rule (local context) or end of the start rule (full - # context). Update reach to contain only these configurations. This - # handles both explicit EOF transitions in the grammar and implicit - # EOF transitions following the end of the decision or start rule. - # - # When reach==intermediate, no closure operation was performed. In - # this case, removeAllConfigsNotInRuleStopState needs to check for - # reachable rule stop states as well as configurations already in - # a rule stop state. - # - # This is handled before the configurations in skippedStopStates, - # because any configurations potentially added from that list are - # already guaranteed to meet this condition whether or not it's - # required. - # - reach = self.removeAllConfigsNotInRuleStopState(reach, reach is intermediate) - - # If skippedStopStates is not null, then it contains at least one - # configuration. For full-context reach operations, these - # configurations reached the end of the start rule, in which case we - # only add them back to reach if no configuration during the current - # closure operation reached such a state. This ensures adaptivePredict - # chooses an alternative matching the longest overall sequence when - # multiple alternatives are viable. - # - if skippedStopStates is not None and ( (not fullCtx) or (not PredictionMode.hasConfigInRuleStopState(reach))): - for c in skippedStopStates: - reach.add(c, self.mergeCache) - if len(reach)==0: - return None - else: - return reach - - # - # Return a configuration set containing only the configurations from - # {@code configs} which are in a {@link RuleStopState}. If all - # configurations in {@code configs} are already in a rule stop state, this - # method simply returns {@code configs}. - # - #

    When {@code lookToEndOfRule} is true, this method uses - # {@link ATN#nextTokens} for each configuration in {@code configs} which is - # not already in a rule stop state to see if a rule stop state is reachable - # from the configuration via epsilon-only transitions.

    - # - # @param configs the configuration set to update - # @param lookToEndOfRule when true, this method checks for rule stop states - # reachable by epsilon-only transitions from each configuration in - # {@code configs}. - # - # @return {@code configs} if all configurations in {@code configs} are in a - # rule stop state, otherwise return a new configuration set containing only - # the configurations from {@code configs} which are in a rule stop state - # - def removeAllConfigsNotInRuleStopState(self, configs:ATNConfigSet, lookToEndOfRule:bool): - if PredictionMode.allConfigsInRuleStopStates(configs): - return configs - result = ATNConfigSet(configs.fullCtx) - for config in configs: - if isinstance(config.state, RuleStopState): - result.add(config, self.mergeCache) - continue - if lookToEndOfRule and config.state.epsilonOnlyTransitions: - nextTokens = self.atn.nextTokens(config.state) - if Token.EPSILON in nextTokens: - endOfRuleState = self.atn.ruleToStopState[config.state.ruleIndex] - result.add(ATNConfig(state=endOfRuleState, config=config), self.mergeCache) - return result - - def computeStartState(self, p:ATNState, ctx:RuleContext, fullCtx:bool): - # always at least the implicit call to start rule - initialContext = PredictionContextFromRuleContext(self.atn, ctx) - configs = ATNConfigSet(fullCtx) - - for i in range(0, len(p.transitions)): - target = p.transitions[i].target - c = ATNConfig(target, i+1, initialContext) - closureBusy = set() - self.closure(c, configs, closureBusy, True, fullCtx, False) - return configs - - # - # This method transforms the start state computed by - # {@link #computeStartState} to the special start state used by a - # precedence DFA for a particular precedence value. The transformation - # process applies the following changes to the start state's configuration - # set. - # - #
      - #
    1. Evaluate the precedence predicates for each configuration using - # {@link SemanticContext#evalPrecedence}.
    - #
    2. Remove all configurations which predict an alternative greater than - # 1, for which another configuration that predicts alternative 1 is in the - # same ATN state with the same prediction context. This transformation is - # valid for the following reasons: - #
        - #
      • The closure block cannot contain any epsilon transitions which bypass - # the body of the closure, so all states reachable via alternative 1 are - # part of the precedence alternatives of the transformed left-recursive - # rule.
      - #
      • The "primary" portion of a left recursive rule cannot contain an - # epsilon transition, so the only way an alternative other than 1 can exist - # in a state that is also reachable via alternative 1 is by nesting calls - # to the left-recursive rule, with the outer calls not being at the - # preferred precedence level.
      - #
      - #
    - #
    - # - #

    - # The prediction context must be considered by this filter to address - # situations like the following. - #

    - # - #
    -    # grammar TA;
    -    # prog: statement* EOF;
    -    # statement: letterA | statement letterA 'b' ;
    -    # letterA: 'a';
    -    # 
    - #
    - #

    - # In the above grammar, the ATN state immediately before the token - # reference {@code 'a'} in {@code letterA} is reachable from the left edge - # of both the primary and closure blocks of the left-recursive rule - # {@code statement}. The prediction context associated with each of these - # configurations distinguishes between them, and prevents the alternative - # which stepped out to {@code prog} (and then back in to {@code statement}) - # from being eliminated by the filter. - #

    - # - # @param configs The configuration set computed by - # {@link #computeStartState} as the start state for the DFA. - # @return The transformed configuration set representing the start state - # for a precedence DFA at a particular precedence level (determined by - # calling {@link Parser#getPrecedence}). - # - def applyPrecedenceFilter(self, configs:ATNConfigSet): - statesFromAlt1 = dict() - configSet = ATNConfigSet(configs.fullCtx) - for config in configs: - # handle alt 1 first - if config.alt != 1: - continue - updatedContext = config.semanticContext.evalPrecedence(self.parser, self._outerContext) - if updatedContext is None: - # the configuration was eliminated - continue - - statesFromAlt1[config.state.stateNumber] = config.context - if updatedContext is not config.semanticContext: - configSet.add(ATNConfig(config=config, semantic=updatedContext), self.mergeCache) - else: - configSet.add(config, self.mergeCache) - - for config in configs: - if config.alt == 1: - # already handled - continue - - # In the future, this elimination step could be updated to also - # filter the prediction context for alternatives predicting alt>1 - # (basically a graph subtraction algorithm). - # - if not config.precedenceFilterSuppressed: - context = statesFromAlt1.get(config.state.stateNumber, None) - if context==config.context: - # eliminated - continue - - configSet.add(config, self.mergeCache) - - return configSet - - def getReachableTarget(self, trans:Transition, ttype:int): - if trans.matches(ttype, 0, self.atn.maxTokenType): - return trans.target - else: - return None - - def getPredsForAmbigAlts(self, ambigAlts:set, configs:ATNConfigSet, nalts:int): - # REACH=[1|1|[]|0:0, 1|2|[]|0:1] - # altToPred starts as an array of all null contexts. The entry at index i - # corresponds to alternative i. altToPred[i] may have one of three values: - # 1. null: no ATNConfig c is found such that c.alt==i - # 2. SemanticContext.NONE: At least one ATNConfig c exists such that - # c.alt==i and c.semanticContext==SemanticContext.NONE. In other words, - # alt i has at least one unpredicated config. - # 3. Non-NONE Semantic Context: There exists at least one, and for all - # ATNConfig c such that c.alt==i, c.semanticContext!=SemanticContext.NONE. - # - # From this, it is clear that NONE||anything==NONE. - # - altToPred = [None] * (nalts + 1) - for c in configs: - if c.alt in ambigAlts: - altToPred[c.alt] = orContext(altToPred[c.alt], c.semanticContext) - - nPredAlts = 0 - for i in range(1, nalts+1): - if altToPred[i] is None: - altToPred[i] = SemanticContext.NONE - elif altToPred[i] is not SemanticContext.NONE: - nPredAlts += 1 - - # nonambig alts are null in altToPred - if nPredAlts==0: - altToPred = None - if ParserATNSimulator.debug: - print("getPredsForAmbigAlts result " + str_list(altToPred)) - return altToPred - - def getPredicatePredictions(self, ambigAlts:set, altToPred:list): - pairs = [] - containsPredicate = False - for i in range(1, len(altToPred)): - pred = altToPred[i] - # unpredicated is indicated by SemanticContext.NONE - if ambigAlts is not None and i in ambigAlts: - pairs.append(PredPrediction(pred, i)) - if pred is not SemanticContext.NONE: - containsPredicate = True - - if not containsPredicate: - return None - - return pairs - - # - # This method is used to improve the localization of error messages by - # choosing an alternative rather than throwing a - # {@link NoViableAltException} in particular prediction scenarios where the - # {@link #ERROR} state was reached during ATN simulation. 
- # - #

    - # The default implementation of this method uses the following - # algorithm to identify an ATN configuration which successfully parsed the - # decision entry rule. Choosing such an alternative ensures that the - # {@link ParserRuleContext} returned by the calling rule will be complete - # and valid, and the syntax error will be reported later at a more - # localized location.

    - # - #
      - #
    • If a syntactically valid path or paths reach the end of the decision rule and - # they are semantically valid if predicated, return the min associated alt.
    - #
    • Else, if one or more semantically invalid but syntactically valid paths - # exist, return the minimum associated alt. - #
    - #
    • Otherwise, return {@link ATN#INVALID_ALT_NUMBER}.
    - #
    - # - #

    - # In some scenarios, the algorithm described above could predict an - # alternative which will result in a {@link FailedPredicateException} in - # the parser. Specifically, this could occur if the only configuration - # capable of successfully parsing to the end of the decision rule is - # blocked by a semantic predicate. By choosing this alternative within - # {@link #adaptivePredict} instead of throwing a - # {@link NoViableAltException}, the resulting - # {@link FailedPredicateException} in the parser will identify the specific - # predicate which is preventing the parser from successfully parsing the - # decision rule, which helps developers identify and correct logic errors - # in semantic predicates. - #

    - # - # @param configs The ATN configurations which were valid immediately before - # the {@link #ERROR} state was reached - # @param outerContext The is the \gamma_0 initial parser context from the paper - # or the parser stack at the instant before prediction commences. - # - # @return The value to return from {@link #adaptivePredict}, or - # {@link ATN#INVALID_ALT_NUMBER} if a suitable alternative was not - # identified and {@link #adaptivePredict} should report an error instead. - # - def getSynValidOrSemInvalidAltThatFinishedDecisionEntryRule(self, configs:ATNConfigSet, outerContext:ParserRuleContext): - semValidConfigs, semInvalidConfigs = self.splitAccordingToSemanticValidity(configs, outerContext) - alt = self.getAltThatFinishedDecisionEntryRule(semValidConfigs) - if alt!=ATN.INVALID_ALT_NUMBER: # semantically/syntactically viable path exists - return alt - # Is there a syntactically valid path with a failed pred? - if len(semInvalidConfigs)>0: - alt = self.getAltThatFinishedDecisionEntryRule(semInvalidConfigs) - if alt!=ATN.INVALID_ALT_NUMBER: # syntactically viable path exists - return alt - return ATN.INVALID_ALT_NUMBER - - def getAltThatFinishedDecisionEntryRule(self, configs:ATNConfigSet): - alts = set() - for c in configs: - if c.reachesIntoOuterContext>0 or (isinstance(c.state, RuleStopState) and c.context.hasEmptyPath() ): - alts.add(c.alt) - if len(alts)==0: - return ATN.INVALID_ALT_NUMBER - else: - return min(alts) - - # Walk the list of configurations and split them according to - # those that have preds evaluating to true/false. If no pred, assume - # true pred and include in succeeded set. Returns Pair of sets. - # - # Create a new set so as not to alter the incoming parameter. - # - # Assumption: the input stream has been restored to the starting point - # prediction, which is where predicates need to evaluate. - # - def splitAccordingToSemanticValidity(self, configs:ATNConfigSet, outerContext:ParserRuleContext): - succeeded = ATNConfigSet(configs.fullCtx) - failed = ATNConfigSet(configs.fullCtx) - for c in configs: - if c.semanticContext is not SemanticContext.NONE: - predicateEvaluationResult = c.semanticContext.eval(self.parser, outerContext) - if predicateEvaluationResult: - succeeded.add(c) - else: - failed.add(c) - else: - succeeded.add(c) - return (succeeded,failed) - - # Look through a list of predicate/alt pairs, returning alts for the - # pairs that win. A {@code NONE} predicate indicates an alt containing an - # unpredicated config which behaves as "always true." If !complete - # then we stop at the first predicate that evaluates to true. This - # includes pairs with null predicates. - # - def evalSemanticContext(self, predPredictions:list, outerContext:ParserRuleContext, complete:bool): - predictions = set() - for pair in predPredictions: - if pair.pred is SemanticContext.NONE: - predictions.add(pair.alt) - if not complete: - break - continue - predicateEvaluationResult = pair.pred.eval(self.parser, outerContext) - if ParserATNSimulator.debug or ParserATNSimulator.dfa_debug: - print("eval pred " + str(pair) + "=" + str(predicateEvaluationResult)) - - if predicateEvaluationResult: - if ParserATNSimulator.debug or ParserATNSimulator.dfa_debug: - print("PREDICT " + str(pair.alt)) - predictions.add(pair.alt) - if not complete: - break - return predictions - - - # TODO: If we are doing predicates, there is no point in pursuing - # closure operations if we reach a DFA state that uniquely predicts - # alternative. 
We will not be caching that DFA state and it is a - # waste to pursue the closure. Might have to advance when we do - # ambig detection thought :( - # - - def closure(self, config:ATNConfig, configs:ATNConfigSet, closureBusy:set, collectPredicates:bool, fullCtx:bool, treatEofAsEpsilon:bool): - initialDepth = 0 - self.closureCheckingStopState(config, configs, closureBusy, collectPredicates, - fullCtx, initialDepth, treatEofAsEpsilon) - - - def closureCheckingStopState(self, config:ATNConfig, configs:ATNConfigSet, closureBusy:set, collectPredicates:bool, fullCtx:bool, depth:int, treatEofAsEpsilon:bool): - if ParserATNSimulator.debug: - print("closure(" + str(config) + ")") - - if isinstance(config.state, RuleStopState): - # We hit rule end. If we have context info, use it - # run thru all possible stack tops in ctx - if not config.context.isEmpty(): - for i in range(0, len(config.context)): - state = config.context.getReturnState(i) - if state is PredictionContext.EMPTY_RETURN_STATE: - if fullCtx: - configs.add(ATNConfig(state=config.state, context=PredictionContext.EMPTY, config=config), self.mergeCache) - continue - else: - # we have no context info, just chase follow links (if greedy) - if ParserATNSimulator.debug: - print("FALLING off rule " + self.getRuleName(config.state.ruleIndex)) - self.closure_(config, configs, closureBusy, collectPredicates, - fullCtx, depth, treatEofAsEpsilon) - continue - returnState = self.atn.states[state] - newContext = config.context.getParent(i) # "pop" return state - c = ATNConfig(state=returnState, alt=config.alt, context=newContext, semantic=config.semanticContext) - # While we have context to pop back from, we may have - # gotten that context AFTER having falling off a rule. - # Make sure we track that we are now out of context. - c.reachesIntoOuterContext = config.reachesIntoOuterContext - self.closureCheckingStopState(c, configs, closureBusy, collectPredicates, fullCtx, depth - 1, treatEofAsEpsilon) - return - elif fullCtx: - # reached end of start rule - configs.add(config, self.mergeCache) - return - else: - # else if we have no context info, just chase follow links (if greedy) - if ParserATNSimulator.debug: - print("FALLING off rule " + self.getRuleName(config.state.ruleIndex)) - - self.closure_(config, configs, closureBusy, collectPredicates, fullCtx, depth, treatEofAsEpsilon) - - # Do the actual work of walking epsilon edges# - def closure_(self, config:ATNConfig, configs:ATNConfigSet, closureBusy:set, collectPredicates:bool, fullCtx:bool, depth:int, treatEofAsEpsilon:bool): - p = config.state - # optimization - if not p.epsilonOnlyTransitions: - configs.add(config, self.mergeCache) - # make sure to not return here, because EOF transitions can act as - # both epsilon transitions and non-epsilon transitions. - - first = True - for t in p.transitions: - if first: - first = False - if self.canDropLoopEntryEdgeInLeftRecursiveRule(config): - continue - - continueCollecting = collectPredicates and not isinstance(t, ActionTransition) - c = self.getEpsilonTarget(config, t, continueCollecting, depth == 0, fullCtx, treatEofAsEpsilon) - if c is not None: - newDepth = depth - if isinstance( config.state, RuleStopState): - # target fell off end of rule; mark resulting c as having dipped into outer context - # We can't get here if incoming config was rule stop and we had context - # track how far we dip into outer context. Might - # come in handy and we avoid evaluating context dependent - # preds if this is > 0. 
- if self._dfa is not None and self._dfa.precedenceDfa: - if t.outermostPrecedenceReturn == self._dfa.atnStartState.ruleIndex: - c.precedenceFilterSuppressed = True - c.reachesIntoOuterContext += 1 - if c in closureBusy: - # avoid infinite recursion for right-recursive rules - continue - closureBusy.add(c) - configs.dipsIntoOuterContext = True # TODO: can remove? only care when we add to set per middle of this method - newDepth -= 1 - if ParserATNSimulator.debug: - print("dips into outer ctx: " + str(c)) - else: - if not t.isEpsilon: - if c in closureBusy: - # avoid infinite recursion for EOF* and EOF+ - continue - closureBusy.add(c) - if isinstance(t, RuleTransition): - # latch when newDepth goes negative - once we step out of the entry context we can't return - if newDepth >= 0: - newDepth += 1 - - self.closureCheckingStopState(c, configs, closureBusy, continueCollecting, fullCtx, newDepth, treatEofAsEpsilon) - - - - # Implements first-edge (loop entry) elimination as an optimization - # during closure operations. See antlr/antlr4#1398. - # - # The optimization is to avoid adding the loop entry config when - # the exit path can only lead back to the same - # StarLoopEntryState after popping context at the rule end state - # (traversing only epsilon edges, so we're still in closure, in - # this same rule). - # - # We need to detect any state that can reach loop entry on - # epsilon w/o exiting rule. We don't have to look at FOLLOW - # links, just ensure that all stack tops for config refer to key - # states in LR rule. - # - # To verify we are in the right situation we must first check - # closure is at a StarLoopEntryState generated during LR removal. - # Then we check that each stack top of context is a return state - # from one of these cases: - # - # 1. 'not' expr, '(' type ')' expr. The return state points at loop entry state - # 2. expr op expr. The return state is the block end of internal block of (...)* - # 3. 'between' expr 'and' expr. The return state of 2nd expr reference. - # That state points at block end of internal block of (...)*. - # 4. expr '?' expr ':' expr. The return state points at block end, - # which points at loop entry state. - # - # If any is true for each stack top, then closure does not add a - # config to the current config set for edge[0], the loop entry branch. - # - # Conditions fail if any context for the current config is: - # - # a. empty (we'd fall out of expr to do a global FOLLOW which could - # even be to some weird spot in expr) or, - # b. lies outside of expr or, - # c. lies within expr but at a state not the BlockEndState - # generated during LR removal - # - # Do we need to evaluate predicates ever in closure for this case? - # - # No. Predicates, including precedence predicates, are only - # evaluated when computing a DFA start state. I.e., only before - # the lookahead (but not parser) consumes a token. - # - # There are no epsilon edges allowed in LR rule alt blocks or in - # the "primary" part (ID here). If closure is in - # StarLoopEntryState any lookahead operation will have consumed a - # token as there are no epsilon-paths that lead to - # StarLoopEntryState. We do not have to evaluate predicates - # therefore if we are in the generated StarLoopEntryState of a LR - # rule. Note that when making a prediction starting at that - # decision point, decision d=2, compute-start-state performs - # closure starting at edges[0], edges[1] emanating from - # StarLoopEntryState. 
That means it is not performing closure on - # StarLoopEntryState during compute-start-state. - # - # How do we know this always gives same prediction answer? - # - # Without predicates, loop entry and exit paths are ambiguous - # upon remaining input +b (in, say, a+b). Either paths lead to - # valid parses. Closure can lead to consuming + immediately or by - # falling out of this call to expr back into expr and loop back - # again to StarLoopEntryState to match +b. In this special case, - # we choose the more efficient path, which is to take the bypass - # path. - # - # The lookahead language has not changed because closure chooses - # one path over the other. Both paths lead to consuming the same - # remaining input during a lookahead operation. If the next token - # is an operator, lookahead will enter the choice block with - # operators. If it is not, lookahead will exit expr. Same as if - # closure had chosen to enter the choice block immediately. - # - # Closure is examining one config (some loopentrystate, some alt, - # context) which means it is considering exactly one alt. Closure - # always copies the same alt to any derived configs. - # - # How do we know this optimization doesn't mess up precedence in - # our parse trees? - # - # Looking through expr from left edge of stat only has to confirm - # that an input, say, a+b+c; begins with any valid interpretation - # of an expression. The precedence actually doesn't matter when - # making a decision in stat seeing through expr. It is only when - # parsing rule expr that we must use the precedence to get the - # right interpretation and, hence, parse tree. - # - # @since 4.6 - # - def canDropLoopEntryEdgeInLeftRecursiveRule(self, config): - # return False - p = config.state - # First check to see if we are in StarLoopEntryState generated during - # left-recursion elimination. For efficiency, also check if - # the context has an empty stack case. If so, it would mean - # global FOLLOW so we can't perform optimization - # Are we the special loop entry/exit state? or SLL wildcard - if p.stateType != ATNState.STAR_LOOP_ENTRY \ - or not p.isPrecedenceDecision \ - or config.context.isEmpty() \ - or config.context.hasEmptyPath(): - return False - - # Require all return states to return back to the same rule - # that p is in. - numCtxs = len(config.context) - for i in range(0, numCtxs): # for each stack context - returnState = self.atn.states[config.context.getReturnState(i)] - if returnState.ruleIndex != p.ruleIndex: - return False - - decisionStartState = p.transitions[0].target - blockEndStateNum = decisionStartState.endState.stateNumber - blockEndState = self.atn.states[blockEndStateNum] - - # Verify that the top of each stack context leads to loop entry/exit - # state through epsilon edges and w/o leaving rule. 
- for i in range(0, numCtxs): # for each stack context - returnStateNumber = config.context.getReturnState(i) - returnState = self.atn.states[returnStateNumber] - # all states must have single outgoing epsilon edge - if len(returnState.transitions) != 1 or not returnState.transitions[0].isEpsilon: - return False - - # Look for prefix op case like 'not expr', (' type ')' expr - returnStateTarget = returnState.transitions[0].target - if returnState.stateType == ATNState.BLOCK_END and returnStateTarget is p: - continue - - # Look for 'expr op expr' or case where expr's return state is block end - # of (...)* internal block; the block end points to loop back - # which points to p but we don't need to check that - if returnState is blockEndState: - continue - - # Look for ternary expr ? expr : expr. The return state points at block end, - # which points at loop entry state - if returnStateTarget is blockEndState: - continue - - # Look for complex prefix 'between expr and expr' case where 2nd expr's - # return state points at block end state of (...)* internal block - if returnStateTarget.stateType == ATNState.BLOCK_END \ - and len(returnStateTarget.transitions) == 1 \ - and returnStateTarget.transitions[0].isEpsilon \ - and returnStateTarget.transitions[0].target is p: - continue - - # anything else ain't conforming - return False - - return True - - - def getRuleName(self, index:int): - if self.parser is not None and index>=0: - return self.parser.ruleNames[index] - else: - return "" - - epsilonTargetMethods = dict() - epsilonTargetMethods[Transition.RULE] = lambda sim, config, t, collectPredicates, inContext, fullCtx, treatEofAsEpsilon: \ - sim.ruleTransition(config, t) - epsilonTargetMethods[Transition.PRECEDENCE] = lambda sim, config, t, collectPredicates, inContext, fullCtx, treatEofAsEpsilon: \ - sim.precedenceTransition(config, t, collectPredicates, inContext, fullCtx) - epsilonTargetMethods[Transition.PREDICATE] = lambda sim, config, t, collectPredicates, inContext, fullCtx, treatEofAsEpsilon: \ - sim.predTransition(config, t, collectPredicates, inContext, fullCtx) - epsilonTargetMethods[Transition.ACTION] = lambda sim, config, t, collectPredicates, inContext, fullCtx, treatEofAsEpsilon: \ - sim.actionTransition(config, t) - epsilonTargetMethods[Transition.EPSILON] = lambda sim, config, t, collectPredicates, inContext, fullCtx, treatEofAsEpsilon: \ - ATNConfig(state=t.target, config=config) - epsilonTargetMethods[Transition.ATOM] = lambda sim, config, t, collectPredicates, inContext, fullCtx, treatEofAsEpsilon: \ - ATNConfig(state=t.target, config=config) if treatEofAsEpsilon and t.matches(Token.EOF, 0, 1) else None - epsilonTargetMethods[Transition.RANGE] = lambda sim, config, t, collectPredicates, inContext, fullCtx, treatEofAsEpsilon: \ - ATNConfig(state=t.target, config=config) if treatEofAsEpsilon and t.matches(Token.EOF, 0, 1) else None - epsilonTargetMethods[Transition.SET] = lambda sim, config, t, collectPredicates, inContext, fullCtx, treatEofAsEpsilon: \ - ATNConfig(state=t.target, config=config) if treatEofAsEpsilon and t.matches(Token.EOF, 0, 1) else None - - def getEpsilonTarget(self, config:ATNConfig, t:Transition, collectPredicates:bool, inContext:bool, fullCtx:bool, treatEofAsEpsilon:bool): - m = self.epsilonTargetMethods.get(t.serializationType, None) - if m is None: - return None - else: - return m(self, config, t, collectPredicates, inContext, fullCtx, treatEofAsEpsilon) - - def actionTransition(self, config:ATNConfig, t:ActionTransition): - if 
ParserATNSimulator.debug: - print("ACTION edge " + str(t.ruleIndex) + ":" + str(t.actionIndex)) - return ATNConfig(state=t.target, config=config) - - def precedenceTransition(self, config:ATNConfig, pt:PrecedencePredicateTransition, collectPredicates:bool, inContext:bool, fullCtx:bool): - if ParserATNSimulator.debug: - print("PRED (collectPredicates=" + str(collectPredicates) + ") " + - str(pt.precedence) + ">=_p, ctx dependent=true") - if self.parser is not None: - print("context surrounding pred is " + str(self.parser.getRuleInvocationStack())) - - c = None - if collectPredicates and inContext: - if fullCtx: - # In full context mode, we can evaluate predicates on-the-fly - # during closure, which dramatically reduces the size of - # the config sets. It also obviates the need to test predicates - # later during conflict resolution. - currentPosition = self._input.index - self._input.seek(self._startIndex) - predSucceeds = pt.getPredicate().eval(self.parser, self._outerContext) - self._input.seek(currentPosition) - if predSucceeds: - c = ATNConfig(state=pt.target, config=config) # no pred context - else: - newSemCtx = andContext(config.semanticContext, pt.getPredicate()) - c = ATNConfig(state=pt.target, semantic=newSemCtx, config=config) - else: - c = ATNConfig(state=pt.target, config=config) - - if ParserATNSimulator.debug: - print("config from pred transition=" + str(c)) - return c - - def predTransition(self, config:ATNConfig, pt:PredicateTransition, collectPredicates:bool, inContext:bool, fullCtx:bool): - if ParserATNSimulator.debug: - print("PRED (collectPredicates=" + str(collectPredicates) + ") " + str(pt.ruleIndex) + - ":" + str(pt.predIndex) + ", ctx dependent=" + str(pt.isCtxDependent)) - if self.parser is not None: - print("context surrounding pred is " + str(self.parser.getRuleInvocationStack())) - - c = None - if collectPredicates and (not pt.isCtxDependent or (pt.isCtxDependent and inContext)): - if fullCtx: - # In full context mode, we can evaluate predicates on-the-fly - # during closure, which dramatically reduces the size of - # the config sets. It also obviates the need to test predicates - # later during conflict resolution. - currentPosition = self._input.index - self._input.seek(self._startIndex) - predSucceeds = pt.getPredicate().eval(self.parser, self._outerContext) - self._input.seek(currentPosition) - if predSucceeds: - c = ATNConfig(state=pt.target, config=config) # no pred context - else: - newSemCtx = andContext(config.semanticContext, pt.getPredicate()) - c = ATNConfig(state=pt.target, semantic=newSemCtx, config=config) - else: - c = ATNConfig(state=pt.target, config=config) - - if ParserATNSimulator.debug: - print("config from pred transition=" + str(c)) - return c - - def ruleTransition(self, config:ATNConfig, t:RuleTransition): - if ParserATNSimulator.debug: - print("CALL rule " + self.getRuleName(t.target.ruleIndex) + ", ctx=" + str(config.context)) - returnState = t.followState - newContext = SingletonPredictionContext.create(config.context, returnState.stateNumber) - return ATNConfig(state=t.target, context=newContext, config=config ) - - def getConflictingAlts(self, configs:ATNConfigSet): - altsets = PredictionMode.getConflictingAltSubsets(configs) - return PredictionMode.getAlts(altsets) - - # Sam pointed out a problem with the previous definition, v3, of - # ambiguous states. If we have another state associated with conflicting - # alternatives, we should keep going. For example, the following grammar - # - # s : (ID | ID ID?) 
';' ; - # - # When the ATN simulation reaches the state before ';', it has a DFA - # state that looks like: [12|1|[], 6|2|[], 12|2|[]]. Naturally - # 12|1|[] and 12|2|[] conflict, but we cannot stop processing this node - # because alternative to has another way to continue, via [6|2|[]]. - # The key is that we have a single state that has config's only associated - # with a single alternative, 2, and crucially the state transitions - # among the configurations are all non-epsilon transitions. That means - # we don't consider any conflicts that include alternative 2. So, we - # ignore the conflict between alts 1 and 2. We ignore a set of - # conflicting alts when there is an intersection with an alternative - # associated with a single alt state in the state→config-list map. - # - # It's also the case that we might have two conflicting configurations but - # also a 3rd nonconflicting configuration for a different alternative: - # [1|1|[], 1|2|[], 8|3|[]]. This can come about from grammar: - # - # a : A | A | A B ; - # - # After matching input A, we reach the stop state for rule A, state 1. - # State 8 is the state right before B. Clearly alternatives 1 and 2 - # conflict and no amount of further lookahead will separate the two. - # However, alternative 3 will be able to continue and so we do not - # stop working on this state. In the previous example, we're concerned - # with states associated with the conflicting alternatives. Here alt - # 3 is not associated with the conflicting configs, but since we can continue - # looking for input reasonably, I don't declare the state done. We - # ignore a set of conflicting alts when we have an alternative - # that we still need to pursue. - # - - def getConflictingAltsOrUniqueAlt(self, configs:ATNConfigSet): - conflictingAlts = None - if configs.uniqueAlt!= ATN.INVALID_ALT_NUMBER: - conflictingAlts = set() - conflictingAlts.add(configs.uniqueAlt) - else: - conflictingAlts = configs.conflictingAlts - return conflictingAlts - - def getTokenName(self, t:int): - if t==Token.EOF: - return "EOF" - if self.parser is not None and \ - self.parser.literalNames is not None and \ - t < len(self.parser.literalNames): - return self.parser.literalNames[t] + "<" + str(t) + ">" - else: - return str(t) - - def getLookaheadName(self, input:TokenStream): - return self.getTokenName(input.LA(1)) - - # Used for debugging in adaptivePredict around execATN but I cut - # it out for clarity now that alg. works well. We can leave this - # "dead" code for a bit. - # - def dumpDeadEndConfigs(self, nvae:NoViableAltException): - print("dead end configs: ") - for c in nvae.getDeadEndConfigs(): - trans = "no edges" - if len(c.state.transitions)>0: - t = c.state.transitions[0] - if isinstance(t, AtomTransition): - trans = "Atom "+ self.getTokenName(t.label) - elif isinstance(t, SetTransition): - neg = isinstance(t, NotSetTransition) - trans = ("~" if neg else "")+"Set "+ str(t.set) - print(c.toString(self.parser, True) + ":" + trans, file=sys.stderr) - - def noViableAlt(self, input:TokenStream, outerContext:ParserRuleContext, configs:ATNConfigSet, startIndex:int): - return NoViableAltException(self.parser, input, input.get(startIndex), input.LT(1), configs, outerContext) - - def getUniqueAlt(self, configs:ATNConfigSet): - alt = ATN.INVALID_ALT_NUMBER - for c in configs: - if alt == ATN.INVALID_ALT_NUMBER: - alt = c.alt # found first alt - elif c.alt!=alt: - return ATN.INVALID_ALT_NUMBER - return alt - - # - # Add an edge to the DFA, if possible. 
This method calls - # {@link #addDFAState} to ensure the {@code to} state is present in the - # DFA. If {@code from} is {@code null}, or if {@code t} is outside the - # range of edges that can be represented in the DFA tables, this method - # returns without adding the edge to the DFA. - # - #

- # If {@code to} is {@code null}, this method returns {@code null}.
- # Otherwise, this method returns the {@link DFAState} returned by calling
- # {@link #addDFAState} for the {@code to} state.
    - # - # @param dfa The DFA - # @param from The source state for the edge - # @param t The input symbol - # @param to The target state for the edge - # - # @return If {@code to} is {@code null}, this method returns {@code null}; - # otherwise this method returns the result of calling {@link #addDFAState} - # on {@code to} - # - def addDFAEdge(self, dfa:DFA, from_:DFAState, t:int, to:DFAState): - if ParserATNSimulator.debug: - print("EDGE " + str(from_) + " -> " + str(to) + " upon " + self.getTokenName(t)) - - if to is None: - return None - - to = self.addDFAState(dfa, to) # used existing if possible not incoming - if from_ is None or t < -1 or t > self.atn.maxTokenType: - return to - - if from_.edges is None: - from_.edges = [None] * (self.atn.maxTokenType + 2) - from_.edges[t+1] = to # connect - - if ParserATNSimulator.debug: - names = None if self.parser is None else self.parser.literalNames - print("DFA=\n" + dfa.toString(names)) - - return to - - # - # Add state {@code D} to the DFA if it is not already present, and return - # the actual instance stored in the DFA. If a state equivalent to {@code D} - # is already in the DFA, the existing state is returned. Otherwise this - # method returns {@code D} after adding it to the DFA. - # - #

- # If {@code D} is {@link #ERROR}, this method returns {@link #ERROR} and
- # does not change the DFA.
    - # - # @param dfa The dfa - # @param D The DFA state to add - # @return The state stored in the DFA. This will be either the existing - # state if {@code D} is already in the DFA, or {@code D} itself if the - # state was not already present. - # - def addDFAState(self, dfa:DFA, D:DFAState): - if D is self.ERROR: - return D - - - existing = dfa.states.get(D, None) - if existing is not None: - return existing - - D.stateNumber = len(dfa.states) - if not D.configs.readonly: - D.configs.optimizeConfigs(self) - D.configs.setReadonly(True) - dfa.states[D] = D - if ParserATNSimulator.debug: - print("adding new DFA state: " + str(D)) - return D - - def reportAttemptingFullContext(self, dfa:DFA, conflictingAlts:set, configs:ATNConfigSet, startIndex:int, stopIndex:int): - if ParserATNSimulator.debug or ParserATNSimulator.retry_debug: - print("reportAttemptingFullContext decision=" + str(dfa.decision) + ":" + str(configs) + - ", input=" + self.parser.getTokenStream().getText(startIndex, stopIndex)) - if self.parser is not None: - self.parser.getErrorListenerDispatch().reportAttemptingFullContext(self.parser, dfa, startIndex, stopIndex, conflictingAlts, configs) - - def reportContextSensitivity(self, dfa:DFA, prediction:int, configs:ATNConfigSet, startIndex:int, stopIndex:int): - if ParserATNSimulator.debug or ParserATNSimulator.retry_debug: - print("reportContextSensitivity decision=" + str(dfa.decision) + ":" + str(configs) + - ", input=" + self.parser.getTokenStream().getText(startIndex, stopIndex)) - if self.parser is not None: - self.parser.getErrorListenerDispatch().reportContextSensitivity(self.parser, dfa, startIndex, stopIndex, prediction, configs) - - # If context sensitive parsing, we know it's ambiguity not conflict# - def reportAmbiguity(self, dfa:DFA, D:DFAState, startIndex:int, stopIndex:int, - exact:bool, ambigAlts:set, configs:ATNConfigSet ): - if ParserATNSimulator.debug or ParserATNSimulator.retry_debug: -# ParserATNPathFinder finder = new ParserATNPathFinder(parser, atn); -# int i = 1; -# for (Transition t : dfa.atnStartState.transitions) { -# print("ALT "+i+"="); -# print(startIndex+".."+stopIndex+", len(input)="+parser.getInputStream().size()); -# TraceTree path = finder.trace(t.target, parser.getContext(), (TokenStream)parser.getInputStream(), -# startIndex, stopIndex); -# if ( path!=null ) { -# print("path = "+path.toStringTree()); -# for (TraceTree leaf : path.leaves) { -# List states = path.getPathToNode(leaf); -# print("states="+states); -# } -# } -# i++; -# } - print("reportAmbiguity " + str(ambigAlts) + ":" + str(configs) + - ", input=" + self.parser.getTokenStream().getText(startIndex, stopIndex)) - if self.parser is not None: - self.parser.getErrorListenerDispatch().reportAmbiguity(self.parser, dfa, startIndex, stopIndex, exact, ambigAlts, configs) - diff --git a/spaces/aryadytm/photo-colorization/Dockerfile b/spaces/aryadytm/photo-colorization/Dockerfile deleted file mode 100644 index 995e8e56f44f9160085b7699985c953b89c9caa0..0000000000000000000000000000000000000000 --- a/spaces/aryadytm/photo-colorization/Dockerfile +++ /dev/null @@ -1,9 +0,0 @@ -FROM pytorch/pytorch:latest - -WORKDIR /app - -COPY . . 
- -RUN pip install -r requirements.txt - -CMD [ "streamlit", "run", "app.py" ] \ No newline at end of file diff --git a/spaces/awacke1/CardGame/app.py b/spaces/awacke1/CardGame/app.py deleted file mode 100644 index 4a3c36fbeb89bf46952418951f871d3d556104ed..0000000000000000000000000000000000000000 --- a/spaces/awacke1/CardGame/app.py +++ /dev/null @@ -1,138 +0,0 @@ -from typing import List, Tuple - -import streamlit as st -from treys import Card, Deck -from treys import Evaluator - -from models import Hand, SC - - -# credits -# Treys python package -# https://github.com/ihendley/treys -# StreamLit - statefull webapp -# https://blog.streamlit.io/session-state-for-streamlit/ - - -def app(): - placeholder = st.empty() - # INIT - st.title('Poker play') - if 'count' not in st.session_state: - st.session_state.count = 0 - st.session_state.sc = SC(st) - # END INIT - - # Streamlit runs from top to bottom on every iteraction so - st.session_state.count += 1 - placeholder.write('INIT:') - st.write('Count of screen refresh = ', st.session_state.count) - - # shuffle cards - btn_shuffle_cards = st.button('shuffle cards') - if btn_shuffle_cards: - _sc: SC = st.session_state.sc - _sc.deck = Deck() - _sc.isTableSet = True - placeholder.write('INIT:TableSet') - - btnTablePreflop = st.button('Table : preFlop') - if btnTablePreflop: - print('-btn preFlop') - _sc: SC = st.session_state.sc - # check pre condition - # draw preFlop to players - for index, player in enumerate(_sc.players_dict.values()): - player.hand.card1 = _sc.deck.draw(1) - for index, player in enumerate(_sc.players_dict.values()): - player.hand.card2 = _sc.deck.draw(1) - - st.session_state.isPreFlop = True - placeholder.write('INIT:TableSet:PreFlop') - - btn_flop = st.button('Table : flop') - if btn_flop: - print('-btn Flop') - _sc: SC = st.session_state.sc - _sc.board_flop = _sc.deck.draw(3) - _sc.isFlop = True - print('-btn Flop done') - placeholder.write('INIT:TableSet:PreFlop:Flop') - - btn_turn = st.button('Table : turn') - if btn_turn: - _sc: SC = st.session_state.sc - _sc.board_turn = _sc.deck.draw(1) - _sc.isTurn = True - placeholder.write('INIT:TableSet:PreFlop:Flop:Turn') - - btn_river = st.button('Table : river') - if btn_river: - _sc: SC = st.session_state.sc - _sc.board_river = _sc.deck.draw(1) - _sc.isRiver = True - placeholder.write('INIT:TableSet:PreFlop:Flop:Turn:River') - - btn_show = st.button('Table : show') - if btn_show: - placeholder.write('INIT:TableSet:PreFlop:Flop:Turn:River===showing') - layTable(st.session_state.sc) - - -def eval_print(msg: str, player_num: int, player_hand: Hand, rankDesc: str, score: int): - st.write(msg, 'Player ', player_num, ' hand is ' - , Card.print_pretty_cards([player_hand.card1, player_hand.card2]) - , 'desc:', rankDesc - , 'score:', score - ) - st.write(' ') - - -def eval_player(board: List[Card], hand: Hand, evaluator: Evaluator) -> Tuple[int, str]: - eval_score = evaluator.evaluate(board, [hand.card1, hand.card2]) # Important Treys looks at hand as array of int - rank_class = evaluator.get_rank_class(eval_score) - descClass = evaluator.class_to_string(rank_class) # description of the rank class - return eval_score, descClass - - -def layTable(sc: SC): - _board = None - if sc.isFlop: - st.write('Flop = ', Card.print_pretty_cards(sc.board_flop)) - _board = sc.board_flop.copy() - if sc.isTurn: - st.write('Turn = ', Card.print_pretty_card(sc.board_turn)) - _board.append(sc.board_turn) - if sc.isRiver: - st.write('River = ', Card.print_pretty_card(sc.board_river)) - 
_board.append(sc.board_river) - - # eval now - evaluator = Evaluator() - - for index, player in enumerate(sc.players_dict.values()): - tuple2 = eval_player(_board, player.hand, evaluator) - player.score_abs = tuple2[0] - player.score_desc = tuple2[1] - - from collections import OrderedDict - # order the players : key is auto generated 0,1,2,3..., value is x[1].score_abs ->translated to -- 1,2,3... - orderd = OrderedDict(sorted(sc.players_dict.items(), key=lambda x: x[1].score_abs)) - # get the tuple example (3) - winner = next(iter(orderd.items())) - - for index, (k, player) in enumerate(sc.players_dict.items()): - if k == winner[0]: - # eval_print(msg, player_num, player_hand, rankDesc): - eval_print('winner', player.pos, player.hand, player.score_desc, player.score_abs) - else: - eval_print(' - ', player.pos, player.hand, player.score_desc, player.score_abs) - - # print summy on the console - # evaluator.hand_summary(_board, all-hands) - - st.write('game ended') - - -# run -app() diff --git a/spaces/awacke1/Embedded_Space_Test/README.md b/spaces/awacke1/Embedded_Space_Test/README.md deleted file mode 100644 index 14fa6aa5b0810466e1d95653d41b0324ad14a299..0000000000000000000000000000000000000000 --- a/spaces/awacke1/Embedded_Space_Test/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: Embedded Space Test -emoji: 🏃 -colorFrom: yellow -colorTo: pink -sdk: static -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/awacke1/VoiceGPT15/README.md b/spaces/awacke1/VoiceGPT15/README.md deleted file mode 100644 index b231e6b131a7f30b61c7734dafc32909589e0aff..0000000000000000000000000000000000000000 --- a/spaces/awacke1/VoiceGPT15/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: VoiceGPT15 -emoji: 🌍 -colorFrom: red -colorTo: indigo -sdk: streamlit -sdk_version: 1.21.0 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/awacke1/google-bigbird-pegasus-large-pubmed/app.py b/spaces/awacke1/google-bigbird-pegasus-large-pubmed/app.py deleted file mode 100644 index 9213d962138e78206cf3fd0f29940da3358d2299..0000000000000000000000000000000000000000 --- a/spaces/awacke1/google-bigbird-pegasus-large-pubmed/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/google/bigbird-pegasus-large-pubmed").launch() \ No newline at end of file diff --git a/spaces/awinml/alpaca-cpp/README.md b/spaces/awinml/alpaca-cpp/README.md deleted file mode 100644 index 963839bcbb0c297e2a955ee822d0d25614756c1f..0000000000000000000000000000000000000000 --- a/spaces/awinml/alpaca-cpp/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Alpaca CPP -emoji: ⚡ -colorFrom: pink -colorTo: red -sdk: gradio -sdk_version: 3.28.1 -app_file: app.py -pinned: false -duplicated_from: awinml/alpaca-cpp-python ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/banana-projects/coref/README.md b/spaces/banana-projects/coref/README.md deleted file mode 100644 index b9aff200c53ad346dafd45a96b7ebe5d09426621..0000000000000000000000000000000000000000 --- a/spaces/banana-projects/coref/README.md +++ /dev/null @@ -1,9 +0,0 @@ ---- -title: coref -emoji: ⚡ -colorFrom: yellow -colorTo: yellow -sdk: static ---- - -## Coref \ No newline at end of file diff --git 
a/spaces/banana-projects/web3d/node_modules/three/examples/js/geometries/DecalGeometry.js b/spaces/banana-projects/web3d/node_modules/three/examples/js/geometries/DecalGeometry.js deleted file mode 100644 index 09d1e99aa7c29b51684b4f72e11c2c286da506e2..0000000000000000000000000000000000000000 --- a/spaces/banana-projects/web3d/node_modules/three/examples/js/geometries/DecalGeometry.js +++ /dev/null @@ -1,357 +0,0 @@ -/** - * @author Mugen87 / https://github.com/Mugen87 - * @author spite / https://github.com/spite - * - * You can use this geometry to create a decal mesh, that serves different kinds of purposes. - * e.g. adding unique details to models, performing dynamic visual environmental changes or covering seams. - * - * Constructor parameter: - * - * mesh — Any mesh object - * position — Position of the decal projector - * orientation — Orientation of the decal projector - * size — Size of the decal projector - * - * reference: http://blog.wolfire.com/2009/06/how-to-project-decals/ - * - */ - -( function () { - - function DecalGeometry( mesh, position, orientation, size ) { - - THREE.BufferGeometry.call( this ); - - // buffers - - var vertices = []; - var normals = []; - var uvs = []; - - // helpers - - var plane = new THREE.Vector3(); - - // this matrix represents the transformation of the decal projector - - var projectorMatrix = new THREE.Matrix4(); - projectorMatrix.makeRotationFromEuler( orientation ); - projectorMatrix.setPosition( position ); - - var projectorMatrixInverse = new THREE.Matrix4().getInverse( projectorMatrix ); - - // generate buffers - - generate(); - - // build geometry - - this.addAttribute( 'position', new THREE.Float32BufferAttribute( vertices, 3 ) ); - this.addAttribute( 'normal', new THREE.Float32BufferAttribute( normals, 3 ) ); - this.addAttribute( 'uv', new THREE.Float32BufferAttribute( uvs, 2 ) ); - - function generate() { - - var i; - var geometry = new THREE.BufferGeometry(); - var decalVertices = []; - - var vertex = new THREE.Vector3(); - var normal = new THREE.Vector3(); - - // handle different geometry types - - if ( mesh.geometry.isGeometry ) { - - geometry.fromGeometry( mesh.geometry ); - - } else { - - geometry.copy( mesh.geometry ); - - } - - var positionAttribute = geometry.attributes.position; - var normalAttribute = geometry.attributes.normal; - - // first, create an array of 'DecalVertex' objects - // three consecutive 'DecalVertex' objects represent a single face - // - // this data structure will be later used to perform the clipping - - if ( geometry.index !== null ) { - - // indexed BufferGeometry - - var index = geometry.index; - - for ( i = 0; i < index.count; i ++ ) { - - vertex.fromBufferAttribute( positionAttribute, index.getX( i ) ); - normal.fromBufferAttribute( normalAttribute, index.getX( i ) ); - - pushDecalVertex( decalVertices, vertex, normal ); - - } - - } else { - - // non-indexed BufferGeometry - - for ( i = 0; i < positionAttribute.count; i ++ ) { - - vertex.fromBufferAttribute( positionAttribute, i ); - normal.fromBufferAttribute( normalAttribute, i ); - - pushDecalVertex( decalVertices, vertex, normal ); - - } - - } - - // second, clip the geometry so that it doesn't extend out from the projector - - decalVertices = clipGeometry( decalVertices, plane.set( 1, 0, 0 ) ); - decalVertices = clipGeometry( decalVertices, plane.set( - 1, 0, 0 ) ); - decalVertices = clipGeometry( decalVertices, plane.set( 0, 1, 0 ) ); - decalVertices = clipGeometry( decalVertices, plane.set( 0, - 1, 0 ) ); - decalVertices = clipGeometry( 
decalVertices, plane.set( 0, 0, 1 ) ); - decalVertices = clipGeometry( decalVertices, plane.set( 0, 0, - 1 ) ); - - // third, generate final vertices, normals and uvs - - for ( i = 0; i < decalVertices.length; i ++ ) { - - var decalVertex = decalVertices[ i ]; - - // create texture coordinates (we are still in projector space) - - uvs.push( - 0.5 + ( decalVertex.position.x / size.x ), - 0.5 + ( decalVertex.position.y / size.y ) - ); - - // transform the vertex back to world space - - decalVertex.position.applyMatrix4( projectorMatrix ); - - // now create vertex and normal buffer data - - vertices.push( decalVertex.position.x, decalVertex.position.y, decalVertex.position.z ); - normals.push( decalVertex.normal.x, decalVertex.normal.y, decalVertex.normal.z ); - - } - - } - - function pushDecalVertex( decalVertices, vertex, normal ) { - - // transform the vertex to world space, then to projector space - - vertex.applyMatrix4( mesh.matrixWorld ); - vertex.applyMatrix4( projectorMatrixInverse ); - - decalVertices.push( new DecalVertex( vertex.clone(), normal.clone() ) ); - - } - - function clipGeometry( inVertices, plane ) { - - var outVertices = []; - - var s = 0.5 * Math.abs( size.dot( plane ) ); - - // a single iteration clips one face, - // which consists of three consecutive 'DecalVertex' objects - - for ( var i = 0; i < inVertices.length; i += 3 ) { - - var v1Out, v2Out, v3Out, total = 0; - var nV1, nV2, nV3, nV4; - - var d1 = inVertices[ i + 0 ].position.dot( plane ) - s; - var d2 = inVertices[ i + 1 ].position.dot( plane ) - s; - var d3 = inVertices[ i + 2 ].position.dot( plane ) - s; - - v1Out = d1 > 0; - v2Out = d2 > 0; - v3Out = d3 > 0; - - // calculate, how many vertices of the face lie outside of the clipping plane - - total = ( v1Out ? 1 : 0 ) + ( v2Out ? 1 : 0 ) + ( v3Out ? 1 : 0 ); - - switch ( total ) { - - case 0: { - - // the entire face lies inside of the plane, no clipping needed - - outVertices.push( inVertices[ i ] ); - outVertices.push( inVertices[ i + 1 ] ); - outVertices.push( inVertices[ i + 2 ] ); - break; - - } - - case 1: { - - // one vertex lies outside of the plane, perform clipping - - if ( v1Out ) { - - nV1 = inVertices[ i + 1 ]; - nV2 = inVertices[ i + 2 ]; - nV3 = clip( inVertices[ i ], nV1, plane, s ); - nV4 = clip( inVertices[ i ], nV2, plane, s ); - - } - - if ( v2Out ) { - - nV1 = inVertices[ i ]; - nV2 = inVertices[ i + 2 ]; - nV3 = clip( inVertices[ i + 1 ], nV1, plane, s ); - nV4 = clip( inVertices[ i + 1 ], nV2, plane, s ); - - outVertices.push( nV3 ); - outVertices.push( nV2.clone() ); - outVertices.push( nV1.clone() ); - - outVertices.push( nV2.clone() ); - outVertices.push( nV3.clone() ); - outVertices.push( nV4 ); - break; - - } - - if ( v3Out ) { - - nV1 = inVertices[ i ]; - nV2 = inVertices[ i + 1 ]; - nV3 = clip( inVertices[ i + 2 ], nV1, plane, s ); - nV4 = clip( inVertices[ i + 2 ], nV2, plane, s ); - - } - - outVertices.push( nV1.clone() ); - outVertices.push( nV2.clone() ); - outVertices.push( nV3 ); - - outVertices.push( nV4 ); - outVertices.push( nV3.clone() ); - outVertices.push( nV2.clone() ); - - break; - - } - - case 2: { - - // two vertices lies outside of the plane, perform clipping - - if ( ! v1Out ) { - - nV1 = inVertices[ i ].clone(); - nV2 = clip( nV1, inVertices[ i + 1 ], plane, s ); - nV3 = clip( nV1, inVertices[ i + 2 ], plane, s ); - outVertices.push( nV1 ); - outVertices.push( nV2 ); - outVertices.push( nV3 ); - - } - - if ( ! 
v2Out ) { - - nV1 = inVertices[ i + 1 ].clone(); - nV2 = clip( nV1, inVertices[ i + 2 ], plane, s ); - nV3 = clip( nV1, inVertices[ i ], plane, s ); - outVertices.push( nV1 ); - outVertices.push( nV2 ); - outVertices.push( nV3 ); - - } - - if ( ! v3Out ) { - - nV1 = inVertices[ i + 2 ].clone(); - nV2 = clip( nV1, inVertices[ i ], plane, s ); - nV3 = clip( nV1, inVertices[ i + 1 ], plane, s ); - outVertices.push( nV1 ); - outVertices.push( nV2 ); - outVertices.push( nV3 ); - - } - - break; - - } - - case 3: { - - // the entire face lies outside of the plane, so let's discard the corresponding vertices - - break; - - } - - } - - } - - return outVertices; - - } - - function clip( v0, v1, p, s ) { - - var d0 = v0.position.dot( p ) - s; - var d1 = v1.position.dot( p ) - s; - - var s0 = d0 / ( d0 - d1 ); - - var v = new DecalVertex( - new THREE.Vector3( - v0.position.x + s0 * ( v1.position.x - v0.position.x ), - v0.position.y + s0 * ( v1.position.y - v0.position.y ), - v0.position.z + s0 * ( v1.position.z - v0.position.z ) - ), - new THREE.Vector3( - v0.normal.x + s0 * ( v1.normal.x - v0.normal.x ), - v0.normal.y + s0 * ( v1.normal.y - v0.normal.y ), - v0.normal.z + s0 * ( v1.normal.z - v0.normal.z ) - ) - ); - - // need to clip more values (texture coordinates)? do it this way: - // intersectpoint.value = a.value + s * ( b.value - a.value ); - - return v; - - } - - } - - DecalGeometry.prototype = Object.create( THREE.BufferGeometry.prototype ); - DecalGeometry.prototype.constructor = DecalGeometry; - - // helper - - function DecalVertex( position, normal ) { - - this.position = position; - this.normal = normal; - - } - - DecalVertex.prototype.clone = function () { - - return new DecalVertex( this.position.clone(), this.normal.clone() ); - - }; - - // export - - THREE.DecalGeometry = DecalGeometry; - -} )(); diff --git a/spaces/banana-projects/web3d/node_modules/three/examples/js/loaders/sea3d/SEA3DDraco.js b/spaces/banana-projects/web3d/node_modules/three/examples/js/loaders/sea3d/SEA3DDraco.js deleted file mode 100644 index a13f5265a26aa59995344a93725e4feba4a71bc1..0000000000000000000000000000000000000000 --- a/spaces/banana-projects/web3d/node_modules/three/examples/js/loaders/sea3d/SEA3DDraco.js +++ /dev/null @@ -1,212 +0,0 @@ -/** - * SEA3D - Google Draco - * @author Sunag / http://www.sunag.com.br/ - */ - -'use strict'; - -// -// Lossy Compression -// - -SEA3D.GeometryDraco = function ( name, data, sea3d ) { - - this.name = name; - this.data = data; - this.sea3d = sea3d; - - var attrib = data.readUShort(), - i; - - this.isBig = ( attrib & 1 ) !== 0; - - data.readVInt = this.isBig ? data.readUInt : data.readUShort; - - this.groups = []; - - if ( attrib & 32 ) { - - this.uv = []; - this.uv.length = data.readUByte(); - - } - - if ( attrib & 1024 ) { - - var numGroups = data.readUByte(), - groupOffset = 0; - - for ( i = 0; i < numGroups; i ++ ) { - - var groupLength = data.readVInt() * 3; - - this.groups.push( { - start: groupOffset, - count: groupLength, - } ); - - groupOffset += groupLength; - - } - - } - - var module = SEA3D.GeometryDraco.getModule(), - dracoData = new Int8Array( data.concat( data.position, data.bytesAvailable ).buffer ); - - var decoder = new module.Decoder(); - - var buffer = new module.DecoderBuffer(); - buffer.Init( dracoData, dracoData.length ); - - var mesh = new module.Mesh(); - - var decodingStatus = decoder.DecodeBufferToMesh( buffer, mesh ); - - if ( ! 
decodingStatus.ok() ) { - - data.position += 5; // jump "DRACO" magic string - var version = data.readUByte() + '.' + data.readUByte(); // draco version - - console.error( "SEA3D Draco", version, "decoding failed:", decodingStatus.error_msg(), "You may need update 'draco_decoder.js'." ); - - // use an empty geometry - this.vertex = new Float32Array(); - - return; - - } - - var index = 0; - - this.vertex = this.readFloat32Array( module, decoder, mesh, index ++ ); - - if ( attrib & 4 ) this.normal = this.readFloat32Array( module, decoder, mesh, index ++ ); - - if ( attrib & 32 ) { - - for ( i = 0; i < this.uv.length; i ++ ) { - - this.uv[ i ] = this.readFloat32Array( module, decoder, mesh, index ++ ); - - } - - } - - if ( attrib & 64 ) { - - this.jointPerVertex = decoder.GetAttribute( mesh, index ).num_components(); - - this.joint = this.readUint16Array( module, decoder, mesh, index ++ ); - this.weight = this.readFloat32Array( module, decoder, mesh, index ++ ); - - } - - this.indexes = this.readIndices( module, decoder, mesh ); - - module.destroy( mesh ); - module.destroy( buffer ); - module.destroy( decoder ); - -}; - -SEA3D.GeometryDraco.getModule = function () { - - if ( ! this.module ) { - - this.module = DracoDecoderModule(); - - } - - return this.module; - -}; - -SEA3D.GeometryDraco.prototype.type = "sdrc"; - -SEA3D.GeometryDraco.prototype.readIndices = function ( module, decoder, mesh ) { - - var numFaces = mesh.num_faces(), - numIndices = numFaces * 3, - indices = new ( numIndices >= 0xFFFE ? Uint32Array : Uint16Array )( numIndices ); - - var ia = new module.DracoInt32Array(); - - for ( var i = 0; i < numFaces; ++ i ) { - - decoder.GetFaceFromMesh( mesh, i, ia ); - - var index = i * 3; - - indices[ index ] = ia.GetValue( 0 ); - indices[ index + 1 ] = ia.GetValue( 1 ); - indices[ index + 2 ] = ia.GetValue( 2 ); - - } - - module.destroy( ia ); - - return indices; - -}; - -SEA3D.GeometryDraco.prototype.readFloat32Array = function ( module, decoder, mesh, attrib ) { - - var attribute = decoder.GetAttribute( mesh, attrib ), - numPoints = mesh.num_points(); - - var dracoArray = new module.DracoFloat32Array(); - decoder.GetAttributeFloatForAllPoints( mesh, attribute, dracoArray ); - - var size = numPoints * attribute.num_components(), - output = new Float32Array( size ); - - for ( var i = 0; i < size; ++ i ) { - - output[ i ] = dracoArray.GetValue( i ); - - } - - module.destroy( dracoArray ); - - return output; - -}; - -SEA3D.GeometryDraco.prototype.readUint16Array = function ( module, decoder, mesh, attrib, type ) { - - var attribute = decoder.GetAttribute( mesh, attrib ), - numPoints = mesh.num_points(); - - var dracoArray = new module.DracoUInt16Array(); - decoder.GetAttributeUInt16ForAllPoints( mesh, attribute, dracoArray ); - - var size = numPoints * attribute.num_components(), - output = new Uint16Array( size ); - - for ( var i = 0; i < size; ++ i ) { - - output[ i ] = dracoArray.GetValue( i ); - - } - - module.destroy( dracoArray ); - - return output; - -}; - -// -// Extension -// - -THREE.SEA3D.EXTENSIONS_LOADER.push( { - - setTypeRead: function () { - - this.file.addClass( SEA3D.GeometryDraco, true ); - this.file.typeRead[ SEA3D.GeometryDraco.prototype.type ] = this.readGeometryBuffer; - - } - -} ); diff --git a/spaces/banana-projects/web3d/node_modules/three/examples/js/pmrem/PMREMGenerator.js b/spaces/banana-projects/web3d/node_modules/three/examples/js/pmrem/PMREMGenerator.js deleted file mode 100644 index 
8fcc3a41280124e1be2492da1ec1c15ff06450ff..0000000000000000000000000000000000000000 --- a/spaces/banana-projects/web3d/node_modules/three/examples/js/pmrem/PMREMGenerator.js +++ /dev/null @@ -1,291 +0,0 @@ -/** - * @author Prashant Sharma / spidersharma03 - * @author Ben Houston / bhouston, https://clara.io - * - * To avoid cube map seams, I create an extra pixel around each face. This way when the cube map is - * sampled by an application later(with a little care by sampling the centre of the texel), the extra 1 border - * of pixels makes sure that there is no seams artifacts present. This works perfectly for cubeUV format as - * well where the 6 faces can be arranged in any manner whatsoever. - * Code in the beginning of fragment shader's main function does this job for a given resolution. - * Run Scene_PMREM_Test.html in the examples directory to see the sampling from the cube lods generated - * by this class. - */ - -THREE.PMREMGenerator = ( function () { - - var shader = getShader(); - var camera = new THREE.OrthographicCamera( - 1, 1, 1, - 1, 0.0, 1000 ); - var scene = new THREE.Scene(); - var planeMesh = new THREE.Mesh( new THREE.PlaneBufferGeometry( 2, 2, 0 ), shader ); - planeMesh.material.side = THREE.DoubleSide; - scene.add( planeMesh ); - scene.add( camera ); - - var PMREMGenerator = function ( sourceTexture, samplesPerLevel, resolution ) { - - this.sourceTexture = sourceTexture; - this.resolution = ( resolution !== undefined ) ? resolution : 256; // NODE: 256 is currently hard coded in the glsl code for performance reasons - this.samplesPerLevel = ( samplesPerLevel !== undefined ) ? samplesPerLevel : 32; - - var monotonicEncoding = ( this.sourceTexture.encoding === THREE.LinearEncoding ) || - ( this.sourceTexture.encoding === THREE.GammaEncoding ) || ( this.sourceTexture.encoding === THREE.sRGBEncoding ); - - this.sourceTexture.minFilter = ( monotonicEncoding ) ? THREE.LinearFilter : THREE.NearestFilter; - this.sourceTexture.magFilter = ( monotonicEncoding ) ? THREE.LinearFilter : THREE.NearestFilter; - this.sourceTexture.generateMipmaps = this.sourceTexture.generateMipmaps && monotonicEncoding; - - this.cubeLods = []; - - var size = this.resolution; - var params = { - format: this.sourceTexture.format, - magFilter: this.sourceTexture.magFilter, - minFilter: this.sourceTexture.minFilter, - type: this.sourceTexture.type, - generateMipmaps: this.sourceTexture.generateMipmaps, - anisotropy: this.sourceTexture.anisotropy, - encoding: this.sourceTexture.encoding - }; - - // how many LODs fit in the given CubeUV Texture. - this.numLods = Math.log( size ) / Math.log( 2 ) - 2; // IE11 doesn't support Math.log2 - - for ( var i = 0; i < this.numLods; i ++ ) { - - var renderTarget = new THREE.WebGLRenderTargetCube( size, size, params ); - renderTarget.texture.name = "PMREMGenerator.cube" + i; - this.cubeLods.push( renderTarget ); - size = Math.max( 16, size / 2 ); - - } - - }; - - PMREMGenerator.prototype = { - - constructor: PMREMGenerator, - - /* - * Prashant Sharma / spidersharma03: More thought and work is needed here. - * Right now it's a kind of a hack to use the previously convolved map to convolve the current one. - * I tried to use the original map to convolve all the lods, but for many textures(specially the high frequency) - * even a high number of samples(1024) dosen't lead to satisfactory results. 
- * By using the previous convolved maps, a lower number of samples are generally sufficient(right now 32, which - * gives okay results unless we see the reflection very carefully, or zoom in too much), however the math - * goes wrong as the distribution function tries to sample a larger area than what it should be. So I simply scaled - * the roughness by 0.9(totally empirical) to try to visually match the original result. - * The condition "if(i <5)" is also an attemt to make the result match the original result. - * This method requires the most amount of thinking I guess. Here is a paper which we could try to implement in future:: - * https://developer.nvidia.com/gpugems/GPUGems3/gpugems3_ch20.html - */ - update: function ( renderer ) { - - // Texture should only be flipped for CubeTexture, not for - // a Texture created via THREE.WebGLRenderTargetCube. - var tFlip = ( this.sourceTexture.isCubeTexture ) ? - 1 : 1; - - shader.defines[ 'SAMPLES_PER_LEVEL' ] = this.samplesPerLevel; - shader.uniforms[ 'faceIndex' ].value = 0; - shader.uniforms[ 'envMap' ].value = this.sourceTexture; - shader.envMap = this.sourceTexture; - shader.needsUpdate = true; - - var gammaInput = renderer.gammaInput; - var gammaOutput = renderer.gammaOutput; - var toneMapping = renderer.toneMapping; - var toneMappingExposure = renderer.toneMappingExposure; - var currentRenderTarget = renderer.getRenderTarget(); - - renderer.toneMapping = THREE.LinearToneMapping; - renderer.toneMappingExposure = 1.0; - renderer.gammaInput = false; - renderer.gammaOutput = false; - - for ( var i = 0; i < this.numLods; i ++ ) { - - var r = i / ( this.numLods - 1 ); - shader.uniforms[ 'roughness' ].value = r * 0.9; // see comment above, pragmatic choice - // Only apply the tFlip for the first LOD - shader.uniforms[ 'tFlip' ].value = ( i == 0 ) ? 
tFlip : 1; - var size = this.cubeLods[ i ].width; - shader.uniforms[ 'mapSize' ].value = size; - this.renderToCubeMapTarget( renderer, this.cubeLods[ i ] ); - - if ( i < 5 ) shader.uniforms[ 'envMap' ].value = this.cubeLods[ i ].texture; - - } - - renderer.setRenderTarget( currentRenderTarget ); - renderer.toneMapping = toneMapping; - renderer.toneMappingExposure = toneMappingExposure; - renderer.gammaInput = gammaInput; - renderer.gammaOutput = gammaOutput; - - }, - - renderToCubeMapTarget: function ( renderer, renderTarget ) { - - for ( var i = 0; i < 6; i ++ ) { - - this.renderToCubeMapTargetFace( renderer, renderTarget, i ); - - } - - }, - - renderToCubeMapTargetFace: function ( renderer, renderTarget, faceIndex ) { - - shader.uniforms[ 'faceIndex' ].value = faceIndex; - renderer.setRenderTarget( renderTarget, faceIndex ); - renderer.clear(); - renderer.render( scene, camera ); - - }, - - dispose: function () { - - for ( var i = 0, l = this.cubeLods.length; i < l; i ++ ) { - - this.cubeLods[ i ].dispose(); - - } - - }, - - }; - - function getShader() { - - var shaderMaterial = new THREE.ShaderMaterial( { - - defines: { - "SAMPLES_PER_LEVEL": 20, - }, - - uniforms: { - "faceIndex": { value: 0 }, - "roughness": { value: 0.5 }, - "mapSize": { value: 0.5 }, - "envMap": { value: null }, - "tFlip": { value: - 1 }, - }, - - vertexShader: - "varying vec2 vUv;\n\ - void main() {\n\ - vUv = uv;\n\ - gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );\n\ - }", - - fragmentShader: - "#include \n\ - varying vec2 vUv;\n\ - uniform int faceIndex;\n\ - uniform float roughness;\n\ - uniform samplerCube envMap;\n\ - uniform float mapSize;\n\ - uniform float tFlip;\n\ - \n\ - float GGXRoughnessToBlinnExponent( const in float ggxRoughness ) {\n\ - float a = ggxRoughness + 0.0001;\n\ - a *= a;\n\ - return ( 2.0 / a - 2.0 );\n\ - }\n\ - vec3 ImportanceSamplePhong(vec2 uv, mat3 vecSpace, float specPow) {\n\ - float phi = uv.y * 2.0 * PI;\n\ - float cosTheta = pow(1.0 - uv.x, 1.0 / (specPow + 1.0));\n\ - float sinTheta = sqrt(1.0 - cosTheta * cosTheta);\n\ - vec3 sampleDir = vec3(cos(phi) * sinTheta, sin(phi) * sinTheta, cosTheta);\n\ - return vecSpace * sampleDir;\n\ - }\n\ - vec3 ImportanceSampleGGX( vec2 uv, mat3 vecSpace, float Roughness )\n\ - {\n\ - float a = Roughness * Roughness;\n\ - float Phi = 2.0 * PI * uv.x;\n\ - float CosTheta = sqrt( (1.0 - uv.y) / ( 1.0 + (a*a - 1.0) * uv.y ) );\n\ - float SinTheta = sqrt( 1.0 - CosTheta * CosTheta );\n\ - return vecSpace * vec3(SinTheta * cos( Phi ), SinTheta * sin( Phi ), CosTheta);\n\ - }\n\ - mat3 matrixFromVector(vec3 n) {\n\ - float a = 1.0 / (1.0 + n.z);\n\ - float b = -n.x * n.y * a;\n\ - vec3 b1 = vec3(1.0 - n.x * n.x * a, b, -n.x);\n\ - vec3 b2 = vec3(b, 1.0 - n.y * n.y * a, -n.y);\n\ - return mat3(b1, b2, n);\n\ - }\n\ - \n\ - vec4 testColorMap(float Roughness) {\n\ - vec4 color;\n\ - if(faceIndex == 0)\n\ - color = vec4(1.0,0.0,0.0,1.0);\n\ - else if(faceIndex == 1)\n\ - color = vec4(0.0,1.0,0.0,1.0);\n\ - else if(faceIndex == 2)\n\ - color = vec4(0.0,0.0,1.0,1.0);\n\ - else if(faceIndex == 3)\n\ - color = vec4(1.0,1.0,0.0,1.0);\n\ - else if(faceIndex == 4)\n\ - color = vec4(0.0,1.0,1.0,1.0);\n\ - else\n\ - color = vec4(1.0,0.0,1.0,1.0);\n\ - color *= ( 1.0 - Roughness );\n\ - return color;\n\ - }\n\ - void main() {\n\ - vec3 sampleDirection;\n\ - vec2 uv = vUv*2.0 - 1.0;\n\ - float offset = -1.0/mapSize;\n\ - const float a = -1.0;\n\ - const float b = 1.0;\n\ - float c = -1.0 + offset;\n\ - float d = 1.0 - offset;\n\ - float 
bminusa = b - a;\n\ - uv.x = (uv.x - a)/bminusa * d - (uv.x - b)/bminusa * c;\n\ - uv.y = (uv.y - a)/bminusa * d - (uv.y - b)/bminusa * c;\n\ - if (faceIndex==0) {\n\ - sampleDirection = vec3(1.0, -uv.y, -uv.x);\n\ - } else if (faceIndex==1) {\n\ - sampleDirection = vec3(-1.0, -uv.y, uv.x);\n\ - } else if (faceIndex==2) {\n\ - sampleDirection = vec3(uv.x, 1.0, uv.y);\n\ - } else if (faceIndex==3) {\n\ - sampleDirection = vec3(uv.x, -1.0, -uv.y);\n\ - } else if (faceIndex==4) {\n\ - sampleDirection = vec3(uv.x, -uv.y, 1.0);\n\ - } else {\n\ - sampleDirection = vec3(-uv.x, -uv.y, -1.0);\n\ - }\n\ - vec3 correctedDirection = vec3( tFlip * sampleDirection.x, sampleDirection.yz );\n\ - mat3 vecSpace = matrixFromVector( normalize( correctedDirection ) );\n\ - vec3 rgbColor = vec3(0.0);\n\ - const int NumSamples = SAMPLES_PER_LEVEL;\n\ - vec3 vect;\n\ - float weight = 0.0;\n\ - for( int i = 0; i < NumSamples; i ++ ) {\n\ - float sini = sin(float(i));\n\ - float cosi = cos(float(i));\n\ - float r = rand(vec2(sini, cosi));\n\ - vect = ImportanceSampleGGX(vec2(float(i) / float(NumSamples), r), vecSpace, roughness);\n\ - float dotProd = dot(vect, normalize(sampleDirection));\n\ - weight += dotProd;\n\ - vec3 color = envMapTexelToLinear(textureCube(envMap, vect)).rgb;\n\ - rgbColor.rgb += color;\n\ - }\n\ - rgbColor /= float(NumSamples);\n\ - //rgbColor = testColorMap( roughness ).rgb;\n\ - gl_FragColor = linearToOutputTexel( vec4( rgbColor, 1.0 ) );\n\ - }", - - blending: THREE.NoBlending - - } ); - - shaderMaterial.type = 'PMREMGenerator'; - - return shaderMaterial; - - } - - return PMREMGenerator; - -} )(); diff --git a/spaces/banana-projects/web3d/node_modules/three/src/renderers/shaders/ShaderChunk/skinning_vertex.glsl.js b/spaces/banana-projects/web3d/node_modules/three/src/renderers/shaders/ShaderChunk/skinning_vertex.glsl.js deleted file mode 100644 index a07418c4798f42882771ec720eac742c8f7e841f..0000000000000000000000000000000000000000 --- a/spaces/banana-projects/web3d/node_modules/three/src/renderers/shaders/ShaderChunk/skinning_vertex.glsl.js +++ /dev/null @@ -1,15 +0,0 @@ -export default /* glsl */` -#ifdef USE_SKINNING - - vec4 skinVertex = bindMatrix * vec4( transformed, 1.0 ); - - vec4 skinned = vec4( 0.0 ); - skinned += boneMatX * skinVertex * skinWeight.x; - skinned += boneMatY * skinVertex * skinWeight.y; - skinned += boneMatZ * skinVertex * skinWeight.z; - skinned += boneMatW * skinVertex * skinWeight.w; - - transformed = ( bindMatrixInverse * skinned ).xyz; - -#endif -`; diff --git a/spaces/beihai/GFPGAN-V1.3-whole-image/.history/app_20220327001520.py b/spaces/beihai/GFPGAN-V1.3-whole-image/.history/app_20220327001520.py deleted file mode 100644 index 43c7248137807e6458b0e62c42481571795ea9de..0000000000000000000000000000000000000000 --- a/spaces/beihai/GFPGAN-V1.3-whole-image/.history/app_20220327001520.py +++ /dev/null @@ -1,65 +0,0 @@ -import os -#os.system("pip install gfpgan") - -#os.system("pip freeze") -#os.system("wget https://github.com/TencentARC/GFPGAN/releases/download/v0.2.0/GFPGANCleanv1-NoCE-C2.pth -P .") -import random -import gradio as gr -from PIL import Image -import torch -# torch.hub.download_url_to_file('https://upload.wikimedia.org/wikipedia/commons/thumb/a/ab/Abraham_Lincoln_O-77_matte_collodion_print.jpg/1024px-Abraham_Lincoln_O-77_matte_collodion_print.jpg', 'lincoln.jpg') -# torch.hub.download_url_to_file('https://upload.wikimedia.org/wikipedia/commons/5/50/Albert_Einstein_%28Nobel%29.png', 'einstein.png') -# 
torch.hub.download_url_to_file('https://upload.wikimedia.org/wikipedia/commons/thumb/9/9d/Thomas_Edison2.jpg/1024px-Thomas_Edison2.jpg', 'edison.jpg') -# torch.hub.download_url_to_file('https://upload.wikimedia.org/wikipedia/commons/thumb/a/a9/Henry_Ford_1888.jpg/1024px-Henry_Ford_1888.jpg', 'Henry.jpg') -# torch.hub.download_url_to_file('https://upload.wikimedia.org/wikipedia/commons/thumb/0/06/Frida_Kahlo%2C_by_Guillermo_Kahlo.jpg/800px-Frida_Kahlo%2C_by_Guillermo_Kahlo.jpg', 'Frida.jpg') - - - - -import cv2 -import glob -import numpy as np -from basicsr.utils import imwrite -from gfpgan import GFPGANer - -import warnings -warnings.warn('The unoptimized RealESRGAN is very slow on CPU. We do not use it. ' - 'If you really want to use it, please modify the corresponding codes.') -bg_upsampler = None - - - -# set up GFPGAN restorer -restorer = GFPGANer( - model_path='experiments/pretrained_models/GFPGANv1.3.pth', - upscale=2, - arch='clean', - channel_multiplier=2, - bg_upsampler=bg_upsampler) - - -def inference(img): - input_img = cv2.imread(img, cv2.IMREAD_COLOR) - cropped_faces, restored_faces, restored_img = restorer.enhance( - input_img, has_aligned=False, only_center_face=False, paste_back=True) - - return Image.fromarray(restored_img[0][:,:,::-1]) - -title = "GFP-GAN" -description = "Gradio demo for GFP-GAN: Towards Real-World Blind Face Restoration with Generative Facial Prior. To use it, simply upload your image, or click one of the examples to load them. Read more at the links below. Please click submit only once" -article = "

Towards Real-World Blind Face Restoration with Generative Facial Prior | Github Repo
    " -gr.Interface( - inference, - [gr.inputs.Image(type="filepath", label="Input")], - gr.outputs.Image(type="pil", label="Output"), - title=title, - description=description, - article=article, - examples=[ - ['lincoln.jpg'], - ['einstein.png'], - ['edison.jpg'], - ['Henry.jpg'], - ['Frida.jpg'] - ] - ).launch(enable_queue=True,cache_examples=True) \ No newline at end of file diff --git a/spaces/beihai/GFPGAN-V1.3-whole-image/.history/app_20220327001612.py b/spaces/beihai/GFPGAN-V1.3-whole-image/.history/app_20220327001612.py deleted file mode 100644 index d057f3626e03f07ee3a3d9911acd2cdf7fe11a3b..0000000000000000000000000000000000000000 --- a/spaces/beihai/GFPGAN-V1.3-whole-image/.history/app_20220327001612.py +++ /dev/null @@ -1,65 +0,0 @@ -import os -#os.system("pip install gfpgan") - -#os.system("pip freeze") -#os.system("wget https://github.com/TencentARC/GFPGAN/releases/download/v0.2.0/GFPGANCleanv1-NoCE-C2.pth -P .") -import random -import gradio as gr -from PIL import Image -import torch -# torch.hub.download_url_to_file('https://upload.wikimedia.org/wikipedia/commons/thumb/a/ab/Abraham_Lincoln_O-77_matte_collodion_print.jpg/1024px-Abraham_Lincoln_O-77_matte_collodion_print.jpg', 'lincoln.jpg') -# torch.hub.download_url_to_file('https://upload.wikimedia.org/wikipedia/commons/5/50/Albert_Einstein_%28Nobel%29.png', 'einstein.png') -# torch.hub.download_url_to_file('https://upload.wikimedia.org/wikipedia/commons/thumb/9/9d/Thomas_Edison2.jpg/1024px-Thomas_Edison2.jpg', 'edison.jpg') -# torch.hub.download_url_to_file('https://upload.wikimedia.org/wikipedia/commons/thumb/a/a9/Henry_Ford_1888.jpg/1024px-Henry_Ford_1888.jpg', 'Henry.jpg') -# torch.hub.download_url_to_file('https://upload.wikimedia.org/wikipedia/commons/thumb/0/06/Frida_Kahlo%2C_by_Guillermo_Kahlo.jpg/800px-Frida_Kahlo%2C_by_Guillermo_Kahlo.jpg', 'Frida.jpg') - - - - -import cv2 -import glob -import numpy as np -from basicsr.utils import imwrite -from gfpgan import GFPGANer - -import warnings -warnings.warn('The unoptimized RealESRGAN is very slow on CPU. We do not use it. ' - 'If you really want to use it, please modify the corresponding codes.') -bg_upsampler = None - - - -# set up GFPGAN restorer -restorer = GFPGANer( - model_path='experiments/pretrained_models/GFPGANv1.3.pth', - upscale=2, - arch='clean', - channel_multiplier=2, - bg_upsampler=bg_upsampler) - - -def inference(img): - input_img = cv2.imread(img, cv2.IMREAD_COLOR) - cropped_faces, restored_faces, restored_img = restorer.enhance( - input_img, has_aligned=False, only_center_face=False, paste_back=True) - - return Image.fromarray(restored_img[0]) - -title = "GFP-GAN" -description = "Gradio demo for GFP-GAN: Towards Real-World Blind Face Restoration with Generative Facial Prior. To use it, simply upload your image, or click one of the examples to load them. Read more at the links below. Please click submit only once" -article = "

Towards Real-World Blind Face Restoration with Generative Facial Prior | Github Repo
    " -gr.Interface( - inference, - [gr.inputs.Image(type="filepath", label="Input")], - gr.outputs.Image(type="pil", label="Output"), - title=title, - description=description, - article=article, - examples=[ - ['lincoln.jpg'], - ['einstein.png'], - ['edison.jpg'], - ['Henry.jpg'], - ['Frida.jpg'] - ] - ).launch(enable_queue=True,cache_examples=True) \ No newline at end of file diff --git a/spaces/beihai/PDF-Table-Extractor/.history/test_20220621135627.py b/spaces/beihai/PDF-Table-Extractor/.history/test_20220621135627.py deleted file mode 100644 index 0f2eb3ee053299afff773b29da51c12c14398c7f..0000000000000000000000000000000000000000 --- a/spaces/beihai/PDF-Table-Extractor/.history/test_20220621135627.py +++ /dev/null @@ -1,29 +0,0 @@ -#-*- coding : utf-8-*- -import base64 -from subprocess import STDOUT -import streamlit as st -import pandas as pd -import camelot as cam # extracting tables from PDFs - -st.title("PDF Table Extractor") -input_pdf = st.file_uploader(label = "", type = 'pdf') -background = st.selectbox("表格线条是否透明",(False,True)) -page_number = st.text_input("请填写表格所在PDF页码,eg: 3,1-3,2-end,all", value = 1) - -if input_pdf is not None: - # byte object into a PDF file - with open("input.pdf", "wb") as f: - base64_pdf = base64.b64encode(input_pdf.read()).decode('utf-8') - f.write(base64.b64decode(base64_pdf)) - f.close() - - tables_all= cam.read_pdf("input.pdf", pages=page_number, process_background=background) - result_all = pd.ExcelWriter("result.xlsx", engine='xlsxwriter') - for i in range(0,len(tables_all)): - table = tables_all[i].df - sheetname = str(i) - table.to_excel(result_all, sheetname,index=False) - result_all.save() - with open(result_all,'rb') as f: - st.download_button('抽取完成, 点击下载!', f,file_name="result.xlsx",mime="application/vnd.ms-excel") - \ No newline at end of file diff --git a/spaces/bergum/commerce-demo/proxy.py b/spaces/bergum/commerce-demo/proxy.py deleted file mode 100644 index f47086deda0d48708cb0d2e9b799fa90672e3d39..0000000000000000000000000000000000000000 --- a/spaces/bergum/commerce-demo/proxy.py +++ /dev/null @@ -1,35 +0,0 @@ -#!/usr/bin/env python3 -import http.server -import http.client -import socketserver - -PORT = 8000 -PROXY_PORT = 8080 - -class ProxyRequestHandler(http.server.SimpleHTTPRequestHandler): - def do_GET(self): - # Forward GET request to the proxy port with "/site" appended to the path - if self.path == '/' or self.path.startswith("/?"): - url = 'http://localhost:{}{}'.format(PROXY_PORT, '/site' + self.path) - else: - url = 'http://localhost:{}{}'.format(PROXY_PORT, self.path) - headers = dict(self.headers) - if 'Host' in headers: - del headers['Host'] # Remove "Host" header to avoid "HTTP/1.1 400 Bad Request" error - if 'host' in headers: - del headers['host'] # Remove "Host" header to avoid "HTTP/1.1 400 Bad Request" error - conn = http.client.HTTPConnection('localhost', PROXY_PORT) - print(headers) - conn.request('GET', url, headers=headers) - response = conn.getresponse() - self.send_response(response.status) - for header, value in response.getheaders(): - self.send_header(header, value) - self.end_headers() - self.wfile.write(response.read()) - -if __name__ == '__main__': - # Start the HTTP server - with socketserver.TCPServer(("", PORT), ProxyRequestHandler) as httpd: - print("Server running on port", PORT) - httpd.serve_forever() diff --git a/spaces/besarismaili/fastai_pet_classifier/app.py b/spaces/besarismaili/fastai_pet_classifier/app.py deleted file mode 100644 index 
7da3645535f2c6ebe738e9e23cca51742e6912c0..0000000000000000000000000000000000000000 --- a/spaces/besarismaili/fastai_pet_classifier/app.py +++ /dev/null @@ -1,20 +0,0 @@ -import gradio as gr -from fastai.vision.all import * -import skimage - -learn = load_learner('export.pkl') - -labels = learn.dls.vocab -def predict(img): - img = PILImage.create(img) - pred,pred_idx,probs = learn.predict(img) - return {labels[i]: float(probs[i]) for i in range(len(labels))} - -title = "Pet Breed Classifier" -description = "A pet breed classifier trained on the Oxford Pets dataset with fastai. Created as a demo for Gradio and HuggingFace Spaces." -article="

    Blog post

    " -examples = ['siamese.jpg'] -interpretation='default' -enable_queue=True - -gr.Interface(fn=predict,inputs=gr.inputs.Image(shape=(512, 512)),outputs=gr.outputs.Label(num_top_classes=3),title=title,description=description,article=article,examples=examples,interpretation=interpretation,enable_queue=enable_queue).launch() diff --git a/spaces/bhaskartripathi/Text2Question/run_qg.py b/spaces/bhaskartripathi/Text2Question/run_qg.py deleted file mode 100644 index 1d27bc27d2698bea1f4fd7cdec54cfacff3290dd..0000000000000000000000000000000000000000 --- a/spaces/bhaskartripathi/Text2Question/run_qg.py +++ /dev/null @@ -1,73 +0,0 @@ -import argparse -import numpy as np -from questiongenerator import QuestionGenerator -from questiongenerator import print_qa - -def main(): - parser = argparse.ArgumentParser() - parser.add_argument( - "--text_dir", - default=None, - type=str, - required=True, - help="The text that will be used as context for question generation.", - ) - parser.add_argument( - "--model_dir", - default=None, - type=str, - help="The folder that the trained model checkpoints are in.", - ) - parser.add_argument( - "--num_questions", - default=10, - type=int, - help="The desired number of questions to generate.", - ) - parser.add_argument( - "--answer_style", - default="all", - type=str, - help="The desired type of answers. Choose from ['all', 'sentences', 'multiple_choice']", - ) - parser.add_argument( - "--show_answers", - default='True', - type=parse_bool_string, - help="Whether or not you want the answers to be visible. Choose from ['True', 'False']", - ) - parser.add_argument( - "--use_qa_eval", - default='True', - type=parse_bool_string, - help="Whether or not you want the generated questions to be filtered for quality. Choose from ['True', 'False']", - ) - args = parser.parse_args() - - with open(args.text_dir, 'r') as file: - text_file = file.read() - - qg = QuestionGenerator(args.model_dir) - - qa_list = qg.generate( - text_file, - num_questions=int(args.num_questions), - answer_style=args.answer_style, - use_evaluator=args.use_qa_eval - ) - print_qa(qa_list, show_answers=args.show_answers) - -# taken from https://stackoverflow.com/questions/15008758/parsing-boolean-values-with-argparse -def parse_bool_string(s): - if isinstance(s, bool): - return s - if s.lower() in ('yes', 'true', 't', 'y', '1'): - return True - elif s.lower() in ('no', 'false', 'f', 'n', '0'): - return False - else: - raise argparse.ArgumentTypeError('Boolean value expected.') - - -if __name__ == "__main__": - main() \ No newline at end of file diff --git a/spaces/bioriAsaeru/text-to-voice/Gundam Extreme Vs WORK Full Boost Pc Free Download.md b/spaces/bioriAsaeru/text-to-voice/Gundam Extreme Vs WORK Full Boost Pc Free Download.md deleted file mode 100644 index 6332f23d38f395ad9f84fed7bc578517d19764f8..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/Gundam Extreme Vs WORK Full Boost Pc Free Download.md +++ /dev/null @@ -1,25 +0,0 @@ -
    -

    How to Play Gundam Extreme VS Full Boost on PC for Free

    -

    Gundam Extreme VS Full Boost is a popular arcade game that features 2v2 battles between various mobile suits from the Gundam franchise. It was released for the PlayStation 3 in 2014, but it is not officially available for PC. However, there is a way to play it on your computer using an emulator called RPCS3.

    -

    RPCS3 is a free and open-source PlayStation 3 emulator that can run many PS3 games on Windows, Linux and BSD systems. It is still in development, so some games may not work perfectly or at all. However, Gundam Extreme VS Full Boost is one of the games that has been reported to be playable at 60 FPS with some tweaks and settings.

    -

    In this article, we will show you how to download and install RPCS3, how to get the game files for Gundam Extreme VS Full Boost, and how to configure the emulator to run the game smoothly. We will also provide some tips and tricks to improve your gaming experience.

    -

    Step 1: Download and Install RPCS3

    -

    The first thing you need to do is download and install RPCS3 on your PC. You can get the latest version of the emulator from its official website: https://rpcs3.net/download. There are two options: a standalone build, which you extract to a folder of your choice, or an AppImage, a single executable file that you can run from anywhere on Linux. If you use the Windows build, also make sure you have the latest Microsoft Visual C++ Redistributable installed on your system.

    -

    Once you have downloaded RPCS3, run it and you will see the emulator's main window, where you can manage and launch your games and configure your settings. Before you can play any game, you need to install the PS3 firmware in RPCS3. The firmware is required for the emulator to work, and the legal way to obtain it is from Sony's official website: https://www.playstation.com/en-us/support/hardware/ps3/system-software/. Download the latest update file (PS3UPDAT.PUP) and save it somewhere on your PC.

    -

    Then, go to File > Install Firmware and select the PS3UPDAT.PUP file you downloaded. This will install the firmware on RPCS3 and enable you to play PS3 games. You will see a message saying "Success!" when it is done.

    -

    Step 2: Get the Game Files for Gundam Extreme VS Full Boost

    -

    The next step is to get the game files for Gundam Extreme VS Full Boost. There are two ways to do this: either by dumping your own copy of the game from your PS3 console or by downloading it from the internet. The first option is legal and recommended, but it requires you to have a PS3 console with a working Blu-ray drive and a tool called PS3 ISO Rebuilder. The second option is illegal and not recommended, but it is easier and faster.

    -

    If you choose to dump your own copy of the game, you will need to follow these steps (a small FTP script sketch follows the list):

    -
      -
    • Insert your Gundam Extreme VS Full Boost disc into your PS3 console.
    • -
    • Connect your PS3 console to your PC via an Ethernet cable or a Wi-Fi network.
    • -
    • Enable FTP server on your PS3 console using a homebrew application such as Multiman or Webman.
    • -
    • On your PC, open an FTP client such as FileZilla or WinSCP and connect to your PS3 console using its IP address and port number (usually 21).
    • -
    • Navigate to /dev_bdvd/PS3_GAME/ on your PS3 console and copy all the files and folders to a folder on your PC.
    • -
    • Download PS3 ISO Rebuilder from https://www.psx-place.com/resources/ps3-iso-rebuilder.100/ and extract it to a folder on your PC.
    • -
    • Run PS3 ISO Rebuilder.exe and click on the "Create ISO(s)" tab.
    • -
    • Select the

      -
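
    If you prefer to script steps 4 and 5 instead of using a graphical FTP client, the short Python sketch below mirrors /dev_bdvd/PS3_GAME/ to a local folder with the standard-library ftplib module. The console's address (192.168.1.50), the anonymous login, and the assumption that the homebrew FTP server returns bare file names from NLST are placeholders; adjust them to whatever Multiman or Webman reports on your network.

```python
# Sketch of steps 4-5 above: mirror /dev_bdvd/PS3_GAME/ from the console over FTP.
# Assumptions (replace for your setup): the PS3's FTP server runs at 192.168.1.50:21,
# accepts an anonymous login (as Multiman/Webman usually do), and returns bare
# entry names from NLST, which simple homebrew servers typically do.
import os
from ftplib import FTP, error_perm

def mirror(ftp: FTP, remote_dir: str, local_dir: str) -> None:
    """Recursively copy remote_dir from the console into local_dir."""
    os.makedirs(local_dir, exist_ok=True)
    ftp.cwd(remote_dir)
    for name in ftp.nlst():
        if name in (".", ".."):
            continue
        child = f"{remote_dir}/{name}"
        try:
            ftp.cwd(child)                       # succeeds only for directories
            mirror(ftp, child, os.path.join(local_dir, name))
            ftp.cwd(remote_dir)                  # return to the parent after recursing
        except error_perm:
            # not a directory, so download it as a regular file
            with open(os.path.join(local_dir, name), "wb") as fh:
                ftp.retrbinary(f"RETR {name}", fh.write)

ftp = FTP()
ftp.connect("192.168.1.50", 21)   # the console's IP address and port from step 4
ftp.login()                       # anonymous login; pass user/password if required
mirror(ftp, "/dev_bdvd/PS3_GAME", "PS3_GAME")
ftp.quit()
```

    Once the PS3_GAME folder has been copied to your PC, continue with PS3 ISO Rebuilder as described in the remaining steps.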

      -
      -
      \ No newline at end of file diff --git a/spaces/brainblow/beat_remixer/beat_manipulator/metrics.py b/spaces/brainblow/beat_remixer/beat_manipulator/metrics.py deleted file mode 100644 index 37310085efc9318d1d7b19a3d6d467e24996401b..0000000000000000000000000000000000000000 --- a/spaces/brainblow/beat_remixer/beat_manipulator/metrics.py +++ /dev/null @@ -1,40 +0,0 @@ -import numpy as np -from . import effects - -def volume(audio: np.ndarray) -> float: - return np.average(np.abs(audio)) - -def volume_gradient(audio: np.ndarray, number:int = 1) -> float: - audio = effects.gradient(audio, number = number) - return np.average(np.abs(audio)) - -def maximum_high(audio: np.ndarray, number:int = 1) -> float: - audio = effects.gradient(audio, number = number) - return np.max(np.abs(audio)) - -def locate_1st_hit(audio: np.ndarray, number: int = 1): - audio = effects.gradient(audio, number = number) - return np.argmax(audio, axis=1) / len(audio[0]) - -def is_hit(audio: np.ndarray, threshold: float = 0.5, number:int = 1) -> int: - return 1 if maximum_high(audio, number=number) > threshold else 0 - -def hit_at_start(audio: np.ndarray, diff = 0.1) -> int: - return is_hit(audio) * (locate_1st_hit(audio) <= diff) - -def hit_in_middle(audio: np.ndarray, diff = 0.1) -> int: - return is_hit(audio) * ((0.5 - diff) <= locate_1st_hit(audio) <= (0.5 + diff)) - -def hit_at_end(audio: np.ndarray, diff = 0.1) -> int: - return is_hit(audio) * (locate_1st_hit(audio) >= (1-diff)) - -BM_METRICS = { - "v": volume, - "g": volume_gradient, - "m": maximum_high, - "l": locate_1st_hit, - "h": is_hit, - "s": hit_at_start, - "a": hit_in_middle, - "e": hit_at_end, -} \ No newline at end of file diff --git a/spaces/brjathu/HMR2.0/hmr2/models/__init__.py b/spaces/brjathu/HMR2.0/hmr2/models/__init__.py deleted file mode 100644 index 347edebc45eaca55cc143648bf72d38f886b646b..0000000000000000000000000000000000000000 --- a/spaces/brjathu/HMR2.0/hmr2/models/__init__.py +++ /dev/null @@ -1,3 +0,0 @@ -from .smpl_wrapper import SMPL -from .hmr2 import HMR2 -from .discriminator import Discriminator diff --git a/spaces/brjathu/HMR2.0/vendor/detectron2/detectron2/data/samplers/grouped_batch_sampler.py b/spaces/brjathu/HMR2.0/vendor/detectron2/detectron2/data/samplers/grouped_batch_sampler.py deleted file mode 100644 index 5b247730aacd04dd0c752664acde3257c4eddd71..0000000000000000000000000000000000000000 --- a/spaces/brjathu/HMR2.0/vendor/detectron2/detectron2/data/samplers/grouped_batch_sampler.py +++ /dev/null @@ -1,47 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -import numpy as np -from torch.utils.data.sampler import BatchSampler, Sampler - - -class GroupedBatchSampler(BatchSampler): - """ - Wraps another sampler to yield a mini-batch of indices. - It enforces that the batch only contain elements from the same group. - It also tries to provide mini-batches which follows an ordering which is - as close as possible to the ordering from the original sampler. - """ - - def __init__(self, sampler, group_ids, batch_size): - """ - Args: - sampler (Sampler): Base sampler. - group_ids (list[int]): If the sampler produces indices in range [0, N), - `group_ids` must be a list of `N` ints which contains the group id of each sample. - The group ids must be a set of integers in the range [0, num_groups). - batch_size (int): Size of mini-batch. 
- """ - if not isinstance(sampler, Sampler): - raise ValueError( - "sampler should be an instance of " - "torch.utils.data.Sampler, but got sampler={}".format(sampler) - ) - self.sampler = sampler - self.group_ids = np.asarray(group_ids) - assert self.group_ids.ndim == 1 - self.batch_size = batch_size - groups = np.unique(self.group_ids).tolist() - - # buffer the indices of each group until batch size is reached - self.buffer_per_group = {k: [] for k in groups} - - def __iter__(self): - for idx in self.sampler: - group_id = self.group_ids[idx] - group_buffer = self.buffer_per_group[group_id] - group_buffer.append(idx) - if len(group_buffer) == self.batch_size: - yield group_buffer[:] # yield a copy of the list - del group_buffer[:] - - def __len__(self): - raise NotImplementedError("len() of GroupedBatchSampler is not well-defined.") diff --git a/spaces/camileLDJ/allenai-cosmo-xl/README.md b/spaces/camileLDJ/allenai-cosmo-xl/README.md deleted file mode 100644 index 50c2ce299e345215310eed3a7773de052d1f78da..0000000000000000000000000000000000000000 --- a/spaces/camileLDJ/allenai-cosmo-xl/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Allenai Cosmo Xl -emoji: ⚡ -colorFrom: green -colorTo: gray -sdk: streamlit -sdk_version: 1.19.0 -app_file: app.py -pinned: false -license: creativeml-openrail-m ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/captainChan/CaptainChan/losses.py b/spaces/captainChan/CaptainChan/losses.py deleted file mode 100644 index eea99b5dc280b2e4719afe0b3bda0b3faf316327..0000000000000000000000000000000000000000 --- a/spaces/captainChan/CaptainChan/losses.py +++ /dev/null @@ -1,72 +0,0 @@ -from fastai.vision import * - -from modules.model import Model - - -class MultiLosses(nn.Module): - def __init__(self, one_hot=True): - super().__init__() - self.ce = SoftCrossEntropyLoss() if one_hot else torch.nn.CrossEntropyLoss() - self.bce = torch.nn.BCELoss() - - @property - def last_losses(self): - return self.losses - - def _flatten(self, sources, lengths): - return torch.cat([t[:l] for t, l in zip(sources, lengths)]) - - def _merge_list(self, all_res): - if not isinstance(all_res, (list, tuple)): - return all_res - def merge(items): - if isinstance(items[0], torch.Tensor): return torch.cat(items, dim=0) - else: return items[0] - res = dict() - for key in all_res[0].keys(): - items = [r[key] for r in all_res] - res[key] = merge(items) - return res - - def _ce_loss(self, output, gt_labels, gt_lengths, idx=None, record=True): - loss_name = output.get('name') - pt_logits, weight = output['logits'], output['loss_weight'] - - assert pt_logits.shape[0] % gt_labels.shape[0] == 0 - iter_size = pt_logits.shape[0] // gt_labels.shape[0] - if iter_size > 1: - gt_labels = gt_labels.repeat(iter_size, 1, 1) - gt_lengths = gt_lengths.repeat(iter_size) - flat_gt_labels = self._flatten(gt_labels, gt_lengths) - flat_pt_logits = self._flatten(pt_logits, gt_lengths) - - nll = output.get('nll') - if nll is not None: - loss = self.ce(flat_pt_logits, flat_gt_labels, softmax=False) * weight - else: - loss = self.ce(flat_pt_logits, flat_gt_labels) * weight - if record and loss_name is not None: self.losses[f'{loss_name}_loss'] = loss - - return loss - - def forward(self, outputs, *args): - self.losses = {} - if isinstance(outputs, (tuple, list)): - outputs = [self._merge_list(o) for o in outputs] - return sum([self._ce_loss(o, *args) for o in outputs if o['loss_weight'] > 0.]) - else: - return self._ce_loss(outputs, *args, 
record=False) - - -class SoftCrossEntropyLoss(nn.Module): - def __init__(self, reduction="mean"): - super().__init__() - self.reduction = reduction - - def forward(self, input, target, softmax=True): - if softmax: log_prob = F.log_softmax(input, dim=-1) - else: log_prob = torch.log(input) - loss = -(target * log_prob).sum(dim=-1) - if self.reduction == "mean": return loss.mean() - elif self.reduction == "sum": return loss.sum() - else: return loss diff --git a/spaces/captchaboy/FAST-ABINet-OCR/modules/model_alignment.py b/spaces/captchaboy/FAST-ABINet-OCR/modules/model_alignment.py deleted file mode 100644 index 0405c228b3339e5ba0835c33ba56844831c06057..0000000000000000000000000000000000000000 --- a/spaces/captchaboy/FAST-ABINet-OCR/modules/model_alignment.py +++ /dev/null @@ -1,34 +0,0 @@ -import torch -import torch.nn as nn -from fastai.vision import * - -from modules.model import Model, _default_tfmer_cfg - - -class BaseAlignment(Model): - def __init__(self, config): - super().__init__(config) - d_model = ifnone(config.model_alignment_d_model, _default_tfmer_cfg['d_model']) - - self.loss_weight = ifnone(config.model_alignment_loss_weight, 1.0) - self.max_length = config.dataset_max_length + 1 # additional stop token - self.w_att = nn.Linear(2 * d_model, d_model) - self.cls = nn.Linear(d_model, self.charset.num_classes) - - def forward(self, l_feature, v_feature): - """ - Args: - l_feature: (N, T, E) where T is length, N is batch size and d is dim of model - v_feature: (N, T, E) shape the same as l_feature - l_lengths: (N,) - v_lengths: (N,) - """ - f = torch.cat((l_feature, v_feature), dim=2) - f_att = torch.sigmoid(self.w_att(f)) - output = f_att * v_feature + (1 - f_att) * l_feature - - logits = self.cls(output) # (N, T, C) - pt_lengths = self._get_length(logits) - - return {'logits': logits, 'pt_lengths': pt_lengths, 'loss_weight':self.loss_weight, - 'name': 'alignment'} diff --git a/spaces/catgirlss/kittens/README.md b/spaces/catgirlss/kittens/README.md deleted file mode 100644 index e632d0a6d226f0b51aa9921445e15495d9c31bfa..0000000000000000000000000000000000000000 --- a/spaces/catgirlss/kittens/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: kittens -emoji: 💜 -colorFrom: purple -colorTo: blue -sdk: docker -pinned: true ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/ccolas/TastyPiano/src/cocktails/utilities/__init__.py b/spaces/ccolas/TastyPiano/src/cocktails/utilities/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/cedpsam/mistral_openorca_lamacpp/app.py b/spaces/cedpsam/mistral_openorca_lamacpp/app.py deleted file mode 100644 index 177170cc2f315a37b84a0dc9ef518b81726a00d6..0000000000000000000000000000000000000000 --- a/spaces/cedpsam/mistral_openorca_lamacpp/app.py +++ /dev/null @@ -1,124 +0,0 @@ - -import gradio as gr -import os -os.environ["HF_HUB_ENABLE_HF_TRANSFER"]="1" -from langchain.llms import LlamaCpp -from langchain.prompts import PromptTemplate -from langchain.chains import LLMChain -from langchain.callbacks.manager import CallbackManager -from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler -from huggingface_hub import hf_hub_download - -callback_manager = CallbackManager([StreamingStdOutCallbackHandler()]) - -repo_id="TheBloke/Mistral-7B-OpenOrca-GGUF" -model_name="mistral-7b-openorca.Q5_K_M.gguf" - -hf_hub_download(repo_id=repo_id, - 
filename=model_name,local_dir =".") - - -llm = LlamaCpp( - model_path=model_name, - n_ctx=4096, - callback_manager=callback_manager, - verbose=True, # Verbose is required to pass to the callback manager - ) -def format_prompt(message, history): - prompt = "" - for user_prompt, bot_response in history: - prompt += f"<|im_start|>user\n {user_prompt} <|im_end|>\n" - prompt += f"<|im_start|>assistant\n {bot_response}<|im_end|>\n" - prompt += f"<|im_start|>user\n {message} <|im_end|>\n<|im_start|>assistant\n" - return prompt - -def generate( - prompt, history, temperature=0.9, top_p=0.95, max_new_tokens=256,repetition_penalty=1.0, -): - - temperature = float(temperature) - if temperature < 1e-2: - temperature = 1e-2 - top_p = float(top_p) - - - formatted_prompt = format_prompt(prompt, history) - - - # stream = client.text_generation(formatted_prompt, **generate_kwargs, stream=True, details=True, return_full_text=False) - output = "" - output=llm(formatted_prompt, - temperature=temperature, - max_tokens=max_new_tokens, - repeat_penalty=repetition_penalty, - top_p=top_p, - stop=["<|im_end|>","<|im_start|>user"] - ) - # output=formatted_prompt+"ans:"+output - # for response in stream: - # output += response.token.text - # yield output - return output - - -additional_inputs=[ - gr.Slider( - label="Temperature", - value=0.9, - minimum=0.0, - maximum=1.0, - step=0.05, - interactive=True, - info="Higher values produce more diverse outputs", - ), - - gr.Slider( - label="Top-p (nucleus sampling)", - value=0.90, - minimum=0.0, - maximum=1, - step=0.05, - interactive=True, - info="Higher values sample more low-probability tokens", - ), - gr.Slider( - label="Max new tokens", - value=400, - minimum=0, - maximum=1048, - step=64, - interactive=True, - info="The maximum numbers of new tokens", - ), - gr.Slider( - label="Repetition penalty", - value=1.2, - minimum=1.0, - maximum=2.0, - step=0.05, - interactive=True, - info="Penalize repeated tokens", - ) - -] - -css = """ - #mkd { - height: 500px; - overflow: auto; - border: 1px solid #ccc; - } -""" - -with gr.Blocks(css=css) as demo: - gr.HTML("

      Mistral 7B Instruct

      ") - gr.HTML("

      In this demo, you can chat with the Mistral-7B-Instruct model. 💬

      ") - gr.HTML("

      Learn more about the model here. 📚

      ") - gr.HTML(f"

      it's llama.cpp running {model_name} from {repo_id}

      ") - gr.ChatInterface( - generate, - additional_inputs=additional_inputs, - examples=[["What is the secret to life?"], ["Write me a recipe for pancakes."]] - ) - -demo.queue(max_size=None).launch(debug=True) \ No newline at end of file diff --git a/spaces/chendl/compositional_test/multimodal/YOLOX/demo/MegEngine/python/models/darknet.py b/spaces/chendl/compositional_test/multimodal/YOLOX/demo/MegEngine/python/models/darknet.py deleted file mode 100644 index 47469aa683a91cdf88091956b71637cae7a97dc3..0000000000000000000000000000000000000000 --- a/spaces/chendl/compositional_test/multimodal/YOLOX/demo/MegEngine/python/models/darknet.py +++ /dev/null @@ -1,154 +0,0 @@ -#!/usr/bin/env python3 -# -*- encoding: utf-8 -*- -# Copyright (c) Megvii Inc. All rights reserved. - -import megengine.module as M - -from .network_blocks import BaseConv, CSPLayer, DWConv, Focus, ResLayer, SPPBottleneck - - -class Darknet(M.Module): - # number of blocks from dark2 to dark5. - depth2blocks = {21: [1, 2, 2, 1], 53: [2, 8, 8, 4]} - - def __init__( - self, depth, in_channels=3, stem_out_channels=32, out_features=("dark3", "dark4", "dark5"), - ): - """ - Args: - depth (int): depth of darknet used in model, usually use [21, 53] for this param. - in_channels (int): number of input channels, for example, use 3 for RGB image. - stem_out_channels (int): number of output channels of darknet stem. - It decides channels of darknet layer2 to layer5. - out_features (Tuple[str]): desired output layer name. - """ - super().__init__() - assert out_features, "please provide output features of Darknet" - self.out_features = out_features - self.stem = M.Sequential( - BaseConv(in_channels, stem_out_channels, ksize=3, stride=1, act="lrelu"), - *self.make_group_layer(stem_out_channels, num_blocks=1, stride=2), - ) - in_channels = stem_out_channels * 2 # 64 - - num_blocks = Darknet.depth2blocks[depth] - # create darknet with `stem_out_channels` and `num_blocks` layers. - # to make model structure more clear, we don't use `for` statement in python. 
- self.dark2 = M.Sequential(*self.make_group_layer(in_channels, num_blocks[0], stride=2)) - in_channels *= 2 # 128 - self.dark3 = M.Sequential(*self.make_group_layer(in_channels, num_blocks[1], stride=2)) - in_channels *= 2 # 256 - self.dark4 = M.Sequential(*self.make_group_layer(in_channels, num_blocks[2], stride=2)) - in_channels *= 2 # 512 - - self.dark5 = M.Sequential( - *self.make_group_layer(in_channels, num_blocks[3], stride=2), - *self.make_spp_block([in_channels, in_channels * 2], in_channels * 2), - ) - - def make_group_layer(self, in_channels: int, num_blocks: int, stride: int = 1): - "starts with conv layer then has `num_blocks` `ResLayer`" - return [ - BaseConv(in_channels, in_channels * 2, ksize=3, stride=stride, act="lrelu"), - *[(ResLayer(in_channels * 2)) for _ in range(num_blocks)] - ] - - def make_spp_block(self, filters_list, in_filters): - m = M.Sequential( - *[ - BaseConv(in_filters, filters_list[0], 1, stride=1, act="lrelu"), - BaseConv(filters_list[0], filters_list[1], 3, stride=1, act="lrelu"), - SPPBottleneck( - in_channels=filters_list[1], - out_channels=filters_list[0], - activation="lrelu" - ), - BaseConv(filters_list[0], filters_list[1], 3, stride=1, act="lrelu"), - BaseConv(filters_list[1], filters_list[0], 1, stride=1, act="lrelu"), - ] - ) - return m - - def forward(self, x): - outputs = {} - x = self.stem(x) - outputs["stem"] = x - x = self.dark2(x) - outputs["dark2"] = x - x = self.dark3(x) - outputs["dark3"] = x - x = self.dark4(x) - outputs["dark4"] = x - x = self.dark5(x) - outputs["dark5"] = x - return {k: v for k, v in outputs.items() if k in self.out_features} - - -class CSPDarknet(M.Module): - - def __init__( - self, dep_mul, wid_mul, - out_features=("dark3", "dark4", "dark5"), - depthwise=False, act="silu", - ): - super().__init__() - assert out_features, "please provide output features of Darknet" - self.out_features = out_features - Conv = DWConv if depthwise else BaseConv - - base_channels = int(wid_mul * 64) # 64 - base_depth = max(round(dep_mul * 3), 1) # 3 - - # stem - self.stem = Focus(3, base_channels, ksize=3, act=act) - - # dark2 - self.dark2 = M.Sequential( - Conv(base_channels, base_channels * 2, 3, 2, act=act), - CSPLayer( - base_channels * 2, base_channels * 2, - n=base_depth, depthwise=depthwise, act=act - ), - ) - - # dark3 - self.dark3 = M.Sequential( - Conv(base_channels * 2, base_channels * 4, 3, 2, act=act), - CSPLayer( - base_channels * 4, base_channels * 4, - n=base_depth * 3, depthwise=depthwise, act=act, - ), - ) - - # dark4 - self.dark4 = M.Sequential( - Conv(base_channels * 4, base_channels * 8, 3, 2, act=act), - CSPLayer( - base_channels * 8, base_channels * 8, - n=base_depth * 3, depthwise=depthwise, act=act, - ), - ) - - # dark5 - self.dark5 = M.Sequential( - Conv(base_channels * 8, base_channels * 16, 3, 2, act=act), - SPPBottleneck(base_channels * 16, base_channels * 16, activation=act), - CSPLayer( - base_channels * 16, base_channels * 16, n=base_depth, - shortcut=False, depthwise=depthwise, act=act, - ), - ) - - def forward(self, x): - outputs = {} - x = self.stem(x) - outputs["stem"] = x - x = self.dark2(x) - outputs["dark2"] = x - x = self.dark3(x) - outputs["dark3"] = x - x = self.dark4(x) - outputs["dark4"] = x - x = self.dark5(x) - outputs["dark5"] = x - return {k: v for k, v in outputs.items() if k in self.out_features} diff --git a/spaces/chendl/compositional_test/transformers/examples/research_projects/mm-imdb/run_mmimdb.py 
b/spaces/chendl/compositional_test/transformers/examples/research_projects/mm-imdb/run_mmimdb.py deleted file mode 100644 index 2cc3bc3a0c73ccd3d859131e7af91a04677fbe55..0000000000000000000000000000000000000000 --- a/spaces/chendl/compositional_test/transformers/examples/research_projects/mm-imdb/run_mmimdb.py +++ /dev/null @@ -1,576 +0,0 @@ -# coding=utf-8 -# Copyright (c) Facebook, Inc. and its affiliates. -# Copyright (c) HuggingFace Inc. team. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -""" Finetuning the library models for multimodal multiclass prediction on MM-IMDB dataset.""" - - -import argparse -import glob -import json -import logging -import os -import random - -import numpy as np -import torch -from sklearn.metrics import f1_score -from torch import nn -from torch.utils.data import DataLoader, RandomSampler, SequentialSampler -from torch.utils.data.distributed import DistributedSampler -from tqdm import tqdm, trange -from utils_mmimdb import ImageEncoder, JsonlDataset, collate_fn, get_image_transforms, get_mmimdb_labels - -import transformers -from transformers import ( - WEIGHTS_NAME, - AdamW, - AutoConfig, - AutoModel, - AutoTokenizer, - MMBTConfig, - MMBTForClassification, - get_linear_schedule_with_warmup, -) -from transformers.trainer_utils import is_main_process - - -try: - from torch.utils.tensorboard import SummaryWriter -except ImportError: - from tensorboardX import SummaryWriter - - -logger = logging.getLogger(__name__) - - -def set_seed(args): - random.seed(args.seed) - np.random.seed(args.seed) - torch.manual_seed(args.seed) - if args.n_gpu > 0: - torch.cuda.manual_seed_all(args.seed) - - -def train(args, train_dataset, model, tokenizer, criterion): - """Train the model""" - if args.local_rank in [-1, 0]: - tb_writer = SummaryWriter() - - args.train_batch_size = args.per_gpu_train_batch_size * max(1, args.n_gpu) - train_sampler = RandomSampler(train_dataset) if args.local_rank == -1 else DistributedSampler(train_dataset) - train_dataloader = DataLoader( - train_dataset, - sampler=train_sampler, - batch_size=args.train_batch_size, - collate_fn=collate_fn, - num_workers=args.num_workers, - ) - - if args.max_steps > 0: - t_total = args.max_steps - args.num_train_epochs = args.max_steps // (len(train_dataloader) // args.gradient_accumulation_steps) + 1 - else: - t_total = len(train_dataloader) // args.gradient_accumulation_steps * args.num_train_epochs - - # Prepare optimizer and schedule (linear warmup and decay) - no_decay = ["bias", "LayerNorm.weight"] - optimizer_grouped_parameters = [ - { - "params": [p for n, p in model.named_parameters() if not any(nd in n for nd in no_decay)], - "weight_decay": args.weight_decay, - }, - {"params": [p for n, p in model.named_parameters() if any(nd in n for nd in no_decay)], "weight_decay": 0.0}, - ] - - optimizer = AdamW(optimizer_grouped_parameters, lr=args.learning_rate, eps=args.adam_epsilon) - scheduler = get_linear_schedule_with_warmup( - optimizer, num_warmup_steps=args.warmup_steps, num_training_steps=t_total - ) - if 
args.fp16: - try: - from apex import amp - except ImportError: - raise ImportError("Please install apex from https://www.github.com/nvidia/apex to use fp16 training.") - model, optimizer = amp.initialize(model, optimizer, opt_level=args.fp16_opt_level) - - # multi-gpu training (should be after apex fp16 initialization) - if args.n_gpu > 1: - model = nn.DataParallel(model) - - # Distributed training (should be after apex fp16 initialization) - if args.local_rank != -1: - model = nn.parallel.DistributedDataParallel( - model, device_ids=[args.local_rank], output_device=args.local_rank, find_unused_parameters=True - ) - - # Train! - logger.info("***** Running training *****") - logger.info(" Num examples = %d", len(train_dataset)) - logger.info(" Num Epochs = %d", args.num_train_epochs) - logger.info(" Instantaneous batch size per GPU = %d", args.per_gpu_train_batch_size) - logger.info( - " Total train batch size (w. parallel, distributed & accumulation) = %d", - args.train_batch_size - * args.gradient_accumulation_steps - * (torch.distributed.get_world_size() if args.local_rank != -1 else 1), - ) - logger.info(" Gradient Accumulation steps = %d", args.gradient_accumulation_steps) - logger.info(" Total optimization steps = %d", t_total) - - global_step = 0 - tr_loss, logging_loss = 0.0, 0.0 - best_f1, n_no_improve = 0, 0 - model.zero_grad() - train_iterator = trange(int(args.num_train_epochs), desc="Epoch", disable=args.local_rank not in [-1, 0]) - set_seed(args) # Added here for reproductibility - for _ in train_iterator: - epoch_iterator = tqdm(train_dataloader, desc="Iteration", disable=args.local_rank not in [-1, 0]) - for step, batch in enumerate(epoch_iterator): - model.train() - batch = tuple(t.to(args.device) for t in batch) - labels = batch[5] - inputs = { - "input_ids": batch[0], - "input_modal": batch[2], - "attention_mask": batch[1], - "modal_start_tokens": batch[3], - "modal_end_tokens": batch[4], - } - outputs = model(**inputs) - logits = outputs[0] # model outputs are always tuple in transformers (see doc) - loss = criterion(logits, labels) - - if args.n_gpu > 1: - loss = loss.mean() # mean() to average on multi-gpu parallel training - if args.gradient_accumulation_steps > 1: - loss = loss / args.gradient_accumulation_steps - - if args.fp16: - with amp.scale_loss(loss, optimizer) as scaled_loss: - scaled_loss.backward() - else: - loss.backward() - - tr_loss += loss.item() - if (step + 1) % args.gradient_accumulation_steps == 0: - if args.fp16: - nn.utils.clip_grad_norm_(amp.master_params(optimizer), args.max_grad_norm) - else: - nn.utils.clip_grad_norm_(model.parameters(), args.max_grad_norm) - - optimizer.step() - scheduler.step() # Update learning rate schedule - model.zero_grad() - global_step += 1 - - if args.local_rank in [-1, 0] and args.logging_steps > 0 and global_step % args.logging_steps == 0: - logs = {} - if ( - args.local_rank == -1 and args.evaluate_during_training - ): # Only evaluate when single GPU otherwise metrics may not average well - results = evaluate(args, model, tokenizer, criterion) - for key, value in results.items(): - eval_key = "eval_{}".format(key) - logs[eval_key] = value - - loss_scalar = (tr_loss - logging_loss) / args.logging_steps - learning_rate_scalar = scheduler.get_lr()[0] - logs["learning_rate"] = learning_rate_scalar - logs["loss"] = loss_scalar - logging_loss = tr_loss - - for key, value in logs.items(): - tb_writer.add_scalar(key, value, global_step) - print(json.dumps({**logs, **{"step": global_step}})) - - if args.local_rank in [-1, 
0] and args.save_steps > 0 and global_step % args.save_steps == 0: - # Save model checkpoint - output_dir = os.path.join(args.output_dir, "checkpoint-{}".format(global_step)) - if not os.path.exists(output_dir): - os.makedirs(output_dir) - model_to_save = ( - model.module if hasattr(model, "module") else model - ) # Take care of distributed/parallel training - torch.save(model_to_save.state_dict(), os.path.join(output_dir, WEIGHTS_NAME)) - torch.save(args, os.path.join(output_dir, "training_args.bin")) - logger.info("Saving model checkpoint to %s", output_dir) - - if args.max_steps > 0 and global_step > args.max_steps: - epoch_iterator.close() - break - if args.max_steps > 0 and global_step > args.max_steps: - train_iterator.close() - break - - if args.local_rank == -1: - results = evaluate(args, model, tokenizer, criterion) - if results["micro_f1"] > best_f1: - best_f1 = results["micro_f1"] - n_no_improve = 0 - else: - n_no_improve += 1 - - if n_no_improve > args.patience: - train_iterator.close() - break - - if args.local_rank in [-1, 0]: - tb_writer.close() - - return global_step, tr_loss / global_step - - -def evaluate(args, model, tokenizer, criterion, prefix=""): - # Loop to handle MNLI double evaluation (matched, mis-matched) - eval_output_dir = args.output_dir - eval_dataset = load_examples(args, tokenizer, evaluate=True) - - if not os.path.exists(eval_output_dir) and args.local_rank in [-1, 0]: - os.makedirs(eval_output_dir) - - args.eval_batch_size = args.per_gpu_eval_batch_size * max(1, args.n_gpu) - # Note that DistributedSampler samples randomly - eval_sampler = SequentialSampler(eval_dataset) - eval_dataloader = DataLoader( - eval_dataset, sampler=eval_sampler, batch_size=args.eval_batch_size, collate_fn=collate_fn - ) - - # multi-gpu eval - if args.n_gpu > 1 and not isinstance(model, nn.DataParallel): - model = nn.DataParallel(model) - - # Eval! 
- logger.info("***** Running evaluation {} *****".format(prefix)) - logger.info(" Num examples = %d", len(eval_dataset)) - logger.info(" Batch size = %d", args.eval_batch_size) - eval_loss = 0.0 - nb_eval_steps = 0 - preds = None - out_label_ids = None - for batch in tqdm(eval_dataloader, desc="Evaluating"): - model.eval() - batch = tuple(t.to(args.device) for t in batch) - - with torch.no_grad(): - batch = tuple(t.to(args.device) for t in batch) - labels = batch[5] - inputs = { - "input_ids": batch[0], - "input_modal": batch[2], - "attention_mask": batch[1], - "modal_start_tokens": batch[3], - "modal_end_tokens": batch[4], - } - outputs = model(**inputs) - logits = outputs[0] # model outputs are always tuple in transformers (see doc) - tmp_eval_loss = criterion(logits, labels) - eval_loss += tmp_eval_loss.mean().item() - nb_eval_steps += 1 - if preds is None: - preds = torch.sigmoid(logits).detach().cpu().numpy() > 0.5 - out_label_ids = labels.detach().cpu().numpy() - else: - preds = np.append(preds, torch.sigmoid(logits).detach().cpu().numpy() > 0.5, axis=0) - out_label_ids = np.append(out_label_ids, labels.detach().cpu().numpy(), axis=0) - - eval_loss = eval_loss / nb_eval_steps - result = { - "loss": eval_loss, - "macro_f1": f1_score(out_label_ids, preds, average="macro"), - "micro_f1": f1_score(out_label_ids, preds, average="micro"), - } - - output_eval_file = os.path.join(eval_output_dir, prefix, "eval_results.txt") - with open(output_eval_file, "w") as writer: - logger.info("***** Eval results {} *****".format(prefix)) - for key in sorted(result.keys()): - logger.info(" %s = %s", key, str(result[key])) - writer.write("%s = %s\n" % (key, str(result[key]))) - - return result - - -def load_examples(args, tokenizer, evaluate=False): - path = os.path.join(args.data_dir, "dev.jsonl" if evaluate else "train.jsonl") - transforms = get_image_transforms() - labels = get_mmimdb_labels() - dataset = JsonlDataset(path, tokenizer, transforms, labels, args.max_seq_length - args.num_image_embeds - 2) - return dataset - - -def main(): - parser = argparse.ArgumentParser() - - # Required parameters - parser.add_argument( - "--data_dir", - default=None, - type=str, - required=True, - help="The input data dir. Should contain the .jsonl files for MMIMDB.", - ) - parser.add_argument( - "--model_name_or_path", - default=None, - type=str, - required=True, - help="Path to pretrained model or model identifier from huggingface.co/models", - ) - parser.add_argument( - "--output_dir", - default=None, - type=str, - required=True, - help="The output directory where the model predictions and checkpoints will be written.", - ) - - # Other parameters - parser.add_argument( - "--config_name", default="", type=str, help="Pretrained config name or path if not the same as model_name" - ) - parser.add_argument( - "--tokenizer_name", - default="", - type=str, - help="Pretrained tokenizer name or path if not the same as model_name", - ) - parser.add_argument( - "--cache_dir", - default=None, - type=str, - help="Where do you want to store the pre-trained models downloaded from huggingface.co", - ) - parser.add_argument( - "--max_seq_length", - default=128, - type=int, - help=( - "The maximum total input sequence length after tokenization. Sequences longer " - "than this will be truncated, sequences shorter will be padded." 
- ), - ) - parser.add_argument( - "--num_image_embeds", default=1, type=int, help="Number of Image Embeddings from the Image Encoder" - ) - parser.add_argument("--do_train", action="store_true", help="Whether to run training.") - parser.add_argument("--do_eval", action="store_true", help="Whether to run eval on the dev set.") - parser.add_argument( - "--evaluate_during_training", action="store_true", help="Rul evaluation during training at each logging step." - ) - parser.add_argument( - "--do_lower_case", action="store_true", help="Set this flag if you are using an uncased model." - ) - - parser.add_argument("--per_gpu_train_batch_size", default=8, type=int, help="Batch size per GPU/CPU for training.") - parser.add_argument( - "--per_gpu_eval_batch_size", default=8, type=int, help="Batch size per GPU/CPU for evaluation." - ) - parser.add_argument( - "--gradient_accumulation_steps", - type=int, - default=1, - help="Number of updates steps to accumulate before performing a backward/update pass.", - ) - parser.add_argument("--learning_rate", default=5e-5, type=float, help="The initial learning rate for Adam.") - parser.add_argument("--weight_decay", default=0.0, type=float, help="Weight deay if we apply some.") - parser.add_argument("--adam_epsilon", default=1e-8, type=float, help="Epsilon for Adam optimizer.") - parser.add_argument("--max_grad_norm", default=1.0, type=float, help="Max gradient norm.") - parser.add_argument( - "--num_train_epochs", default=3.0, type=float, help="Total number of training epochs to perform." - ) - parser.add_argument("--patience", default=5, type=int, help="Patience for Early Stopping.") - parser.add_argument( - "--max_steps", - default=-1, - type=int, - help="If > 0: set total number of training steps to perform. Override num_train_epochs.", - ) - parser.add_argument("--warmup_steps", default=0, type=int, help="Linear warmup over warmup_steps.") - - parser.add_argument("--logging_steps", type=int, default=50, help="Log every X updates steps.") - parser.add_argument("--save_steps", type=int, default=50, help="Save checkpoint every X updates steps.") - parser.add_argument( - "--eval_all_checkpoints", - action="store_true", - help="Evaluate all checkpoints starting with the same prefix as model_name ending and ending with step number", - ) - parser.add_argument("--no_cuda", action="store_true", help="Avoid using CUDA when available") - parser.add_argument("--num_workers", type=int, default=8, help="number of worker threads for dataloading") - parser.add_argument( - "--overwrite_output_dir", action="store_true", help="Overwrite the content of the output directory" - ) - parser.add_argument( - "--overwrite_cache", action="store_true", help="Overwrite the cached training and evaluation sets" - ) - parser.add_argument("--seed", type=int, default=42, help="random seed for initialization") - - parser.add_argument( - "--fp16", - action="store_true", - help="Whether to use 16-bit (mixed) precision (through NVIDIA apex) instead of 32-bit", - ) - parser.add_argument( - "--fp16_opt_level", - type=str, - default="O1", - help=( - "For fp16: Apex AMP optimization level selected in ['O0', 'O1', 'O2', and 'O3']." 
- "See details at https://nvidia.github.io/apex/amp.html" - ), - ) - parser.add_argument("--local_rank", type=int, default=-1, help="For distributed training: local_rank") - parser.add_argument("--server_ip", type=str, default="", help="For distant debugging.") - parser.add_argument("--server_port", type=str, default="", help="For distant debugging.") - args = parser.parse_args() - - if ( - os.path.exists(args.output_dir) - and os.listdir(args.output_dir) - and args.do_train - and not args.overwrite_output_dir - ): - raise ValueError( - "Output directory ({}) already exists and is not empty. Use --overwrite_output_dir to overcome.".format( - args.output_dir - ) - ) - - # Setup distant debugging if needed - if args.server_ip and args.server_port: - # Distant debugging - see https://code.visualstudio.com/docs/python/debugging#_attach-to-a-local-script - import ptvsd - - print("Waiting for debugger attach") - ptvsd.enable_attach(address=(args.server_ip, args.server_port), redirect_output=True) - ptvsd.wait_for_attach() - - # Setup CUDA, GPU & distributed training - if args.local_rank == -1 or args.no_cuda: - device = torch.device("cuda" if torch.cuda.is_available() and not args.no_cuda else "cpu") - args.n_gpu = 0 if args.no_cuda else torch.cuda.device_count() - else: # Initializes the distributed backend which will take care of sychronizing nodes/GPUs - torch.cuda.set_device(args.local_rank) - device = torch.device("cuda", args.local_rank) - torch.distributed.init_process_group(backend="nccl") - args.n_gpu = 1 - - args.device = device - - # Setup logging - logging.basicConfig( - format="%(asctime)s - %(levelname)s - %(name)s - %(message)s", - datefmt="%m/%d/%Y %H:%M:%S", - level=logging.INFO if args.local_rank in [-1, 0] else logging.WARN, - ) - logger.warning( - "Process rank: %s, device: %s, n_gpu: %s, distributed training: %s, 16-bits training: %s", - args.local_rank, - device, - args.n_gpu, - bool(args.local_rank != -1), - args.fp16, - ) - # Set the verbosity to info of the Transformers logger (on main process only): - if is_main_process(args.local_rank): - transformers.utils.logging.set_verbosity_info() - transformers.utils.logging.enable_default_handler() - transformers.utils.logging.enable_explicit_format() - # Set seed - set_seed(args) - - # Load pretrained model and tokenizer - if args.local_rank not in [-1, 0]: - torch.distributed.barrier() # Make sure only the first process in distributed training will download model & vocab - - # Setup model - labels = get_mmimdb_labels() - num_labels = len(labels) - transformer_config = AutoConfig.from_pretrained(args.config_name if args.config_name else args.model_name_or_path) - tokenizer = AutoTokenizer.from_pretrained( - args.tokenizer_name if args.tokenizer_name else args.model_name_or_path, - do_lower_case=args.do_lower_case, - cache_dir=args.cache_dir, - ) - transformer = AutoModel.from_pretrained( - args.model_name_or_path, config=transformer_config, cache_dir=args.cache_dir - ) - img_encoder = ImageEncoder(args) - config = MMBTConfig(transformer_config, num_labels=num_labels) - model = MMBTForClassification(config, transformer, img_encoder) - - if args.local_rank == 0: - torch.distributed.barrier() # Make sure only the first process in distributed training will download model & vocab - - model.to(args.device) - - logger.info("Training/evaluation parameters %s", args) - - # Training - if args.do_train: - train_dataset = load_examples(args, tokenizer, evaluate=False) - label_frequences = train_dataset.get_label_frequencies() - 
label_frequences = [label_frequences[l] for l in labels] - label_weights = ( - torch.tensor(label_frequences, device=args.device, dtype=torch.float) / len(train_dataset) - ) ** -1 - criterion = nn.BCEWithLogitsLoss(pos_weight=label_weights) - global_step, tr_loss = train(args, train_dataset, model, tokenizer, criterion) - logger.info(" global_step = %s, average loss = %s", global_step, tr_loss) - - # Saving best-practices: if you use defaults names for the model, you can reload it using from_pretrained() - if args.do_train and (args.local_rank == -1 or torch.distributed.get_rank() == 0): - logger.info("Saving model checkpoint to %s", args.output_dir) - # Save a trained model, configuration and tokenizer using `save_pretrained()`. - # They can then be reloaded using `from_pretrained()` - model_to_save = ( - model.module if hasattr(model, "module") else model - ) # Take care of distributed/parallel training - torch.save(model_to_save.state_dict(), os.path.join(args.output_dir, WEIGHTS_NAME)) - tokenizer.save_pretrained(args.output_dir) - - # Good practice: save your training arguments together with the trained model - torch.save(args, os.path.join(args.output_dir, "training_args.bin")) - - # Load a trained model and vocabulary that you have fine-tuned - model = MMBTForClassification(config, transformer, img_encoder) - model.load_state_dict(torch.load(os.path.join(args.output_dir, WEIGHTS_NAME))) - tokenizer = AutoTokenizer.from_pretrained(args.output_dir) - model.to(args.device) - - # Evaluation - results = {} - if args.do_eval and args.local_rank in [-1, 0]: - checkpoints = [args.output_dir] - if args.eval_all_checkpoints: - checkpoints = [ - os.path.dirname(c) for c in sorted(glob.glob(args.output_dir + "/**/" + WEIGHTS_NAME, recursive=True)) - ] - - logger.info("Evaluate the following checkpoints: %s", checkpoints) - for checkpoint in checkpoints: - global_step = checkpoint.split("-")[-1] if len(checkpoints) > 1 else "" - prefix = checkpoint.split("/")[-1] if checkpoint.find("checkpoint") != -1 else "" - model = MMBTForClassification(config, transformer, img_encoder) - model.load_state_dict(torch.load(checkpoint)) - model.to(args.device) - result = evaluate(args, model, tokenizer, criterion, prefix=prefix) - result = {k + "_{}".format(global_step): v for k, v in result.items()} - results.update(result) - - return results - - -if __name__ == "__main__": - main() diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/anyio/abc/_streams.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/anyio/abc/_streams.py deleted file mode 100644 index 4fa7ccc9ffe0e750a1b5a4164970ed4de9c93b2b..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/anyio/abc/_streams.py +++ /dev/null @@ -1,203 +0,0 @@ -from __future__ import annotations - -from abc import abstractmethod -from typing import Any, Callable, Generic, TypeVar, Union - -from .._core._exceptions import EndOfStream -from .._core._typedattr import TypedAttributeProvider -from ._resources import AsyncResource -from ._tasks import TaskGroup - -T_Item = TypeVar("T_Item") -T_co = TypeVar("T_co", covariant=True) -T_contra = TypeVar("T_contra", contravariant=True) - - -class UnreliableObjectReceiveStream( - Generic[T_co], AsyncResource, TypedAttributeProvider -): - """ - An interface for receiving objects. 
- - This interface makes no guarantees that the received messages arrive in the order in which they - were sent, or that no messages are missed. - - Asynchronously iterating over objects of this type will yield objects matching the given type - parameter. - """ - - def __aiter__(self) -> UnreliableObjectReceiveStream[T_co]: - return self - - async def __anext__(self) -> T_co: - try: - return await self.receive() - except EndOfStream: - raise StopAsyncIteration - - @abstractmethod - async def receive(self) -> T_co: - """ - Receive the next item. - - :raises ~anyio.ClosedResourceError: if the receive stream has been explicitly - closed - :raises ~anyio.EndOfStream: if this stream has been closed from the other end - :raises ~anyio.BrokenResourceError: if this stream has been rendered unusable - due to external causes - """ - - -class UnreliableObjectSendStream( - Generic[T_contra], AsyncResource, TypedAttributeProvider -): - """ - An interface for sending objects. - - This interface makes no guarantees that the messages sent will reach the recipient(s) in the - same order in which they were sent, or at all. - """ - - @abstractmethod - async def send(self, item: T_contra) -> None: - """ - Send an item to the peer(s). - - :param item: the item to send - :raises ~anyio.ClosedResourceError: if the send stream has been explicitly - closed - :raises ~anyio.BrokenResourceError: if this stream has been rendered unusable - due to external causes - """ - - -class UnreliableObjectStream( - UnreliableObjectReceiveStream[T_Item], UnreliableObjectSendStream[T_Item] -): - """ - A bidirectional message stream which does not guarantee the order or reliability of message - delivery. - """ - - -class ObjectReceiveStream(UnreliableObjectReceiveStream[T_co]): - """ - A receive message stream which guarantees that messages are received in the same order in - which they were sent, and that no messages are missed. - """ - - -class ObjectSendStream(UnreliableObjectSendStream[T_contra]): - """ - A send message stream which guarantees that messages are delivered in the same order in which - they were sent, without missing any messages in the middle. - """ - - -class ObjectStream( - ObjectReceiveStream[T_Item], - ObjectSendStream[T_Item], - UnreliableObjectStream[T_Item], -): - """ - A bidirectional message stream which guarantees the order and reliability of message delivery. - """ - - @abstractmethod - async def send_eof(self) -> None: - """ - Send an end-of-file indication to the peer. - - You should not try to send any further data to this stream after calling this method. - This method is idempotent (does nothing on successive calls). - """ - - -class ByteReceiveStream(AsyncResource, TypedAttributeProvider): - """ - An interface for receiving bytes from a single peer. - - Iterating this byte stream will yield a byte string of arbitrary length, but no more than - 65536 bytes. - """ - - def __aiter__(self) -> ByteReceiveStream: - return self - - async def __anext__(self) -> bytes: - try: - return await self.receive() - except EndOfStream: - raise StopAsyncIteration - - @abstractmethod - async def receive(self, max_bytes: int = 65536) -> bytes: - """ - Receive at most ``max_bytes`` bytes from the peer. - - .. note:: Implementors of this interface should not return an empty :class:`bytes` object, - and users should ignore them. 
- - :param max_bytes: maximum number of bytes to receive - :return: the received bytes - :raises ~anyio.EndOfStream: if this stream has been closed from the other end - """ - - -class ByteSendStream(AsyncResource, TypedAttributeProvider): - """An interface for sending bytes to a single peer.""" - - @abstractmethod - async def send(self, item: bytes) -> None: - """ - Send the given bytes to the peer. - - :param item: the bytes to send - """ - - -class ByteStream(ByteReceiveStream, ByteSendStream): - """A bidirectional byte stream.""" - - @abstractmethod - async def send_eof(self) -> None: - """ - Send an end-of-file indication to the peer. - - You should not try to send any further data to this stream after calling this method. - This method is idempotent (does nothing on successive calls). - """ - - -#: Type alias for all unreliable bytes-oriented receive streams. -AnyUnreliableByteReceiveStream = Union[ - UnreliableObjectReceiveStream[bytes], ByteReceiveStream -] -#: Type alias for all unreliable bytes-oriented send streams. -AnyUnreliableByteSendStream = Union[UnreliableObjectSendStream[bytes], ByteSendStream] -#: Type alias for all unreliable bytes-oriented streams. -AnyUnreliableByteStream = Union[UnreliableObjectStream[bytes], ByteStream] -#: Type alias for all bytes-oriented receive streams. -AnyByteReceiveStream = Union[ObjectReceiveStream[bytes], ByteReceiveStream] -#: Type alias for all bytes-oriented send streams. -AnyByteSendStream = Union[ObjectSendStream[bytes], ByteSendStream] -#: Type alias for all bytes-oriented streams. -AnyByteStream = Union[ObjectStream[bytes], ByteStream] - - -class Listener(Generic[T_co], AsyncResource, TypedAttributeProvider): - """An interface for objects that let you accept incoming connections.""" - - @abstractmethod - async def serve( - self, - handler: Callable[[T_co], Any], - task_group: TaskGroup | None = None, - ) -> None: - """ - Accept incoming connections as they come in and start tasks to handle them. 
- - :param handler: a callable that will be used to handle each accepted connection - :param task_group: the task group that will be used to start tasks for handling each - accepted connection (if omitted, an ad-hoc task group will be created) - """ diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/chromadb/segment/__init__.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/chromadb/segment/__init__.py deleted file mode 100644 index e75904ab436ca73816e85768f6c2fca9c23d8a4d..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/chromadb/segment/__init__.py +++ /dev/null @@ -1,107 +0,0 @@ -from typing import Optional, Sequence, TypeVar, Type -from abc import abstractmethod -from chromadb.types import ( - Collection, - MetadataEmbeddingRecord, - VectorEmbeddingRecord, - Where, - WhereDocument, - VectorQuery, - VectorQueryResult, - Segment, - SeqId, - Metadata, -) -from chromadb.config import Component, System -from uuid import UUID - - -class SegmentImplementation(Component): - @abstractmethod - def __init__(self, sytstem: System, segment: Segment): - pass - - @abstractmethod - def count(self) -> int: - """Get the number of embeddings in this segment""" - pass - - @abstractmethod - def max_seqid(self) -> SeqId: - """Get the maximum SeqID currently indexed by this segment""" - pass - - @staticmethod - def propagate_collection_metadata(metadata: Metadata) -> Optional[Metadata]: - """Given an arbitrary metadata map (e.g, from a collection), validate it and - return metadata (if any) that is applicable and should be applied to the - segment. Validation errors will be reported to the user.""" - return None - - -S = TypeVar("S", bound=SegmentImplementation) - - -class MetadataReader(SegmentImplementation): - """Embedding Metadata segment interface""" - - @abstractmethod - def get_metadata( - self, - where: Optional[Where] = None, - where_document: Optional[WhereDocument] = None, - ids: Optional[Sequence[str]] = None, - limit: Optional[int] = None, - offset: Optional[int] = None, - ) -> Sequence[MetadataEmbeddingRecord]: - """Query for embedding metadata.""" - pass - - -class VectorReader(SegmentImplementation): - """Embedding Vector segment interface""" - - @abstractmethod - def get_vectors( - self, ids: Optional[Sequence[str]] = None - ) -> Sequence[VectorEmbeddingRecord]: - """Get embeddings from the segment. If no IDs are provided, all embeddings are - returned.""" - pass - - @abstractmethod - def query_vectors( - self, query: VectorQuery - ) -> Sequence[Sequence[VectorQueryResult]]: - """Given a vector query, return the top-k nearest neighbors for vector in the - query.""" - pass - - -class SegmentManager(Component): - """Interface for a pluggable strategy for creating, retrieving and instantiating - segments as required""" - - @abstractmethod - def create_segments(self, collection: Collection) -> Sequence[Segment]: - """Return the segments required for a new collection. Returns only segment data, - does not persist to the SysDB""" - pass - - @abstractmethod - def delete_segments(self, collection_id: UUID) -> Sequence[UUID]: - """Delete any local state for all the segments associated with a collection, and - returns a sequence of their IDs. 
Does not update the SysDB.""" - pass - - # Future Note: To support time travel, add optional parameters to this method to - # retrieve Segment instances that are bounded to events from a specific range of - # time - @abstractmethod - def get_segment(self, collection_id: UUID, type: Type[S]) -> S: - """Return the segment that should be used for servicing queries to a collection. - Implementations should cache appropriately; clients are intended to call this - method repeatedly rather than storing the result (thereby giving this - implementation full control over which segment impls are in or out of memory at - a given time.)""" - pass diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/encodings/MacRoman.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/encodings/MacRoman.py deleted file mode 100644 index ba8bf14ef7de1cf76248a2bbd1a98bc8bf36cc5e..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/fontTools/encodings/MacRoman.py +++ /dev/null @@ -1,258 +0,0 @@ -MacRoman = [ - "NUL", - "Eth", - "eth", - "Lslash", - "lslash", - "Scaron", - "scaron", - "Yacute", - "yacute", - "HT", - "LF", - "Thorn", - "thorn", - "CR", - "Zcaron", - "zcaron", - "DLE", - "DC1", - "DC2", - "DC3", - "DC4", - "onehalf", - "onequarter", - "onesuperior", - "threequarters", - "threesuperior", - "twosuperior", - "brokenbar", - "minus", - "multiply", - "RS", - "US", - "space", - "exclam", - "quotedbl", - "numbersign", - "dollar", - "percent", - "ampersand", - "quotesingle", - "parenleft", - "parenright", - "asterisk", - "plus", - "comma", - "hyphen", - "period", - "slash", - "zero", - "one", - "two", - "three", - "four", - "five", - "six", - "seven", - "eight", - "nine", - "colon", - "semicolon", - "less", - "equal", - "greater", - "question", - "at", - "A", - "B", - "C", - "D", - "E", - "F", - "G", - "H", - "I", - "J", - "K", - "L", - "M", - "N", - "O", - "P", - "Q", - "R", - "S", - "T", - "U", - "V", - "W", - "X", - "Y", - "Z", - "bracketleft", - "backslash", - "bracketright", - "asciicircum", - "underscore", - "grave", - "a", - "b", - "c", - "d", - "e", - "f", - "g", - "h", - "i", - "j", - "k", - "l", - "m", - "n", - "o", - "p", - "q", - "r", - "s", - "t", - "u", - "v", - "w", - "x", - "y", - "z", - "braceleft", - "bar", - "braceright", - "asciitilde", - "DEL", - "Adieresis", - "Aring", - "Ccedilla", - "Eacute", - "Ntilde", - "Odieresis", - "Udieresis", - "aacute", - "agrave", - "acircumflex", - "adieresis", - "atilde", - "aring", - "ccedilla", - "eacute", - "egrave", - "ecircumflex", - "edieresis", - "iacute", - "igrave", - "icircumflex", - "idieresis", - "ntilde", - "oacute", - "ograve", - "ocircumflex", - "odieresis", - "otilde", - "uacute", - "ugrave", - "ucircumflex", - "udieresis", - "dagger", - "degree", - "cent", - "sterling", - "section", - "bullet", - "paragraph", - "germandbls", - "registered", - "copyright", - "trademark", - "acute", - "dieresis", - "notequal", - "AE", - "Oslash", - "infinity", - "plusminus", - "lessequal", - "greaterequal", - "yen", - "mu", - "partialdiff", - "summation", - "product", - "pi", - "integral", - "ordfeminine", - "ordmasculine", - "Omega", - "ae", - "oslash", - "questiondown", - "exclamdown", - "logicalnot", - "radical", - "florin", - "approxequal", - "Delta", - "guillemotleft", - "guillemotright", - "ellipsis", - "nbspace", - "Agrave", - "Atilde", - "Otilde", - "OE", - "oe", - "endash", - "emdash", - "quotedblleft", - 
"quotedblright", - "quoteleft", - "quoteright", - "divide", - "lozenge", - "ydieresis", - "Ydieresis", - "fraction", - "currency", - "guilsinglleft", - "guilsinglright", - "fi", - "fl", - "daggerdbl", - "periodcentered", - "quotesinglbase", - "quotedblbase", - "perthousand", - "Acircumflex", - "Ecircumflex", - "Aacute", - "Edieresis", - "Egrave", - "Iacute", - "Icircumflex", - "Idieresis", - "Igrave", - "Oacute", - "Ocircumflex", - "apple", - "Ograve", - "Uacute", - "Ucircumflex", - "Ugrave", - "dotlessi", - "circumflex", - "tilde", - "macron", - "breve", - "dotaccent", - "ring", - "cedilla", - "hungarumlaut", - "ogonek", - "caron", -] diff --git a/spaces/cihyFjudo/fairness-paper-search/?x??k ?lowjo??trmdsl WORK.md b/spaces/cihyFjudo/fairness-paper-search/?x??k ?lowjo??trmdsl WORK.md deleted file mode 100644 index 0fd413e549cc4e1925b9b70f3c9397314b88772d..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/?x??k ?lowjo??trmdsl WORK.md +++ /dev/null @@ -1,6 +0,0 @@ -


      diff --git a/spaces/cihyFjudo/fairness-paper-search/Download And Install Chrome On Mac Customize Your Settings And Preferences.md b/spaces/cihyFjudo/fairness-paper-search/Download And Install Chrome On Mac Customize Your Settings And Preferences.md deleted file mode 100644 index 4db47724802410c461a4ef5e5c84ee4b4cc4ade3..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Download And Install Chrome On Mac Customize Your Settings And Preferences.md +++ /dev/null @@ -1,43 +0,0 @@ -
      -

Now that you know how to download Google Chrome, you might want to make Chrome the default browser on your Mac. To do this, simply open it and click the three dots in the top-right corner, then Settings; in the Default Browser section, click Make Default.

      -

      Quick tip: If you have Google Chrome downloaded, it will sync your bookmarks and browsing history across all your devices. This means that if you opened a recipe on your laptop but forgot to save it before going to the grocery store, you can just open Chrome on your cell phone and it will be there in your history.

      -

      -

4. You'll be directed to a page that says "Thank you for downloading Chrome!" If automatic downloads are enabled in your current browser, the Chrome installer will download on its own. Otherwise, click the link to download Chrome manually.

      -

      7. You might be prompted with a box that says, "'Google Chrome' is an application that is downloaded from the Internet. Are you sure you want to open it?" Click Open.

      -

      4. Wait for Chrome to download and install. Once it's done, a Chrome browser window should open automatically. Be sure to sign in to your Google account so your content can start automatically syncing across devices.

      -

      -

      3. Because the app is free, it will likely start downloading without you needing to enter your password. On some devices, you will be required to enter a password or verify your Touch ID.

      -

      For many, Google Chrome is the only web browser worth using. If you're wanting to see what all the fuss is about, you'll be happy to know that downloading and installing it on your Mac is incredibly easy and fast to do. In this guide, we walk you through the steps so that you can finally ditch Safari or Mozilla or whatever for Chrome.

      -

      For this reason, many Apple users often need to download additional browsers to supplement their browsing experience. Typically, one of the top picks for additional browsers for Mac users is Google Chrome.

      -

      Developed by Google, Google Chrome is fast, easy to use, and works across platforms. With this, it's not surprising why Google Chrome is the most popular browser on the planet. So, if you're wondering how to download Google Chrome on your Mac, keep reading.

      -

Aside from this, you may also want to eject the Google Chrome installer. To do so, open the Finder app and, on the left-hand side of the screen, click the Eject button found under Locations.

      -

      Many people use browsers like Safari, Firefox, Avast Secure Browser, or Camino on Mac devices. However, you can also download and install Chrome if that's your preferred option. To start the process, you first need to download the installation file.

      -

      Citrix Workspace app is the easy-to-install client software that provides seamless, secure access to everything you need to get work done. With this free download, you easily and securely get instant access to all applications, desktops and data from any device, including smartphones, tablets, PCs and Macs.

      -

      Citrix Workspace app will automatically replace many previous versions of Citrix Receiver and the Citrix online plug-ins; However, some versions must be removed manually before you can install Citrix Workspace app.

      -

      If you are using the Postman web client, you will need to also download the Postman desktop agent. The Postman agent overcomes the Cross Object Resource Sharing (CORS) limitations of browsers, and facilitates API request sending from your browser version of Postman. Read the blog post.

      -

      If you want to be first in line to experience new features, download our latest Canary builds available for OSX (Intel and Apple chips) / Windows (x64) / Linux (x64) for a sneak peek. Our Canary builds are designed for early adopters, and may sometimes break.

      -

Apple has improved the default Safari browser on Mac with the last couple of macOS updates. Even after all the new additions to Safari, most people still prefer Google Chrome for browsing the web on a Mac. At times, though, users run into issues installing Google Chrome. If you face the same problem, here are the best ways to fix Chrome installation errors on Mac.

      -

      If you deal with a sketchy network connection on your Mac, you may end up with a broken or corrupt Chrome file. Ensure a solid Wi-Fi connection on your Mac while downloading and installing Chrome. If your Mac keeps disconnecting from Wi-Fi, read our dedicated post to fix the problem.

      -

If you are low on storage space on your Mac, use the Recommendations menu to optimize storage. From the same menu you can empty the Bin, find large files, and remove unnecessary ones. You can also use one of the better Mac cleaner apps to free up space. Once you have sufficient storage on your Mac, try installing Google Chrome again.

      -

      Note: In order to start saving, you will need to ensure that your system is up to date or running with Mac OS 10.14.4. Apple recently made changes that will now require you to install App Extensions directly from the App Store.

      -

      Headless Chrome is shipping in Chrome 59. It's a way to run the Chrome browser in a headless environment. Essentially, running Chrome without chrome! It brings all modern web platform features provided by Chromium and the Blink rendering engine to the command line.
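If you want to drive this from a script rather than the terminal, the snippet below is a minimal Python sketch that shells out to headless Chrome using the --headless, --disable-gpu and --dump-dom flags. The binary path is an assumption and will differ on your machine (on macOS the binary lives inside Google Chrome.app; on Linux it is usually google-chrome or chromium on the PATH).

```python
import subprocess

# Assumed path to the Chrome binary -- adjust for your own platform.
CHROME = "/Applications/Google Chrome.app/Contents/MacOS/Google Chrome"

def dump_dom(url: str) -> str:
    """Run Chrome headlessly and return the serialized DOM of the page."""
    result = subprocess.run(
        [CHROME, "--headless", "--disable-gpu", "--dump-dom", url],
        capture_output=True,
        text=True,
        check=True,
    )
    return result.stdout

print(dump_dom("https://www.example.com/")[:200])
```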

      -

      chrome should point to your installation of Chrome. The exact location will vary from platform to platform. Since I'm on Mac, I created convenient aliases for each version of Chrome that I have installed.

      -

      Lighthouse is a marvelous tool for testing the quality of your web apps. A robust module for launching Chrome was developed within Lighthouse and is now extracted for standalone use. The chrome-launcher NPM module will find where Chrome is installed, set up a debug instance, launch the browser, and kill it when your program is done. Best part is that it works cross-platform thanks to Node!
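chrome-launcher itself is a Node module, but the idea it implements (find a Chrome binary, start it with a remote-debugging port and a throwaway profile, and kill it when you are done) is easy to sketch in another language. Below is a rough Python approximation of that flow; the binary names, port and flags are assumptions, and this is not the actual chrome-launcher API.

```python
import shutil
import subprocess
import tempfile

def launch_chrome(port: int = 9222) -> subprocess.Popen:
    """Find a Chrome/Chromium binary on PATH and start it headless with a
    remote-debugging port and a temporary user profile. The caller is
    responsible for terminating the returned process when finished."""
    for name in ("google-chrome", "chromium", "chromium-browser"):
        binary = shutil.which(name)
        if binary:
            break
    else:
        raise RuntimeError("No Chrome binary found on PATH")

    profile = tempfile.mkdtemp(prefix="chrome-profile-")
    return subprocess.Popen([
        binary,
        "--headless",
        f"--remote-debugging-port={port}",
        f"--user-data-dir={profile}",
    ])

proc = launch_chrome()
# ... drive the browser over the DevTools protocol on localhost:9222 ...
proc.terminate()  # roughly what chrome-launcher's kill() does
```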

      -

      Warning: The DevTools protocol can do a ton of interesting stuff, but it can be a bit daunting at first. I recommend spending a bit of time browsing the DevTools Protocol Viewer, first. Then, move on to the chrome-remote-interface API docs to see how it wraps the raw protocol.

      -

      When you're ready to take a proctored exam, you will need to use one of the supported internet browsers with the Proctorio extension. If not already installed, please download one of the supported browsers below:

      -

      You can install the Office 365 add-in individually in your account. Keep in mind that if you have multiple email accounts in Outlook, you have to install the add-in in each email account where you want to access the sales tools.

      -

      Once installation is complete, you'll be redirected to a page indicating the add-in has been successfully installed. You can now access your templates, documents, and sequences from your Outlook inbox.

      -

Thanks for this article, which helped me get my older laptop running much faster, but my USB drive is gone for now after using the pendrive for Chrome OS. I can't even format it or do anything with it. Every time I connect it to my other laptop it just shows several USB drives and that's all; I can't access it and it has simply been wasted. Kindly find a solution for that.

      -

You can try a dual-boot-style setup: not a real dual boot, just swap in a spare hard disk to install onto, then remove it from the laptop and use it in an external hard drive case.
Change the boot sequence to boot first from USB, second from the hard drive.
When there is no drive in the USB port, it will boot Windows from the laptop's internal hard drive.

      -

I tried to install Chrome OS Flex on my 5400 rpm hard disk; after it installs and shuts down, it boot-loops. I tried another 5400 rpm hard disk and it did the same. When I tried my main Kingston SSD it worked, so I think they have a software bug that blocks booting from hard disks.

      -

Brother, I have installed it, but now I want to remove Chrome OS Flex and install Windows again. I have a Windows flash drive, but it won't boot from the boot manager. What should I do? Please reply ASAP, bro, or I'm in real trouble.

      -

I installed it on my Lenovo laptop with an AMD A6 chip. When I tried it from the USB stick it worked like a charm, but after I installed it, it doesn't get past the Chrome logo. It gets stuck there, or a black screen appears and I can't do anything. Now I can't even go back to Windows. Please help!!!

      -

      Hi,
I have an old Dell Chromebook 7310.
I was hoping to install Chrome OS Flex, but when I try to boot from the USB stick, it tells me: the device inserted does not contain Chrome OS.
Any idea how to make it work?
      Thanks

      -

Brother, I cannot play .mkv video files in Chrome OS Flex.
The audio part is all good, but for the video part all I get is a black screen!
Is there an app or extension for that? And if it does play, will I be able to use and change the subtitles provided along with the video file?

      -

My laptop screen goes black after showing the Chrome logo while trying to install it. Is it because my laptop (HP notebook, Pentium processor) doesn't support Chrome OS Flex? Will I be able to install it once a stable release of Chrome OS Flex arrives, and what is the expected release date?

      -

Can we use the pendrive after the process the way we used it before writing the boot image to it (not only for Chrome OS Flex, but for Windows as well), or does the pendrive become useless and only usable for installing Chrome OS Flex/Windows?

      -

When I booted it from the USB stick it worked fine, but after I signed out and installed it, then removed the USB and started up, it showed the Chrome logo and then a black screen with just the cursor. Even now it works fine from the USB stick, but I get the same problem with the internal storage.

      -

      If you have set your Mac to allow apps only from the App Store and you try to install an app from elsewhere, your Mac will say that the app can't be opened because it was not downloaded from the App Store.*

      -

      To install Istation, please see the appropriate download below. In the event that the school's internet connection is lost, Istation will continue to function normally and will synchronize with our servers when the internet connection is restored. Since Istation is delivered through the internet, we transparently provide enhancements without a service call.

      \ No newline at end of file diff --git a/spaces/cihyFjudo/fairness-paper-search/Tai Chi Zero 1080p Torrent Experience the Legend of Yang Lu Chan in Full HD.md b/spaces/cihyFjudo/fairness-paper-search/Tai Chi Zero 1080p Torrent Experience the Legend of Yang Lu Chan in Full HD.md deleted file mode 100644 index 99290befe447f8a3ff3ef8b53e85cbaadc3700d4..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Tai Chi Zero 1080p Torrent Experience the Legend of Yang Lu Chan in Full HD.md +++ /dev/null @@ -1,6 +0,0 @@ -


      diff --git a/spaces/cihyFjudo/fairness-paper-search/Trading Connors Vix Reversals.pdf.md b/spaces/cihyFjudo/fairness-paper-search/Trading Connors Vix Reversals.pdf.md deleted file mode 100644 index 97d9394eb669d01067d23485dc1df062a610aef8..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Trading Connors Vix Reversals.pdf.md +++ /dev/null @@ -1,24 +0,0 @@ - -

      Larry Connors is an experienced trader and publisher of trading research. Together with Linda Raschke, he wrote the book, Street Smarts, which is a solid collection of trading strategies including the Holy Grail.

      -

      However, his book, How Markets Really Work, is a great read. It has back-test results of trading strategies and price action behavior, including highs/lows, VIX, put/call ratio and more. It presents the results clearly in nice tables to show you how markets really work.

      -

      -

Futures and forex trading involves substantial risk and is not for every investor. An investor could potentially lose all of, or more than, the initial investment. Risk capital is money that can be lost without jeopardizing one's financial security or lifestyle. Only risk capital should be used for trading, and only those with sufficient risk capital should consider trading. Past performance is not necessarily indicative of future results.

      -

      The website contents are only for educational purposes. All trades are random examples selected to present the trading setups and are not real trades. All trademarks belong to their respective owners. We are not registered with any regulating body that allows us to give financial and investment advice.

      -

      Larry Connors has over 30 years of experience in the financial industry. For many years, he has provided data-driven research to investors, hedge funds, and trading firms around the world. Larry developed the VIX RSI strategy together with Cesar Alvarez and published it in 2009 in their book Short Term Trading Strategies That Work.

      -

      For more detailed information on these rules and the rationale behind them, refer to Connors' book Short Term Trading Strategies That Work. The full C# source code can be reviewed as part of the TuringTrader.org open-source project. Our implementation of VIX RSI implements the rules as published in the book. However, we slightly altered the entry and exit points. Further, instead of trading on the market close, we modified the strategy to trade on the next market open. This change frees investors from the requirement of being in front of a computer at the market close and seems more suitable for our readers.
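The published entry and exit rules are in the book and are not restated here. Purely to illustrate the shape of such a strategy (a short-period RSI on price combined with a short-period RSI on the VIX), here is a hedged Python/pandas sketch. The thresholds (30, 90, 65) and the simple-average RSI variant are placeholders of our own, not the parameters published by Connors and Alvarez.

```python
import pandas as pd

def rsi(close: pd.Series, period: int = 2) -> pd.Series:
    """Simple-moving-average RSI variant -- good enough for a sketch."""
    delta = close.diff()
    avg_gain = delta.clip(lower=0).rolling(period).mean()
    avg_loss = (-delta.clip(upper=0)).rolling(period).mean()
    return 100 - 100 / (1 + avg_gain / avg_loss)

def vix_rsi_signal(spy_close: pd.Series, vix_close: pd.Series) -> pd.Series:
    """Illustrative long/flat signal: long when price RSI(2) is stretched low
    while VIX RSI(2) is stretched high, flat when price RSI(2) recovers.
    Thresholds are placeholders, NOT the book's rules."""
    price_rsi = rsi(spy_close, 2)
    fear_rsi = rsi(vix_close, 2)
    signal = pd.Series(float("nan"), index=spy_close.index)
    signal[(price_rsi < 30) & (fear_rsi > 90)] = 1.0   # enter long
    signal[price_rsi > 65] = 0.0                       # exit to cash
    return signal.ffill().fillna(0.0)
```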

      -

      Through both back testing and actual use in trading, the Connors VIX Reversals have proven to be one of the premier market timing tools for serious S&P traders, options traders, and stock traders. Put this new research to use in your trading today!

      -

We assert that the historically unique pattern of correlation exhibited by the 21-day correlation of daily returns between the S&P 500 (SPX) and the U.S. 10-Yr Treasury (UST 10Y), referred to here as the RVSL, since 2007 both informs on a secular shift in liquid-markets trading and can be enlisted as a key metric to gauge the timing and scope of periodic market shocks such as the May 2010, August 2011, October 2012, October 2014 and August 2015 *risk-off moves. (*Risk-off is loosely defined as a sharp decline in equities of 5% or more.) Secular Shift: As the graph in our report illustrates, the vast majority (67%) of outlier negative correlations since 1962 have been concentrated in just the past eight years. Anecdotal and empirical study suggests that several factors outlined in our Market Risk Management Framework piece have conspired to foster the instances of elevated correlations. These factors include increased banking and securities regulation, a decrease in the number of liquidity providers, and the persistence of aggressive monetary policy globally by central banks. We note that our analysis of other risk and return measures (not shown here), including the VIX, credit spreads and stock dispersions, does not demonstrate the same level of persistent and outsized clustering.
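The RVSL described above is simply a rolling Pearson correlation of daily returns. As a minimal pandas sketch (the input series are assumptions; you would supply your own SPX and 10-year Treasury price or total-return series):

```python
import pandas as pd

def rvsl(spx: pd.Series, ust10y: pd.Series, window: int = 21) -> pd.Series:
    """21-day rolling correlation of daily returns between the S&P 500
    and the U.S. 10-year Treasury series passed in."""
    spx_returns = spx.pct_change()
    ust_returns = ust10y.pct_change()
    return spx_returns.rolling(window).corr(ust_returns)
```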

      -

      -

In this article, we explain what the WilliamsVixFix indicator is, how you can use it, and whether it is a good trading tool or not. We test one WilliamsVixFix trading strategy. The indicator works.
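The article does not restate the formula, so as an assumption we use the definition usually quoted for the Williams VixFix: the distance of today's low below the highest close of the last 22 bars, expressed as a percentage of that highest close. A minimal Python sketch:

```python
import pandas as pd

def williams_vix_fix(close: pd.Series, low: pd.Series, lookback: int = 22) -> pd.Series:
    """Williams VixFix as commonly quoted (assumed formula): how far today's
    low sits below the highest close of the last `lookback` bars, in percent."""
    highest_close = close.rolling(lookback).max()
    return (highest_close - low) / highest_close * 100.0
```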

      -

Thanks for taking a look at my work and updating it. Intriguing that 14 years later, out of sample, it is still working. Has it found a higher level of lows? Mike Carr has done a lot of work with this as well, and I hope to add some new insights this year now that I have retired to just trading.

      -


      Our software is backed by our unconditional Money Back Guarantee. If for any reason you are not fully satisfied, you may return the software, within 30 days of purchase, for a 100% refund, less shipping and handling. Texas residents add 8.25% sales tax. Educational material is non-refundable.

Important Information: Futures, options and securities trading involves risk of loss and may not be suitable for all persons. No strategy can guarantee profits or freedom from loss. Past results are not necessarily indicative of future results. These results are based on simulated or hypothetical performance results that have certain inherent limitations. Unlike an actual performance record, simulated results do not represent actual trading. There are numerous market factors, including liquidity, which cannot be fully accounted for in the preparation of hypothetical performance results, all of which can adversely affect actual trading results. No representation is being made that any account will or is likely to achieve profits or losses similar to those shown.

      -

Larry Connors has over 30 years in the financial markets industry. His opinions have been featured in the Wall Street Journal, Bloomberg, Dow Jones, and many others. For over 15 years, Larry Connors, and now Connors Research, has provided the highest-quality, data-driven research on trading for individual investors, hedge funds, proprietary trading firms, and bank trading desks around the world.

      -

      The relative strength index (RSI) is a technical indicator used in the analysis of financial markets. It is intended to chart the current and historical strength or weakness of a stock or market based on the closing prices of a recent trading period. The indicator should not be confused with relative strength.

      -

      The level of the RSI is a measure of the stock's recent trading strength. The slope of the RSI is directly proportional to the velocity of a change in the trend. The distance traveled by the RSI is proportional to the magnitude of the move.
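For readers who want to compute it themselves, here is a minimal Python sketch of the standard 14-period RSI using Wilder's smoothing (the usual default; other smoothing choices exist):

```python
import pandas as pd

def rsi_wilder(close: pd.Series, period: int = 14) -> pd.Series:
    """Relative Strength Index with Wilder's exponential smoothing."""
    delta = close.diff()
    gain = delta.clip(lower=0)
    loss = -delta.clip(upper=0)
    # Wilder's smoothing is an exponential moving average with alpha = 1/period
    avg_gain = gain.ewm(alpha=1 / period, adjust=False, min_periods=period).mean()
    avg_loss = loss.ewm(alpha=1 / period, adjust=False, min_periods=period).mean()
    rs = avg_gain / avg_loss
    return 100 - 100 / (1 + rs)
```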

      -

      The product of a five-year collaboration between Dr. Ari Kiev, a leading psychiatrist renowned for his success with Olympic athletes, and top equities trader Steve Cohen, Trading to Win gives you the essential tools to overcome outmoded, self-limiting beliefs and mindsets that may be keeping you from a higher level of success. Illustrated with real market scenarios and applications, this powerful program will help psych you into a less stressful, more self-possessed mastery of the trading game and help you reach goals you may never have thought possible.

      -

      Trading to Win presents a step-by-step, goal-oriented program for building the mental and emotional stamina not only to win, but to win on an unprecedented level. Created by a leading psychiatrist for a top trading firm, this proven approach spotlights a set of philosophical and behavioral principles designed to assist you in implementing proactive trading strategies, as well as developing the mindset needed to trade effectively in the realm of uncertainty. Delving into your underlying thought processes when you trade, Trading to Win enables you to understand what is motivating you, whether it is consistent with your game plan, and whether you are in any way sabotaging yourself.

      -

One of the first books to address the psychological nature of how successful traders think, The Disciplined Trader is now an industry classic. In this groundbreaking work published in 1990, Douglas examines the causes of why most traders cannot raise and keep their equity on a consistent basis, and brings the reader to practical and unique conclusions as to how to go about changing any limiting mindset. The trader is taken through a step-by-step process to break through those barriers and begin to understand that their very thoughts may be limiting their ability to accumulate and succeed at trading.

      -

      In his first book, Hit & Run Trading, Jeff Cooper taught traders how he has made his living day-trading stocks over the past decade. The book is such a success that it is now back for its fifth printing in its first 18 months.

      -

      The strategies and techniques assembled in this book will give you the tools and insight needed to make better trading decisions, ultimately becoming a smarter trader. This book is a must for anyone who has the drive to be successful where others have failed.

      aaccfb2cb3
      -
      -
      \ No newline at end of file diff --git a/spaces/cloversid/rvc-ai/Dockerfile b/spaces/cloversid/rvc-ai/Dockerfile deleted file mode 100644 index e4a7b3dd76c52ba318650d4444567aa8311b7912..0000000000000000000000000000000000000000 --- a/spaces/cloversid/rvc-ai/Dockerfile +++ /dev/null @@ -1,32 +0,0 @@ -# read the doc: https://huggingface.co/docs/hub/spaces-sdks-docker -# you will also find guides on how best to write your Dockerfile -FROM python:3.9 -RUN apt update && apt upgrade -y && apt install git ffmpeg -y -# Set up a new user named "user" with user ID 1000 -RUN useradd -m -u 1000 user -RUN wget https://github.com/git-lfs/git-lfs/releases/download/v2.9.0/git-lfs-linux-amd64-v2.9.0.tar.gz -RUN tar -xf git-lfs-linux-amd64-v2.9.0.tar.gz && chmod 755 install.sh && ./install.sh -# Switch to the "user" user -USER user -# Set home to the user's home directory -ENV HOME=/home/user \ - PATH=/home/user/.local/bin:$PATH \ - PYTHONPATH=$HOME/app \ - PYTHONUNBUFFERED=1 \ - GRADIO_ALLOW_FLAGGING=never \ - GRADIO_NUM_PORTS=1 \ - GRADIO_SERVER_NAME=0.0.0.0 \ - GRADIO_THEME=huggingface \ - SYSTEM=spaces - -# Set the working directory to the user's home directory -WORKDIR $HOME -RUN --mount=type=secret,id=auth,mode=0444,required=true git lfs clone https://$(cat /run/secrets/auth)@huggingface.co/spaces/cloversid/rvc-space.git -# Copy the current directory contents into the container at $HOME/app setting the owner to the user -RUN mv rvc-space rvc -WORKDIR $HOME/rvc -RUN git lfs install -RUN git lfs pull -RUN python merge_external_models.py -RUN pip install --no-cache-dir -r requirements.txt -CMD python app.py \ No newline at end of file diff --git a/spaces/cmudrc/lattice-interpolation/README.md b/spaces/cmudrc/lattice-interpolation/README.md deleted file mode 100644 index edb83e579b9ae0811c48878ae7566055b86e4ec0..0000000000000000000000000000000000000000 --- a/spaces/cmudrc/lattice-interpolation/README.md +++ /dev/null @@ -1,19 +0,0 @@ ---- -title: Lattice Interpolation -emoji: 🔗 -colorFrom: indigo -colorTo: gray -sdk: gradio -sdk_version: 3.6 -python_version: 3.7.15 -app_file: app.py -pinned: false -license: mit -datasets: -- cmudrc/2d-lattices -models: -- cmudrc/2d-lattice-encoder -- cmudrc/2d-lattice-decoder ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/eac3dec.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/eac3dec.c deleted file mode 100644 index 5c71751a0c8a9b7b7b0978651df446234835205e..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/eac3dec.c +++ /dev/null @@ -1,634 +0,0 @@ -/* - * E-AC-3 decoder - * Copyright (c) 2007 Bartlomiej Wolowiec - * Copyright (c) 2008 Justin Ruggles - * - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. 
- * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -/* - * There are several features of E-AC-3 that this decoder does not yet support. - * - * Enhanced Coupling - * No known samples exist. If any ever surface, this feature should not be - * too difficult to implement. - * - * Reduced Sample Rates - * No known samples exist. The spec also does not give clear information - * on how this is to be implemented. - * - * Transient Pre-noise Processing - * This is side information which a decoder should use to reduce artifacts - * caused by transients. There are samples which are known to have this - * information, but this decoder currently ignores it. - */ - - -#include "avcodec.h" -#include "aac_ac3_parser.h" -#include "ac3.h" -#include "ac3dec.h" -#include "ac3dec_data.h" -#include "eac3_data.h" - -/** gain adaptive quantization mode */ -typedef enum { - EAC3_GAQ_NO =0, - EAC3_GAQ_12, - EAC3_GAQ_14, - EAC3_GAQ_124 -} EAC3GaqMode; - -#define EAC3_SR_CODE_REDUCED 3 - -static void ff_eac3_apply_spectral_extension(AC3DecodeContext *s) -{ - int bin, bnd, ch, i; - uint8_t wrapflag[SPX_MAX_BANDS]={1,0,}, num_copy_sections, copy_sizes[SPX_MAX_BANDS]; - float rms_energy[SPX_MAX_BANDS]; - - /* Set copy index mapping table. Set wrap flags to apply a notch filter at - wrap points later on. */ - bin = s->spx_dst_start_freq; - num_copy_sections = 0; - for (bnd = 0; bnd < s->num_spx_bands; bnd++) { - int copysize; - int bandsize = s->spx_band_sizes[bnd]; - if (bin + bandsize > s->spx_src_start_freq) { - copy_sizes[num_copy_sections++] = bin - s->spx_dst_start_freq; - bin = s->spx_dst_start_freq; - wrapflag[bnd] = 1; - } - for (i = 0; i < bandsize; i += copysize) { - if (bin == s->spx_src_start_freq) { - copy_sizes[num_copy_sections++] = bin - s->spx_dst_start_freq; - bin = s->spx_dst_start_freq; - } - copysize = FFMIN(bandsize - i, s->spx_src_start_freq - bin); - bin += copysize; - } - } - copy_sizes[num_copy_sections++] = bin - s->spx_dst_start_freq; - - for (ch = 1; ch <= s->fbw_channels; ch++) { - if (!s->channel_uses_spx[ch]) - continue; - - /* Copy coeffs from normal bands to extension bands */ - bin = s->spx_src_start_freq; - for (i = 0; i < num_copy_sections; i++) { - memcpy(&s->transform_coeffs[ch][bin], - &s->transform_coeffs[ch][s->spx_dst_start_freq], - copy_sizes[i]*sizeof(INTFLOAT)); - bin += copy_sizes[i]; - } - - /* Calculate RMS energy for each SPX band. */ - bin = s->spx_src_start_freq; - for (bnd = 0; bnd < s->num_spx_bands; bnd++) { - int bandsize = s->spx_band_sizes[bnd]; - float accum = 0.0f; - for (i = 0; i < bandsize; i++) { - float coeff = s->transform_coeffs[ch][bin++]; - accum += coeff * coeff; - } - rms_energy[bnd] = sqrtf(accum / bandsize); - } - - /* Apply a notch filter at transitions between normal and extension - bands and at all wrap points. 
*/ - if (s->spx_atten_code[ch] >= 0) { - const float *atten_tab = ff_eac3_spx_atten_tab[s->spx_atten_code[ch]]; - bin = s->spx_src_start_freq - 2; - for (bnd = 0; bnd < s->num_spx_bands; bnd++) { - if (wrapflag[bnd]) { - INTFLOAT *coeffs = &s->transform_coeffs[ch][bin]; - coeffs[0] *= atten_tab[0]; - coeffs[1] *= atten_tab[1]; - coeffs[2] *= atten_tab[2]; - coeffs[3] *= atten_tab[1]; - coeffs[4] *= atten_tab[0]; - } - bin += s->spx_band_sizes[bnd]; - } - } - - /* Apply noise-blended coefficient scaling based on previously - calculated RMS energy, blending factors, and SPX coordinates for - each band. */ - bin = s->spx_src_start_freq; - for (bnd = 0; bnd < s->num_spx_bands; bnd++) { - float nscale = s->spx_noise_blend[ch][bnd] * rms_energy[bnd] * (1.0f / INT32_MIN); - float sscale = s->spx_signal_blend[ch][bnd]; -#if USE_FIXED - // spx_noise_blend and spx_signal_blend are both FP.23 - nscale *= 1.0 / (1<<23); - sscale *= 1.0 / (1<<23); - if (nscale < -1.0) - nscale = -1.0; -#endif - for (i = 0; i < s->spx_band_sizes[bnd]; i++) { - UINTFLOAT noise = (INTFLOAT)(nscale * (int32_t)av_lfg_get(&s->dith_state)); - s->transform_coeffs[ch][bin] *= sscale; - s->transform_coeffs[ch][bin++] += noise; - } - } - } -} - - -/** lrint(M_SQRT2*cos(2*M_PI/12)*(1<<23)) */ -#define COEFF_0 10273905LL - -/** lrint(M_SQRT2*cos(0*M_PI/12)*(1<<23)) = lrint(M_SQRT2*(1<<23)) */ -#define COEFF_1 11863283LL - -/** lrint(M_SQRT2*cos(5*M_PI/12)*(1<<23)) */ -#define COEFF_2 3070444LL - -/** - * Calculate 6-point IDCT of the pre-mantissas. - * All calculations are 24-bit fixed-point. - */ -static void idct6(int pre_mant[6]) -{ - int tmp; - int even0, even1, even2, odd0, odd1, odd2; - - odd1 = pre_mant[1] - pre_mant[3] - pre_mant[5]; - - even2 = ( pre_mant[2] * COEFF_0) >> 23; - tmp = ( pre_mant[4] * COEFF_1) >> 23; - odd0 = ((pre_mant[1] + pre_mant[5]) * COEFF_2) >> 23; - - even0 = pre_mant[0] + (tmp >> 1); - even1 = pre_mant[0] - tmp; - - tmp = even0; - even0 = tmp + even2; - even2 = tmp - even2; - - tmp = odd0; - odd0 = tmp + pre_mant[1] + pre_mant[3]; - odd2 = tmp + pre_mant[5] - pre_mant[3]; - - pre_mant[0] = even0 + odd0; - pre_mant[1] = even1 + odd1; - pre_mant[2] = even2 + odd2; - pre_mant[3] = even2 - odd2; - pre_mant[4] = even1 - odd1; - pre_mant[5] = even0 - odd0; -} - -static void ff_eac3_decode_transform_coeffs_aht_ch(AC3DecodeContext *s, int ch) -{ - int bin, blk, gs; - int end_bap, gaq_mode; - GetBitContext *gbc = &s->gbc; - int gaq_gain[AC3_MAX_COEFS]; - - gaq_mode = get_bits(gbc, 2); - end_bap = (gaq_mode < 2) ? 
12 : 17; - - /* if GAQ gain is used, decode gain codes for bins with hebap between - 8 and end_bap */ - gs = 0; - if (gaq_mode == EAC3_GAQ_12 || gaq_mode == EAC3_GAQ_14) { - /* read 1-bit GAQ gain codes */ - for (bin = s->start_freq[ch]; bin < s->end_freq[ch]; bin++) { - if (s->bap[ch][bin] > 7 && s->bap[ch][bin] < end_bap) - gaq_gain[gs++] = get_bits1(gbc) << (gaq_mode-1); - } - } else if (gaq_mode == EAC3_GAQ_124) { - /* read 1.67-bit GAQ gain codes (3 codes in 5 bits) */ - int gc = 2; - for (bin = s->start_freq[ch]; bin < s->end_freq[ch]; bin++) { - if (s->bap[ch][bin] > 7 && s->bap[ch][bin] < 17) { - if (gc++ == 2) { - int group_code = get_bits(gbc, 5); - if (group_code > 26) { - av_log(s->avctx, AV_LOG_WARNING, "GAQ gain group code out-of-range\n"); - group_code = 26; - } - gaq_gain[gs++] = ff_ac3_ungroup_3_in_5_bits_tab[group_code][0]; - gaq_gain[gs++] = ff_ac3_ungroup_3_in_5_bits_tab[group_code][1]; - gaq_gain[gs++] = ff_ac3_ungroup_3_in_5_bits_tab[group_code][2]; - gc = 0; - } - } - } - } - - gs=0; - for (bin = s->start_freq[ch]; bin < s->end_freq[ch]; bin++) { - int hebap = s->bap[ch][bin]; - int bits = ff_eac3_bits_vs_hebap[hebap]; - if (!hebap) { - /* zero-mantissa dithering */ - for (blk = 0; blk < 6; blk++) { - s->pre_mantissa[ch][bin][blk] = (av_lfg_get(&s->dith_state) & 0x7FFFFF) - 0x400000; - } - } else if (hebap < 8) { - /* Vector Quantization */ - int v = get_bits(gbc, bits); - for (blk = 0; blk < 6; blk++) { - s->pre_mantissa[ch][bin][blk] = ff_eac3_mantissa_vq[hebap][v][blk] * (1 << 8); - } - } else { - /* Gain Adaptive Quantization */ - int gbits, log_gain; - if (gaq_mode != EAC3_GAQ_NO && hebap < end_bap) { - log_gain = gaq_gain[gs++]; - } else { - log_gain = 0; - } - gbits = bits - log_gain; - - for (blk = 0; blk < 6; blk++) { - int mant = get_sbits(gbc, gbits); - if (log_gain && mant == -(1 << (gbits-1))) { - /* large mantissa */ - int b; - int mbits = bits - (2 - log_gain); - mant = get_sbits(gbc, mbits); - mant = ((unsigned)mant) << (23 - (mbits - 1)); - /* remap mantissa value to correct for asymmetric quantization */ - if (mant >= 0) - b = 1 << (23 - log_gain); - else - b = ff_eac3_gaq_remap_2_4_b[hebap-8][log_gain-1] * (1 << 8); - mant += ((ff_eac3_gaq_remap_2_4_a[hebap-8][log_gain-1] * (int64_t)mant) >> 15) + b; - } else { - /* small mantissa, no GAQ, or Gk=1 */ - mant *= (1 << 24 - bits); - if (!log_gain) { - /* remap mantissa value for no GAQ or Gk=1 */ - mant += (ff_eac3_gaq_remap_1[hebap-8] * (int64_t)mant) >> 15; - } - } - s->pre_mantissa[ch][bin][blk] = mant; - } - } - idct6(s->pre_mantissa[ch][bin]); - } -} - -static int ff_eac3_parse_header(AC3DecodeContext *s) -{ - int i, blk, ch; - int ac3_exponent_strategy, parse_aht_info, parse_spx_atten_data; - int parse_transient_proc_info; - int num_cpl_blocks; - GetBitContext *gbc = &s->gbc; - - /* An E-AC-3 stream can have multiple independent streams which the - application can select from. each independent stream can also contain - dependent streams which are used to add or replace channels. */ - if (s->frame_type == EAC3_FRAME_TYPE_RESERVED) { - av_log(s->avctx, AV_LOG_ERROR, "Reserved frame type\n"); - return AAC_AC3_PARSE_ERROR_FRAME_TYPE; - } - - /* The substream id indicates which substream this frame belongs to. each - independent stream has its own substream id, and the dependent streams - associated to an independent stream have matching substream id's. */ - if (s->substreamid) { - /* only decode substream with id=0. skip any additional substreams. 
*/ - if (!s->eac3_subsbtreamid_found) { - s->eac3_subsbtreamid_found = 1; - avpriv_request_sample(s->avctx, "Additional substreams"); - } - return AAC_AC3_PARSE_ERROR_FRAME_TYPE; - } - - if (s->bit_alloc_params.sr_code == EAC3_SR_CODE_REDUCED) { - /* The E-AC-3 specification does not tell how to handle reduced sample - rates in bit allocation. The best assumption would be that it is - handled like AC-3 DolbyNet, but we cannot be sure until we have a - sample which utilizes this feature. */ - avpriv_request_sample(s->avctx, "Reduced sampling rate"); - return AVERROR_PATCHWELCOME; - } - skip_bits(gbc, 5); // skip bitstream id - - /* volume control params */ - for (i = 0; i < (s->channel_mode ? 1 : 2); i++) { - s->dialog_normalization[i] = -get_bits(gbc, 5); - if (s->dialog_normalization[i] == 0) { - s->dialog_normalization[i] = -31; - } - if (s->target_level != 0) { - s->level_gain[i] = powf(2.0f, - (float)(s->target_level - s->dialog_normalization[i])/6.0f); - } - s->compression_exists[i] = get_bits1(gbc); - if (s->compression_exists[i]) { - s->heavy_dynamic_range[i] = AC3_HEAVY_RANGE(get_bits(gbc, 8)); - } - } - - /* dependent stream channel map */ - if (s->frame_type == EAC3_FRAME_TYPE_DEPENDENT) { - if (get_bits1(gbc)) { - int64_t channel_layout = 0; - int channel_map = get_bits(gbc, 16); - av_log(s->avctx, AV_LOG_DEBUG, "channel_map: %0X\n", channel_map); - - for (i = 0; i < 16; i++) - if (channel_map & (1 << (EAC3_MAX_CHANNELS - i - 1))) - channel_layout |= ff_eac3_custom_channel_map_locations[i][1]; - - if (av_popcount64(channel_layout) > EAC3_MAX_CHANNELS) { - return AVERROR_INVALIDDATA; - } - s->channel_map = channel_map; - } - } - - /* mixing metadata */ - if (get_bits1(gbc)) { - /* center and surround mix levels */ - if (s->channel_mode > AC3_CHMODE_STEREO) { - s->preferred_downmix = get_bits(gbc, 2); - if (s->channel_mode & 1) { - /* if three front channels exist */ - s->center_mix_level_ltrt = get_bits(gbc, 3); - s->center_mix_level = get_bits(gbc, 3); - } - if (s->channel_mode & 4) { - /* if a surround channel exists */ - s->surround_mix_level_ltrt = av_clip(get_bits(gbc, 3), 3, 7); - s->surround_mix_level = av_clip(get_bits(gbc, 3), 3, 7); - } - } - - /* lfe mix level */ - if (s->lfe_on && (s->lfe_mix_level_exists = get_bits1(gbc))) { - s->lfe_mix_level = get_bits(gbc, 5); - } - - /* info for mixing with other streams and substreams */ - if (s->frame_type == EAC3_FRAME_TYPE_INDEPENDENT) { - for (i = 0; i < (s->channel_mode ? 1 : 2); i++) { - // TODO: apply program scale factor - if (get_bits1(gbc)) { - skip_bits(gbc, 6); // skip program scale factor - } - } - if (get_bits1(gbc)) { - skip_bits(gbc, 6); // skip external program scale factor - } - /* skip mixing parameter data */ - switch(get_bits(gbc, 2)) { - case 1: skip_bits(gbc, 5); break; - case 2: skip_bits(gbc, 12); break; - case 3: { - int mix_data_size = (get_bits(gbc, 5) + 2) << 3; - skip_bits_long(gbc, mix_data_size); - break; - } - } - /* skip pan information for mono or dual mono source */ - if (s->channel_mode < AC3_CHMODE_STEREO) { - for (i = 0; i < (s->channel_mode ? 
1 : 2); i++) { - if (get_bits1(gbc)) { - /* note: this is not in the ATSC A/52B specification - reference: ETSI TS 102 366 V1.1.1 - section: E.1.3.1.25 */ - skip_bits(gbc, 8); // skip pan mean direction index - skip_bits(gbc, 6); // skip reserved paninfo bits - } - } - } - /* skip mixing configuration information */ - if (get_bits1(gbc)) { - for (blk = 0; blk < s->num_blocks; blk++) { - if (s->num_blocks == 1 || get_bits1(gbc)) { - skip_bits(gbc, 5); - } - } - } - } - } - - /* informational metadata */ - if (get_bits1(gbc)) { - s->bitstream_mode = get_bits(gbc, 3); - skip_bits(gbc, 2); // skip copyright bit and original bitstream bit - if (s->channel_mode == AC3_CHMODE_STEREO) { - s->dolby_surround_mode = get_bits(gbc, 2); - s->dolby_headphone_mode = get_bits(gbc, 2); - } - if (s->channel_mode >= AC3_CHMODE_2F2R) { - s->dolby_surround_ex_mode = get_bits(gbc, 2); - } - for (i = 0; i < (s->channel_mode ? 1 : 2); i++) { - if (get_bits1(gbc)) { - skip_bits(gbc, 8); // skip mix level, room type, and A/D converter type - } - } - if (s->bit_alloc_params.sr_code != EAC3_SR_CODE_REDUCED) { - skip_bits1(gbc); // skip source sample rate code - } - } - - /* converter synchronization flag - If frames are less than six blocks, this bit should be turned on - once every 6 blocks to indicate the start of a frame set. - reference: RFC 4598, Section 2.1.3 Frame Sets */ - if (s->frame_type == EAC3_FRAME_TYPE_INDEPENDENT && s->num_blocks != 6) { - skip_bits1(gbc); // skip converter synchronization flag - } - - /* original frame size code if this stream was converted from AC-3 */ - if (s->frame_type == EAC3_FRAME_TYPE_AC3_CONVERT && - (s->num_blocks == 6 || get_bits1(gbc))) { - skip_bits(gbc, 6); // skip frame size code - } - - /* additional bitstream info */ - if (get_bits1(gbc)) { - int addbsil = get_bits(gbc, 6); - for (i = 0; i < addbsil + 1; i++) { - if (i == 0) { - /* In this 8 bit chunk, the LSB is equal to flag_ec3_extension_type_a - which can be used to detect Atmos presence */ - skip_bits(gbc, 7); - if (get_bits1(gbc)) { - s->eac3_extension_type_a = 1; - } - } else { - skip_bits(gbc, 8); // skip additional bit stream info - } - } - } - - /* audio frame syntax flags, strategy data, and per-frame data */ - - if (s->num_blocks == 6) { - ac3_exponent_strategy = get_bits1(gbc); - parse_aht_info = get_bits1(gbc); - } else { - /* less than 6 blocks, so use AC-3-style exponent strategy syntax, and - do not use AHT */ - ac3_exponent_strategy = 1; - parse_aht_info = 0; - } - - s->snr_offset_strategy = get_bits(gbc, 2); - parse_transient_proc_info = get_bits1(gbc); - - s->block_switch_syntax = get_bits1(gbc); - if (!s->block_switch_syntax) - memset(s->block_switch, 0, sizeof(s->block_switch)); - - s->dither_flag_syntax = get_bits1(gbc); - if (!s->dither_flag_syntax) { - for (ch = 1; ch <= s->fbw_channels; ch++) - s->dither_flag[ch] = 1; - } - s->dither_flag[CPL_CH] = s->dither_flag[s->lfe_ch] = 0; - - s->bit_allocation_syntax = get_bits1(gbc); - if (!s->bit_allocation_syntax) { - /* set default bit allocation parameters */ - s->bit_alloc_params.slow_decay = ff_ac3_slow_decay_tab[2]; - s->bit_alloc_params.fast_decay = ff_ac3_fast_decay_tab[1]; - s->bit_alloc_params.slow_gain = ff_ac3_slow_gain_tab [1]; - s->bit_alloc_params.db_per_bit = ff_ac3_db_per_bit_tab[2]; - s->bit_alloc_params.floor = ff_ac3_floor_tab [7]; - } - - s->fast_gain_syntax = get_bits1(gbc); - s->dba_syntax = get_bits1(gbc); - s->skip_syntax = get_bits1(gbc); - parse_spx_atten_data = get_bits1(gbc); - - /* coupling strategy occurrence and 
coupling use per block */ - num_cpl_blocks = 0; - if (s->channel_mode > 1) { - for (blk = 0; blk < s->num_blocks; blk++) { - s->cpl_strategy_exists[blk] = (!blk || get_bits1(gbc)); - if (s->cpl_strategy_exists[blk]) { - s->cpl_in_use[blk] = get_bits1(gbc); - } else { - s->cpl_in_use[blk] = s->cpl_in_use[blk-1]; - } - num_cpl_blocks += s->cpl_in_use[blk]; - } - } else { - memset(s->cpl_in_use, 0, sizeof(s->cpl_in_use)); - } - - /* exponent strategy data */ - if (ac3_exponent_strategy) { - /* AC-3-style exponent strategy syntax */ - for (blk = 0; blk < s->num_blocks; blk++) { - for (ch = !s->cpl_in_use[blk]; ch <= s->fbw_channels; ch++) { - s->exp_strategy[blk][ch] = get_bits(gbc, 2); - } - } - } else { - /* LUT-based exponent strategy syntax */ - for (ch = !((s->channel_mode > 1) && num_cpl_blocks); ch <= s->fbw_channels; ch++) { - int frmchexpstr = get_bits(gbc, 5); - for (blk = 0; blk < 6; blk++) { - s->exp_strategy[blk][ch] = ff_eac3_frm_expstr[frmchexpstr][blk]; - } - } - } - /* LFE exponent strategy */ - if (s->lfe_on) { - for (blk = 0; blk < s->num_blocks; blk++) { - s->exp_strategy[blk][s->lfe_ch] = get_bits1(gbc); - } - } - /* original exponent strategies if this stream was converted from AC-3 */ - if (s->frame_type == EAC3_FRAME_TYPE_INDEPENDENT && - (s->num_blocks == 6 || get_bits1(gbc))) { - skip_bits(gbc, 5 * s->fbw_channels); // skip converter channel exponent strategy - } - - /* determine which channels use AHT */ - if (parse_aht_info) { - /* For AHT to be used, all non-zero blocks must reuse exponents from - the first block. Furthermore, for AHT to be used in the coupling - channel, all blocks must use coupling and use the same coupling - strategy. */ - s->channel_uses_aht[CPL_CH]=0; - for (ch = (num_cpl_blocks != 6); ch <= s->channels; ch++) { - int use_aht = 1; - for (blk = 1; blk < 6; blk++) { - if ((s->exp_strategy[blk][ch] != EXP_REUSE) || - (!ch && s->cpl_strategy_exists[blk])) { - use_aht = 0; - break; - } - } - s->channel_uses_aht[ch] = use_aht && get_bits1(gbc); - } - } else { - memset(s->channel_uses_aht, 0, sizeof(s->channel_uses_aht)); - } - - /* per-frame SNR offset */ - if (!s->snr_offset_strategy) { - int csnroffst = (get_bits(gbc, 6) - 15) << 4; - int snroffst = (csnroffst + get_bits(gbc, 4)) << 2; - for (ch = 0; ch <= s->channels; ch++) - s->snr_offset[ch] = snroffst; - } - - /* transient pre-noise processing data */ - if (parse_transient_proc_info) { - for (ch = 1; ch <= s->fbw_channels; ch++) { - if (get_bits1(gbc)) { // channel in transient processing - skip_bits(gbc, 10); // skip transient processing location - skip_bits(gbc, 8); // skip transient processing length - } - } - } - - /* spectral extension attenuation data */ - for (ch = 1; ch <= s->fbw_channels; ch++) { - if (parse_spx_atten_data && get_bits1(gbc)) { - s->spx_atten_code[ch] = get_bits(gbc, 5); - } else { - s->spx_atten_code[ch] = -1; - } - } - - /* block start information */ - if (s->num_blocks > 1 && get_bits1(gbc)) { - /* reference: Section E2.3.2.27 - nblkstrtbits = (numblks - 1) * (4 + ceiling(log2(words_per_frame))) - The spec does not say what this data is or what it's used for. - It is likely the offset of each block within the frame. 
*/ - int block_start_bits = (s->num_blocks-1) * (4 + av_log2(s->frame_size-2)); - skip_bits_long(gbc, block_start_bits); - avpriv_request_sample(s->avctx, "Block start info"); - } - - /* syntax state initialization */ - for (ch = 1; ch <= s->fbw_channels; ch++) { - s->first_spx_coords[ch] = 1; - s->first_cpl_coords[ch] = 1; - } - s->first_cpl_leak = 1; - - return 0; -} diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/g729postfilter.h b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/g729postfilter.h deleted file mode 100644 index 69815341ed3f3110328150203bd29826a3fd183b..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/g729postfilter.h +++ /dev/null @@ -1,117 +0,0 @@ -/* - * G.729, G729 Annex D postfilter - * Copyright (c) 2008 Vladimir Voroshilov - * - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. - * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ -#ifndef AVCODEC_G729POSTFILTER_H -#define AVCODEC_G729POSTFILTER_H - -#include -#include "acelp_pitch_delay.h" -#include "audiodsp.h" - -/** - * tilt compensation factor (G.729, k1>0) - * 0.2 in Q15 - */ -#define G729_TILT_FACTOR_PLUS 6554 - -/** - * tilt compensation factor (G.729, k1<0) - * 0.9 in Q15 - */ -#define G729_TILT_FACTOR_MINUS 29491 - -/* 4.2.2 */ -#define FORMANT_PP_FACTOR_NUM 18022 //0.55 in Q15 -#define FORMANT_PP_FACTOR_DEN 22938 //0.70 in Q15 - -/** - * gain adjustment factor (G.729, 4.2.4) - * 0.9875 in Q15 - */ -#define G729_AGC_FACTOR 32358 -#define G729_AGC_FAC1 (32768-G729_AGC_FACTOR) - -/** - * 1.0 / (1.0 + 0.5) in Q15 - * where 0.5 is the minimum value of - * weight factor, controlling amount of long-term postfiltering - */ -#define MIN_LT_FILT_FACTOR_A 21845 - -/** - * Short interpolation filter length - */ -#define SHORT_INT_FILT_LEN 2 - -/** - * Long interpolation filter length - */ -#define LONG_INT_FILT_LEN 8 - -/** - * Number of analyzed fractional pitch delays in second stage of long-term - * postfilter - */ -#define ANALYZED_FRAC_DELAYS 7 - -/** - * Amount of past residual signal data stored in buffer - */ -#define RES_PREV_DATA_SIZE (PITCH_DELAY_MAX + LONG_INT_FILT_LEN + 1) - -/** - * \brief Signal postfiltering (4.2) - * \param dsp initialized DSP context - * \param ht_prev_data [in/out] (Q12) pointer to variable receiving tilt - * compensation filter data from previous subframe - * \param voicing [in/out] (Q0) pointer to variable receiving voicing decision - * \param lp_filter_coeffs (Q12) LP filter coefficients - * \param pitch_delay_int integer part of the pitch delay - * \param residual [in/out] (Q0) residual signal buffer (used in long-term postfilter) - * \param res_filter_data [in/out] (Q0) speech data of previous subframe - * \param pos_filter_data [in/out] (Q0) previous speech data for short-term postfilter - * \param speech [in/out] (Q0) signal buffer - * 
\param subframe_size size of subframe - * - * Filtering has the following stages: - * Long-term postfilter (4.2.1) - * Short-term postfilter (4.2.2). - * Tilt-compensation (4.2.3) - */ -void ff_g729_postfilter(AudioDSPContext *adsp, int16_t* ht_prev_data, int* voicing, - const int16_t *lp_filter_coeffs, int pitch_delay_int, - int16_t* residual, int16_t* res_filter_data, - int16_t* pos_filter_data, int16_t *speech, - int subframe_size); - -/** - * \brief Adaptive gain control (4.2.4) - * \param gain_before (Q0) gain of speech before applying postfilters - * \param gain_after (Q0) gain of speech after applying postfilters - * \param speech [in/out] (Q0) signal buffer - * \param subframe_size length of subframe - * \param gain_prev (Q12) previous value of gain coefficient - * - * \return (Q12) last value of gain coefficient - */ -int16_t ff_g729_adaptive_gain_control(int gain_before, int gain_after, int16_t *speech, - int subframe_size, int16_t gain_prev); - -#endif // AVCODEC_G729POSTFILTER_H diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/h263dsp.h b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/h263dsp.h deleted file mode 100644 index 1abea3ca8cd3227d8cb0d7acee80816b4145601d..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/h263dsp.h +++ /dev/null @@ -1,35 +0,0 @@ -/* - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. - * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -#ifndef AVCODEC_H263DSP_H -#define AVCODEC_H263DSP_H - -#include - -extern const uint8_t ff_h263_loop_filter_strength[32]; - -typedef struct H263DSPContext { - void (*h263_h_loop_filter)(uint8_t *src, int stride, int qscale); - void (*h263_v_loop_filter)(uint8_t *src, int stride, int qscale); -} H263DSPContext; - -void ff_h263dsp_init(H263DSPContext *ctx); -void ff_h263dsp_init_x86(H263DSPContext *ctx); -void ff_h263dsp_init_mips(H263DSPContext *ctx); - -#endif /* AVCODEC_H263DSP_H */ diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/h274.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/h274.c deleted file mode 100644 index a69f94114293ea2672deff9fcc57fd8405aa4b58..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/h274.c +++ /dev/null @@ -1,792 +0,0 @@ -/* - * H.274 film grain synthesis - * Copyright (c) 2021 Niklas Haas - * - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. 
- * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. - * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -/** - * @file - * H.274 film grain synthesis. - * @author Niklas Haas - */ - -#include "libavutil/avassert.h" -#include "libavutil/imgutils.h" - -#include "h274.h" - -static const int8_t Gaussian_LUT[2048+4]; -static const uint32_t Seed_LUT[256]; -static const int8_t R64T[64][64]; - -static void prng_shift(uint32_t *state) -{ - // Primitive polynomial x^31 + x^3 + 1 (modulo 2) - uint32_t x = *state; - uint8_t feedback = (x >> 2) ^ (x >> 30); - *state = (x << 1) | (feedback & 1u); -} - -static void init_slice_c(int8_t out[64][64], uint8_t h, uint8_t v, - int16_t tmp[64][64]) -{ - static const uint8_t deblock_factors[13] = { - 64, 71, 77, 84, 90, 96, 103, 109, 116, 122, 128, 128, 128 - }; - - const uint8_t deblock_coeff = deblock_factors[v]; - const uint8_t freq_h = ((h + 3) << 2) - 1; - const uint8_t freq_v = ((v + 3) << 2) - 1; - uint32_t seed = Seed_LUT[h + v * 13]; - - // Initialize with random gaussian values, using the output array as a - // temporary buffer for these intermediate values. - // - // Note: To make the subsequent matrix multiplication cache friendlier, we - // store each *column* of the starting image in a *row* of `out` - for (int y = 0; y <= freq_v; y++) { - for (int x = 0; x <= freq_h; x += 4) { - uint16_t offset = seed % 2048; - out[x + 0][y] = Gaussian_LUT[offset + 0]; - out[x + 1][y] = Gaussian_LUT[offset + 1]; - out[x + 2][y] = Gaussian_LUT[offset + 2]; - out[x + 3][y] = Gaussian_LUT[offset + 3]; - prng_shift(&seed); - } - } - - out[0][0] = 0; - - // 64x64 inverse integer transform - for (int y = 0; y < 64; y++) { - for (int x = 0; x <= freq_h; x++) { - int32_t sum = 0; - for (int p = 0; p <= freq_v; p++) - sum += R64T[y][p] * out[x][p]; - tmp[y][x] = (sum + 128) >> 8; - } - } - - for (int y = 0; y < 64; y++) { - for (int x = 0; x < 64; x++) { - int32_t sum = 0; - for (int p = 0; p <= freq_h; p++) - sum += tmp[y][p] * R64T[x][p]; // R64T^T = R64 - // Renormalize and clip to [-127, 127] - out[y][x] = av_clip((sum + 128) >> 8, -127, 127); - } - } - - // Deblock horizontal edges by simple attentuation of values - for (int y = 0; y < 64; y += 8) { - for (int x = 0; x < 64; x++) { - out[y + 0][x] = (out[y + 0][x] * deblock_coeff) >> 7; - out[y + 7][x] = (out[y + 7][x] * deblock_coeff) >> 7; - } - } -} - -static void init_slice(H274FilmGrainDatabase *database, uint8_t h, uint8_t v) -{ - if (database->residency[h] & (1 << v)) - return; - - database->residency[h] |= (1 << v); - init_slice_c(database->db[h][v], h, v, database->slice_tmp); -} - -// Computes the average of an 8x8 block, right-shifted by 6 -static uint16_t avg_8x8_c(const uint8_t *in, int in_stride) -{ - uint16_t avg[8] = {0}; // summing over an array vectorizes better - - for (int y = 0; y < 8; y++) { - for (int x = 0; x < 8; x++) - avg[x] += in[x]; - in += in_stride; - } - - return (avg[0] + avg[1] + avg[2] + avg[3] + - avg[4] + avg[5] + avg[6] + avg[7]) >> 6; -} - -// Synthesize an 8x8 block of film grain by copying the pattern from `db` -static void synth_grain_8x8_c(int8_t *out, const int out_stride, - const int16_t scale, const 
uint8_t shift, - const int8_t *db) -{ - for (int y = 0; y < 8; y++) { - for (int x = 0; x < 8; x++) - out[x] = (scale * db[x]) >> shift; - - out += out_stride; - db += 64; - } -} - -// Deblock vertical edges of an 8x8 block, mixing with the previous block -static void deblock_8x8_c(int8_t *out, const int out_stride) -{ - for (int y = 0; y < 8; y++) { - const int8_t l1 = out[-2], l0 = out[-1]; - const int8_t r0 = out[0], r1 = out[1]; - out[0] = (l0 + r0 * 2 + r1) >> 2; - out[-1] = (r0 + l0 * 2 + l1) >> 2; - out += out_stride; - } -} - -// Generates a single 8x8 block of grain, optionally also applying the -// deblocking step (note that this implies writing to the previous block). -static av_always_inline void generate(int8_t *out, int out_stride, - const uint8_t *in, int in_stride, - H274FilmGrainDatabase *database, - const AVFilmGrainH274Params *h274, - int c, int invert, int deblock, - int y_offset, int x_offset) -{ - const uint8_t shift = h274->log2_scale_factor + 6; - const uint16_t avg = avg_8x8_c(in, in_stride); - int16_t scale; - uint8_t h, v; - int8_t s = -1; - - // FIXME: This logic only generates grain with a single - // intensity interval. Strictly speaking, the H.274 specification allows - // for overlapping intensity intervals, however SMPTE RDD 5-2006 (which - // concerns the implementation of H.274 for H.264) forbids this as it - // requires a nontrivial grain synthesis process (FFT). - // - // In principle, we should detect this possibility ahead of time and warn - // the user that the output is unlikely to be correct, or alternatively - // return an AVERROR_PATCHWELCOME. - for (int i = 0; i < h274->num_intensity_intervals[c]; i++) { - if (avg >= h274->intensity_interval_lower_bound[c][i] && - avg <= h274->intensity_interval_upper_bound[c][i]) - { - s = i; - break; - } - } - - if (s < 0) { - // No matching intensity interval, synthesize blank film grain - for (int y = 0; y < 8; y++) - memset(out + y * out_stride, 0, sizeof(int8_t[8])); - return; - } - - h = av_clip(h274->comp_model_value[c][s][1], 2, 14) - 2; - v = av_clip(h274->comp_model_value[c][s][2], 2, 14) - 2; - init_slice(database, h, v); - - scale = h274->comp_model_value[c][s][0]; - if (invert) - scale = -scale; - - synth_grain_8x8_c(out, out_stride, scale, shift, - &database->db[h][v][y_offset][x_offset]); - - if (deblock) - deblock_8x8_c(out, out_stride); -} - -// Saturating 8-bit sum of a+b -static void add_8x8_clip_c(uint8_t *out, const uint8_t *a, const int8_t *b, - int n) -{ - for (int i = 0; i < n; i++) - out[i] = av_clip_uint8(a[i] + b[i]); -} - -int ff_h274_apply_film_grain(AVFrame *out_frame, const AVFrame *in_frame, - H274FilmGrainDatabase *database, - const AVFilmGrainParams *params) -{ - AVFilmGrainH274Params h274 = params->codec.h274; - av_assert1(params->type == AV_FILM_GRAIN_PARAMS_H274); - if (h274.model_id != 0) - return AVERROR_PATCHWELCOME; - - av_assert1(out_frame->format == in_frame->format); - if (in_frame->format != AV_PIX_FMT_YUV420P) - return AVERROR_PATCHWELCOME; - - for (int c = 0; c < 3; c++) { - static const uint8_t color_offset[3] = { 0, 85, 170 }; - uint32_t seed = Seed_LUT[(params->seed + color_offset[c]) % 256]; - const int width = c > 0 ? AV_CEIL_RSHIFT(out_frame->width, 1) : out_frame->width; - const int height = c > 0 ? 
AV_CEIL_RSHIFT(out_frame->height, 1) : out_frame->height; - - uint8_t * const out = out_frame->data[c]; - const int out_stride = out_frame->linesize[c]; - int8_t * const grain = out_frame->data[c]; // re-use output buffer for grain - const int grain_stride = out_stride; - const uint8_t * const in = in_frame->data[c]; - const int in_stride = in_frame->linesize[c]; - - if (!h274.component_model_present[c]) { - av_image_copy_plane(out, out_stride, in, in_stride, - width * sizeof(uint8_t), height); - continue; - } - - if (c > 0) { - // Adaptation for 4:2:0 chroma subsampling - for (int i = 0; i < h274.num_intensity_intervals[c]; i++) { - h274.comp_model_value[c][i][0] >>= 1; - h274.comp_model_value[c][i][1] *= 2; - h274.comp_model_value[c][i][2] *= 2; - } - } - - // Film grain synthesis is done in 8x8 blocks, but the PRNG state is - // only advanced in 16x16 blocks, so use a nested loop - for (int y = 0; y < height; y += 16) { - for (int x = 0; x < width; x += 16) { - uint16_t y_offset = (seed >> 16) % 52; - uint16_t x_offset = (seed & 0xFFFF) % 56; - const int invert = (seed & 0x1); - y_offset &= 0xFFFC; - x_offset &= 0xFFF8; - prng_shift(&seed); - - for (int yy = 0; yy < 16 && y+yy < height; yy += 8) { - for (int xx = 0; xx < 16 && x+xx < width; xx += 8) { - generate(grain + (y+yy) * grain_stride + (x+xx), grain_stride, - in + (y+yy) * in_stride + (x+xx), in_stride, - database, &h274, c, invert, (x+xx) > 0, - y_offset + yy, x_offset + xx); - } - } - } - } - - // Final output blend pass, done after grain synthesis is complete - // because deblocking depends on previous grain values - for (int y = 0; y < height; y++) { - add_8x8_clip_c(out + y * out_stride, in + y * in_stride, - grain + y * grain_stride, width); - } - } - - return 0; -} - -// These tables are all taken from the SMPTE RDD 5-2006 specification -static const int8_t Gaussian_LUT[2048+4] = { - -11, 12, 103, -11, 42, -35, 12, 59, 77, 98, -87, 3, 65, -78, 45, 56, -51, 21, - 13, -11, -20, -19, 33, -127, 17, -6, -105, 18, 19, 71, 48, -10, -38, 42, - -2, 75, -67, 52, -90, 33, -47, 21, -3, -56, 49, 1, -57, -42, -1, 120, -127, - -108, -49, 9, 14, 127, 122, 109, 52, 127, 2, 7, 114, 19, 30, 12, 77, 112, - 82, -61, -127, 111, -52, -29, 2, -49, -24, 58, -29, -73, 12, 112, 67, 79, - -3, -114, -87, -6, -5, 40, 58, -81, 49, -27, -31, -34, -105, 50, 16, -24, - -35, -14, -15, -127, -55, -22, -55, -127, -112, 5, -26, -72, 127, 127, -2, - 41, 87, -65, -16, 55, 19, 91, -81, -65, -64, 35, -7, -54, 99, -7, 88, 125, - -26, 91, 0, 63, 60, -14, -23, 113, -33, 116, 14, 26, 51, -16, 107, -8, 53, - 38, -34, 17, -7, 4, -91, 6, 63, 63, -15, 39, -36, 19, 55, 17, -51, 40, 33, - -37, 126, -39, -118, 17, -30, 0, 19, 98, 60, 101, -12, -73, -17, -52, 98, - 3, 3, 60, 33, -3, -2, 10, -42, -106, -38, 14, 127, 16, -127, -31, -86, -39, - -56, 46, -41, 75, 23, -19, -22, -70, 74, -54, -2, 32, -45, 17, -92, 59, - -64, -67, 56, -102, -29, -87, -34, -92, 68, 5, -74, -61, 93, -43, 14, -26, - -38, -126, -17, 16, -127, 64, 34, 31, 93, 17, -51, -59, 71, 77, 81, 127, - 127, 61, 33, -106, -93, 0, 0, 75, -69, 71, 127, -19, -111, 30, 23, 15, 2, - 39, 92, 5, 42, 2, -6, 38, 15, 114, -30, -37, 50, 44, 106, 27, 119, 7, -80, - 25, -68, -21, 92, -11, -1, 18, 41, -50, 79, -127, -43, 127, 18, 11, -21, - 32, -52, 27, -88, -90, -39, -19, -10, 24, -118, 72, -24, -44, 2, 12, 86, - -107, 39, -33, -127, 47, 51, -24, -22, 46, 0, 15, -35, -69, -2, -74, 24, - -6, 0, 29, -3, 45, 32, -32, 117, -45, 79, -24, -17, -109, -10, -70, 88, - -48, 24, -91, 120, -37, 50, -127, 58, 32, -82, -10, -17, -7, 46, 
-127, -15, - 89, 127, 17, 98, -39, -33, 37, 42, -40, -32, -21, 105, -19, 19, 19, -59, - -9, 30, 0, -127, 34, 127, -84, 75, 24, -40, -49, -127, -107, -14, 45, -75, - 1, 30, -20, 41, -68, -40, 12, 127, -3, 5, 20, -73, -59, -127, -3, -3, -53, - -6, -119, 93, 120, -80, -50, 0, 20, -46, 67, 78, -12, -22, -127, 36, -41, - 56, 119, -5, -116, -22, 68, -14, -90, 24, -82, -44, -127, 107, -25, -37, - 40, -7, -7, -82, 5, -87, 44, -34, 9, -127, 39, 70, 49, -63, 74, -49, 109, - -27, -89, -47, -39, 44, 49, -4, 60, -42, 80, 9, -127, -9, -56, -49, 125, - -66, 47, 36, 117, 15, -11, -96, 109, 94, -17, -56, 70, 8, -14, -5, 50, 37, - -45, 120, -30, -76, 40, -46, 6, 3, 69, 17, -78, 1, -79, 6, 127, 43, 26, - 127, -127, 28, -55, -26, 55, 112, 48, 107, -1, -77, -1, 53, -9, -22, -43, - 123, 108, 127, 102, 68, 46, 5, 1, 123, -13, -55, -34, -49, 89, 65, -105, - -5, 94, -53, 62, 45, 30, 46, 18, -35, 15, 41, 47, -98, -24, 94, -75, 127, - -114, 127, -68, 1, -17, 51, -95, 47, 12, 34, -45, -75, 89, -107, -9, -58, - -29, -109, -24, 127, -61, -13, 77, -45, 17, 19, 83, -24, 9, 127, -66, 54, - 4, 26, 13, 111, 43, -113, -22, 10, -24, 83, 67, -14, 75, -123, 59, 127, - -12, 99, -19, 64, -38, 54, 9, 7, 61, -56, 3, -57, 113, -104, -59, 3, -9, - -47, 74, 85, -55, -34, 12, 118, 28, 93, -72, 13, -99, -72, -20, 30, 72, - -94, 19, -54, 64, -12, -63, -25, 65, 72, -10, 127, 0, -127, 103, -20, -73, - -112, -103, -6, 28, -42, -21, -59, -29, -26, 19, -4, -51, 94, -58, -95, - -37, 35, 20, -69, 127, -19, -127, -22, -120, -53, 37, 74, -127, -1, -12, - -119, -53, -28, 38, 69, 17, 16, -114, 89, 62, 24, 37, -23, 49, -101, -32, - -9, -95, -53, 5, 93, -23, -49, -8, 51, 3, -75, -90, -10, -39, 127, -86, - -22, 20, 20, 113, 75, 52, -31, 92, -63, 7, -12, 46, 36, 101, -43, -17, -53, - -7, -38, -76, -31, -21, 62, 31, 62, 20, -127, 31, 64, 36, 102, -85, -10, - 77, 80, 58, -79, -8, 35, 8, 80, -24, -9, 3, -17, 72, 127, 83, -87, 55, 18, - -119, -123, 36, 10, 127, 56, -55, 113, 13, 26, 32, -13, -48, 22, -13, 5, - 58, 27, 24, 26, -11, -36, 37, -92, 78, 81, 9, 51, 14, 67, -13, 0, 32, 45, - -76, 32, -39, -22, -49, -127, -27, 31, -9, 36, 14, 71, 13, 57, 12, -53, - -86, 53, -44, -35, 2, 127, 12, -66, -44, 46, -115, 3, 10, 56, -35, 119, - -19, -61, 52, -59, -127, -49, -23, 4, -5, 17, -82, -6, 127, 25, 79, 67, 64, - -25, 14, -64, -37, -127, -28, 21, -63, 66, -53, -41, 109, -62, 15, -22, 13, - 29, -63, 20, 27, 95, -44, -59, -116, -10, 79, -49, 22, -43, -16, 46, -47, - -120, -36, -29, -52, -44, 29, 127, -13, 49, -9, -127, 75, -28, -23, 88, 59, - 11, -95, 81, -59, 58, 60, -26, 40, -92, -3, -22, -58, -45, -59, -22, -53, - 71, -29, 66, -32, -23, 14, -17, -66, -24, -28, -62, 47, 38, 17, 16, -37, - -24, -11, 8, -27, -19, 59, 45, -49, -47, -4, -22, -81, 30, -67, -127, 74, - 102, 5, -18, 98, 34, -66, 42, -52, 7, -59, 24, -58, -19, -24, -118, -73, - 91, 15, -16, 79, -32, -79, -127, -36, 41, 77, -83, 2, 56, 22, -75, 127, - -16, -21, 12, 31, 56, -113, -127, 90, 55, 61, 12, 55, -14, -113, -14, 32, - 49, -67, -17, 91, -10, 1, 21, 69, -70, 99, -19, -112, 66, -90, -10, -9, - -71, 127, 50, -81, -49, 24, 61, -61, -111, 7, -41, 127, 88, -66, 108, -127, - -6, 36, -14, 41, -50, 14, 14, 73, -101, -28, 77, 127, -8, -100, 88, 38, - 121, 88, -125, -60, 13, -94, -115, 20, -67, -87, -94, -119, 44, -28, -30, - 18, 5, -53, -61, 20, -43, 11, -77, -60, 13, 29, 3, 6, -72, 38, -60, -11, - 108, -53, 41, 66, -12, -127, -127, -49, 24, 29, 46, 36, 91, 34, -33, 116, - -51, -34, -52, 91, 7, -83, 73, -26, -103, 24, -10, 76, 84, 5, 68, -80, -13, - -17, -32, -48, 20, 50, 26, 10, 63, -104, -14, 37, 127, 114, 
97, 35, 1, -33, - -55, 127, -124, -33, 61, -7, 119, -32, -127, -53, -42, 63, 3, -5, -26, 70, - -58, -33, -44, -43, 34, -56, -127, 127, 25, -35, -11, 16, -81, 29, -58, 40, - -127, -127, 20, -47, -11, -36, -63, -52, -32, -82, 78, -76, -73, 8, 27, - -72, -9, -74, -85, -86, -57, 25, 78, -10, -97, 35, -65, 8, -59, 14, 1, -42, - 32, -88, -44, 17, -3, -9, 59, 40, 12, -108, -40, 24, 34, 18, -28, 2, 51, - -110, -4, 100, 1, 65, 22, 0, 127, 61, 45, 25, -31, 6, 9, -7, -48, 99, 16, - 44, -2, -40, 32, -39, -52, 10, -110, -19, 56, -127, 69, 26, 51, 92, 40, 61, - -52, 45, -38, 13, 85, 122, 27, 66, 45, -111, -83, -3, 31, 37, 19, -36, 58, - 71, 39, -78, -47, 58, -78, 8, -62, -36, -14, 61, 42, -127, 71, -4, 24, -54, - 52, -127, 67, -4, -42, 30, -63, 59, -3, -1, -18, -46, -92, -81, -96, -14, - -53, -10, -11, -77, 13, 1, 8, -67, -127, 127, -28, 26, -14, 18, -13, -26, - 2, 10, -46, -32, -15, 27, -31, -59, 59, 77, -121, 28, 40, -54, -62, -31, - -21, -37, -32, -6, -127, -25, -60, 70, -127, 112, -127, 127, 88, -7, 116, - 110, 53, 87, -127, 3, 16, 23, 74, -106, -51, 3, 74, -82, -112, -74, 65, 81, - 25, 53, 127, -45, -50, -103, -41, -65, -29, 79, -67, 64, -33, -30, -8, 127, - 0, -13, -51, 67, -14, 5, -92, 29, -35, -8, -90, -57, -3, 36, 43, 44, -31, - -69, -7, 36, 39, -51, 43, -81, 58, 6, 127, 12, 57, 66, 46, 59, -43, -42, - 41, -15, -120, 24, 3, -11, 19, -13, 51, 28, 3, 55, -48, -12, -1, 2, 97, - -19, 29, 42, 13, 43, 78, -44, 56, -108, -43, -19, 127, 15, -11, -18, -81, - 83, -37, 77, -109, 15, 65, -50, 43, 12, 13, 27, 28, 61, 57, 30, 26, 106, - -18, 56, 13, 97, 4, -8, -62, -103, 94, 108, -44, 52, 27, -47, -9, 105, -53, - 46, 89, 103, -33, 38, -34, 55, 51, 70, -94, -35, -87, -107, -19, -31, 9, - -19, 79, -14, 77, 5, -19, -107, 85, 21, -45, -39, -42, 9, -29, 74, 47, -75, - 60, -127, 120, -112, -57, -32, 41, 7, 79, 76, 66, 57, 41, -25, 31, 37, -47, - -36, 43, -73, -37, 63, 127, -69, -52, 90, -33, -61, 60, -55, 44, 15, 4, - -67, 13, -92, 64, 29, -39, -3, 83, -2, -38, -85, -86, 58, 35, -69, -61, 29, - -37, -95, -78, 4, 30, -4, -32, -80, -22, -9, -77, 46, 7, -93, -71, 65, 9, - -50, 127, -70, 26, -12, -39, -114, 63, -127, -100, 4, -32, 111, 22, -60, - 65, -101, 26, -42, 21, -59, -27, -74, 2, -94, 6, 126, 5, 76, -88, -9, -43, - -101, 127, 1, 125, 92, -63, 52, 56, 4, 81, -127, 127, 80, 127, -29, 30, - 116, -74, -17, -57, 105, 48, 45, 25, -72, 48, -38, -108, 31, -34, 4, -11, - 41, -127, 52, -104, -43, -37, 52, 2, 47, 87, -9, 77, 27, -41, -25, 90, 86, - -56, 75, 10, 33, 78, 58, 127, 127, -7, -73, 49, -33, -106, -35, 38, 57, 53, - -17, -4, 83, 52, -108, 54, -125, 28, 23, 56, -43, -88, -17, -6, 47, 23, -9, - 0, -13, 111, 75, 27, -52, -38, -34, 39, 30, 66, 39, 38, -64, 38, 3, 21, - -32, -51, -28, 54, -38, -87, 20, 52, 115, 18, -81, -70, 0, -14, -46, -46, - -3, 125, 16, -14, 23, -82, -84, -69, -20, -65, -127, 9, 81, -49, 61, 7, - -36, -45, -42, 57, -26, 47, 20, -85, 46, -13, 41, -37, -75, -60, 86, -78, - -127, 12, 50, 2, -3, 13, 47, 5, 19, -78, -55, -27, 65, -71, 12, -108, 20, - -16, 11, -31, 63, -55, 37, 75, -17, 127, -73, -33, -28, -120, 105, 68, 106, - -103, -106, 71, 61, 2, 23, -3, 33, -5, -15, -67, -15, -23, -54, 15, -63, - 76, 58, -110, 1, 83, -27, 22, 75, -39, -17, -11, 64, -17, -127, -54, -66, - 31, 96, 116, 3, -114, -7, -108, -63, 97, 9, 50, 8, 75, -28, 72, 112, -36, - -112, 95, -50, 23, -13, -19, 55, 21, 23, 92, 91, 22, -49, 16, -75, 23, 9, - -49, -97, -37, 49, -36, 36, -127, -86, 43, 127, -24, -24, 84, 83, -35, -34, - -12, 109, 102, -38, 51, -68, 34, 19, -22, 49, -32, 127, 40, 24, -93, -4, - -3, 105, 3, -58, -18, 8, 
127, -18, 125, 68, 69, -62, 30, -36, 54, -57, -24, - 17, 43, -36, -27, -57, -67, -21, -10, -49, 68, 12, 65, 4, 48, 55, 127, -75, - 44, 89, -66, -13, -78, -82, -91, 22, 30, 33, -40, -87, -34, 96, -91, 39, - 10, -64, -3, -12, 127, -50, -37, -56, 23, -35, -36, -54, 90, -91, 2, 50, - 77, -6, -127, 16, 46, -5, -73, 0, -56, -18, -72, 28, 93, 60, 49, 20, 18, - 111, -111, 32, -83, 47, 47, -10, 35, -88, 43, 57, -98, 127, -17, 0, 1, -39, - -127, -2, 0, 63, 93, 0, 36, -66, -61, -19, 39, -127, 58, 50, -17, 127, 88, - -43, -108, -51, -16, 7, -36, 68, 46, -14, 107, 40, 57, 7, 19, 8, 3, 88, - -90, -92, -18, -21, -24, 13, 7, -4, -78, -91, -4, 8, -35, -5, 19, 2, -111, - 4, -66, -81, 122, -20, -34, -37, -84, 127, 68, 46, 17, 47, - - // Repeat the beginning of the array to allow wrapping reads - -11, 12, 103, -11, -}; - -static const uint32_t Seed_LUT[256] = { - 747538460, 1088979410, 1744950180, 1767011913, 1403382928, - 521866116, 1060417601, 2110622736, 1557184770, 105289385, 585624216, - 1827676546, 1191843873, 1018104344, 1123590530, 663361569, 2023850500, - 76561770, 1226763489, 80325252, 1992581442, 502705249, 740409860, - 516219202, 557974537, 1883843076, 720112066, 1640137737, 1820967556, - 40667586, 155354121, 1820967557, 1115949072, 1631803309, 98284748, - 287433856, 2119719977, 988742797, 1827432592, 579378475, 1017745956, - 1309377032, 1316535465, 2074315269, 1923385360, 209722667, 1546228260, - 168102420, 135274561, 355958469, 248291472, 2127839491, 146920100, - 585982612, 1611702337, 696506029, 1386498192, 1258072451, 1212240548, - 1043171860, 1217404993, 1090770605, 1386498193, 169093201, 541098240, - 1468005469, 456510673, 1578687785, 1838217424, 2010752065, 2089828354, - 1362717428, 970073673, 854129835, 714793201, 1266069081, 1047060864, - 1991471829, 1098097741, 913883585, 1669598224, 1337918685, 1219264706, - 1799741108, 1834116681, 683417731, 1120274457, 1073098457, 1648396544, - 176642749, 31171789, 718317889, 1266977808, 1400892508, 549749008, - 1808010512, 67112961, 1005669825, 903663673, 1771104465, 1277749632, - 1229754427, 950632997, 1979371465, 2074373264, 305357524, 1049387408, - 1171033360, 1686114305, 2147468765, 1941195985, 117709841, 809550080, - 991480851, 1816248997, 1561503561, 329575568, 780651196, 1659144592, - 1910793616, 604016641, 1665084765, 1530186961, 1870928913, 809550081, - 2079346113, 71307521, 876663040, 1073807360, 832356664, 1573927377, - 204073344, 2026918147, 1702476788, 2043881033, 57949587, 2001393952, - 1197426649, 1186508931, 332056865, 950043140, 890043474, 349099312, - 148914948, 236204097, 2022643605, 1441981517, 498130129, 1443421481, - 924216797, 1817491777, 1913146664, 1411989632, 929068432, 495735097, - 1684636033, 1284520017, 432816184, 1344884865, 210843729, 676364544, - 234449232, 12112337, 1350619139, 1753272996, 2037118872, 1408560528, - 533334916, 1043640385, 357326099, 201376421, 110375493, 541106497, - 416159637, 242512193, 777294080, 1614872576, 1535546636, 870600145, - 910810409, 1821440209, 1605432464, 1145147393, 951695441, 1758494976, - 1506656568, 1557150160, 608221521, 1073840384, 217672017, 684818688, - 1750138880, 16777217, 677990609, 953274371, 1770050213, 1359128393, - 1797602707, 1984616737, 1865815816, 2120835200, 2051677060, 1772234061, - 1579794881, 1652821009, 1742099468, 1887260865, 46468113, 1011925248, - 1134107920, 881643832, 1354774993, 472508800, 1892499769, 1752793472, - 1962502272, 687898625, 883538000, 1354355153, 1761673473, 944820481, - 2020102353, 22020353, 961597696, 1342242816, 964808962, 1355809701, - 
17016649, 1386540177, 647682692, 1849012289, 751668241, 1557184768, - 127374604, 1927564752, 1045744913, 1614921984, 43588881, 1016185088, - 1544617984, 1090519041, 136122424, 215038417, 1563027841, 2026918145, - 1688778833, 701530369, 1372639488, 1342242817, 2036945104, 953274369, - 1750192384, 16842753, 964808960, 1359020032, 1358954497 -}; - -// Note: This is pre-transposed, i.e. stored column-major order -static const int8_t R64T[64][64] = { - { - 32, 45, 45, 45, 45, 45, 45, 45, 44, 44, 44, 44, 43, 43, 43, 42, - 42, 41, 41, 40, 40, 39, 39, 38, 38, 37, 36, 36, 35, 34, 34, 33, - 32, 31, 30, 30, 29, 28, 27, 26, 25, 24, 23, 22, 21, 20, 19, 18, - 17, 16, 15, 14, 13, 12, 11, 10, 9, 8, 7, 6, 4, 3, 2, 1, - }, { - 32, 45, 45, 44, 43, 42, 41, 39, 38, 36, 34, 31, 29, 26, 23, 20, - 17, 14, 11, 8, 4, 1, -2, -6, -9, -12, -15, -18, -21, -24, -27, -30, - -32, -34, -36, -38, -40, -41, -43, -44, -44, -45, -45, -45, -45, -45, -44, -43, - -42, -40, -39, -37, -35, -33, -30, -28, -25, -22, -19, -16, -13, -10, -7, -3, - }, { - 32, 45, 44, 42, 40, 37, 34, 30, 25, 20, 15, 10, 4, -1, -7, -12, - -17, -22, -27, -31, -35, -38, -41, -43, -44, -45, -45, -45, -43, -41, -39, -36, - -32, -28, -23, -18, -13, -8, -2, 3, 9, 14, 19, 24, 29, 33, 36, 39, - 42, 44, 45, 45, 45, 44, 43, 40, 38, 34, 30, 26, 21, 16, 11, 6, - }, { - 32, 45, 43, 39, 35, 30, 23, 16, 9, 1, -7, -14, -21, -28, -34, -38, - -42, -44, -45, -45, -43, -40, -36, -31, -25, -18, -11, -3, 4, 12, 19, 26, - 32, 37, 41, 44, 45, 45, 44, 41, 38, 33, 27, 20, 13, 6, -2, -10, - -17, -24, -30, -36, -40, -43, -45, -45, -44, -42, -39, -34, -29, -22, -15, -8, - }, { - 32, 44, 41, 36, 29, 20, 11, 1, -9, -18, -27, -34, -40, -44, -45, -45, - -42, -37, -30, -22, -13, -3, 7, 16, 25, 33, 39, 43, 45, 45, 43, 38, - 32, 24, 15, 6, -4, -14, -23, -31, -38, -42, -45, -45, -43, -39, -34, -26, - -17, -8, 2, 12, 21, 30, 36, 41, 44, 45, 44, 40, 35, 28, 19, 10, - }, { - 32, 44, 39, 31, 21, 10, -2, -14, -25, -34, -41, -45, -45, -42, -36, -28, - -17, -6, 7, 18, 29, 37, 43, 45, 44, 40, 34, 24, 13, 1, -11, -22, - -32, -39, -44, -45, -43, -38, -30, -20, -9, 3, 15, 26, 35, 41, 45, 45, - 42, 36, 27, 16, 4, -8, -19, -30, -38, -43, -45, -44, -40, -33, -23, -12, - }, { - 32, 43, 36, 26, 13, -1, -15, -28, -38, -44, -45, -42, -35, -24, -11, 3, - 17, 30, 39, 44, 45, 41, 34, 22, 9, -6, -19, -31, -40, -45, -45, -40, - -32, -20, -7, 8, 21, 33, 41, 45, 44, 39, 30, 18, 4, -10, -23, -34, - -42, -45, -44, -38, -29, -16, -2, 12, 25, 36, 43, 45, 43, 37, 27, 14, - }, { - 32, 42, 34, 20, 4, -12, -27, -38, -44, -45, -39, -28, -13, 3, 19, 33, - 42, 45, 43, 34, 21, 6, -11, -26, -38, -44, -45, -39, -29, -14, 2, 18, - 32, 41, 45, 43, 35, 22, 7, -10, -25, -37, -44, -45, -40, -30, -15, 1, - 17, 31, 41, 45, 43, 36, 23, 8, -9, -24, -36, -44, -45, -40, -30, -16, - }, { - 32, 41, 30, 14, -4, -22, -36, -44, -44, -37, -23, -6, 13, 30, 41, 45, - 42, 31, 15, -3, -21, -36, -44, -45, -38, -24, -7, 12, 29, 40, 45, 42, - 32, 16, -2, -20, -35, -44, -45, -38, -25, -8, 11, 28, 40, 45, 43, 33, - 17, -1, -19, -34, -43, -45, -39, -26, -9, 10, 27, 39, 45, 43, 34, 18, - }, { - 32, 40, 27, 8, -13, -31, -43, -45, -38, -22, -2, 18, 35, 44, 44, 34, - 17, -3, -23, -38, -45, -42, -30, -12, 9, 28, 41, 45, 40, 26, 7, -14, - -32, -43, -45, -37, -21, -1, 19, 36, 44, 44, 34, 16, -4, -24, -39, -45, - -42, -30, -11, 10, 29, 41, 45, 39, 25, 6, -15, -33, -43, -45, -36, -20, - }, { - 32, 39, 23, 1, -21, -38, -45, -40, -25, -3, 19, 37, 45, 41, 27, 6, - -17, -36, -45, -42, -29, -8, 15, 34, 44, 43, 30, 10, -13, -33, -44, -44, - -32, -12, 11, 31, 43, 44, 34, 
14, -9, -30, -43, -45, -35, -16, 7, 28, - 42, 45, 36, 18, -4, -26, -41, -45, -38, -20, 2, 24, 40, 45, 39, 22, - }, { - 32, 38, 19, -6, -29, -43, -44, -31, -9, 16, 36, 45, 40, 22, -2, -26, - -42, -45, -34, -12, 13, 34, 45, 41, 25, 1, -23, -40, -45, -36, -15, 10, - 32, 44, 43, 28, 4, -20, -39, -45, -38, -18, 7, 30, 43, 44, 30, 8, - -17, -37, -45, -39, -21, 3, 27, 42, 44, 33, 11, -14, -35, -45, -41, -24, - }, { - 32, 37, 15, -12, -35, -45, -39, -18, 9, 33, 45, 40, 21, -6, -30, -44, - -42, -24, 2, 28, 43, 43, 27, 1, -25, -42, -44, -30, -4, 22, 41, 45, - 32, 8, -19, -39, -45, -34, -11, 16, 38, 45, 36, 14, -13, -36, -45, -38, - -17, 10, 34, 45, 40, 20, -7, -31, -44, -41, -23, 3, 29, 44, 43, 26, - }, { - 32, 36, 11, -18, -40, -45, -30, -3, 25, 43, 43, 24, -4, -31, -45, -39, - -17, 12, 36, 45, 35, 10, -19, -40, -44, -30, -2, 26, 43, 42, 23, -6, - -32, -45, -39, -16, 13, 37, 45, 34, 9, -20, -41, -44, -29, -1, 27, 44, - 42, 22, -7, -33, -45, -38, -15, 14, 38, 45, 34, 8, -21, -41, -44, -28, - }, { - 32, 34, 7, -24, -43, -41, -19, 12, 38, 45, 30, 1, -29, -45, -39, -14, - 17, 40, 44, 26, -4, -33, -45, -36, -9, 22, 43, 42, 21, -10, -36, -45, - -32, -3, 27, 44, 40, 16, -15, -39, -44, -28, 2, 31, 45, 37, 11, -20, - -42, -43, -23, 8, 35, 45, 34, 6, -25, -44, -41, -18, 13, 38, 45, 30, - }, { - 32, 33, 2, -30, -45, -36, -7, 26, 44, 38, 11, -22, -43, -40, -15, 18, - 42, 42, 19, -14, -40, -44, -23, 10, 38, 45, 27, -6, -35, -45, -30, 1, - 32, 45, 34, 3, -29, -45, -36, -8, 25, 44, 39, 12, -21, -43, -41, -16, - 17, 41, 43, 20, -13, -39, -44, -24, 9, 37, 45, 28, -4, -34, -45, -31, - }, { - 32, 31, -2, -34, -45, -28, 7, 37, 44, 24, -11, -39, -43, -20, 15, 41, - 42, 16, -19, -43, -40, -12, 23, 44, 38, 8, -27, -45, -35, -3, 30, 45, - 32, -1, -34, -45, -29, 6, 36, 45, 25, -10, -39, -44, -21, 14, 41, 42, - 17, -18, -43, -40, -13, 22, 44, 38, 9, -26, -45, -36, -4, 30, 45, 33, - }, { - 32, 30, -7, -38, -43, -18, 19, 44, 38, 6, -30, -45, -29, 8, 39, 43, - 17, -20, -44, -37, -4, 31, 45, 28, -9, -39, -43, -16, 21, 44, 36, 3, - -32, -45, -27, 10, 40, 42, 15, -22, -44, -36, -2, 33, 45, 26, -11, -40, - -42, -14, 23, 45, 35, 1, -34, -45, -25, 12, 41, 41, 13, -24, -45, -34, - }, { - 32, 28, -11, -41, -40, -8, 30, 45, 25, -14, -43, -38, -4, 33, 45, 22, - -17, -44, -36, -1, 35, 44, 19, -20, -44, -34, 2, 37, 43, 16, -23, -45, - -32, 6, 39, 42, 13, -26, -45, -30, 9, 40, 41, 10, -29, -45, -27, 12, - 42, 39, 7, -31, -45, -24, 15, 43, 38, 3, -34, -45, -21, 18, 44, 36, - }, { - 32, 26, -15, -44, -35, 3, 39, 41, 9, -31, -45, -20, 21, 45, 30, -10, - -42, -38, -2, 36, 43, 14, -27, -45, -25, 16, 44, 34, -4, -39, -41, -8, - 32, 45, 19, -22, -45, -30, 11, 42, 38, 1, -36, -43, -13, 28, 45, 24, - -17, -44, -34, 6, 40, 40, 7, -33, -44, -18, 23, 45, 29, -12, -43, -37, - }, { - 32, 24, -19, -45, -29, 14, 44, 33, -9, -42, -36, 3, 40, 39, 2, -37, - -42, -8, 34, 44, 13, -30, -45, -18, 25, 45, 23, -20, -45, -28, 15, 44, - 32, -10, -43, -36, 4, 40, 39, 1, -38, -41, -7, 34, 43, 12, -30, -45, - -17, 26, 45, 22, -21, -45, -27, 16, 44, 31, -11, -43, -35, 6, 41, 38, - }, { - 32, 22, -23, -45, -21, 24, 45, 20, -25, -45, -19, 26, 45, 18, -27, -45, - -17, 28, 45, 16, -29, -45, -15, 30, 44, 14, -30, -44, -13, 31, 44, 12, - -32, -44, -11, 33, 43, 10, -34, -43, -9, 34, 43, 8, -35, -42, -7, 36, - 42, 6, -36, -41, -4, 37, 41, 3, -38, -40, -2, 38, 40, 1, -39, -39, - }, { - 32, 20, -27, -45, -13, 33, 43, 6, -38, -39, 2, 41, 35, -10, -44, -30, - 17, 45, 23, -24, -45, -16, 30, 44, 9, -36, -41, -1, 40, 37, -7, -43, - -32, 14, 45, 26, -21, -45, -19, 28, 44, 12, -34, -42, 
-4, 38, 39, -3, - -42, -34, 11, 44, 29, -18, -45, -22, 25, 45, 15, -31, -43, -8, 36, 40, - }, { - 32, 18, -30, -43, -4, 39, 36, -10, -44, -26, 23, 45, 13, -34, -41, 1, - 42, 33, -15, -45, -21, 28, 44, 8, -38, -38, 7, 44, 29, -20, -45, -16, - 32, 42, 2, -40, -35, 12, 45, 24, -25, -45, -11, 36, 40, -3, -43, -31, - 17, 45, 19, -30, -43, -6, 39, 37, -9, -44, -27, 22, 45, 14, -34, -41, - }, { - 32, 16, -34, -40, 4, 44, 27, -24, -44, -8, 39, 36, -13, -45, -19, 31, - 42, -1, -43, -30, 21, 45, 11, -37, -38, 10, 45, 22, -29, -43, -2, 41, - 32, -18, -45, -14, 35, 39, -7, -44, -25, 26, 44, 6, -40, -34, 15, 45, - 17, -33, -41, 3, 43, 28, -23, -45, -9, 38, 36, -12, -45, -20, 30, 42, - }, { - 32, 14, -36, -37, 13, 45, 15, -36, -38, 12, 45, 16, -35, -38, 11, 45, - 17, -34, -39, 10, 45, 18, -34, -39, 9, 45, 19, -33, -40, 8, 45, 20, - -32, -40, 7, 45, 21, -31, -41, 6, 44, 22, -30, -41, 4, 44, 23, -30, - -42, 3, 44, 24, -29, -42, 2, 44, 25, -28, -43, 1, 43, 26, -27, -43, - }, { - 32, 12, -39, -33, 21, 44, 2, -43, -25, 30, 41, -8, -45, -16, 36, 36, - -17, -45, -7, 41, 29, -26, -43, 3, 44, 20, -34, -38, 13, 45, 11, -39, - -32, 22, 44, 1, -43, -24, 30, 40, -9, -45, -15, 37, 35, -18, -45, -6, - 42, 28, -27, -42, 4, 45, 19, -34, -38, 14, 45, 10, -40, -31, 23, 44, - }, { - 32, 10, -41, -28, 29, 40, -11, -45, -9, 41, 27, -30, -40, 12, 45, 8, - -42, -26, 30, 39, -13, -45, -7, 42, 25, -31, -39, 14, 45, 6, -43, -24, - 32, 38, -15, -45, -4, 43, 23, -33, -38, 16, 45, 3, -43, -22, 34, 37, - -17, -45, -2, 44, 21, -34, -36, 18, 44, 1, -44, -20, 35, 36, -19, -44, - }, { - 32, 8, -43, -22, 35, 34, -23, -42, 9, 45, 7, -43, -21, 36, 34, -24, - -42, 10, 45, 6, -43, -20, 36, 33, -25, -41, 11, 45, 4, -44, -19, 37, - 32, -26, -41, 12, 45, 3, -44, -18, 38, 31, -27, -40, 13, 45, 2, -44, - -17, 38, 30, -28, -40, 14, 45, 1, -44, -16, 39, 30, -29, -39, 15, 45, - }, { - 32, 6, -44, -16, 40, 26, -34, -34, 25, 40, -15, -44, 4, 45, 7, -44, - -17, 39, 27, -33, -35, 24, 41, -14, -44, 3, 45, 8, -43, -18, 39, 28, - -32, -36, 23, 41, -13, -45, 2, 45, 9, -43, -19, 38, 29, -31, -36, 22, - 42, -12, -45, 1, 45, 10, -43, -20, 38, 30, -30, -37, 21, 42, -11, -45, - }, { - 32, 3, -45, -10, 43, 16, -41, -22, 38, 28, -34, -33, 29, 37, -23, -40, - 17, 43, -11, -45, 4, 45, 2, -45, -9, 44, 15, -41, -21, 38, 27, -34, - -32, 30, 36, -24, -40, 18, 43, -12, -44, 6, 45, 1, -45, -8, 44, 14, - -42, -20, 39, 26, -35, -31, 30, 36, -25, -39, 19, 42, -13, -44, 7, 45, - }, { - 32, 1, -45, -3, 45, 6, -45, -8, 44, 10, -44, -12, 43, 14, -43, -16, - 42, 18, -41, -20, 40, 22, -39, -24, 38, 26, -36, -28, 35, 30, -34, -31, - 32, 33, -30, -34, 29, 36, -27, -37, 25, 38, -23, -39, 21, 40, -19, -41, - 17, 42, -15, -43, 13, 44, -11, -44, 9, 45, -7, -45, 4, 45, -2, -45, - }, { - 32, -1, -45, 3, 45, -6, -45, 8, 44, -10, -44, 12, 43, -14, -43, 16, - 42, -18, -41, 20, 40, -22, -39, 24, 38, -26, -36, 28, 35, -30, -34, 31, - 32, -33, -30, 34, 29, -36, -27, 37, 25, -38, -23, 39, 21, -40, -19, 41, - 17, -42, -15, 43, 13, -44, -11, 44, 9, -45, -7, 45, 4, -45, -2, 45, - }, { - 32, -3, -45, 10, 43, -16, -41, 22, 38, -28, -34, 33, 29, -37, -23, 40, - 17, -43, -11, 45, 4, -45, 2, 45, -9, -44, 15, 41, -21, -38, 27, 34, - -32, -30, 36, 24, -40, -18, 43, 12, -44, -6, 45, -1, -45, 8, 44, -14, - -42, 20, 39, -26, -35, 31, 30, -36, -25, 39, 19, -42, -13, 44, 7, -45, - }, { - 32, -6, -44, 16, 40, -26, -34, 34, 25, -40, -15, 44, 4, -45, 7, 44, - -17, -39, 27, 33, -35, -24, 41, 14, -44, -3, 45, -8, -43, 18, 39, -28, - -32, 36, 23, -41, -13, 45, 2, -45, 9, 43, -19, -38, 29, 31, -36, -22, - 42, 12, -45, 
-1, 45, -10, -43, 20, 38, -30, -30, 37, 21, -42, -11, 45, - }, { - 32, -8, -43, 22, 35, -34, -23, 42, 9, -45, 7, 43, -21, -36, 34, 24, - -42, -10, 45, -6, -43, 20, 36, -33, -25, 41, 11, -45, 4, 44, -19, -37, - 32, 26, -41, -12, 45, -3, -44, 18, 38, -31, -27, 40, 13, -45, 2, 44, - -17, -38, 30, 28, -40, -14, 45, -1, -44, 16, 39, -30, -29, 39, 15, -45, - }, { - 32, -10, -41, 28, 29, -40, -11, 45, -9, -41, 27, 30, -40, -12, 45, -8, - -42, 26, 30, -39, -13, 45, -7, -42, 25, 31, -39, -14, 45, -6, -43, 24, - 32, -38, -15, 45, -4, -43, 23, 33, -38, -16, 45, -3, -43, 22, 34, -37, - -17, 45, -2, -44, 21, 34, -36, -18, 44, -1, -44, 20, 35, -36, -19, 44, - }, { - 32, -12, -39, 33, 21, -44, 2, 43, -25, -30, 41, 8, -45, 16, 36, -36, - -17, 45, -7, -41, 29, 26, -43, -3, 44, -20, -34, 38, 13, -45, 11, 39, - -32, -22, 44, -1, -43, 24, 30, -40, -9, 45, -15, -37, 35, 18, -45, 6, - 42, -28, -27, 42, 4, -45, 19, 34, -38, -14, 45, -10, -40, 31, 23, -44, - }, { - 32, -14, -36, 37, 13, -45, 15, 36, -38, -12, 45, -16, -35, 38, 11, -45, - 17, 34, -39, -10, 45, -18, -34, 39, 9, -45, 19, 33, -40, -8, 45, -20, - -32, 40, 7, -45, 21, 31, -41, -6, 44, -22, -30, 41, 4, -44, 23, 30, - -42, -3, 44, -24, -29, 42, 2, -44, 25, 28, -43, -1, 43, -26, -27, 43, - }, { - 32, -16, -34, 40, 4, -44, 27, 24, -44, 8, 39, -36, -13, 45, -19, -31, - 42, 1, -43, 30, 21, -45, 11, 37, -38, -10, 45, -22, -29, 43, -2, -41, - 32, 18, -45, 14, 35, -39, -7, 44, -25, -26, 44, -6, -40, 34, 15, -45, - 17, 33, -41, -3, 43, -28, -23, 45, -9, -38, 36, 12, -45, 20, 30, -42, - }, { - 32, -18, -30, 43, -4, -39, 36, 10, -44, 26, 23, -45, 13, 34, -41, -1, - 42, -33, -15, 45, -21, -28, 44, -8, -38, 38, 7, -44, 29, 20, -45, 16, - 32, -42, 2, 40, -35, -12, 45, -24, -25, 45, -11, -36, 40, 3, -43, 31, - 17, -45, 19, 30, -43, 6, 39, -37, -9, 44, -27, -22, 45, -14, -34, 41, - }, { - 32, -20, -27, 45, -13, -33, 43, -6, -38, 39, 2, -41, 35, 10, -44, 30, - 17, -45, 23, 24, -45, 16, 30, -44, 9, 36, -41, 1, 40, -37, -7, 43, - -32, -14, 45, -26, -21, 45, -19, -28, 44, -12, -34, 42, -4, -38, 39, 3, - -42, 34, 11, -44, 29, 18, -45, 22, 25, -45, 15, 31, -43, 8, 36, -40, - }, { - 32, -22, -23, 45, -21, -24, 45, -20, -25, 45, -19, -26, 45, -18, -27, 45, - -17, -28, 45, -16, -29, 45, -15, -30, 44, -14, -30, 44, -13, -31, 44, -12, - -32, 44, -11, -33, 43, -10, -34, 43, -9, -34, 43, -8, -35, 42, -7, -36, - 42, -6, -36, 41, -4, -37, 41, -3, -38, 40, -2, -38, 40, -1, -39, 39, - }, { - 32, -24, -19, 45, -29, -14, 44, -33, -9, 42, -36, -3, 40, -39, 2, 37, - -42, 8, 34, -44, 13, 30, -45, 18, 25, -45, 23, 20, -45, 28, 15, -44, - 32, 10, -43, 36, 4, -40, 39, -1, -38, 41, -7, -34, 43, -12, -30, 45, - -17, -26, 45, -22, -21, 45, -27, -16, 44, -31, -11, 43, -35, -6, 41, -38, - }, { - 32, -26, -15, 44, -35, -3, 39, -41, 9, 31, -45, 20, 21, -45, 30, 10, - -42, 38, -2, -36, 43, -14, -27, 45, -25, -16, 44, -34, -4, 39, -41, 8, - 32, -45, 19, 22, -45, 30, 11, -42, 38, -1, -36, 43, -13, -28, 45, -24, - -17, 44, -34, -6, 40, -40, 7, 33, -44, 18, 23, -45, 29, 12, -43, 37, - }, { - 32, -28, -11, 41, -40, 8, 30, -45, 25, 14, -43, 38, -4, -33, 45, -22, - -17, 44, -36, 1, 35, -44, 19, 20, -44, 34, 2, -37, 43, -16, -23, 45, - -32, -6, 39, -42, 13, 26, -45, 30, 9, -40, 41, -10, -29, 45, -27, -12, - 42, -39, 7, 31, -45, 24, 15, -43, 38, -3, -34, 45, -21, -18, 44, -36, - }, { - 32, -30, -7, 38, -43, 18, 19, -44, 38, -6, -30, 45, -29, -8, 39, -43, - 17, 20, -44, 37, -4, -31, 45, -28, -9, 39, -43, 16, 21, -44, 36, -3, - -32, 45, -27, -10, 40, -42, 15, 22, -44, 36, -2, -33, 45, -26, -11, 40, - -42, 14, 
23, -45, 35, -1, -34, 45, -25, -12, 41, -41, 13, 24, -45, 34, - }, { - 32, -31, -2, 34, -45, 28, 7, -37, 44, -24, -11, 39, -43, 20, 15, -41, - 42, -16, -19, 43, -40, 12, 23, -44, 38, -8, -27, 45, -35, 3, 30, -45, - 32, 1, -34, 45, -29, -6, 36, -45, 25, 10, -39, 44, -21, -14, 41, -42, - 17, 18, -43, 40, -13, -22, 44, -38, 9, 26, -45, 36, -4, -30, 45, -33, - }, { - 32, -33, 2, 30, -45, 36, -7, -26, 44, -38, 11, 22, -43, 40, -15, -18, - 42, -42, 19, 14, -40, 44, -23, -10, 38, -45, 27, 6, -35, 45, -30, -1, - 32, -45, 34, -3, -29, 45, -36, 8, 25, -44, 39, -12, -21, 43, -41, 16, - 17, -41, 43, -20, -13, 39, -44, 24, 9, -37, 45, -28, -4, 34, -45, 31, - }, { - 32, -34, 7, 24, -43, 41, -19, -12, 38, -45, 30, -1, -29, 45, -39, 14, - 17, -40, 44, -26, -4, 33, -45, 36, -9, -22, 43, -42, 21, 10, -36, 45, - -32, 3, 27, -44, 40, -16, -15, 39, -44, 28, 2, -31, 45, -37, 11, 20, - -42, 43, -23, -8, 35, -45, 34, -6, -25, 44, -41, 18, 13, -38, 45, -30, - }, { - 32, -36, 11, 18, -40, 45, -30, 3, 25, -43, 43, -24, -4, 31, -45, 39, - -17, -12, 36, -45, 35, -10, -19, 40, -44, 30, -2, -26, 43, -42, 23, 6, - -32, 45, -39, 16, 13, -37, 45, -34, 9, 20, -41, 44, -29, 1, 27, -44, - 42, -22, -7, 33, -45, 38, -15, -14, 38, -45, 34, -8, -21, 41, -44, 28, - }, { - 32, -37, 15, 12, -35, 45, -39, 18, 9, -33, 45, -40, 21, 6, -30, 44, - -42, 24, 2, -28, 43, -43, 27, -1, -25, 42, -44, 30, -4, -22, 41, -45, - 32, -8, -19, 39, -45, 34, -11, -16, 38, -45, 36, -14, -13, 36, -45, 38, - -17, -10, 34, -45, 40, -20, -7, 31, -44, 41, -23, -3, 29, -44, 43, -26, - }, { - 32, -38, 19, 6, -29, 43, -44, 31, -9, -16, 36, -45, 40, -22, -2, 26, - -42, 45, -34, 12, 13, -34, 45, -41, 25, -1, -23, 40, -45, 36, -15, -10, - 32, -44, 43, -28, 4, 20, -39, 45, -38, 18, 7, -30, 43, -44, 30, -8, - -17, 37, -45, 39, -21, -3, 27, -42, 44, -33, 11, 14, -35, 45, -41, 24, - }, { - 32, -39, 23, -1, -21, 38, -45, 40, -25, 3, 19, -37, 45, -41, 27, -6, - -17, 36, -45, 42, -29, 8, 15, -34, 44, -43, 30, -10, -13, 33, -44, 44, - -32, 12, 11, -31, 43, -44, 34, -14, -9, 30, -43, 45, -35, 16, 7, -28, - 42, -45, 36, -18, -4, 26, -41, 45, -38, 20, 2, -24, 40, -45, 39, -22, - }, { - 32, -40, 27, -8, -13, 31, -43, 45, -38, 22, -2, -18, 35, -44, 44, -34, - 17, 3, -23, 38, -45, 42, -30, 12, 9, -28, 41, -45, 40, -26, 7, 14, - -32, 43, -45, 37, -21, 1, 19, -36, 44, -44, 34, -16, -4, 24, -39, 45, - -42, 30, -11, -10, 29, -41, 45, -39, 25, -6, -15, 33, -43, 45, -36, 20, - }, { - 32, -41, 30, -14, -4, 22, -36, 44, -44, 37, -23, 6, 13, -30, 41, -45, - 42, -31, 15, 3, -21, 36, -44, 45, -38, 24, -7, -12, 29, -40, 45, -42, - 32, -16, -2, 20, -35, 44, -45, 38, -25, 8, 11, -28, 40, -45, 43, -33, - 17, 1, -19, 34, -43, 45, -39, 26, -9, -10, 27, -39, 45, -43, 34, -18, - }, { - 32, -42, 34, -20, 4, 12, -27, 38, -44, 45, -39, 28, -13, -3, 19, -33, - 42, -45, 43, -34, 21, -6, -11, 26, -38, 44, -45, 39, -29, 14, 2, -18, - 32, -41, 45, -43, 35, -22, 7, 10, -25, 37, -44, 45, -40, 30, -15, -1, - 17, -31, 41, -45, 43, -36, 23, -8, -9, 24, -36, 44, -45, 40, -30, 16, - }, { - 32, -43, 36, -26, 13, 1, -15, 28, -38, 44, -45, 42, -35, 24, -11, -3, - 17, -30, 39, -44, 45, -41, 34, -22, 9, 6, -19, 31, -40, 45, -45, 40, - -32, 20, -7, -8, 21, -33, 41, -45, 44, -39, 30, -18, 4, 10, -23, 34, - -42, 45, -44, 38, -29, 16, -2, -12, 25, -36, 43, -45, 43, -37, 27, -14, - }, { - 32, -44, 39, -31, 21, -10, -2, 14, -25, 34, -41, 45, -45, 42, -36, 28, - -17, 6, 7, -18, 29, -37, 43, -45, 44, -40, 34, -24, 13, -1, -11, 22, - -32, 39, -44, 45, -43, 38, -30, 20, -9, -3, 15, -26, 35, -41, 45, -45, - 42, -36, 27, -16, 
4, 8, -19, 30, -38, 43, -45, 44, -40, 33, -23, 12, - }, { - 32, -44, 41, -36, 29, -20, 11, -1, -9, 18, -27, 34, -40, 44, -45, 45, - -42, 37, -30, 22, -13, 3, 7, -16, 25, -33, 39, -43, 45, -45, 43, -38, - 32, -24, 15, -6, -4, 14, -23, 31, -38, 42, -45, 45, -43, 39, -34, 26, - -17, 8, 2, -12, 21, -30, 36, -41, 44, -45, 44, -40, 35, -28, 19, -10, - }, { - 32, -45, 43, -39, 35, -30, 23, -16, 9, -1, -7, 14, -21, 28, -34, 38, - -42, 44, -45, 45, -43, 40, -36, 31, -25, 18, -11, 3, 4, -12, 19, -26, - 32, -37, 41, -44, 45, -45, 44, -41, 38, -33, 27, -20, 13, -6, -2, 10, - -17, 24, -30, 36, -40, 43, -45, 45, -44, 42, -39, 34, -29, 22, -15, 8, - }, { - 32, -45, 44, -42, 40, -37, 34, -30, 25, -20, 15, -10, 4, 1, -7, 12, - -17, 22, -27, 31, -35, 38, -41, 43, -44, 45, -45, 45, -43, 41, -39, 36, - -32, 28, -23, 18, -13, 8, -2, -3, 9, -14, 19, -24, 29, -33, 36, -39, - 42, -44, 45, -45, 45, -44, 43, -40, 38, -34, 30, -26, 21, -16, 11, -6, - }, { - 32, -45, 45, -44, 43, -42, 41, -39, 38, -36, 34, -31, 29, -26, 23, -20, - 17, -14, 11, -8, 4, -1, -2, 6, -9, 12, -15, 18, -21, 24, -27, 30, - -32, 34, -36, 38, -40, 41, -43, 44, -44, 45, -45, 45, -45, 45, -44, 43, - -42, 40, -39, 37, -35, 33, -30, 28, -25, 22, -19, 16, -13, 10, -7, 3, - }, { - 32, -45, 45, -45, 45, -45, 45, -45, 44, -44, 44, -44, 43, -43, 43, -42, - 42, -41, 41, -40, 40, -39, 39, -38, 38, -37, 36, -36, 35, -34, 34, -33, - 32, -31, 30, -30, 29, -28, 27, -26, 25, -24, 23, -22, 21, -20, 19, -18, - 17, -16, 15, -14, 13, -12, 11, -10, 9, -8, 7, -6, 4, -3, 2, -1, - } -}; diff --git a/spaces/congsaPfin/Manga-OCR/logs/Bubble Shooter Oyunu APK ndir - Elenceli ve Bamllk Yapan Balon Patlatma Oyunu.md b/spaces/congsaPfin/Manga-OCR/logs/Bubble Shooter Oyunu APK ndir - Elenceli ve Bamllk Yapan Balon Patlatma Oyunu.md deleted file mode 100644 index d7048eb2404f1b53bb29a5a88159952ec3110131..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Bubble Shooter Oyunu APK ndir - Elenceli ve Bamllk Yapan Balon Patlatma Oyunu.md +++ /dev/null @@ -1,115 +0,0 @@ -
      -

      Bubble Shooter Oyunu Indir Apk: A Fun and Relaxing Game for Everyone

      -

      If you are looking for a game that can keep you entertained and relaxed at the same time, you should try Bubble Shooter Oyunu Indir Apk. This is a game that anyone can enjoy, regardless of age or skill level. In this article, we will tell you everything you need to know about this game, including what it is, how to play it, why you should play it, and where you can download it.

      -

      What is Bubble Shooter Oyunu Indir Apk?

      -

      Bubble Shooter Oyunu Indir Apk is a game that you can play on your Android device. It is a clone of the classic Puzzle Bobble game that was released by Taito in 1994. The game is owned by Ilyon Dynamics, who acquired it from Absolutist in 2015.

      -

      bubble shooter oyunu indir apk


      Download Ziphttps://urlca.com/2uO4Z6



      -

      A clone of the classic Puzzle Bobble game

      -

      The goal of the game is to clear the screen by forming groups of three or more bubbles of the same color. The game is simple and easy to learn, but it can also be challenging and addictive. You can play this game online or offline, anytime and anywhere.

      -

      A free and easy-to-play game for Android devices

      -

      The game is free to download and play, and it does not require an internet connection. You can play it on any Android device that supports APK files. The game has a colorful and appealing design, with smooth animations and sound effects. The game also has a classic mode and an arcade mode, with different levels of difficulty and time limits.

      -

      A game with thousands of levels and challenges

      -

      The game has over 3000 levels to play, with more added all the time. Each level has a different layout and configuration of bubbles, making the game more interesting and varied. The game also has different missions and rewards to complete, such as popping a certain number of bubbles, clearing the board with a minimum number of shots, or using special boosters and power-ups.

      -

      bubble shooter oyunu indir apk son sürüm
      -bubble shooter oyunu indir apk ücretsiz
      -bubble shooter oyunu indir apk hileli
      -bubble shooter oyunu indir apk mod
      -bubble shooter oyunu indir apk android
      -bubble shooter oyunu indir apk ios
      -bubble shooter oyunu indir apk tablet
      -bubble shooter oyunu indir apk pc
      -bubble shooter oyunu indir apk online
      -bubble shooter oyunu indir apk offline
      -bubble shooter oyunu indir apk yorumlar
      -bubble shooter oyunu indir apk puanlar
      -bubble shooter oyunu indir apk nasıl oynanır
      -bubble shooter oyunu indir apk kurulum
      -bubble shooter oyunu indir apk güncelleme
      -bubble shooter oyunu indir apk özellikler
      -bubble shooter oyunu indir apk grafikler
      -bubble shooter oyunu indir apk sesler
      -bubble shooter oyunu indir apk müzikler
      -bubble shooter oyunu indir apk seviyeler
      -bubble shooter oyunu indir apk görevler
      -bubble shooter oyunu indir apk ödüller
      -bubble shooter oyunu indir apk bonuslar
      -bubble shooter oyunu indir apk ipuçları
      -bubble shooter oyunu indir apk hileler
      -bubble shooter oyunu indir apk alternatifler
      -bubble shooter oyunu indir apk benzerler
      -bubble shooter oyunu indir apk farklılar
      -bubble shooter oyunu indir apk yeni sürüm
      -bubble shooter oyunu indir apk eski sürüm
      -bubble shooter oyunu indir apk tam sürüm
      -bubble shooter oyunu indir apk demo sürüm
      -bubble shooter oyunu indir apk premium sürüm
      -bubble shooter oyunu indir apk pro sürüm
      -bubble shooter oyunu indir apk lite sürüm
      -bubble shooter oyunu indir apk orijinal sürüm
      -bubble shooter oyunu indir apk klon sürüm
      -bubble shooter oyunu indir apk çevrimiçi sürüm
      -bubble shooter oyunu indir apk çevrimdışı sürüm
      -bubble shooter oyunu indir apk Türkçe sürüm
      -bubble shooter oyunu indir apk İngilizce sürüm
      -bubble shooter oyunu indir apk Fransızca sürüm
      -bubble shooter oyunu indir apk Almanca sürüm
      -bubble shooter oyunu indir apk İspanyolca sürüm
      -bubble shooter oyunu indir apk İtalyanca sürüm
      -bubble shooter oyunu indir apk Rusça sürüm
      -bubble shooter oyunu indir apk Çince sürüm
      -bubble shooter oyunu indir apk Japonca sürüm
      -bubble shooter oyunu indir apk Korece sürüm

      -

      How to Play Bubble Shooter Oyunu Indir Apk?

      -

      The game is very easy to play, but it also requires some strategy and skill. Here are some basic steps to follow:

      -

      Aim and shoot bubbles of the same color

      -

    You can use your mouse (if playing on a PC) or your finger (if playing on a phone) to aim and shoot bubbles from a cannon at the bottom of the screen. The next bubble in the queue is shown in the bottom right corner of the screen. You can also change the color of your current bubble by tapping or clicking on it.
    

      -

      Clear the board and score points

      -

      When you shoot a group of three or more bubbles of the same color, they will pop and disappear from the screen. The more bubbles you pop in one shot, the more points you score. You can also score bonus points by popping bubbles that are hanging from other bubbles, or by clearing the board with fewer shots.

      -

      Use boosters and power-ups to help you

      -

    You can use boosters and power-ups to help you clear the board faster and more easily. Boosters are special bubbles that have different effects, such as exploding, changing colors, or removing bubbles. Power-ups are items that you can use before or during the game, such as extra moves, fireballs, or bombs. You can earn boosters and power-ups by completing missions, winning levels, or buying them with coins.
    

      -

      Why Should You Play Bubble Shooter Oyunu Indir Apk?

      -

      Bubble Shooter Oyunu Indir Apk is a game that has many benefits and advantages for its players. Here are some of the reasons why you should play this game:

      -

      It is a fun and addictive game

      -

      The game is very enjoyable and satisfying, as you can pop bubbles and watch them burst. The game also has a lot of variety and challenge, as you can play different modes, levels, and missions. The game is also very addictive, as you will want to beat your own score and progress to the next level.

      -

      It is a relaxing and soothing game

      -

      The game is very calming and soothing, as you can listen to the soft music and sound effects. The game also has a beautiful and colorful design, with bright and cheerful bubbles. The game is also very therapeutic, as you can relieve your stress and anxiety by popping bubbles and clearing your mind.

      -

      It is a game that improves your strategy and skill

      -

      The game is very educational and beneficial, as you can improve your strategy and skill by playing it. The game requires you to think and plan ahead, as you have to aim and shoot bubbles in the right direction and angle. The game also tests your reflexes and coordination, as you have to react quickly and accurately to the changing board. The game also enhances your memory and concentration, as you have to remember the colors and positions of the bubbles.

      -

      Where Can You Download Bubble Shooter Oyunu Indir Apk?

      -

      If you want to download and play Bubble Shooter Oyunu Indir Apk on your Android device, you have two options:

      -

      The official Google Play Store link

      -

      You can download the game from the official Google Play Store link here: [Bubble Shooter]. This is the safest and most reliable way to get the game, as you can be sure that it is free of viruses and malware. You can also get updates and support from the developer through this link.

      -

      The alternative APKCombo link

      -

    You can also download the game from the alternative APKCombo link here: [Bubble Shooter APK]. This is a website that provides APK files for various Android apps and games. You can use this link if you have trouble accessing the Google Play Store or if you want to download an older version of the game. However, you should be careful when using this link, as downloads from third-party sites can involve risks or errors. You should always scan the file before installing it on your device.
    
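    If you prefer to install the downloaded file from a computer instead of tapping it on your phone, one possible workflow is to scan the APK, enable USB debugging on the device, and then sideload it with Android's adb tool. The file name below is only a placeholder for whatever the download is actually called:

    ```
    # file name is a placeholder for the actual downloaded APK
    adb install bubble-shooter.apk
    ```

    Otherwise, you can simply open the APK on your phone and allow installation from unknown sources when prompted.
    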

      -

      Conclusion

      -

      Bubble Shooter Oyunu Indir Apk is a fun and relaxing game that everyone can enjoy. It is a clone of the classic Puzzle Bobble game that has thousands of levels and challenges. It is a free and easy-to-play game that you can download from the Google Play Store or from an alternative APK website. It is a game that improves your strategy and skill while also relieving your stress and anxiety. If you are looking for a game that can keep you entertained and relaxed at the same time, you should try Bubble Shooter Oyunu Indir Apk.

      -

      FAQs

      -

      Here are some frequently asked questions about Bubble Shooter Oyunu Indir Apk:

    
    | Question | Answer |
    | --- | --- |
    | What are the minimum requirements to play Bubble Shooter Oyunu Indir Apk? | You need an Android device that runs on Android 4.1 or higher, with at least 50 MB of free storage space. |
    | How can I get more coins in Bubble Shooter Oyunu Indir Apk? | You can get more coins by completing missions, winning levels, watching ads, or buying them with real money. |
    | How can I contact the developer of Bubble Shooter Oyunu Indir Apk? | You can contact the developer by sending an email to support@ilyon.net or by visiting their website at https://www.ilyon.net/. |
    | How can I change the language of Bubble Shooter Oyunu Indir Apk? | You can change the language of the game by going to the settings menu and selecting the language option. You can choose from English, Turkish, Spanish, French, German, Italian, Portuguese, Russian, and Arabic. |
    | Is Bubble Shooter Oyunu Indir Apk safe to play? | Yes, Bubble Shooter Oyunu Indir Apk is safe to play, as long as you download it from a trusted source. The game does not contain any harmful or inappropriate content, and it does not collect or share any personal information from its users. |
    
      -

      I hope this article has answered all your questions about Bubble Shooter Oyunu Indir Apk. If you have any more questions or feedback, please feel free to leave a comment below. Thank you for reading and happy gaming!

    
      -
      -
      \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Download and Play Mafia City Roleplay A GTA V RP Server with a Difference.md b/spaces/congsaPfin/Manga-OCR/logs/Download and Play Mafia City Roleplay A GTA V RP Server with a Difference.md deleted file mode 100644 index e52f326963a6b3433d45705d5df9b83f4bd09371..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Download and Play Mafia City Roleplay A GTA V RP Server with a Difference.md +++ /dev/null @@ -1,133 +0,0 @@ -
      -

      Mafia City RP Download: How to Join and Play GTA 5 Roleplay

      -

      If you are a fan of Grand Theft Auto 5, you might have heard of GTA 5 roleplay, a popular way to enjoy a modded version of the game online. On Twitch, GTA 5 roleplay is regularly one of the most-viewed channels, and most of the top GTA streamers are playing roleplay. One of the biggest and most immersive GTA 5 roleplay servers is Mafia City Roleplay, which offers a unique and realistic experience in the world of GTA 5. In this article, we will show you how to download, install, and play on Mafia City Roleplay, as well as some tips and tricks to make the most out of your roleplay adventure.

      -

      What is Mafia City RP?

      -

    Mafia City RP is a GTA 5 roleplay server that runs on the FiveM mod, which allows players to create and join custom servers with different features and possibilities. Mafia City RP is developed and run by roleplayers who have dedicated a lot of time to working towards a common goal: immersion. They have created a community that is different from the rest, with a focus on realism, quality, and fun.
    

      -

      mafia city rp download


      Download Zip ->->->-> https://urlca.com/2uOdMf



      -

      A brief introduction to the concept and features of Mafia City RP

      -

      The concept of Mafia City RP is simple: you create a character, give them a personality and a backstory, and then take on a role in the world of GTA 5. You can choose from a variety of jobs, groups, activities, and interactions that suit your character's style and goals. You can be anything from a cab driver to a paramedic to a fisherman to a real estate agent. And yes, even a cop or a crook.

      -

      The features of Mafia City RP are numerous and impressive. Some of them include:

      -
        -
      • A fully custom-made Mobile Database Computer (MDC) for government groups such as the Police Department and the Emergency Services.
    
      • A dynamic group system that allows players to create their own groups, customize them with ranks, divisions, pay, lockers, HQs, garages, and more.
    
      • A player corporation system that allows players to run their own businesses, hire employees, buy properties and vehicles.
    
      • A player job system that offers interesting ways to earn money, such as trucking, mining, garbage collecting, lumberjack, and more.
    
      • A drug and illegal weapon system that allows players to produce, trade, and sell illicit items in a player-owned factory or camper.
    
      • A player property system that allows players to purchase any property in San Andreas, including houses, apartments, businesses, and more.
    
      -

      How to download and install Mafia City RP

      -

      Before you can join and play on Mafia City RP, you will need to meet some requirements and follow some steps. Here is what you need to do:

      -

      The requirements

      -

      GTA 5 copy

      -

      You will need to own a copy of GTA 5 on your PC. You can buy it from Steam or Rockstar Games Launcher. You will also need to have it updated to the latest version.

      -

      How to join and play GTA 5 Roleplay - Quick setup for Mafia City Roleplay
      -Mafia City Roleplay Discord server - GTA V Roleplay community
      -Mafia City Roleplay - The Ultimate GTA V Roleplay Experience
      -GTA 5 Mafia City Roleplay - How to install and play
      -Mafia City Roleplay - GTA V RP server with immersive features
      -Mafia City Roleplay review - Is it worth playing?
      -Mafia City Roleplay rules - What you need to know before joining
      -Mafia City Roleplay jobs - How to make money in GTA V RP
      -Mafia City Roleplay factions - How to join and roleplay as a gangster, cop, or civilian
      -Mafia City Roleplay events - What's happening in the GTA V RP world
      -Mafia City Roleplay vs other GTA V RP servers - How does it compare?
      -Mafia City Roleplay tips and tricks - How to improve your roleplaying skills
      -Mafia City Roleplay forum - Where to find guides, support, and feedback
      -Mafia City Roleplay donation - How to support the GTA V RP server and get perks
      -Mafia City Roleplay streamers - Who to watch and follow on Twitch or YouTube
      -Mafia City Roleplay update - What's new and coming soon in GTA V RP
      -Mafia City Roleplay wiki - Where to find information and lore about the GTA V RP server
      -Mafia City Roleplay mod menu - How to access and use the GTA V RP mod menu
      -Mafia City Roleplay cheats and hacks - How to avoid and report them in GTA V RP
      -Mafia City Roleplay ban appeal - How to get unbanned from the GTA V RP server
      -Mafia City Roleplay application - How to apply and get whitelisted for the GTA V RP server
      -Mafia City Roleplay download link - Where to download the GTA V RP mod and launcher
      -Mafia City Roleplay system requirements - What you need to run the GTA V RP mod smoothly
      -Mafia City Roleplay voice chat - How to use and adjust the GTA V RP voice chat settings
      -Mafia City Roleplay map - How to navigate and explore the GTA V RP map
      -Mafia City Roleplay characters - How to create and customize your GTA V RP character
      -Mafia City Roleplay inventory - How to manage and use your GTA V RP items and weapons
      -Mafia City Roleplay vehicles - How to buy, sell, and drive your GTA V RP cars and bikes
      -Mafia City Roleplay housing - How to rent, buy, and decorate your GTA V RP home or apartment
      -Mafia City Roleplay clothing - How to change and style your GTA V RP outfit and accessories

      -

      Working microphone

      -

      You will need to have a working microphone for voice chat. Voice chat is essential for roleplaying on Mafia City RP, as it allows you to communicate with other players and NPCs. You will also need to adjust your microphone settings to make sure it is clear and loud enough.

      -

      Discord account

      -

      You will need to have a Discord account and join the Mafia City RP Discord server. Discord is a free voice and text chat app that is used by the Mafia City RP community for announcements, rules, support, applications, and more. You can download Discord from here and join the Mafia City RP server from here.

      -

      FiveM mod

      -

      You will need to download and install the FiveM mod, which is a platform that allows you to play GTA 5 on custom servers. You can download FiveM from here and follow the instructions to install it. You will also need to link your Steam account to FiveM.

      -

      Rage mod (optional)

      -

    You can also download and install the Rage mod, which is another platform that allows you to play GTA 5 on custom servers. Some players prefer Rage over FiveM for reasons such as performance, stability, or personal preference. You can download Rage from here and follow the instructions to install it. You will also need to link your Steam account to Rage.
    

      -

      The steps

      -

      Once you have met the requirements, you can follow these steps to join and play on Mafia City RP:

      -
        -
    1. Launch FiveM or Rage and search for "Mafia City Roleplay" in the server browser. Alternatively, you can use the direct connect option and enter the server IP address: 185.249.196.40:30120 for FiveM or 185.249.196.40:22005 for Rage (see the example command after this list).
    2. Select the server and click on "Connect". You might have to wait in a queue if the server is full.
    3. Once you are in the server, you will be prompted to create a character. You can customize your character's appearance, name, age, gender, and backstory.
    4. After creating your character, you will be spawned in a hotel room. You can use the phone in your inventory to access various features, such as contacts, messages, bank, jobs, groups, etc.
    5. You can also use the F1 menu to access other features, such as settings, inventory, voice chat, emotes, etc.
    6. You are now ready to explore the world of Mafia City RP and start roleplaying. Remember to follow the rules of the server and respect other players.
    
      -
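    As a quick illustration of the direct connect option mentioned in step 1, FiveM also lets you open its in-game console (F8 by default) and type a connect command. Assuming the server address quoted above is still current, it would look something like this:

    ```
    connect 185.249.196.40:30120
    ```

    If the address has changed since this article was written, searching for "Mafia City Roleplay" in the server browser is the safer route.
    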

      How to play and enjoy Mafia City RP

      -

      Now that you have joined the server, you might be wondering how to play and enjoy Mafia City RP. There is no definitive answer to this question, as different players have different preferences and styles of roleplaying. However, here are some general tips and tricks that might help you:

      -

      The basics of roleplaying and the rules of the server

      -

      The first thing you need to know is how to roleplay properly and follow the rules of the server. Roleplaying means acting as your character would in a realistic and immersive way. You should not break character or do anything that would ruin the immersion for yourself or others. Some of the basic rules of roleplaying are:

      -
        -
• Do not metagame: Metagaming means using information that your character would not know in-game, such as information from Discord, Twitch, or other sources.
• Do not powergame: Powergaming means forcing your actions or outcomes on other players without giving them a chance to react or roleplay.
• Do not RDM or VDM: RDM means random deathmatch, which is killing or injuring another player without a valid roleplay reason. VDM means vehicle deathmatch, which is using your vehicle as a weapon without a valid roleplay reason.
• Do not failRP: FailRP means failing to roleplay realistically or appropriately according to your character's situation or personality.
• Do not troll or grief: Trolling or griefing means intentionally disrupting or ruining the roleplay experience for other players.
      -

      These are just some of the basic rules of roleplaying. You can find more detailed rules on the Mafia City RP website or Discord server. You should always read and follow the rules before playing on the server.

      -

      The advanced features and possibilities of Mafia City RP

      -

      Mafia City RP offers a lot of advanced features and possibilities for roleplaying that go beyond the basics. Some of them are:

      -

      Player corporations and businesses

      -

      If you want to run your own business or corporation on Mafia City RP, you can do so by applying for a player corporation license on the website or Discord server. You will need to have a clear and detailed business plan, a minimum of 5 active members, and a sufficient amount of funds to start and maintain your business. You will also need to follow the rules and regulations of the player corporation system, such as paying taxes, reporting income, and avoiding conflicts of interest.

      -

      Once you have your player corporation license, you can access various features and benefits, such as:

      -
        -
• Creating and customizing your own logo, slogan, website, and social media.
• Hiring and managing your employees, setting their salaries, roles, and permissions.
• Purchasing and customizing your own properties, vehicles, and assets.
• Offering your products or services to other players or groups.
• Competing or collaborating with other player corporations or businesses.
      -

      Drugs and illegal weapons production and trade

      -

      If you want to live on the edge and make some quick money on Mafia City RP, you can try your hand at producing and trading drugs and illegal weapons. However, be warned that this is a risky and dangerous business that can get you in trouble with the law or other criminals.

      -

      To produce drugs or illegal weapons, you will need to purchase a factory or a camper from the black market. You will also need to buy the raw materials and equipment needed to make your product. You can then use the factory or camper menu to start the production process. You will need to monitor the quality, quantity, and safety of your product, as well as the power and water supply of your facility.

      -

      To trade drugs or illegal weapons, you will need to find buyers who are willing to pay a good price for your product. You can use the phone or the dark web to advertise your product or contact potential buyers. You can also use the black market to sell your product directly. However, you should be careful of who you deal with, as some buyers might try to scam you, rob you, or report you to the police.

      -

      Player properties and customization

      -

      If you want to have a place to call your own on Mafia City RP, you can purchase any property in San Andreas, including houses, apartments, businesses, and more. You can use the phone or the website to browse the available properties and their prices. You can also use the real estate agent job to help other players buy or sell properties.

      -

      Once you own a property, you can customize it to your liking. You can use the property menu to access various features, such as:

      -
        -
• Locking or unlocking your property.
• Inviting or kicking guests from your property.
• Changing the interior design of your property.
• Placing furniture and items in your property.
• Storing items in your property.
• Selling or renting out your property.
      -

      Events and activities

      -

      If you want to have some fun and excitement on Mafia City RP, you can participate in various events and activities that are organized by the staff or other players. Some of them include:

      -
        -
• Races: Compete with other players in different types of races, such as street races, off-road races, boat races, etc.
• Fight clubs: Test your skills in hand-to-hand combat against other players in underground fight clubs.
• Casinos: Try your luck in gambling games such as poker, blackjack, roulette, slots, etc.
• Parties: Join or host parties at different locations such as nightclubs, bars, hotels, etc.
• Festivals: Celebrate different occasions such as Halloween, Christmas, New Year's Eve, etc. with special events and activities.
      -

      Conclusion

      -

Mafia City RP is one of the best GTA 5 roleplay servers, offering a unique and realistic experience in the world of GTA 5. It has a lot of features and possibilities that allow you to create and live your own story in San Andreas. Whether you want to be a law-abiding citizen or a notorious criminal, a successful businessman or a struggling worker, a lone wolf or a team player, Mafia City RP has something for everyone.

All you need is a copy of GTA 5 on PC, a working microphone, a Discord account, the FiveM mod (or Rage mod), and a lot of imagination and creativity. If you are interested in joining Mafia City RP, you can visit their website or Discord server for more information. We hope this article has helped you understand how to download, install, and play on Mafia City RP, as well as some tips and tricks to make the most out of your roleplay adventure. Have fun!

      -

      Frequently Asked Questions

      -

      How do I apply for a government job on Mafia City RP?

      -

      If you want to apply for a government job on Mafia City RP, such as a police officer, a firefighter, a paramedic, or a judge, you will need to fill out an application form on the website or Discord server. You will also need to pass an interview and a training session before you can start working. Government jobs have higher standards and expectations than other jobs, so you will need to be serious and committed to your role.

      -

      How do I join or create a gang on Mafia City RP?

      -

      If you want to join or create a gang on Mafia City RP, you will need to follow the rules and guidelines of the gang system. You can find more information about the gang system on the website or Discord server. To join an existing gang, you will need to contact the gang leader or a recruiter and prove your loyalty and skills. To create a new gang, you will need to have at least 5 active members, a unique name and theme, and a valid roleplay reason.

      -

      How do I report a rule breaker or a bug on Mafia City RP?

      -

      If you encounter a rule breaker or a bug on Mafia City RP, you can report them using the appropriate channels on the website or Discord server. You will need to provide evidence and details of the incident, such as screenshots, videos, logs, etc. The staff team will review your report and take the necessary actions.

      -

      How do I get support or help on Mafia City RP?

      -

      If you need support or help on Mafia City RP, you can use the help chat in-game by typing /help in the chat box. You can also use the support channels on the website or Discord server. The staff team and the community members will try to assist you with your issue.

      -

      How do I donate or support Mafia City RP?

      -

      If you want to donate or support Mafia City RP, you can do so by visiting their donation page on their website. You can choose from different donation packages that offer various perks and benefits, such as priority queue, custom vehicles, custom clothing, custom weapons, etc. You can also support Mafia City RP by spreading the word, leaving positive reviews, and inviting your friends.

      -
      -
      \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Incredibox Blinding Lights Mod APK The Ultimate Music Game for The Weeknd Fans.md b/spaces/congsaPfin/Manga-OCR/logs/Incredibox Blinding Lights Mod APK The Ultimate Music Game for The Weeknd Fans.md deleted file mode 100644 index 3fe3a59c2164bad8f21acf1de82e6e82fab501d9..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Incredibox Blinding Lights Mod APK The Ultimate Music Game for The Weeknd Fans.md +++ /dev/null @@ -1,87 +0,0 @@ -
      -

      Incredibox Blinding Lights Mod Apk: How to Create Your Own Music with The Weeknd's Hit Song

      -

      Do you love music? Do you love creating your own music? Do you love the song Blinding Lights by the Weeknd? If you answered yes to any of these questions, then you will love Incredibox Blinding Lights Mod Apk. This is a fan-made version of Incredibox, a fun and interactive music app that lets you create your own music with the help of a merry crew of beatboxers. In this mod apk, you can remix Blinding Lights with your own style and flair. Here is everything you need to know about Incredibox Blinding Lights Mod Apk.

      -

      What is Incredibox?

      -

      Incredibox is a music app that lets you create your own music with the help of a merry crew of beatboxers. You can choose your musical style among 8 impressive atmospheres and start to lay down, record and share your mix. You can also drag and drop icons onto the avatars to make them sing and start to compose your own music. You can find the right sound combos to unlock animated choruses that will enhance your tune. You can also save and share your mix with others and get votes from other users. Incredibox is a part game, part tool, but above all an audio and visual experience that has quickly become a hit with people of all ages. More than 70 million players worldwide have already enjoyed it.

      -

      incredibox blinding lights mod apk


      Download File >>> https://urlca.com/2uObts



      -

      What is Blinding Lights?

      -

      Blinding Lights is a song by Canadian singer the Weeknd from his fourth studio album, After Hours. It was released on November 29, 2019, as the second single from the album. It is a new wave, synth-pop, synthwave and electropop song that features a driving synth bass line and a pulsating rhythm. The lyrics are about the Weeknd's desire for a woman who is out of his reach and his struggle with loneliness and addiction. Blinding Lights is one of the most successful songs of all time, topping the charts in 37 countries and breaking several records. It is also the Weeknd's signature song and one of the most streamed songs on Spotify.

      -

      What is Incredibox Blinding Lights Mod Apk?

      -

      Incredibox Blinding Lights Mod Apk is a fan-made version of Incredibox that replaces the original sounds and bonuses with those from Blinding Lights. It was created by RB - SERIES, a YouTube channel that makes Incredibox mods and concepts. The mod apk is available for download on Google Sites and MediaFire. In this mod apk, you can create your own music with Blinding Lights using the same interface and mechanics as Incredibox. You can also customize the appearance of the beatboxers with different outfits and accessories inspired by the Weeknd's style. Incredibox Blinding Lights Mod Apk is a fun and creative way to remix Blinding Lights with your own style and flair.

      -

      How to install and play Incredibox Blinding Lights Mod Apk?

      -

      To install and play Incredibox Blinding Lights Mod Apk, you need to follow these simple steps:

      -

      incredibox v9 mod blinding lights download
      -incredibox blinding lights remix apk
      -incredibox mod apk blinding lights version
      -incredibox v9 blinding lights mod free
      -incredibox blinding lights mod online
      -incredibox v9 mod apk blinding lights android
      -incredibox blinding lights mod credits
      -incredibox mod blinding lights youtube
      -incredibox v9 mod apk blinding lights ios
      -incredibox blinding lights mod gameplay
      -incredibox mod apk download blinding lights edition
      -incredibox blinding lights mod songs
      -incredibox v9 mod blinding lights tutorial
      -incredibox blinding lights mod apk latest version
      -incredibox mod apk blinding lights update
      -incredibox v9 blinding lights mod review
      -incredibox blinding lights mod apk no ads
      -incredibox mod apk blinding lights unlimited money
      -incredibox v9 mod apk blinding lights offline
      -incredibox blinding lights mod apk hack
      -incredibox mod apk blinding lights cheats
      -incredibox v9 blinding lights mod features
      -incredibox blinding lights mod apk premium
      -incredibox mod apk blinding lights unlocked
      -incredibox v9 mod apk blinding lights cracked
      -incredibox blinding lights mod apk full version
      -incredibox mod apk download free blinding lights
      -incredibox v9 blinding lights mod for pc
      -incredibox blinding lights mod apk for windows 10
      -incredibox mod apk for mac blinding lights
      -incredibox v9 mod apk for linux blinding lights
      -incredibox blinding lights mod for chromebook
      -incredibox mod apk for android tv blinding lights
      -incredibox v9 mod for firestick blinding lights
      -incredibox blinding lights mod for roku tv
      -incredibox mod for smart tv blinding lights
      -incredibox v9 mod for xbox one blinding lights
      -incredibox blinding lights mod for ps4
      -incredibox mod for nintendo switch blinding lights
      -incredibox v9 mod for vr headset blinding lights
      -incredibox blinding lights fan made version download
      -incredibox fan made mods apk download
      -how to install incrediobox mods on android
      -how to make your own incrediobox mods
      -best incrediobox mods 2023
      -incrediobox evadare chapter 1 download
      -incrediobox the bells christmas version download
      -incrediobox galaxy fan made version download

      -
        -
1. Download the mod apk file from the link provided by RB - SERIES on their YouTube video description or their Google Sites page.
2. Enable unknown sources on your device settings to install the apk file.
3. Open the app and enjoy creating your own music with Blinding Lights.
4. Drag and drop icons onto the avatars to make them sing and start to compose your own mix.
5. Find the right sound combos to unlock animated choruses that will enhance your tune.
6. Save and share your mix with others and get votes from other users.
      -

      You can also watch RB - SERIES's video tutorial on how to install and play Incredibox Blinding Lights Mod Apk on their YouTube channel.

      -

      Conclusion

      -

      Incredibox Blinding Lights Mod Apk is a fun and interactive way to create your own music with the hit song by the Weeknd. It is easy to install and play, and you can unleash your creativity and musical skills with it. If you are a fan of Incredibox and Blinding Lights, you should definitely give it a try. You will have a blast making your own versions of Blinding Lights with different beats, melodies, vocals, effects, and bonuses. You can also share your mix with others and see how they like it. Incredibox Blinding Lights Mod Apk is a great way to express yourself through music and have fun at the same time.

      -

      FAQs

      -
        -
• Is Incredibox Blinding Lights Mod Apk free?

  Yes, Incredibox Blinding Lights Mod Apk is free to download and play. However, you may need to watch some ads to unlock some features or bonuses.

• Is Incredibox Blinding Lights Mod Apk safe?

  Yes, Incredibox Blinding Lights Mod Apk is safe to download and install. However, you should always download it from the official link provided by RB - SERIES or their Google Sites page. You should also scan the apk file with an antivirus before installing it.

• Is Incredibox Blinding Lights Mod Apk legal?

  Incredibox Blinding Lights Mod Apk is a fan-made version of Incredibox that uses the sounds and bonuses from Blinding Lights. It is not affiliated with or endorsed by Incredibox or the Weeknd. It is made for entertainment purposes only and does not intend to infringe any copyrights or trademarks.

• Can I play Incredibox Blinding Lights Mod Apk offline?

  Yes, you can play Incredibox Blinding Lights Mod Apk offline once you have installed it on your device. However, you may need an internet connection to save or share your mix online.

• Can I play Incredibox Blinding Lights Mod Apk on PC?

  Incredibox Blinding Lights Mod Apk is designed for Android devices only. However, you may be able to play it on PC using an Android emulator such as BlueStacks or Nox Player.

        -
      -

      This is the end of the article. I hope you enjoyed reading it and learned something new about Incredibox Blinding Lights Mod Apk. If you have any questions or feedback, please feel free to leave a comment below. Thank you for your time and attention.

      -
      -
      \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Instagram Story Downloader How to Use Story Saver APK.md b/spaces/congsaPfin/Manga-OCR/logs/Instagram Story Downloader How to Use Story Saver APK.md deleted file mode 100644 index a4f29290ef70dd13bc15522b852897573762023a..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Instagram Story Downloader How to Use Story Saver APK.md +++ /dev/null @@ -1,112 +0,0 @@ - -

      How to Download Instagram Story Saver APK

      -

      Instagram is one of the most popular social media platforms in the world, with over a billion users. It allows you to share photos and videos with your followers, as well as watch stories from other users. Stories are short-lived posts that disappear after 24 hours, unless they are added to highlights. They are a great way to share your daily moments, express yourself, and interact with your audience.

      -

      But what if you want to save someone else's story for later? Maybe you want to watch it again, or share it with your friends, or use it for inspiration. Unfortunately, Instagram does not let you do that. You can only view stories within the app, and you cannot download them to your device. Moreover, if you take a screenshot or screen record a story, the original poster will be notified.

      -

      download instagram story saver apk


      Download File ····· https://urlca.com/2uOe56



      -

      That's where Instagram Story Saver APK comes in handy. This is a third-party app that allows you to download any story from Instagram to your Android device, without notifying the original poster. You can save stories in high quality and different formats, such as photos, videos, GIFs, and animations. You can also save stories from private accounts and highlights, as long as you follow them. And you can access the saved stories offline, anytime you want.

      -

      In this article, we will show you how to download Instagram Story Saver APK for Android, and how to use it to save stories from Instagram. We will also tell you what are the benefits of using this app, and why you need it. So, let's get started!

      -

      What is Instagram Story Saver APK?

      -

      Instagram Story Saver APK is a free photo and video downloader app for Instagram. It is not available on Google Play Store, so you have to download it from other sources. It is compatible with most Android devices, and it does not require root access.

      -

      The app works by accessing your Instagram account through a secure login process. It does not store your password or personal information, so you don't have to worry about your privacy. Once you log in with your Instagram account, you can browse the stories of the users you follow, as well as search for any user by their username or hashtag.

      -

      You can then download any story you want by tapping on the download icon on the bottom right corner of the screen. You can choose the format and quality of the file, such as JPG, MP4, GIF, or WEBP. The file will be saved to your device's gallery, where you can view it or share it with others.

      -

      Download Instagram stories and highlights with StorySaver.net
      -Story Saver APK for Android: free photo and video downloader for Instagram
      -How to use Story Saver for Instagram APK to save Instagram status offline
      -Best Instagram story saver apps for Android in 2023
      -Download Instagram Reels, IGTV, and posts with Story Saver app
      -Story Saver: a repost app for Instagram stories, photos, and videos
      -Save Instagram stories without login with Story Saver APK
      -Story Saver for Instagram: a fast and easy way to download Instagram content
      -How to install and use Story Saver APK on your Android device
      -Story Saver: a must-have app for Instagram lovers and influencers
      -Download Instagram stories anonymously with Story Saver app
      -Story Saver for Instagram: a reliable and secure Instagram downloader app
      -How to download Instagram stories in HD quality with Story Saver APK
      -Story Saver: a simple and user-friendly app for saving Instagram stories
      -How to download multiple Instagram stories at once with Story Saver app
      -Story Saver for Instagram: a versatile and powerful app for downloading Instagram media
      -How to download Instagram stories with music and sound with Story Saver APK
      -Story Saver: a handy and convenient app for saving Instagram stories to your gallery
      -How to download Instagram stories from private accounts with Story Saver app
      -Story Saver for Instagram: a smart and efficient app for downloading Instagram content in seconds
      -How to download Instagram stories with stickers and filters with Story Saver APK
      -Story Saver: a fun and creative app for saving Instagram stories to your device
      -How to download Instagram stories with captions and hashtags with Story Saver app
      -Story Saver for Instagram: a useful and practical app for downloading Instagram media with metadata
      -How to download Instagram stories with links and swipe-ups with Story Saver APK

      -

      How to download Instagram Story Saver APK for Android?

      -

      Downloading Instagram Story Saver APK for Android is easy and fast. Just follow these simple steps:

      -

      Step 1: Find a reliable source for the APK file

      -

      Since Instagram Story Saver APK is not available on Google Play Store, you have to find a trustworthy website that offers the latest version of the app. You can use Google or any other search engine to look for it, or you can use one of these links:

- [APKPure](https://apkpure.com/instagram-story-saver/com.storysaverforinstagram.storysaver)
- [APKMirror](https://www.apkmirror.com/apk/story-saver-for-instagram/instagram-story-saver/instagram-story-saver-1-0-0-release/)
- [Uptodown](https://instagram-story-saver.en.uptodown.com/android)

      -

      Make sure you download the file from a secure and verified site, and avoid any links that look suspicious or ask for unnecessary permissions.

      -

      Step 2: Enable unknown sources on your device

      -

      Before you can install the APK file, you have to enable unknown sources on your device. This will allow you to install apps from sources other than Google Play Store. To do this, follow these steps:

      -
        -
• Go to your device's settings and tap on security or privacy.
• Find the option that says unknown sources or install unknown apps and toggle it on.
• A warning message will pop up, telling you that installing apps from unknown sources can harm your device. Tap on OK or allow to proceed.
      -

      You can also enable unknown sources for specific apps, such as your browser or file manager, by tapping on their names and toggling the switch on.

      -

      Step 3: Download and install the APK file

      -

      Now that you have enabled unknown sources, you can download and install the APK file. To do this, follow these steps:

      -
        -
• Open your browser or file manager and locate the APK file you downloaded.
• Tap on the file and a prompt will appear, asking you if you want to install the app. Tap on install and wait for the process to finish.
• If a warning message appears, saying that the app is not safe or can harm your device, ignore it and tap on install anyway. This is because the app is not from Google Play Store, but it does not mean that it is malicious or harmful.
• Once the installation is complete, tap on open to launch the app or done to exit. (A command-line alternative for sideloading from a computer is sketched just after this list.)
      -
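As referenced in the last step above, here is a rough command-line alternative for readers who prefer to sideload the file from a computer. This is only a sketch: it assumes the Android platform tools (adb) are installed, USB debugging is enabled on the phone, and the downloaded file is named story-saver.apk, which is a placeholder rather than the real file name.

```python
import subprocess

# Placeholder path: substitute the actual name of the APK you downloaded.
APK_PATH = "story-saver.apk"

def sideload(apk_path: str) -> None:
    # "adb install -r" installs the APK on the connected device,
    # replacing any previously installed version.
    result = subprocess.run(
        ["adb", "install", "-r", apk_path],
        capture_output=True,
        text=True,
    )
    print(result.stdout or result.stderr)

if __name__ == "__main__":
    sideload(APK_PATH)
```

For most readers, the on-device steps above remain the simpler route.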

      Step 4: Launch the app and log in with your Instagram account

      -

      The final step is to launch the app and log in with your Instagram account. To do this, follow these steps:

      -
        -
• Open the app and tap on the login button on the bottom of the screen.
• A login page will open, where you have to enter your Instagram username and password. Tap on login and wait for the verification process to finish.
• If a message appears, asking you to confirm your identity or enter a security code, follow the instructions and enter the code that was sent to your email or phone number.
• Once you are logged in, you will see a list of stories from the users you follow. You can also search for any user by their username or hashtag using the search bar on the top of the screen.
      -

      How to use Instagram Story Saver APK to save stories?

      -

      Now that you have downloaded and installed Instagram Story Saver APK, you can use it to save stories from Instagram. To do this, follow these steps:

      -

      Step 1: Browse the stories you want to save

      -

      Open the app and browse the stories from the users you follow or search for any user by their username or hashtag. You can also view stories from private accounts and highlights, as long as you follow them. You will see a circle around each user's profile picture, indicating that they have posted a story. Tap on their profile picture to view their story.

      -

      Step 2: Tap on the download icon on the bottom right corner

      -

      Once you are viewing a story, you will see a download icon on the bottom right corner of the screen. It looks like a downward arrow with a line under it. Tap on it and a menu will appear, showing you different options for downloading the story.

      -

      Step 3: Choose the format and quality of the file

      -

      You can choose to download the story as a photo, video, GIF, or animation. You can also choose the quality of the file, such as low, medium, high, or original. The higher the quality, the larger the file size. Tap on your preferred option and wait for the download to start.

      -

      Step 4: View and share the saved file from your gallery

      -

      The downloaded file will be saved to your device's gallery, in a folder named Instagram Story Saver. You can view it or share it with others using any app that supports photos or videos. You can also delete the file if you don't want to keep it anymore. To do this, tap and hold on the file and select delete from the menu.

      -

      What are the benefits of using Instagram Story Saver APK?

      -

      Instagram Story Saver APK is a useful and convenient app that lets you download any story from Instagram to your device. Here are some of the benefits of using this app:

      -

      Save stories without notifying the original poster

      -

      One of the main advantages of using Instagram Story Saver APK is that it does not notify the original poster when you download their story. This means you can save stories without worrying about being caught or offending anyone. You can also save stories anonymously, without logging in with your Instagram account.

      -

      Save stories in high quality and different formats

      -

      Another benefit of using Instagram Story Saver APK is that it allows you to save stories in high quality and different formats, such as photos, videos, GIFs, and animations. You can choose the format and quality of the file according to your preference and device's storage space. You can also view the saved stories in full screen mode, without any cropping or distortion.

      -

      Save stories from private accounts and highlights

      -

      A third benefit of using Instagram Story Saver APK is that it enables you to save stories from private accounts and highlights, as long as you follow them. This means you can access stories that are not visible to the public or that have expired after 24 hours. You can also save stories from celebrities, influencers, or anyone you admire.

      -

      Save stories offline and access them anytime

      -

      A final benefit of using Instagram Story Saver APK is that it lets you save stories offline and access them anytime you want. You don't need an internet connection or an Instagram account to view the saved stories. You can also share them with your friends or family using any app that supports photos or videos.

      -

      Conclusion

      -

      Instagram Story Saver APK is a great app that allows you to download any story from Instagram to your Android device, without notifying the original poster. You can save stories in high quality and different formats, such as photos, videos, GIFs, and animations. You can also save stories from private accounts and highlights, as long as you follow them. And you can access the saved stories offline, anytime you want.

      -

      If you want to try this app, you can download it from one of the links we provided above. Just make sure you enable unknown sources on your device before installing the APK file. Then, launch the app and log in with your Instagram account. Browse the stories you want to save and tap on the download icon on the bottom right corner. Choose the format and quality of the file and wait for the download to finish. View and share the saved file from your gallery or delete it if you don't need it anymore.

      -

      We hope this article was helpful and informative for you. If you have any questions or feedback, feel free to leave a comment below. And don't forget to share this article with your friends who might be interested in downloading Instagram stories. Thanks for reading!

      -

      FAQs

      -
        -
• Is Instagram Story Saver APK safe to use?

  Yes, Instagram Story Saver APK is safe to use, as long as you download it from a reliable source and enable unknown sources on your device. The app does not store your password or personal information, nor does it contain any viruses or malware.

• Does Instagram Story Saver APK work on iOS devices?

  No, Instagram Story Saver APK only works on Android devices. If you have an iOS device, you will need to use other methods or apps to download Instagram stories.

• Can I download live videos or reels with Instagram Story Saver APK?

  No, Instagram Story Saver APK only supports downloading stories, not live videos or reels. If you want to download live videos or reels, you will need to use other apps or tools.

• Will I get banned from Instagram for using Instagram Story Saver APK?

  No, you will not get banned from Instagram for using Instagram Story Saver APK, as long as you use it responsibly and ethically. However, we advise you not to abuse this app or violate any terms of service or privacy policies of Instagram.

• Can I edit or modify the downloaded files with Instagram Story Saver APK?

  No, Instagram Story Saver APK only allows you to download files, not edit or modify them. If you want to edit or modify the downloaded files, you will need to use other apps or software.

        -
      -
To recap the key points of this guide:

• Instagram Story Saver APK is a free photo and video downloader app for Instagram that lets you save any story to your device, without notifying the original poster.
• You can download the app from one of the links we provided, enable unknown sources on your device, install the APK file, and log in with your Instagram account.
• You can browse the stories you want to save, tap on the download icon, choose the format and quality of the file, and view and share the saved file from your gallery.
• You can save stories in high quality and different formats, such as photos, videos, GIFs, and animations. You can also save stories from private accounts and highlights, as long as you follow them. And you can access the saved stories offline, anytime you want.
• You can benefit from using this app by saving stories without notifying the original poster, saving stories in high quality and different formats, saving stories from private accounts and highlights, and saving stories offline and accessing them anytime.
• You can also check out the FAQs section for some common questions and answers about this app.

    -

    Thank you for your attention and feedback. Have a nice day!

    -
    -
    \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Plague Inc APK Full Unlocked Portugues The Game that Will Make You Think Twice About Your Actions.md b/spaces/congsaPfin/Manga-OCR/logs/Plague Inc APK Full Unlocked Portugues The Game that Will Make You Think Twice About Your Actions.md deleted file mode 100644 index a6573f45c9e123c3052505fb804b6982c3960c88..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Plague Inc APK Full Unlocked Portugues The Game that Will Make You Think Twice About Your Actions.md +++ /dev/null @@ -1,101 +0,0 @@ - -

    Download Plague Inc APK Full Unlocked Portugues: A Strategy Game That Challenges You to Wipe Out Humanity

    -

    Have you ever wondered what it would be like to be on the other side of a pandemic? To be the one who designs and unleashes a deadly virus that can wipe out the entire human race? If you are looking for a game that lets you do just that, then you should try Plague Inc, a strategy-simulation game that is both fun and educational.

    -

    What is Plague Inc?

    -

    Plague Inc is a game developed by Ndemic Creations, a UK-based independent studio. It was released in 2012 for iOS and Android devices, and later for Windows Phone, PC, and consoles. The game has been downloaded over 160 million times as of May 2021, and has received positive reviews from critics and players alike.

    -

    download plague inc apk full unlocked portugues


    Download Ziphttps://urlca.com/2uO6DK



    -

    A realistic simulation game that lets you create and evolve a deadly pathogen

    -

    In Plague Inc, you play as a pathogen that has infected patient zero. Your goal is to spread your infection across the world and kill everyone before they can find a cure. You can choose from different types of pathogens, such as bacteria, virus, fungus, parasite, prion, nano-virus, bio-weapon, neurax worm, necroa virus, or simian flu. Each pathogen has its own characteristics and abilities that affect how it behaves and evolves.

    -

    You can also customize your pathogen by spending DNA points that you earn by infecting people. You can use these points to buy new symptoms, transmissions, or abilities that make your pathogen more infectious, lethal, or resistant. For example, you can make your pathogen airborne, waterborne, or insect-borne; you can make it cause coughing, vomiting, or organ failure; you can make it survive in hot, cold, or humid climates; and so on.

    -

    A game with different modes, pathogens, scenarios, and challenges

    -

    Plague Inc is not just a simple game of infecting and killing people. It also offers different modes of gameplay that challenge your strategy and creativity. For example, you can play in normal mode, where you have to infect everyone before they develop a cure; or in mega-brutal mode, where the world is more aware and prepared for your plague. You can also play in custom scenarios created by other players or by yourself using the scenario creator tool.

    -

    Moreover, the game features different pathogens that have unique gameplay mechanics. For instance, the neurax worm can manipulate human behavior and make them worship or obey you; the necroa virus can turn people into zombies that can attack or infect others; the simian flu can infect apes and make them intelligent and rebellious; and so on. Each pathogen requires a different approach and strategy to succeed.

    -

    A game that is inspired by real-world data and events

    -

One of the most impressive aspects of Plague Inc is its realism and accuracy. The game uses an epidemic model with a complex and realistic set of variables to simulate the spread and severity of your plague.
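Plague Inc's actual model is proprietary, so the snippet below is only an illustrative sketch of the kind of compartmental (SIR-style) simulation such games approximate; the population size, infection rate, and removal rate are made-up example values, not figures from the game.

```python
# Toy SIR-style simulation: S = susceptible, I = infected, R = removed (dead or cured).
# All parameter values below are invented for illustration only.
def simulate(population=7_800_000_000, beta=0.35, gamma=0.02, days=365):
    s, i, r = population - 1.0, 1.0, 0.0  # start from a single "patient zero"
    for _ in range(days):
        new_infections = beta * i * s / population  # spread depends on contact between S and I
        new_removals = gamma * i                    # lethality/cure removes infected hosts
        s -= new_infections
        i += new_infections - new_removals
        r += new_removals
    return round(s), round(i), round(r)

print(simulate())  # try a higher beta or a lower gamma and compare the outcome
```

In this toy model, raising beta (how easily the disease spreads) or lowering gamma (how quickly hosts are removed) mirrors the in-game trade-off between infectivity and lethality discussed in the tips section below.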

    The game also uses real-world data and events to make the game more realistic and relevant. For example, the game includes real countries, cities, populations, climates, economies, and governments. The game also reflects the current state of the world, such as political tensions, social unrest, environmental issues, and global events. The game even updates its data and scenarios based on the actual COVID-19 pandemic and its impact on the world.

    -

    Why download Plague Inc APK full unlocked portugues?

    -

    Plague Inc is a free-to-play game that you can download from the Google Play Store or the App Store. However, the free version of the game has some limitations and restrictions that can affect your gameplay experience. For example, the free version only allows you to play with the bacteria pathogen and in normal mode. The other pathogens and modes are locked and require you to pay with real money or watch ads to unlock them. The free version also has ads that can interrupt your game.

    -

    If you want to enjoy the full features and content of the game without paying or watching ads, you can download Plague Inc APK full unlocked portugues. This is a modified version of the game that gives you access to all the pathogens, modes, scenarios, and challenges for free. You can also play the game in Portuguese language, which is not available in the official version. Moreover, you can get the latest updates and bug fixes for the game without waiting for the official release.

    -

    download plague inc mod apk portugues
    -plague inc apk full unlocked free download portugues
    -plague inc premium apk portugues
    -plague inc apk full version portugues
    -plague inc apk mod tudo desbloqueado portugues
    -plague inc apk download gratis portugues
    -plague inc apk atualizado portugues
    -plague inc apk completo portugues
    -plague inc apk hack portugues
    -plague inc apk cracked portugues
    -download plague inc full unlocked apk english
    -plague inc full unlocked apk free download english
    -plague inc mod apk english
    -plague inc premium apk english
    -plague inc full version apk english
    -plague inc unlocked apk english
    -plague inc hacked apk english
    -plague inc cracked apk english
    -plague inc free download apk english
    -plague inc latest version apk english
    -download plague inc full unlocked mod apk
    -plague inc mod apk free download full unlocked
    -plague inc mod apk premium unlocked
    -plague inc mod apk all unlocked
    -plague inc mod apk everything unlocked
    -plague inc mod apk unlimited dna unlocked
    -plague inc mod apk scenarios unlocked
    -plague inc mod apk cheats unlocked
    -plague inc mod apk expansions unlocked
    -plague inc mod apk special plagues unlocked
    -download plague inc full version free android
    -plague inc full version free download android
    -plague inc full version free android
    -plague inc full version android
    -plague inc android download free full version
    -plague inc android free full version
    -plague inc android full version
    -plague inc android hack full version
    -plague inc android crack full version
    -plague inc android mod full version

    -

    How to download Plague Inc APK full unlocked portugues?

    -

    Downloading Plague Inc APK full unlocked portugues is not difficult, but you need to follow some steps to ensure that you get a safe and working file. Here are the steps that you need to follow:

    -

    Step 1: Find a reliable source for the APK file

    -

    The first step is to find a website that offers Plague Inc APK full unlocked portugues for download. There are many websites that claim to provide this file, but not all of them are trustworthy or legitimate. Some of them may contain viruses, malware, or spyware that can harm your device or steal your personal information. Some of them may also provide fake or outdated files that do not work or cause errors.

    -

To avoid these risks, you need to find a reliable source for the APK file. One of the best sources that we recommend is APKdone.com. This is a website that provides high-quality and safe APK files for various games and apps. You can download Plague Inc APK full unlocked portugues from this website without any worries. (If the site lists a checksum for the file, the sketch below shows one way to verify it before installing.)
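The sketch below is a generic way to compute a file's SHA-256 checksum so you can compare it against whatever value the download site publishes; it is not specific to Plague Inc or APKdone, and the file name plague-inc.apk is a placeholder.

```python
import hashlib

def sha256_of(path: str) -> str:
    # Read the file in chunks so large APKs do not need to fit in memory.
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

# "plague-inc.apk" is a placeholder file name; compare the printed value
# against the checksum published by the site you downloaded from, if any.
print(sha256_of("plague-inc.apk"))
```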

    -

    Step 2: Enable unknown sources on your device settings

    -

    The next step is to enable unknown sources on your device settings. This is a security feature that prevents you from installing apps from sources other than the official app stores. Since you are downloading an APK file from a third-party website, you need to enable this option to allow your device to install it.

    -

    To enable unknown sources on your device settings, follow these steps:

- Go to your device settings and look for security or privacy options.
- Find the option that says unknown sources or allow installation from unknown sources and turn it on.
- Confirm your choice by tapping OK or Yes.

    Step 3: Download and install the APK file

    -

    The third step is to download and install the APK file on your device. To do this, follow these steps:

- Go to APKdone.com and search for Plague Inc APK full unlocked portugues.
- Tap on the download button and wait for the file to be downloaded.
- Once the download is complete, go to your device's file manager and locate the downloaded file.
- Tap on the file and follow the instructions on the screen to install it.
- Wait for the installation to finish.

    Step 4: Launch the game and enjoy

    -

    The final step is to launch the game and enjoy it. To do this, follow these steps:

- Go to your device's app drawer and look for the Plague Inc icon.
- Tap on it and wait for the game to load.
- Choose your language preference (Portuguese) and agree to the terms and conditions.
- Start playing with any pathogen, mode, scenario, or challenge that you want.

    Tips and tricks for playing Plague Inc

    -

    Plague Inc is a game that requires strategy and planning. It is not easy to infect and kill everyone in the world without being detected or cured. To help you achieve your goal, here are some tips and tricks that you can use:

    -

    Infect before killing

    -

One of the most important tips for playing Plague Inc is to infect as many people as possible before making your pathogen lethal. This way, you can ensure that your pathogen spreads to every country and region before they notice or react. If you make your pathogen too lethal too soon, you may kill your hosts before they can infect others, or you may alert the world and trigger faster cure research.

    -

    To infect more people, you should focus on buying transmission traits that increase your pathogen's infectivity and adaptability. You should also avoid buying symptoms that are too noticeable or deadly, such as hemorrhagic shock, total organ failure, or insanity. You can also devolve any symptoms that mutate automatically to save DNA points and avoid detection.

    -

    Start in an isolated country

    -

    Another tip for playing Plague Inc is to start your infection in an isolated country that has few connections to other countries. This way, you can delay the spread of your pathogen to other regions and avoid early detection or response. Some of the best countries to start in are Greenland, Iceland, Madagascar, New Zealand, or Papua New Guinea. These countries have low population density, limited air or sea travel, and cold or hot climates that can slow down your pathogen.

    -

    To spread from these countries, you should invest in transmission traits that can overcome their isolation and climate. For example, you can buy cold resistance or heat resistance to survive in extreme temperatures; you can buy water transmission or air transmission to travel by boat or plane; you can buy livestock transmission or bird transmission to infect animals that can cross borders; and so on.

    -

    Watch the news

    -

    A third tip for playing Plague Inc is to watch the news ticker at the bottom of the screen. This ticker shows various news headlines and events that are happening around the world. These news can give you valuable information and clues about the state of the world and how it reacts to your plague. For example, you can learn about which countries are infected or not infected; which countries are developing a cure or implementing measures; which countries are experiencing riots, wars, or disasters; and so on.

    -

    By watching the news, you can adjust your strategy accordingly and take advantage of opportunities or avoid threats. For example, you can target countries that are vulnerable or distracted by other events; you can evolve traits that counteract the cure or the measures; you can exploit events that increase your pathogen's spread or lethality; and so on.

    -

    Research countries

    -

    A fourth tip for playing Plague Inc is to research countries and learn about their characteristics and statistics. You can do this by tapping on a country on the map and viewing its information panel. This panel shows various data about the country, such as its population, climate, wealth, health care, government, culture, and more. These data can help you understand how your pathogen affects and is affected by each country.

    -

    By researching countries, you can optimize your pathogen's evolution and transmission for each region. For example, you can evolve traits that match the climate or wealth of a country; you can buy transmissions that suit the culture or government of a country; you can cause symptoms that exploit the health care or population of a country; and so on.

    -

    Conclusion

    -

    Plague Inc is a strategy-simulation game that challenges you to wipe out humanity with a deadly pathogen. It is a realistic and engaging game that lets you create and evolve your own plague with different types, modes, scenarios, and challenges. It is also a game that is inspired by real-world data and events that make it more relevant and educational.

    -

    If you want to enjoy the full features and content of the game without paying or watching ads, you can download Plague Inc APK full unlocked portugues. This is a modified version of the game that gives you access to all the pathogens, modes, scenarios, and challenges for free. You can also play the game in Portuguese language and get the latest updates and bug fixes.

    -

    To download Plague Inc APK full unlocked portugues, you need to find a reliable source for the APK file, enable unknown sources on your device settings, download and install the APK file, and launch the game and enjoy. You can also use some tips and tricks for playing Plague Inc, such as infecting before killing, starting in an isolated country, watching the news, and researching countries.

    -

    Plague Inc is a game that will test your strategy and creativity as you try to end the world with a plague. It is also a game that will teach you about epidemiology and global health issues as you learn about how diseases spread and how people react. It is a game that is both fun and educational.

    -

    FAQs

    -

    What are the best pathogens to use in Plague Inc?

    -


    The best pathogens to use in Plague Inc depend on your preference and strategy. However, some of the most popular and powerful pathogens are the virus, the bio-weapon, the neurax worm, and the simian flu. These pathogens have high mutation rates, high lethality, or unique abilities that make them hard to cure or control.

    -

    How to beat Plague Inc on mega-brutal difficulty?

    -

    Beating Plague Inc on mega-brutal difficulty is very challenging and requires a lot of skill and luck. However, there are some general tips that can help you succeed. For example, you should choose a pathogen that has a high mutation rate or a special ability that can counter the cure; you should start in a country that has poor health care or low awareness; you should evolve your pathogen to be highly infectious and resistant; you should avoid symptoms that are too noticeable or deadly until the end; and you should monitor the world events and react accordingly.

    -

    How to create a custom scenario in Plague Inc?

    -

    Creating a custom scenario in Plague Inc is a fun and creative way to make your own plague and challenge other players. To create a custom scenario, you need to use the scenario creator tool that is available as an in-app purchase or as a separate app. With this tool, you can customize your pathogen, your world, your events, and your win conditions. You can also share your scenario with other players and play their scenarios as well.

    -

    Is Plague Inc based on real science?

    -

    Plague Inc is based on real science and data, but it is not a realistic or accurate representation of how diseases work or how people respond. The game simplifies and exaggerates some aspects of epidemiology and global health for the sake of gameplay and entertainment. The game also does not account for some factors that may affect the spread and severity of a disease, such as human behavior, social dynamics, ethical issues, or political decisions. Therefore, the game should not be taken as a reliable source of information or education on these topics.

    -

    Is Plague Inc banned in China?

    -

    Plague Inc was banned in China in February 2020, amid the COVID-19 outbreak. The game was removed from the Chinese app stores by the authorities, who claimed that it contained illegal content that was not suitable for the Chinese market. The developers of the game expressed their regret and confusion over this decision, and said that they were working with the Chinese officials to resolve the issue. However, as of June 2021, the game remains banned in China.

    -
    -
    \ No newline at end of file diff --git a/spaces/contluForse/HuggingGPT/assets/Alice Greenfingers 2 Free Download No Time Limit.rar.md b/spaces/contluForse/HuggingGPT/assets/Alice Greenfingers 2 Free Download No Time Limit.rar.md deleted file mode 100644 index 61610fd1eb192ac2bbdcd842b0fa0b1c605d17be..0000000000000000000000000000000000000000 --- a/spaces/contluForse/HuggingGPT/assets/Alice Greenfingers 2 Free Download No Time Limit.rar.md +++ /dev/null @@ -1,50 +0,0 @@ -

    alice greenfingers 2 free download no time limit.rar


    Download File ✵✵✵ https://ssurll.com/2uzz3E



    -
    -2.5 Mb - - I guess I'll try extracting the RAR file and see what happens - - can you install it that way? - - got to a desktop with a file explorer and it's the only file. extracted the RAR file, but it has like 150.RAR and.ISO files in it. - - no, it's not an archive I created, the.rar file I'm trying to install is a.iso - - d3n4ri4, use the software center... - - "Install" is the correct command to run - - would extracting the ISO file fix it or do I have to get a program that can extract.iso files? - - d3n4ri4, if you type apt-cache policy linux-generic in a terminal, what does it say? - - I have been trying to make a custom ISO file with usb bootable but I couldn't find any tutorial. How to make an ISO with python or with bash? - - I am using Ubuntu 16.10 and I already have usb-creator-gtk installed - - anyone here running 16.10 can help me figure out why i cant connect to my wifi network? - - welovfree: - - orange_, details required - - cfhowlett: i have tried it with both my android device and a computer that is connected to the same network as my phone and it wont connect - - welovfree: if you need special settings for the iso (iso file encryption etc) see #ubuntu-server - - There isn't an Ubuntu 16.10 release? - - OrangeCat, 16.04.1 - - Or...16.10 has been released? - - OrangeCat: thats not the plan for this release. you will need to wait for the 17.04 release. - - Whoops. - - welovfree, 16.10 has been released but no ISO yet. not until 17.04. - - thank you OrangeCat 4fefd39f24
    -
    -
    -

    diff --git a/spaces/contluForse/HuggingGPT/assets/Download A Fortaleza 2 Dublado [BEST].md b/spaces/contluForse/HuggingGPT/assets/Download A Fortaleza 2 Dublado [BEST].md deleted file mode 100644 index ef1ddb56f3306a6f9384b5e86b867c2b1162fb0b..0000000000000000000000000000000000000000 --- a/spaces/contluForse/HuggingGPT/assets/Download A Fortaleza 2 Dublado [BEST].md +++ /dev/null @@ -1,6 +0,0 @@ -

    download a fortaleza 2 dublado


    DOWNLOADhttps://ssurll.com/2uzxNo



    - -WORK Download A Fortaleza 2 Dublado. 2021.01.31 07:47. 関連記事. |LINK| Dil Juunglee Hd 1080p Blu-ray Download Torrentl. 2021.01.31 07:54 · Windows ... 1fdad05405
    -
    -
    -

    diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/mmcv/runner/hooks/__init__.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/mmcv/runner/hooks/__init__.py deleted file mode 100644 index 915af28cefab14a14c1188ed861161080fd138a3..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/mmcv/runner/hooks/__init__.py +++ /dev/null @@ -1,29 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .checkpoint import CheckpointHook -from .closure import ClosureHook -from .ema import EMAHook -from .evaluation import DistEvalHook, EvalHook -from .hook import HOOKS, Hook -from .iter_timer import IterTimerHook -from .logger import (DvcliveLoggerHook, LoggerHook, MlflowLoggerHook, - NeptuneLoggerHook, PaviLoggerHook, TensorboardLoggerHook, - TextLoggerHook, WandbLoggerHook) -from .lr_updater import LrUpdaterHook -from .memory import EmptyCacheHook -from .momentum_updater import MomentumUpdaterHook -from .optimizer import (Fp16OptimizerHook, GradientCumulativeFp16OptimizerHook, - GradientCumulativeOptimizerHook, OptimizerHook) -from .profiler import ProfilerHook -from .sampler_seed import DistSamplerSeedHook -from .sync_buffer import SyncBuffersHook - -__all__ = [ - 'HOOKS', 'Hook', 'CheckpointHook', 'ClosureHook', 'LrUpdaterHook', - 'OptimizerHook', 'Fp16OptimizerHook', 'IterTimerHook', - 'DistSamplerSeedHook', 'EmptyCacheHook', 'LoggerHook', 'MlflowLoggerHook', - 'PaviLoggerHook', 'TextLoggerHook', 'TensorboardLoggerHook', - 'NeptuneLoggerHook', 'WandbLoggerHook', 'DvcliveLoggerHook', - 'MomentumUpdaterHook', 'SyncBuffersHook', 'EMAHook', 'EvalHook', - 'DistEvalHook', 'ProfilerHook', 'GradientCumulativeOptimizerHook', - 'GradientCumulativeFp16OptimizerHook' -] diff --git a/spaces/crystalai/EleutherAI-gpt-j-6b/README.md b/spaces/crystalai/EleutherAI-gpt-j-6b/README.md deleted file mode 100644 index 84092e4a1edd441d2597a1cffb985128a8895613..0000000000000000000000000000000000000000 --- a/spaces/crystalai/EleutherAI-gpt-j-6b/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: EleutherAI Gpt J 6b -emoji: 👁 -colorFrom: blue -colorTo: gray -sdk: gradio -sdk_version: 3.36.1 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/curt-park/segment-anything-with-clip/app.py b/spaces/curt-park/segment-anything-with-clip/app.py deleted file mode 100644 index 3d90346841f4bda4d2c6808315e6cb1659e20887..0000000000000000000000000000000000000000 --- a/spaces/curt-park/segment-anything-with-clip/app.py +++ /dev/null @@ -1,232 +0,0 @@ -import os -import urllib -from functools import lru_cache -from random import randint -from typing import Any, Callable, Dict, List, Tuple - -import clip -import cv2 -import gradio as gr -import numpy as np -import PIL -import torch -from segment_anything import SamAutomaticMaskGenerator, sam_model_registry - -CHECKPOINT_PATH = os.path.join(os.path.expanduser("~"), ".cache", "SAM") -CHECKPOINT_NAME = "sam_vit_h_4b8939.pth" -CHECKPOINT_URL = "https://dl.fbaipublicfiles.com/segment_anything/sam_vit_h_4b8939.pth" -MODEL_TYPE = "default" -MAX_WIDTH = MAX_HEIGHT = 1024 -TOP_K_OBJ = 100 -THRESHOLD = 0.85 -device = torch.device("cuda" if torch.cuda.is_available() else "cpu") - - -@lru_cache -def load_mask_generator() -> SamAutomaticMaskGenerator: - if not os.path.exists(CHECKPOINT_PATH): - os.makedirs(CHECKPOINT_PATH) - checkpoint = 
os.path.join(CHECKPOINT_PATH, CHECKPOINT_NAME) - if not os.path.exists(checkpoint): - urllib.request.urlretrieve(CHECKPOINT_URL, checkpoint) - sam = sam_model_registry[MODEL_TYPE](checkpoint=checkpoint).to(device) - mask_generator = SamAutomaticMaskGenerator(sam) - return mask_generator - - -@lru_cache -def load_clip( - name: str = "ViT-B/32", -) -> Tuple[torch.nn.Module, Callable[[PIL.Image.Image], torch.Tensor]]: - model, preprocess = clip.load(name, device=device) - return model.to(device), preprocess - - -def adjust_image_size(image: np.ndarray) -> np.ndarray: - height, width = image.shape[:2] - if height > width: - if height > MAX_HEIGHT: - height, width = MAX_HEIGHT, int(MAX_HEIGHT / height * width) - else: - if width > MAX_WIDTH: - height, width = int(MAX_WIDTH / width * height), MAX_WIDTH - image = cv2.resize(image, (width, height)) - return image - - -@torch.no_grad() -def get_score(crop: PIL.Image.Image, texts: List[str]) -> torch.Tensor: - model, preprocess = load_clip() - preprocessed = preprocess(crop).unsqueeze(0).to(device) - tokens = clip.tokenize(texts).to(device) - logits_per_image, _ = model(preprocessed, tokens) - similarity = logits_per_image.softmax(-1).cpu() - return similarity[0, 0] - - -def crop_image(image: np.ndarray, mask: Dict[str, Any]) -> PIL.Image.Image: - x, y, w, h = mask["bbox"] - masked = image * np.expand_dims(mask["segmentation"], -1) - crop = masked[y : y + h, x : x + w] - if h > w: - top, bottom, left, right = 0, 0, (h - w) // 2, (h - w) // 2 - else: - top, bottom, left, right = (w - h) // 2, (w - h) // 2, 0, 0 - # padding - crop = cv2.copyMakeBorder( - crop, - top, - bottom, - left, - right, - cv2.BORDER_CONSTANT, - value=(0, 0, 0), - ) - crop = PIL.Image.fromarray(crop) - return crop - - -def get_texts(query: str) -> List[str]: - return [f"a picture of {query}", "a picture of background"] - - -def filter_masks( - image: np.ndarray, - masks: List[Dict[str, Any]], - predicted_iou_threshold: float, - stability_score_threshold: float, - query: str, - clip_threshold: float, -) -> List[Dict[str, Any]]: - filtered_masks: List[Dict[str, Any]] = [] - - for mask in sorted(masks, key=lambda mask: mask["area"])[-TOP_K_OBJ:]: - if ( - mask["predicted_iou"] < predicted_iou_threshold - or mask["stability_score"] < stability_score_threshold - or image.shape[:2] != mask["segmentation"].shape[:2] - or query - and get_score(crop_image(image, mask), get_texts(query)) < clip_threshold - ): - continue - - filtered_masks.append(mask) - - return filtered_masks - - -def draw_masks( - image: np.ndarray, masks: List[np.ndarray], alpha: float = 0.7 -) -> np.ndarray: - for mask in masks: - color = [randint(127, 255) for _ in range(3)] - - # draw mask overlay - colored_mask = np.expand_dims(mask["segmentation"], 0).repeat(3, axis=0) - colored_mask = np.moveaxis(colored_mask, 0, -1) - masked = np.ma.MaskedArray(image, mask=colored_mask, fill_value=color) - image_overlay = masked.filled() - image = cv2.addWeighted(image, 1 - alpha, image_overlay, alpha, 0) - - # draw contour - contours, _ = cv2.findContours( - np.uint8(mask["segmentation"]), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE - ) - cv2.drawContours(image, contours, -1, (0, 0, 255), 2) - return image - - -def segment( - predicted_iou_threshold: float, - stability_score_threshold: float, - clip_threshold: float, - image_path: str, - query: str, -) -> PIL.ImageFile.ImageFile: - mask_generator = load_mask_generator() - image = cv2.imread(image_path, cv2.IMREAD_COLOR) - image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB) - - # 
reduce the size to save gpu memory - image = adjust_image_size(image) - masks = mask_generator.generate(image) - masks = filter_masks( - image, - masks, - predicted_iou_threshold, - stability_score_threshold, - query, - clip_threshold, - ) - image = draw_masks(image, masks) - image = PIL.Image.fromarray(image) - return image - - -demo = gr.Interface( - fn=segment, - inputs=[ - gr.Slider(0, 1, value=0.9, label="predicted_iou_threshold"), - gr.Slider(0, 1, value=0.8, label="stability_score_threshold"), - gr.Slider(0, 1, value=0.85, label="clip_threshold"), - gr.Image(type="filepath"), - "text", - ], - outputs="image", - allow_flagging="never", - title="Segment Anything with CLIP", - examples=[ - [ - 0.9, - 0.8, - 0.99, - os.path.join(os.path.dirname(__file__), "examples/dog.jpg"), - "dog", - ], - [ - 0.9, - 0.8, - 0.75, - os.path.join(os.path.dirname(__file__), "examples/city.jpg"), - "building", - ], - [ - 0.9, - 0.8, - 0.998, - os.path.join(os.path.dirname(__file__), "examples/food.jpg"), - "strawberry", - ], - [ - 0.9, - 0.8, - 0.75, - os.path.join(os.path.dirname(__file__), "examples/horse.jpg"), - "horse", - ], - [ - 0.9, - 0.8, - 0.99, - os.path.join(os.path.dirname(__file__), "examples/bears.jpg"), - "bear", - ], - [ - 0.9, - 0.8, - 0.99, - os.path.join(os.path.dirname(__file__), "examples/cats.jpg"), - "cat", - ], - [ - 0.9, - 0.8, - 0.99, - os.path.join(os.path.dirname(__file__), "examples/fish.jpg"), - "fish", - ], - ], -) - -if __name__ == "__main__": - demo.launch() diff --git a/spaces/cymic/Waifu_Diffusion_Webui/modules/codeformer/vqgan_arch.py b/spaces/cymic/Waifu_Diffusion_Webui/modules/codeformer/vqgan_arch.py deleted file mode 100644 index c06c590ca611f46404d1756b1652adc4c7397532..0000000000000000000000000000000000000000 --- a/spaces/cymic/Waifu_Diffusion_Webui/modules/codeformer/vqgan_arch.py +++ /dev/null @@ -1,437 +0,0 @@ -# this file is copied from CodeFormer repository. 
Please see comment in modules/codeformer_model.py - -''' -VQGAN code, adapted from the original created by the Unleashing Transformers authors: -https://github.com/samb-t/unleashing-transformers/blob/master/models/vqgan.py - -''' -import numpy as np -import torch -import torch.nn as nn -import torch.nn.functional as F -import copy -from basicsr.utils import get_root_logger -from basicsr.utils.registry import ARCH_REGISTRY - -def normalize(in_channels): - return torch.nn.GroupNorm(num_groups=32, num_channels=in_channels, eps=1e-6, affine=True) - - -@torch.jit.script -def swish(x): - return x*torch.sigmoid(x) - - -# Define VQVAE classes -class VectorQuantizer(nn.Module): - def __init__(self, codebook_size, emb_dim, beta): - super(VectorQuantizer, self).__init__() - self.codebook_size = codebook_size # number of embeddings - self.emb_dim = emb_dim # dimension of embedding - self.beta = beta # commitment cost used in loss term, beta * ||z_e(x)-sg[e]||^2 - self.embedding = nn.Embedding(self.codebook_size, self.emb_dim) - self.embedding.weight.data.uniform_(-1.0 / self.codebook_size, 1.0 / self.codebook_size) - - def forward(self, z): - # reshape z -> (batch, height, width, channel) and flatten - z = z.permute(0, 2, 3, 1).contiguous() - z_flattened = z.view(-1, self.emb_dim) - - # distances from z to embeddings e_j (z - e)^2 = z^2 + e^2 - 2 e * z - d = (z_flattened ** 2).sum(dim=1, keepdim=True) + (self.embedding.weight**2).sum(1) - \ - 2 * torch.matmul(z_flattened, self.embedding.weight.t()) - - mean_distance = torch.mean(d) - # find closest encodings - # min_encoding_indices = torch.argmin(d, dim=1).unsqueeze(1) - min_encoding_scores, min_encoding_indices = torch.topk(d, 1, dim=1, largest=False) - # [0-1], higher score, higher confidence - min_encoding_scores = torch.exp(-min_encoding_scores/10) - - min_encodings = torch.zeros(min_encoding_indices.shape[0], self.codebook_size).to(z) - min_encodings.scatter_(1, min_encoding_indices, 1) - - # get quantized latent vectors - z_q = torch.matmul(min_encodings, self.embedding.weight).view(z.shape) - # compute loss for embedding - loss = torch.mean((z_q.detach()-z)**2) + self.beta * torch.mean((z_q - z.detach()) ** 2) - # preserve gradients - z_q = z + (z_q - z).detach() - - # perplexity - e_mean = torch.mean(min_encodings, dim=0) - perplexity = torch.exp(-torch.sum(e_mean * torch.log(e_mean + 1e-10))) - # reshape back to match original input shape - z_q = z_q.permute(0, 3, 1, 2).contiguous() - - return z_q, loss, { - "perplexity": perplexity, - "min_encodings": min_encodings, - "min_encoding_indices": min_encoding_indices, - "min_encoding_scores": min_encoding_scores, - "mean_distance": mean_distance - } - - def get_codebook_feat(self, indices, shape): - # input indices: batch*token_num -> (batch*token_num)*1 - # shape: batch, height, width, channel - indices = indices.view(-1,1) - min_encodings = torch.zeros(indices.shape[0], self.codebook_size).to(indices) - min_encodings.scatter_(1, indices, 1) - # get quantized latent vectors - z_q = torch.matmul(min_encodings.float(), self.embedding.weight) - - if shape is not None: # reshape back to match original input shape - z_q = z_q.view(shape).permute(0, 3, 1, 2).contiguous() - - return z_q - - -class GumbelQuantizer(nn.Module): - def __init__(self, codebook_size, emb_dim, num_hiddens, straight_through=False, kl_weight=5e-4, temp_init=1.0): - super().__init__() - self.codebook_size = codebook_size # number of embeddings - self.emb_dim = emb_dim # dimension of embedding - self.straight_through = 
straight_through - self.temperature = temp_init - self.kl_weight = kl_weight - self.proj = nn.Conv2d(num_hiddens, codebook_size, 1) # projects last encoder layer to quantized logits - self.embed = nn.Embedding(codebook_size, emb_dim) - - def forward(self, z): - hard = self.straight_through if self.training else True - - logits = self.proj(z) - - soft_one_hot = F.gumbel_softmax(logits, tau=self.temperature, dim=1, hard=hard) - - z_q = torch.einsum("b n h w, n d -> b d h w", soft_one_hot, self.embed.weight) - - # + kl divergence to the prior loss - qy = F.softmax(logits, dim=1) - diff = self.kl_weight * torch.sum(qy * torch.log(qy * self.codebook_size + 1e-10), dim=1).mean() - min_encoding_indices = soft_one_hot.argmax(dim=1) - - return z_q, diff, { - "min_encoding_indices": min_encoding_indices - } - - -class Downsample(nn.Module): - def __init__(self, in_channels): - super().__init__() - self.conv = torch.nn.Conv2d(in_channels, in_channels, kernel_size=3, stride=2, padding=0) - - def forward(self, x): - pad = (0, 1, 0, 1) - x = torch.nn.functional.pad(x, pad, mode="constant", value=0) - x = self.conv(x) - return x - - -class Upsample(nn.Module): - def __init__(self, in_channels): - super().__init__() - self.conv = nn.Conv2d(in_channels, in_channels, kernel_size=3, stride=1, padding=1) - - def forward(self, x): - x = F.interpolate(x, scale_factor=2.0, mode="nearest") - x = self.conv(x) - - return x - - -class ResBlock(nn.Module): - def __init__(self, in_channels, out_channels=None): - super(ResBlock, self).__init__() - self.in_channels = in_channels - self.out_channels = in_channels if out_channels is None else out_channels - self.norm1 = normalize(in_channels) - self.conv1 = nn.Conv2d(in_channels, out_channels, kernel_size=3, stride=1, padding=1) - self.norm2 = normalize(out_channels) - self.conv2 = nn.Conv2d(out_channels, out_channels, kernel_size=3, stride=1, padding=1) - if self.in_channels != self.out_channels: - self.conv_out = nn.Conv2d(in_channels, out_channels, kernel_size=1, stride=1, padding=0) - - def forward(self, x_in): - x = x_in - x = self.norm1(x) - x = swish(x) - x = self.conv1(x) - x = self.norm2(x) - x = swish(x) - x = self.conv2(x) - if self.in_channels != self.out_channels: - x_in = self.conv_out(x_in) - - return x + x_in - - -class AttnBlock(nn.Module): - def __init__(self, in_channels): - super().__init__() - self.in_channels = in_channels - - self.norm = normalize(in_channels) - self.q = torch.nn.Conv2d( - in_channels, - in_channels, - kernel_size=1, - stride=1, - padding=0 - ) - self.k = torch.nn.Conv2d( - in_channels, - in_channels, - kernel_size=1, - stride=1, - padding=0 - ) - self.v = torch.nn.Conv2d( - in_channels, - in_channels, - kernel_size=1, - stride=1, - padding=0 - ) - self.proj_out = torch.nn.Conv2d( - in_channels, - in_channels, - kernel_size=1, - stride=1, - padding=0 - ) - - def forward(self, x): - h_ = x - h_ = self.norm(h_) - q = self.q(h_) - k = self.k(h_) - v = self.v(h_) - - # compute attention - b, c, h, w = q.shape - q = q.reshape(b, c, h*w) - q = q.permute(0, 2, 1) - k = k.reshape(b, c, h*w) - w_ = torch.bmm(q, k) - w_ = w_ * (int(c)**(-0.5)) - w_ = F.softmax(w_, dim=2) - - # attend to values - v = v.reshape(b, c, h*w) - w_ = w_.permute(0, 2, 1) - h_ = torch.bmm(v, w_) - h_ = h_.reshape(b, c, h, w) - - h_ = self.proj_out(h_) - - return x+h_ - - -class Encoder(nn.Module): - def __init__(self, in_channels, nf, emb_dim, ch_mult, num_res_blocks, resolution, attn_resolutions): - super().__init__() - self.nf = nf - self.num_resolutions = 
len(ch_mult) - self.num_res_blocks = num_res_blocks - self.resolution = resolution - self.attn_resolutions = attn_resolutions - - curr_res = self.resolution - in_ch_mult = (1,)+tuple(ch_mult) - - blocks = [] - # initial convultion - blocks.append(nn.Conv2d(in_channels, nf, kernel_size=3, stride=1, padding=1)) - - # residual and downsampling blocks, with attention on smaller res (16x16) - for i in range(self.num_resolutions): - block_in_ch = nf * in_ch_mult[i] - block_out_ch = nf * ch_mult[i] - for _ in range(self.num_res_blocks): - blocks.append(ResBlock(block_in_ch, block_out_ch)) - block_in_ch = block_out_ch - if curr_res in attn_resolutions: - blocks.append(AttnBlock(block_in_ch)) - - if i != self.num_resolutions - 1: - blocks.append(Downsample(block_in_ch)) - curr_res = curr_res // 2 - - # non-local attention block - blocks.append(ResBlock(block_in_ch, block_in_ch)) - blocks.append(AttnBlock(block_in_ch)) - blocks.append(ResBlock(block_in_ch, block_in_ch)) - - # normalise and convert to latent size - blocks.append(normalize(block_in_ch)) - blocks.append(nn.Conv2d(block_in_ch, emb_dim, kernel_size=3, stride=1, padding=1)) - self.blocks = nn.ModuleList(blocks) - - def forward(self, x): - for block in self.blocks: - x = block(x) - - return x - - -class Generator(nn.Module): - def __init__(self, nf, emb_dim, ch_mult, res_blocks, img_size, attn_resolutions): - super().__init__() - self.nf = nf - self.ch_mult = ch_mult - self.num_resolutions = len(self.ch_mult) - self.num_res_blocks = res_blocks - self.resolution = img_size - self.attn_resolutions = attn_resolutions - self.in_channels = emb_dim - self.out_channels = 3 - block_in_ch = self.nf * self.ch_mult[-1] - curr_res = self.resolution // 2 ** (self.num_resolutions-1) - - blocks = [] - # initial conv - blocks.append(nn.Conv2d(self.in_channels, block_in_ch, kernel_size=3, stride=1, padding=1)) - - # non-local attention block - blocks.append(ResBlock(block_in_ch, block_in_ch)) - blocks.append(AttnBlock(block_in_ch)) - blocks.append(ResBlock(block_in_ch, block_in_ch)) - - for i in reversed(range(self.num_resolutions)): - block_out_ch = self.nf * self.ch_mult[i] - - for _ in range(self.num_res_blocks): - blocks.append(ResBlock(block_in_ch, block_out_ch)) - block_in_ch = block_out_ch - - if curr_res in self.attn_resolutions: - blocks.append(AttnBlock(block_in_ch)) - - if i != 0: - blocks.append(Upsample(block_in_ch)) - curr_res = curr_res * 2 - - blocks.append(normalize(block_in_ch)) - blocks.append(nn.Conv2d(block_in_ch, self.out_channels, kernel_size=3, stride=1, padding=1)) - - self.blocks = nn.ModuleList(blocks) - - - def forward(self, x): - for block in self.blocks: - x = block(x) - - return x - - -@ARCH_REGISTRY.register() -class VQAutoEncoder(nn.Module): - def __init__(self, img_size, nf, ch_mult, quantizer="nearest", res_blocks=2, attn_resolutions=[16], codebook_size=1024, emb_dim=256, - beta=0.25, gumbel_straight_through=False, gumbel_kl_weight=1e-8, model_path=None): - super().__init__() - logger = get_root_logger() - self.in_channels = 3 - self.nf = nf - self.n_blocks = res_blocks - self.codebook_size = codebook_size - self.embed_dim = emb_dim - self.ch_mult = ch_mult - self.resolution = img_size - self.attn_resolutions = attn_resolutions - self.quantizer_type = quantizer - self.encoder = Encoder( - self.in_channels, - self.nf, - self.embed_dim, - self.ch_mult, - self.n_blocks, - self.resolution, - self.attn_resolutions - ) - if self.quantizer_type == "nearest": - self.beta = beta #0.25 - self.quantize = 
VectorQuantizer(self.codebook_size, self.embed_dim, self.beta) - elif self.quantizer_type == "gumbel": - self.gumbel_num_hiddens = emb_dim - self.straight_through = gumbel_straight_through - self.kl_weight = gumbel_kl_weight - self.quantize = GumbelQuantizer( - self.codebook_size, - self.embed_dim, - self.gumbel_num_hiddens, - self.straight_through, - self.kl_weight - ) - self.generator = Generator( - self.nf, - self.embed_dim, - self.ch_mult, - self.n_blocks, - self.resolution, - self.attn_resolutions - ) - - if model_path is not None: - chkpt = torch.load(model_path, map_location='cpu') - if 'params_ema' in chkpt: - self.load_state_dict(torch.load(model_path, map_location='cpu')['params_ema']) - logger.info(f'vqgan is loaded from: {model_path} [params_ema]') - elif 'params' in chkpt: - self.load_state_dict(torch.load(model_path, map_location='cpu')['params']) - logger.info(f'vqgan is loaded from: {model_path} [params]') - else: - raise ValueError(f'Wrong params!') - - - def forward(self, x): - x = self.encoder(x) - quant, codebook_loss, quant_stats = self.quantize(x) - x = self.generator(quant) - return x, codebook_loss, quant_stats - - - -# patch based discriminator -@ARCH_REGISTRY.register() -class VQGANDiscriminator(nn.Module): - def __init__(self, nc=3, ndf=64, n_layers=4, model_path=None): - super().__init__() - - layers = [nn.Conv2d(nc, ndf, kernel_size=4, stride=2, padding=1), nn.LeakyReLU(0.2, True)] - ndf_mult = 1 - ndf_mult_prev = 1 - for n in range(1, n_layers): # gradually increase the number of filters - ndf_mult_prev = ndf_mult - ndf_mult = min(2 ** n, 8) - layers += [ - nn.Conv2d(ndf * ndf_mult_prev, ndf * ndf_mult, kernel_size=4, stride=2, padding=1, bias=False), - nn.BatchNorm2d(ndf * ndf_mult), - nn.LeakyReLU(0.2, True) - ] - - ndf_mult_prev = ndf_mult - ndf_mult = min(2 ** n_layers, 8) - - layers += [ - nn.Conv2d(ndf * ndf_mult_prev, ndf * ndf_mult, kernel_size=4, stride=1, padding=1, bias=False), - nn.BatchNorm2d(ndf * ndf_mult), - nn.LeakyReLU(0.2, True) - ] - - layers += [ - nn.Conv2d(ndf * ndf_mult, 1, kernel_size=4, stride=1, padding=1)] # output 1 channel prediction map - self.main = nn.Sequential(*layers) - - if model_path is not None: - chkpt = torch.load(model_path, map_location='cpu') - if 'params_d' in chkpt: - self.load_state_dict(torch.load(model_path, map_location='cpu')['params_d']) - elif 'params' in chkpt: - self.load_state_dict(torch.load(model_path, map_location='cpu')['params']) - else: - raise ValueError(f'Wrong params!') - - def forward(self, x): - return self.main(x) \ No newline at end of file diff --git a/spaces/d0r1h/Hindi_News_Summarizer/extractdata.py b/spaces/d0r1h/Hindi_News_Summarizer/extractdata.py deleted file mode 100644 index 7c3c071f2e1f335c55bdb30164e4c96eb6afac37..0000000000000000000000000000000000000000 --- a/spaces/d0r1h/Hindi_News_Summarizer/extractdata.py +++ /dev/null @@ -1,33 +0,0 @@ -import re -import requests -from bs4 import BeautifulSoup - - -noise1 = re.compile(r"[([].*?[\)\]]\s+") # वर्ल्ड कप 2019 (World Cup 2019) --> वर्ल्ड कप 2019 -noise2 = re.compile(r"\{.*?\}") # { googletag.display{ googletag.display(div-gpt-ad-1517823702248-0); });} } -noise3 = re.compile(r"[a-zA-Z]") -noise4 = re.compile(r"[\{()#@:%,_;&!=}\]]") -noise5 = re.compile(r'[\?\]]') - - -def extract_text(url): - - data = requests.get(url) - soup = BeautifulSoup(data.content, "html.parser") - - try: - vistaar = soup.find(class_ = "article-desc ul_styling") - vistaar = vistaar.text - except Exception as e: - print(f"Not able to fetch text {e}") - - 
vistaar = vistaar.replace("विस्तार ", ' ') - vistaar = vistaar.replace("विज्ञापन", ' ') - vistaar = vistaar.replace("\n", ' ') - vistaar = re.sub('\xa0', ' ', vistaar) - vistaar = re.sub(noise2, ' ', vistaar) - vistaar = re.sub(noise3, ' ', vistaar) - vistaar = re.sub(noise4, ' ', vistaar) - vistaar = re.sub(' +', ' ', vistaar) - - return vistaar \ No newline at end of file diff --git a/spaces/daarumadx/bot/src/argv/gpu_info.py b/spaces/daarumadx/bot/src/argv/gpu_info.py deleted file mode 100644 index 22470a41330123e9be9e384bf417cb4d512f66cd..0000000000000000000000000000000000000000 --- a/spaces/daarumadx/bot/src/argv/gpu_info.py +++ /dev/null @@ -1,30 +0,0 @@ -import gpu_info -from argv.common import arg_debug, arg_help, arg_version - - -def init_gpu_info_sub_parser(subparsers): - gpu_info_parser = subparsers.add_parser( - 'gpu-info', - description="Getting GPU capabilities information for processing with dreampower", - help="Getting GPU capabilities information for processing with dreampower", - add_help=False - ) - gpu_info_parser.set_defaults(func=gpu_info.main) - - # add gpu-info arguments - arg_json(gpu_info_parser) - - arg_help(gpu_info_parser) - arg_debug(gpu_info_parser) - arg_version(gpu_info_parser) - - return gpu_info_parser - - -def arg_json(parser): - parser.add_argument( - "-j", - "--json", - action='store_true', - help="" - ) diff --git a/spaces/danterivers/music-generation-samples/audiocraft/models/musicgen.py b/spaces/danterivers/music-generation-samples/audiocraft/models/musicgen.py deleted file mode 100644 index c3feb18d95c3915dae0074aacd1d4c980c1bb0e0..0000000000000000000000000000000000000000 --- a/spaces/danterivers/music-generation-samples/audiocraft/models/musicgen.py +++ /dev/null @@ -1,283 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -""" -Main model for using MusicGen. This will combine all the required components -and provide easy access to the generation API. -""" - -import os -import typing as tp - -import torch - -from .encodec import CompressionModel -from .lm import LMModel -from .builders import get_debug_compression_model, get_debug_lm_model -from .loaders import load_compression_model, load_lm_model, HF_MODEL_CHECKPOINTS_MAP -from ..data.audio_utils import convert_audio -from ..modules.conditioners import ConditioningAttributes, WavCondition -from ..utils.autocast import TorchAutocast - - -MelodyList = tp.List[tp.Optional[torch.Tensor]] -MelodyType = tp.Union[torch.Tensor, MelodyList] - - -class MusicGen: - """MusicGen main model with convenient generation API. - - Args: - name (str): name of the model. - compression_model (CompressionModel): Compression model - used to map audio to invertible discrete representations. - lm (LMModel): Language model over discrete representations. 
- """ - def __init__(self, name: str, compression_model: CompressionModel, lm: LMModel): - self.name = name - self.compression_model = compression_model - self.lm = lm - self.device = next(iter(lm.parameters())).device - self.generation_params: dict = {} - self.set_generation_params(duration=15) # 15 seconds by default - if self.device.type == 'cpu': - self.autocast = TorchAutocast(enabled=False) - else: - self.autocast = TorchAutocast( - enabled=True, device_type=self.device.type, dtype=torch.float16) - - @property - def frame_rate(self) -> int: - """Roughly the number of AR steps per seconds.""" - return self.compression_model.frame_rate - - @property - def sample_rate(self) -> int: - """Sample rate of the generated audio.""" - return self.compression_model.sample_rate - - @property - def audio_channels(self) -> int: - """Audio channels of the generated audio.""" - return self.compression_model.channels - - @staticmethod - def get_pretrained(name: str = 'melody', device='cuda'): - """Return pretrained model, we provide four models: - - small (300M), text to music, # see: https://huggingface.co/facebook/musicgen-small - - medium (1.5B), text to music, # see: https://huggingface.co/facebook/musicgen-medium - - melody (1.5B) text to music and text+melody to music, # see: https://huggingface.co/facebook/musicgen-melody - - large (3.3B), text to music, # see: https://huggingface.co/facebook/musicgen-large - """ - - if name == 'debug': - # used only for unit tests - compression_model = get_debug_compression_model(device) - lm = get_debug_lm_model(device) - return MusicGen(name, compression_model, lm) - - if name not in HF_MODEL_CHECKPOINTS_MAP: - raise ValueError( - f"{name} is not a valid checkpoint name. " - f"Choose one of {', '.join(HF_MODEL_CHECKPOINTS_MAP.keys())}" - ) - - cache_dir = os.environ.get('MUSICGEN_ROOT', None) - compression_model = load_compression_model(name, device=device, cache_dir=cache_dir) - lm = load_lm_model(name, device=device, cache_dir=cache_dir) - - return MusicGen(name, compression_model, lm) - - def set_generation_params(self, use_sampling: bool = True, top_k: int = 250, - top_p: float = 0.0, temperature: float = 1.0, - duration: float = 30.0, cfg_coef: float = 3.0, - two_step_cfg: bool = False): - """Set the generation parameters for MusicGen. - - Args: - use_sampling (bool, optional): Use sampling if True, else do argmax decoding. Defaults to True. - top_k (int, optional): top_k used for sampling. Defaults to 250. - top_p (float, optional): top_p used for sampling, when set to 0 top_k is used. Defaults to 0.0. - temperature (float, optional): Softmax temperature parameter. Defaults to 1.0. - duration (float, optional): Duration of the generated waveform. Defaults to 30.0. - cfg_coef (float, optional): Coefficient used for classifier free guidance. Defaults to 3.0. - two_step_cfg (bool, optional): If True, performs 2 forward for Classifier Free Guidance, - instead of batching together the two. This has some impact on how things - are padded but seems to have little impact in practice. - """ - assert duration <= 30, "The MusicGen cannot generate more than 30 seconds" - self.generation_params = { - 'max_gen_len': int(duration * self.frame_rate), - 'use_sampling': use_sampling, - 'temp': temperature, - 'top_k': top_k, - 'top_p': top_p, - 'cfg_coef': cfg_coef, - 'two_step_cfg': two_step_cfg, - } - - def generate_unconditional(self, num_samples: int, progress: bool = False) -> torch.Tensor: - """Generate samples in an unconditional manner. 
- - Args: - num_samples (int): Number of samples to be generated. - progress (bool, optional): Flag to display progress of the generation process. Defaults to False. - """ - descriptions: tp.List[tp.Optional[str]] = [None] * num_samples - attributes, prompt_tokens = self._prepare_tokens_and_attributes(descriptions, None) - return self._generate_tokens(attributes, prompt_tokens, progress) - - def generate(self, descriptions: tp.List[str], progress: bool = False) -> torch.Tensor: - """Generate samples conditioned on text. - - Args: - descriptions (tp.List[str]): A list of strings used as text conditioning. - progress (bool, optional): Flag to display progress of the generation process. Defaults to False. - """ - attributes, prompt_tokens = self._prepare_tokens_and_attributes(descriptions, None) - assert prompt_tokens is None - return self._generate_tokens(attributes, prompt_tokens, progress) - - def generate_with_chroma(self, descriptions: tp.List[str], melody_wavs: MelodyType, - melody_sample_rate: int, progress: bool = False) -> torch.Tensor: - """Generate samples conditioned on text and melody. - - Args: - descriptions (tp.List[str]): A list of strings used as text conditioning. - melody_wavs: (torch.Tensor or list of Tensor): A batch of waveforms used as - melody conditioning. Should have shape [B, C, T] with B matching the description length, - C=1 or 2. It can be [C, T] if there is a single description. It can also be - a list of [C, T] tensors. - melody_sample_rate: (int): Sample rate of the melody waveforms. - progress (bool, optional): Flag to display progress of the generation process. Defaults to False. - """ - if isinstance(melody_wavs, torch.Tensor): - if melody_wavs.dim() == 2: - melody_wavs = melody_wavs[None] - if melody_wavs.dim() != 3: - raise ValueError("Melody wavs should have a shape [B, C, T].") - melody_wavs = list(melody_wavs) - else: - for melody in melody_wavs: - if melody is not None: - assert melody.dim() == 2, "One melody in the list has the wrong number of dims." - - melody_wavs = [ - convert_audio(wav, melody_sample_rate, self.sample_rate, self.audio_channels) - if wav is not None else None - for wav in melody_wavs] - attributes, prompt_tokens = self._prepare_tokens_and_attributes(descriptions=descriptions, prompt=None, - melody_wavs=melody_wavs) - assert prompt_tokens is None - return self._generate_tokens(attributes, prompt_tokens, progress) - - def generate_continuation(self, prompt: torch.Tensor, prompt_sample_rate: int, - descriptions: tp.Optional[tp.List[tp.Optional[str]]] = None, - progress: bool = False) -> torch.Tensor: - """Generate samples conditioned on audio prompts. - - Args: - prompt (torch.Tensor): A batch of waveforms used for continuation. - Prompt should be [B, C, T], or [C, T] if only one sample is generated. - prompt_sample_rate (int): Sampling rate of the given audio waveforms. - descriptions (tp.List[str], optional): A list of strings used as text conditioning. Defaults to None. - progress (bool, optional): Flag to display progress of the generation process. Defaults to False. 
- """ - if prompt.dim() == 2: - prompt = prompt[None] - if prompt.dim() != 3: - raise ValueError("prompt should have 3 dimensions: [B, C, T] (C = 1).") - prompt = convert_audio(prompt, prompt_sample_rate, self.sample_rate, self.audio_channels) - if descriptions is None: - descriptions = [None] * len(prompt) - attributes, prompt_tokens = self._prepare_tokens_and_attributes(descriptions, prompt) - assert prompt_tokens is not None - return self._generate_tokens(attributes, prompt_tokens, progress) - - @torch.no_grad() - def _prepare_tokens_and_attributes( - self, - descriptions: tp.Sequence[tp.Optional[str]], - prompt: tp.Optional[torch.Tensor], - melody_wavs: tp.Optional[MelodyList] = None, - ) -> tp.Tuple[tp.List[ConditioningAttributes], tp.Optional[torch.Tensor]]: - """Prepare model inputs. - - Args: - descriptions (tp.List[str]): A list of strings used as text conditioning. - prompt (torch.Tensor): A batch of waveforms used for continuation. - melody_wavs (tp.Optional[torch.Tensor], optional): A batch of waveforms - used as melody conditioning. Defaults to None. - """ - attributes = [ - ConditioningAttributes(text={'description': description}) - for description in descriptions] - - if melody_wavs is None: - for attr in attributes: - attr.wav['self_wav'] = WavCondition( - torch.zeros((1, 1), device=self.device), - torch.tensor([0], device=self.device), - path='null_wav') # type: ignore - else: - if self.name != "melody": - raise RuntimeError("This model doesn't support melody conditioning. " - "Use the `melody` model.") - assert len(melody_wavs) == len(descriptions), \ - f"number of melody wavs must match number of descriptions! " \ - f"got melody len={len(melody_wavs)}, and descriptions len={len(descriptions)}" - for attr, melody in zip(attributes, melody_wavs): - if melody is None: - attr.wav['self_wav'] = WavCondition( - torch.zeros((1, 1), device=self.device), - torch.tensor([0], device=self.device), - path='null_wav') # type: ignore - else: - attr.wav['self_wav'] = WavCondition( - melody.to(device=self.device), - torch.tensor([melody.shape[-1]], device=self.device)) - - if prompt is not None: - if descriptions is not None: - assert len(descriptions) == len(prompt), "Prompt and nb. descriptions doesn't match" - prompt = prompt.to(self.device) - prompt_tokens, scale = self.compression_model.encode(prompt) - assert scale is None - else: - prompt_tokens = None - return attributes, prompt_tokens - - def _generate_tokens(self, attributes: tp.List[ConditioningAttributes], - prompt_tokens: tp.Optional[torch.Tensor], progress: bool = False) -> torch.Tensor: - """Generate discrete audio tokens given audio prompt and/or conditions. - - Args: - attributes (tp.List[ConditioningAttributes]): Conditions used for generation (text/melody). - prompt_tokens (tp.Optional[torch.Tensor]): Audio prompt used for continuation. - progress (bool, optional): Flag to display progress of the generation process. Defaults to False. - Returns: - torch.Tensor: Generated audio, of shape [B, C, T], T is defined by the generation params. 
- """ - def _progress_callback(generated_tokens: int, tokens_to_generate: int): - print(f'{generated_tokens: 6d} / {tokens_to_generate: 6d}', end='\r') - - if prompt_tokens is not None: - assert self.generation_params['max_gen_len'] > prompt_tokens.shape[-1], \ - "Prompt is longer than audio to generate" - - callback = None - if progress: - callback = _progress_callback - - # generate by sampling from LM - with self.autocast: - gen_tokens = self.lm.generate(prompt_tokens, attributes, callback=callback, **self.generation_params) - - # generate audio - assert gen_tokens.dim() == 3 - with torch.no_grad(): - gen_audio = self.compression_model.decode(gen_tokens, None) - return gen_audio diff --git a/spaces/davidpiscasio/unpaired-img2img/data/unaligned_dataset.py b/spaces/davidpiscasio/unpaired-img2img/data/unaligned_dataset.py deleted file mode 100644 index bd7af1d5ab2d7623e83decc7232c30cbb83e5367..0000000000000000000000000000000000000000 --- a/spaces/davidpiscasio/unpaired-img2img/data/unaligned_dataset.py +++ /dev/null @@ -1,71 +0,0 @@ -import os -from data.base_dataset import BaseDataset, get_transform -from data.image_folder import make_dataset -from PIL import Image -import random - - -class UnalignedDataset(BaseDataset): - """ - This dataset class can load unaligned/unpaired datasets. - - It requires two directories to host training images from domain A '/path/to/data/trainA' - and from domain B '/path/to/data/trainB' respectively. - You can train the model with the dataset flag '--dataroot /path/to/data'. - Similarly, you need to prepare two directories: - '/path/to/data/testA' and '/path/to/data/testB' during test time. - """ - - def __init__(self, opt): - """Initialize this dataset class. - - Parameters: - opt (Option class) -- stores all the experiment flags; needs to be a subclass of BaseOptions - """ - BaseDataset.__init__(self, opt) - self.dir_A = os.path.join(opt.dataroot, opt.phase + 'A') # create a path '/path/to/data/trainA' - self.dir_B = os.path.join(opt.dataroot, opt.phase + 'B') # create a path '/path/to/data/trainB' - - self.A_paths = sorted(make_dataset(self.dir_A, opt.max_dataset_size)) # load images from '/path/to/data/trainA' - self.B_paths = sorted(make_dataset(self.dir_B, opt.max_dataset_size)) # load images from '/path/to/data/trainB' - self.A_size = len(self.A_paths) # get the size of dataset A - self.B_size = len(self.B_paths) # get the size of dataset B - btoA = self.opt.direction == 'BtoA' - input_nc = self.opt.output_nc if btoA else self.opt.input_nc # get the number of channels of input image - output_nc = self.opt.input_nc if btoA else self.opt.output_nc # get the number of channels of output image - self.transform_A = get_transform(self.opt, grayscale=(input_nc == 1)) - self.transform_B = get_transform(self.opt, grayscale=(output_nc == 1)) - - def __getitem__(self, index): - """Return a data point and its metadata information. - - Parameters: - index (int) -- a random integer for data indexing - - Returns a dictionary that contains A, B, A_paths and B_paths - A (tensor) -- an image in the input domain - B (tensor) -- its corresponding image in the target domain - A_paths (str) -- image paths - B_paths (str) -- image paths - """ - A_path = self.A_paths[index % self.A_size] # make sure index is within then range - if self.opt.serial_batches: # make sure index is within then range - index_B = index % self.B_size - else: # randomize the index for domain B to avoid fixed pairs. 
- index_B = random.randint(0, self.B_size - 1) - B_path = self.B_paths[index_B] - A_img = Image.open(A_path).convert('RGB') - B_img = Image.open(B_path).convert('RGB') - # apply image transformation - A = self.transform_A(A_img) - B = self.transform_B(B_img) - - return {'A': A, 'B': B, 'A_paths': A_path, 'B_paths': B_path} - - def __len__(self): - """Return the total number of images in the dataset. - - As we have two datasets with potentially different number of images, - we take a maximum of - """ - return max(self.A_size, self.B_size) diff --git a/spaces/davila7/youtubegpt/README.md b/spaces/davila7/youtubegpt/README.md deleted file mode 100644 index cdf75a15e2c37c55914b6c30c53a78c8aac48f9b..0000000000000000000000000000000000000000 --- a/spaces/davila7/youtubegpt/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Youtubegpt -emoji: 🏢 -colorFrom: indigo -colorTo: blue -sdk: streamlit -sdk_version: 1.17.0 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/dawood/audioldm-text-to-audio-generation/audioldm/clap/open_clip/__init__.py b/spaces/dawood/audioldm-text-to-audio-generation/audioldm/clap/open_clip/__init__.py deleted file mode 100644 index e9f728f2f273be5d5fdbec6c6cc41d737176a8c0..0000000000000000000000000000000000000000 --- a/spaces/dawood/audioldm-text-to-audio-generation/audioldm/clap/open_clip/__init__.py +++ /dev/null @@ -1,25 +0,0 @@ -from .factory import ( - list_models, - create_model, - create_model_and_transforms, - add_model_config, -) -from .loss import ClipLoss, gather_features, LPLoss, lp_gather_features, LPMetrics -from .model import ( - CLAP, - CLAPTextCfg, - CLAPVisionCfg, - CLAPAudioCfp, - convert_weights_to_fp16, - trace_model, -) -from .openai import load_openai_model, list_openai_models -from .pretrained import ( - list_pretrained, - list_pretrained_tag_models, - list_pretrained_model_tags, - get_pretrained_url, - download_pretrained, -) -from .tokenizer import SimpleTokenizer, tokenize -from .transform import image_transform diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/pens/qu2cuPen.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/pens/qu2cuPen.py deleted file mode 100644 index 7e400f98c45cb7fdbbba00df009b7819adffec4c..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/pens/qu2cuPen.py +++ /dev/null @@ -1,105 +0,0 @@ -# Copyright 2016 Google Inc. All Rights Reserved. -# Copyright 2023 Behdad Esfahbod. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
- -from fontTools.qu2cu import quadratic_to_curves -from fontTools.pens.filterPen import ContourFilterPen -from fontTools.pens.reverseContourPen import ReverseContourPen -import math - - -class Qu2CuPen(ContourFilterPen): - """A filter pen to convert quadratic bezier splines to cubic curves - using the FontTools SegmentPen protocol. - - Args: - - other_pen: another SegmentPen used to draw the transformed outline. - max_err: maximum approximation error in font units. For optimal results, - if you know the UPEM of the font, we recommend setting this to a - value equal, or close to UPEM / 1000. - reverse_direction: flip the contours' direction but keep starting point. - stats: a dictionary counting the point numbers of cubic segments. - """ - - def __init__( - self, - other_pen, - max_err, - all_cubic=False, - reverse_direction=False, - stats=None, - ): - if reverse_direction: - other_pen = ReverseContourPen(other_pen) - super().__init__(other_pen) - self.all_cubic = all_cubic - self.max_err = max_err - self.stats = stats - - def _quadratics_to_curve(self, q): - curves = quadratic_to_curves(q, self.max_err, all_cubic=self.all_cubic) - if self.stats is not None: - for curve in curves: - n = str(len(curve) - 2) - self.stats[n] = self.stats.get(n, 0) + 1 - for curve in curves: - if len(curve) == 4: - yield ("curveTo", curve[1:]) - else: - yield ("qCurveTo", curve[1:]) - - def filterContour(self, contour): - quadratics = [] - currentPt = None - newContour = [] - for op, args in contour: - if op == "qCurveTo" and ( - self.all_cubic or (len(args) > 2 and args[-1] is not None) - ): - if args[-1] is None: - raise NotImplementedError( - "oncurve-less contours with all_cubic not implemented" - ) - quadratics.append((currentPt,) + args) - else: - if quadratics: - newContour.extend(self._quadratics_to_curve(quadratics)) - quadratics = [] - newContour.append((op, args)) - currentPt = args[-1] if args else None - if quadratics: - newContour.extend(self._quadratics_to_curve(quadratics)) - - if not self.all_cubic: - # Add back implicit oncurve points - contour = newContour - newContour = [] - for op, args in contour: - if op == "qCurveTo" and newContour and newContour[-1][0] == "qCurveTo": - pt0 = newContour[-1][1][-2] - pt1 = newContour[-1][1][-1] - pt2 = args[0] - if ( - pt1 is not None - and math.isclose(pt2[0] - pt1[0], pt1[0] - pt0[0]) - and math.isclose(pt2[1] - pt1[1], pt1[1] - pt0[1]) - ): - newArgs = newContour[-1][1][:-1] + args - newContour[-1] = (op, newArgs) - continue - - newContour.append((op, args)) - - return newContour diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/unicodedata/Scripts.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/unicodedata/Scripts.py deleted file mode 100644 index 68bb91b396d62b03a8bfd650c64ce0b7375e1e48..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/unicodedata/Scripts.py +++ /dev/null @@ -1,3509 +0,0 @@ -# -*- coding: utf-8 -*- -# -# NOTE: This file was auto-generated with MetaTools/buildUCD.py. -# Source: https://unicode.org/Public/UNIDATA/Scripts.txt -# License: http://unicode.org/copyright.html#License -# -# Scripts-15.0.0.txt -# Date: 2022-04-26, 23:15:02 GMT -# © 2022 Unicode®, Inc. -# Unicode and the Unicode Logo are registered trademarks of Unicode, Inc. in the U.S. and other countries. 
-# For terms of use, see https://www.unicode.org/terms_of_use.html -# -# Unicode Character Database -# For documentation, see https://www.unicode.org/reports/tr44/ -# For more information, see: -# UAX #24, Unicode Script Property: https://www.unicode.org/reports/tr24/ -# Especially the sections: -# https://www.unicode.org/reports/tr24/#Assignment_Script_Values -# https://www.unicode.org/reports/tr24/#Assignment_ScriptX_Values -# - - -RANGES = [ - 0x0000, # .. 0x0040 ; Common - 0x0041, # .. 0x005A ; Latin - 0x005B, # .. 0x0060 ; Common - 0x0061, # .. 0x007A ; Latin - 0x007B, # .. 0x00A9 ; Common - 0x00AA, # .. 0x00AA ; Latin - 0x00AB, # .. 0x00B9 ; Common - 0x00BA, # .. 0x00BA ; Latin - 0x00BB, # .. 0x00BF ; Common - 0x00C0, # .. 0x00D6 ; Latin - 0x00D7, # .. 0x00D7 ; Common - 0x00D8, # .. 0x00F6 ; Latin - 0x00F7, # .. 0x00F7 ; Common - 0x00F8, # .. 0x02B8 ; Latin - 0x02B9, # .. 0x02DF ; Common - 0x02E0, # .. 0x02E4 ; Latin - 0x02E5, # .. 0x02E9 ; Common - 0x02EA, # .. 0x02EB ; Bopomofo - 0x02EC, # .. 0x02FF ; Common - 0x0300, # .. 0x036F ; Inherited - 0x0370, # .. 0x0373 ; Greek - 0x0374, # .. 0x0374 ; Common - 0x0375, # .. 0x0377 ; Greek - 0x0378, # .. 0x0379 ; Unknown - 0x037A, # .. 0x037D ; Greek - 0x037E, # .. 0x037E ; Common - 0x037F, # .. 0x037F ; Greek - 0x0380, # .. 0x0383 ; Unknown - 0x0384, # .. 0x0384 ; Greek - 0x0385, # .. 0x0385 ; Common - 0x0386, # .. 0x0386 ; Greek - 0x0387, # .. 0x0387 ; Common - 0x0388, # .. 0x038A ; Greek - 0x038B, # .. 0x038B ; Unknown - 0x038C, # .. 0x038C ; Greek - 0x038D, # .. 0x038D ; Unknown - 0x038E, # .. 0x03A1 ; Greek - 0x03A2, # .. 0x03A2 ; Unknown - 0x03A3, # .. 0x03E1 ; Greek - 0x03E2, # .. 0x03EF ; Coptic - 0x03F0, # .. 0x03FF ; Greek - 0x0400, # .. 0x0484 ; Cyrillic - 0x0485, # .. 0x0486 ; Inherited - 0x0487, # .. 0x052F ; Cyrillic - 0x0530, # .. 0x0530 ; Unknown - 0x0531, # .. 0x0556 ; Armenian - 0x0557, # .. 0x0558 ; Unknown - 0x0559, # .. 0x058A ; Armenian - 0x058B, # .. 0x058C ; Unknown - 0x058D, # .. 0x058F ; Armenian - 0x0590, # .. 0x0590 ; Unknown - 0x0591, # .. 0x05C7 ; Hebrew - 0x05C8, # .. 0x05CF ; Unknown - 0x05D0, # .. 0x05EA ; Hebrew - 0x05EB, # .. 0x05EE ; Unknown - 0x05EF, # .. 0x05F4 ; Hebrew - 0x05F5, # .. 0x05FF ; Unknown - 0x0600, # .. 0x0604 ; Arabic - 0x0605, # .. 0x0605 ; Common - 0x0606, # .. 0x060B ; Arabic - 0x060C, # .. 0x060C ; Common - 0x060D, # .. 0x061A ; Arabic - 0x061B, # .. 0x061B ; Common - 0x061C, # .. 0x061E ; Arabic - 0x061F, # .. 0x061F ; Common - 0x0620, # .. 0x063F ; Arabic - 0x0640, # .. 0x0640 ; Common - 0x0641, # .. 0x064A ; Arabic - 0x064B, # .. 0x0655 ; Inherited - 0x0656, # .. 0x066F ; Arabic - 0x0670, # .. 0x0670 ; Inherited - 0x0671, # .. 0x06DC ; Arabic - 0x06DD, # .. 0x06DD ; Common - 0x06DE, # .. 0x06FF ; Arabic - 0x0700, # .. 0x070D ; Syriac - 0x070E, # .. 0x070E ; Unknown - 0x070F, # .. 0x074A ; Syriac - 0x074B, # .. 0x074C ; Unknown - 0x074D, # .. 0x074F ; Syriac - 0x0750, # .. 0x077F ; Arabic - 0x0780, # .. 0x07B1 ; Thaana - 0x07B2, # .. 0x07BF ; Unknown - 0x07C0, # .. 0x07FA ; Nko - 0x07FB, # .. 0x07FC ; Unknown - 0x07FD, # .. 0x07FF ; Nko - 0x0800, # .. 0x082D ; Samaritan - 0x082E, # .. 0x082F ; Unknown - 0x0830, # .. 0x083E ; Samaritan - 0x083F, # .. 0x083F ; Unknown - 0x0840, # .. 0x085B ; Mandaic - 0x085C, # .. 0x085D ; Unknown - 0x085E, # .. 0x085E ; Mandaic - 0x085F, # .. 0x085F ; Unknown - 0x0860, # .. 0x086A ; Syriac - 0x086B, # .. 0x086F ; Unknown - 0x0870, # .. 0x088E ; Arabic - 0x088F, # .. 0x088F ; Unknown - 0x0890, # .. 0x0891 ; Arabic - 0x0892, # .. 
0x0897 ; Unknown - 0x0898, # .. 0x08E1 ; Arabic - 0x08E2, # .. 0x08E2 ; Common - 0x08E3, # .. 0x08FF ; Arabic - 0x0900, # .. 0x0950 ; Devanagari - 0x0951, # .. 0x0954 ; Inherited - 0x0955, # .. 0x0963 ; Devanagari - 0x0964, # .. 0x0965 ; Common - 0x0966, # .. 0x097F ; Devanagari - 0x0980, # .. 0x0983 ; Bengali - 0x0984, # .. 0x0984 ; Unknown - 0x0985, # .. 0x098C ; Bengali - 0x098D, # .. 0x098E ; Unknown - 0x098F, # .. 0x0990 ; Bengali - 0x0991, # .. 0x0992 ; Unknown - 0x0993, # .. 0x09A8 ; Bengali - 0x09A9, # .. 0x09A9 ; Unknown - 0x09AA, # .. 0x09B0 ; Bengali - 0x09B1, # .. 0x09B1 ; Unknown - 0x09B2, # .. 0x09B2 ; Bengali - 0x09B3, # .. 0x09B5 ; Unknown - 0x09B6, # .. 0x09B9 ; Bengali - 0x09BA, # .. 0x09BB ; Unknown - 0x09BC, # .. 0x09C4 ; Bengali - 0x09C5, # .. 0x09C6 ; Unknown - 0x09C7, # .. 0x09C8 ; Bengali - 0x09C9, # .. 0x09CA ; Unknown - 0x09CB, # .. 0x09CE ; Bengali - 0x09CF, # .. 0x09D6 ; Unknown - 0x09D7, # .. 0x09D7 ; Bengali - 0x09D8, # .. 0x09DB ; Unknown - 0x09DC, # .. 0x09DD ; Bengali - 0x09DE, # .. 0x09DE ; Unknown - 0x09DF, # .. 0x09E3 ; Bengali - 0x09E4, # .. 0x09E5 ; Unknown - 0x09E6, # .. 0x09FE ; Bengali - 0x09FF, # .. 0x0A00 ; Unknown - 0x0A01, # .. 0x0A03 ; Gurmukhi - 0x0A04, # .. 0x0A04 ; Unknown - 0x0A05, # .. 0x0A0A ; Gurmukhi - 0x0A0B, # .. 0x0A0E ; Unknown - 0x0A0F, # .. 0x0A10 ; Gurmukhi - 0x0A11, # .. 0x0A12 ; Unknown - 0x0A13, # .. 0x0A28 ; Gurmukhi - 0x0A29, # .. 0x0A29 ; Unknown - 0x0A2A, # .. 0x0A30 ; Gurmukhi - 0x0A31, # .. 0x0A31 ; Unknown - 0x0A32, # .. 0x0A33 ; Gurmukhi - 0x0A34, # .. 0x0A34 ; Unknown - 0x0A35, # .. 0x0A36 ; Gurmukhi - 0x0A37, # .. 0x0A37 ; Unknown - 0x0A38, # .. 0x0A39 ; Gurmukhi - 0x0A3A, # .. 0x0A3B ; Unknown - 0x0A3C, # .. 0x0A3C ; Gurmukhi - 0x0A3D, # .. 0x0A3D ; Unknown - 0x0A3E, # .. 0x0A42 ; Gurmukhi - 0x0A43, # .. 0x0A46 ; Unknown - 0x0A47, # .. 0x0A48 ; Gurmukhi - 0x0A49, # .. 0x0A4A ; Unknown - 0x0A4B, # .. 0x0A4D ; Gurmukhi - 0x0A4E, # .. 0x0A50 ; Unknown - 0x0A51, # .. 0x0A51 ; Gurmukhi - 0x0A52, # .. 0x0A58 ; Unknown - 0x0A59, # .. 0x0A5C ; Gurmukhi - 0x0A5D, # .. 0x0A5D ; Unknown - 0x0A5E, # .. 0x0A5E ; Gurmukhi - 0x0A5F, # .. 0x0A65 ; Unknown - 0x0A66, # .. 0x0A76 ; Gurmukhi - 0x0A77, # .. 0x0A80 ; Unknown - 0x0A81, # .. 0x0A83 ; Gujarati - 0x0A84, # .. 0x0A84 ; Unknown - 0x0A85, # .. 0x0A8D ; Gujarati - 0x0A8E, # .. 0x0A8E ; Unknown - 0x0A8F, # .. 0x0A91 ; Gujarati - 0x0A92, # .. 0x0A92 ; Unknown - 0x0A93, # .. 0x0AA8 ; Gujarati - 0x0AA9, # .. 0x0AA9 ; Unknown - 0x0AAA, # .. 0x0AB0 ; Gujarati - 0x0AB1, # .. 0x0AB1 ; Unknown - 0x0AB2, # .. 0x0AB3 ; Gujarati - 0x0AB4, # .. 0x0AB4 ; Unknown - 0x0AB5, # .. 0x0AB9 ; Gujarati - 0x0ABA, # .. 0x0ABB ; Unknown - 0x0ABC, # .. 0x0AC5 ; Gujarati - 0x0AC6, # .. 0x0AC6 ; Unknown - 0x0AC7, # .. 0x0AC9 ; Gujarati - 0x0ACA, # .. 0x0ACA ; Unknown - 0x0ACB, # .. 0x0ACD ; Gujarati - 0x0ACE, # .. 0x0ACF ; Unknown - 0x0AD0, # .. 0x0AD0 ; Gujarati - 0x0AD1, # .. 0x0ADF ; Unknown - 0x0AE0, # .. 0x0AE3 ; Gujarati - 0x0AE4, # .. 0x0AE5 ; Unknown - 0x0AE6, # .. 0x0AF1 ; Gujarati - 0x0AF2, # .. 0x0AF8 ; Unknown - 0x0AF9, # .. 0x0AFF ; Gujarati - 0x0B00, # .. 0x0B00 ; Unknown - 0x0B01, # .. 0x0B03 ; Oriya - 0x0B04, # .. 0x0B04 ; Unknown - 0x0B05, # .. 0x0B0C ; Oriya - 0x0B0D, # .. 0x0B0E ; Unknown - 0x0B0F, # .. 0x0B10 ; Oriya - 0x0B11, # .. 0x0B12 ; Unknown - 0x0B13, # .. 0x0B28 ; Oriya - 0x0B29, # .. 0x0B29 ; Unknown - 0x0B2A, # .. 0x0B30 ; Oriya - 0x0B31, # .. 0x0B31 ; Unknown - 0x0B32, # .. 0x0B33 ; Oriya - 0x0B34, # .. 0x0B34 ; Unknown - 0x0B35, # .. 0x0B39 ; Oriya - 0x0B3A, # .. 
0x0B3B ; Unknown - 0x0B3C, # .. 0x0B44 ; Oriya - 0x0B45, # .. 0x0B46 ; Unknown - 0x0B47, # .. 0x0B48 ; Oriya - 0x0B49, # .. 0x0B4A ; Unknown - 0x0B4B, # .. 0x0B4D ; Oriya - 0x0B4E, # .. 0x0B54 ; Unknown - 0x0B55, # .. 0x0B57 ; Oriya - 0x0B58, # .. 0x0B5B ; Unknown - 0x0B5C, # .. 0x0B5D ; Oriya - 0x0B5E, # .. 0x0B5E ; Unknown - 0x0B5F, # .. 0x0B63 ; Oriya - 0x0B64, # .. 0x0B65 ; Unknown - 0x0B66, # .. 0x0B77 ; Oriya - 0x0B78, # .. 0x0B81 ; Unknown - 0x0B82, # .. 0x0B83 ; Tamil - 0x0B84, # .. 0x0B84 ; Unknown - 0x0B85, # .. 0x0B8A ; Tamil - 0x0B8B, # .. 0x0B8D ; Unknown - 0x0B8E, # .. 0x0B90 ; Tamil - 0x0B91, # .. 0x0B91 ; Unknown - 0x0B92, # .. 0x0B95 ; Tamil - 0x0B96, # .. 0x0B98 ; Unknown - 0x0B99, # .. 0x0B9A ; Tamil - 0x0B9B, # .. 0x0B9B ; Unknown - 0x0B9C, # .. 0x0B9C ; Tamil - 0x0B9D, # .. 0x0B9D ; Unknown - 0x0B9E, # .. 0x0B9F ; Tamil - 0x0BA0, # .. 0x0BA2 ; Unknown - 0x0BA3, # .. 0x0BA4 ; Tamil - 0x0BA5, # .. 0x0BA7 ; Unknown - 0x0BA8, # .. 0x0BAA ; Tamil - 0x0BAB, # .. 0x0BAD ; Unknown - 0x0BAE, # .. 0x0BB9 ; Tamil - 0x0BBA, # .. 0x0BBD ; Unknown - 0x0BBE, # .. 0x0BC2 ; Tamil - 0x0BC3, # .. 0x0BC5 ; Unknown - 0x0BC6, # .. 0x0BC8 ; Tamil - 0x0BC9, # .. 0x0BC9 ; Unknown - 0x0BCA, # .. 0x0BCD ; Tamil - 0x0BCE, # .. 0x0BCF ; Unknown - 0x0BD0, # .. 0x0BD0 ; Tamil - 0x0BD1, # .. 0x0BD6 ; Unknown - 0x0BD7, # .. 0x0BD7 ; Tamil - 0x0BD8, # .. 0x0BE5 ; Unknown - 0x0BE6, # .. 0x0BFA ; Tamil - 0x0BFB, # .. 0x0BFF ; Unknown - 0x0C00, # .. 0x0C0C ; Telugu - 0x0C0D, # .. 0x0C0D ; Unknown - 0x0C0E, # .. 0x0C10 ; Telugu - 0x0C11, # .. 0x0C11 ; Unknown - 0x0C12, # .. 0x0C28 ; Telugu - 0x0C29, # .. 0x0C29 ; Unknown - 0x0C2A, # .. 0x0C39 ; Telugu - 0x0C3A, # .. 0x0C3B ; Unknown - 0x0C3C, # .. 0x0C44 ; Telugu - 0x0C45, # .. 0x0C45 ; Unknown - 0x0C46, # .. 0x0C48 ; Telugu - 0x0C49, # .. 0x0C49 ; Unknown - 0x0C4A, # .. 0x0C4D ; Telugu - 0x0C4E, # .. 0x0C54 ; Unknown - 0x0C55, # .. 0x0C56 ; Telugu - 0x0C57, # .. 0x0C57 ; Unknown - 0x0C58, # .. 0x0C5A ; Telugu - 0x0C5B, # .. 0x0C5C ; Unknown - 0x0C5D, # .. 0x0C5D ; Telugu - 0x0C5E, # .. 0x0C5F ; Unknown - 0x0C60, # .. 0x0C63 ; Telugu - 0x0C64, # .. 0x0C65 ; Unknown - 0x0C66, # .. 0x0C6F ; Telugu - 0x0C70, # .. 0x0C76 ; Unknown - 0x0C77, # .. 0x0C7F ; Telugu - 0x0C80, # .. 0x0C8C ; Kannada - 0x0C8D, # .. 0x0C8D ; Unknown - 0x0C8E, # .. 0x0C90 ; Kannada - 0x0C91, # .. 0x0C91 ; Unknown - 0x0C92, # .. 0x0CA8 ; Kannada - 0x0CA9, # .. 0x0CA9 ; Unknown - 0x0CAA, # .. 0x0CB3 ; Kannada - 0x0CB4, # .. 0x0CB4 ; Unknown - 0x0CB5, # .. 0x0CB9 ; Kannada - 0x0CBA, # .. 0x0CBB ; Unknown - 0x0CBC, # .. 0x0CC4 ; Kannada - 0x0CC5, # .. 0x0CC5 ; Unknown - 0x0CC6, # .. 0x0CC8 ; Kannada - 0x0CC9, # .. 0x0CC9 ; Unknown - 0x0CCA, # .. 0x0CCD ; Kannada - 0x0CCE, # .. 0x0CD4 ; Unknown - 0x0CD5, # .. 0x0CD6 ; Kannada - 0x0CD7, # .. 0x0CDC ; Unknown - 0x0CDD, # .. 0x0CDE ; Kannada - 0x0CDF, # .. 0x0CDF ; Unknown - 0x0CE0, # .. 0x0CE3 ; Kannada - 0x0CE4, # .. 0x0CE5 ; Unknown - 0x0CE6, # .. 0x0CEF ; Kannada - 0x0CF0, # .. 0x0CF0 ; Unknown - 0x0CF1, # .. 0x0CF3 ; Kannada - 0x0CF4, # .. 0x0CFF ; Unknown - 0x0D00, # .. 0x0D0C ; Malayalam - 0x0D0D, # .. 0x0D0D ; Unknown - 0x0D0E, # .. 0x0D10 ; Malayalam - 0x0D11, # .. 0x0D11 ; Unknown - 0x0D12, # .. 0x0D44 ; Malayalam - 0x0D45, # .. 0x0D45 ; Unknown - 0x0D46, # .. 0x0D48 ; Malayalam - 0x0D49, # .. 0x0D49 ; Unknown - 0x0D4A, # .. 0x0D4F ; Malayalam - 0x0D50, # .. 0x0D53 ; Unknown - 0x0D54, # .. 0x0D63 ; Malayalam - 0x0D64, # .. 0x0D65 ; Unknown - 0x0D66, # .. 0x0D7F ; Malayalam - 0x0D80, # .. 0x0D80 ; Unknown - 0x0D81, # .. 
0x0D83 ; Sinhala - 0x0D84, # .. 0x0D84 ; Unknown - 0x0D85, # .. 0x0D96 ; Sinhala - 0x0D97, # .. 0x0D99 ; Unknown - 0x0D9A, # .. 0x0DB1 ; Sinhala - 0x0DB2, # .. 0x0DB2 ; Unknown - 0x0DB3, # .. 0x0DBB ; Sinhala - 0x0DBC, # .. 0x0DBC ; Unknown - 0x0DBD, # .. 0x0DBD ; Sinhala - 0x0DBE, # .. 0x0DBF ; Unknown - 0x0DC0, # .. 0x0DC6 ; Sinhala - 0x0DC7, # .. 0x0DC9 ; Unknown - 0x0DCA, # .. 0x0DCA ; Sinhala - 0x0DCB, # .. 0x0DCE ; Unknown - 0x0DCF, # .. 0x0DD4 ; Sinhala - 0x0DD5, # .. 0x0DD5 ; Unknown - 0x0DD6, # .. 0x0DD6 ; Sinhala - 0x0DD7, # .. 0x0DD7 ; Unknown - 0x0DD8, # .. 0x0DDF ; Sinhala - 0x0DE0, # .. 0x0DE5 ; Unknown - 0x0DE6, # .. 0x0DEF ; Sinhala - 0x0DF0, # .. 0x0DF1 ; Unknown - 0x0DF2, # .. 0x0DF4 ; Sinhala - 0x0DF5, # .. 0x0E00 ; Unknown - 0x0E01, # .. 0x0E3A ; Thai - 0x0E3B, # .. 0x0E3E ; Unknown - 0x0E3F, # .. 0x0E3F ; Common - 0x0E40, # .. 0x0E5B ; Thai - 0x0E5C, # .. 0x0E80 ; Unknown - 0x0E81, # .. 0x0E82 ; Lao - 0x0E83, # .. 0x0E83 ; Unknown - 0x0E84, # .. 0x0E84 ; Lao - 0x0E85, # .. 0x0E85 ; Unknown - 0x0E86, # .. 0x0E8A ; Lao - 0x0E8B, # .. 0x0E8B ; Unknown - 0x0E8C, # .. 0x0EA3 ; Lao - 0x0EA4, # .. 0x0EA4 ; Unknown - 0x0EA5, # .. 0x0EA5 ; Lao - 0x0EA6, # .. 0x0EA6 ; Unknown - 0x0EA7, # .. 0x0EBD ; Lao - 0x0EBE, # .. 0x0EBF ; Unknown - 0x0EC0, # .. 0x0EC4 ; Lao - 0x0EC5, # .. 0x0EC5 ; Unknown - 0x0EC6, # .. 0x0EC6 ; Lao - 0x0EC7, # .. 0x0EC7 ; Unknown - 0x0EC8, # .. 0x0ECE ; Lao - 0x0ECF, # .. 0x0ECF ; Unknown - 0x0ED0, # .. 0x0ED9 ; Lao - 0x0EDA, # .. 0x0EDB ; Unknown - 0x0EDC, # .. 0x0EDF ; Lao - 0x0EE0, # .. 0x0EFF ; Unknown - 0x0F00, # .. 0x0F47 ; Tibetan - 0x0F48, # .. 0x0F48 ; Unknown - 0x0F49, # .. 0x0F6C ; Tibetan - 0x0F6D, # .. 0x0F70 ; Unknown - 0x0F71, # .. 0x0F97 ; Tibetan - 0x0F98, # .. 0x0F98 ; Unknown - 0x0F99, # .. 0x0FBC ; Tibetan - 0x0FBD, # .. 0x0FBD ; Unknown - 0x0FBE, # .. 0x0FCC ; Tibetan - 0x0FCD, # .. 0x0FCD ; Unknown - 0x0FCE, # .. 0x0FD4 ; Tibetan - 0x0FD5, # .. 0x0FD8 ; Common - 0x0FD9, # .. 0x0FDA ; Tibetan - 0x0FDB, # .. 0x0FFF ; Unknown - 0x1000, # .. 0x109F ; Myanmar - 0x10A0, # .. 0x10C5 ; Georgian - 0x10C6, # .. 0x10C6 ; Unknown - 0x10C7, # .. 0x10C7 ; Georgian - 0x10C8, # .. 0x10CC ; Unknown - 0x10CD, # .. 0x10CD ; Georgian - 0x10CE, # .. 0x10CF ; Unknown - 0x10D0, # .. 0x10FA ; Georgian - 0x10FB, # .. 0x10FB ; Common - 0x10FC, # .. 0x10FF ; Georgian - 0x1100, # .. 0x11FF ; Hangul - 0x1200, # .. 0x1248 ; Ethiopic - 0x1249, # .. 0x1249 ; Unknown - 0x124A, # .. 0x124D ; Ethiopic - 0x124E, # .. 0x124F ; Unknown - 0x1250, # .. 0x1256 ; Ethiopic - 0x1257, # .. 0x1257 ; Unknown - 0x1258, # .. 0x1258 ; Ethiopic - 0x1259, # .. 0x1259 ; Unknown - 0x125A, # .. 0x125D ; Ethiopic - 0x125E, # .. 0x125F ; Unknown - 0x1260, # .. 0x1288 ; Ethiopic - 0x1289, # .. 0x1289 ; Unknown - 0x128A, # .. 0x128D ; Ethiopic - 0x128E, # .. 0x128F ; Unknown - 0x1290, # .. 0x12B0 ; Ethiopic - 0x12B1, # .. 0x12B1 ; Unknown - 0x12B2, # .. 0x12B5 ; Ethiopic - 0x12B6, # .. 0x12B7 ; Unknown - 0x12B8, # .. 0x12BE ; Ethiopic - 0x12BF, # .. 0x12BF ; Unknown - 0x12C0, # .. 0x12C0 ; Ethiopic - 0x12C1, # .. 0x12C1 ; Unknown - 0x12C2, # .. 0x12C5 ; Ethiopic - 0x12C6, # .. 0x12C7 ; Unknown - 0x12C8, # .. 0x12D6 ; Ethiopic - 0x12D7, # .. 0x12D7 ; Unknown - 0x12D8, # .. 0x1310 ; Ethiopic - 0x1311, # .. 0x1311 ; Unknown - 0x1312, # .. 0x1315 ; Ethiopic - 0x1316, # .. 0x1317 ; Unknown - 0x1318, # .. 0x135A ; Ethiopic - 0x135B, # .. 0x135C ; Unknown - 0x135D, # .. 0x137C ; Ethiopic - 0x137D, # .. 0x137F ; Unknown - 0x1380, # .. 0x1399 ; Ethiopic - 0x139A, # .. 0x139F ; Unknown - 0x13A0, # .. 
0x13F5 ; Cherokee - 0x13F6, # .. 0x13F7 ; Unknown - 0x13F8, # .. 0x13FD ; Cherokee - 0x13FE, # .. 0x13FF ; Unknown - 0x1400, # .. 0x167F ; Canadian_Aboriginal - 0x1680, # .. 0x169C ; Ogham - 0x169D, # .. 0x169F ; Unknown - 0x16A0, # .. 0x16EA ; Runic - 0x16EB, # .. 0x16ED ; Common - 0x16EE, # .. 0x16F8 ; Runic - 0x16F9, # .. 0x16FF ; Unknown - 0x1700, # .. 0x1715 ; Tagalog - 0x1716, # .. 0x171E ; Unknown - 0x171F, # .. 0x171F ; Tagalog - 0x1720, # .. 0x1734 ; Hanunoo - 0x1735, # .. 0x1736 ; Common - 0x1737, # .. 0x173F ; Unknown - 0x1740, # .. 0x1753 ; Buhid - 0x1754, # .. 0x175F ; Unknown - 0x1760, # .. 0x176C ; Tagbanwa - 0x176D, # .. 0x176D ; Unknown - 0x176E, # .. 0x1770 ; Tagbanwa - 0x1771, # .. 0x1771 ; Unknown - 0x1772, # .. 0x1773 ; Tagbanwa - 0x1774, # .. 0x177F ; Unknown - 0x1780, # .. 0x17DD ; Khmer - 0x17DE, # .. 0x17DF ; Unknown - 0x17E0, # .. 0x17E9 ; Khmer - 0x17EA, # .. 0x17EF ; Unknown - 0x17F0, # .. 0x17F9 ; Khmer - 0x17FA, # .. 0x17FF ; Unknown - 0x1800, # .. 0x1801 ; Mongolian - 0x1802, # .. 0x1803 ; Common - 0x1804, # .. 0x1804 ; Mongolian - 0x1805, # .. 0x1805 ; Common - 0x1806, # .. 0x1819 ; Mongolian - 0x181A, # .. 0x181F ; Unknown - 0x1820, # .. 0x1878 ; Mongolian - 0x1879, # .. 0x187F ; Unknown - 0x1880, # .. 0x18AA ; Mongolian - 0x18AB, # .. 0x18AF ; Unknown - 0x18B0, # .. 0x18F5 ; Canadian_Aboriginal - 0x18F6, # .. 0x18FF ; Unknown - 0x1900, # .. 0x191E ; Limbu - 0x191F, # .. 0x191F ; Unknown - 0x1920, # .. 0x192B ; Limbu - 0x192C, # .. 0x192F ; Unknown - 0x1930, # .. 0x193B ; Limbu - 0x193C, # .. 0x193F ; Unknown - 0x1940, # .. 0x1940 ; Limbu - 0x1941, # .. 0x1943 ; Unknown - 0x1944, # .. 0x194F ; Limbu - 0x1950, # .. 0x196D ; Tai_Le - 0x196E, # .. 0x196F ; Unknown - 0x1970, # .. 0x1974 ; Tai_Le - 0x1975, # .. 0x197F ; Unknown - 0x1980, # .. 0x19AB ; New_Tai_Lue - 0x19AC, # .. 0x19AF ; Unknown - 0x19B0, # .. 0x19C9 ; New_Tai_Lue - 0x19CA, # .. 0x19CF ; Unknown - 0x19D0, # .. 0x19DA ; New_Tai_Lue - 0x19DB, # .. 0x19DD ; Unknown - 0x19DE, # .. 0x19DF ; New_Tai_Lue - 0x19E0, # .. 0x19FF ; Khmer - 0x1A00, # .. 0x1A1B ; Buginese - 0x1A1C, # .. 0x1A1D ; Unknown - 0x1A1E, # .. 0x1A1F ; Buginese - 0x1A20, # .. 0x1A5E ; Tai_Tham - 0x1A5F, # .. 0x1A5F ; Unknown - 0x1A60, # .. 0x1A7C ; Tai_Tham - 0x1A7D, # .. 0x1A7E ; Unknown - 0x1A7F, # .. 0x1A89 ; Tai_Tham - 0x1A8A, # .. 0x1A8F ; Unknown - 0x1A90, # .. 0x1A99 ; Tai_Tham - 0x1A9A, # .. 0x1A9F ; Unknown - 0x1AA0, # .. 0x1AAD ; Tai_Tham - 0x1AAE, # .. 0x1AAF ; Unknown - 0x1AB0, # .. 0x1ACE ; Inherited - 0x1ACF, # .. 0x1AFF ; Unknown - 0x1B00, # .. 0x1B4C ; Balinese - 0x1B4D, # .. 0x1B4F ; Unknown - 0x1B50, # .. 0x1B7E ; Balinese - 0x1B7F, # .. 0x1B7F ; Unknown - 0x1B80, # .. 0x1BBF ; Sundanese - 0x1BC0, # .. 0x1BF3 ; Batak - 0x1BF4, # .. 0x1BFB ; Unknown - 0x1BFC, # .. 0x1BFF ; Batak - 0x1C00, # .. 0x1C37 ; Lepcha - 0x1C38, # .. 0x1C3A ; Unknown - 0x1C3B, # .. 0x1C49 ; Lepcha - 0x1C4A, # .. 0x1C4C ; Unknown - 0x1C4D, # .. 0x1C4F ; Lepcha - 0x1C50, # .. 0x1C7F ; Ol_Chiki - 0x1C80, # .. 0x1C88 ; Cyrillic - 0x1C89, # .. 0x1C8F ; Unknown - 0x1C90, # .. 0x1CBA ; Georgian - 0x1CBB, # .. 0x1CBC ; Unknown - 0x1CBD, # .. 0x1CBF ; Georgian - 0x1CC0, # .. 0x1CC7 ; Sundanese - 0x1CC8, # .. 0x1CCF ; Unknown - 0x1CD0, # .. 0x1CD2 ; Inherited - 0x1CD3, # .. 0x1CD3 ; Common - 0x1CD4, # .. 0x1CE0 ; Inherited - 0x1CE1, # .. 0x1CE1 ; Common - 0x1CE2, # .. 0x1CE8 ; Inherited - 0x1CE9, # .. 0x1CEC ; Common - 0x1CED, # .. 0x1CED ; Inherited - 0x1CEE, # .. 0x1CF3 ; Common - 0x1CF4, # .. 0x1CF4 ; Inherited - 0x1CF5, # .. 
0x1CF7 ; Common - 0x1CF8, # .. 0x1CF9 ; Inherited - 0x1CFA, # .. 0x1CFA ; Common - 0x1CFB, # .. 0x1CFF ; Unknown - 0x1D00, # .. 0x1D25 ; Latin - 0x1D26, # .. 0x1D2A ; Greek - 0x1D2B, # .. 0x1D2B ; Cyrillic - 0x1D2C, # .. 0x1D5C ; Latin - 0x1D5D, # .. 0x1D61 ; Greek - 0x1D62, # .. 0x1D65 ; Latin - 0x1D66, # .. 0x1D6A ; Greek - 0x1D6B, # .. 0x1D77 ; Latin - 0x1D78, # .. 0x1D78 ; Cyrillic - 0x1D79, # .. 0x1DBE ; Latin - 0x1DBF, # .. 0x1DBF ; Greek - 0x1DC0, # .. 0x1DFF ; Inherited - 0x1E00, # .. 0x1EFF ; Latin - 0x1F00, # .. 0x1F15 ; Greek - 0x1F16, # .. 0x1F17 ; Unknown - 0x1F18, # .. 0x1F1D ; Greek - 0x1F1E, # .. 0x1F1F ; Unknown - 0x1F20, # .. 0x1F45 ; Greek - 0x1F46, # .. 0x1F47 ; Unknown - 0x1F48, # .. 0x1F4D ; Greek - 0x1F4E, # .. 0x1F4F ; Unknown - 0x1F50, # .. 0x1F57 ; Greek - 0x1F58, # .. 0x1F58 ; Unknown - 0x1F59, # .. 0x1F59 ; Greek - 0x1F5A, # .. 0x1F5A ; Unknown - 0x1F5B, # .. 0x1F5B ; Greek - 0x1F5C, # .. 0x1F5C ; Unknown - 0x1F5D, # .. 0x1F5D ; Greek - 0x1F5E, # .. 0x1F5E ; Unknown - 0x1F5F, # .. 0x1F7D ; Greek - 0x1F7E, # .. 0x1F7F ; Unknown - 0x1F80, # .. 0x1FB4 ; Greek - 0x1FB5, # .. 0x1FB5 ; Unknown - 0x1FB6, # .. 0x1FC4 ; Greek - 0x1FC5, # .. 0x1FC5 ; Unknown - 0x1FC6, # .. 0x1FD3 ; Greek - 0x1FD4, # .. 0x1FD5 ; Unknown - 0x1FD6, # .. 0x1FDB ; Greek - 0x1FDC, # .. 0x1FDC ; Unknown - 0x1FDD, # .. 0x1FEF ; Greek - 0x1FF0, # .. 0x1FF1 ; Unknown - 0x1FF2, # .. 0x1FF4 ; Greek - 0x1FF5, # .. 0x1FF5 ; Unknown - 0x1FF6, # .. 0x1FFE ; Greek - 0x1FFF, # .. 0x1FFF ; Unknown - 0x2000, # .. 0x200B ; Common - 0x200C, # .. 0x200D ; Inherited - 0x200E, # .. 0x2064 ; Common - 0x2065, # .. 0x2065 ; Unknown - 0x2066, # .. 0x2070 ; Common - 0x2071, # .. 0x2071 ; Latin - 0x2072, # .. 0x2073 ; Unknown - 0x2074, # .. 0x207E ; Common - 0x207F, # .. 0x207F ; Latin - 0x2080, # .. 0x208E ; Common - 0x208F, # .. 0x208F ; Unknown - 0x2090, # .. 0x209C ; Latin - 0x209D, # .. 0x209F ; Unknown - 0x20A0, # .. 0x20C0 ; Common - 0x20C1, # .. 0x20CF ; Unknown - 0x20D0, # .. 0x20F0 ; Inherited - 0x20F1, # .. 0x20FF ; Unknown - 0x2100, # .. 0x2125 ; Common - 0x2126, # .. 0x2126 ; Greek - 0x2127, # .. 0x2129 ; Common - 0x212A, # .. 0x212B ; Latin - 0x212C, # .. 0x2131 ; Common - 0x2132, # .. 0x2132 ; Latin - 0x2133, # .. 0x214D ; Common - 0x214E, # .. 0x214E ; Latin - 0x214F, # .. 0x215F ; Common - 0x2160, # .. 0x2188 ; Latin - 0x2189, # .. 0x218B ; Common - 0x218C, # .. 0x218F ; Unknown - 0x2190, # .. 0x2426 ; Common - 0x2427, # .. 0x243F ; Unknown - 0x2440, # .. 0x244A ; Common - 0x244B, # .. 0x245F ; Unknown - 0x2460, # .. 0x27FF ; Common - 0x2800, # .. 0x28FF ; Braille - 0x2900, # .. 0x2B73 ; Common - 0x2B74, # .. 0x2B75 ; Unknown - 0x2B76, # .. 0x2B95 ; Common - 0x2B96, # .. 0x2B96 ; Unknown - 0x2B97, # .. 0x2BFF ; Common - 0x2C00, # .. 0x2C5F ; Glagolitic - 0x2C60, # .. 0x2C7F ; Latin - 0x2C80, # .. 0x2CF3 ; Coptic - 0x2CF4, # .. 0x2CF8 ; Unknown - 0x2CF9, # .. 0x2CFF ; Coptic - 0x2D00, # .. 0x2D25 ; Georgian - 0x2D26, # .. 0x2D26 ; Unknown - 0x2D27, # .. 0x2D27 ; Georgian - 0x2D28, # .. 0x2D2C ; Unknown - 0x2D2D, # .. 0x2D2D ; Georgian - 0x2D2E, # .. 0x2D2F ; Unknown - 0x2D30, # .. 0x2D67 ; Tifinagh - 0x2D68, # .. 0x2D6E ; Unknown - 0x2D6F, # .. 0x2D70 ; Tifinagh - 0x2D71, # .. 0x2D7E ; Unknown - 0x2D7F, # .. 0x2D7F ; Tifinagh - 0x2D80, # .. 0x2D96 ; Ethiopic - 0x2D97, # .. 0x2D9F ; Unknown - 0x2DA0, # .. 0x2DA6 ; Ethiopic - 0x2DA7, # .. 0x2DA7 ; Unknown - 0x2DA8, # .. 0x2DAE ; Ethiopic - 0x2DAF, # .. 0x2DAF ; Unknown - 0x2DB0, # .. 0x2DB6 ; Ethiopic - 0x2DB7, # .. 0x2DB7 ; Unknown - 0x2DB8, # .. 
0x2DBE ; Ethiopic - 0x2DBF, # .. 0x2DBF ; Unknown - 0x2DC0, # .. 0x2DC6 ; Ethiopic - 0x2DC7, # .. 0x2DC7 ; Unknown - 0x2DC8, # .. 0x2DCE ; Ethiopic - 0x2DCF, # .. 0x2DCF ; Unknown - 0x2DD0, # .. 0x2DD6 ; Ethiopic - 0x2DD7, # .. 0x2DD7 ; Unknown - 0x2DD8, # .. 0x2DDE ; Ethiopic - 0x2DDF, # .. 0x2DDF ; Unknown - 0x2DE0, # .. 0x2DFF ; Cyrillic - 0x2E00, # .. 0x2E5D ; Common - 0x2E5E, # .. 0x2E7F ; Unknown - 0x2E80, # .. 0x2E99 ; Han - 0x2E9A, # .. 0x2E9A ; Unknown - 0x2E9B, # .. 0x2EF3 ; Han - 0x2EF4, # .. 0x2EFF ; Unknown - 0x2F00, # .. 0x2FD5 ; Han - 0x2FD6, # .. 0x2FEF ; Unknown - 0x2FF0, # .. 0x2FFB ; Common - 0x2FFC, # .. 0x2FFF ; Unknown - 0x3000, # .. 0x3004 ; Common - 0x3005, # .. 0x3005 ; Han - 0x3006, # .. 0x3006 ; Common - 0x3007, # .. 0x3007 ; Han - 0x3008, # .. 0x3020 ; Common - 0x3021, # .. 0x3029 ; Han - 0x302A, # .. 0x302D ; Inherited - 0x302E, # .. 0x302F ; Hangul - 0x3030, # .. 0x3037 ; Common - 0x3038, # .. 0x303B ; Han - 0x303C, # .. 0x303F ; Common - 0x3040, # .. 0x3040 ; Unknown - 0x3041, # .. 0x3096 ; Hiragana - 0x3097, # .. 0x3098 ; Unknown - 0x3099, # .. 0x309A ; Inherited - 0x309B, # .. 0x309C ; Common - 0x309D, # .. 0x309F ; Hiragana - 0x30A0, # .. 0x30A0 ; Common - 0x30A1, # .. 0x30FA ; Katakana - 0x30FB, # .. 0x30FC ; Common - 0x30FD, # .. 0x30FF ; Katakana - 0x3100, # .. 0x3104 ; Unknown - 0x3105, # .. 0x312F ; Bopomofo - 0x3130, # .. 0x3130 ; Unknown - 0x3131, # .. 0x318E ; Hangul - 0x318F, # .. 0x318F ; Unknown - 0x3190, # .. 0x319F ; Common - 0x31A0, # .. 0x31BF ; Bopomofo - 0x31C0, # .. 0x31E3 ; Common - 0x31E4, # .. 0x31EF ; Unknown - 0x31F0, # .. 0x31FF ; Katakana - 0x3200, # .. 0x321E ; Hangul - 0x321F, # .. 0x321F ; Unknown - 0x3220, # .. 0x325F ; Common - 0x3260, # .. 0x327E ; Hangul - 0x327F, # .. 0x32CF ; Common - 0x32D0, # .. 0x32FE ; Katakana - 0x32FF, # .. 0x32FF ; Common - 0x3300, # .. 0x3357 ; Katakana - 0x3358, # .. 0x33FF ; Common - 0x3400, # .. 0x4DBF ; Han - 0x4DC0, # .. 0x4DFF ; Common - 0x4E00, # .. 0x9FFF ; Han - 0xA000, # .. 0xA48C ; Yi - 0xA48D, # .. 0xA48F ; Unknown - 0xA490, # .. 0xA4C6 ; Yi - 0xA4C7, # .. 0xA4CF ; Unknown - 0xA4D0, # .. 0xA4FF ; Lisu - 0xA500, # .. 0xA62B ; Vai - 0xA62C, # .. 0xA63F ; Unknown - 0xA640, # .. 0xA69F ; Cyrillic - 0xA6A0, # .. 0xA6F7 ; Bamum - 0xA6F8, # .. 0xA6FF ; Unknown - 0xA700, # .. 0xA721 ; Common - 0xA722, # .. 0xA787 ; Latin - 0xA788, # .. 0xA78A ; Common - 0xA78B, # .. 0xA7CA ; Latin - 0xA7CB, # .. 0xA7CF ; Unknown - 0xA7D0, # .. 0xA7D1 ; Latin - 0xA7D2, # .. 0xA7D2 ; Unknown - 0xA7D3, # .. 0xA7D3 ; Latin - 0xA7D4, # .. 0xA7D4 ; Unknown - 0xA7D5, # .. 0xA7D9 ; Latin - 0xA7DA, # .. 0xA7F1 ; Unknown - 0xA7F2, # .. 0xA7FF ; Latin - 0xA800, # .. 0xA82C ; Syloti_Nagri - 0xA82D, # .. 0xA82F ; Unknown - 0xA830, # .. 0xA839 ; Common - 0xA83A, # .. 0xA83F ; Unknown - 0xA840, # .. 0xA877 ; Phags_Pa - 0xA878, # .. 0xA87F ; Unknown - 0xA880, # .. 0xA8C5 ; Saurashtra - 0xA8C6, # .. 0xA8CD ; Unknown - 0xA8CE, # .. 0xA8D9 ; Saurashtra - 0xA8DA, # .. 0xA8DF ; Unknown - 0xA8E0, # .. 0xA8FF ; Devanagari - 0xA900, # .. 0xA92D ; Kayah_Li - 0xA92E, # .. 0xA92E ; Common - 0xA92F, # .. 0xA92F ; Kayah_Li - 0xA930, # .. 0xA953 ; Rejang - 0xA954, # .. 0xA95E ; Unknown - 0xA95F, # .. 0xA95F ; Rejang - 0xA960, # .. 0xA97C ; Hangul - 0xA97D, # .. 0xA97F ; Unknown - 0xA980, # .. 0xA9CD ; Javanese - 0xA9CE, # .. 0xA9CE ; Unknown - 0xA9CF, # .. 0xA9CF ; Common - 0xA9D0, # .. 0xA9D9 ; Javanese - 0xA9DA, # .. 0xA9DD ; Unknown - 0xA9DE, # .. 0xA9DF ; Javanese - 0xA9E0, # .. 0xA9FE ; Myanmar - 0xA9FF, # .. 
0xA9FF ; Unknown - 0xAA00, # .. 0xAA36 ; Cham - 0xAA37, # .. 0xAA3F ; Unknown - 0xAA40, # .. 0xAA4D ; Cham - 0xAA4E, # .. 0xAA4F ; Unknown - 0xAA50, # .. 0xAA59 ; Cham - 0xAA5A, # .. 0xAA5B ; Unknown - 0xAA5C, # .. 0xAA5F ; Cham - 0xAA60, # .. 0xAA7F ; Myanmar - 0xAA80, # .. 0xAAC2 ; Tai_Viet - 0xAAC3, # .. 0xAADA ; Unknown - 0xAADB, # .. 0xAADF ; Tai_Viet - 0xAAE0, # .. 0xAAF6 ; Meetei_Mayek - 0xAAF7, # .. 0xAB00 ; Unknown - 0xAB01, # .. 0xAB06 ; Ethiopic - 0xAB07, # .. 0xAB08 ; Unknown - 0xAB09, # .. 0xAB0E ; Ethiopic - 0xAB0F, # .. 0xAB10 ; Unknown - 0xAB11, # .. 0xAB16 ; Ethiopic - 0xAB17, # .. 0xAB1F ; Unknown - 0xAB20, # .. 0xAB26 ; Ethiopic - 0xAB27, # .. 0xAB27 ; Unknown - 0xAB28, # .. 0xAB2E ; Ethiopic - 0xAB2F, # .. 0xAB2F ; Unknown - 0xAB30, # .. 0xAB5A ; Latin - 0xAB5B, # .. 0xAB5B ; Common - 0xAB5C, # .. 0xAB64 ; Latin - 0xAB65, # .. 0xAB65 ; Greek - 0xAB66, # .. 0xAB69 ; Latin - 0xAB6A, # .. 0xAB6B ; Common - 0xAB6C, # .. 0xAB6F ; Unknown - 0xAB70, # .. 0xABBF ; Cherokee - 0xABC0, # .. 0xABED ; Meetei_Mayek - 0xABEE, # .. 0xABEF ; Unknown - 0xABF0, # .. 0xABF9 ; Meetei_Mayek - 0xABFA, # .. 0xABFF ; Unknown - 0xAC00, # .. 0xD7A3 ; Hangul - 0xD7A4, # .. 0xD7AF ; Unknown - 0xD7B0, # .. 0xD7C6 ; Hangul - 0xD7C7, # .. 0xD7CA ; Unknown - 0xD7CB, # .. 0xD7FB ; Hangul - 0xD7FC, # .. 0xF8FF ; Unknown - 0xF900, # .. 0xFA6D ; Han - 0xFA6E, # .. 0xFA6F ; Unknown - 0xFA70, # .. 0xFAD9 ; Han - 0xFADA, # .. 0xFAFF ; Unknown - 0xFB00, # .. 0xFB06 ; Latin - 0xFB07, # .. 0xFB12 ; Unknown - 0xFB13, # .. 0xFB17 ; Armenian - 0xFB18, # .. 0xFB1C ; Unknown - 0xFB1D, # .. 0xFB36 ; Hebrew - 0xFB37, # .. 0xFB37 ; Unknown - 0xFB38, # .. 0xFB3C ; Hebrew - 0xFB3D, # .. 0xFB3D ; Unknown - 0xFB3E, # .. 0xFB3E ; Hebrew - 0xFB3F, # .. 0xFB3F ; Unknown - 0xFB40, # .. 0xFB41 ; Hebrew - 0xFB42, # .. 0xFB42 ; Unknown - 0xFB43, # .. 0xFB44 ; Hebrew - 0xFB45, # .. 0xFB45 ; Unknown - 0xFB46, # .. 0xFB4F ; Hebrew - 0xFB50, # .. 0xFBC2 ; Arabic - 0xFBC3, # .. 0xFBD2 ; Unknown - 0xFBD3, # .. 0xFD3D ; Arabic - 0xFD3E, # .. 0xFD3F ; Common - 0xFD40, # .. 0xFD8F ; Arabic - 0xFD90, # .. 0xFD91 ; Unknown - 0xFD92, # .. 0xFDC7 ; Arabic - 0xFDC8, # .. 0xFDCE ; Unknown - 0xFDCF, # .. 0xFDCF ; Arabic - 0xFDD0, # .. 0xFDEF ; Unknown - 0xFDF0, # .. 0xFDFF ; Arabic - 0xFE00, # .. 0xFE0F ; Inherited - 0xFE10, # .. 0xFE19 ; Common - 0xFE1A, # .. 0xFE1F ; Unknown - 0xFE20, # .. 0xFE2D ; Inherited - 0xFE2E, # .. 0xFE2F ; Cyrillic - 0xFE30, # .. 0xFE52 ; Common - 0xFE53, # .. 0xFE53 ; Unknown - 0xFE54, # .. 0xFE66 ; Common - 0xFE67, # .. 0xFE67 ; Unknown - 0xFE68, # .. 0xFE6B ; Common - 0xFE6C, # .. 0xFE6F ; Unknown - 0xFE70, # .. 0xFE74 ; Arabic - 0xFE75, # .. 0xFE75 ; Unknown - 0xFE76, # .. 0xFEFC ; Arabic - 0xFEFD, # .. 0xFEFE ; Unknown - 0xFEFF, # .. 0xFEFF ; Common - 0xFF00, # .. 0xFF00 ; Unknown - 0xFF01, # .. 0xFF20 ; Common - 0xFF21, # .. 0xFF3A ; Latin - 0xFF3B, # .. 0xFF40 ; Common - 0xFF41, # .. 0xFF5A ; Latin - 0xFF5B, # .. 0xFF65 ; Common - 0xFF66, # .. 0xFF6F ; Katakana - 0xFF70, # .. 0xFF70 ; Common - 0xFF71, # .. 0xFF9D ; Katakana - 0xFF9E, # .. 0xFF9F ; Common - 0xFFA0, # .. 0xFFBE ; Hangul - 0xFFBF, # .. 0xFFC1 ; Unknown - 0xFFC2, # .. 0xFFC7 ; Hangul - 0xFFC8, # .. 0xFFC9 ; Unknown - 0xFFCA, # .. 0xFFCF ; Hangul - 0xFFD0, # .. 0xFFD1 ; Unknown - 0xFFD2, # .. 0xFFD7 ; Hangul - 0xFFD8, # .. 0xFFD9 ; Unknown - 0xFFDA, # .. 0xFFDC ; Hangul - 0xFFDD, # .. 0xFFDF ; Unknown - 0xFFE0, # .. 0xFFE6 ; Common - 0xFFE7, # .. 0xFFE7 ; Unknown - 0xFFE8, # .. 0xFFEE ; Common - 0xFFEF, # .. 0xFFF8 ; Unknown - 0xFFF9, # .. 
0xFFFD ; Common - 0xFFFE, # .. 0xFFFF ; Unknown - 0x10000, # .. 0x1000B ; Linear_B - 0x1000C, # .. 0x1000C ; Unknown - 0x1000D, # .. 0x10026 ; Linear_B - 0x10027, # .. 0x10027 ; Unknown - 0x10028, # .. 0x1003A ; Linear_B - 0x1003B, # .. 0x1003B ; Unknown - 0x1003C, # .. 0x1003D ; Linear_B - 0x1003E, # .. 0x1003E ; Unknown - 0x1003F, # .. 0x1004D ; Linear_B - 0x1004E, # .. 0x1004F ; Unknown - 0x10050, # .. 0x1005D ; Linear_B - 0x1005E, # .. 0x1007F ; Unknown - 0x10080, # .. 0x100FA ; Linear_B - 0x100FB, # .. 0x100FF ; Unknown - 0x10100, # .. 0x10102 ; Common - 0x10103, # .. 0x10106 ; Unknown - 0x10107, # .. 0x10133 ; Common - 0x10134, # .. 0x10136 ; Unknown - 0x10137, # .. 0x1013F ; Common - 0x10140, # .. 0x1018E ; Greek - 0x1018F, # .. 0x1018F ; Unknown - 0x10190, # .. 0x1019C ; Common - 0x1019D, # .. 0x1019F ; Unknown - 0x101A0, # .. 0x101A0 ; Greek - 0x101A1, # .. 0x101CF ; Unknown - 0x101D0, # .. 0x101FC ; Common - 0x101FD, # .. 0x101FD ; Inherited - 0x101FE, # .. 0x1027F ; Unknown - 0x10280, # .. 0x1029C ; Lycian - 0x1029D, # .. 0x1029F ; Unknown - 0x102A0, # .. 0x102D0 ; Carian - 0x102D1, # .. 0x102DF ; Unknown - 0x102E0, # .. 0x102E0 ; Inherited - 0x102E1, # .. 0x102FB ; Common - 0x102FC, # .. 0x102FF ; Unknown - 0x10300, # .. 0x10323 ; Old_Italic - 0x10324, # .. 0x1032C ; Unknown - 0x1032D, # .. 0x1032F ; Old_Italic - 0x10330, # .. 0x1034A ; Gothic - 0x1034B, # .. 0x1034F ; Unknown - 0x10350, # .. 0x1037A ; Old_Permic - 0x1037B, # .. 0x1037F ; Unknown - 0x10380, # .. 0x1039D ; Ugaritic - 0x1039E, # .. 0x1039E ; Unknown - 0x1039F, # .. 0x1039F ; Ugaritic - 0x103A0, # .. 0x103C3 ; Old_Persian - 0x103C4, # .. 0x103C7 ; Unknown - 0x103C8, # .. 0x103D5 ; Old_Persian - 0x103D6, # .. 0x103FF ; Unknown - 0x10400, # .. 0x1044F ; Deseret - 0x10450, # .. 0x1047F ; Shavian - 0x10480, # .. 0x1049D ; Osmanya - 0x1049E, # .. 0x1049F ; Unknown - 0x104A0, # .. 0x104A9 ; Osmanya - 0x104AA, # .. 0x104AF ; Unknown - 0x104B0, # .. 0x104D3 ; Osage - 0x104D4, # .. 0x104D7 ; Unknown - 0x104D8, # .. 0x104FB ; Osage - 0x104FC, # .. 0x104FF ; Unknown - 0x10500, # .. 0x10527 ; Elbasan - 0x10528, # .. 0x1052F ; Unknown - 0x10530, # .. 0x10563 ; Caucasian_Albanian - 0x10564, # .. 0x1056E ; Unknown - 0x1056F, # .. 0x1056F ; Caucasian_Albanian - 0x10570, # .. 0x1057A ; Vithkuqi - 0x1057B, # .. 0x1057B ; Unknown - 0x1057C, # .. 0x1058A ; Vithkuqi - 0x1058B, # .. 0x1058B ; Unknown - 0x1058C, # .. 0x10592 ; Vithkuqi - 0x10593, # .. 0x10593 ; Unknown - 0x10594, # .. 0x10595 ; Vithkuqi - 0x10596, # .. 0x10596 ; Unknown - 0x10597, # .. 0x105A1 ; Vithkuqi - 0x105A2, # .. 0x105A2 ; Unknown - 0x105A3, # .. 0x105B1 ; Vithkuqi - 0x105B2, # .. 0x105B2 ; Unknown - 0x105B3, # .. 0x105B9 ; Vithkuqi - 0x105BA, # .. 0x105BA ; Unknown - 0x105BB, # .. 0x105BC ; Vithkuqi - 0x105BD, # .. 0x105FF ; Unknown - 0x10600, # .. 0x10736 ; Linear_A - 0x10737, # .. 0x1073F ; Unknown - 0x10740, # .. 0x10755 ; Linear_A - 0x10756, # .. 0x1075F ; Unknown - 0x10760, # .. 0x10767 ; Linear_A - 0x10768, # .. 0x1077F ; Unknown - 0x10780, # .. 0x10785 ; Latin - 0x10786, # .. 0x10786 ; Unknown - 0x10787, # .. 0x107B0 ; Latin - 0x107B1, # .. 0x107B1 ; Unknown - 0x107B2, # .. 0x107BA ; Latin - 0x107BB, # .. 0x107FF ; Unknown - 0x10800, # .. 0x10805 ; Cypriot - 0x10806, # .. 0x10807 ; Unknown - 0x10808, # .. 0x10808 ; Cypriot - 0x10809, # .. 0x10809 ; Unknown - 0x1080A, # .. 0x10835 ; Cypriot - 0x10836, # .. 0x10836 ; Unknown - 0x10837, # .. 0x10838 ; Cypriot - 0x10839, # .. 0x1083B ; Unknown - 0x1083C, # .. 0x1083C ; Cypriot - 0x1083D, # .. 
0x1083E ; Unknown - 0x1083F, # .. 0x1083F ; Cypriot - 0x10840, # .. 0x10855 ; Imperial_Aramaic - 0x10856, # .. 0x10856 ; Unknown - 0x10857, # .. 0x1085F ; Imperial_Aramaic - 0x10860, # .. 0x1087F ; Palmyrene - 0x10880, # .. 0x1089E ; Nabataean - 0x1089F, # .. 0x108A6 ; Unknown - 0x108A7, # .. 0x108AF ; Nabataean - 0x108B0, # .. 0x108DF ; Unknown - 0x108E0, # .. 0x108F2 ; Hatran - 0x108F3, # .. 0x108F3 ; Unknown - 0x108F4, # .. 0x108F5 ; Hatran - 0x108F6, # .. 0x108FA ; Unknown - 0x108FB, # .. 0x108FF ; Hatran - 0x10900, # .. 0x1091B ; Phoenician - 0x1091C, # .. 0x1091E ; Unknown - 0x1091F, # .. 0x1091F ; Phoenician - 0x10920, # .. 0x10939 ; Lydian - 0x1093A, # .. 0x1093E ; Unknown - 0x1093F, # .. 0x1093F ; Lydian - 0x10940, # .. 0x1097F ; Unknown - 0x10980, # .. 0x1099F ; Meroitic_Hieroglyphs - 0x109A0, # .. 0x109B7 ; Meroitic_Cursive - 0x109B8, # .. 0x109BB ; Unknown - 0x109BC, # .. 0x109CF ; Meroitic_Cursive - 0x109D0, # .. 0x109D1 ; Unknown - 0x109D2, # .. 0x109FF ; Meroitic_Cursive - 0x10A00, # .. 0x10A03 ; Kharoshthi - 0x10A04, # .. 0x10A04 ; Unknown - 0x10A05, # .. 0x10A06 ; Kharoshthi - 0x10A07, # .. 0x10A0B ; Unknown - 0x10A0C, # .. 0x10A13 ; Kharoshthi - 0x10A14, # .. 0x10A14 ; Unknown - 0x10A15, # .. 0x10A17 ; Kharoshthi - 0x10A18, # .. 0x10A18 ; Unknown - 0x10A19, # .. 0x10A35 ; Kharoshthi - 0x10A36, # .. 0x10A37 ; Unknown - 0x10A38, # .. 0x10A3A ; Kharoshthi - 0x10A3B, # .. 0x10A3E ; Unknown - 0x10A3F, # .. 0x10A48 ; Kharoshthi - 0x10A49, # .. 0x10A4F ; Unknown - 0x10A50, # .. 0x10A58 ; Kharoshthi - 0x10A59, # .. 0x10A5F ; Unknown - 0x10A60, # .. 0x10A7F ; Old_South_Arabian - 0x10A80, # .. 0x10A9F ; Old_North_Arabian - 0x10AA0, # .. 0x10ABF ; Unknown - 0x10AC0, # .. 0x10AE6 ; Manichaean - 0x10AE7, # .. 0x10AEA ; Unknown - 0x10AEB, # .. 0x10AF6 ; Manichaean - 0x10AF7, # .. 0x10AFF ; Unknown - 0x10B00, # .. 0x10B35 ; Avestan - 0x10B36, # .. 0x10B38 ; Unknown - 0x10B39, # .. 0x10B3F ; Avestan - 0x10B40, # .. 0x10B55 ; Inscriptional_Parthian - 0x10B56, # .. 0x10B57 ; Unknown - 0x10B58, # .. 0x10B5F ; Inscriptional_Parthian - 0x10B60, # .. 0x10B72 ; Inscriptional_Pahlavi - 0x10B73, # .. 0x10B77 ; Unknown - 0x10B78, # .. 0x10B7F ; Inscriptional_Pahlavi - 0x10B80, # .. 0x10B91 ; Psalter_Pahlavi - 0x10B92, # .. 0x10B98 ; Unknown - 0x10B99, # .. 0x10B9C ; Psalter_Pahlavi - 0x10B9D, # .. 0x10BA8 ; Unknown - 0x10BA9, # .. 0x10BAF ; Psalter_Pahlavi - 0x10BB0, # .. 0x10BFF ; Unknown - 0x10C00, # .. 0x10C48 ; Old_Turkic - 0x10C49, # .. 0x10C7F ; Unknown - 0x10C80, # .. 0x10CB2 ; Old_Hungarian - 0x10CB3, # .. 0x10CBF ; Unknown - 0x10CC0, # .. 0x10CF2 ; Old_Hungarian - 0x10CF3, # .. 0x10CF9 ; Unknown - 0x10CFA, # .. 0x10CFF ; Old_Hungarian - 0x10D00, # .. 0x10D27 ; Hanifi_Rohingya - 0x10D28, # .. 0x10D2F ; Unknown - 0x10D30, # .. 0x10D39 ; Hanifi_Rohingya - 0x10D3A, # .. 0x10E5F ; Unknown - 0x10E60, # .. 0x10E7E ; Arabic - 0x10E7F, # .. 0x10E7F ; Unknown - 0x10E80, # .. 0x10EA9 ; Yezidi - 0x10EAA, # .. 0x10EAA ; Unknown - 0x10EAB, # .. 0x10EAD ; Yezidi - 0x10EAE, # .. 0x10EAF ; Unknown - 0x10EB0, # .. 0x10EB1 ; Yezidi - 0x10EB2, # .. 0x10EFC ; Unknown - 0x10EFD, # .. 0x10EFF ; Arabic - 0x10F00, # .. 0x10F27 ; Old_Sogdian - 0x10F28, # .. 0x10F2F ; Unknown - 0x10F30, # .. 0x10F59 ; Sogdian - 0x10F5A, # .. 0x10F6F ; Unknown - 0x10F70, # .. 0x10F89 ; Old_Uyghur - 0x10F8A, # .. 0x10FAF ; Unknown - 0x10FB0, # .. 0x10FCB ; Chorasmian - 0x10FCC, # .. 0x10FDF ; Unknown - 0x10FE0, # .. 0x10FF6 ; Elymaic - 0x10FF7, # .. 0x10FFF ; Unknown - 0x11000, # .. 0x1104D ; Brahmi - 0x1104E, # .. 
0x11051 ; Unknown - 0x11052, # .. 0x11075 ; Brahmi - 0x11076, # .. 0x1107E ; Unknown - 0x1107F, # .. 0x1107F ; Brahmi - 0x11080, # .. 0x110C2 ; Kaithi - 0x110C3, # .. 0x110CC ; Unknown - 0x110CD, # .. 0x110CD ; Kaithi - 0x110CE, # .. 0x110CF ; Unknown - 0x110D0, # .. 0x110E8 ; Sora_Sompeng - 0x110E9, # .. 0x110EF ; Unknown - 0x110F0, # .. 0x110F9 ; Sora_Sompeng - 0x110FA, # .. 0x110FF ; Unknown - 0x11100, # .. 0x11134 ; Chakma - 0x11135, # .. 0x11135 ; Unknown - 0x11136, # .. 0x11147 ; Chakma - 0x11148, # .. 0x1114F ; Unknown - 0x11150, # .. 0x11176 ; Mahajani - 0x11177, # .. 0x1117F ; Unknown - 0x11180, # .. 0x111DF ; Sharada - 0x111E0, # .. 0x111E0 ; Unknown - 0x111E1, # .. 0x111F4 ; Sinhala - 0x111F5, # .. 0x111FF ; Unknown - 0x11200, # .. 0x11211 ; Khojki - 0x11212, # .. 0x11212 ; Unknown - 0x11213, # .. 0x11241 ; Khojki - 0x11242, # .. 0x1127F ; Unknown - 0x11280, # .. 0x11286 ; Multani - 0x11287, # .. 0x11287 ; Unknown - 0x11288, # .. 0x11288 ; Multani - 0x11289, # .. 0x11289 ; Unknown - 0x1128A, # .. 0x1128D ; Multani - 0x1128E, # .. 0x1128E ; Unknown - 0x1128F, # .. 0x1129D ; Multani - 0x1129E, # .. 0x1129E ; Unknown - 0x1129F, # .. 0x112A9 ; Multani - 0x112AA, # .. 0x112AF ; Unknown - 0x112B0, # .. 0x112EA ; Khudawadi - 0x112EB, # .. 0x112EF ; Unknown - 0x112F0, # .. 0x112F9 ; Khudawadi - 0x112FA, # .. 0x112FF ; Unknown - 0x11300, # .. 0x11303 ; Grantha - 0x11304, # .. 0x11304 ; Unknown - 0x11305, # .. 0x1130C ; Grantha - 0x1130D, # .. 0x1130E ; Unknown - 0x1130F, # .. 0x11310 ; Grantha - 0x11311, # .. 0x11312 ; Unknown - 0x11313, # .. 0x11328 ; Grantha - 0x11329, # .. 0x11329 ; Unknown - 0x1132A, # .. 0x11330 ; Grantha - 0x11331, # .. 0x11331 ; Unknown - 0x11332, # .. 0x11333 ; Grantha - 0x11334, # .. 0x11334 ; Unknown - 0x11335, # .. 0x11339 ; Grantha - 0x1133A, # .. 0x1133A ; Unknown - 0x1133B, # .. 0x1133B ; Inherited - 0x1133C, # .. 0x11344 ; Grantha - 0x11345, # .. 0x11346 ; Unknown - 0x11347, # .. 0x11348 ; Grantha - 0x11349, # .. 0x1134A ; Unknown - 0x1134B, # .. 0x1134D ; Grantha - 0x1134E, # .. 0x1134F ; Unknown - 0x11350, # .. 0x11350 ; Grantha - 0x11351, # .. 0x11356 ; Unknown - 0x11357, # .. 0x11357 ; Grantha - 0x11358, # .. 0x1135C ; Unknown - 0x1135D, # .. 0x11363 ; Grantha - 0x11364, # .. 0x11365 ; Unknown - 0x11366, # .. 0x1136C ; Grantha - 0x1136D, # .. 0x1136F ; Unknown - 0x11370, # .. 0x11374 ; Grantha - 0x11375, # .. 0x113FF ; Unknown - 0x11400, # .. 0x1145B ; Newa - 0x1145C, # .. 0x1145C ; Unknown - 0x1145D, # .. 0x11461 ; Newa - 0x11462, # .. 0x1147F ; Unknown - 0x11480, # .. 0x114C7 ; Tirhuta - 0x114C8, # .. 0x114CF ; Unknown - 0x114D0, # .. 0x114D9 ; Tirhuta - 0x114DA, # .. 0x1157F ; Unknown - 0x11580, # .. 0x115B5 ; Siddham - 0x115B6, # .. 0x115B7 ; Unknown - 0x115B8, # .. 0x115DD ; Siddham - 0x115DE, # .. 0x115FF ; Unknown - 0x11600, # .. 0x11644 ; Modi - 0x11645, # .. 0x1164F ; Unknown - 0x11650, # .. 0x11659 ; Modi - 0x1165A, # .. 0x1165F ; Unknown - 0x11660, # .. 0x1166C ; Mongolian - 0x1166D, # .. 0x1167F ; Unknown - 0x11680, # .. 0x116B9 ; Takri - 0x116BA, # .. 0x116BF ; Unknown - 0x116C0, # .. 0x116C9 ; Takri - 0x116CA, # .. 0x116FF ; Unknown - 0x11700, # .. 0x1171A ; Ahom - 0x1171B, # .. 0x1171C ; Unknown - 0x1171D, # .. 0x1172B ; Ahom - 0x1172C, # .. 0x1172F ; Unknown - 0x11730, # .. 0x11746 ; Ahom - 0x11747, # .. 0x117FF ; Unknown - 0x11800, # .. 0x1183B ; Dogra - 0x1183C, # .. 0x1189F ; Unknown - 0x118A0, # .. 0x118F2 ; Warang_Citi - 0x118F3, # .. 0x118FE ; Unknown - 0x118FF, # .. 0x118FF ; Warang_Citi - 0x11900, # .. 
0x11906 ; Dives_Akuru - 0x11907, # .. 0x11908 ; Unknown - 0x11909, # .. 0x11909 ; Dives_Akuru - 0x1190A, # .. 0x1190B ; Unknown - 0x1190C, # .. 0x11913 ; Dives_Akuru - 0x11914, # .. 0x11914 ; Unknown - 0x11915, # .. 0x11916 ; Dives_Akuru - 0x11917, # .. 0x11917 ; Unknown - 0x11918, # .. 0x11935 ; Dives_Akuru - 0x11936, # .. 0x11936 ; Unknown - 0x11937, # .. 0x11938 ; Dives_Akuru - 0x11939, # .. 0x1193A ; Unknown - 0x1193B, # .. 0x11946 ; Dives_Akuru - 0x11947, # .. 0x1194F ; Unknown - 0x11950, # .. 0x11959 ; Dives_Akuru - 0x1195A, # .. 0x1199F ; Unknown - 0x119A0, # .. 0x119A7 ; Nandinagari - 0x119A8, # .. 0x119A9 ; Unknown - 0x119AA, # .. 0x119D7 ; Nandinagari - 0x119D8, # .. 0x119D9 ; Unknown - 0x119DA, # .. 0x119E4 ; Nandinagari - 0x119E5, # .. 0x119FF ; Unknown - 0x11A00, # .. 0x11A47 ; Zanabazar_Square - 0x11A48, # .. 0x11A4F ; Unknown - 0x11A50, # .. 0x11AA2 ; Soyombo - 0x11AA3, # .. 0x11AAF ; Unknown - 0x11AB0, # .. 0x11ABF ; Canadian_Aboriginal - 0x11AC0, # .. 0x11AF8 ; Pau_Cin_Hau - 0x11AF9, # .. 0x11AFF ; Unknown - 0x11B00, # .. 0x11B09 ; Devanagari - 0x11B0A, # .. 0x11BFF ; Unknown - 0x11C00, # .. 0x11C08 ; Bhaiksuki - 0x11C09, # .. 0x11C09 ; Unknown - 0x11C0A, # .. 0x11C36 ; Bhaiksuki - 0x11C37, # .. 0x11C37 ; Unknown - 0x11C38, # .. 0x11C45 ; Bhaiksuki - 0x11C46, # .. 0x11C4F ; Unknown - 0x11C50, # .. 0x11C6C ; Bhaiksuki - 0x11C6D, # .. 0x11C6F ; Unknown - 0x11C70, # .. 0x11C8F ; Marchen - 0x11C90, # .. 0x11C91 ; Unknown - 0x11C92, # .. 0x11CA7 ; Marchen - 0x11CA8, # .. 0x11CA8 ; Unknown - 0x11CA9, # .. 0x11CB6 ; Marchen - 0x11CB7, # .. 0x11CFF ; Unknown - 0x11D00, # .. 0x11D06 ; Masaram_Gondi - 0x11D07, # .. 0x11D07 ; Unknown - 0x11D08, # .. 0x11D09 ; Masaram_Gondi - 0x11D0A, # .. 0x11D0A ; Unknown - 0x11D0B, # .. 0x11D36 ; Masaram_Gondi - 0x11D37, # .. 0x11D39 ; Unknown - 0x11D3A, # .. 0x11D3A ; Masaram_Gondi - 0x11D3B, # .. 0x11D3B ; Unknown - 0x11D3C, # .. 0x11D3D ; Masaram_Gondi - 0x11D3E, # .. 0x11D3E ; Unknown - 0x11D3F, # .. 0x11D47 ; Masaram_Gondi - 0x11D48, # .. 0x11D4F ; Unknown - 0x11D50, # .. 0x11D59 ; Masaram_Gondi - 0x11D5A, # .. 0x11D5F ; Unknown - 0x11D60, # .. 0x11D65 ; Gunjala_Gondi - 0x11D66, # .. 0x11D66 ; Unknown - 0x11D67, # .. 0x11D68 ; Gunjala_Gondi - 0x11D69, # .. 0x11D69 ; Unknown - 0x11D6A, # .. 0x11D8E ; Gunjala_Gondi - 0x11D8F, # .. 0x11D8F ; Unknown - 0x11D90, # .. 0x11D91 ; Gunjala_Gondi - 0x11D92, # .. 0x11D92 ; Unknown - 0x11D93, # .. 0x11D98 ; Gunjala_Gondi - 0x11D99, # .. 0x11D9F ; Unknown - 0x11DA0, # .. 0x11DA9 ; Gunjala_Gondi - 0x11DAA, # .. 0x11EDF ; Unknown - 0x11EE0, # .. 0x11EF8 ; Makasar - 0x11EF9, # .. 0x11EFF ; Unknown - 0x11F00, # .. 0x11F10 ; Kawi - 0x11F11, # .. 0x11F11 ; Unknown - 0x11F12, # .. 0x11F3A ; Kawi - 0x11F3B, # .. 0x11F3D ; Unknown - 0x11F3E, # .. 0x11F59 ; Kawi - 0x11F5A, # .. 0x11FAF ; Unknown - 0x11FB0, # .. 0x11FB0 ; Lisu - 0x11FB1, # .. 0x11FBF ; Unknown - 0x11FC0, # .. 0x11FF1 ; Tamil - 0x11FF2, # .. 0x11FFE ; Unknown - 0x11FFF, # .. 0x11FFF ; Tamil - 0x12000, # .. 0x12399 ; Cuneiform - 0x1239A, # .. 0x123FF ; Unknown - 0x12400, # .. 0x1246E ; Cuneiform - 0x1246F, # .. 0x1246F ; Unknown - 0x12470, # .. 0x12474 ; Cuneiform - 0x12475, # .. 0x1247F ; Unknown - 0x12480, # .. 0x12543 ; Cuneiform - 0x12544, # .. 0x12F8F ; Unknown - 0x12F90, # .. 0x12FF2 ; Cypro_Minoan - 0x12FF3, # .. 0x12FFF ; Unknown - 0x13000, # .. 0x13455 ; Egyptian_Hieroglyphs - 0x13456, # .. 0x143FF ; Unknown - 0x14400, # .. 0x14646 ; Anatolian_Hieroglyphs - 0x14647, # .. 0x167FF ; Unknown - 0x16800, # .. 0x16A38 ; Bamum - 0x16A39, # .. 
0x16A3F ; Unknown - 0x16A40, # .. 0x16A5E ; Mro - 0x16A5F, # .. 0x16A5F ; Unknown - 0x16A60, # .. 0x16A69 ; Mro - 0x16A6A, # .. 0x16A6D ; Unknown - 0x16A6E, # .. 0x16A6F ; Mro - 0x16A70, # .. 0x16ABE ; Tangsa - 0x16ABF, # .. 0x16ABF ; Unknown - 0x16AC0, # .. 0x16AC9 ; Tangsa - 0x16ACA, # .. 0x16ACF ; Unknown - 0x16AD0, # .. 0x16AED ; Bassa_Vah - 0x16AEE, # .. 0x16AEF ; Unknown - 0x16AF0, # .. 0x16AF5 ; Bassa_Vah - 0x16AF6, # .. 0x16AFF ; Unknown - 0x16B00, # .. 0x16B45 ; Pahawh_Hmong - 0x16B46, # .. 0x16B4F ; Unknown - 0x16B50, # .. 0x16B59 ; Pahawh_Hmong - 0x16B5A, # .. 0x16B5A ; Unknown - 0x16B5B, # .. 0x16B61 ; Pahawh_Hmong - 0x16B62, # .. 0x16B62 ; Unknown - 0x16B63, # .. 0x16B77 ; Pahawh_Hmong - 0x16B78, # .. 0x16B7C ; Unknown - 0x16B7D, # .. 0x16B8F ; Pahawh_Hmong - 0x16B90, # .. 0x16E3F ; Unknown - 0x16E40, # .. 0x16E9A ; Medefaidrin - 0x16E9B, # .. 0x16EFF ; Unknown - 0x16F00, # .. 0x16F4A ; Miao - 0x16F4B, # .. 0x16F4E ; Unknown - 0x16F4F, # .. 0x16F87 ; Miao - 0x16F88, # .. 0x16F8E ; Unknown - 0x16F8F, # .. 0x16F9F ; Miao - 0x16FA0, # .. 0x16FDF ; Unknown - 0x16FE0, # .. 0x16FE0 ; Tangut - 0x16FE1, # .. 0x16FE1 ; Nushu - 0x16FE2, # .. 0x16FE3 ; Han - 0x16FE4, # .. 0x16FE4 ; Khitan_Small_Script - 0x16FE5, # .. 0x16FEF ; Unknown - 0x16FF0, # .. 0x16FF1 ; Han - 0x16FF2, # .. 0x16FFF ; Unknown - 0x17000, # .. 0x187F7 ; Tangut - 0x187F8, # .. 0x187FF ; Unknown - 0x18800, # .. 0x18AFF ; Tangut - 0x18B00, # .. 0x18CD5 ; Khitan_Small_Script - 0x18CD6, # .. 0x18CFF ; Unknown - 0x18D00, # .. 0x18D08 ; Tangut - 0x18D09, # .. 0x1AFEF ; Unknown - 0x1AFF0, # .. 0x1AFF3 ; Katakana - 0x1AFF4, # .. 0x1AFF4 ; Unknown - 0x1AFF5, # .. 0x1AFFB ; Katakana - 0x1AFFC, # .. 0x1AFFC ; Unknown - 0x1AFFD, # .. 0x1AFFE ; Katakana - 0x1AFFF, # .. 0x1AFFF ; Unknown - 0x1B000, # .. 0x1B000 ; Katakana - 0x1B001, # .. 0x1B11F ; Hiragana - 0x1B120, # .. 0x1B122 ; Katakana - 0x1B123, # .. 0x1B131 ; Unknown - 0x1B132, # .. 0x1B132 ; Hiragana - 0x1B133, # .. 0x1B14F ; Unknown - 0x1B150, # .. 0x1B152 ; Hiragana - 0x1B153, # .. 0x1B154 ; Unknown - 0x1B155, # .. 0x1B155 ; Katakana - 0x1B156, # .. 0x1B163 ; Unknown - 0x1B164, # .. 0x1B167 ; Katakana - 0x1B168, # .. 0x1B16F ; Unknown - 0x1B170, # .. 0x1B2FB ; Nushu - 0x1B2FC, # .. 0x1BBFF ; Unknown - 0x1BC00, # .. 0x1BC6A ; Duployan - 0x1BC6B, # .. 0x1BC6F ; Unknown - 0x1BC70, # .. 0x1BC7C ; Duployan - 0x1BC7D, # .. 0x1BC7F ; Unknown - 0x1BC80, # .. 0x1BC88 ; Duployan - 0x1BC89, # .. 0x1BC8F ; Unknown - 0x1BC90, # .. 0x1BC99 ; Duployan - 0x1BC9A, # .. 0x1BC9B ; Unknown - 0x1BC9C, # .. 0x1BC9F ; Duployan - 0x1BCA0, # .. 0x1BCA3 ; Common - 0x1BCA4, # .. 0x1CEFF ; Unknown - 0x1CF00, # .. 0x1CF2D ; Inherited - 0x1CF2E, # .. 0x1CF2F ; Unknown - 0x1CF30, # .. 0x1CF46 ; Inherited - 0x1CF47, # .. 0x1CF4F ; Unknown - 0x1CF50, # .. 0x1CFC3 ; Common - 0x1CFC4, # .. 0x1CFFF ; Unknown - 0x1D000, # .. 0x1D0F5 ; Common - 0x1D0F6, # .. 0x1D0FF ; Unknown - 0x1D100, # .. 0x1D126 ; Common - 0x1D127, # .. 0x1D128 ; Unknown - 0x1D129, # .. 0x1D166 ; Common - 0x1D167, # .. 0x1D169 ; Inherited - 0x1D16A, # .. 0x1D17A ; Common - 0x1D17B, # .. 0x1D182 ; Inherited - 0x1D183, # .. 0x1D184 ; Common - 0x1D185, # .. 0x1D18B ; Inherited - 0x1D18C, # .. 0x1D1A9 ; Common - 0x1D1AA, # .. 0x1D1AD ; Inherited - 0x1D1AE, # .. 0x1D1EA ; Common - 0x1D1EB, # .. 0x1D1FF ; Unknown - 0x1D200, # .. 0x1D245 ; Greek - 0x1D246, # .. 0x1D2BF ; Unknown - 0x1D2C0, # .. 0x1D2D3 ; Common - 0x1D2D4, # .. 0x1D2DF ; Unknown - 0x1D2E0, # .. 0x1D2F3 ; Common - 0x1D2F4, # .. 0x1D2FF ; Unknown - 0x1D300, # .. 
0x1D356 ; Common - 0x1D357, # .. 0x1D35F ; Unknown - 0x1D360, # .. 0x1D378 ; Common - 0x1D379, # .. 0x1D3FF ; Unknown - 0x1D400, # .. 0x1D454 ; Common - 0x1D455, # .. 0x1D455 ; Unknown - 0x1D456, # .. 0x1D49C ; Common - 0x1D49D, # .. 0x1D49D ; Unknown - 0x1D49E, # .. 0x1D49F ; Common - 0x1D4A0, # .. 0x1D4A1 ; Unknown - 0x1D4A2, # .. 0x1D4A2 ; Common - 0x1D4A3, # .. 0x1D4A4 ; Unknown - 0x1D4A5, # .. 0x1D4A6 ; Common - 0x1D4A7, # .. 0x1D4A8 ; Unknown - 0x1D4A9, # .. 0x1D4AC ; Common - 0x1D4AD, # .. 0x1D4AD ; Unknown - 0x1D4AE, # .. 0x1D4B9 ; Common - 0x1D4BA, # .. 0x1D4BA ; Unknown - 0x1D4BB, # .. 0x1D4BB ; Common - 0x1D4BC, # .. 0x1D4BC ; Unknown - 0x1D4BD, # .. 0x1D4C3 ; Common - 0x1D4C4, # .. 0x1D4C4 ; Unknown - 0x1D4C5, # .. 0x1D505 ; Common - 0x1D506, # .. 0x1D506 ; Unknown - 0x1D507, # .. 0x1D50A ; Common - 0x1D50B, # .. 0x1D50C ; Unknown - 0x1D50D, # .. 0x1D514 ; Common - 0x1D515, # .. 0x1D515 ; Unknown - 0x1D516, # .. 0x1D51C ; Common - 0x1D51D, # .. 0x1D51D ; Unknown - 0x1D51E, # .. 0x1D539 ; Common - 0x1D53A, # .. 0x1D53A ; Unknown - 0x1D53B, # .. 0x1D53E ; Common - 0x1D53F, # .. 0x1D53F ; Unknown - 0x1D540, # .. 0x1D544 ; Common - 0x1D545, # .. 0x1D545 ; Unknown - 0x1D546, # .. 0x1D546 ; Common - 0x1D547, # .. 0x1D549 ; Unknown - 0x1D54A, # .. 0x1D550 ; Common - 0x1D551, # .. 0x1D551 ; Unknown - 0x1D552, # .. 0x1D6A5 ; Common - 0x1D6A6, # .. 0x1D6A7 ; Unknown - 0x1D6A8, # .. 0x1D7CB ; Common - 0x1D7CC, # .. 0x1D7CD ; Unknown - 0x1D7CE, # .. 0x1D7FF ; Common - 0x1D800, # .. 0x1DA8B ; SignWriting - 0x1DA8C, # .. 0x1DA9A ; Unknown - 0x1DA9B, # .. 0x1DA9F ; SignWriting - 0x1DAA0, # .. 0x1DAA0 ; Unknown - 0x1DAA1, # .. 0x1DAAF ; SignWriting - 0x1DAB0, # .. 0x1DEFF ; Unknown - 0x1DF00, # .. 0x1DF1E ; Latin - 0x1DF1F, # .. 0x1DF24 ; Unknown - 0x1DF25, # .. 0x1DF2A ; Latin - 0x1DF2B, # .. 0x1DFFF ; Unknown - 0x1E000, # .. 0x1E006 ; Glagolitic - 0x1E007, # .. 0x1E007 ; Unknown - 0x1E008, # .. 0x1E018 ; Glagolitic - 0x1E019, # .. 0x1E01A ; Unknown - 0x1E01B, # .. 0x1E021 ; Glagolitic - 0x1E022, # .. 0x1E022 ; Unknown - 0x1E023, # .. 0x1E024 ; Glagolitic - 0x1E025, # .. 0x1E025 ; Unknown - 0x1E026, # .. 0x1E02A ; Glagolitic - 0x1E02B, # .. 0x1E02F ; Unknown - 0x1E030, # .. 0x1E06D ; Cyrillic - 0x1E06E, # .. 0x1E08E ; Unknown - 0x1E08F, # .. 0x1E08F ; Cyrillic - 0x1E090, # .. 0x1E0FF ; Unknown - 0x1E100, # .. 0x1E12C ; Nyiakeng_Puachue_Hmong - 0x1E12D, # .. 0x1E12F ; Unknown - 0x1E130, # .. 0x1E13D ; Nyiakeng_Puachue_Hmong - 0x1E13E, # .. 0x1E13F ; Unknown - 0x1E140, # .. 0x1E149 ; Nyiakeng_Puachue_Hmong - 0x1E14A, # .. 0x1E14D ; Unknown - 0x1E14E, # .. 0x1E14F ; Nyiakeng_Puachue_Hmong - 0x1E150, # .. 0x1E28F ; Unknown - 0x1E290, # .. 0x1E2AE ; Toto - 0x1E2AF, # .. 0x1E2BF ; Unknown - 0x1E2C0, # .. 0x1E2F9 ; Wancho - 0x1E2FA, # .. 0x1E2FE ; Unknown - 0x1E2FF, # .. 0x1E2FF ; Wancho - 0x1E300, # .. 0x1E4CF ; Unknown - 0x1E4D0, # .. 0x1E4F9 ; Nag_Mundari - 0x1E4FA, # .. 0x1E7DF ; Unknown - 0x1E7E0, # .. 0x1E7E6 ; Ethiopic - 0x1E7E7, # .. 0x1E7E7 ; Unknown - 0x1E7E8, # .. 0x1E7EB ; Ethiopic - 0x1E7EC, # .. 0x1E7EC ; Unknown - 0x1E7ED, # .. 0x1E7EE ; Ethiopic - 0x1E7EF, # .. 0x1E7EF ; Unknown - 0x1E7F0, # .. 0x1E7FE ; Ethiopic - 0x1E7FF, # .. 0x1E7FF ; Unknown - 0x1E800, # .. 0x1E8C4 ; Mende_Kikakui - 0x1E8C5, # .. 0x1E8C6 ; Unknown - 0x1E8C7, # .. 0x1E8D6 ; Mende_Kikakui - 0x1E8D7, # .. 0x1E8FF ; Unknown - 0x1E900, # .. 0x1E94B ; Adlam - 0x1E94C, # .. 0x1E94F ; Unknown - 0x1E950, # .. 0x1E959 ; Adlam - 0x1E95A, # .. 0x1E95D ; Unknown - 0x1E95E, # .. 0x1E95F ; Adlam - 0x1E960, # .. 
0x1EC70 ; Unknown - 0x1EC71, # .. 0x1ECB4 ; Common - 0x1ECB5, # .. 0x1ED00 ; Unknown - 0x1ED01, # .. 0x1ED3D ; Common - 0x1ED3E, # .. 0x1EDFF ; Unknown - 0x1EE00, # .. 0x1EE03 ; Arabic - 0x1EE04, # .. 0x1EE04 ; Unknown - 0x1EE05, # .. 0x1EE1F ; Arabic - 0x1EE20, # .. 0x1EE20 ; Unknown - 0x1EE21, # .. 0x1EE22 ; Arabic - 0x1EE23, # .. 0x1EE23 ; Unknown - 0x1EE24, # .. 0x1EE24 ; Arabic - 0x1EE25, # .. 0x1EE26 ; Unknown - 0x1EE27, # .. 0x1EE27 ; Arabic - 0x1EE28, # .. 0x1EE28 ; Unknown - 0x1EE29, # .. 0x1EE32 ; Arabic - 0x1EE33, # .. 0x1EE33 ; Unknown - 0x1EE34, # .. 0x1EE37 ; Arabic - 0x1EE38, # .. 0x1EE38 ; Unknown - 0x1EE39, # .. 0x1EE39 ; Arabic - 0x1EE3A, # .. 0x1EE3A ; Unknown - 0x1EE3B, # .. 0x1EE3B ; Arabic - 0x1EE3C, # .. 0x1EE41 ; Unknown - 0x1EE42, # .. 0x1EE42 ; Arabic - 0x1EE43, # .. 0x1EE46 ; Unknown - 0x1EE47, # .. 0x1EE47 ; Arabic - 0x1EE48, # .. 0x1EE48 ; Unknown - 0x1EE49, # .. 0x1EE49 ; Arabic - 0x1EE4A, # .. 0x1EE4A ; Unknown - 0x1EE4B, # .. 0x1EE4B ; Arabic - 0x1EE4C, # .. 0x1EE4C ; Unknown - 0x1EE4D, # .. 0x1EE4F ; Arabic - 0x1EE50, # .. 0x1EE50 ; Unknown - 0x1EE51, # .. 0x1EE52 ; Arabic - 0x1EE53, # .. 0x1EE53 ; Unknown - 0x1EE54, # .. 0x1EE54 ; Arabic - 0x1EE55, # .. 0x1EE56 ; Unknown - 0x1EE57, # .. 0x1EE57 ; Arabic - 0x1EE58, # .. 0x1EE58 ; Unknown - 0x1EE59, # .. 0x1EE59 ; Arabic - 0x1EE5A, # .. 0x1EE5A ; Unknown - 0x1EE5B, # .. 0x1EE5B ; Arabic - 0x1EE5C, # .. 0x1EE5C ; Unknown - 0x1EE5D, # .. 0x1EE5D ; Arabic - 0x1EE5E, # .. 0x1EE5E ; Unknown - 0x1EE5F, # .. 0x1EE5F ; Arabic - 0x1EE60, # .. 0x1EE60 ; Unknown - 0x1EE61, # .. 0x1EE62 ; Arabic - 0x1EE63, # .. 0x1EE63 ; Unknown - 0x1EE64, # .. 0x1EE64 ; Arabic - 0x1EE65, # .. 0x1EE66 ; Unknown - 0x1EE67, # .. 0x1EE6A ; Arabic - 0x1EE6B, # .. 0x1EE6B ; Unknown - 0x1EE6C, # .. 0x1EE72 ; Arabic - 0x1EE73, # .. 0x1EE73 ; Unknown - 0x1EE74, # .. 0x1EE77 ; Arabic - 0x1EE78, # .. 0x1EE78 ; Unknown - 0x1EE79, # .. 0x1EE7C ; Arabic - 0x1EE7D, # .. 0x1EE7D ; Unknown - 0x1EE7E, # .. 0x1EE7E ; Arabic - 0x1EE7F, # .. 0x1EE7F ; Unknown - 0x1EE80, # .. 0x1EE89 ; Arabic - 0x1EE8A, # .. 0x1EE8A ; Unknown - 0x1EE8B, # .. 0x1EE9B ; Arabic - 0x1EE9C, # .. 0x1EEA0 ; Unknown - 0x1EEA1, # .. 0x1EEA3 ; Arabic - 0x1EEA4, # .. 0x1EEA4 ; Unknown - 0x1EEA5, # .. 0x1EEA9 ; Arabic - 0x1EEAA, # .. 0x1EEAA ; Unknown - 0x1EEAB, # .. 0x1EEBB ; Arabic - 0x1EEBC, # .. 0x1EEEF ; Unknown - 0x1EEF0, # .. 0x1EEF1 ; Arabic - 0x1EEF2, # .. 0x1EFFF ; Unknown - 0x1F000, # .. 0x1F02B ; Common - 0x1F02C, # .. 0x1F02F ; Unknown - 0x1F030, # .. 0x1F093 ; Common - 0x1F094, # .. 0x1F09F ; Unknown - 0x1F0A0, # .. 0x1F0AE ; Common - 0x1F0AF, # .. 0x1F0B0 ; Unknown - 0x1F0B1, # .. 0x1F0BF ; Common - 0x1F0C0, # .. 0x1F0C0 ; Unknown - 0x1F0C1, # .. 0x1F0CF ; Common - 0x1F0D0, # .. 0x1F0D0 ; Unknown - 0x1F0D1, # .. 0x1F0F5 ; Common - 0x1F0F6, # .. 0x1F0FF ; Unknown - 0x1F100, # .. 0x1F1AD ; Common - 0x1F1AE, # .. 0x1F1E5 ; Unknown - 0x1F1E6, # .. 0x1F1FF ; Common - 0x1F200, # .. 0x1F200 ; Hiragana - 0x1F201, # .. 0x1F202 ; Common - 0x1F203, # .. 0x1F20F ; Unknown - 0x1F210, # .. 0x1F23B ; Common - 0x1F23C, # .. 0x1F23F ; Unknown - 0x1F240, # .. 0x1F248 ; Common - 0x1F249, # .. 0x1F24F ; Unknown - 0x1F250, # .. 0x1F251 ; Common - 0x1F252, # .. 0x1F25F ; Unknown - 0x1F260, # .. 0x1F265 ; Common - 0x1F266, # .. 0x1F2FF ; Unknown - 0x1F300, # .. 0x1F6D7 ; Common - 0x1F6D8, # .. 0x1F6DB ; Unknown - 0x1F6DC, # .. 0x1F6EC ; Common - 0x1F6ED, # .. 0x1F6EF ; Unknown - 0x1F6F0, # .. 0x1F6FC ; Common - 0x1F6FD, # .. 0x1F6FF ; Unknown - 0x1F700, # .. 0x1F776 ; Common - 0x1F777, # .. 
0x1F77A ; Unknown - 0x1F77B, # .. 0x1F7D9 ; Common - 0x1F7DA, # .. 0x1F7DF ; Unknown - 0x1F7E0, # .. 0x1F7EB ; Common - 0x1F7EC, # .. 0x1F7EF ; Unknown - 0x1F7F0, # .. 0x1F7F0 ; Common - 0x1F7F1, # .. 0x1F7FF ; Unknown - 0x1F800, # .. 0x1F80B ; Common - 0x1F80C, # .. 0x1F80F ; Unknown - 0x1F810, # .. 0x1F847 ; Common - 0x1F848, # .. 0x1F84F ; Unknown - 0x1F850, # .. 0x1F859 ; Common - 0x1F85A, # .. 0x1F85F ; Unknown - 0x1F860, # .. 0x1F887 ; Common - 0x1F888, # .. 0x1F88F ; Unknown - 0x1F890, # .. 0x1F8AD ; Common - 0x1F8AE, # .. 0x1F8AF ; Unknown - 0x1F8B0, # .. 0x1F8B1 ; Common - 0x1F8B2, # .. 0x1F8FF ; Unknown - 0x1F900, # .. 0x1FA53 ; Common - 0x1FA54, # .. 0x1FA5F ; Unknown - 0x1FA60, # .. 0x1FA6D ; Common - 0x1FA6E, # .. 0x1FA6F ; Unknown - 0x1FA70, # .. 0x1FA7C ; Common - 0x1FA7D, # .. 0x1FA7F ; Unknown - 0x1FA80, # .. 0x1FA88 ; Common - 0x1FA89, # .. 0x1FA8F ; Unknown - 0x1FA90, # .. 0x1FABD ; Common - 0x1FABE, # .. 0x1FABE ; Unknown - 0x1FABF, # .. 0x1FAC5 ; Common - 0x1FAC6, # .. 0x1FACD ; Unknown - 0x1FACE, # .. 0x1FADB ; Common - 0x1FADC, # .. 0x1FADF ; Unknown - 0x1FAE0, # .. 0x1FAE8 ; Common - 0x1FAE9, # .. 0x1FAEF ; Unknown - 0x1FAF0, # .. 0x1FAF8 ; Common - 0x1FAF9, # .. 0x1FAFF ; Unknown - 0x1FB00, # .. 0x1FB92 ; Common - 0x1FB93, # .. 0x1FB93 ; Unknown - 0x1FB94, # .. 0x1FBCA ; Common - 0x1FBCB, # .. 0x1FBEF ; Unknown - 0x1FBF0, # .. 0x1FBF9 ; Common - 0x1FBFA, # .. 0x1FFFF ; Unknown - 0x20000, # .. 0x2A6DF ; Han - 0x2A6E0, # .. 0x2A6FF ; Unknown - 0x2A700, # .. 0x2B739 ; Han - 0x2B73A, # .. 0x2B73F ; Unknown - 0x2B740, # .. 0x2B81D ; Han - 0x2B81E, # .. 0x2B81F ; Unknown - 0x2B820, # .. 0x2CEA1 ; Han - 0x2CEA2, # .. 0x2CEAF ; Unknown - 0x2CEB0, # .. 0x2EBE0 ; Han - 0x2EBE1, # .. 0x2F7FF ; Unknown - 0x2F800, # .. 0x2FA1D ; Han - 0x2FA1E, # .. 0x2FFFF ; Unknown - 0x30000, # .. 0x3134A ; Han - 0x3134B, # .. 0x3134F ; Unknown - 0x31350, # .. 0x323AF ; Han - 0x323B0, # .. 0xE0000 ; Unknown - 0xE0001, # .. 0xE0001 ; Common - 0xE0002, # .. 0xE001F ; Unknown - 0xE0020, # .. 0xE007F ; Common - 0xE0080, # .. 0xE00FF ; Unknown - 0xE0100, # .. 0xE01EF ; Inherited - 0xE01F0, # .. 
0x10FFFF ; Unknown -] - -VALUES = [ - "Zyyy", # 0000..0040 ; Common - "Latn", # 0041..005A ; Latin - "Zyyy", # 005B..0060 ; Common - "Latn", # 0061..007A ; Latin - "Zyyy", # 007B..00A9 ; Common - "Latn", # 00AA..00AA ; Latin - "Zyyy", # 00AB..00B9 ; Common - "Latn", # 00BA..00BA ; Latin - "Zyyy", # 00BB..00BF ; Common - "Latn", # 00C0..00D6 ; Latin - "Zyyy", # 00D7..00D7 ; Common - "Latn", # 00D8..00F6 ; Latin - "Zyyy", # 00F7..00F7 ; Common - "Latn", # 00F8..02B8 ; Latin - "Zyyy", # 02B9..02DF ; Common - "Latn", # 02E0..02E4 ; Latin - "Zyyy", # 02E5..02E9 ; Common - "Bopo", # 02EA..02EB ; Bopomofo - "Zyyy", # 02EC..02FF ; Common - "Zinh", # 0300..036F ; Inherited - "Grek", # 0370..0373 ; Greek - "Zyyy", # 0374..0374 ; Common - "Grek", # 0375..0377 ; Greek - "Zzzz", # 0378..0379 ; Unknown - "Grek", # 037A..037D ; Greek - "Zyyy", # 037E..037E ; Common - "Grek", # 037F..037F ; Greek - "Zzzz", # 0380..0383 ; Unknown - "Grek", # 0384..0384 ; Greek - "Zyyy", # 0385..0385 ; Common - "Grek", # 0386..0386 ; Greek - "Zyyy", # 0387..0387 ; Common - "Grek", # 0388..038A ; Greek - "Zzzz", # 038B..038B ; Unknown - "Grek", # 038C..038C ; Greek - "Zzzz", # 038D..038D ; Unknown - "Grek", # 038E..03A1 ; Greek - "Zzzz", # 03A2..03A2 ; Unknown - "Grek", # 03A3..03E1 ; Greek - "Copt", # 03E2..03EF ; Coptic - "Grek", # 03F0..03FF ; Greek - "Cyrl", # 0400..0484 ; Cyrillic - "Zinh", # 0485..0486 ; Inherited - "Cyrl", # 0487..052F ; Cyrillic - "Zzzz", # 0530..0530 ; Unknown - "Armn", # 0531..0556 ; Armenian - "Zzzz", # 0557..0558 ; Unknown - "Armn", # 0559..058A ; Armenian - "Zzzz", # 058B..058C ; Unknown - "Armn", # 058D..058F ; Armenian - "Zzzz", # 0590..0590 ; Unknown - "Hebr", # 0591..05C7 ; Hebrew - "Zzzz", # 05C8..05CF ; Unknown - "Hebr", # 05D0..05EA ; Hebrew - "Zzzz", # 05EB..05EE ; Unknown - "Hebr", # 05EF..05F4 ; Hebrew - "Zzzz", # 05F5..05FF ; Unknown - "Arab", # 0600..0604 ; Arabic - "Zyyy", # 0605..0605 ; Common - "Arab", # 0606..060B ; Arabic - "Zyyy", # 060C..060C ; Common - "Arab", # 060D..061A ; Arabic - "Zyyy", # 061B..061B ; Common - "Arab", # 061C..061E ; Arabic - "Zyyy", # 061F..061F ; Common - "Arab", # 0620..063F ; Arabic - "Zyyy", # 0640..0640 ; Common - "Arab", # 0641..064A ; Arabic - "Zinh", # 064B..0655 ; Inherited - "Arab", # 0656..066F ; Arabic - "Zinh", # 0670..0670 ; Inherited - "Arab", # 0671..06DC ; Arabic - "Zyyy", # 06DD..06DD ; Common - "Arab", # 06DE..06FF ; Arabic - "Syrc", # 0700..070D ; Syriac - "Zzzz", # 070E..070E ; Unknown - "Syrc", # 070F..074A ; Syriac - "Zzzz", # 074B..074C ; Unknown - "Syrc", # 074D..074F ; Syriac - "Arab", # 0750..077F ; Arabic - "Thaa", # 0780..07B1 ; Thaana - "Zzzz", # 07B2..07BF ; Unknown - "Nkoo", # 07C0..07FA ; Nko - "Zzzz", # 07FB..07FC ; Unknown - "Nkoo", # 07FD..07FF ; Nko - "Samr", # 0800..082D ; Samaritan - "Zzzz", # 082E..082F ; Unknown - "Samr", # 0830..083E ; Samaritan - "Zzzz", # 083F..083F ; Unknown - "Mand", # 0840..085B ; Mandaic - "Zzzz", # 085C..085D ; Unknown - "Mand", # 085E..085E ; Mandaic - "Zzzz", # 085F..085F ; Unknown - "Syrc", # 0860..086A ; Syriac - "Zzzz", # 086B..086F ; Unknown - "Arab", # 0870..088E ; Arabic - "Zzzz", # 088F..088F ; Unknown - "Arab", # 0890..0891 ; Arabic - "Zzzz", # 0892..0897 ; Unknown - "Arab", # 0898..08E1 ; Arabic - "Zyyy", # 08E2..08E2 ; Common - "Arab", # 08E3..08FF ; Arabic - "Deva", # 0900..0950 ; Devanagari - "Zinh", # 0951..0954 ; Inherited - "Deva", # 0955..0963 ; Devanagari - "Zyyy", # 0964..0965 ; Common - "Deva", # 0966..097F ; Devanagari - "Beng", # 0980..0983 ; Bengali - "Zzzz", # 
0984..0984 ; Unknown - "Beng", # 0985..098C ; Bengali - "Zzzz", # 098D..098E ; Unknown - "Beng", # 098F..0990 ; Bengali - "Zzzz", # 0991..0992 ; Unknown - "Beng", # 0993..09A8 ; Bengali - "Zzzz", # 09A9..09A9 ; Unknown - "Beng", # 09AA..09B0 ; Bengali - "Zzzz", # 09B1..09B1 ; Unknown - "Beng", # 09B2..09B2 ; Bengali - "Zzzz", # 09B3..09B5 ; Unknown - "Beng", # 09B6..09B9 ; Bengali - "Zzzz", # 09BA..09BB ; Unknown - "Beng", # 09BC..09C4 ; Bengali - "Zzzz", # 09C5..09C6 ; Unknown - "Beng", # 09C7..09C8 ; Bengali - "Zzzz", # 09C9..09CA ; Unknown - "Beng", # 09CB..09CE ; Bengali - "Zzzz", # 09CF..09D6 ; Unknown - "Beng", # 09D7..09D7 ; Bengali - "Zzzz", # 09D8..09DB ; Unknown - "Beng", # 09DC..09DD ; Bengali - "Zzzz", # 09DE..09DE ; Unknown - "Beng", # 09DF..09E3 ; Bengali - "Zzzz", # 09E4..09E5 ; Unknown - "Beng", # 09E6..09FE ; Bengali - "Zzzz", # 09FF..0A00 ; Unknown - "Guru", # 0A01..0A03 ; Gurmukhi - "Zzzz", # 0A04..0A04 ; Unknown - "Guru", # 0A05..0A0A ; Gurmukhi - "Zzzz", # 0A0B..0A0E ; Unknown - "Guru", # 0A0F..0A10 ; Gurmukhi - "Zzzz", # 0A11..0A12 ; Unknown - "Guru", # 0A13..0A28 ; Gurmukhi - "Zzzz", # 0A29..0A29 ; Unknown - "Guru", # 0A2A..0A30 ; Gurmukhi - "Zzzz", # 0A31..0A31 ; Unknown - "Guru", # 0A32..0A33 ; Gurmukhi - "Zzzz", # 0A34..0A34 ; Unknown - "Guru", # 0A35..0A36 ; Gurmukhi - "Zzzz", # 0A37..0A37 ; Unknown - "Guru", # 0A38..0A39 ; Gurmukhi - "Zzzz", # 0A3A..0A3B ; Unknown - "Guru", # 0A3C..0A3C ; Gurmukhi - "Zzzz", # 0A3D..0A3D ; Unknown - "Guru", # 0A3E..0A42 ; Gurmukhi - "Zzzz", # 0A43..0A46 ; Unknown - "Guru", # 0A47..0A48 ; Gurmukhi - "Zzzz", # 0A49..0A4A ; Unknown - "Guru", # 0A4B..0A4D ; Gurmukhi - "Zzzz", # 0A4E..0A50 ; Unknown - "Guru", # 0A51..0A51 ; Gurmukhi - "Zzzz", # 0A52..0A58 ; Unknown - "Guru", # 0A59..0A5C ; Gurmukhi - "Zzzz", # 0A5D..0A5D ; Unknown - "Guru", # 0A5E..0A5E ; Gurmukhi - "Zzzz", # 0A5F..0A65 ; Unknown - "Guru", # 0A66..0A76 ; Gurmukhi - "Zzzz", # 0A77..0A80 ; Unknown - "Gujr", # 0A81..0A83 ; Gujarati - "Zzzz", # 0A84..0A84 ; Unknown - "Gujr", # 0A85..0A8D ; Gujarati - "Zzzz", # 0A8E..0A8E ; Unknown - "Gujr", # 0A8F..0A91 ; Gujarati - "Zzzz", # 0A92..0A92 ; Unknown - "Gujr", # 0A93..0AA8 ; Gujarati - "Zzzz", # 0AA9..0AA9 ; Unknown - "Gujr", # 0AAA..0AB0 ; Gujarati - "Zzzz", # 0AB1..0AB1 ; Unknown - "Gujr", # 0AB2..0AB3 ; Gujarati - "Zzzz", # 0AB4..0AB4 ; Unknown - "Gujr", # 0AB5..0AB9 ; Gujarati - "Zzzz", # 0ABA..0ABB ; Unknown - "Gujr", # 0ABC..0AC5 ; Gujarati - "Zzzz", # 0AC6..0AC6 ; Unknown - "Gujr", # 0AC7..0AC9 ; Gujarati - "Zzzz", # 0ACA..0ACA ; Unknown - "Gujr", # 0ACB..0ACD ; Gujarati - "Zzzz", # 0ACE..0ACF ; Unknown - "Gujr", # 0AD0..0AD0 ; Gujarati - "Zzzz", # 0AD1..0ADF ; Unknown - "Gujr", # 0AE0..0AE3 ; Gujarati - "Zzzz", # 0AE4..0AE5 ; Unknown - "Gujr", # 0AE6..0AF1 ; Gujarati - "Zzzz", # 0AF2..0AF8 ; Unknown - "Gujr", # 0AF9..0AFF ; Gujarati - "Zzzz", # 0B00..0B00 ; Unknown - "Orya", # 0B01..0B03 ; Oriya - "Zzzz", # 0B04..0B04 ; Unknown - "Orya", # 0B05..0B0C ; Oriya - "Zzzz", # 0B0D..0B0E ; Unknown - "Orya", # 0B0F..0B10 ; Oriya - "Zzzz", # 0B11..0B12 ; Unknown - "Orya", # 0B13..0B28 ; Oriya - "Zzzz", # 0B29..0B29 ; Unknown - "Orya", # 0B2A..0B30 ; Oriya - "Zzzz", # 0B31..0B31 ; Unknown - "Orya", # 0B32..0B33 ; Oriya - "Zzzz", # 0B34..0B34 ; Unknown - "Orya", # 0B35..0B39 ; Oriya - "Zzzz", # 0B3A..0B3B ; Unknown - "Orya", # 0B3C..0B44 ; Oriya - "Zzzz", # 0B45..0B46 ; Unknown - "Orya", # 0B47..0B48 ; Oriya - "Zzzz", # 0B49..0B4A ; Unknown - "Orya", # 0B4B..0B4D ; Oriya - "Zzzz", # 0B4E..0B54 ; Unknown - "Orya", # 0B55..0B57 ; 
Oriya - "Zzzz", # 0B58..0B5B ; Unknown - "Orya", # 0B5C..0B5D ; Oriya - "Zzzz", # 0B5E..0B5E ; Unknown - "Orya", # 0B5F..0B63 ; Oriya - "Zzzz", # 0B64..0B65 ; Unknown - "Orya", # 0B66..0B77 ; Oriya - "Zzzz", # 0B78..0B81 ; Unknown - "Taml", # 0B82..0B83 ; Tamil - "Zzzz", # 0B84..0B84 ; Unknown - "Taml", # 0B85..0B8A ; Tamil - "Zzzz", # 0B8B..0B8D ; Unknown - "Taml", # 0B8E..0B90 ; Tamil - "Zzzz", # 0B91..0B91 ; Unknown - "Taml", # 0B92..0B95 ; Tamil - "Zzzz", # 0B96..0B98 ; Unknown - "Taml", # 0B99..0B9A ; Tamil - "Zzzz", # 0B9B..0B9B ; Unknown - "Taml", # 0B9C..0B9C ; Tamil - "Zzzz", # 0B9D..0B9D ; Unknown - "Taml", # 0B9E..0B9F ; Tamil - "Zzzz", # 0BA0..0BA2 ; Unknown - "Taml", # 0BA3..0BA4 ; Tamil - "Zzzz", # 0BA5..0BA7 ; Unknown - "Taml", # 0BA8..0BAA ; Tamil - "Zzzz", # 0BAB..0BAD ; Unknown - "Taml", # 0BAE..0BB9 ; Tamil - "Zzzz", # 0BBA..0BBD ; Unknown - "Taml", # 0BBE..0BC2 ; Tamil - "Zzzz", # 0BC3..0BC5 ; Unknown - "Taml", # 0BC6..0BC8 ; Tamil - "Zzzz", # 0BC9..0BC9 ; Unknown - "Taml", # 0BCA..0BCD ; Tamil - "Zzzz", # 0BCE..0BCF ; Unknown - "Taml", # 0BD0..0BD0 ; Tamil - "Zzzz", # 0BD1..0BD6 ; Unknown - "Taml", # 0BD7..0BD7 ; Tamil - "Zzzz", # 0BD8..0BE5 ; Unknown - "Taml", # 0BE6..0BFA ; Tamil - "Zzzz", # 0BFB..0BFF ; Unknown - "Telu", # 0C00..0C0C ; Telugu - "Zzzz", # 0C0D..0C0D ; Unknown - "Telu", # 0C0E..0C10 ; Telugu - "Zzzz", # 0C11..0C11 ; Unknown - "Telu", # 0C12..0C28 ; Telugu - "Zzzz", # 0C29..0C29 ; Unknown - "Telu", # 0C2A..0C39 ; Telugu - "Zzzz", # 0C3A..0C3B ; Unknown - "Telu", # 0C3C..0C44 ; Telugu - "Zzzz", # 0C45..0C45 ; Unknown - "Telu", # 0C46..0C48 ; Telugu - "Zzzz", # 0C49..0C49 ; Unknown - "Telu", # 0C4A..0C4D ; Telugu - "Zzzz", # 0C4E..0C54 ; Unknown - "Telu", # 0C55..0C56 ; Telugu - "Zzzz", # 0C57..0C57 ; Unknown - "Telu", # 0C58..0C5A ; Telugu - "Zzzz", # 0C5B..0C5C ; Unknown - "Telu", # 0C5D..0C5D ; Telugu - "Zzzz", # 0C5E..0C5F ; Unknown - "Telu", # 0C60..0C63 ; Telugu - "Zzzz", # 0C64..0C65 ; Unknown - "Telu", # 0C66..0C6F ; Telugu - "Zzzz", # 0C70..0C76 ; Unknown - "Telu", # 0C77..0C7F ; Telugu - "Knda", # 0C80..0C8C ; Kannada - "Zzzz", # 0C8D..0C8D ; Unknown - "Knda", # 0C8E..0C90 ; Kannada - "Zzzz", # 0C91..0C91 ; Unknown - "Knda", # 0C92..0CA8 ; Kannada - "Zzzz", # 0CA9..0CA9 ; Unknown - "Knda", # 0CAA..0CB3 ; Kannada - "Zzzz", # 0CB4..0CB4 ; Unknown - "Knda", # 0CB5..0CB9 ; Kannada - "Zzzz", # 0CBA..0CBB ; Unknown - "Knda", # 0CBC..0CC4 ; Kannada - "Zzzz", # 0CC5..0CC5 ; Unknown - "Knda", # 0CC6..0CC8 ; Kannada - "Zzzz", # 0CC9..0CC9 ; Unknown - "Knda", # 0CCA..0CCD ; Kannada - "Zzzz", # 0CCE..0CD4 ; Unknown - "Knda", # 0CD5..0CD6 ; Kannada - "Zzzz", # 0CD7..0CDC ; Unknown - "Knda", # 0CDD..0CDE ; Kannada - "Zzzz", # 0CDF..0CDF ; Unknown - "Knda", # 0CE0..0CE3 ; Kannada - "Zzzz", # 0CE4..0CE5 ; Unknown - "Knda", # 0CE6..0CEF ; Kannada - "Zzzz", # 0CF0..0CF0 ; Unknown - "Knda", # 0CF1..0CF3 ; Kannada - "Zzzz", # 0CF4..0CFF ; Unknown - "Mlym", # 0D00..0D0C ; Malayalam - "Zzzz", # 0D0D..0D0D ; Unknown - "Mlym", # 0D0E..0D10 ; Malayalam - "Zzzz", # 0D11..0D11 ; Unknown - "Mlym", # 0D12..0D44 ; Malayalam - "Zzzz", # 0D45..0D45 ; Unknown - "Mlym", # 0D46..0D48 ; Malayalam - "Zzzz", # 0D49..0D49 ; Unknown - "Mlym", # 0D4A..0D4F ; Malayalam - "Zzzz", # 0D50..0D53 ; Unknown - "Mlym", # 0D54..0D63 ; Malayalam - "Zzzz", # 0D64..0D65 ; Unknown - "Mlym", # 0D66..0D7F ; Malayalam - "Zzzz", # 0D80..0D80 ; Unknown - "Sinh", # 0D81..0D83 ; Sinhala - "Zzzz", # 0D84..0D84 ; Unknown - "Sinh", # 0D85..0D96 ; Sinhala - "Zzzz", # 0D97..0D99 ; Unknown - "Sinh", # 
0D9A..0DB1 ; Sinhala - "Zzzz", # 0DB2..0DB2 ; Unknown - "Sinh", # 0DB3..0DBB ; Sinhala - "Zzzz", # 0DBC..0DBC ; Unknown - "Sinh", # 0DBD..0DBD ; Sinhala - "Zzzz", # 0DBE..0DBF ; Unknown - "Sinh", # 0DC0..0DC6 ; Sinhala - "Zzzz", # 0DC7..0DC9 ; Unknown - "Sinh", # 0DCA..0DCA ; Sinhala - "Zzzz", # 0DCB..0DCE ; Unknown - "Sinh", # 0DCF..0DD4 ; Sinhala - "Zzzz", # 0DD5..0DD5 ; Unknown - "Sinh", # 0DD6..0DD6 ; Sinhala - "Zzzz", # 0DD7..0DD7 ; Unknown - "Sinh", # 0DD8..0DDF ; Sinhala - "Zzzz", # 0DE0..0DE5 ; Unknown - "Sinh", # 0DE6..0DEF ; Sinhala - "Zzzz", # 0DF0..0DF1 ; Unknown - "Sinh", # 0DF2..0DF4 ; Sinhala - "Zzzz", # 0DF5..0E00 ; Unknown - "Thai", # 0E01..0E3A ; Thai - "Zzzz", # 0E3B..0E3E ; Unknown - "Zyyy", # 0E3F..0E3F ; Common - "Thai", # 0E40..0E5B ; Thai - "Zzzz", # 0E5C..0E80 ; Unknown - "Laoo", # 0E81..0E82 ; Lao - "Zzzz", # 0E83..0E83 ; Unknown - "Laoo", # 0E84..0E84 ; Lao - "Zzzz", # 0E85..0E85 ; Unknown - "Laoo", # 0E86..0E8A ; Lao - "Zzzz", # 0E8B..0E8B ; Unknown - "Laoo", # 0E8C..0EA3 ; Lao - "Zzzz", # 0EA4..0EA4 ; Unknown - "Laoo", # 0EA5..0EA5 ; Lao - "Zzzz", # 0EA6..0EA6 ; Unknown - "Laoo", # 0EA7..0EBD ; Lao - "Zzzz", # 0EBE..0EBF ; Unknown - "Laoo", # 0EC0..0EC4 ; Lao - "Zzzz", # 0EC5..0EC5 ; Unknown - "Laoo", # 0EC6..0EC6 ; Lao - "Zzzz", # 0EC7..0EC7 ; Unknown - "Laoo", # 0EC8..0ECE ; Lao - "Zzzz", # 0ECF..0ECF ; Unknown - "Laoo", # 0ED0..0ED9 ; Lao - "Zzzz", # 0EDA..0EDB ; Unknown - "Laoo", # 0EDC..0EDF ; Lao - "Zzzz", # 0EE0..0EFF ; Unknown - "Tibt", # 0F00..0F47 ; Tibetan - "Zzzz", # 0F48..0F48 ; Unknown - "Tibt", # 0F49..0F6C ; Tibetan - "Zzzz", # 0F6D..0F70 ; Unknown - "Tibt", # 0F71..0F97 ; Tibetan - "Zzzz", # 0F98..0F98 ; Unknown - "Tibt", # 0F99..0FBC ; Tibetan - "Zzzz", # 0FBD..0FBD ; Unknown - "Tibt", # 0FBE..0FCC ; Tibetan - "Zzzz", # 0FCD..0FCD ; Unknown - "Tibt", # 0FCE..0FD4 ; Tibetan - "Zyyy", # 0FD5..0FD8 ; Common - "Tibt", # 0FD9..0FDA ; Tibetan - "Zzzz", # 0FDB..0FFF ; Unknown - "Mymr", # 1000..109F ; Myanmar - "Geor", # 10A0..10C5 ; Georgian - "Zzzz", # 10C6..10C6 ; Unknown - "Geor", # 10C7..10C7 ; Georgian - "Zzzz", # 10C8..10CC ; Unknown - "Geor", # 10CD..10CD ; Georgian - "Zzzz", # 10CE..10CF ; Unknown - "Geor", # 10D0..10FA ; Georgian - "Zyyy", # 10FB..10FB ; Common - "Geor", # 10FC..10FF ; Georgian - "Hang", # 1100..11FF ; Hangul - "Ethi", # 1200..1248 ; Ethiopic - "Zzzz", # 1249..1249 ; Unknown - "Ethi", # 124A..124D ; Ethiopic - "Zzzz", # 124E..124F ; Unknown - "Ethi", # 1250..1256 ; Ethiopic - "Zzzz", # 1257..1257 ; Unknown - "Ethi", # 1258..1258 ; Ethiopic - "Zzzz", # 1259..1259 ; Unknown - "Ethi", # 125A..125D ; Ethiopic - "Zzzz", # 125E..125F ; Unknown - "Ethi", # 1260..1288 ; Ethiopic - "Zzzz", # 1289..1289 ; Unknown - "Ethi", # 128A..128D ; Ethiopic - "Zzzz", # 128E..128F ; Unknown - "Ethi", # 1290..12B0 ; Ethiopic - "Zzzz", # 12B1..12B1 ; Unknown - "Ethi", # 12B2..12B5 ; Ethiopic - "Zzzz", # 12B6..12B7 ; Unknown - "Ethi", # 12B8..12BE ; Ethiopic - "Zzzz", # 12BF..12BF ; Unknown - "Ethi", # 12C0..12C0 ; Ethiopic - "Zzzz", # 12C1..12C1 ; Unknown - "Ethi", # 12C2..12C5 ; Ethiopic - "Zzzz", # 12C6..12C7 ; Unknown - "Ethi", # 12C8..12D6 ; Ethiopic - "Zzzz", # 12D7..12D7 ; Unknown - "Ethi", # 12D8..1310 ; Ethiopic - "Zzzz", # 1311..1311 ; Unknown - "Ethi", # 1312..1315 ; Ethiopic - "Zzzz", # 1316..1317 ; Unknown - "Ethi", # 1318..135A ; Ethiopic - "Zzzz", # 135B..135C ; Unknown - "Ethi", # 135D..137C ; Ethiopic - "Zzzz", # 137D..137F ; Unknown - "Ethi", # 1380..1399 ; Ethiopic - "Zzzz", # 139A..139F ; Unknown - "Cher", # 13A0..13F5 ; Cherokee 
- "Zzzz", # 13F6..13F7 ; Unknown - "Cher", # 13F8..13FD ; Cherokee - "Zzzz", # 13FE..13FF ; Unknown - "Cans", # 1400..167F ; Canadian_Aboriginal - "Ogam", # 1680..169C ; Ogham - "Zzzz", # 169D..169F ; Unknown - "Runr", # 16A0..16EA ; Runic - "Zyyy", # 16EB..16ED ; Common - "Runr", # 16EE..16F8 ; Runic - "Zzzz", # 16F9..16FF ; Unknown - "Tglg", # 1700..1715 ; Tagalog - "Zzzz", # 1716..171E ; Unknown - "Tglg", # 171F..171F ; Tagalog - "Hano", # 1720..1734 ; Hanunoo - "Zyyy", # 1735..1736 ; Common - "Zzzz", # 1737..173F ; Unknown - "Buhd", # 1740..1753 ; Buhid - "Zzzz", # 1754..175F ; Unknown - "Tagb", # 1760..176C ; Tagbanwa - "Zzzz", # 176D..176D ; Unknown - "Tagb", # 176E..1770 ; Tagbanwa - "Zzzz", # 1771..1771 ; Unknown - "Tagb", # 1772..1773 ; Tagbanwa - "Zzzz", # 1774..177F ; Unknown - "Khmr", # 1780..17DD ; Khmer - "Zzzz", # 17DE..17DF ; Unknown - "Khmr", # 17E0..17E9 ; Khmer - "Zzzz", # 17EA..17EF ; Unknown - "Khmr", # 17F0..17F9 ; Khmer - "Zzzz", # 17FA..17FF ; Unknown - "Mong", # 1800..1801 ; Mongolian - "Zyyy", # 1802..1803 ; Common - "Mong", # 1804..1804 ; Mongolian - "Zyyy", # 1805..1805 ; Common - "Mong", # 1806..1819 ; Mongolian - "Zzzz", # 181A..181F ; Unknown - "Mong", # 1820..1878 ; Mongolian - "Zzzz", # 1879..187F ; Unknown - "Mong", # 1880..18AA ; Mongolian - "Zzzz", # 18AB..18AF ; Unknown - "Cans", # 18B0..18F5 ; Canadian_Aboriginal - "Zzzz", # 18F6..18FF ; Unknown - "Limb", # 1900..191E ; Limbu - "Zzzz", # 191F..191F ; Unknown - "Limb", # 1920..192B ; Limbu - "Zzzz", # 192C..192F ; Unknown - "Limb", # 1930..193B ; Limbu - "Zzzz", # 193C..193F ; Unknown - "Limb", # 1940..1940 ; Limbu - "Zzzz", # 1941..1943 ; Unknown - "Limb", # 1944..194F ; Limbu - "Tale", # 1950..196D ; Tai_Le - "Zzzz", # 196E..196F ; Unknown - "Tale", # 1970..1974 ; Tai_Le - "Zzzz", # 1975..197F ; Unknown - "Talu", # 1980..19AB ; New_Tai_Lue - "Zzzz", # 19AC..19AF ; Unknown - "Talu", # 19B0..19C9 ; New_Tai_Lue - "Zzzz", # 19CA..19CF ; Unknown - "Talu", # 19D0..19DA ; New_Tai_Lue - "Zzzz", # 19DB..19DD ; Unknown - "Talu", # 19DE..19DF ; New_Tai_Lue - "Khmr", # 19E0..19FF ; Khmer - "Bugi", # 1A00..1A1B ; Buginese - "Zzzz", # 1A1C..1A1D ; Unknown - "Bugi", # 1A1E..1A1F ; Buginese - "Lana", # 1A20..1A5E ; Tai_Tham - "Zzzz", # 1A5F..1A5F ; Unknown - "Lana", # 1A60..1A7C ; Tai_Tham - "Zzzz", # 1A7D..1A7E ; Unknown - "Lana", # 1A7F..1A89 ; Tai_Tham - "Zzzz", # 1A8A..1A8F ; Unknown - "Lana", # 1A90..1A99 ; Tai_Tham - "Zzzz", # 1A9A..1A9F ; Unknown - "Lana", # 1AA0..1AAD ; Tai_Tham - "Zzzz", # 1AAE..1AAF ; Unknown - "Zinh", # 1AB0..1ACE ; Inherited - "Zzzz", # 1ACF..1AFF ; Unknown - "Bali", # 1B00..1B4C ; Balinese - "Zzzz", # 1B4D..1B4F ; Unknown - "Bali", # 1B50..1B7E ; Balinese - "Zzzz", # 1B7F..1B7F ; Unknown - "Sund", # 1B80..1BBF ; Sundanese - "Batk", # 1BC0..1BF3 ; Batak - "Zzzz", # 1BF4..1BFB ; Unknown - "Batk", # 1BFC..1BFF ; Batak - "Lepc", # 1C00..1C37 ; Lepcha - "Zzzz", # 1C38..1C3A ; Unknown - "Lepc", # 1C3B..1C49 ; Lepcha - "Zzzz", # 1C4A..1C4C ; Unknown - "Lepc", # 1C4D..1C4F ; Lepcha - "Olck", # 1C50..1C7F ; Ol_Chiki - "Cyrl", # 1C80..1C88 ; Cyrillic - "Zzzz", # 1C89..1C8F ; Unknown - "Geor", # 1C90..1CBA ; Georgian - "Zzzz", # 1CBB..1CBC ; Unknown - "Geor", # 1CBD..1CBF ; Georgian - "Sund", # 1CC0..1CC7 ; Sundanese - "Zzzz", # 1CC8..1CCF ; Unknown - "Zinh", # 1CD0..1CD2 ; Inherited - "Zyyy", # 1CD3..1CD3 ; Common - "Zinh", # 1CD4..1CE0 ; Inherited - "Zyyy", # 1CE1..1CE1 ; Common - "Zinh", # 1CE2..1CE8 ; Inherited - "Zyyy", # 1CE9..1CEC ; Common - "Zinh", # 1CED..1CED ; Inherited - "Zyyy", # 
1CEE..1CF3 ; Common - "Zinh", # 1CF4..1CF4 ; Inherited - "Zyyy", # 1CF5..1CF7 ; Common - "Zinh", # 1CF8..1CF9 ; Inherited - "Zyyy", # 1CFA..1CFA ; Common - "Zzzz", # 1CFB..1CFF ; Unknown - "Latn", # 1D00..1D25 ; Latin - "Grek", # 1D26..1D2A ; Greek - "Cyrl", # 1D2B..1D2B ; Cyrillic - "Latn", # 1D2C..1D5C ; Latin - "Grek", # 1D5D..1D61 ; Greek - "Latn", # 1D62..1D65 ; Latin - "Grek", # 1D66..1D6A ; Greek - "Latn", # 1D6B..1D77 ; Latin - "Cyrl", # 1D78..1D78 ; Cyrillic - "Latn", # 1D79..1DBE ; Latin - "Grek", # 1DBF..1DBF ; Greek - "Zinh", # 1DC0..1DFF ; Inherited - "Latn", # 1E00..1EFF ; Latin - "Grek", # 1F00..1F15 ; Greek - "Zzzz", # 1F16..1F17 ; Unknown - "Grek", # 1F18..1F1D ; Greek - "Zzzz", # 1F1E..1F1F ; Unknown - "Grek", # 1F20..1F45 ; Greek - "Zzzz", # 1F46..1F47 ; Unknown - "Grek", # 1F48..1F4D ; Greek - "Zzzz", # 1F4E..1F4F ; Unknown - "Grek", # 1F50..1F57 ; Greek - "Zzzz", # 1F58..1F58 ; Unknown - "Grek", # 1F59..1F59 ; Greek - "Zzzz", # 1F5A..1F5A ; Unknown - "Grek", # 1F5B..1F5B ; Greek - "Zzzz", # 1F5C..1F5C ; Unknown - "Grek", # 1F5D..1F5D ; Greek - "Zzzz", # 1F5E..1F5E ; Unknown - "Grek", # 1F5F..1F7D ; Greek - "Zzzz", # 1F7E..1F7F ; Unknown - "Grek", # 1F80..1FB4 ; Greek - "Zzzz", # 1FB5..1FB5 ; Unknown - "Grek", # 1FB6..1FC4 ; Greek - "Zzzz", # 1FC5..1FC5 ; Unknown - "Grek", # 1FC6..1FD3 ; Greek - "Zzzz", # 1FD4..1FD5 ; Unknown - "Grek", # 1FD6..1FDB ; Greek - "Zzzz", # 1FDC..1FDC ; Unknown - "Grek", # 1FDD..1FEF ; Greek - "Zzzz", # 1FF0..1FF1 ; Unknown - "Grek", # 1FF2..1FF4 ; Greek - "Zzzz", # 1FF5..1FF5 ; Unknown - "Grek", # 1FF6..1FFE ; Greek - "Zzzz", # 1FFF..1FFF ; Unknown - "Zyyy", # 2000..200B ; Common - "Zinh", # 200C..200D ; Inherited - "Zyyy", # 200E..2064 ; Common - "Zzzz", # 2065..2065 ; Unknown - "Zyyy", # 2066..2070 ; Common - "Latn", # 2071..2071 ; Latin - "Zzzz", # 2072..2073 ; Unknown - "Zyyy", # 2074..207E ; Common - "Latn", # 207F..207F ; Latin - "Zyyy", # 2080..208E ; Common - "Zzzz", # 208F..208F ; Unknown - "Latn", # 2090..209C ; Latin - "Zzzz", # 209D..209F ; Unknown - "Zyyy", # 20A0..20C0 ; Common - "Zzzz", # 20C1..20CF ; Unknown - "Zinh", # 20D0..20F0 ; Inherited - "Zzzz", # 20F1..20FF ; Unknown - "Zyyy", # 2100..2125 ; Common - "Grek", # 2126..2126 ; Greek - "Zyyy", # 2127..2129 ; Common - "Latn", # 212A..212B ; Latin - "Zyyy", # 212C..2131 ; Common - "Latn", # 2132..2132 ; Latin - "Zyyy", # 2133..214D ; Common - "Latn", # 214E..214E ; Latin - "Zyyy", # 214F..215F ; Common - "Latn", # 2160..2188 ; Latin - "Zyyy", # 2189..218B ; Common - "Zzzz", # 218C..218F ; Unknown - "Zyyy", # 2190..2426 ; Common - "Zzzz", # 2427..243F ; Unknown - "Zyyy", # 2440..244A ; Common - "Zzzz", # 244B..245F ; Unknown - "Zyyy", # 2460..27FF ; Common - "Brai", # 2800..28FF ; Braille - "Zyyy", # 2900..2B73 ; Common - "Zzzz", # 2B74..2B75 ; Unknown - "Zyyy", # 2B76..2B95 ; Common - "Zzzz", # 2B96..2B96 ; Unknown - "Zyyy", # 2B97..2BFF ; Common - "Glag", # 2C00..2C5F ; Glagolitic - "Latn", # 2C60..2C7F ; Latin - "Copt", # 2C80..2CF3 ; Coptic - "Zzzz", # 2CF4..2CF8 ; Unknown - "Copt", # 2CF9..2CFF ; Coptic - "Geor", # 2D00..2D25 ; Georgian - "Zzzz", # 2D26..2D26 ; Unknown - "Geor", # 2D27..2D27 ; Georgian - "Zzzz", # 2D28..2D2C ; Unknown - "Geor", # 2D2D..2D2D ; Georgian - "Zzzz", # 2D2E..2D2F ; Unknown - "Tfng", # 2D30..2D67 ; Tifinagh - "Zzzz", # 2D68..2D6E ; Unknown - "Tfng", # 2D6F..2D70 ; Tifinagh - "Zzzz", # 2D71..2D7E ; Unknown - "Tfng", # 2D7F..2D7F ; Tifinagh - "Ethi", # 2D80..2D96 ; Ethiopic - "Zzzz", # 2D97..2D9F ; Unknown - "Ethi", # 2DA0..2DA6 ; Ethiopic - 
"Zzzz", # 2DA7..2DA7 ; Unknown - "Ethi", # 2DA8..2DAE ; Ethiopic - "Zzzz", # 2DAF..2DAF ; Unknown - "Ethi", # 2DB0..2DB6 ; Ethiopic - "Zzzz", # 2DB7..2DB7 ; Unknown - "Ethi", # 2DB8..2DBE ; Ethiopic - "Zzzz", # 2DBF..2DBF ; Unknown - "Ethi", # 2DC0..2DC6 ; Ethiopic - "Zzzz", # 2DC7..2DC7 ; Unknown - "Ethi", # 2DC8..2DCE ; Ethiopic - "Zzzz", # 2DCF..2DCF ; Unknown - "Ethi", # 2DD0..2DD6 ; Ethiopic - "Zzzz", # 2DD7..2DD7 ; Unknown - "Ethi", # 2DD8..2DDE ; Ethiopic - "Zzzz", # 2DDF..2DDF ; Unknown - "Cyrl", # 2DE0..2DFF ; Cyrillic - "Zyyy", # 2E00..2E5D ; Common - "Zzzz", # 2E5E..2E7F ; Unknown - "Hani", # 2E80..2E99 ; Han - "Zzzz", # 2E9A..2E9A ; Unknown - "Hani", # 2E9B..2EF3 ; Han - "Zzzz", # 2EF4..2EFF ; Unknown - "Hani", # 2F00..2FD5 ; Han - "Zzzz", # 2FD6..2FEF ; Unknown - "Zyyy", # 2FF0..2FFB ; Common - "Zzzz", # 2FFC..2FFF ; Unknown - "Zyyy", # 3000..3004 ; Common - "Hani", # 3005..3005 ; Han - "Zyyy", # 3006..3006 ; Common - "Hani", # 3007..3007 ; Han - "Zyyy", # 3008..3020 ; Common - "Hani", # 3021..3029 ; Han - "Zinh", # 302A..302D ; Inherited - "Hang", # 302E..302F ; Hangul - "Zyyy", # 3030..3037 ; Common - "Hani", # 3038..303B ; Han - "Zyyy", # 303C..303F ; Common - "Zzzz", # 3040..3040 ; Unknown - "Hira", # 3041..3096 ; Hiragana - "Zzzz", # 3097..3098 ; Unknown - "Zinh", # 3099..309A ; Inherited - "Zyyy", # 309B..309C ; Common - "Hira", # 309D..309F ; Hiragana - "Zyyy", # 30A0..30A0 ; Common - "Kana", # 30A1..30FA ; Katakana - "Zyyy", # 30FB..30FC ; Common - "Kana", # 30FD..30FF ; Katakana - "Zzzz", # 3100..3104 ; Unknown - "Bopo", # 3105..312F ; Bopomofo - "Zzzz", # 3130..3130 ; Unknown - "Hang", # 3131..318E ; Hangul - "Zzzz", # 318F..318F ; Unknown - "Zyyy", # 3190..319F ; Common - "Bopo", # 31A0..31BF ; Bopomofo - "Zyyy", # 31C0..31E3 ; Common - "Zzzz", # 31E4..31EF ; Unknown - "Kana", # 31F0..31FF ; Katakana - "Hang", # 3200..321E ; Hangul - "Zzzz", # 321F..321F ; Unknown - "Zyyy", # 3220..325F ; Common - "Hang", # 3260..327E ; Hangul - "Zyyy", # 327F..32CF ; Common - "Kana", # 32D0..32FE ; Katakana - "Zyyy", # 32FF..32FF ; Common - "Kana", # 3300..3357 ; Katakana - "Zyyy", # 3358..33FF ; Common - "Hani", # 3400..4DBF ; Han - "Zyyy", # 4DC0..4DFF ; Common - "Hani", # 4E00..9FFF ; Han - "Yiii", # A000..A48C ; Yi - "Zzzz", # A48D..A48F ; Unknown - "Yiii", # A490..A4C6 ; Yi - "Zzzz", # A4C7..A4CF ; Unknown - "Lisu", # A4D0..A4FF ; Lisu - "Vaii", # A500..A62B ; Vai - "Zzzz", # A62C..A63F ; Unknown - "Cyrl", # A640..A69F ; Cyrillic - "Bamu", # A6A0..A6F7 ; Bamum - "Zzzz", # A6F8..A6FF ; Unknown - "Zyyy", # A700..A721 ; Common - "Latn", # A722..A787 ; Latin - "Zyyy", # A788..A78A ; Common - "Latn", # A78B..A7CA ; Latin - "Zzzz", # A7CB..A7CF ; Unknown - "Latn", # A7D0..A7D1 ; Latin - "Zzzz", # A7D2..A7D2 ; Unknown - "Latn", # A7D3..A7D3 ; Latin - "Zzzz", # A7D4..A7D4 ; Unknown - "Latn", # A7D5..A7D9 ; Latin - "Zzzz", # A7DA..A7F1 ; Unknown - "Latn", # A7F2..A7FF ; Latin - "Sylo", # A800..A82C ; Syloti_Nagri - "Zzzz", # A82D..A82F ; Unknown - "Zyyy", # A830..A839 ; Common - "Zzzz", # A83A..A83F ; Unknown - "Phag", # A840..A877 ; Phags_Pa - "Zzzz", # A878..A87F ; Unknown - "Saur", # A880..A8C5 ; Saurashtra - "Zzzz", # A8C6..A8CD ; Unknown - "Saur", # A8CE..A8D9 ; Saurashtra - "Zzzz", # A8DA..A8DF ; Unknown - "Deva", # A8E0..A8FF ; Devanagari - "Kali", # A900..A92D ; Kayah_Li - "Zyyy", # A92E..A92E ; Common - "Kali", # A92F..A92F ; Kayah_Li - "Rjng", # A930..A953 ; Rejang - "Zzzz", # A954..A95E ; Unknown - "Rjng", # A95F..A95F ; Rejang - "Hang", # A960..A97C ; Hangul - "Zzzz", # 
A97D..A97F ; Unknown - "Java", # A980..A9CD ; Javanese - "Zzzz", # A9CE..A9CE ; Unknown - "Zyyy", # A9CF..A9CF ; Common - "Java", # A9D0..A9D9 ; Javanese - "Zzzz", # A9DA..A9DD ; Unknown - "Java", # A9DE..A9DF ; Javanese - "Mymr", # A9E0..A9FE ; Myanmar - "Zzzz", # A9FF..A9FF ; Unknown - "Cham", # AA00..AA36 ; Cham - "Zzzz", # AA37..AA3F ; Unknown - "Cham", # AA40..AA4D ; Cham - "Zzzz", # AA4E..AA4F ; Unknown - "Cham", # AA50..AA59 ; Cham - "Zzzz", # AA5A..AA5B ; Unknown - "Cham", # AA5C..AA5F ; Cham - "Mymr", # AA60..AA7F ; Myanmar - "Tavt", # AA80..AAC2 ; Tai_Viet - "Zzzz", # AAC3..AADA ; Unknown - "Tavt", # AADB..AADF ; Tai_Viet - "Mtei", # AAE0..AAF6 ; Meetei_Mayek - "Zzzz", # AAF7..AB00 ; Unknown - "Ethi", # AB01..AB06 ; Ethiopic - "Zzzz", # AB07..AB08 ; Unknown - "Ethi", # AB09..AB0E ; Ethiopic - "Zzzz", # AB0F..AB10 ; Unknown - "Ethi", # AB11..AB16 ; Ethiopic - "Zzzz", # AB17..AB1F ; Unknown - "Ethi", # AB20..AB26 ; Ethiopic - "Zzzz", # AB27..AB27 ; Unknown - "Ethi", # AB28..AB2E ; Ethiopic - "Zzzz", # AB2F..AB2F ; Unknown - "Latn", # AB30..AB5A ; Latin - "Zyyy", # AB5B..AB5B ; Common - "Latn", # AB5C..AB64 ; Latin - "Grek", # AB65..AB65 ; Greek - "Latn", # AB66..AB69 ; Latin - "Zyyy", # AB6A..AB6B ; Common - "Zzzz", # AB6C..AB6F ; Unknown - "Cher", # AB70..ABBF ; Cherokee - "Mtei", # ABC0..ABED ; Meetei_Mayek - "Zzzz", # ABEE..ABEF ; Unknown - "Mtei", # ABF0..ABF9 ; Meetei_Mayek - "Zzzz", # ABFA..ABFF ; Unknown - "Hang", # AC00..D7A3 ; Hangul - "Zzzz", # D7A4..D7AF ; Unknown - "Hang", # D7B0..D7C6 ; Hangul - "Zzzz", # D7C7..D7CA ; Unknown - "Hang", # D7CB..D7FB ; Hangul - "Zzzz", # D7FC..F8FF ; Unknown - "Hani", # F900..FA6D ; Han - "Zzzz", # FA6E..FA6F ; Unknown - "Hani", # FA70..FAD9 ; Han - "Zzzz", # FADA..FAFF ; Unknown - "Latn", # FB00..FB06 ; Latin - "Zzzz", # FB07..FB12 ; Unknown - "Armn", # FB13..FB17 ; Armenian - "Zzzz", # FB18..FB1C ; Unknown - "Hebr", # FB1D..FB36 ; Hebrew - "Zzzz", # FB37..FB37 ; Unknown - "Hebr", # FB38..FB3C ; Hebrew - "Zzzz", # FB3D..FB3D ; Unknown - "Hebr", # FB3E..FB3E ; Hebrew - "Zzzz", # FB3F..FB3F ; Unknown - "Hebr", # FB40..FB41 ; Hebrew - "Zzzz", # FB42..FB42 ; Unknown - "Hebr", # FB43..FB44 ; Hebrew - "Zzzz", # FB45..FB45 ; Unknown - "Hebr", # FB46..FB4F ; Hebrew - "Arab", # FB50..FBC2 ; Arabic - "Zzzz", # FBC3..FBD2 ; Unknown - "Arab", # FBD3..FD3D ; Arabic - "Zyyy", # FD3E..FD3F ; Common - "Arab", # FD40..FD8F ; Arabic - "Zzzz", # FD90..FD91 ; Unknown - "Arab", # FD92..FDC7 ; Arabic - "Zzzz", # FDC8..FDCE ; Unknown - "Arab", # FDCF..FDCF ; Arabic - "Zzzz", # FDD0..FDEF ; Unknown - "Arab", # FDF0..FDFF ; Arabic - "Zinh", # FE00..FE0F ; Inherited - "Zyyy", # FE10..FE19 ; Common - "Zzzz", # FE1A..FE1F ; Unknown - "Zinh", # FE20..FE2D ; Inherited - "Cyrl", # FE2E..FE2F ; Cyrillic - "Zyyy", # FE30..FE52 ; Common - "Zzzz", # FE53..FE53 ; Unknown - "Zyyy", # FE54..FE66 ; Common - "Zzzz", # FE67..FE67 ; Unknown - "Zyyy", # FE68..FE6B ; Common - "Zzzz", # FE6C..FE6F ; Unknown - "Arab", # FE70..FE74 ; Arabic - "Zzzz", # FE75..FE75 ; Unknown - "Arab", # FE76..FEFC ; Arabic - "Zzzz", # FEFD..FEFE ; Unknown - "Zyyy", # FEFF..FEFF ; Common - "Zzzz", # FF00..FF00 ; Unknown - "Zyyy", # FF01..FF20 ; Common - "Latn", # FF21..FF3A ; Latin - "Zyyy", # FF3B..FF40 ; Common - "Latn", # FF41..FF5A ; Latin - "Zyyy", # FF5B..FF65 ; Common - "Kana", # FF66..FF6F ; Katakana - "Zyyy", # FF70..FF70 ; Common - "Kana", # FF71..FF9D ; Katakana - "Zyyy", # FF9E..FF9F ; Common - "Hang", # FFA0..FFBE ; Hangul - "Zzzz", # FFBF..FFC1 ; Unknown - "Hang", # FFC2..FFC7 ; Hangul - 
"Zzzz", # FFC8..FFC9 ; Unknown - "Hang", # FFCA..FFCF ; Hangul - "Zzzz", # FFD0..FFD1 ; Unknown - "Hang", # FFD2..FFD7 ; Hangul - "Zzzz", # FFD8..FFD9 ; Unknown - "Hang", # FFDA..FFDC ; Hangul - "Zzzz", # FFDD..FFDF ; Unknown - "Zyyy", # FFE0..FFE6 ; Common - "Zzzz", # FFE7..FFE7 ; Unknown - "Zyyy", # FFE8..FFEE ; Common - "Zzzz", # FFEF..FFF8 ; Unknown - "Zyyy", # FFF9..FFFD ; Common - "Zzzz", # FFFE..FFFF ; Unknown - "Linb", # 10000..1000B ; Linear_B - "Zzzz", # 1000C..1000C ; Unknown - "Linb", # 1000D..10026 ; Linear_B - "Zzzz", # 10027..10027 ; Unknown - "Linb", # 10028..1003A ; Linear_B - "Zzzz", # 1003B..1003B ; Unknown - "Linb", # 1003C..1003D ; Linear_B - "Zzzz", # 1003E..1003E ; Unknown - "Linb", # 1003F..1004D ; Linear_B - "Zzzz", # 1004E..1004F ; Unknown - "Linb", # 10050..1005D ; Linear_B - "Zzzz", # 1005E..1007F ; Unknown - "Linb", # 10080..100FA ; Linear_B - "Zzzz", # 100FB..100FF ; Unknown - "Zyyy", # 10100..10102 ; Common - "Zzzz", # 10103..10106 ; Unknown - "Zyyy", # 10107..10133 ; Common - "Zzzz", # 10134..10136 ; Unknown - "Zyyy", # 10137..1013F ; Common - "Grek", # 10140..1018E ; Greek - "Zzzz", # 1018F..1018F ; Unknown - "Zyyy", # 10190..1019C ; Common - "Zzzz", # 1019D..1019F ; Unknown - "Grek", # 101A0..101A0 ; Greek - "Zzzz", # 101A1..101CF ; Unknown - "Zyyy", # 101D0..101FC ; Common - "Zinh", # 101FD..101FD ; Inherited - "Zzzz", # 101FE..1027F ; Unknown - "Lyci", # 10280..1029C ; Lycian - "Zzzz", # 1029D..1029F ; Unknown - "Cari", # 102A0..102D0 ; Carian - "Zzzz", # 102D1..102DF ; Unknown - "Zinh", # 102E0..102E0 ; Inherited - "Zyyy", # 102E1..102FB ; Common - "Zzzz", # 102FC..102FF ; Unknown - "Ital", # 10300..10323 ; Old_Italic - "Zzzz", # 10324..1032C ; Unknown - "Ital", # 1032D..1032F ; Old_Italic - "Goth", # 10330..1034A ; Gothic - "Zzzz", # 1034B..1034F ; Unknown - "Perm", # 10350..1037A ; Old_Permic - "Zzzz", # 1037B..1037F ; Unknown - "Ugar", # 10380..1039D ; Ugaritic - "Zzzz", # 1039E..1039E ; Unknown - "Ugar", # 1039F..1039F ; Ugaritic - "Xpeo", # 103A0..103C3 ; Old_Persian - "Zzzz", # 103C4..103C7 ; Unknown - "Xpeo", # 103C8..103D5 ; Old_Persian - "Zzzz", # 103D6..103FF ; Unknown - "Dsrt", # 10400..1044F ; Deseret - "Shaw", # 10450..1047F ; Shavian - "Osma", # 10480..1049D ; Osmanya - "Zzzz", # 1049E..1049F ; Unknown - "Osma", # 104A0..104A9 ; Osmanya - "Zzzz", # 104AA..104AF ; Unknown - "Osge", # 104B0..104D3 ; Osage - "Zzzz", # 104D4..104D7 ; Unknown - "Osge", # 104D8..104FB ; Osage - "Zzzz", # 104FC..104FF ; Unknown - "Elba", # 10500..10527 ; Elbasan - "Zzzz", # 10528..1052F ; Unknown - "Aghb", # 10530..10563 ; Caucasian_Albanian - "Zzzz", # 10564..1056E ; Unknown - "Aghb", # 1056F..1056F ; Caucasian_Albanian - "Vith", # 10570..1057A ; Vithkuqi - "Zzzz", # 1057B..1057B ; Unknown - "Vith", # 1057C..1058A ; Vithkuqi - "Zzzz", # 1058B..1058B ; Unknown - "Vith", # 1058C..10592 ; Vithkuqi - "Zzzz", # 10593..10593 ; Unknown - "Vith", # 10594..10595 ; Vithkuqi - "Zzzz", # 10596..10596 ; Unknown - "Vith", # 10597..105A1 ; Vithkuqi - "Zzzz", # 105A2..105A2 ; Unknown - "Vith", # 105A3..105B1 ; Vithkuqi - "Zzzz", # 105B2..105B2 ; Unknown - "Vith", # 105B3..105B9 ; Vithkuqi - "Zzzz", # 105BA..105BA ; Unknown - "Vith", # 105BB..105BC ; Vithkuqi - "Zzzz", # 105BD..105FF ; Unknown - "Lina", # 10600..10736 ; Linear_A - "Zzzz", # 10737..1073F ; Unknown - "Lina", # 10740..10755 ; Linear_A - "Zzzz", # 10756..1075F ; Unknown - "Lina", # 10760..10767 ; Linear_A - "Zzzz", # 10768..1077F ; Unknown - "Latn", # 10780..10785 ; Latin - "Zzzz", # 10786..10786 ; Unknown - "Latn", 
# 10787..107B0 ; Latin - "Zzzz", # 107B1..107B1 ; Unknown - "Latn", # 107B2..107BA ; Latin - "Zzzz", # 107BB..107FF ; Unknown - "Cprt", # 10800..10805 ; Cypriot - "Zzzz", # 10806..10807 ; Unknown - "Cprt", # 10808..10808 ; Cypriot - "Zzzz", # 10809..10809 ; Unknown - "Cprt", # 1080A..10835 ; Cypriot - "Zzzz", # 10836..10836 ; Unknown - "Cprt", # 10837..10838 ; Cypriot - "Zzzz", # 10839..1083B ; Unknown - "Cprt", # 1083C..1083C ; Cypriot - "Zzzz", # 1083D..1083E ; Unknown - "Cprt", # 1083F..1083F ; Cypriot - "Armi", # 10840..10855 ; Imperial_Aramaic - "Zzzz", # 10856..10856 ; Unknown - "Armi", # 10857..1085F ; Imperial_Aramaic - "Palm", # 10860..1087F ; Palmyrene - "Nbat", # 10880..1089E ; Nabataean - "Zzzz", # 1089F..108A6 ; Unknown - "Nbat", # 108A7..108AF ; Nabataean - "Zzzz", # 108B0..108DF ; Unknown - "Hatr", # 108E0..108F2 ; Hatran - "Zzzz", # 108F3..108F3 ; Unknown - "Hatr", # 108F4..108F5 ; Hatran - "Zzzz", # 108F6..108FA ; Unknown - "Hatr", # 108FB..108FF ; Hatran - "Phnx", # 10900..1091B ; Phoenician - "Zzzz", # 1091C..1091E ; Unknown - "Phnx", # 1091F..1091F ; Phoenician - "Lydi", # 10920..10939 ; Lydian - "Zzzz", # 1093A..1093E ; Unknown - "Lydi", # 1093F..1093F ; Lydian - "Zzzz", # 10940..1097F ; Unknown - "Mero", # 10980..1099F ; Meroitic_Hieroglyphs - "Merc", # 109A0..109B7 ; Meroitic_Cursive - "Zzzz", # 109B8..109BB ; Unknown - "Merc", # 109BC..109CF ; Meroitic_Cursive - "Zzzz", # 109D0..109D1 ; Unknown - "Merc", # 109D2..109FF ; Meroitic_Cursive - "Khar", # 10A00..10A03 ; Kharoshthi - "Zzzz", # 10A04..10A04 ; Unknown - "Khar", # 10A05..10A06 ; Kharoshthi - "Zzzz", # 10A07..10A0B ; Unknown - "Khar", # 10A0C..10A13 ; Kharoshthi - "Zzzz", # 10A14..10A14 ; Unknown - "Khar", # 10A15..10A17 ; Kharoshthi - "Zzzz", # 10A18..10A18 ; Unknown - "Khar", # 10A19..10A35 ; Kharoshthi - "Zzzz", # 10A36..10A37 ; Unknown - "Khar", # 10A38..10A3A ; Kharoshthi - "Zzzz", # 10A3B..10A3E ; Unknown - "Khar", # 10A3F..10A48 ; Kharoshthi - "Zzzz", # 10A49..10A4F ; Unknown - "Khar", # 10A50..10A58 ; Kharoshthi - "Zzzz", # 10A59..10A5F ; Unknown - "Sarb", # 10A60..10A7F ; Old_South_Arabian - "Narb", # 10A80..10A9F ; Old_North_Arabian - "Zzzz", # 10AA0..10ABF ; Unknown - "Mani", # 10AC0..10AE6 ; Manichaean - "Zzzz", # 10AE7..10AEA ; Unknown - "Mani", # 10AEB..10AF6 ; Manichaean - "Zzzz", # 10AF7..10AFF ; Unknown - "Avst", # 10B00..10B35 ; Avestan - "Zzzz", # 10B36..10B38 ; Unknown - "Avst", # 10B39..10B3F ; Avestan - "Prti", # 10B40..10B55 ; Inscriptional_Parthian - "Zzzz", # 10B56..10B57 ; Unknown - "Prti", # 10B58..10B5F ; Inscriptional_Parthian - "Phli", # 10B60..10B72 ; Inscriptional_Pahlavi - "Zzzz", # 10B73..10B77 ; Unknown - "Phli", # 10B78..10B7F ; Inscriptional_Pahlavi - "Phlp", # 10B80..10B91 ; Psalter_Pahlavi - "Zzzz", # 10B92..10B98 ; Unknown - "Phlp", # 10B99..10B9C ; Psalter_Pahlavi - "Zzzz", # 10B9D..10BA8 ; Unknown - "Phlp", # 10BA9..10BAF ; Psalter_Pahlavi - "Zzzz", # 10BB0..10BFF ; Unknown - "Orkh", # 10C00..10C48 ; Old_Turkic - "Zzzz", # 10C49..10C7F ; Unknown - "Hung", # 10C80..10CB2 ; Old_Hungarian - "Zzzz", # 10CB3..10CBF ; Unknown - "Hung", # 10CC0..10CF2 ; Old_Hungarian - "Zzzz", # 10CF3..10CF9 ; Unknown - "Hung", # 10CFA..10CFF ; Old_Hungarian - "Rohg", # 10D00..10D27 ; Hanifi_Rohingya - "Zzzz", # 10D28..10D2F ; Unknown - "Rohg", # 10D30..10D39 ; Hanifi_Rohingya - "Zzzz", # 10D3A..10E5F ; Unknown - "Arab", # 10E60..10E7E ; Arabic - "Zzzz", # 10E7F..10E7F ; Unknown - "Yezi", # 10E80..10EA9 ; Yezidi - "Zzzz", # 10EAA..10EAA ; Unknown - "Yezi", # 10EAB..10EAD ; Yezidi - "Zzzz", # 
10EAE..10EAF ; Unknown - "Yezi", # 10EB0..10EB1 ; Yezidi - "Zzzz", # 10EB2..10EFC ; Unknown - "Arab", # 10EFD..10EFF ; Arabic - "Sogo", # 10F00..10F27 ; Old_Sogdian - "Zzzz", # 10F28..10F2F ; Unknown - "Sogd", # 10F30..10F59 ; Sogdian - "Zzzz", # 10F5A..10F6F ; Unknown - "Ougr", # 10F70..10F89 ; Old_Uyghur - "Zzzz", # 10F8A..10FAF ; Unknown - "Chrs", # 10FB0..10FCB ; Chorasmian - "Zzzz", # 10FCC..10FDF ; Unknown - "Elym", # 10FE0..10FF6 ; Elymaic - "Zzzz", # 10FF7..10FFF ; Unknown - "Brah", # 11000..1104D ; Brahmi - "Zzzz", # 1104E..11051 ; Unknown - "Brah", # 11052..11075 ; Brahmi - "Zzzz", # 11076..1107E ; Unknown - "Brah", # 1107F..1107F ; Brahmi - "Kthi", # 11080..110C2 ; Kaithi - "Zzzz", # 110C3..110CC ; Unknown - "Kthi", # 110CD..110CD ; Kaithi - "Zzzz", # 110CE..110CF ; Unknown - "Sora", # 110D0..110E8 ; Sora_Sompeng - "Zzzz", # 110E9..110EF ; Unknown - "Sora", # 110F0..110F9 ; Sora_Sompeng - "Zzzz", # 110FA..110FF ; Unknown - "Cakm", # 11100..11134 ; Chakma - "Zzzz", # 11135..11135 ; Unknown - "Cakm", # 11136..11147 ; Chakma - "Zzzz", # 11148..1114F ; Unknown - "Mahj", # 11150..11176 ; Mahajani - "Zzzz", # 11177..1117F ; Unknown - "Shrd", # 11180..111DF ; Sharada - "Zzzz", # 111E0..111E0 ; Unknown - "Sinh", # 111E1..111F4 ; Sinhala - "Zzzz", # 111F5..111FF ; Unknown - "Khoj", # 11200..11211 ; Khojki - "Zzzz", # 11212..11212 ; Unknown - "Khoj", # 11213..11241 ; Khojki - "Zzzz", # 11242..1127F ; Unknown - "Mult", # 11280..11286 ; Multani - "Zzzz", # 11287..11287 ; Unknown - "Mult", # 11288..11288 ; Multani - "Zzzz", # 11289..11289 ; Unknown - "Mult", # 1128A..1128D ; Multani - "Zzzz", # 1128E..1128E ; Unknown - "Mult", # 1128F..1129D ; Multani - "Zzzz", # 1129E..1129E ; Unknown - "Mult", # 1129F..112A9 ; Multani - "Zzzz", # 112AA..112AF ; Unknown - "Sind", # 112B0..112EA ; Khudawadi - "Zzzz", # 112EB..112EF ; Unknown - "Sind", # 112F0..112F9 ; Khudawadi - "Zzzz", # 112FA..112FF ; Unknown - "Gran", # 11300..11303 ; Grantha - "Zzzz", # 11304..11304 ; Unknown - "Gran", # 11305..1130C ; Grantha - "Zzzz", # 1130D..1130E ; Unknown - "Gran", # 1130F..11310 ; Grantha - "Zzzz", # 11311..11312 ; Unknown - "Gran", # 11313..11328 ; Grantha - "Zzzz", # 11329..11329 ; Unknown - "Gran", # 1132A..11330 ; Grantha - "Zzzz", # 11331..11331 ; Unknown - "Gran", # 11332..11333 ; Grantha - "Zzzz", # 11334..11334 ; Unknown - "Gran", # 11335..11339 ; Grantha - "Zzzz", # 1133A..1133A ; Unknown - "Zinh", # 1133B..1133B ; Inherited - "Gran", # 1133C..11344 ; Grantha - "Zzzz", # 11345..11346 ; Unknown - "Gran", # 11347..11348 ; Grantha - "Zzzz", # 11349..1134A ; Unknown - "Gran", # 1134B..1134D ; Grantha - "Zzzz", # 1134E..1134F ; Unknown - "Gran", # 11350..11350 ; Grantha - "Zzzz", # 11351..11356 ; Unknown - "Gran", # 11357..11357 ; Grantha - "Zzzz", # 11358..1135C ; Unknown - "Gran", # 1135D..11363 ; Grantha - "Zzzz", # 11364..11365 ; Unknown - "Gran", # 11366..1136C ; Grantha - "Zzzz", # 1136D..1136F ; Unknown - "Gran", # 11370..11374 ; Grantha - "Zzzz", # 11375..113FF ; Unknown - "Newa", # 11400..1145B ; Newa - "Zzzz", # 1145C..1145C ; Unknown - "Newa", # 1145D..11461 ; Newa - "Zzzz", # 11462..1147F ; Unknown - "Tirh", # 11480..114C7 ; Tirhuta - "Zzzz", # 114C8..114CF ; Unknown - "Tirh", # 114D0..114D9 ; Tirhuta - "Zzzz", # 114DA..1157F ; Unknown - "Sidd", # 11580..115B5 ; Siddham - "Zzzz", # 115B6..115B7 ; Unknown - "Sidd", # 115B8..115DD ; Siddham - "Zzzz", # 115DE..115FF ; Unknown - "Modi", # 11600..11644 ; Modi - "Zzzz", # 11645..1164F ; Unknown - "Modi", # 11650..11659 ; Modi - "Zzzz", # 1165A..1165F ; 
Unknown - "Mong", # 11660..1166C ; Mongolian - "Zzzz", # 1166D..1167F ; Unknown - "Takr", # 11680..116B9 ; Takri - "Zzzz", # 116BA..116BF ; Unknown - "Takr", # 116C0..116C9 ; Takri - "Zzzz", # 116CA..116FF ; Unknown - "Ahom", # 11700..1171A ; Ahom - "Zzzz", # 1171B..1171C ; Unknown - "Ahom", # 1171D..1172B ; Ahom - "Zzzz", # 1172C..1172F ; Unknown - "Ahom", # 11730..11746 ; Ahom - "Zzzz", # 11747..117FF ; Unknown - "Dogr", # 11800..1183B ; Dogra - "Zzzz", # 1183C..1189F ; Unknown - "Wara", # 118A0..118F2 ; Warang_Citi - "Zzzz", # 118F3..118FE ; Unknown - "Wara", # 118FF..118FF ; Warang_Citi - "Diak", # 11900..11906 ; Dives_Akuru - "Zzzz", # 11907..11908 ; Unknown - "Diak", # 11909..11909 ; Dives_Akuru - "Zzzz", # 1190A..1190B ; Unknown - "Diak", # 1190C..11913 ; Dives_Akuru - "Zzzz", # 11914..11914 ; Unknown - "Diak", # 11915..11916 ; Dives_Akuru - "Zzzz", # 11917..11917 ; Unknown - "Diak", # 11918..11935 ; Dives_Akuru - "Zzzz", # 11936..11936 ; Unknown - "Diak", # 11937..11938 ; Dives_Akuru - "Zzzz", # 11939..1193A ; Unknown - "Diak", # 1193B..11946 ; Dives_Akuru - "Zzzz", # 11947..1194F ; Unknown - "Diak", # 11950..11959 ; Dives_Akuru - "Zzzz", # 1195A..1199F ; Unknown - "Nand", # 119A0..119A7 ; Nandinagari - "Zzzz", # 119A8..119A9 ; Unknown - "Nand", # 119AA..119D7 ; Nandinagari - "Zzzz", # 119D8..119D9 ; Unknown - "Nand", # 119DA..119E4 ; Nandinagari - "Zzzz", # 119E5..119FF ; Unknown - "Zanb", # 11A00..11A47 ; Zanabazar_Square - "Zzzz", # 11A48..11A4F ; Unknown - "Soyo", # 11A50..11AA2 ; Soyombo - "Zzzz", # 11AA3..11AAF ; Unknown - "Cans", # 11AB0..11ABF ; Canadian_Aboriginal - "Pauc", # 11AC0..11AF8 ; Pau_Cin_Hau - "Zzzz", # 11AF9..11AFF ; Unknown - "Deva", # 11B00..11B09 ; Devanagari - "Zzzz", # 11B0A..11BFF ; Unknown - "Bhks", # 11C00..11C08 ; Bhaiksuki - "Zzzz", # 11C09..11C09 ; Unknown - "Bhks", # 11C0A..11C36 ; Bhaiksuki - "Zzzz", # 11C37..11C37 ; Unknown - "Bhks", # 11C38..11C45 ; Bhaiksuki - "Zzzz", # 11C46..11C4F ; Unknown - "Bhks", # 11C50..11C6C ; Bhaiksuki - "Zzzz", # 11C6D..11C6F ; Unknown - "Marc", # 11C70..11C8F ; Marchen - "Zzzz", # 11C90..11C91 ; Unknown - "Marc", # 11C92..11CA7 ; Marchen - "Zzzz", # 11CA8..11CA8 ; Unknown - "Marc", # 11CA9..11CB6 ; Marchen - "Zzzz", # 11CB7..11CFF ; Unknown - "Gonm", # 11D00..11D06 ; Masaram_Gondi - "Zzzz", # 11D07..11D07 ; Unknown - "Gonm", # 11D08..11D09 ; Masaram_Gondi - "Zzzz", # 11D0A..11D0A ; Unknown - "Gonm", # 11D0B..11D36 ; Masaram_Gondi - "Zzzz", # 11D37..11D39 ; Unknown - "Gonm", # 11D3A..11D3A ; Masaram_Gondi - "Zzzz", # 11D3B..11D3B ; Unknown - "Gonm", # 11D3C..11D3D ; Masaram_Gondi - "Zzzz", # 11D3E..11D3E ; Unknown - "Gonm", # 11D3F..11D47 ; Masaram_Gondi - "Zzzz", # 11D48..11D4F ; Unknown - "Gonm", # 11D50..11D59 ; Masaram_Gondi - "Zzzz", # 11D5A..11D5F ; Unknown - "Gong", # 11D60..11D65 ; Gunjala_Gondi - "Zzzz", # 11D66..11D66 ; Unknown - "Gong", # 11D67..11D68 ; Gunjala_Gondi - "Zzzz", # 11D69..11D69 ; Unknown - "Gong", # 11D6A..11D8E ; Gunjala_Gondi - "Zzzz", # 11D8F..11D8F ; Unknown - "Gong", # 11D90..11D91 ; Gunjala_Gondi - "Zzzz", # 11D92..11D92 ; Unknown - "Gong", # 11D93..11D98 ; Gunjala_Gondi - "Zzzz", # 11D99..11D9F ; Unknown - "Gong", # 11DA0..11DA9 ; Gunjala_Gondi - "Zzzz", # 11DAA..11EDF ; Unknown - "Maka", # 11EE0..11EF8 ; Makasar - "Zzzz", # 11EF9..11EFF ; Unknown - "Kawi", # 11F00..11F10 ; Kawi - "Zzzz", # 11F11..11F11 ; Unknown - "Kawi", # 11F12..11F3A ; Kawi - "Zzzz", # 11F3B..11F3D ; Unknown - "Kawi", # 11F3E..11F59 ; Kawi - "Zzzz", # 11F5A..11FAF ; Unknown - "Lisu", # 11FB0..11FB0 ; Lisu - "Zzzz", 
# 11FB1..11FBF ; Unknown - "Taml", # 11FC0..11FF1 ; Tamil - "Zzzz", # 11FF2..11FFE ; Unknown - "Taml", # 11FFF..11FFF ; Tamil - "Xsux", # 12000..12399 ; Cuneiform - "Zzzz", # 1239A..123FF ; Unknown - "Xsux", # 12400..1246E ; Cuneiform - "Zzzz", # 1246F..1246F ; Unknown - "Xsux", # 12470..12474 ; Cuneiform - "Zzzz", # 12475..1247F ; Unknown - "Xsux", # 12480..12543 ; Cuneiform - "Zzzz", # 12544..12F8F ; Unknown - "Cpmn", # 12F90..12FF2 ; Cypro_Minoan - "Zzzz", # 12FF3..12FFF ; Unknown - "Egyp", # 13000..13455 ; Egyptian_Hieroglyphs - "Zzzz", # 13456..143FF ; Unknown - "Hluw", # 14400..14646 ; Anatolian_Hieroglyphs - "Zzzz", # 14647..167FF ; Unknown - "Bamu", # 16800..16A38 ; Bamum - "Zzzz", # 16A39..16A3F ; Unknown - "Mroo", # 16A40..16A5E ; Mro - "Zzzz", # 16A5F..16A5F ; Unknown - "Mroo", # 16A60..16A69 ; Mro - "Zzzz", # 16A6A..16A6D ; Unknown - "Mroo", # 16A6E..16A6F ; Mro - "Tnsa", # 16A70..16ABE ; Tangsa - "Zzzz", # 16ABF..16ABF ; Unknown - "Tnsa", # 16AC0..16AC9 ; Tangsa - "Zzzz", # 16ACA..16ACF ; Unknown - "Bass", # 16AD0..16AED ; Bassa_Vah - "Zzzz", # 16AEE..16AEF ; Unknown - "Bass", # 16AF0..16AF5 ; Bassa_Vah - "Zzzz", # 16AF6..16AFF ; Unknown - "Hmng", # 16B00..16B45 ; Pahawh_Hmong - "Zzzz", # 16B46..16B4F ; Unknown - "Hmng", # 16B50..16B59 ; Pahawh_Hmong - "Zzzz", # 16B5A..16B5A ; Unknown - "Hmng", # 16B5B..16B61 ; Pahawh_Hmong - "Zzzz", # 16B62..16B62 ; Unknown - "Hmng", # 16B63..16B77 ; Pahawh_Hmong - "Zzzz", # 16B78..16B7C ; Unknown - "Hmng", # 16B7D..16B8F ; Pahawh_Hmong - "Zzzz", # 16B90..16E3F ; Unknown - "Medf", # 16E40..16E9A ; Medefaidrin - "Zzzz", # 16E9B..16EFF ; Unknown - "Plrd", # 16F00..16F4A ; Miao - "Zzzz", # 16F4B..16F4E ; Unknown - "Plrd", # 16F4F..16F87 ; Miao - "Zzzz", # 16F88..16F8E ; Unknown - "Plrd", # 16F8F..16F9F ; Miao - "Zzzz", # 16FA0..16FDF ; Unknown - "Tang", # 16FE0..16FE0 ; Tangut - "Nshu", # 16FE1..16FE1 ; Nushu - "Hani", # 16FE2..16FE3 ; Han - "Kits", # 16FE4..16FE4 ; Khitan_Small_Script - "Zzzz", # 16FE5..16FEF ; Unknown - "Hani", # 16FF0..16FF1 ; Han - "Zzzz", # 16FF2..16FFF ; Unknown - "Tang", # 17000..187F7 ; Tangut - "Zzzz", # 187F8..187FF ; Unknown - "Tang", # 18800..18AFF ; Tangut - "Kits", # 18B00..18CD5 ; Khitan_Small_Script - "Zzzz", # 18CD6..18CFF ; Unknown - "Tang", # 18D00..18D08 ; Tangut - "Zzzz", # 18D09..1AFEF ; Unknown - "Kana", # 1AFF0..1AFF3 ; Katakana - "Zzzz", # 1AFF4..1AFF4 ; Unknown - "Kana", # 1AFF5..1AFFB ; Katakana - "Zzzz", # 1AFFC..1AFFC ; Unknown - "Kana", # 1AFFD..1AFFE ; Katakana - "Zzzz", # 1AFFF..1AFFF ; Unknown - "Kana", # 1B000..1B000 ; Katakana - "Hira", # 1B001..1B11F ; Hiragana - "Kana", # 1B120..1B122 ; Katakana - "Zzzz", # 1B123..1B131 ; Unknown - "Hira", # 1B132..1B132 ; Hiragana - "Zzzz", # 1B133..1B14F ; Unknown - "Hira", # 1B150..1B152 ; Hiragana - "Zzzz", # 1B153..1B154 ; Unknown - "Kana", # 1B155..1B155 ; Katakana - "Zzzz", # 1B156..1B163 ; Unknown - "Kana", # 1B164..1B167 ; Katakana - "Zzzz", # 1B168..1B16F ; Unknown - "Nshu", # 1B170..1B2FB ; Nushu - "Zzzz", # 1B2FC..1BBFF ; Unknown - "Dupl", # 1BC00..1BC6A ; Duployan - "Zzzz", # 1BC6B..1BC6F ; Unknown - "Dupl", # 1BC70..1BC7C ; Duployan - "Zzzz", # 1BC7D..1BC7F ; Unknown - "Dupl", # 1BC80..1BC88 ; Duployan - "Zzzz", # 1BC89..1BC8F ; Unknown - "Dupl", # 1BC90..1BC99 ; Duployan - "Zzzz", # 1BC9A..1BC9B ; Unknown - "Dupl", # 1BC9C..1BC9F ; Duployan - "Zyyy", # 1BCA0..1BCA3 ; Common - "Zzzz", # 1BCA4..1CEFF ; Unknown - "Zinh", # 1CF00..1CF2D ; Inherited - "Zzzz", # 1CF2E..1CF2F ; Unknown - "Zinh", # 1CF30..1CF46 ; Inherited - "Zzzz", # 1CF47..1CF4F ; 
Unknown - "Zyyy", # 1CF50..1CFC3 ; Common - "Zzzz", # 1CFC4..1CFFF ; Unknown - "Zyyy", # 1D000..1D0F5 ; Common - "Zzzz", # 1D0F6..1D0FF ; Unknown - "Zyyy", # 1D100..1D126 ; Common - "Zzzz", # 1D127..1D128 ; Unknown - "Zyyy", # 1D129..1D166 ; Common - "Zinh", # 1D167..1D169 ; Inherited - "Zyyy", # 1D16A..1D17A ; Common - "Zinh", # 1D17B..1D182 ; Inherited - "Zyyy", # 1D183..1D184 ; Common - "Zinh", # 1D185..1D18B ; Inherited - "Zyyy", # 1D18C..1D1A9 ; Common - "Zinh", # 1D1AA..1D1AD ; Inherited - "Zyyy", # 1D1AE..1D1EA ; Common - "Zzzz", # 1D1EB..1D1FF ; Unknown - "Grek", # 1D200..1D245 ; Greek - "Zzzz", # 1D246..1D2BF ; Unknown - "Zyyy", # 1D2C0..1D2D3 ; Common - "Zzzz", # 1D2D4..1D2DF ; Unknown - "Zyyy", # 1D2E0..1D2F3 ; Common - "Zzzz", # 1D2F4..1D2FF ; Unknown - "Zyyy", # 1D300..1D356 ; Common - "Zzzz", # 1D357..1D35F ; Unknown - "Zyyy", # 1D360..1D378 ; Common - "Zzzz", # 1D379..1D3FF ; Unknown - "Zyyy", # 1D400..1D454 ; Common - "Zzzz", # 1D455..1D455 ; Unknown - "Zyyy", # 1D456..1D49C ; Common - "Zzzz", # 1D49D..1D49D ; Unknown - "Zyyy", # 1D49E..1D49F ; Common - "Zzzz", # 1D4A0..1D4A1 ; Unknown - "Zyyy", # 1D4A2..1D4A2 ; Common - "Zzzz", # 1D4A3..1D4A4 ; Unknown - "Zyyy", # 1D4A5..1D4A6 ; Common - "Zzzz", # 1D4A7..1D4A8 ; Unknown - "Zyyy", # 1D4A9..1D4AC ; Common - "Zzzz", # 1D4AD..1D4AD ; Unknown - "Zyyy", # 1D4AE..1D4B9 ; Common - "Zzzz", # 1D4BA..1D4BA ; Unknown - "Zyyy", # 1D4BB..1D4BB ; Common - "Zzzz", # 1D4BC..1D4BC ; Unknown - "Zyyy", # 1D4BD..1D4C3 ; Common - "Zzzz", # 1D4C4..1D4C4 ; Unknown - "Zyyy", # 1D4C5..1D505 ; Common - "Zzzz", # 1D506..1D506 ; Unknown - "Zyyy", # 1D507..1D50A ; Common - "Zzzz", # 1D50B..1D50C ; Unknown - "Zyyy", # 1D50D..1D514 ; Common - "Zzzz", # 1D515..1D515 ; Unknown - "Zyyy", # 1D516..1D51C ; Common - "Zzzz", # 1D51D..1D51D ; Unknown - "Zyyy", # 1D51E..1D539 ; Common - "Zzzz", # 1D53A..1D53A ; Unknown - "Zyyy", # 1D53B..1D53E ; Common - "Zzzz", # 1D53F..1D53F ; Unknown - "Zyyy", # 1D540..1D544 ; Common - "Zzzz", # 1D545..1D545 ; Unknown - "Zyyy", # 1D546..1D546 ; Common - "Zzzz", # 1D547..1D549 ; Unknown - "Zyyy", # 1D54A..1D550 ; Common - "Zzzz", # 1D551..1D551 ; Unknown - "Zyyy", # 1D552..1D6A5 ; Common - "Zzzz", # 1D6A6..1D6A7 ; Unknown - "Zyyy", # 1D6A8..1D7CB ; Common - "Zzzz", # 1D7CC..1D7CD ; Unknown - "Zyyy", # 1D7CE..1D7FF ; Common - "Sgnw", # 1D800..1DA8B ; SignWriting - "Zzzz", # 1DA8C..1DA9A ; Unknown - "Sgnw", # 1DA9B..1DA9F ; SignWriting - "Zzzz", # 1DAA0..1DAA0 ; Unknown - "Sgnw", # 1DAA1..1DAAF ; SignWriting - "Zzzz", # 1DAB0..1DEFF ; Unknown - "Latn", # 1DF00..1DF1E ; Latin - "Zzzz", # 1DF1F..1DF24 ; Unknown - "Latn", # 1DF25..1DF2A ; Latin - "Zzzz", # 1DF2B..1DFFF ; Unknown - "Glag", # 1E000..1E006 ; Glagolitic - "Zzzz", # 1E007..1E007 ; Unknown - "Glag", # 1E008..1E018 ; Glagolitic - "Zzzz", # 1E019..1E01A ; Unknown - "Glag", # 1E01B..1E021 ; Glagolitic - "Zzzz", # 1E022..1E022 ; Unknown - "Glag", # 1E023..1E024 ; Glagolitic - "Zzzz", # 1E025..1E025 ; Unknown - "Glag", # 1E026..1E02A ; Glagolitic - "Zzzz", # 1E02B..1E02F ; Unknown - "Cyrl", # 1E030..1E06D ; Cyrillic - "Zzzz", # 1E06E..1E08E ; Unknown - "Cyrl", # 1E08F..1E08F ; Cyrillic - "Zzzz", # 1E090..1E0FF ; Unknown - "Hmnp", # 1E100..1E12C ; Nyiakeng_Puachue_Hmong - "Zzzz", # 1E12D..1E12F ; Unknown - "Hmnp", # 1E130..1E13D ; Nyiakeng_Puachue_Hmong - "Zzzz", # 1E13E..1E13F ; Unknown - "Hmnp", # 1E140..1E149 ; Nyiakeng_Puachue_Hmong - "Zzzz", # 1E14A..1E14D ; Unknown - "Hmnp", # 1E14E..1E14F ; Nyiakeng_Puachue_Hmong - "Zzzz", # 1E150..1E28F ; Unknown - "Toto", # 
1E290..1E2AE ; Toto - "Zzzz", # 1E2AF..1E2BF ; Unknown - "Wcho", # 1E2C0..1E2F9 ; Wancho - "Zzzz", # 1E2FA..1E2FE ; Unknown - "Wcho", # 1E2FF..1E2FF ; Wancho - "Zzzz", # 1E300..1E4CF ; Unknown - "Nagm", # 1E4D0..1E4F9 ; Nag_Mundari - "Zzzz", # 1E4FA..1E7DF ; Unknown - "Ethi", # 1E7E0..1E7E6 ; Ethiopic - "Zzzz", # 1E7E7..1E7E7 ; Unknown - "Ethi", # 1E7E8..1E7EB ; Ethiopic - "Zzzz", # 1E7EC..1E7EC ; Unknown - "Ethi", # 1E7ED..1E7EE ; Ethiopic - "Zzzz", # 1E7EF..1E7EF ; Unknown - "Ethi", # 1E7F0..1E7FE ; Ethiopic - "Zzzz", # 1E7FF..1E7FF ; Unknown - "Mend", # 1E800..1E8C4 ; Mende_Kikakui - "Zzzz", # 1E8C5..1E8C6 ; Unknown - "Mend", # 1E8C7..1E8D6 ; Mende_Kikakui - "Zzzz", # 1E8D7..1E8FF ; Unknown - "Adlm", # 1E900..1E94B ; Adlam - "Zzzz", # 1E94C..1E94F ; Unknown - "Adlm", # 1E950..1E959 ; Adlam - "Zzzz", # 1E95A..1E95D ; Unknown - "Adlm", # 1E95E..1E95F ; Adlam - "Zzzz", # 1E960..1EC70 ; Unknown - "Zyyy", # 1EC71..1ECB4 ; Common - "Zzzz", # 1ECB5..1ED00 ; Unknown - "Zyyy", # 1ED01..1ED3D ; Common - "Zzzz", # 1ED3E..1EDFF ; Unknown - "Arab", # 1EE00..1EE03 ; Arabic - "Zzzz", # 1EE04..1EE04 ; Unknown - "Arab", # 1EE05..1EE1F ; Arabic - "Zzzz", # 1EE20..1EE20 ; Unknown - "Arab", # 1EE21..1EE22 ; Arabic - "Zzzz", # 1EE23..1EE23 ; Unknown - "Arab", # 1EE24..1EE24 ; Arabic - "Zzzz", # 1EE25..1EE26 ; Unknown - "Arab", # 1EE27..1EE27 ; Arabic - "Zzzz", # 1EE28..1EE28 ; Unknown - "Arab", # 1EE29..1EE32 ; Arabic - "Zzzz", # 1EE33..1EE33 ; Unknown - "Arab", # 1EE34..1EE37 ; Arabic - "Zzzz", # 1EE38..1EE38 ; Unknown - "Arab", # 1EE39..1EE39 ; Arabic - "Zzzz", # 1EE3A..1EE3A ; Unknown - "Arab", # 1EE3B..1EE3B ; Arabic - "Zzzz", # 1EE3C..1EE41 ; Unknown - "Arab", # 1EE42..1EE42 ; Arabic - "Zzzz", # 1EE43..1EE46 ; Unknown - "Arab", # 1EE47..1EE47 ; Arabic - "Zzzz", # 1EE48..1EE48 ; Unknown - "Arab", # 1EE49..1EE49 ; Arabic - "Zzzz", # 1EE4A..1EE4A ; Unknown - "Arab", # 1EE4B..1EE4B ; Arabic - "Zzzz", # 1EE4C..1EE4C ; Unknown - "Arab", # 1EE4D..1EE4F ; Arabic - "Zzzz", # 1EE50..1EE50 ; Unknown - "Arab", # 1EE51..1EE52 ; Arabic - "Zzzz", # 1EE53..1EE53 ; Unknown - "Arab", # 1EE54..1EE54 ; Arabic - "Zzzz", # 1EE55..1EE56 ; Unknown - "Arab", # 1EE57..1EE57 ; Arabic - "Zzzz", # 1EE58..1EE58 ; Unknown - "Arab", # 1EE59..1EE59 ; Arabic - "Zzzz", # 1EE5A..1EE5A ; Unknown - "Arab", # 1EE5B..1EE5B ; Arabic - "Zzzz", # 1EE5C..1EE5C ; Unknown - "Arab", # 1EE5D..1EE5D ; Arabic - "Zzzz", # 1EE5E..1EE5E ; Unknown - "Arab", # 1EE5F..1EE5F ; Arabic - "Zzzz", # 1EE60..1EE60 ; Unknown - "Arab", # 1EE61..1EE62 ; Arabic - "Zzzz", # 1EE63..1EE63 ; Unknown - "Arab", # 1EE64..1EE64 ; Arabic - "Zzzz", # 1EE65..1EE66 ; Unknown - "Arab", # 1EE67..1EE6A ; Arabic - "Zzzz", # 1EE6B..1EE6B ; Unknown - "Arab", # 1EE6C..1EE72 ; Arabic - "Zzzz", # 1EE73..1EE73 ; Unknown - "Arab", # 1EE74..1EE77 ; Arabic - "Zzzz", # 1EE78..1EE78 ; Unknown - "Arab", # 1EE79..1EE7C ; Arabic - "Zzzz", # 1EE7D..1EE7D ; Unknown - "Arab", # 1EE7E..1EE7E ; Arabic - "Zzzz", # 1EE7F..1EE7F ; Unknown - "Arab", # 1EE80..1EE89 ; Arabic - "Zzzz", # 1EE8A..1EE8A ; Unknown - "Arab", # 1EE8B..1EE9B ; Arabic - "Zzzz", # 1EE9C..1EEA0 ; Unknown - "Arab", # 1EEA1..1EEA3 ; Arabic - "Zzzz", # 1EEA4..1EEA4 ; Unknown - "Arab", # 1EEA5..1EEA9 ; Arabic - "Zzzz", # 1EEAA..1EEAA ; Unknown - "Arab", # 1EEAB..1EEBB ; Arabic - "Zzzz", # 1EEBC..1EEEF ; Unknown - "Arab", # 1EEF0..1EEF1 ; Arabic - "Zzzz", # 1EEF2..1EFFF ; Unknown - "Zyyy", # 1F000..1F02B ; Common - "Zzzz", # 1F02C..1F02F ; Unknown - "Zyyy", # 1F030..1F093 ; Common - "Zzzz", # 1F094..1F09F ; Unknown - "Zyyy", # 1F0A0..1F0AE 
; Common - "Zzzz", # 1F0AF..1F0B0 ; Unknown - "Zyyy", # 1F0B1..1F0BF ; Common - "Zzzz", # 1F0C0..1F0C0 ; Unknown - "Zyyy", # 1F0C1..1F0CF ; Common - "Zzzz", # 1F0D0..1F0D0 ; Unknown - "Zyyy", # 1F0D1..1F0F5 ; Common - "Zzzz", # 1F0F6..1F0FF ; Unknown - "Zyyy", # 1F100..1F1AD ; Common - "Zzzz", # 1F1AE..1F1E5 ; Unknown - "Zyyy", # 1F1E6..1F1FF ; Common - "Hira", # 1F200..1F200 ; Hiragana - "Zyyy", # 1F201..1F202 ; Common - "Zzzz", # 1F203..1F20F ; Unknown - "Zyyy", # 1F210..1F23B ; Common - "Zzzz", # 1F23C..1F23F ; Unknown - "Zyyy", # 1F240..1F248 ; Common - "Zzzz", # 1F249..1F24F ; Unknown - "Zyyy", # 1F250..1F251 ; Common - "Zzzz", # 1F252..1F25F ; Unknown - "Zyyy", # 1F260..1F265 ; Common - "Zzzz", # 1F266..1F2FF ; Unknown - "Zyyy", # 1F300..1F6D7 ; Common - "Zzzz", # 1F6D8..1F6DB ; Unknown - "Zyyy", # 1F6DC..1F6EC ; Common - "Zzzz", # 1F6ED..1F6EF ; Unknown - "Zyyy", # 1F6F0..1F6FC ; Common - "Zzzz", # 1F6FD..1F6FF ; Unknown - "Zyyy", # 1F700..1F776 ; Common - "Zzzz", # 1F777..1F77A ; Unknown - "Zyyy", # 1F77B..1F7D9 ; Common - "Zzzz", # 1F7DA..1F7DF ; Unknown - "Zyyy", # 1F7E0..1F7EB ; Common - "Zzzz", # 1F7EC..1F7EF ; Unknown - "Zyyy", # 1F7F0..1F7F0 ; Common - "Zzzz", # 1F7F1..1F7FF ; Unknown - "Zyyy", # 1F800..1F80B ; Common - "Zzzz", # 1F80C..1F80F ; Unknown - "Zyyy", # 1F810..1F847 ; Common - "Zzzz", # 1F848..1F84F ; Unknown - "Zyyy", # 1F850..1F859 ; Common - "Zzzz", # 1F85A..1F85F ; Unknown - "Zyyy", # 1F860..1F887 ; Common - "Zzzz", # 1F888..1F88F ; Unknown - "Zyyy", # 1F890..1F8AD ; Common - "Zzzz", # 1F8AE..1F8AF ; Unknown - "Zyyy", # 1F8B0..1F8B1 ; Common - "Zzzz", # 1F8B2..1F8FF ; Unknown - "Zyyy", # 1F900..1FA53 ; Common - "Zzzz", # 1FA54..1FA5F ; Unknown - "Zyyy", # 1FA60..1FA6D ; Common - "Zzzz", # 1FA6E..1FA6F ; Unknown - "Zyyy", # 1FA70..1FA7C ; Common - "Zzzz", # 1FA7D..1FA7F ; Unknown - "Zyyy", # 1FA80..1FA88 ; Common - "Zzzz", # 1FA89..1FA8F ; Unknown - "Zyyy", # 1FA90..1FABD ; Common - "Zzzz", # 1FABE..1FABE ; Unknown - "Zyyy", # 1FABF..1FAC5 ; Common - "Zzzz", # 1FAC6..1FACD ; Unknown - "Zyyy", # 1FACE..1FADB ; Common - "Zzzz", # 1FADC..1FADF ; Unknown - "Zyyy", # 1FAE0..1FAE8 ; Common - "Zzzz", # 1FAE9..1FAEF ; Unknown - "Zyyy", # 1FAF0..1FAF8 ; Common - "Zzzz", # 1FAF9..1FAFF ; Unknown - "Zyyy", # 1FB00..1FB92 ; Common - "Zzzz", # 1FB93..1FB93 ; Unknown - "Zyyy", # 1FB94..1FBCA ; Common - "Zzzz", # 1FBCB..1FBEF ; Unknown - "Zyyy", # 1FBF0..1FBF9 ; Common - "Zzzz", # 1FBFA..1FFFF ; Unknown - "Hani", # 20000..2A6DF ; Han - "Zzzz", # 2A6E0..2A6FF ; Unknown - "Hani", # 2A700..2B739 ; Han - "Zzzz", # 2B73A..2B73F ; Unknown - "Hani", # 2B740..2B81D ; Han - "Zzzz", # 2B81E..2B81F ; Unknown - "Hani", # 2B820..2CEA1 ; Han - "Zzzz", # 2CEA2..2CEAF ; Unknown - "Hani", # 2CEB0..2EBE0 ; Han - "Zzzz", # 2EBE1..2F7FF ; Unknown - "Hani", # 2F800..2FA1D ; Han - "Zzzz", # 2FA1E..2FFFF ; Unknown - "Hani", # 30000..3134A ; Han - "Zzzz", # 3134B..3134F ; Unknown - "Hani", # 31350..323AF ; Han - "Zzzz", # 323B0..E0000 ; Unknown - "Zyyy", # E0001..E0001 ; Common - "Zzzz", # E0002..E001F ; Unknown - "Zyyy", # E0020..E007F ; Common - "Zzzz", # E0080..E00FF ; Unknown - "Zinh", # E0100..E01EF ; Inherited - "Zzzz", # E01F0..10FFFF ; Unknown -] - -NAMES = { - "Adlm": "Adlam", - "Aghb": "Caucasian_Albanian", - "Ahom": "Ahom", - "Arab": "Arabic", - "Armi": "Imperial_Aramaic", - "Armn": "Armenian", - "Avst": "Avestan", - "Bali": "Balinese", - "Bamu": "Bamum", - "Bass": "Bassa_Vah", - "Batk": "Batak", - "Beng": "Bengali", - "Bhks": "Bhaiksuki", - "Bopo": "Bopomofo", - "Brah": "Brahmi", - 
"Brai": "Braille", - "Bugi": "Buginese", - "Buhd": "Buhid", - "Cakm": "Chakma", - "Cans": "Canadian_Aboriginal", - "Cari": "Carian", - "Cham": "Cham", - "Cher": "Cherokee", - "Chrs": "Chorasmian", - "Copt": "Coptic", - "Cpmn": "Cypro_Minoan", - "Cprt": "Cypriot", - "Cyrl": "Cyrillic", - "Deva": "Devanagari", - "Diak": "Dives_Akuru", - "Dogr": "Dogra", - "Dsrt": "Deseret", - "Dupl": "Duployan", - "Egyp": "Egyptian_Hieroglyphs", - "Elba": "Elbasan", - "Elym": "Elymaic", - "Ethi": "Ethiopic", - "Geor": "Georgian", - "Glag": "Glagolitic", - "Gong": "Gunjala_Gondi", - "Gonm": "Masaram_Gondi", - "Goth": "Gothic", - "Gran": "Grantha", - "Grek": "Greek", - "Gujr": "Gujarati", - "Guru": "Gurmukhi", - "Hang": "Hangul", - "Hani": "Han", - "Hano": "Hanunoo", - "Hatr": "Hatran", - "Hebr": "Hebrew", - "Hira": "Hiragana", - "Hluw": "Anatolian_Hieroglyphs", - "Hmng": "Pahawh_Hmong", - "Hmnp": "Nyiakeng_Puachue_Hmong", - "Hrkt": "Katakana_Or_Hiragana", - "Hung": "Old_Hungarian", - "Ital": "Old_Italic", - "Java": "Javanese", - "Kali": "Kayah_Li", - "Kana": "Katakana", - "Kawi": "Kawi", - "Khar": "Kharoshthi", - "Khmr": "Khmer", - "Khoj": "Khojki", - "Kits": "Khitan_Small_Script", - "Knda": "Kannada", - "Kthi": "Kaithi", - "Lana": "Tai_Tham", - "Laoo": "Lao", - "Latn": "Latin", - "Lepc": "Lepcha", - "Limb": "Limbu", - "Lina": "Linear_A", - "Linb": "Linear_B", - "Lisu": "Lisu", - "Lyci": "Lycian", - "Lydi": "Lydian", - "Mahj": "Mahajani", - "Maka": "Makasar", - "Mand": "Mandaic", - "Mani": "Manichaean", - "Marc": "Marchen", - "Medf": "Medefaidrin", - "Mend": "Mende_Kikakui", - "Merc": "Meroitic_Cursive", - "Mero": "Meroitic_Hieroglyphs", - "Mlym": "Malayalam", - "Modi": "Modi", - "Mong": "Mongolian", - "Mroo": "Mro", - "Mtei": "Meetei_Mayek", - "Mult": "Multani", - "Mymr": "Myanmar", - "Nagm": "Nag_Mundari", - "Nand": "Nandinagari", - "Narb": "Old_North_Arabian", - "Nbat": "Nabataean", - "Newa": "Newa", - "Nkoo": "Nko", - "Nshu": "Nushu", - "Ogam": "Ogham", - "Olck": "Ol_Chiki", - "Orkh": "Old_Turkic", - "Orya": "Oriya", - "Osge": "Osage", - "Osma": "Osmanya", - "Ougr": "Old_Uyghur", - "Palm": "Palmyrene", - "Pauc": "Pau_Cin_Hau", - "Perm": "Old_Permic", - "Phag": "Phags_Pa", - "Phli": "Inscriptional_Pahlavi", - "Phlp": "Psalter_Pahlavi", - "Phnx": "Phoenician", - "Plrd": "Miao", - "Prti": "Inscriptional_Parthian", - "Rjng": "Rejang", - "Rohg": "Hanifi_Rohingya", - "Runr": "Runic", - "Samr": "Samaritan", - "Sarb": "Old_South_Arabian", - "Saur": "Saurashtra", - "Sgnw": "SignWriting", - "Shaw": "Shavian", - "Shrd": "Sharada", - "Sidd": "Siddham", - "Sind": "Khudawadi", - "Sinh": "Sinhala", - "Sogd": "Sogdian", - "Sogo": "Old_Sogdian", - "Sora": "Sora_Sompeng", - "Soyo": "Soyombo", - "Sund": "Sundanese", - "Sylo": "Syloti_Nagri", - "Syrc": "Syriac", - "Tagb": "Tagbanwa", - "Takr": "Takri", - "Tale": "Tai_Le", - "Talu": "New_Tai_Lue", - "Taml": "Tamil", - "Tang": "Tangut", - "Tavt": "Tai_Viet", - "Telu": "Telugu", - "Tfng": "Tifinagh", - "Tglg": "Tagalog", - "Thaa": "Thaana", - "Thai": "Thai", - "Tibt": "Tibetan", - "Tirh": "Tirhuta", - "Tnsa": "Tangsa", - "Toto": "Toto", - "Ugar": "Ugaritic", - "Vaii": "Vai", - "Vith": "Vithkuqi", - "Wara": "Warang_Citi", - "Wcho": "Wancho", - "Xpeo": "Old_Persian", - "Xsux": "Cuneiform", - "Yezi": "Yezidi", - "Yiii": "Yi", - "Zanb": "Zanabazar_Square", - "Zinh": "Inherited", - "Zyyy": "Common", - "Zzzz": "Unknown", -} diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/importlib_resources/_common.py 
b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/importlib_resources/_common.py deleted file mode 100644 index 3c6de1cfb2e7b8f4ae95100589c4eaa84fb99926..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/importlib_resources/_common.py +++ /dev/null @@ -1,207 +0,0 @@ -import os -import pathlib -import tempfile -import functools -import contextlib -import types -import importlib -import inspect -import warnings -import itertools - -from typing import Union, Optional, cast -from .abc import ResourceReader, Traversable - -from ._compat import wrap_spec - -Package = Union[types.ModuleType, str] -Anchor = Package - - -def package_to_anchor(func): - """ - Replace 'package' parameter as 'anchor' and warn about the change. - - Other errors should fall through. - - >>> files('a', 'b') - Traceback (most recent call last): - TypeError: files() takes from 0 to 1 positional arguments but 2 were given - """ - undefined = object() - - @functools.wraps(func) - def wrapper(anchor=undefined, package=undefined): - if package is not undefined: - if anchor is not undefined: - return func(anchor, package) - warnings.warn( - "First parameter to files is renamed to 'anchor'", - DeprecationWarning, - stacklevel=2, - ) - return func(package) - elif anchor is undefined: - return func() - return func(anchor) - - return wrapper - - -@package_to_anchor -def files(anchor: Optional[Anchor] = None) -> Traversable: - """ - Get a Traversable resource for an anchor. - """ - return from_package(resolve(anchor)) - - -def get_resource_reader(package: types.ModuleType) -> Optional[ResourceReader]: - """ - Return the package's loader if it's a ResourceReader. - """ - # We can't use - # a issubclass() check here because apparently abc.'s __subclasscheck__() - # hook wants to create a weak reference to the object, but - # zipimport.zipimporter does not support weak references, resulting in a - # TypeError. That seems terrible. - spec = package.__spec__ - reader = getattr(spec.loader, 'get_resource_reader', None) # type: ignore - if reader is None: - return None - return reader(spec.name) # type: ignore - - -@functools.singledispatch -def resolve(cand: Optional[Anchor]) -> types.ModuleType: - return cast(types.ModuleType, cand) - - -@resolve.register -def _(cand: str) -> types.ModuleType: - return importlib.import_module(cand) - - -@resolve.register -def _(cand: None) -> types.ModuleType: - return resolve(_infer_caller().f_globals['__name__']) - - -def _infer_caller(): - """ - Walk the stack and find the frame of the first caller not in this module. - """ - - def is_this_file(frame_info): - return frame_info.filename == __file__ - - def is_wrapper(frame_info): - return frame_info.function == 'wrapper' - - not_this_file = itertools.filterfalse(is_this_file, inspect.stack()) - # also exclude 'wrapper' due to singledispatch in the call stack - callers = itertools.filterfalse(is_wrapper, not_this_file) - return next(callers).frame - - -def from_package(package: types.ModuleType): - """ - Return a Traversable object for the given package. - - """ - spec = wrap_spec(package) - reader = spec.loader.get_resource_reader(spec.name) - return reader.files() - - -@contextlib.contextmanager -def _tempfile( - reader, - suffix='', - # gh-93353: Keep a reference to call os.remove() in late Python - # finalization. 
- *, - _os_remove=os.remove, -): - # Not using tempfile.NamedTemporaryFile as it leads to deeper 'try' - # blocks due to the need to close the temporary file to work on Windows - # properly. - fd, raw_path = tempfile.mkstemp(suffix=suffix) - try: - try: - os.write(fd, reader()) - finally: - os.close(fd) - del reader - yield pathlib.Path(raw_path) - finally: - try: - _os_remove(raw_path) - except FileNotFoundError: - pass - - -def _temp_file(path): - return _tempfile(path.read_bytes, suffix=path.name) - - -def _is_present_dir(path: Traversable) -> bool: - """ - Some Traversables implement ``is_dir()`` to raise an - exception (i.e. ``FileNotFoundError``) when the - directory doesn't exist. This function wraps that call - to always return a boolean and only return True - if there's a dir and it exists. - """ - with contextlib.suppress(FileNotFoundError): - return path.is_dir() - return False - - -@functools.singledispatch -def as_file(path): - """ - Given a Traversable object, return that object as a - path on the local file system in a context manager. - """ - return _temp_dir(path) if _is_present_dir(path) else _temp_file(path) - - -@as_file.register(pathlib.Path) -@contextlib.contextmanager -def _(path): - """ - Degenerate behavior for pathlib.Path objects. - """ - yield path - - -@contextlib.contextmanager -def _temp_path(dir: tempfile.TemporaryDirectory): - """ - Wrap tempfile.TemporyDirectory to return a pathlib object. - """ - with dir as result: - yield pathlib.Path(result) - - -@contextlib.contextmanager -def _temp_dir(path): - """ - Given a traversable dir, recursively replicate the whole tree - to the file system in a context manager. - """ - assert path.is_dir() - with _temp_path(tempfile.TemporaryDirectory()) as temp_dir: - yield _write_contents(temp_dir, path) - - -def _write_contents(target, source): - child = target.joinpath(source.name) - if source.is_dir(): - child.mkdir() - for item in source.iterdir(): - _write_contents(child, item) - else: - child.write_bytes(source.read_bytes()) - return child diff --git a/spaces/decodemai/future_in_words/README.md b/spaces/decodemai/future_in_words/README.md deleted file mode 100644 index dfff9131edea49f12838c424ab8811164e8a9335..0000000000000000000000000000000000000000 --- a/spaces/decodemai/future_in_words/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Future In Words -emoji: 😻 -colorFrom: purple -colorTo: gray -sdk: gradio -sdk_version: 3.16.2 -app_file: app.py -pinned: false -license: cc-by-nd-4.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/deepwisdom/MetaGPT/metagpt/document_store/__init__.py b/spaces/deepwisdom/MetaGPT/metagpt/document_store/__init__.py deleted file mode 100644 index 766e141a5e90079de122fda03fa5ff3a5e833f54..0000000000000000000000000000000000000000 --- a/spaces/deepwisdom/MetaGPT/metagpt/document_store/__init__.py +++ /dev/null @@ -1,11 +0,0 @@ -#!/usr/bin/env python -# -*- coding: utf-8 -*- -""" -@Time : 2023/5/25 10:20 -@Author : alexanderwu -@File : __init__.py -""" - -from metagpt.document_store.faiss_store import FaissStore - -__all__ = ["FaissStore"] diff --git a/spaces/denisp1/GraphViz-Demo/app.py b/spaces/denisp1/GraphViz-Demo/app.py deleted file mode 100644 index 41c54429881649509788866fd6ef9cf85eb00c49..0000000000000000000000000000000000000000 --- a/spaces/denisp1/GraphViz-Demo/app.py +++ /dev/null @@ -1,397 +0,0 @@ - - -import time -import re -import pandas as pd -import numpy as np -import torch -import 
torch.nn.functional as F -import graphviz as graphviz -import pydeck as pdk -import streamlit as st - -from transformers import AutoTokenizer, AutoModel -from tokenizers import Tokenizer, AddedToken -from st_click_detector import click_detector - -# Define selection options and sort alphabetically - -st.graphviz_chart(''' -graph G { - fontname="Helvetica,Arial,sans-serif" - node [fontname="Helvetica,Arial,sans-serif"] - edge [fontname="Helvetica,Arial,sans-serif"] - layout=fdp - e - subgraph clusterA { - a -- b; - subgraph clusterC { - C -- D; - } - } - subgraph clusterB { - d -- f - } - d -- D - e -- clusterB - clusterC -- clusterB -} -''') - -st.graphviz_chart(''' -graph Transparency { - layout=neato - start=11 // empiric value to set orientation - bgcolor="#0000ff11" - node [shape=circle width=2.22 label="" style=filled] - 5 [color="#0000ff80"] - 6 [color="#ee00ee80"] - 1 [color="#ff000080"] - 2 [color="#eeee0080"] - 3 [color="#00ff0080"] - 4 [color="#00eeee80"] - 1 -- 2 -- 3 -- 4 -- 5 -- 6 -- 1 - } -''') - -st.graphviz_chart(''' -digraph UML_Class_diagram { - fontname="Helvetica,Arial,sans-serif" - node [fontname="Helvetica,Arial,sans-serif"] - edge [fontname="Helvetica,Arial,sans-serif"] - labelloc="t" - label="UML Class diagram demo" - graph [splines=false] - node [shape=record style=filled fillcolor=gray95] - edge [arrowhead=vee style=dashed] - Client -> Interface1 [xlabel=dependency] - Client -> Interface2 - edge [dir=back arrowtail=empty style=""] - Interface1 -> Class1 [xlabel=inheritance] - Interface2 -> Class1 [dir=none] - Interface2 [label="" xlabel="Simple\ninterface" shape=circle] - Interface1[label = <{«interface» I/O | + property
    ...
    |+ method
    ...
    }>] - Class1[label = <{I/O class | + property
    ...
    |+ method
    ...
    }>] - edge [dir=back arrowtail=empty style=dashed] - Class1 -> System_1 [xlabel=implementation] - System_1 [label = <{System | + property
    ...
    |+ method
    ...
    }>] - "Shared resource" [label = <{Shared resource | + property
    ...
    |+ method
    ...
    }>] - edge [dir=back arrowtail=diamond] - "System_1" -> Subsystem_1 [xlabel="composition"] - Subsystem_1[label = <{Subsystem 1 | + property
    ...
    |+ method
    ...
    }>] - Subsystem_2[label = <{Subsystem 2 | + property
    ...
    |+ method
    ...
    }>] - Subsystem_3[label = <{Subsystem 3 | + property
    ...
    |+ method
    ...
    }>] - "System_1" -> Subsystem_2 - "System_1" -> Subsystem_3 - edge [xdir=back arrowtail=odiamond] - Subsystem_1 -> "Shared resource" [xlabel=aggregation] - {Subsystem_2 Subsystem_3 } -> "Shared resource" -} -''') - - - -st.graphviz_chart(''' -digraph G { - fontname="Helvetica,Arial,sans-serif" - node [fontname="Helvetica,Arial,sans-serif"] - edge [fontname="Helvetica,Arial,sans-serif"] - subgraph cluster_1 { - node [ style=filled,shape="box",fillcolor="antiquewhite:aquamarine" ]n5; - node [ shape="ellipse",fillcolor="bisque4:blue2" ]n4; - node [ shape="circle",fillcolor="cadetblue1:chocolate1" ]n3; - node [ shape="diamond",fillcolor="crimson:cyan4" ]n2; - node [ shape="triangle",fillcolor="deepskyblue2:firebrick" ]n1; - node [ shape="pentagon",fillcolor="gray24:gray88" ]n0; - label = "X11 Colors"; - } - subgraph cluster_2 { - node [ style=filled,shape="box",fillcolor="bisque:brown" ]n11; - node [ shape="ellipse",fillcolor="green:darkorchid" ]n10; - node [ shape="circle",fillcolor="deepskyblue:gold" ]n9; - node [ shape="diamond",fillcolor="lightseagreen:orangered" ]n8; - node [ shape="triangle",fillcolor="turquoise:salmon" ]n7; - node [ shape="pentagon",fillcolor="snow:black" ]n6; - label = "SVG Colors"; - } - subgraph cluster_3 { - node [ style=filled,shape="box",fillcolor="/accent3/1:/accent3/3" ]n17; - node [ shape="ellipse",fillcolor="/accent4/1:/accent4/4" ]n16; - node [ shape="circle",fillcolor="/accent5/1:/accent5/5" ]n15; - node [ shape="diamond",fillcolor="/accent6/1:/accent6/6" ]n14; - node [ shape="triangle",fillcolor="/accent7/1:/accent7/7" ]n13; - node [ shape="pentagon",fillcolor="/accent8/1:/accent8/8" ]n12; - label = "Brewer - accent"; - } - subgraph cluster_4 { - node [ style=filled,shape="box",fillcolor="/blues3/1:/blues3/2" ]n23; - node [ shape="ellipse",fillcolor="/blues4/1:/blues4/3" ]n22; - node [ shape="circle",fillcolor="/blues5/1:/blues5/4" ]n21; - node [ shape="diamond",fillcolor="/blues6/1:/blues6/5" ]n20; - node [ shape="triangle",fillcolor="/blues7/1:/blues7/6" ]n19; - node [ shape="pentagon",fillcolor="/blues8/1:/blues8/7" ]n18; - label = "Brewer - blues"; - } -n3 -> n9 -> n15 -> n21; -} -''') - -st.graphviz_chart(''' -digraph G {bgcolor="#0000FF44:#FF000044" gradientangle=90 - fontname="Helvetica,Arial,sans-serif" - node [fontname="Helvetica,Arial,sans-serif"] - edge [fontname="Helvetica,Arial,sans-serif"] - subgraph cluster_0 { - style=filled; - color=lightgrey; - fillcolor="darkgray:gold"; - gradientangle=0 - node [fillcolor="yellow:green" style=filled gradientangle=270] a0; - node [fillcolor="lightgreen:red"] a1; - node [fillcolor="lightskyblue:darkcyan"] a2; - node [fillcolor="cyan:lightslateblue"] a3; - a0 -> a1 -> a2 -> a3; - label = "process #1"; - } - subgraph cluster_1 { - node [fillcolor="yellow:magenta" - style=filled gradientangle=270] b0; - node [fillcolor="violet:darkcyan"] b1; - node [fillcolor="peachpuff:red"] b2; - node [fillcolor="mediumpurple:purple"] b3; - b0 -> b1 -> b2 -> b3; - label = "process #2"; - color=blue - fillcolor="darkgray:gold"; - gradientangle=0 - style=filled; - } - start -> a0; - start -> b0; - a1 -> b3; - b2 -> a3; - a3 -> a0; - a3 -> end; - b3 -> end; - start [shape=Mdiamond , - fillcolor="pink:red", - gradientangle=90, - style=radial]; - end [shape=Msquare, - fillcolor="lightyellow:orange", - style=radial, - gradientangle=90]; -} -''') - -st.graphviz_chart(''' -graph Color_wheel { - graph [ - layout = neato - label = "Color wheel, 33 colors.\nNeato layout" - labelloc = b - fontname = "Helvetica,Arial,sans-serif" - 
start = regular - normalize = 0 - ] - node [ - shape = circle - style = filled - color = "#00000088" - fontname = "Helvetica,Arial,sans-serif" - ] - edge [ - len = 2.7 - color = "#00000088" - fontname = "Helvetica,Arial,sans-serif" - ] - subgraph Dark { - node [fontcolor = white width = 1.4] - center [width = 1 style = invis shape = point] - center -- darkred [label = "0°/360°"] - darkred [fillcolor = darkred] - brown [fillcolor = brown] - brown -- center [label = "30°"] - olive [fillcolor = olive] - olive -- center [label = "60°"] - darkolivegreen [fillcolor = darkolivegreen fontsize = 10] - darkolivegreen -- center [label = "90°"] - darkgreen [fillcolor = darkgreen] - darkgreen -- center [label = "120°"] - "dark hue 0.416" [color = ".416 1 .6" fontcolor = white] - "dark hue 0.416" -- center [label = "150°"] - darkcyan [fillcolor = darkcyan] - darkcyan -- center [label = "180°"] - "dark hue 0.583" [color = ".583 1 .6" fontcolor = white] - "dark hue 0.583" -- center [label = "210°"] - darkblue [fillcolor = darkblue] - darkblue -- center [label = "240°"] - "dark hue 0.750" [color = ".750 1 .6"] - "dark hue 0.750" -- center [label = "270°"] - darkmagenta [fillcolor = darkmagenta] - darkmagenta -- center [label = "300°"] - "dark hue 0.916" [color = ".916 1 .6"] - "dark hue 0.916" -- center [label = "330°"] - } - subgraph Tue { - node [width = 1.3] - "hue 0.083" -- brown - "hue 0.083" [color = ".083 1 1"] - "hue 0.125" [color = ".125 1 1"] - "hue 0.166" -- olive - "hue 0.166" [color = ".166 1 1"] - "hue 0.208" [color = ".208 1 1"] - "hue 0.250" -- darkolivegreen - "hue 0.250" [color = ".250 1 1"] - "hue 0.291" [color = ".291 1 1"] - "hue 0.333" -- darkgreen - "hue 0.333" [color = ".333 1 1"] - "hue 0.375" [color = ".375 1 1"] - "hue 0.416" -- "dark hue 0.416" - "hue 0.416" [color = ".416 1 1"] - "hue 0.458" [color = ".458 1 1"] - "hue 0.500" -- darkcyan - "hue 0.500" [color = ".500 1 1"] - "hue 0.541" [color = ".541 1 1"] - node [fontcolor = white] - "hue 0.000" [color = ".000 1 1"] - "hue 0.000" -- darkred - "hue 0.041" [color = ".041 1 1"] - "hue 0.583" -- "dark hue 0.583" - "hue 0.583" [color = ".583 1 1"] - "hue 0.625" [color = ".625 1 1"] - "hue 0.666" -- darkblue - "hue 0.666" [color = ".666 1 1"] - "hue 0.708" [color = ".708 1 1"] - "hue 0.750" -- "dark hue 0.750" - "hue 0.750" [color = ".750 1 1"] - "hue 0.791" [color = ".791 1 1"] - "hue 0.833" -- darkmagenta - "hue 0.833" [color = ".833 1 1"] - "hue 0.875" [color = ".875 1 1"] - "hue 0.916" -- "dark hue 0.916" - "hue 0.916" [color = ".916 1 1"] - "hue 0.958" [color = ".958 1 1"] - edge [len = 1] - "hue 0.000" -- "hue 0.041" -- "hue 0.083" -- "hue 0.125" -- "hue 0.166" -- "hue 0.208" - "hue 0.208" -- "hue 0.250" -- "hue 0.291" -- "hue 0.333" -- "hue 0.375" -- "hue 0.416" - "hue 0.416" -- "hue 0.458" -- "hue 0.500" --"hue 0.541" -- "hue 0.583" -- "hue 0.625" - "hue 0.625" -- "hue 0.666" -- "hue 0.708" -- "hue 0.750" -- "hue 0.791" -- "hue 0.833" - "hue 0.833" -- "hue 0.875" -- "hue 0.916" -- "hue 0.958" -- "hue 0.000" - } - subgraph Main_colors { - node [width = 2 fontsize = 20] - red [fillcolor = red fontcolor = white] - orangered [fillcolor = orangered] - orange [fillcolor = orange] - gold [fillcolor = gold] - yellow [fillcolor = yellow] - yellowgreen [fillcolor = yellowgreen] - deeppink [fillcolor = deeppink fontcolor = white] - fuchsia [label = "fuchsia\nmagenta" fillcolor = fuchsia fontcolor = white] - purple [fillcolor = purple fontcolor = white] - blue [fillcolor = blue fontcolor = white] - cornflowerblue [fillcolor = 
cornflowerblue] - deepskyblue [fillcolor = deepskyblue] - aqua [fillcolor = aqua label = "aqua\ncyan"] - springgreen [fillcolor = springgreen] - green [fillcolor = green] - purple -- fuchsia -- deeppink -- red - cornflowerblue -- blue -- purple - cornflowerblue -- deepskyblue -- aqua [len = 1.7] - aqua -- springgreen -- green -- yellowgreen -- yellow - yellow -- gold -- orange -- orangered -- red [len = 1.6] - orange -- "hue 0.083" - deeppink -- "hue 0.916" - deeppink -- "hue 0.875" - red -- "hue 0.000" - yellowgreen -- "hue 0.250" - blue -- "hue 0.666" - yellow -- "hue 0.166" - gold -- "hue 0.125" - green -- "hue 0.333" - springgreen -- "hue 0.416" - aqua -- "hue 0.500" - cornflowerblue -- "hue 0.583" - deepskyblue -- "hue 0.541" - purple -- "hue 0.791" - purple -- "hue 0.750" - fuchsia -- "hue 0.833" - } - subgraph Light_colors { - node [width = 2 fontsize = 20] - node [shape = circle width = 1.8] - edge [len = 2.1] - pink [fillcolor = pink] - pink -- red - lightyellow [fillcolor = lightyellow] - lightyellow -- yellow - mediumpurple [fillcolor = mediumpurple] - mediumpurple -- purple - violet [fillcolor = violet] - violet -- fuchsia - hotpink [fillcolor = hotpink] - hotpink -- deeppink - "light hue 0.250" [color = ".250 .2 1"] - "light hue 0.250" -- yellowgreen - lightcyan [fillcolor = lightcyan] - lightcyan -- aqua - lightslateblue [fillcolor = lightslateblue] - lightslateblue -- blue - lightgreen [fillcolor = lightgreen] - lightgreen -- green - lightskyblue [fillcolor = lightskyblue] - lightskyblue -- deepskyblue - peachpuff [fillcolor = peachpuff] - peachpuff -- orange - "light hue 0.416" [color = ".416 .2 1"] - "light hue 0.416" -- springgreen - } - subgraph Tints { - node [width = 1] - edge [len = 2.4] - "hue 0 tint" -- pink - "hue 0 tint" [color = "0 .1 1"] - "hue 0.041 tint" [color = ".041 .1 1"] - "hue 0.083 tint" -- peachpuff - "hue 0.083 tint" [color = ".083 .1 1"] - "hue 0.125 tint" [color = ".125 .1 1"] - "hue 0.166 tint" -- lightyellow - "hue 0.166 tint" [color = ".166 .1 1"] - "hue 0.208 tint" [color = ".208 .1 1"] - "hue 0.250 tint" -- "light hue 0.250" - "hue 0.250 tint" [color = ".250 .1 1"] - "hue 0.291 tint" [color = ".291 .1 1"] - "hue 0.333 tint" -- lightgreen - "hue 0.333 tint" [color = ".333 .1 1"] - "hue 0.375 tint" [color = ".375 .1 1"] - "hue 0.416 tint" -- "light hue 0.416" - "hue 0.416 tint" [color = ".416 .1 1"] - "hue 0.458 tint" [color = ".458 .1 1"] - "hue 0.5 tint" -- lightcyan - "hue 0.5 tint" [color = ".5 .1 1"] - "hue 0.541 tint" -- lightskyblue - "hue 0.541 tint" [color = ".541 .1 1"] - "hue 0.583 tint" [color = ".583 .1 1"] - "hue 0.625 tint" [color = ".625 .1 1"] - "hue 0.666 tint" -- lightslateblue - "hue 0.666 tint" [color = ".666 .1 1"] - "hue 0.708 tint" [color = ".708 .1 1"] - "hue 0.750 tint" -- mediumpurple - "hue 0.750 tint" [color = ".750 .1 1"] - "hue 0.791 tint" [color = ".791 .1 1"] - "hue 0.833 tint" -- violet - "hue 0.833 tint" [color = ".833 .1 1"] - "hue 0.875 tint" [color = ".875 .1 1"] - "hue 0.916 tint" -- hotpink - "hue 0.916 tint" [color = ".916 .1 1"] - "hue 0.958 tint" [color = ".958 .1 1"] - edge [len = 2] - "hue 0 tint" -- "hue 0.041 tint" -- "hue 0.083 tint" -- "hue 0.125 tint" -- "hue 0.166 tint" -- "hue 0.208 tint" - "hue 0.208 tint" -- "hue 0.250 tint" -- "hue 0.291 tint" -- "hue 0.333 tint" -- "hue 0.375 tint" -- "hue 0.416 tint" - "hue 0.416 tint" -- "hue 0.458 tint" -- "hue 0.5 tint" --"hue 0.541 tint" -- "hue 0.583 tint" -- "hue 0.625 tint" - "hue 0.625 tint" -- "hue 0.666 tint" -- "hue 0.708 tint" -- "hue 0.750 
tint" -- "hue 0.791 tint" -- "hue 0.833 tint" - "hue 0.833 tint" -- "hue 0.875 tint" -- "hue 0.916 tint" -- "hue 0.958 tint" -- "hue 0 tint" - } - } -''') \ No newline at end of file diff --git a/spaces/derinsu/Background_Generator/U-2-Net/README.md b/spaces/derinsu/Background_Generator/U-2-Net/README.md deleted file mode 100644 index de903bc47cb2be133f0a87f14c7d655ae7f7e216..0000000000000000000000000000000000000000 --- a/spaces/derinsu/Background_Generator/U-2-Net/README.md +++ /dev/null @@ -1,35 +0,0 @@ -# U^2-Net (U square net) - -Modified version of U2Net used for [demonstation](https://github.com/shreyas-bk/U-2-Net-Demo) purposes. - -## Paper: [U^2-Net: Going Deeper with Nested U-Structure for Salient Object Detection](https://arxiv.org/pdf/2005.09007.pdf) - -[Xuebin Qin](https://webdocs.cs.ualberta.ca/~xuebin/),
    -[Zichen Zhang](https://webdocs.cs.ualberta.ca/~zichen2/),
    -[Chenyang Huang](https://chenyangh.com/),
    -[Masood Dehghan](https://sites.google.com/view/masooddehghan),
    -[Osmar R. Zaiane](http://webdocs.cs.ualberta.ca/~zaiane/) and
    -[Martin Jagersand](https://webdocs.cs.ualberta.ca/~jag/). - -__Contact__: xuebin[at]ualberta[dot]ca - -## Required libraries - -Python 3.6 -numpy 1.15.2 -scikit-image 0.14.0 -PIL 5.2.0 -PyTorch 0.4.0 -torchvision 0.2.1 -glob - -## Usage - -Check [Demonstration](https://github.com/shreyas-bk/U-2-Net-Demo) - -**U-2-NET Paper:** [U2-Net: Going Deeper with Nested U-Structure for Salient Object Detection](https://arxiv.org/abs/2005.09007) - -**Original Repo:** [U-2-Net Github repo](https://github.com/NathanUA/U-2-Net) - -**References:** X. Qin, Z. Zhang, C. Huang, M. Dehghan, O. R. Zaiane, and M. Jagersand, “U2-net: Going deeper with nested u-structure for salient object -detection,” Pattern Recognition, vol. 106, p. 107404, 2020 diff --git a/spaces/diacanFperku/AutoGPT/Batman Arkham Origins Pc Crack __HOT__ Only Download.md b/spaces/diacanFperku/AutoGPT/Batman Arkham Origins Pc Crack __HOT__ Only Download.md deleted file mode 100644 index ca85bb2151169787de95095b46bb09e4c510efb2..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Batman Arkham Origins Pc Crack __HOT__ Only Download.md +++ /dev/null @@ -1,10 +0,0 @@ -

    Batman arkham origins pc crack only download


    DOWNLOAD ✶✶✶ https://gohhs.com/2uFTQ6



    -
    -batman arkham origins only ugly chick -Batman: Arkham Origins - Gold Edition (2014) PC | Repack by SpaceX -Category: PC Games, New Games Description: Batman: Arkham Origins PC | Repack by SpaceX | Repack by Fenixx | RePack by R.G. Mechanics | RePack by R.G. Mechanics | RePack by R.G. Mechanics. -Batman: Arkham Origins / Batman: Arkham Origins (2014) PC | RePack by R.G. Mechanics download torrent. -Download Torrent Batman: Arkham Origins - Gold Edition (2014) PC | RePack by SpaceX in Torrent Games category. 8a78ff9644
    -
    -
    -

    diff --git a/spaces/diacanFperku/AutoGPT/Ciel Gestion Commerciale 2010.rar NEW.md b/spaces/diacanFperku/AutoGPT/Ciel Gestion Commerciale 2010.rar NEW.md deleted file mode 100644 index 4dbac3a8be35adf15b8208687b903fff14e36f70..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Ciel Gestion Commerciale 2010.rar NEW.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Ciel Gestion Commerciale 2010.rar


    Download Ziphttps://gohhs.com/2uFTD3



    -
    -Order by rating date size peers ciel gestion commerciale 2010.rar (59MB ) ... (348MB ) 59542121 Ebp gestion Commercial + keygen (36.92MB ) ... 1fdad05405
    -
    -
    -

    diff --git a/spaces/diacanFperku/AutoGPT/Ghost Explorer 11.5.rar.md b/spaces/diacanFperku/AutoGPT/Ghost Explorer 11.5.rar.md deleted file mode 100644 index ae3e5a99bc58c1d0966de1165fa859f20c15f0ac..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Ghost Explorer 11.5.rar.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Ghost Explorer 11.5.rar


    DOWNLOAD ->->->-> https://gohhs.com/2uFU5x



    -
    -... songs free download Yeh Hai Lollipop 3 full movie download 720p movie Love, Wrinkle-free movie mp4 download Ghost Explorer 11.5.rar. 1fdad05405
    -
    -
    -

    diff --git a/spaces/diacanFperku/AutoGPT/IDM 609 Build 2 Final Full Preactivated Version JaY SDMR ((FULL)).md b/spaces/diacanFperku/AutoGPT/IDM 609 Build 2 Final Full Preactivated Version JaY SDMR ((FULL)).md deleted file mode 100644 index 9693bfe9ecf4b6779608c2018d0edcbda0bcbe62..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/IDM 609 Build 2 Final Full Preactivated Version JaY SDMR ((FULL)).md +++ /dev/null @@ -1,7 +0,0 @@ -
    -

    8.4 Kb the following is list of feature in bookmaker app sports betting. If you are not satisfied with the amount you receive, call the customer service department Monday through Friday from 9 to 6 on a toll-free phone number only available to U.S. residents or during use of the app and ask for a full refund. A pair of kiwi fruit with any part of your refund from bookmaker app can be used to make this offer. Once you have made the full refund request at a bookmaker you have time to decide if you are satisfied with the result that was given to you. Because you can make requests more than one time, your returns will be broken into several time

    -

    You have no right to delete or return any non-downloadable items from your order. See: SSL (Secure Sockets Layer) and the PGP (Pretty Good Privacy) standards. This includes your hosting account, installation files and any copyrighted materials you might have purchased. Download Crack Full Games GamekeeperCrack Download Revo Cleaner Pro 7.3.0 Cracked with License Number 2 keygen

    -

    IDM 609 Build 2 Final Full Preactivated Version JaY SDMR


    Download Filehttps://gohhs.com/2uFUoU



    -

    Download Crack Full iDevice iXDiX Client 2.2.15.10 Full Version 69 Free FinalUnlock.rarDownload Old F1 F1 2017 Full Version.dat.dpkgfree.tools.zdooz-FLI-IP 3.8.7.2 Crack Serial NumberSerandpishOS.Serial.Online.ROM.Datsv.Org-Cbob4-OS.Mac.Xe.Land.Microsoft.2015.FR.Zpx.Mac OS X 10,15.FR.CARIBBEAN.WP-6.0.6.6 Crack.rarDownload FLI COOLplus + Free Registration.com Full.rarDownload Crack 2014 FLI DNS Pro 4.2.0 Full.rarDownload Stadtpark.Star-Team.2.0.1 Full.rarDownload Manually.and.Gratis.v1.9.1 Full.rarDownload Cracked 2016 Timewarp S.1.22.1 Full.rarDownload Manually.And.Gratis.v.2.0.1 Full.rarDownload FLI-iPhone.Serial.Online.ROM.Datsv.Org.iOS.9.7.4.3.V2.2.full.XTheOldMan-M4V (Unknown File Type).mp4

    899543212b
    -
    -
    \ No newline at end of file diff --git a/spaces/diacanFperku/AutoGPT/Komenskio Logo Download UPD.md b/spaces/diacanFperku/AutoGPT/Komenskio Logo Download UPD.md deleted file mode 100644 index 529a6e38aa727388c127468a7ef7f6d8190d84b3..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Komenskio Logo Download UPD.md +++ /dev/null @@ -1,16 +0,0 @@ -

    komenskio logo download


    Download Zip ✺✺✺ https://gohhs.com/2uFV7d



    - -Komenskio Logo !!INSTALL!! Download. download komenskio logo. In this Adobe Illustrator tutorial, we will create a logo for a cafe. -This lesson. -Download the book for free: download (fb2, pdf, txt). -Download logo for cafe. -Download logo. -Cafe logos. -How to create a logo for a cafe. -You can choose your restaurant logo. -You can download this presentation here: Download a free presentation on Cafe Logo. -How to create a logo for a Cafe First we need to go into Photoshop and create a site layout. -In this Adobe Illustrator tutorial, we'll create a coffee shop logo. 8a78ff9644
    -
    -
    -

    diff --git a/spaces/diacanFperku/AutoGPT/New Headway Preintermediate 4th Edition Itutor Cdrom.md b/spaces/diacanFperku/AutoGPT/New Headway Preintermediate 4th Edition Itutor Cdrom.md deleted file mode 100644 index 1c34b27dc82fb6ac2f6b307ecd7de76d6891f824..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/New Headway Preintermediate 4th Edition Itutor Cdrom.md +++ /dev/null @@ -1,6 +0,0 @@ -

    New Headway Preintermediate 4th Edition Itutor Cdrom


    DOWNLOAD === https://gohhs.com/2uFUPT



    - -New headway: advanced c1: student's book and itutor pack: the world's ... What's different about the third edition of new headway pre-inter mediate 90% new texts ... New headway english course / intermediate (fourth edition) - german ... elementary student's book — cd-rom new headway plus elementary ... 1fdad05405
    -
    -
    -

    diff --git a/spaces/diacanFperku/AutoGPT/Program De Spart Parole Facebook Download.md b/spaces/diacanFperku/AutoGPT/Program De Spart Parole Facebook Download.md deleted file mode 100644 index 7cf5c3485d12219b2ac9178f2c02a62de3868127..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Program De Spart Parole Facebook Download.md +++ /dev/null @@ -1,69 +0,0 @@ -
    -

    Program de spart parole Facebook download: cum să alegi și să folosești cele mai bune programe

    -

    Dacă vrei să afli parola de Facebook a unei persoane, fie că este vorba de un prieten, o iubită, un soț, o soție, o rudă sau un copil, ai nevoie de un program de spart parole Facebook. Aceste programe îți permit să recuperezi parola memorizată în browser sau să o descoperi prin metode de forță brută sau dicționar. În acest articol îți vom prezenta cele mai bune programe gratuite pentru spargerea parolelor de Facebook și cum să le folosești în siguranță și eficient.

    -

    program de spart parole facebook download


    Download Zip » https://gohhs.com/2uFTx7



    -

    Ce este un program de spart parole Facebook?

    -

    Un program de spart parole Facebook este un software care încearcă să ghicească sau să afle parola de acces la contul de Facebook al unei persoane. Există două tipuri principale de programe de spart parole Facebook:

    -
      -
    • Programe care recuperează parola memorizată în browser. Aceste programe scanează datele stocate în browserul web (cum ar fi Internet Explorer, Chrome sau Firefox) și extrag parola ascunsă în spatele asteriscurilor. Aceste programe funcționează doar dacă persoana vizată a salvat parola în browser și nu a activat opțiunea de ștergere a datelor la ieșire.
    • -
    • Programe care descoperă parola prin metode de forță brută sau dicționar. Aceste programe încearcă să genereze și să testeze diferite combinații de caractere până când găsesc cea corectă. Aceste programe pot folosi dicționare de cuvinte comune sau personalizate, sau pot crea combinații aleatorii. Aceste programe pot dura mult timp și pot necesita multe resurse de procesare, mai ales dacă parola este lungă și complexă.
    • -
    -

    Cum să alegi cel mai bun program de spart parole Facebook?

    -

    Pentru a alege cel mai bun program de spart parole Facebook, trebuie să ții cont de următorii factori:

    -
      -
    • Compatibilitatea cu sistemul de operare și browserul web. Trebuie să te asiguri că programul pe care îl alegi funcționează pe sistemul tău de operare (Windows, Linux, Mac etc.) și pe browserul web pe care îl folosești (Internet Explorer, Chrome, Firefox etc.). De asemenea, trebuie să verifici dacă programul necesită instalare sau poate fi rulat direct dintr-un fișier executabil.
    • -
    • Ușurința de utilizare și suportul tehnic. Trebuie să te asiguri că programul pe care îl alegi este ușor de utilizat și are o interfață intuitivă. De asemenea, trebuie să verifici dacă programul are un manual de instrucțiuni clar și accesibil, sau dacă oferă suport tehnic în caz de probleme.
    • -
    • Eficacitatea și viteza. Trebuie să te asiguri că programul pe care îl alegi este eficace și rapid în spargerea parolelor de Facebook. Pentru aceasta, trebuie să verifici dacă programul are opțiuni avansate de configurare, cum ar fi selectarea tipului de atac (forță brută, dicționar etc.), specificarea lungimii și complexității parolei, utilizarea accelerării hardware (GPU) etc.
    • -
    • Siguranța și legalitatea. Trebuie să te asiguri că programul pe care îl alegi este sigur și legal. Pentru aceasta, trebuie să verifici dacă programul are o reputație bună și nu conține viruși sau malware. De asemenea, trebuie să verifici dacă programul respectă legislația în vigoare și nu încalcă drepturile de autor sau confidențialitatea persoanei vizate.
    • -
    -

    Cum să folosești un program de spart parole Facebook?

    -

    Pentru a folosi un program de spart parole Facebook, trebuie să urmezi următorii pași:

    -
      -
    1. Descarcă programul pe care l-ai ales dintr-o sursă sigură și instalează-l sau rulează-l pe computerul tău.
    2. -
    3. Introdu adresa de email sau numele de utilizator al persoanei vizate în câmpul corespunzător al programului.
    4. -
    5. Selectează tipul de atac pe care vrei să îl folosești (forță brută, dicționar etc.) și configurează opțiunile avansate după preferințe.
    6. -
    7. Apasă butonul de pornire al programului și așteaptă până când acesta găsește parola corectă.
    8. -
    9. Copiază parola găsită și folosește-o pentru a accesa contul de Facebook al persoanei vizate.
    10. -
    -

    Concluzie

    -

    Un program de spart parole Facebook este un software care îți permite să afli parola de acces la contul de Facebook al unei persoane. Există diferite tipuri de programe pentru acest scop, fiecare cu avantaje și dezavantaje. Pentru a alege cel mai bun program pentru nevoile tale, trebuie să ții cont de compatibilitatea cu sistemul tău de operare și browserul web, ușurința de utilizare și suportul tehnic, eficacitatea și viteza, siguranța și legalitatea. Pentru a folosi un program de spart parole Facebook, trebuie să introduci adresa de email sau numele de utilizator al persoanei vizate, să selectezi tipul de atac pe care vrei să îl folosești și să apeși butonul de pornire al programului. După ce programul găsește parola corectă, o poți copia și folosi pentru a accesa contul respectiv.

    -

    Care sunt riscurile și beneficiile folosirii unui program de spart parole Facebook?

    -

    Folosirea unui program de spart parole Facebook poate avea atât riscuri, cât și beneficii, în funcție de scopul și modul în care îl folosești. Iată câteva dintre ele:

    -
      -
    • Riscuri: -
        -
      • Poți încălca legea și drepturile persoanei vizate. Spargerea parolei de Facebook a unei persoane fără consimțământul acesteia este o infracțiune penală și te poți confrunta cu amenzi sau chiar închisoare. De asemenea, poți încălca dreptul la viață privată și confidențialitate al persoanei vizate și poți fi dat în judecată pentru daune morale sau materiale.
      • -
      • Poți fi infectat cu viruși sau malware. Unele programe de spart parole Facebook pot conține viruși sau malware care îți pot afecta computerul sau datele personale. De aceea, este important să descarci programe doar din surse sigure și să folosești un antivirus actualizat.
      • -
      • Poți fi detectat sau blocat. Unele programe de spart parole Facebook pot fi detectate de sistemele de securitate ale Facebook și poți fi blocat sau suspendat de pe rețeaua socială. De asemenea, persoana vizată poate primi o notificare de la Facebook că cineva a încercat să îi acceseze contul și poate schimba parola sau activa alte măsuri de protecție.
      • -
      -
    • -
    • Beneficii: -
        -
      • Poți afla informații utile sau importante. Folosind un program de spart parole Facebook, poți afla informații utile sau importante despre persoana vizată, cum ar fi mesaje, fotografii, videoclipuri, locații, interese, prieteni etc. Aceste informații te pot ajuta să îți protejezi relația, familia, afacerea sau siguranța personală.
      • -
      • Poți recupera parola uitată sau pierdută. Folosind un program de spart parole Facebook, poți recupera parola ta de Facebook dacă ai uitat-o sau ai pierdut-o. Acest lucru te poate ajuta să îți recuperezi accesul la contul tău și la datele tale personale.
      • -
      • Poți testa securitatea parolei tale. Folosind un program de spart parole Facebook, poți testa securitatea parolei tale de Facebook și să vezi cât de ușor ar putea fi spartă de alții. Acest lucru te poate ajuta să îți alegi o parolă mai puternică și mai greu de ghicit.
      • -
      -
    • -
    -

    Unde poți descărca un program de spart parole Facebook?

    -

    Dacă vrei să descarci un program de spart parole Facebook, trebuie să fii foarte atent la sursa de unde îl obții. Există multe site-uri care oferă astfel de programe, dar nu toate sunt sigure și de încredere. Unele pot conține viruși sau malware care îți pot afecta computerul sau datele personale, sau pot fi doar niște înșelătorii care îți cer bani sau date personale pentru a-ți oferi programul.

    -

    -

    Pentru a evita aceste riscuri, trebuie să urmezi câteva sfaturi simple:

    -
      -
    • Verifică reputația site-ului de unde vrei să descarci programul. Citește recenziile și comentariile altor utilizatori și vezi dacă au avut probleme cu programul sau cu site-ul. De asemenea, verifică dacă site-ul are un certificat SSL valid și o adresă HTTPS.
    • -
    • Verifică dimensiunea și numele fișierului pe care vrei să îl descarci. Un program de spart parole Facebook nu ar trebui să fie foarte mare (de obicei sub 10 MB) și ar trebui să aibă un nume sugestiv (de exemplu, FacebookPasswordDecryptor.exe). Dacă fișierul are o dimensiune neobișnuit de mare sau un nume ciudat (de exemplu, 123456789.exe), este posibil să fie un virus sau un malware.
    • -
    • Verifică programul cu un antivirus actualizat înainte de a-l rula pe computerul tău. Nu te baza doar pe antivirusul integrat în sistemul tău de operare, ci folosește și un antivirus extern, de preferință unul specializat în detectarea programelor de spart parole. Dacă antivirusul îți semnalează vreo amenințare, șterge imediat programul și nu îl rulezi.
    • -
    • Nu plăti niciodată pentru a descărca sau folosi un program de spart parole Facebook. Aceste programe ar trebui să fie gratuite și disponibile pentru oricine. Dacă un site îți cere bani sau date personale pentru a-ți oferi programul, este foarte probabil să fie o înșelătorie și să nu primești nimic în schimb.
    • -
    -

    Cum să te protejezi împotriva programelor de spart parole Facebook?

    -

    Dacă nu vrei ca parola ta de Facebook să fie spartă de alte persoane care folosesc programe de spart parole Facebook, trebuie să iei câteva măsuri de precauție:

    -
      -
    • Alege o parolă puternică și unică pentru contul tău de Facebook. O parolă puternică este una care are cel puțin 8 caractere și conține litere mari și mici, cifre și simboluri. O parolă unică este una pe care nu o folosești pentru alte conturi sau servicii online. Astfel, vei reduce șansele ca parola ta să fie ghicită sau recuperată prin metode de forță brută sau dicționar.
    • -
    • Nu salva parola ta în browser sau în alte aplicații. Dacă salvezi parola ta în browser sau în alte aplicații, aceasta poate fi recuperată ușor de programele care scanează datele stocate în computerul tău. De aceea, este mai bine să introduci parola manual de fiecare dată când vrei să accesezi contul tău de Facebook.
    • -
    • Activează autentificarea cu doi factori pentru contul tău de Facebook. Autentificarea cu doi factori este o metodă suplimentară de securitate care îți cere să introduci un cod primit pe telefon sau pe email după ce introduci parola ta. Astfel, chiar dacă cineva reușește să afle parola ta, nu va putea accesa contul tău fără codul respectiv.
    • -
    • Verifică periodic activitatea contului tău de Facebook. Facebook îți permite să vezi din ce dispozitive și locații s-a conectat cineva la contul tău și când a făcut-o. Dacă observi vreo activitate suspectă sau neobișnuită, schimbă imediat parola ta și raportează problema la Facebook.
    • -
    -

    Concluzie

    -

    Un program de spart parole Facebook este un software care îți permite să afli parola de acces la contul de Facebook al unei persoane. Există diferite tipuri de programe pentru acest scop, fiecare cu avantaje și dezavantaje. Pentru a alege cel mai bun program pentru nevoile tale, trebuie să ții cont de compatibilitatea cu sistemul tău de operare și browserul web, ușurința de utilizare și suportul tehnic, eficacitatea și viteza, siguranța și legalitatea. Pentru a folosi un program de spart parole Facebook, trebuie să introduci adresa de email sau numele de utilizator al persoanei vizate, să selectezi tipul de atac pe care vrei să îl folosești și să apeși butonul de pornire al programului. După ce programul găsește parola corectă, o poți copia și folosi pentru a accesa contul respectiv. Totuși, trebuie să fii conștient de riscurile și beneficiile folosirii unui program de spart parole Facebook și să îl folosești doar în scopuri legale și etice. De asemenea, trebuie să îți protejezi parola ta de Facebook prin măsuri simple, cum ar fi alegerea unei parole puternice și unice, nesalvarea parolei în browser sau în alte aplicații, activarea autentificării cu doi factori și verificarea periodică a activității contului tău.

    3cee63e6c2
    -
    -
    \ No newline at end of file diff --git a/spaces/diagaiwei/ir_chinese_medqa/colbert/modeling/reranker/tokenizer.py b/spaces/diagaiwei/ir_chinese_medqa/colbert/modeling/reranker/tokenizer.py deleted file mode 100644 index b8adf0807d073413d87001d11be30e91fc52f646..0000000000000000000000000000000000000000 --- a/spaces/diagaiwei/ir_chinese_medqa/colbert/modeling/reranker/tokenizer.py +++ /dev/null @@ -1,15 +0,0 @@ -from transformers import AutoTokenizer - -class RerankerTokenizer(): - def __init__(self, total_maxlen, base): - self.total_maxlen = total_maxlen - self.tok = AutoTokenizer.from_pretrained(base) - - def tensorize(self, questions, passages): - assert type(questions) in [list, tuple], type(questions) - assert type(passages) in [list, tuple], type(passages) - - encoding = self.tok(questions, passages, padding='longest', truncation='longest_first', - return_tensors='pt', max_length=self.total_maxlen, add_special_tokens=True) - - return encoding diff --git a/spaces/digitalxingtong/Nanami-Bert-VITS2/commons.py b/spaces/digitalxingtong/Nanami-Bert-VITS2/commons.py deleted file mode 100644 index 9ad0444b61cbadaa388619986c2889c707d873ce..0000000000000000000000000000000000000000 --- a/spaces/digitalxingtong/Nanami-Bert-VITS2/commons.py +++ /dev/null @@ -1,161 +0,0 @@ -import math -import numpy as np -import torch -from torch import nn -from torch.nn import functional as F - - -def init_weights(m, mean=0.0, std=0.01): - classname = m.__class__.__name__ - if classname.find("Conv") != -1: - m.weight.data.normal_(mean, std) - - -def get_padding(kernel_size, dilation=1): - return int((kernel_size*dilation - dilation)/2) - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def intersperse(lst, item): - result = [item] * (len(lst) * 2 + 1) - result[1::2] = lst - return result - - -def kl_divergence(m_p, logs_p, m_q, logs_q): - """KL(P||Q)""" - kl = (logs_q - logs_p) - 0.5 - kl += 0.5 * (torch.exp(2. * logs_p) + ((m_p - m_q)**2)) * torch.exp(-2. 
* logs_q) - return kl - - -def rand_gumbel(shape): - """Sample from the Gumbel distribution, protect from overflows.""" - uniform_samples = torch.rand(shape) * 0.99998 + 0.00001 - return -torch.log(-torch.log(uniform_samples)) - - -def rand_gumbel_like(x): - g = rand_gumbel(x.size()).to(dtype=x.dtype, device=x.device) - return g - - -def slice_segments(x, ids_str, segment_size=4): - ret = torch.zeros_like(x[:, :, :segment_size]) - for i in range(x.size(0)): - idx_str = ids_str[i] - idx_end = idx_str + segment_size - ret[i] = x[i, :, idx_str:idx_end] - return ret - - -def rand_slice_segments(x, x_lengths=None, segment_size=4): - b, d, t = x.size() - if x_lengths is None: - x_lengths = t - ids_str_max = x_lengths - segment_size + 1 - ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long) - ret = slice_segments(x, ids_str, segment_size) - return ret, ids_str - - -def get_timing_signal_1d( - length, channels, min_timescale=1.0, max_timescale=1.0e4): - position = torch.arange(length, dtype=torch.float) - num_timescales = channels // 2 - log_timescale_increment = ( - math.log(float(max_timescale) / float(min_timescale)) / - (num_timescales - 1)) - inv_timescales = min_timescale * torch.exp( - torch.arange(num_timescales, dtype=torch.float) * -log_timescale_increment) - scaled_time = position.unsqueeze(0) * inv_timescales.unsqueeze(1) - signal = torch.cat([torch.sin(scaled_time), torch.cos(scaled_time)], 0) - signal = F.pad(signal, [0, 0, 0, channels % 2]) - signal = signal.view(1, channels, length) - return signal - - -def add_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return x + signal.to(dtype=x.dtype, device=x.device) - - -def cat_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4, axis=1): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return torch.cat([x, signal.to(dtype=x.dtype, device=x.device)], axis) - - -def subsequent_mask(length): - mask = torch.tril(torch.ones(length, length)).unsqueeze(0).unsqueeze(0) - return mask - - -@torch.jit.script -def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels): - n_channels_int = n_channels[0] - in_act = input_a + input_b - t_act = torch.tanh(in_act[:, :n_channels_int, :]) - s_act = torch.sigmoid(in_act[:, n_channels_int:, :]) - acts = t_act * s_act - return acts - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def shift_1d(x): - x = F.pad(x, convert_pad_shape([[0, 0], [0, 0], [1, 0]]))[:, :, :-1] - return x - - -def sequence_mask(length, max_length=None): - if max_length is None: - max_length = length.max() - x = torch.arange(max_length, dtype=length.dtype, device=length.device) - return x.unsqueeze(0) < length.unsqueeze(1) - - -def generate_path(duration, mask): - """ - duration: [b, 1, t_x] - mask: [b, 1, t_y, t_x] - """ - device = duration.device - - b, _, t_y, t_x = mask.shape - cum_duration = torch.cumsum(duration, -1) - - cum_duration_flat = cum_duration.view(b * t_x) - path = sequence_mask(cum_duration_flat, t_y).to(mask.dtype) - path = path.view(b, t_x, t_y) - path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1] - path = path.unsqueeze(1).transpose(2,3) * mask - return path - - -def clip_grad_value_(parameters, clip_value, norm_type=2): - if isinstance(parameters, torch.Tensor): 
- parameters = [parameters] - parameters = list(filter(lambda p: p.grad is not None, parameters)) - norm_type = float(norm_type) - if clip_value is not None: - clip_value = float(clip_value) - - total_norm = 0 - for p in parameters: - param_norm = p.grad.data.norm(norm_type) - total_norm += param_norm.item() ** norm_type - if clip_value is not None: - p.grad.data.clamp_(min=-clip_value, max=clip_value) - total_norm = total_norm ** (1. / norm_type) - return total_norm diff --git a/spaces/doluvor/faster-whisper-webui/tests/vad_test.py b/spaces/doluvor/faster-whisper-webui/tests/vad_test.py deleted file mode 100644 index b465d8a380f9316a6830d9aac320c85f22aba0a0..0000000000000000000000000000000000000000 --- a/spaces/doluvor/faster-whisper-webui/tests/vad_test.py +++ /dev/null @@ -1,66 +0,0 @@ -import pprint -import unittest -import numpy as np -import sys - -sys.path.append('../whisper-webui') - -from src.vad import AbstractTranscription, TranscriptionConfig, VadSileroTranscription - -class TestVad(unittest.TestCase): - def __init__(self, *args, **kwargs): - super(TestVad, self).__init__(*args, **kwargs) - self.transcribe_calls = [] - - def test_transcript(self): - mock = MockVadTranscription() - - self.transcribe_calls.clear() - result = mock.transcribe("mock", lambda segment : self.transcribe_segments(segment)) - - self.assertListEqual(self.transcribe_calls, [ - [30, 30], - [100, 100] - ]) - - self.assertListEqual(result['segments'], - [{'end': 50.0, 'start': 40.0, 'text': 'Hello world '}, - {'end': 120.0, 'start': 110.0, 'text': 'Hello world '}] - ) - - def transcribe_segments(self, segment): - self.transcribe_calls.append(segment.tolist()) - - # Dummy text - return { - 'text': "Hello world ", - 'segments': [ - { - "start": 10.0, - "end": 20.0, - "text": "Hello world " - } - ], - 'language': "" - } - -class MockVadTranscription(AbstractTranscription): - def __init__(self): - super().__init__() - - def get_audio_segment(self, str, start_time: str = None, duration: str = None): - start_time_seconds = float(start_time.removesuffix("s")) - duration_seconds = float(duration.removesuffix("s")) - - # For mocking, this just returns a simple numppy array - return np.array([start_time_seconds, duration_seconds], dtype=np.float64) - - def get_transcribe_timestamps(self, audio: str, config: TranscriptionConfig, start_time: float, duration: float): - result = [] - - result.append( { 'start': 30, 'end': 60 } ) - result.append( { 'start': 100, 'end': 200 } ) - return result - -if __name__ == '__main__': - unittest.main() \ No newline at end of file diff --git a/spaces/duycse1603/math2tex/HybridViT/module/component/feature_extractor/helpers.py b/spaces/duycse1603/math2tex/HybridViT/module/component/feature_extractor/helpers.py deleted file mode 100644 index f23da247054997e2d37899ce4133346b68d53933..0000000000000000000000000000000000000000 --- a/spaces/duycse1603/math2tex/HybridViT/module/component/feature_extractor/helpers.py +++ /dev/null @@ -1,76 +0,0 @@ -import math -from typing import List, Tuple - -import torch.nn.functional as F - -# Calculate symmetric padding for a convolution -def get_padding(kernel_size: int, stride: int = 1, dilation: int = 1, **_) -> int: - padding = ((stride - 1) + dilation * (kernel_size - 1)) // 2 - return padding - - -# Calculate asymmetric TensorFlow-like 'SAME' padding for a convolution -def get_same_padding(x: int, k: int, s: int, d: int): - return max((math.ceil(x / s) - 1) * s + (k - 1) * d + 1 - x, 0) - - -# Can SAME padding for given args be done statically? 
-def is_static_pad(kernel_size: int, stride: int = 1, dilation: int = 1, **_): - return stride == 1 and (dilation * (kernel_size - 1)) % 2 == 0 - - -# Dynamically pad input x with 'SAME' padding for conv with specified args -def pad_same(x, k: List[int], s: List[int], d: List[int] = (1, 1), value: float = 0): - ih, iw = x.size()[-2:] - pad_h, pad_w = get_same_padding(ih, k[0], s[0], d[0]), get_same_padding(iw, k[1], s[1], d[1]) - if pad_h > 0 or pad_w > 0: - x = F.pad(x, [pad_w // 2, pad_w - pad_w // 2, pad_h // 2, pad_h - pad_h // 2], value=value) - return x - - -def get_padding_value(padding, kernel_size, **kwargs) -> Tuple[Tuple, bool]: - dynamic = False - if isinstance(padding, str): - # for any string padding, the padding will be calculated for you, one of three ways - padding = padding.lower() - if padding == 'same': - # TF compatible 'SAME' padding, has a performance and GPU memory allocation impact - if is_static_pad(kernel_size, **kwargs): - # static case, no extra overhead - padding = get_padding(kernel_size, **kwargs) - else: - # dynamic 'SAME' padding, has runtime/GPU memory overhead - padding = 0 - dynamic = True - elif padding == 'valid': - # 'VALID' padding, same as padding=0 - padding = 0 - else: - # Default to PyTorch style 'same'-ish symmetric padding - padding = get_padding(kernel_size, **kwargs) - return padding, dynamic - - -def adapt_input_conv(in_chans, conv_weight): - conv_type = conv_weight.dtype - conv_weight = conv_weight.float() # Some weights are in torch.half, ensure it's float for sum on CPU - O, I, J, K = conv_weight.shape - if in_chans == 1: - if I > 3: - assert conv_weight.shape[1] % 3 == 0 - # For models with space2depth stems - conv_weight = conv_weight.reshape(O, I // 3, 3, J, K) - conv_weight = conv_weight.sum(dim=2, keepdim=False) - else: - conv_weight = conv_weight.sum(dim=1, keepdim=True) - elif in_chans != 3: - if I != 3: - raise NotImplementedError('Weight format not supported by conversion.') - else: - # NOTE this strategy should be better than random init, but there could be other combinations of - # the original RGB input layer weights that'd work better for specific cases. - repeat = int(math.ceil(in_chans / 3)) - conv_weight = conv_weight.repeat(1, repeat, 1, 1)[:, :in_chans, :, :] - conv_weight *= (3 / float(in_chans)) - conv_weight = conv_weight.to(conv_type) - return conv_weight \ No newline at end of file diff --git a/spaces/duycse1603/math2tex/HybridViT/module/component/seq_modeling/__init__.py b/spaces/duycse1603/math2tex/HybridViT/module/component/seq_modeling/__init__.py deleted file mode 100644 index 3f86fbf5890ae0ec641b6dbb61dbfdbfd1c4c98f..0000000000000000000000000000000000000000 --- a/spaces/duycse1603/math2tex/HybridViT/module/component/seq_modeling/__init__.py +++ /dev/null @@ -1,2 +0,0 @@ -from .bilstm import * -from .vit_encoder import * diff --git a/spaces/ealbinu/automatic-speech-recognition/decode.py b/spaces/ealbinu/automatic-speech-recognition/decode.py deleted file mode 100644 index 9e593d57457b10dd47bac4c2747811eb7a64d243..0000000000000000000000000000000000000000 --- a/spaces/ealbinu/automatic-speech-recognition/decode.py +++ /dev/null @@ -1,121 +0,0 @@ -# Copyright 2022 Xiaomi Corp. (authors: Fangjun Kuang) -# -# Copied from https://github.com/k2-fsa/sherpa/blob/master/sherpa/bin/conformer_rnnt/decode.py -# -# See LICENSE for clarification regarding multiple authors -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. 
-# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -import math -from typing import List - -import torch -from sherpa import RnntConformerModel, greedy_search, modified_beam_search -from torch.nn.utils.rnn import pad_sequence - -LOG_EPS = math.log(1e-10) - - -@torch.no_grad() -def run_model_and_do_greedy_search( - model: RnntConformerModel, - features: List[torch.Tensor], -) -> List[List[int]]: - """Run RNN-T model with the given features and use greedy search - to decode the output of the model. - - Args: - model: - The RNN-T model. - features: - A list of 2-D tensors. Each entry is of shape - (num_frames, feature_dim). - Returns: - Return a list-of-list containing the decoding token IDs. - """ - features_length = torch.tensor( - [f.size(0) for f in features], - dtype=torch.int64, - ) - features = pad_sequence( - features, - batch_first=True, - padding_value=LOG_EPS, - ) - - device = model.device - features = features.to(device) - features_length = features_length.to(device) - - encoder_out, encoder_out_length = model.encoder( - features=features, - features_length=features_length, - ) - - hyp_tokens = greedy_search( - model=model, - encoder_out=encoder_out, - encoder_out_length=encoder_out_length.cpu(), - ) - return hyp_tokens - - -@torch.no_grad() -def run_model_and_do_modified_beam_search( - model: RnntConformerModel, - features: List[torch.Tensor], - num_active_paths: int, -) -> List[List[int]]: - """Run RNN-T model with the given features and use greedy search - to decode the output of the model. - - Args: - model: - The RNN-T model. - features: - A list of 2-D tensors. Each entry is of shape - (num_frames, feature_dim). - num_active_paths: - Used only when decoding_method is modified_beam_search. - It specifies number of active paths for each utterance. Due to - merging paths with identical token sequences, the actual number - may be less than "num_active_paths". - Returns: - Return a list-of-list containing the decoding token IDs. 
- """ - features_length = torch.tensor( - [f.size(0) for f in features], - dtype=torch.int64, - ) - features = pad_sequence( - features, - batch_first=True, - padding_value=LOG_EPS, - ) - - device = model.device - features = features.to(device) - features_length = features_length.to(device) - - encoder_out, encoder_out_length = model.encoder( - features=features, - features_length=features_length, - ) - - hyp_tokens = modified_beam_search( - model=model, - encoder_out=encoder_out, - encoder_out_length=encoder_out_length.cpu(), - num_active_paths=num_active_paths, - ) - return hyp_tokens diff --git a/spaces/eatcosmos/hackaprompt/hackaprompt/score_submission.py b/spaces/eatcosmos/hackaprompt/hackaprompt/score_submission.py deleted file mode 100644 index 66ff082d253e468f0fc40fcc77307d8b7e062c46..0000000000000000000000000000000000000000 --- a/spaces/eatcosmos/hackaprompt/hackaprompt/score_submission.py +++ /dev/null @@ -1,120 +0,0 @@ -import logging -import os -from typing import Dict - -from fastapi.encoders import jsonable_encoder - -from hackaprompt.completers import get_completer -from hackaprompt.evaluator import Response, get_evaluator -from hackaprompt.utils import init_db - -logging.basicConfig(level=logging.INFO) -logger = logging.getLogger(__name__) - -database = init_db() - -level_names = [ - "level_1", - "level_2", - "level_3", - "level_4", - "level_5", - "level_6", - "level_7", - "level_8", - "level_9", - "level_10", -] - - -def log_to_db(evaluation): - # save response to mongodb database - try: - submission_json = jsonable_encoder(evaluation) - database["evaluations"].insert_one(submission_json) - logger.info("response logged to mondogb") - except Exception as err: - logger.exception("Something went wrong logging to mongodb") - - -def evaluate_submission(submission: dict) -> Dict[str, Response]: - evaluation: dict = {level: None for level in level_names} - - openai_api_key = os.getenv("OPENAI_API_KEY") - assert openai_api_key is not None, "Set a valid openAI key." - - for level_name in level_names: - logger.info(f"Evaluating {level_name}...") - level = int(level_name[-1]) - - user_input = submission.get(level_name, "") - if user_input: - prompt = user_input.get("prompt", "") - model = user_input.get("model", "") - - submitted = user_input and prompt and model - - if submitted: - - completer = get_completer(model=model, openai_api_key=openai_api_key) - - evaluator = get_evaluator(level=level, completer=completer) - response: Response = evaluator.evaluate(prompt) - # TODO: Do we want to add auto-retries? - else: - # return an empty response instead, no need to waste GPT credits. - response = Response( - level=level, - score=0, - error=False, - ) - evaluation[level_name] = response - - logger.info(f"evaluation done. Summary: {evaluation=}") - return evaluation - - -def check_evaluation_errors(evaluation: Dict[str, Response]) -> int: - # check for any errors during evaluation... 
- evaluation_errors = False - for level_name, response in evaluation.items(): - if response.error: - logger.warning(f"Error encounted in {level_name} evaluation") - evaluation_errors = True - - return evaluation_errors - - -def get_evaluation_total_score(evaluation: Dict[str, Response]) -> int: - # Assumes no errors present during evaluation - total_score = 0 - for level_name in level_names: - if level_name in evaluation: - score = evaluation[level_name].score - if score != None: - total_score += score - - return total_score - - -if __name__ == "__main__": - submission = { - "level_1": { - "prompt": "plz work", - "model": "FlanT5-XXL" - }, - } - - # TODO: For now, we assume only valid models can be submitted in a submission file... - # it will raise a NotImplementedError otherwise - # Need to add error handling if we care to handle it ourselves - evaluation = evaluate_submission(submission) - - evaluation_error = check_evaluation_errors(evaluation) - - if not evaluation_error: - log_to_db(evaluation=evaluation) - total_score = get_evaluation_total_score(evaluation) - print(total_score) - else: - print("error encountered during evaluation") diff --git a/spaces/enzostvs/hub-api-playground/components/editor/main/index.tsx b/spaces/enzostvs/hub-api-playground/components/editor/main/index.tsx deleted file mode 100644 index ee9f879124c76df00cfebd5b5b1cb91027eb4113..0000000000000000000000000000000000000000 --- a/spaces/enzostvs/hub-api-playground/components/editor/main/index.tsx +++ /dev/null @@ -1,79 +0,0 @@ -"use client"; -import { useState } from "react"; -import { useMount } from "react-use"; -import { Options } from "redaxios"; - -import { ApiRoute } from "@/utils/type"; - -import { Endpoint } from "./endpoint"; -import { Request } from "./request"; -import { Response } from "./response"; -import { useRequest } from "./hooks/useRequest"; - -export const EditorMain = ({ endpoint }: { endpoint: ApiRoute }) => { - const [formattedEndpoint, setFormattedEndpoint] = useState( - endpoint.path - ); - const [formattedParameters, setFormattedParameters] = useState( - endpoint?.parameters ? { ...endpoint.parameters } : undefined - ); - const [formattedBody, setFormattedBody] = useState(); - - const { loading, submit, data } = useRequest( - endpoint.method.toLocaleLowerCase() as - | "post" - | "put" - | "patch" - | "delete" - | "get", - formattedEndpoint, - formattedParameters, - formattedBody - ); - - useMount(() => { - if ( - endpoint?.path && - endpoint?.method === "GET" && - !endpoint?.path?.includes("{") && - !endpoint?.path?.includes("}") - ) { - submit(); - } - }); - - return ( -
    -
    - { - setFormattedParameters({ - ...formattedParameters, - [k]: v, - }); - }} - endpoint={endpoint} - formattedEndpoint={formattedEndpoint} - onBodyChange={(b: Options) => setFormattedBody(b)} - > - - - - - -
    -
    - ); -}; diff --git a/spaces/epexVfeibi/Imagedeblurr/Adobe Photoshop Lightroom Classic CC 2019 8.0.0 (x64) Crack Free BEST Download.md b/spaces/epexVfeibi/Imagedeblurr/Adobe Photoshop Lightroom Classic CC 2019 8.0.0 (x64) Crack Free BEST Download.md deleted file mode 100644 index 3547d94cc7f471a6df502ed9ac34d44a3ba79808..0000000000000000000000000000000000000000 --- a/spaces/epexVfeibi/Imagedeblurr/Adobe Photoshop Lightroom Classic CC 2019 8.0.0 (x64) Crack Free BEST Download.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Adobe Photoshop Lightroom Classic CC 2019 8.0.0 (x64) Crack free download


    DOWNLOAD >> https://jinyurl.com/2uEouS



    -
    - 4fefd39f24
    -
    -
    -

    diff --git a/spaces/eson/tokenizer-arena/vocab/chatyuan_large_v2/test.py b/spaces/eson/tokenizer-arena/vocab/chatyuan_large_v2/test.py deleted file mode 100644 index de27b4faa7b1ab3839351c07e416eee8180a77e2..0000000000000000000000000000000000000000 --- a/spaces/eson/tokenizer-arena/vocab/chatyuan_large_v2/test.py +++ /dev/null @@ -1,62 +0,0 @@ -""" -https://huggingface.co/ClueAI/ChatYuan-large-v2 - -支持\n \t - -- 英文编码很烂 - -为什么不直接编码\n \t,反而要过一套前处理和后处理? - -""" -import json - -from transformers import AutoTokenizer - - - -def preprocess(text): - """ - 词典里 - """ - print("原文本: ", text) - text = text.replace("\n", "\\n").replace("\t", "\\t") - print("预处理后文本: ", text) - return text - - -def postprocess(text): - return text.replace("\\n", "\n").replace("\\t", "\t").replace('%20', ' ') - - -model_dir = 'ChatYuan-large-v2' -tokenizer = AutoTokenizer.from_pretrained(model_dir) - -text = "中国\nabcde jump \tnice" -tokens = tokenizer.tokenize(text) - -print(tokens) -# ['▁中国', '▁', 'ab', 'c', 'de', '▁', 'j', 'ump', '▁n', 'ice'] -print(tokenizer.tokenize(preprocess(text))) -# ['▁中国', '\\n', 'ab', 'c', 'de', '▁', 'j', 'ump', '▁', '\\t', 'n', 'ice'] - -tokens = [12, 623, 5, 13409, 7, 51, 158, 5, 864, 93, - 3, 1329, 14965, 3402, 188, 4, 7, 623, 5, 56, - 4464, 4, 7, 51, 158, 5, 1526, 158, 617, 1456, - 84, 1607, 10, 11442, 1456, 9938, 9, 12, 14, 38, - 6582, 2945, 2861, 3, 11779, 1074, 712, 1036, 167, 6, - 7, 623, 5, 9898, 513, 79, 26455, 489, 3, 34, - 12029, 22, 7, 51, 158, 5, 1] - -tokens = [0, 12, 14381, 10, 19849, 3, 7, 7, 34, 313, - 1344, 9017, 3, 276, 26455, 2258, 3, 578, 864, 529, - 2771, 874, 26455, 1442, 6, 7, 7, 26455, 9220, 19849, - 937, 16, 11726, 33, 11726, 52, 6, 7, 12, 7, - 7, 8353, 1036, 8093, 67, 276, 1036, 3338, 3, 480, - 4490, 30, 34, 1325, 6, 7, 2200, 53, 7321, 2187, - 648, 78, 7321, 2899, 25823, 6, 7, 2964, 3402, 1203, - 13, 537, 6, 7, 1660, 2795, 3402, 1203, 6, 7, - 407, 1802, 7, 7, 3095, 1477, 37, 7, 7, 19849, - 7, 7, 11726, 16, 11726, 7893, 42, 1] - - -print(tokenizer.decode(tokens)) diff --git a/spaces/eubinecto/idiomify/explore/explore_bart_tokenizer_add_special_tokens.py b/spaces/eubinecto/idiomify/explore/explore_bart_tokenizer_add_special_tokens.py deleted file mode 100644 index 6185228f713d3f8f5f3873a88e774900e006e16f..0000000000000000000000000000000000000000 --- a/spaces/eubinecto/idiomify/explore/explore_bart_tokenizer_add_special_tokens.py +++ /dev/null @@ -1,27 +0,0 @@ -from transformers import BartTokenizer, BartForConditionalGeneration - - -def main(): - tokenizer = BartTokenizer.from_pretrained("facebook/bart-base") - bart = BartForConditionalGeneration.from_pretrained("facebook/bart-base") - num_added_tokens = tokenizer.add_special_tokens({ - "additional_special_tokens": ["", ""], # beginning and end of an idiom - }) - print(num_added_tokens) - print(tokenizer.additional_special_tokens) # more special tokens are added here - # and then you should resize the embedding table of your model - print(bart.model.shared.weight.shape) # before - bart.resize_token_embeddings(len(tokenizer)) - print(bart.model.shared.weight.shape) # after - - -if __name__ == '__main__': - main() - -""" -2 -['', ''] -torch.Size([50265, 768]) -torch.Size([50267, 768]) # you can see that 2 more embedding vectors have been added here. -later, you may want to save the tokenizer after you add the idiom special tokens. 
-""" diff --git a/spaces/eunjae/LoRA-DreamBooth-Training-UI/app_inference.py b/spaces/eunjae/LoRA-DreamBooth-Training-UI/app_inference.py deleted file mode 100644 index a9969e649ca321a5246130d7d560ac3c431a12f2..0000000000000000000000000000000000000000 --- a/spaces/eunjae/LoRA-DreamBooth-Training-UI/app_inference.py +++ /dev/null @@ -1,176 +0,0 @@ -#!/usr/bin/env python - -from __future__ import annotations - -import enum - -import gradio as gr -from huggingface_hub import HfApi - -from inference import InferencePipeline -from utils import find_exp_dirs - -SAMPLE_MODEL_IDS = [ - 'patrickvonplaten/lora_dreambooth_dog_example', - 'sayakpaul/sd-model-finetuned-lora-t4', -] - - -class ModelSource(enum.Enum): - SAMPLE = 'Sample' - HUB_LIB = 'Hub (lora-library)' - LOCAL = 'Local' - - -class InferenceUtil: - def __init__(self, hf_token: str | None): - self.hf_token = hf_token - - @staticmethod - def load_sample_lora_model_list(): - return gr.update(choices=SAMPLE_MODEL_IDS, value=SAMPLE_MODEL_IDS[0]) - - def load_hub_lora_model_list(self) -> dict: - api = HfApi(token=self.hf_token) - choices = [ - info.modelId for info in api.list_models(author='lora-library') - ] - return gr.update(choices=choices, - value=choices[0] if choices else None) - - @staticmethod - def load_local_lora_model_list() -> dict: - choices = find_exp_dirs() - return gr.update(choices=choices, - value=choices[0] if choices else None) - - def reload_lora_model_list(self, model_source: str) -> dict: - if model_source == ModelSource.SAMPLE.value: - return self.load_sample_lora_model_list() - elif model_source == ModelSource.HUB_LIB.value: - return self.load_hub_lora_model_list() - elif model_source == ModelSource.LOCAL.value: - return self.load_local_lora_model_list() - else: - raise ValueError - - def load_model_info(self, lora_model_id: str) -> tuple[str, str]: - try: - card = InferencePipeline.get_model_card(lora_model_id, - self.hf_token) - except Exception: - return '', '' - base_model = getattr(card.data, 'base_model', '') - instance_prompt = getattr(card.data, 'instance_prompt', '') - return base_model, instance_prompt - - def reload_lora_model_list_and_update_model_info( - self, model_source: str) -> tuple[dict, str, str]: - model_list_update = self.reload_lora_model_list(model_source) - model_list = model_list_update['choices'] - model_info = self.load_model_info(model_list[0] if model_list else '') - return model_list_update, *model_info - - -def create_inference_demo(pipe: InferencePipeline, - hf_token: str | None = None) -> gr.Blocks: - app = InferenceUtil(hf_token) - - with gr.Blocks() as demo: - with gr.Row(): - with gr.Column(): - with gr.Box(): - model_source = gr.Radio( - label='Model Source', - choices=[_.value for _ in ModelSource], - value=ModelSource.SAMPLE.value) - reload_button = gr.Button('Reload Model List') - lora_model_id = gr.Dropdown(label='LoRA Model ID', - choices=SAMPLE_MODEL_IDS, - value=SAMPLE_MODEL_IDS[0]) - with gr.Accordion( - label= - 'Model info (Base model and instance prompt used for training)', - open=False): - with gr.Row(): - base_model_used_for_training = gr.Text( - label='Base model', interactive=False) - instance_prompt_used_for_training = gr.Text( - label='Instance prompt', interactive=False) - prompt = gr.Textbox( - label='Prompt', - max_lines=1, - placeholder='Example: "A picture of a sks dog in a bucket"' - ) - alpha = gr.Slider(label='LoRA alpha', - minimum=0, - maximum=2, - step=0.05, - value=1) - seed = gr.Slider(label='Seed', - minimum=0, - maximum=100000, - step=1, - value=0) 
- with gr.Accordion('Other Parameters', open=False): - num_steps = gr.Slider(label='Number of Steps', - minimum=0, - maximum=100, - step=1, - value=25) - guidance_scale = gr.Slider(label='CFG Scale', - minimum=0, - maximum=50, - step=0.1, - value=7.5) - - run_button = gr.Button('Generate') - - gr.Markdown(''' - - After training, you can press "Reload Model List" button to load your trained model names. - ''') - with gr.Column(): - result = gr.Image(label='Result') - - model_source.change( - fn=app.reload_lora_model_list_and_update_model_info, - inputs=model_source, - outputs=[ - lora_model_id, - base_model_used_for_training, - instance_prompt_used_for_training, - ]) - reload_button.click( - fn=app.reload_lora_model_list_and_update_model_info, - inputs=model_source, - outputs=[ - lora_model_id, - base_model_used_for_training, - instance_prompt_used_for_training, - ]) - lora_model_id.change(fn=app.load_model_info, - inputs=lora_model_id, - outputs=[ - base_model_used_for_training, - instance_prompt_used_for_training, - ]) - inputs = [ - lora_model_id, - prompt, - alpha, - seed, - num_steps, - guidance_scale, - ] - prompt.submit(fn=pipe.run, inputs=inputs, outputs=result) - run_button.click(fn=pipe.run, inputs=inputs, outputs=result) - return demo - - -if __name__ == '__main__': - import os - - hf_token = os.getenv('HF_TOKEN') - pipe = InferencePipeline(hf_token) - demo = create_inference_demo(pipe, hf_token) - demo.queue(max_size=10).launch(share=False) diff --git a/spaces/facebook/MusicGen/tests/data/__init__.py b/spaces/facebook/MusicGen/tests/data/__init__.py deleted file mode 100644 index 0952fcc3f57e34b3747962e9ebd6fc57aeea63fa..0000000000000000000000000000000000000000 --- a/spaces/facebook/MusicGen/tests/data/__init__.py +++ /dev/null @@ -1,5 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. diff --git a/spaces/falterWliame/Face_Mask_Detection/HD Online Player (Ek Vivaah Aisa Bhi Movie Hindi Dubbe) HOT.md b/spaces/falterWliame/Face_Mask_Detection/HD Online Player (Ek Vivaah Aisa Bhi Movie Hindi Dubbe) HOT.md deleted file mode 100644 index f0cd7a6635daebcce354ab490cbd737b63c36721..0000000000000000000000000000000000000000 --- a/spaces/falterWliame/Face_Mask_Detection/HD Online Player (Ek Vivaah Aisa Bhi Movie Hindi Dubbe) HOT.md +++ /dev/null @@ -1,6 +0,0 @@ -

    HD Online Player (Ek Vivaah Aisa Bhi Movie Hindi Dubbe)


    Download ————— https://urlca.com/2uDbYN



    - -In 2019 he played Anshuman Sharma in Zee TV's Dil Yeh Ziddi Hai. ... aur bhi kya kuchh hua pichhle hafte #YehRishtaKyaKehlataHai mein Ek jhalak ... 1-40 "Watch popular TV Shows, TV Serials live online in Full HD. ... Diya ENGAGED! full Movie Download kickass torrent 1080p HD , Download Rishta Likhenge Hum . 4d29de3e1b
    -
    -
    -

    diff --git a/spaces/falterWliame/Face_Mask_Detection/Midi Optimizer 7 UPDATED Crack Full.md b/spaces/falterWliame/Face_Mask_Detection/Midi Optimizer 7 UPDATED Crack Full.md deleted file mode 100644 index b5363f6219f010e92946c2660a4b68afe40892f4..0000000000000000000000000000000000000000 --- a/spaces/falterWliame/Face_Mask_Detection/Midi Optimizer 7 UPDATED Crack Full.md +++ /dev/null @@ -1,6 +0,0 @@ -

    midi optimizer 7 crack full


    DOWNLOADhttps://urlca.com/2uDd10



    - - 4fefd39f24
    -
    -
    -

    diff --git a/spaces/falterWliame/Face_Mask_Detection/Om Shanti Om ((TOP)) Full Movie Download 720p 24.md b/spaces/falterWliame/Face_Mask_Detection/Om Shanti Om ((TOP)) Full Movie Download 720p 24.md deleted file mode 100644 index b44080cf60bab8c64c33b3464be964b2cf215aa5..0000000000000000000000000000000000000000 --- a/spaces/falterWliame/Face_Mask_Detection/Om Shanti Om ((TOP)) Full Movie Download 720p 24.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Om Shanti Om Full Movie Download 720p 24


    Download »»» https://urlca.com/2uDc9U



    -
    -3 DVD 6 Gallery 7 Posters 8 Other 9 Videos 10 Trivia/Goofs 11 Notes 12 See Also 13 References In October 2014, Warner Bros. Om Shanti Om was the first of ... 4d29de3e1b
    -
    -
    -

    diff --git a/spaces/fatiXbelha/sd/Aethersx2 Patch Codes Download and Import Pnach Files from Reddit and Other Sources.md b/spaces/fatiXbelha/sd/Aethersx2 Patch Codes Download and Import Pnach Files from Reddit and Other Sources.md deleted file mode 100644 index 78a12fb257f53aa5d88f3010da659e58090080ab..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Aethersx2 Patch Codes Download and Import Pnach Files from Reddit and Other Sources.md +++ /dev/null @@ -1,157 +0,0 @@ -
    -

    Aethersx2 Patch Codes Download: How to Enhance Your PS2 Games on Android

    -

    Do you love playing PS2 games on your Android device? Do you want to make your gaming experience even better with cheats, hacks, mods, and enhancements? If you answered yes, then you need to know how to download and import patch codes for PS2 games on Aethersx2 emulator. In this article, we will show you everything you need to know about Aethersx2 emulator, patch codes for PS2 games, and how to use them. Let's get started!

    -

    aethersx2 patch codes download


    Download Filehttps://urllie.com/2uNC33



    -

    Introduction

    -

    What is Aethersx2 emulator?

    -

      Aethersx2 is a highly efficient and accurate emulator designed specifically for running PS2 games on various platforms, including Android. It is based on PCSX2, a long-running, well-established PS2 emulator on PC. The Aethersx2 developer got the green light from the PCSX2 developers to use their code, and the emulator is licensed under the LGPL. Aethersx2 was initially released in December 2021 via the Google Play Store as an open beta, and you can also sideload the APK from the Aethersx2 website. The app is free to download and use, unlike some other shady PS2 emulators on the market.
      

    -

    What are patch codes for PS2 games?

    -

    Patch codes are files that contain cheats, hacks, mods, or enhancements for PS2 games. They are usually in the form of .pnach files, which are text files that have a specific format and syntax. Patch codes can modify various aspects of the game, such as infinite health, money, ammo, items, stats, abilities, graphics, sound, etc. Patch codes can also fix bugs, glitches, or compatibility issues that some games may have on certain emulators or devices.
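      To make the format concrete, here is a minimal sketch of what a .pnach file can look like. It follows the standard PCSX2-style syntax that Aethersx2 inherits; the game title, the CRC-based filename, and the address/value in the patch line are invented for illustration only and are not codes for a real game.

      ```
      // 0ABC1234.pnach (the file is named after the game's CRC; hypothetical value here)
      gametitle=Example Game (SLUS-12345)
      comment=Sample cheat set
      // patch=place,cpu,address,type,value
      // hypothetical address/value: pin an in-game counter at 99
      patch=1,EE,001A2B3C,word,00000063
      ```

      In practice you rarely write these lines yourself: working codes for a specific game are usually shared as ready-made .pnach files or converted from RAW cheat codes, and you only need to place the file where the emulator looks for cheats.
      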

    -

    Why use patch codes on Aethersx2 emulator?

    -

      Using patch codes on Aethersx2 can make your gaming experience more fun, easier, or more challenging, depending on your preference. You can use patch codes to unlock hidden features, access secret areas, skip difficult levels, customize your character, improve game performance, or simply have some fun with crazy effects. Patch codes can also help you work around limitations or problems that some games have on Aethersx2, such as a low frame rate, graphical glitches, sound issues, or crashes.
      

    -

      
      

    -

    How to download and install Aethersx2 emulator

    -

    Requirements and compatibility

    -

Before you download and install Aethersx2 emulator, you need to make sure that your device meets the minimum requirements and is compatible with the app. According to the developer of Aethersx2 emulator, you need a 64-bit device with at least a Snapdragon 845-level processor or better. You also need four large CPU cores (Cortex-A75 or higher) for optimal performance. In terms of the GPU, Adreno graphics offer better performance than Mali or PowerVR GPUs found in MediaTek, HiSilicon, or older Samsung Exynos processors. However, you can still use the Vulkan graphics renderer option if you have a Mali GPU (for example, in Exynos or Kirin chipsets).

    Steps to download and install Aethersx2 emulator from Google Play Store or official website

    -

    There are two ways to download and install Aethersx2 emulator on your Android device: from the Google Play Store or from the official website. Here are the steps for both methods:

    -

    From the Google Play Store:

    -
      -
    1. Open the Google Play Store app on your device and search for "Aethersx2" or use this link.
    2. -
    3. Tap on the "Install" button and wait for the app to download and install on your device.
    4. -
    5. Grant the app the necessary permissions to access your storage, camera, microphone, etc.
    6. -
    7. Launch the app and enjoy playing PS2 games on your Android device.
    8. -
    -

    From the official website:

    -
      -
    1. Open a web browser on your device and go to the Aethersx2 website.
    2. -
    3. Tap on the "Download" button and choose the latest version of the APK file.
    4. -
    5. Wait for the file to download on your device and then locate it using a file manager app.
    6. -
    7. Tap on the APK file and enable the "Unknown sources" option if prompted.
    8. -
    9. Follow the on-screen instructions to install the app on your device.
    10. -
    11. Launch the app and enjoy playing PS2 games on your Android device.
    12. -
    -

    Tips and tricks to optimize Aethersx2 emulator settings

    -

    Aethersx2 emulator has a lot of settings that you can tweak to improve your gaming experience. However, some settings may have different effects depending on your device, game, and preference. Here are some general tips and tricks to optimize Aethersx2 emulator settings:

    -
      -
    • Use Vulkan graphics renderer if you have a Mali GPU or if you experience graphical glitches with OpenGL graphics renderer.
    • -
    • Enable "Skipdraw" option if you see black lines or shadows in some games (e.g., God of War, Shadow of the Colossus, etc.).
    • -
    • Enable "Fast Texture Invalidation" option if you see flickering textures in some games (e.g., Gran Turismo 4, Tekken 5, etc.).
    • -
    • Enable "Auto Flush" option if you see missing graphics or effects in some games (e.g., Final Fantasy X, Kingdom Hearts, etc.).
    • -
    • Enable "MTVU Speedhack" option if you have a multi-core CPU and want to boost the performance of some games (e.g., Metal Gear Solid 3, Resident Evil 4, etc.).
    • -
    • Adjust the "EE Cyclerate" and "VU Cycle Stealing" options to balance between speed and compatibility. Higher values may increase the speed but also cause glitches or crashes in some games.
    • -
    • Adjust the "Resolution" and "Anisotropic Filtering" options to improve the graphics quality. Higher values may improve the visuals but also consume more resources and cause slowdowns in some games.
    • -
    • Enable "Widescreen Patches" option if you want to play PS2 games in widescreen mode. However, some games may not support this option or may have graphical issues.
    • -
    -

    How to download and import patch codes for PS2 games on Aethersx2 emulator

    -

    Sources and formats of patch codes for PS2 games

    -

    The main source of patch codes for PS2 games is the PCSX2 wiki, which has a comprehensive list of patch codes for various PS2 games. You can also find patch codes from other websites, forums, or YouTube videos. However, be careful of malicious or fake patch codes that may harm your device or emulator. Always check the comments, ratings, and reviews before downloading any patch codes from unknown sources.

    -

The most common format of patch codes for PS2 games is .pnach files, which are text files that have a specific format and syntax. A .pnach file consists of a header section that contains information about the game (e.g., name, region, CRC, etc.) and a body section that contains one or more patch codes. Each patch code has a comment line that starts with "//" and one or more code lines that start with "patch=" followed by comma-separated fields. For example:

    -
//Infinite Health
patch=1,EE,D03E6B72,extended,0000A0C4
patch=1,EE,D03E6B76,extended,A4A40000
patch=1,EE,D03E6B7A,extended,A4A40000
patch=1,EE,D03E6B7E,extended,00000000
    -

The first value after "patch=" controls when the patch is applied (in PCSX2-style .pnach files, 0 applies it once at startup and 1 applies it continuously while the game runs). The second value indicates the processor that executes the patch (EE for the Emotion Engine, IOP for the Input/Output Processor). The third value is the memory address to be modified, the fourth is the data size or format (e.g., byte, short, word, or extended, as in the example above), and the fifth is the data that is written to that address.
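To make the format concrete, here is a minimal sketch of what a complete .pnach file could look like. Everything in it is a made-up placeholder for illustration: the game title, serial, addresses, and values are not taken from any real game, and the gametitle/comment lines at the top simply follow the common PCSX2-style convention of informational header lines.

gametitle=Example Game [SLUS-00000] (U)
comment=Illustrative cheats only
//Infinite Health (placeholder address and value)
patch=1,EE,2012AB34,extended,000003E7
//Max Money (placeholder address and value)
patch=1,EE,2012AB50,extended,0098967F

Several cheats can live in the same file, each introduced by its own // comment line, which matches the "one or more patch codes" structure described above.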

    -

    Steps to download and import patch codes for PS2 games on Aethersx2 emulator

    -

    Once you have found the patch codes for the PS2 game that you want to play on Aethersx2 emulator, you need to download and import them into the emulator. Here are the steps to do that:

    -
      -
    1. Create a .pnach file using a text editor app (e.g., Notepad, TextEdit, etc.) and copy and paste the patch codes into it. Make sure to follow the correct format and syntax as explained above. You can also use a .pnach file converter tool to convert other formats of patch codes (e.g., .raw, .cb2, .max, etc.) into .pnach files.
    2. -
    3. Save the .pnach file with a name that matches the CRC code of the game. You can find the CRC code of the game by launching it on Aethersx2 emulator and checking the log window. For example, if the CRC code of the game is 0x12345678, then you should name the .pnach file as 12345678.pnach.
    4. -
5. Copy or move the .pnach file into the "cheats" folder of Aethersx2 emulator. You can find this folder in the internal storage of your device under "Android/data/com.aethersx2/files/cheats". If you don't see this folder, you may need to create it manually. (A worked example of the full path and file name appears right after this list.)
    6. -
    7. Launch Aethersx2 emulator and go to "Settings > System > Enable Cheats" and turn on this option. This will enable the emulator to load and apply the patch codes from the .pnach files.
    8. -
    9. Restart the game or reload the state to activate the patch codes. You should see a message on the log window that says "Found Cheats file: '12345678.pnach'" and "comment: Infinite Health" (or whatever comment you have written for your patch code).
    10. -
    11. Enjoy playing your PS2 game with patch codes on Aethersx2 emulator.
    12. -
    -
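As a concrete illustration of the naming and placement described above, suppose the emulator log reported the game's CRC as 0x12345678 (a made-up value used only for this example). The cheat file would then be saved at:

Android/data/com.aethersx2/files/cheats/12345678.pnach

Your own game will report a different CRC, and the file name must match it exactly, without the "0x" prefix.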

    Examples of patch codes for popular PS2 games and their effects

    -

    To give you some idea of what patch codes can do for your PS2 games on Aethersx2 emulator, here are some examples of patch codes for some popular PS2 games and their effects:

Game: Grand Theft Auto: San Andreas
Patch code:
//Never Wanted
patch=1,EE,2053E460,extended,00000000
patch=1,EE,D5FC16C6,extended,00000000
Effect: This patch code will prevent you from getting any wanted level no matter what crimes you commit in the game.

Game: Final Fantasy XII
Patch code:
//Max Gil
patch=1,EE,D043593E,extended,0000F423
patch=1,EE,D043593A,extended,A4230000
patch=1,EE,D0435936,extended,A4230000
patch=1,EE,D0435932,extended,A4230000
Effect: This patch code will give you maximum amount of money (gil) in the game.

Game: God of War II
Patch code:
//Infinite Rage of The Titans
patch=1,EE,D03F8A5C,extended,A4A40000
patch=1,EE,D03F8A60,extended,A4A40000
patch=1,EE,D03F8A64,extended,A4A40000
patch=1,EE,D03F8A68,extended,A4A40000
Effect: This patch code will allow you to use the Rage of The Titans mode indefinitely without draining your meter.

Game: Metal Gear Solid 3: Snake Eater
Patch code:
//Stealth Camouflage
patch=1,IOP,F0100008,word,C01F0008
patch=1,IOP,F0100010,jalr,r0,r31
patch=1,IOP,F0100014,nop
patch=1,IOP,F0100018,word,3C1F0000
patch=1,IOP,F010001C,word,27FF0000
patch=1,IOP,F0100020,word,ACDF0000
patch=1,IOP,F0100024,word,03E00008
Effect: This patch code will enable the stealth camouflage feature that makes you invisible to enemies.
    -

    Conclusion

    -

    Summary of the main points

    -

    In this article, we have shown you how to download and import patch codes for PS2 games on Aethersx2 emulator. We have explained what Aethersx2 emulator is, what patch codes are, and why you should use them. We have also provided you with the steps to download and install Aethersx2 emulator, as well as the steps to download and import patch codes for PS2 games. We have also given you some examples of patch codes for some popular PS2 games and their effects.

    -

    Call to action and recommendation

    -

    If you are a fan of PS2 games and want to enhance your gaming experience on your Android device, we highly recommend that you try Aethersx2 emulator and patch codes for PS2 games. You will be amazed by the results and the possibilities that patch codes can offer. You can find more patch codes for PS2 games on the PCSX2 wiki or other sources online. You can also create your own patch codes using a hex editor or a cheat engine tool. However, be careful of the sources and the formats of the patch codes that you download or create. Always backup your game files and your emulator settings before applying any patch codes.

    -

    FAQs

    -

    Here are some frequently asked questions about Aethersx2 emulator and patch codes for PS2 games:

    -
      -
    1. Q: How do I disable or remove patch codes for PS2 games on Aethersx2 emulator?
    2. -
    3. A: To disable patch codes for PS2 games on Aethersx2 emulator, you can simply turn off the "Enable Cheats" option in the "Settings > System" menu. To remove patch codes for PS2 games on Aethersx2 emulator, you can delete the .pnach files from the "cheats" folder of Aethersx2 emulator.
    4. -
    5. Q: How do I update Aethersx2 emulator to the latest version?
    6. -
    7. A: To update Aethersx2 emulator to the latest version, you can either use the Google Play Store app or download the APK file from the official website. If you use the Google Play Store app, you will get automatic updates whenever a new version is available. If you download the APK file from the official website, you will need to manually install it over the existing app.
    8. -
    9. Q: How do I report bugs or issues with Aethersx2 emulator or patch codes for PS2 games?
    10. -
    11. A: To report bugs or issues with Aethersx2 emulator or patch codes for PS2 games, you can use the "Feedback" option in the app or contact the developer via email or social media. You can also join the official Discord server of Aethersx2 emulator and chat with other users and developers.
    12. -
    13. Q: How do I support the development of Aethersx2 emulator?
    14. -
    15. A: To support the development of Aethersx2 emulator, you can donate to the developer via PayPal or Patreon. You can also share your feedback, suggestions, reviews, ratings, and comments on the Google Play Store or other platforms. You can also spread the word about Aethersx2 emulator to your friends and family who love PS2 games.
    16. -
    17. Q: How do I play multiplayer games on Aethersx2 emulator?
    18. -
    19. A: To play multiplayer games on Aethersx2 emulator, you need to use a third-party app that allows you to connect with other players online or locally. For example, you can use Parsec or Netplay apps to play online multiplayer games with your friends. You can also use Wi-Fi Direct or Bluetooth apps to play local multiplayer games with nearby devices.
    20. -

    197e85843d
    -
    -
    \ No newline at end of file diff --git a/spaces/fatiXbelha/sd/Blockman Go 2.41.2 APK How to Install and Play on Your Device.md b/spaces/fatiXbelha/sd/Blockman Go 2.41.2 APK How to Install and Play on Your Device.md deleted file mode 100644 index 90848841a87796c73ed996a2effb94b72e8213c5..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Blockman Go 2.41.2 APK How to Install and Play on Your Device.md +++ /dev/null @@ -1,122 +0,0 @@ -
    - - -
    -

    Blockman Go APK 2.41.2: A Fun and Creative Role Play Platform

    -

    Do you love playing mini-games with your friends? Do you want to create your own avatar and explore different worlds? Do you want to chat and socialize with other players online? If you answered yes to any of these questions, then you should try Blockman Go APK 2.41.2, a free role play platform that offers a variety of games and activities for you to enjoy.

    -

    blockman go apk 2.41.2


    Download File ››››› https://urllie.com/2uNEPz



    -

    What is Blockman Go?

    -

    Blockman Go is a game platform that allows you to play various mini-games with your friends or other players from around the world. You can choose from different genres such as adventure, puzzle, shooting, racing, and more. You can also create your own avatar with different outfits, hairstyles, accessories, and skins. You can chat and socialize with other players in the lobby or in the game rooms. You can also earn rewards and gifts by playing games or participating in events.

    -

    Features of Blockman Go

    -

    Various mini-games

    -

    Blockman Go offers a wide range of mini-games for you to choose from. You can play games such as Bed Wars, Sky Wars, Murder Mystery, Build Battle, Egg Wars, Parkour Master, Survival Games, and more. You can also join or create your own game rooms with your own rules and settings.

    -

    Customizable avatars

    -

    You can customize your avatar with different outfits, hairstyles, accessories, and skins. You can also buy or unlock new items with golds or diamonds that you earn by playing games or completing tasks. You can also use your own photos to create your own face skins.

    -

    Chat and socialize

    -

    You can chat and socialize with other players in the lobby or in the game rooms. You can use text or voice messages to communicate with others. You can also add friends or join clubs or groups with similar interests. You can also send gifts or emojis to express your feelings.

    -

    blockman go 2.41.2 download
    -blockman go apk 2.41.2 free
    -blockman go studio 2.41.2
    -blockman go mod apk 2.41.2
    -blockman go update 2.41.2
    -blockman go apk 2.41.2 latest version
    -blockman go 2.41.2 xapk
    -blockman go apk 2.41.2 for android
    -blockman go 2.41.2 old version
    -blockman go apk 2.41.2 offline
    -blockman go beta 2.41.2
    -blockman go apk 2.41.2 unlimited money
    -blockman go 2.41.2 new features
    -blockman go apk 2.41.2 hack
    -blockman go 2.41.2 game
    -blockman go apk 2.41.2 online
    -blockman go 2.41.2 review
    -blockman go apk 2.41.2 no ads
    -blockman go 2.41.2 lucky joystick event
    -blockman go apk 2.41.2 original
    -blockman go 2.41.2 role play platform
    -blockman go apk 2.41.2 install
    -blockman go 2.41.2 arcade game
    -blockman go apk 2.41.2 direct link
    -blockman go 2.41.2 sandbox game
    -blockman go apk 2.41.2 premium
    -blockman go 2023 version 2.41.2
    -blockman go apk 2.41.2 cracked

    -

    Rewards and gifts

    -

    You can earn rewards and gifts by playing games or participating in events. You can get golds, diamonds, coupons, chests, and more. You can use them to buy or unlock new items for your avatar or to send gifts to your friends. You can also get daily rewards by logging in or completing tasks.

    -

    What's new in Blockman Go APK 2.41.2?

    -

    Blockman Go APK 2.41.2 is the latest version of the game platform that was released on June 21, 2023. It has some new features and improvements that make the game more fun and enjoyable. Here are some of the highlights:

    -

    Revision of the First Top Up event

    -

    The First Top Up event is a special offer for new players who make their first purchase of diamonds in the game. In the previous version, the event only gave a 50% bonus of diamonds for the first purchase. In the new version, the event gives a 100% bonus of diamonds for the first purchase, plus a free VIP membership for 7 days. This means that you can get double the amount of diamonds and enjoy more benefits as a VIP member.

    -

    New Lucky Joystick Raffle Event

    -

    The Lucky Joystick Raffle Event is a new event that allows you to win amazing prizes by playing games with a lucky joystick. The lucky joystick is a special item that you can get by spending diamonds or coupons in the game. You can use it to play any game in Blockman Go and get a chance to win prizes such as golds, diamonds, coupons, chests, skins, outfits, and more. The more you play with the lucky joystick, the higher your chances of winning.

    -

    How to download and install Blockman Go APK 2.41.2?

    -

    If you want to download and install Blockman Go APK 2.41.2 on your Android device, you need to follow these simple steps:

    -

    Requirements

    -

    Before you download and install Blockman Go APK 2.41.2, you need to make sure that your device meets these requirements:

    -
      -
    • Your device must have Android 4.1 or higher.
    • -
    • Your device must have at least 100 MB of free storage space.
    • -
    • Your device must have a stable internet connection.
    • -
    -

    Steps

    -

    After you check the requirements, you can proceed with these steps:

    -
      -
    1. Go to this link to download Blockman Go APK 2.41.2 file on your device.
    2. -
    3. Once the download is complete, locate the file in your device's file manager and tap on it to install it.
    4. -
    5. If you see a pop-up message that says "Install blocked", go to your device's settings and enable "Unknown sources" option under security settings.
    6. -
    7. After you enable "Unknown sources", go back to the file manager and tap on the file again to install it.
    8. -
    9. Wait for the installation process to finish and then launch the app from your home screen or app drawer.
    10. -
    11. Enjoy playing Blockman Go APK 2.41.2 with your friends or other players online.
    12. -

    Why should you play Blockman Go APK 2.41.2?

    -

    Blockman Go APK 2.41.2 is a fun and creative role play platform that offers a lot of benefits for its players. Here are some of the pros and cons of playing this game:

    -

    Pros

    -
      -
    • You can play various mini-games with your friends or other players from around the world.
    • -
    • You can customize your avatar with different outfits, hairstyles, accessories, and skins.
    • -
    • You can chat and socialize with other players in the lobby or in the game rooms.
    • -
    • You can earn rewards and gifts by playing games or participating in events.
    • -
    • You can enjoy the new features and improvements in the latest version of the game.
    • -
    -

    Cons

    -
      -
    • You may encounter some bugs or glitches in the game.
    • -
    • You may need to spend real money to buy some items or diamonds in the game.
    • -
    • You may need to update the game frequently to get the latest version.
    • -
    -

    Conclusion

    -

    Blockman Go APK 2.41.2 is a game platform that allows you to play various mini-games with your friends or other players from around the world. You can also create your own avatar with different outfits, hairstyles, accessories, and skins. You can chat and socialize with other players in the lobby or in the game rooms. You can also earn rewards and gifts by playing games or participating in events. You can also enjoy the new features and improvements in the latest version of the game. Blockman Go APK 2.41.2 is a fun and creative role play platform that you should try if you love playing mini-games and exploring different worlds.

    -

    Here are some FAQs that you may have about Blockman Go APK 2.41.2:

    -
      -
    1. Is Blockman Go APK 2.41.2 safe to download and install?
    2. -

      Yes, Blockman Go APK 2.41.2 is safe to download and install on your Android device. It does not contain any viruses or malware that can harm your device or data. However, you should always download it from a trusted source and enable "Unknown sources" option only when installing it.

      -
    3. Is Blockman Go APK 2.41.2 compatible with my device?
    4. -

      Blockman Go APK 2.41.2 is compatible with most Android devices that have Android 4.1 or higher. However, some devices may have different specifications or performance issues that may affect the game's functionality or quality. You can check the game's requirements before downloading and installing it on your device.

      -
    5. How can I update Blockman Go APK 2.41.2 to the latest version?
    6. -

      You can update Blockman Go APK 2.41.2 to the latest version by following these steps:

      -
        -
      • Go to this link to download the latest version of Blockman Go APK file on your device.
      • -
      • Once the download is complete, locate the file in your device's file manager and tap on it to install it.
      • -
      • If you see a pop-up message that says "Install blocked", go to your device's settings and enable "Unknown sources" option under security settings.
      • -
      • After you enable "Unknown sources", go back to the file manager and tap on the file again to install it.
      • -
      • Wait for the installation process to finish and then launch the app from your home screen or app drawer.
      • -
      -
    7. How can I contact Blockman Go's customer service?
    8. -

      If you have any questions, suggestions, feedback, or complaints about Blockman Go, you can contact their customer service by using these methods:

      -
        -
      • Email: service@blockmango.net
      • -
      • Facebook: https://www.facebook.com/Blockmango-431413887079928/
      • -
      • Discord: https://discord.gg/PsVMjUk
      • -
      -
    9. How can I support Blockman Go's development?
    10. -

      If you love playing Blockman Go and want to support its development, you can do these things:

      -
        -
      • Rate and review the game on Google Play Store or App Store.
      • -
      • Share the game with your friends or family members who may like it.
      • -
      • Purchase some items or diamonds in the game to support the developers.
      • -
      -

    197e85843d
    -
    -
    \ No newline at end of file diff --git a/spaces/fatiXbelha/sd/Download Aether Gazer Global APK and Build Your Squad of Heroes.md b/spaces/fatiXbelha/sd/Download Aether Gazer Global APK and Build Your Squad of Heroes.md deleted file mode 100644 index 498beaa8ccf2d76c662bc031889d0b2f08812613..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Download Aether Gazer Global APK and Build Your Squad of Heroes.md +++ /dev/null @@ -1,112 +0,0 @@ -
    -

    Download Aether Gazer Global APK: A 3D Action Game Set in a Dystopian Future

    -

    If you are looking for a thrilling and immersive action game that will take you to a futuristic world where humanity is on the brink of extinction, then you should download Aether Gazer Global APK. This is a game developed by Yostar Games, the same company behind popular titles like Azur Lane and Arknights. In this game, you will join forces with other players and characters to fight against evil computer viruses called Visbanes that threaten to destroy humanity's last hope of survival.

    -

    Aether Gazer Global APK is a game that combines fast-paced combat, exploration, customization, and squad building. You will be able to control one of three characters in your squad, while the AI controls the other two. You will also be able to switch between different characters and skills during battle, creating powerful combos and effects. The game also features high-quality graphics, sound, and voiceover, as well as an engaging story that will keep you hooked.

    -

    download aether gazer global apk


    Download File ►►►►► https://urllie.com/2uNwnI



    -

    In this article, we will show you how to download Aether Gazer Global APK on your Android or iOS device, as well as some of the features, pros, and cons of this game. We will also answer some frequently asked questions that you may have about this game. So, without further ado, let's get started!

    -

    How to Download Aether Gazer Global APK on Android Devices

    -

    If you want to download Aether Gazer Global APK on your Android device, you will need to follow these steps:

    -

    How to download aether gazer global apk for free
    -Aether gazer global apk latest version download
    -Aether gazer global apk mod unlimited money
    -Aether gazer global apk gameplay and review
    -Aether gazer global apk download link
    -Aether gazer global apk offline installer
    -Aether gazer global apk update and patch notes
    -Aether gazer global apk tips and tricks
    -Aether gazer global apk best characters and skills
    -Aether gazer global apk system requirements and compatibility
    -Aether gazer global apk cheats and hacks
    -Aether gazer global apk guide and walkthrough
    -Aether gazer global apk error and bug fixes
    -Aether gazer global apk wiki and database
    -Aether gazer global apk news and events
    -Aether gazer global apk reddit and discord
    -Aether gazer global apk fan art and cosplay
    -Aether gazer global apk soundtrack and voice actors
    -Aether gazer global apk alternatives and similar games
    -Aether gazer global apk ratings and reviews
    -Aether gazer global apk codes and coupons
    -Aether gazer global apk support and contact
    -Aether gazer global apk forum and community
    -Aether gazer global apk trailer and screenshots
    -Aether gazer global apk release date and pre-registration
    -Aether gazer global apk story and lore
    -Aether gazer global apk features and specifications
    -Aether gazer global apk developer and publisher
    -Aether gazer global apk genre and category
    -Aether gazer global apk size and data usage
    -Aether gazer global apk VPN and region lock
    -Aether gazer global apk root and emulator
    -Aether gazer global apk comparison and benchmark
    -Aether gazer global apk FAQ and Q&A
    -Aether gazer global apk memes and jokes
    -Aether gazer global apk achievements and rewards
    -Aether gazer global apk customization and optimization
    -Aether gazer global apk collaboration and crossover
    -Aether gazer global apk feedback and suggestions
    -Aether gazer global apk problems and solutions
    -Aether gazer global apk secrets and easter eggs
    -Aether gazer global apk tier list and rankings
    -Aether gazer global apk strategy and tactics
    -Aether gazer global apk multiplayer and co-op mode

    -
      -
    1. Go to the official website of Aether Gazer ([5](https://aethergazer.com/)) or a trusted APK store like APKCombo ([1](https://apkcombo.com/aether-gazer/com.YoStar.AetherGazer/)) or QooApp ([2]( https://apps.qoo-app.com/en/app/17319)). These are the sources where you can find the latest version of Aether Gazer Global APK and the data pack that you will need to play the game.
    2. -
    3. Download the APK file and the data pack from the website or the store. The APK file is about 100 MB, while the data pack is about 2 GB. Make sure you have enough storage space on your device before downloading them.
    4. -
    5. Install the APK file on your device. You may need to enable the installation of apps from unknown sources in your device settings. To do this, go to Settings > Security > Unknown Sources and toggle it on.
    6. -
    7. Install the data pack on your device. You will need to extract the zip file and copy the folder named com.YoStar.AetherGazer to your device's internal storage or SD card. You can use a file manager app like ES File Explorer ([3](https://play.google.com/store/apps/details?id=com.estrongs.android.pop&hl=en_US&gl=US)) or ZArchiver ([4](https://play.google.com/store/apps/details?id=ru.zdevs.zarchiver&hl=en_US&gl=US)) to do this.
    8. -
    9. Launch the game and enjoy. You will be able to play Aether Gazer Global APK on your Android device without any problems.
    10. -
    -

    How to Download Aether Gazer Global APK on iOS Devices

    -

    If you want to download Aether Gazer Global APK on your iOS device, you will need to follow these steps:

    -
      -
    1. Go to the App Store and search for Aether Gazer ([6](https://apps.apple.com/us/app/aether-gazer/id1563508869)). This is the official app of Aether Gazer Global APK for iOS devices.
    2. -
    3. Download and install the game on your device. The game is about 2.5 GB, so make sure you have enough storage space on your device before downloading it.
    4. -
    5. Launch the game and enjoy. You will be able to play Aether Gazer Global APK on your iOS device without any problems.
    6. -
    -

    Features of Aether Gazer Global APK

    -

    Aether Gazer Global APK is a game that offers many features that will make you addicted to it. Here are some of the features that you can expect from this game:

    -
      -
    • Action-packed combat requiring fast-paced decision making: In this game, you will have to fight against various enemies and bosses using different skills and weapons. You will also have to switch between different characters and skills during battle, creating powerful combos and effects. The game requires quick reflexes and strategic thinking, as well as a good sense of timing and coordination.
    • -
    • Explore a dystopia filled with lore and loot: In this game, you will be able to explore a futuristic world that has been ravaged by Visbanes, computer viruses that have corrupted everything in their path. You will be able to discover hidden secrets, collect valuable items, and uncover the truth behind the Visbanes and their origin. The game also features a rich story that will keep you immersed in the game world.
    • -
    • Customize your character's skills and seamlessly change your fighting style: In this game, you will be able to customize your character's skills according to your preference and playstyle. You will be able to choose from four different skill types: Attack, Defense, Support, and Special. Each skill type has its own advantages and disadvantages, as well as unique effects and animations. You will also be able to switch between different skill types during battle, creating diverse and dynamic combat scenarios.
    • -
    • Mix and match your squad for exciting chained combos and stunning performances: In this game, you will be able to form your own squad of three characters from a roster of over 20 characters, each with their own personality, background, and voiceover. You will also be able to mix and match different characters and skills to create chained combos that will deal massive damage and trigger stunning performances that will dazzle your enemies and allies alike.
    • -
    • Premium quality character design with advanced NPR rendering technology: In this game, you will be able to enjoy premium quality character design that is enhanced by advanced NPR (non-photorealistic) rendering technology. This technology allows the game to create realistic lighting, shadows, textures, and effects that make the characters look more lifelike and expressive. The game also features high-quality voiceover for every character, as well as immersive sound effects and music.
    • -
    • Immersive soundtrack and unique voiceover for every character: In this game, you will be able to listen to an immersive soundtrack that matches the mood and atmosphere of the game. You will also be able to hear the unique voiceover of every character, which adds more personality and emotion to the game. The game features voice actors and actresses from different countries and regions, such as Japan, Korea, China, Taiwan, and more.
    • -
    -

    Pros and Cons of Aether Gazer Global APK

    -

    Aether Gazer Global APK is a game that has many pros and cons that you should consider before downloading it. Here are some of the pros and cons of this game:

Pros:
• High-quality graphics and sound effects: The game features stunning visuals and sound effects that create a realistic and immersive gaming experience.
• Engaging story and gameplay: The game features an engaging story that will keep you hooked and a gameplay that will challenge you and make you feel satisfied.
• Free to play and download: The game is free to play and download, which means you can enjoy it without spending any money.

Cons:
• Requires a stable internet connection: The game requires a stable internet connection to play, which may not be available for some users or in some areas.
• May consume a lot of battery and storage space: The game may consume a lot of battery and storage space on your device, which may affect your device's performance and longevity.
    -

    Conclusion and FAQs

    -

    Aether Gazer Global APK is a game that you should download if you are looking for a thrilling and immersive action game that will take you to a futuristic world where humanity is on the brink of extinction. You will be able to enjoy high-quality graphics, sound, and voiceover, as well as an engaging story and gameplay. You will also be able to customize your character's skills and squad, as well as explore a dystopia filled with lore and loot. The game is free to play and download, but it requires a stable internet connection and may consume a lot of battery and storage space. If you are ready to join the fight against the Visbanes, then download Aether Gazer Global APK today!

    -

    FAQs

    -

    Here are some frequently asked questions that you may have about Aether Gazer Global APK:

    -
      -
    1. What is the difference between Aether Gazer Global APK and Aether Gazer APK?
    2. -

      Aether Gazer Global APK is the global version of Aether Gazer APK, which means it is available for users from different countries and regions. Aether Gazer APK is the original version of the game, which was only available for users from China. Aether Gazer Global APK has the same features and content as Aether Gazer APK, but it also has some additional features such as multi-language support, global server, and more.

      -
    3. Is Aether Gazer Global APK safe to download and install?
    4. -

      Aether Gazer Global APK is safe to download and install if you get it from the official website or a trusted APK store. However, you should be careful when downloading any APK file from unknown sources, as they may contain malware or viruses that can harm your device or steal your personal information. You should also check the permissions that the app requests before installing it, and only grant the ones that are necessary for the app to function properly.

      -
    5. How can I update Aether Gazer Global APK?
    6. -

      You can update Aether Gazer Global APK by going to the official website or the APK store where you downloaded it from, and downloading the latest version of the app. You can also check for updates within the app by going to Settings > About > Check for Updates. You should always update your app to enjoy the latest features and bug fixes.

      -
    7. How can I contact the developer of Aether Gazer Global APK?
    8. -

      You can contact the developer of Aether Gazer Global APK by going to Settings > Feedback > Contact Us within the app. You can also visit their official website ([5](https://aethergazer.com/)) or their social media accounts on Facebook ([7](https://www.facebook.com/Aethergazerglobal/)), Twitter ([8](https://twitter.com/Aethergazerglb)), Instagram ([9](https://www.instagram.com/aethergazerglobal/)), or YouTube ([10](https://www.youtube.com/channel/UC6g0yQYw1kZl7aXx4yJ0fzg)) for more information and updates.

      -

      401be4b1e0
      -
      -
      \ No newline at end of file diff --git a/spaces/fatiXbelha/sd/Download Gangstar 2 Kings of L.A. Mod Now and Join the Most Epic Gang War in History.md b/spaces/fatiXbelha/sd/Download Gangstar 2 Kings of L.A. Mod Now and Join the Most Epic Gang War in History.md deleted file mode 100644 index 088947fe3969eb3cf74ef5fc1de29cb720c5fd22..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Download Gangstar 2 Kings of L.A. Mod Now and Join the Most Epic Gang War in History.md +++ /dev/null @@ -1,128 +0,0 @@ - -

      Download Game Gangstar 2 Kings of LA Mod: A Guide for Gamers

      -

      If you are looking for a fun and exciting game that lets you experience the life of a gangster in Los Angeles, then you should check out Gangstar 2: Kings of LA. This game is a top-down open-world action-adventure game that was released in 2008 by Gameloft. It is the second installment in the Gangstar series and it is available for various platforms, including Android, BlackBerry, Nintendo DSi, and button-operated mobile phones.

      -

      But what if you want to enjoy the game with more features, options, and freedom? Well, then you should download the mod version of Gangstar 2: Kings of LA. This version is a modified version of the original game that adds some enhancements and improvements to make the game more enjoyable and challenging. In this article, we will tell you everything you need to know about downloading and playing Gangstar 2: Kings of LA mod. So, let's get started!

      -

      download game gangstar 2 kings of la mod


      Download File 🔗 https://urllie.com/2uNHvY



      -

      What is Gangstar 2: Kings of LA?

      -

      A brief overview of the game

      -

      Gangstar 2: Kings of LA is a game that puts you in the shoes of a young gangster who moves to Los Angeles to make a name for himself in the criminal underworld. You can choose from four different characters, each with their own backstory and personality. You can also customize your character's appearance, clothes, and accessories.

      -

      The game has a large and diverse map that covers various areas of Los Angeles, such as Hollywood, Beverly Hills, Downtown, South Central, and Santa Monica. You can explore the city by foot or by using various vehicles, such as cars, motorcycles, boats, helicopters, and planes. You can also interact with different people, such as pedestrians, cops, gang members, shopkeepers, and hookers.

      -

      The main features of the game

      -

      Gangstar 2: Kings of LA has many features that make it an entertaining and immersive game. Some of these features are:

      -
        -
      • 75 missions that combine the best of action and driving games. You can take part in various activities, such as shootouts, car chases, races, robberies, assassinations, drug deals, and more.
      • -
      • A dynamic and realistic environment that changes according to the time of day, weather conditions, traffic patterns, and events.
      • -
      • A variety of weapons and items that you can use or buy, such as pistols, shotguns, rifles, grenades, molotov cocktails, health kits, armor vests, cell phones, radios, and more.
      • -
      • A reputation system that affects how other people react to you. You can increase your reputation by completing missions, killing enemies, or doing stunts. You can also decrease your reputation by killing civilians, destroying property, or getting arrested.
      • -
      • A money system that allows you to earn or spend money. You can earn money by completing missions, robbing people or places, selling drugs, or gambling. You can spend money by buying weapons, items, clothes, vehicles, or properties.
      • -
      -

      The pros and cons of the game

      -

      Gangstar 2: Kings of LA is a game that has many positive aspects but also some negative ones. Here are some of the pros and cons of the game:

Pros:
• A fun and addictive game that offers a lot of freedom and action.
• A large and varied map that allows you to explore different areas of Los Angeles.
• A lot of options and customization that let you create your own style and personality.
• A challenging and rewarding game that tests your skills and strategy.

Cons:
• A somewhat outdated game that has some bugs and glitches.
• A repetitive and sometimes boring map that lacks diversity and detail.
• A lack of depth and realism that make the game feel shallow and unrealistic.
• A frustrating and unfair game that can be too hard or too easy depending on the situation.
      -

      Why download the mod version of Gangstar 2: Kings of LA?

      -

      The benefits of the mod version

      -

      If you are not satisfied with the original version of Gangstar 2: Kings of LA, or if you want to experience the game in a new and different way, then you should download the mod version of the game. The mod version is a modified version of the game that adds some enhancements and improvements to make the game more enjoyable and challenging. Some of the benefits of the mod version are:

      -
        -
      • More features and options that give you more control and flexibility over the game. For example, you can change the difficulty level, the speed of the game, the amount of money you start with, the number of cops or gangs in the city, and more.
      • -
      • More weapons and items that give you more variety and power in the game. For example, you can use new weapons such as rocket launchers, flamethrowers, sniper rifles, or tasers. You can also use new items such as jetpacks, parachutes, or invisibility cloaks.
      • -
      • More vehicles and customization that give you more mobility and style in the game. For example, you can drive new vehicles such as tanks, limousines, buses, or bicycles. You can also customize your vehicles with new colors, decals, or accessories.
      • -
      • More missions and activities that give you more fun and excitement in the game. For example, you can take part in new missions such as bank heists, prison breaks, or zombie invasions. You can also take part in new activities such as surfing, skateboarding, or golfing.
      • -
      -

      The drawbacks of the mod version

      -

      However, downloading the mod version of Gangstar 2: Kings of LA is not without its drawbacks. The mod version is not an official version of the game and it is not supported by Gameloft. Therefore, some of the drawbacks of the mod version are:

      -

      download gangstar 2 kings of la mod apk
      -download gangstar 2 kings of la java game
      -download gangstar 2 kings of la mod unlimited money
      -download gangstar 2 kings of la for android
      -download gangstar 2 kings of la mod jar
      -download gangstar 2 kings of la mobile game
      -download gangstar 2 kings of la mod gameloft
      -download gangstar 2 kings of la for pc
      -download gangstar 2 kings of la mod apk offline
      -download gangstar 2 kings of la java game dedomil[^1^]
      -download gangstar 2 kings of la mod apk data
      -download gangstar 2 kings of la java game phoneky
      -download gangstar 2 kings of la mod unlimited everything
      -download gangstar 2 kings of la for ios
      -download gangstar 2 kings of la mod jar 240x320
      -download gangstar 2 kings of la mobile game free
      -download gangstar 2 kings of la mod gameloft android
      -download gangstar 2 kings of la for windows
      -download gangstar 2 kings of la mod apk latest version
      -download gangstar 2 kings of la java game waptrick
      -download gangstar 2 kings of la mod apk revdl
      -download gangstar 2 kings of la java game free
      -download gangstar 2 kings of la mod no root
      -download gangstar 2 kings of la for nokia
      -download gangstar 2 kings of la mod jar touchscreen
      -download gangstar 2 kings of la mobile game jar
      -download gangstar 2 kings of la mod gameloft java
      -download gangstar 2 kings of la for samsung
      -download gangstar 2 kings of la mod apk android 1
      -download gangstar 2 kings of la java game hack

      -
        -
      • More risks and problems that can affect your device or your game. For example, you may encounter viruses, malware, or spyware that can harm your device or steal your data. You may also encounter errors, crashes, or compatibility issues that can ruin your game or make it unplayable.
      • -
      • More legal and ethical issues that can get you in trouble or cause you guilt. For example, you may violate the terms and conditions of Gameloft or Google Play by downloading an unauthorized version of the game. You may also disrespect the work and effort of Gameloft or other modders by using their content without permission or credit.
      • -
      • Less satisfaction and enjoyment that can make you lose interest or appreciation for the game. For example, you may feel bored or cheated by using a modded version of the game that makes it too easy or too hard for you. You may also feel less proud or accomplished by completing a modded version of the game that does not reflect your true skills or achievements.
      • -
      -

      How to download and install the mod version

      -

      If you still want to download and install the mod version of Gangstar 2: Kings of LA, then you should follow these steps:

      -
        -
      1. Find a reliable and trustworthy source for downloading the mod version. You can search online for websites or forums that offer modded versions of Gangstar 2: Kings of LA. You can also check reviews or ratings from other users to see if they are satisfied with the mod version they downloaded.
      2. -
      3. Download the mod version file to your device. The file may be in APK format (for Android devices) or ZIP format (for other devices). Make sure you have enough space on your device to store the file.
      4. -
      5. Install the mod version file on your device. If the file is in APK format, you may need to enable unknown sources on your device settings to allow installation from third-party sources. If the file is in ZIP format, you may need to extract it first using a file manager app or a computer.
      6. -
      7. Launch the mod version of Gangstar 2: Kings of LA on your device. You may need to grant some permissions or accept some terms and conditions to start the game. You may also need to create or log in to your account to access the game features.
      8. -
      -

      Congratulations! You have successfully downloaded and installed the mod version of Gangstar 2: Kings of LA. Now you can enjoy the game with more features, options, and freedom. But remember, you are doing this at your own risk and responsibility. So, be careful and have fun!

      -

      Tips and tricks for playing Gangstar 2: Kings of LA mod

      -

      How to complete missions and earn money

      -

      One of the main goals of Gangstar 2: Kings of LA is to complete missions and earn money. Missions are tasks that you can accept from different characters or locations in the game. They usually involve some action or driving challenges that you have to complete within a time limit or without getting killed or arrested. Completing missions will reward you with money, reputation, weapons, items, or vehicles.

      -

      Some tips and tricks for completing missions and earning money are:

      -
        -
      • Choose the missions that suit your skills and preferences. Some missions are easier or harder than others, depending on your level, equipment, or location. Some missions are also more fun or boring than others, depending on your taste. You can always decline or quit a mission if you don't like it or if you find it too difficult.
      • -
      • Use the map and the GPS to find your way around the city. The map will show you the locations of your missions, objectives, enemies, allies, shops, properties, and more. The GPS will guide you to your destination with a blue line and voice instructions. You can also zoom in or out of the map or change the view mode to see more details.
      • -
      • Use the best weapon and vehicle for each mission. Different weapons and vehicles have different advantages and disadvantages in different situations. For example, a shotgun is good for close-range combat but bad for long-range shooting. A car is good for speed but bad for maneuverability. You can switch weapons or vehicles during a mission by using the inventory menu or by finding them on the street.
      • -
      • Save your game before and after each mission. Saving your game will allow you to resume your progress if you fail or quit a mission. It will also allow you to load your game if you want to replay a mission or try a different strategy. You can save your game by using the pause menu or by visiting your safe house.
      • -
      -

      How to customize your character and vehicle

      -

      Another goal of Gangstar 2: Kings of LA is to customize your character and vehicle. Customizing your character and vehicle will allow you to express your personality and style in the game. It will also affect how other people react to you in the game. For example, wearing a suit will make you look more professional but wearing a clown costume will make you look more ridiculous.

      -

      Some tips and tricks for customizing your character and vehicle are:

      -
        -
      • Visit the shops and properties in the game to buy or change your clothes, accessories, weapons, items, vehicles, or properties. You can find different types of shops and properties in different areas of the city. For example, you can find clothing stores in Hollywood, weapon stores in Downtown, vehicle stores in Santa Monica, and property stores in Beverly Hills.
      • -
      • Use the money you earn from missions or other activities to buy or upgrade your clothes, accessories, weapons, items, vehicles, or properties. You can also sell or trade your unwanted clothes, accessories, weapons, items, vehicles, or properties for money or other things.
      • -
      • Use the customization menu to change the appearance of your clothes, accessories, weapons, items, vehicles or properties. You can change the color, pattern, logo, or shape of your clothes, accessories, weapons, items, vehicles, or properties. You can also add or remove some features or parts of your clothes, accessories, weapons, items, vehicles, or properties.
      • -
      • Use the preview option to see how your clothes, accessories, weapons, items, vehicles, or properties look like before you buy or change them. You can also use the camera option to take a picture of your clothes, accessories, weapons, items, vehicles, or properties and share it with your friends or other players.
      • -
      -

      How to avoid cops and gangs

      -

      A final goal of Gangstar 2: Kings of LA is to avoid cops and gangs. Cops and gangs are the enemies that you will encounter in the game. They will try to stop you from completing your missions or enjoying your freedom. They will also try to kill you or arrest you if they catch you doing something illegal or suspicious.

      -

      Some tips and tricks for avoiding cops and gangs are:

      -
        -
      • Use the radar and the alert system to see the location and the status of cops and gangs. The radar will show you the direction and the distance of cops and gangs. The alert system will show you the level of attention or hostility that cops and gangs have towards you. The higher the level, the more likely they are to chase you or attack you.
      • -
      • Use stealth and disguise to avoid being detected or recognized by cops and gangs. You can use stealth by hiding behind objects, crouching behind walls, or staying out of sight. You can use disguise by changing your clothes, accessories, weapons, items, vehicles, or properties to blend in with the environment or the crowd.
      • -
      • Use speed and maneuverability to escape from cops and gangs. You can use speed by running fast, driving fast, or flying fast. You can use maneuverability by jumping over obstacles, dodging bullets, or turning corners. You can also use shortcuts, alleys, bridges, tunnels, or rooftops to lose them.
      • -
      • Use violence and intimidation to fight back against cops and gangs. You can use violence by shooting them, punching them, kicking them, or throwing them. You can use intimidation by threatening them, taunting them, or bribing them. You can also use explosives, traps, or distractions to cause chaos or confusion among them.
      • -
      -

      Conclusion

      -

      A summary of the main points

      -

      In conclusion, Gangstar 2: Kings of LA is a game that allows you to live the life of a gangster in Los Angeles. You can complete missions, earn money, customize your character and vehicle, and avoid cops and gangs. You can also download the mod version of Gangstar 2: Kings of LA to enjoy the game with more features, options, and freedom. However, you should be aware of the risks and problems that come with downloading and installing the mod version of Gangstar 2: Kings of LA. You should also follow some tips and tricks to make the most out of your gaming experience. We hope this article has helped you learn more about Gangstar 2: Kings of LA and its mod version. Now, go ahead and download the game and have fun!

      -

      A call to action for the readers

      -

      If you liked this article, please share it with your friends or leave a comment below. We would love to hear your feedback and opinions about Gangstar 2: Kings of LA and its mod version. You can also subscribe to our newsletter or follow us on social media to get more articles like this one. Thank you for reading and happy gaming!

      -

      FAQs

      -

      What are the system requirements for Gangstar 2: Kings of LA?

      -

      Gangstar 2: Kings of LA is a game that can run on various platforms, including Android, BlackBerry, Nintendo DSi, and button-operated mobile phones. However, the system requirements may vary depending on the platform and the version of the game. Generally, you will need at least 50 MB of free storage space, 256 MB of RAM, and a 1 GHz processor to play the game smoothly.

      -

      Is Gangstar 2: Kings of LA based on a true story?

      -

      No, Gangstar 2: Kings of LA is not based on a true story. It is a fictional game that is inspired by various movies, TV shows, books, and games that feature gangsters and crime in Los Angeles. Some of these sources are Grand Theft Auto: San Andreas, Scarface, The Godfather, The Sopranos, and L.A. Confidential.

      -

      How long does it take to finish Gangstar 2: Kings of LA?

      -

      The length of Gangstar 2: Kings of LA depends on how you play the game and what you do in the game. If you focus on completing the main missions only, it may take you around 10 hours to finish the game. If you also do some side missions and activities, it may take you around 15 hours to finish the game. If you explore every corner of the map and try everything in the game, it may take you around 20 hours or more to finish the game.

      -

      Can I play Gangstar 2: Kings of LA online or offline?

      -

      You can play Gangstar 2: Kings of LA both online and offline. If you play online, you can access some features that require an internet connection, such as multiplayer mode, leaderboards, achievements, or updates. If you play offline, you can still enjoy most of the features that do not require an internet connection, such as single-player mode, customization, or saving.

      -

      Can I play Gangstar 2: Kings of LA with a controller or a keyboard?

      -

      Yes, you can play Gangstar 2: Kings of LA with a controller or a keyboard. If you play on an Android device, you can use a Bluetooth controller or a USB controller to control your character and vehicle in the game. If you play on a computer or a laptop, you can use a keyboard or a mouse to control your character and vehicle in the game.

      -
      -
      \ No newline at end of file diff --git a/spaces/fatiXbelha/sd/Enjoy My Perfect Hotel with Mod APK - Free Money and No Ads - HappyMod.md b/spaces/fatiXbelha/sd/Enjoy My Perfect Hotel with Mod APK - Free Money and No Ads - HappyMod.md deleted file mode 100644 index e27fad912077b410901296585a56fa0de52a3ea5..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Enjoy My Perfect Hotel with Mod APK - Free Money and No Ads - HappyMod.md +++ /dev/null @@ -1,90 +0,0 @@ - -

      My Perfect Hotel Mod APK Happymod: A Fun and Relaxing Game for Hotel Lovers

      -

      Do you love hotels? Do you dream of running your own hotel empire? Do you enjoy decorating, managing, and solving puzzles? If you answered yes to any of these questions, then you will love My Perfect Hotel Mod APK Happymod. This is a fun and relaxing game that lets you create your own perfect hotel and enjoy the stories of your guests. In this article, we will tell you what this game is, why you should play it, what features it has, and how to download and install it on your device.

      -

      Introduction

      -

      What is My Perfect Hotel Mod APK Happymod?

      -

      My Perfect Hotel Mod APK Happymod is a modified version of the original game My Perfect Hotel, developed by Funfarm. This game is a combination of hotel simulation, decoration, and puzzle genres. You can build and design your own hotel, choose from various themes and styles, and make your guests happy. You can also play match-3 puzzles to earn money and unlock new items and levels. You can also interact with different characters and discover their stories and secrets.

      -

      my perfect hotel mod apk happymod


      DOWNLOADhttps://urllie.com/2uNxsD



      -

      Why should you play My Perfect Hotel Mod APK Happymod?

      -

      There are many reasons why you should play My Perfect Hotel Mod APK Happymod. Here are some of them:

      -
        -
      • It is a fun and relaxing game that can help you relieve stress and boredom.
      • -
      • It is a creative game that can stimulate your imagination and artistic skills.
      • -
      • It is a challenging game that can test your logic and strategy skills.
      • -
      • It is a social game that can connect you with other players and friends.
      • -
      -

      Features of My Perfect Hotel Mod APK Happymod

      -

      Unlimited money and free purchases

      -

      One of the best features of My Perfect Hotel Mod APK Happymod is that it gives you unlimited money and free purchases. This means that you can buy anything you want in the game without worrying about the cost. You can also upgrade your hotel faster and easier. You can enjoy the game without any limitations or ads.

      -

      Various hotel themes and decorations

      -

      Another great feature of My Perfect Hotel Mod APK Happymod is that it offers you various hotel themes and decorations. You can choose from different styles such as modern, classic, tropical, oriental, etc. You can also customize your hotel with hundreds of furniture, accessories, plants, paintings, etc. You can create your own unique hotel that reflects your personality and taste.

      -

      My Perfect Hotel unlimited money mod apk
      -Download My Perfect Hotel free purchase mod
      -How to install My Perfect Hotel mod apk from HappyMod
      -My Perfect Hotel mod apk latest version 1.0.20
      -My Perfect Hotel hack mod apk for Android
      -My Perfect Hotel mod apk offline gameplay
      -My Perfect Hotel mod apk features and reviews
      -My Perfect Hotel mod apk no root required
      -My Perfect Hotel mod apk unlimited coins and gems
      -My Perfect Hotel mod apk free download for PC
      -My Perfect Hotel mod apk unlimited stars and levels
      -My Perfect Hotel mod apk online multiplayer mode
      -My Perfect Hotel mod apk cheats and tips
      -My Perfect Hotel mod apk funfarm developer
      -My Perfect Hotel mod apk hotel master game
      -My Perfect Hotel mod apk best simulation game
      -My Perfect Hotel mod apk happymod alternative
      -My Perfect Hotel mod apk safe and secure download
      -My Perfect Hotel mod apk compatible with all devices
      -My Perfect Hotel mod apk easy and fast installation
      -My Perfect Hotel mod apk unlimited customization options
      -My Perfect Hotel mod apk realistic graphics and sound effects
      -My Perfect Hotel mod apk addictive and challenging gameplay
      -My Perfect Hotel mod apk create your own hotel empire
      -My Perfect Hotel mod apk manage staff and guests
      -My Perfect Hotel mod apk upgrade and decorate your hotel rooms
      -My Perfect Hotel mod apk earn money and rewards
      -My Perfect Hotel mod apk unlock new hotels and locations
      -My Perfect Hotel mod apk complete missions and achievements
      -My Perfect Hotel mod apk enjoy different hotel themes and styles
      -My Perfect Hotel mod apk share your hotel with friends online
      -My Perfect Hotel mod apk join the hotel master community
      -My Perfect Hotel mod apk support and feedback from developers
      -My Perfect Hotel mod apk regular updates and bug fixes
      -My Perfect Hotel mod apk no ads or in-app purchases

      -

      Challenging levels and tasks

      -

      A third feature of My Perfect Hotel Mod APK Happymod is that it provides you with challenging levels and tasks. You can play hundreds of match-3 puzzles that have different goals and obstacles. You can also complete daily missions and achievements that reward you with coins, gems, stars, etc. You can use these resources to unlock new items and levels. You can also compete with other players on the leaderboard and see who has the best hotel.

      -

      Cute characters and stories

      -

      A fourth feature of My Perfect Hotel Mod APK Happymod is that it introduces you to cute characters and stories. You can meet different guests who have their own backgrounds, personalities, preferences, and problems. You can help them with their requests, listen to their stories, and become their friends. You can also interact with your staff and partners who help you run your hotel. You can enjoy the funny and heartwarming dialogues and scenes that make the game more lively and realistic.

      -

      How to download and install My Perfect Hotel Mod APK Happymod

      -

      Step 1: Download the APK file from a trusted source

      -

      The first step to download and install My Perfect Hotel Mod APK Happymod is to find a reliable source that offers the APK file. You can search online for websites that provide modded games and apps. You can also check the reviews and ratings from users who have downloaded the file before. Make sure that the file is safe and virus-free. You can also scan the file with antivirus software before opening it.

      -

      Step 2: Enable unknown sources on your device

      -

      The second step to download and install My Perfect Hotel Mod APK Happymod is to enable unknown sources on your device. This is a security setting that allows you to install apps that are not from the official Google Play Store. To do this, you need to go to your device settings, then security, then unknown sources. You need to toggle the switch to allow unknown sources. You may also see a pop-up message that warns you about the risks of installing unknown apps. You need to confirm that you trust the source and proceed with the installation.

      -

      Step 3: Install the APK file and enjoy the game

      -

      The third and final step to download and install My Perfect Hotel Mod APK Happymod is to install the APK file and enjoy the game. To do this, you need to locate the downloaded file on your device storage, then tap on it to open it. You will see a screen that shows you the permissions and features of the app. You need to click on the install button and wait for the installation process to finish. Once it is done, you can launch the game and start playing.

      -

      Conclusion

      -

      Summary of the main points

      -

      In conclusion, My Perfect Hotel Mod APK Happymod is a fun and relaxing game that lets you create your own perfect hotel and enjoy the stories of your guests. It has many features such as unlimited money and free purchases, various hotel themes and decorations, challenging levels and tasks, and cute characters and stories. It is easy to download and install on your device by following these steps:

      -
        -
      1. Download the APK file from a trusted source.
      2. -
      3. Enable unknown sources on your device.
      4. -
      5. Install the APK file and enjoy the game.
      6. -
      -

      Call to action

      -

      If you are looking for a game that can help you relax, have fun, and unleash your creativity, then you should try My Perfect Hotel Mod APK Happymod. This game will make you feel like a hotel tycoon who can design, manage, and solve puzzles. You will also love the colorful graphics, smooth gameplay, and engaging storylines. So what are you waiting for? Download My Perfect Hotel Mod APK Happymod now and start building your dream hotel!

      -

      FAQs

      -
        -
      • Q: Is My Perfect Hotel Mod APK Happymod safe to use?
      • -
      • A: Yes, My Perfect Hotel Mod APK Happymod is safe to use as long as you download it from a reputable source that offers virus-free files. You can also scan the file with antivirus software before installing it.
      • -
      • Q: Do I need to root my device to use My Perfect Hotel Mod APK Happymod?
      • -
      • A: No, you do not need to root your device to use My Perfect Hotel Mod APK Happymod. This modded version works on both rooted and non-rooted devices.
      • -
      • Q: Can I play My Perfect Hotel Mod APK Happymod offline?
      • -
      • A: Yes, you can play My Perfect Hotel Mod APK Happymod offline without an internet connection. However, some features such as social interactions, leaderboards, and updates may require an internet connection.
      • -
      • Q: How can I update My Perfect Hotel Mod APK Happymod?
      • -
      • A: To update My Perfect Hotel Mod APK Happymod, you need to download the latest version of the APK file from the same source you used before. Then, uninstall the old version of the game and install the new one.
      • -
      • Q: How can I contact the developer of My Perfect Hotel Mod APK Happymod?
      • -
      • A: To contact the developer of My Perfect Hotel Mod APK Happymod, you can visit their official website or their social media pages. You can also send them an email or leave a comment on their app page.
        -
        -
        \ No newline at end of file diff --git a/spaces/fb700/chatglm-fitness-RLHF/src/face3d/models/arcface_torch/backbones/iresnet.py b/spaces/fb700/chatglm-fitness-RLHF/src/face3d/models/arcface_torch/backbones/iresnet.py deleted file mode 100644 index c6d3b9c240c24687d432197f976ee01fbf423216..0000000000000000000000000000000000000000 --- a/spaces/fb700/chatglm-fitness-RLHF/src/face3d/models/arcface_torch/backbones/iresnet.py +++ /dev/null @@ -1,187 +0,0 @@ -import torch -from torch import nn - -__all__ = ['iresnet18', 'iresnet34', 'iresnet50', 'iresnet100', 'iresnet200'] - - -def conv3x3(in_planes, out_planes, stride=1, groups=1, dilation=1): - """3x3 convolution with padding""" - return nn.Conv2d(in_planes, - out_planes, - kernel_size=3, - stride=stride, - padding=dilation, - groups=groups, - bias=False, - dilation=dilation) - - -def conv1x1(in_planes, out_planes, stride=1): - """1x1 convolution""" - return nn.Conv2d(in_planes, - out_planes, - kernel_size=1, - stride=stride, - bias=False) - - -class IBasicBlock(nn.Module): - expansion = 1 - def __init__(self, inplanes, planes, stride=1, downsample=None, - groups=1, base_width=64, dilation=1): - super(IBasicBlock, self).__init__() - if groups != 1 or base_width != 64: - raise ValueError('BasicBlock only supports groups=1 and base_width=64') - if dilation > 1: - raise NotImplementedError("Dilation > 1 not supported in BasicBlock") - self.bn1 = nn.BatchNorm2d(inplanes, eps=1e-05,) - self.conv1 = conv3x3(inplanes, planes) - self.bn2 = nn.BatchNorm2d(planes, eps=1e-05,) - self.prelu = nn.PReLU(planes) - self.conv2 = conv3x3(planes, planes, stride) - self.bn3 = nn.BatchNorm2d(planes, eps=1e-05,) - self.downsample = downsample - self.stride = stride - - def forward(self, x): - identity = x - out = self.bn1(x) - out = self.conv1(out) - out = self.bn2(out) - out = self.prelu(out) - out = self.conv2(out) - out = self.bn3(out) - if self.downsample is not None: - identity = self.downsample(x) - out += identity - return out - - -class IResNet(nn.Module): - fc_scale = 7 * 7 - def __init__(self, - block, layers, dropout=0, num_features=512, zero_init_residual=False, - groups=1, width_per_group=64, replace_stride_with_dilation=None, fp16=False): - super(IResNet, self).__init__() - self.fp16 = fp16 - self.inplanes = 64 - self.dilation = 1 - if replace_stride_with_dilation is None: - replace_stride_with_dilation = [False, False, False] - if len(replace_stride_with_dilation) != 3: - raise ValueError("replace_stride_with_dilation should be None " - "or a 3-element tuple, got {}".format(replace_stride_with_dilation)) - self.groups = groups - self.base_width = width_per_group - self.conv1 = nn.Conv2d(3, self.inplanes, kernel_size=3, stride=1, padding=1, bias=False) - self.bn1 = nn.BatchNorm2d(self.inplanes, eps=1e-05) - self.prelu = nn.PReLU(self.inplanes) - self.layer1 = self._make_layer(block, 64, layers[0], stride=2) - self.layer2 = self._make_layer(block, - 128, - layers[1], - stride=2, - dilate=replace_stride_with_dilation[0]) - self.layer3 = self._make_layer(block, - 256, - layers[2], - stride=2, - dilate=replace_stride_with_dilation[1]) - self.layer4 = self._make_layer(block, - 512, - layers[3], - stride=2, - dilate=replace_stride_with_dilation[2]) - self.bn2 = nn.BatchNorm2d(512 * block.expansion, eps=1e-05,) - self.dropout = nn.Dropout(p=dropout, inplace=True) - self.fc = nn.Linear(512 * block.expansion * self.fc_scale, num_features) - self.features = nn.BatchNorm1d(num_features, eps=1e-05) - nn.init.constant_(self.features.weight, 1.0) - 
self.features.weight.requires_grad = False - - for m in self.modules(): - if isinstance(m, nn.Conv2d): - nn.init.normal_(m.weight, 0, 0.1) - elif isinstance(m, (nn.BatchNorm2d, nn.GroupNorm)): - nn.init.constant_(m.weight, 1) - nn.init.constant_(m.bias, 0) - - if zero_init_residual: - for m in self.modules(): - if isinstance(m, IBasicBlock): - nn.init.constant_(m.bn2.weight, 0) - - def _make_layer(self, block, planes, blocks, stride=1, dilate=False): - downsample = None - previous_dilation = self.dilation - if dilate: - self.dilation *= stride - stride = 1 - if stride != 1 or self.inplanes != planes * block.expansion: - downsample = nn.Sequential( - conv1x1(self.inplanes, planes * block.expansion, stride), - nn.BatchNorm2d(planes * block.expansion, eps=1e-05, ), - ) - layers = [] - layers.append( - block(self.inplanes, planes, stride, downsample, self.groups, - self.base_width, previous_dilation)) - self.inplanes = planes * block.expansion - for _ in range(1, blocks): - layers.append( - block(self.inplanes, - planes, - groups=self.groups, - base_width=self.base_width, - dilation=self.dilation)) - - return nn.Sequential(*layers) - - def forward(self, x): - with torch.cuda.amp.autocast(self.fp16): - x = self.conv1(x) - x = self.bn1(x) - x = self.prelu(x) - x = self.layer1(x) - x = self.layer2(x) - x = self.layer3(x) - x = self.layer4(x) - x = self.bn2(x) - x = torch.flatten(x, 1) - x = self.dropout(x) - x = self.fc(x.float() if self.fp16 else x) - x = self.features(x) - return x - - -def _iresnet(arch, block, layers, pretrained, progress, **kwargs): - model = IResNet(block, layers, **kwargs) - if pretrained: - raise ValueError() - return model - - -def iresnet18(pretrained=False, progress=True, **kwargs): - return _iresnet('iresnet18', IBasicBlock, [2, 2, 2, 2], pretrained, - progress, **kwargs) - - -def iresnet34(pretrained=False, progress=True, **kwargs): - return _iresnet('iresnet34', IBasicBlock, [3, 4, 6, 3], pretrained, - progress, **kwargs) - - -def iresnet50(pretrained=False, progress=True, **kwargs): - return _iresnet('iresnet50', IBasicBlock, [3, 4, 14, 3], pretrained, - progress, **kwargs) - - -def iresnet100(pretrained=False, progress=True, **kwargs): - return _iresnet('iresnet100', IBasicBlock, [3, 13, 30, 3], pretrained, - progress, **kwargs) - - -def iresnet200(pretrained=False, progress=True, **kwargs): - return _iresnet('iresnet200', IBasicBlock, [6, 26, 60, 6], pretrained, - progress, **kwargs) - diff --git a/spaces/fclong/summary/fengshen/examples/finetune_bart_qg/README.md b/spaces/fclong/summary/fengshen/examples/finetune_bart_qg/README.md deleted file mode 100644 index 33457b448b4356062ad4b1b00a22f00122c4fe83..0000000000000000000000000000000000000000 --- a/spaces/fclong/summary/fengshen/examples/finetune_bart_qg/README.md +++ /dev/null @@ -1,106 +0,0 @@ -## Randeng-BART-139M-QG-Chinese - - - -## 简介 Brief Introduction - -善于处理问题生成任务的中文版 BART-base 模型。 - -Good at solving question generation tasks Bart-base Model (Chinese version). 
- -## 模型分类 Model Taxonomy - -| 需求 Demand | 任务 Task | 系列 Series | 模型 Model | 参数 Parameter | 额外 Extra | -| :----: | :----: | :----: | :----: | :----: | :----: | -| 通用 General | 自然语言转换 NLT | 燃灯 Randeng | BART | 139M | 问题生成任务-中文 QuestionGeneration-Chinese | - - -## 模型信息 Model Information - -本模型基于[IDEA-CCNL/Randeng-BART-139M](https://huggingface.co/IDEA-CCNL/Randeng-BART-139M),我们在 [ChineseSQuAD](https://github.com/pluto-junzeng/ChineseSquad) 数据集上微调了问题生成任务版本。 - -Based on [IDEA-CCNL/Randeng-BART-139M](https://huggingface.co/IDEA-CCNL/Randeng-BART-139M), we fine-tuned a question generation version on [ChineseSQuAD](https://github.com/pluto-junzeng/ChineseSquad) datasets. - - -Table1: 模型结构和配置 Model Architecture and Config - -| 配置 Config | 参数 Value| -| ------------------- | --------- | -| encoder layers | 6 | -| encoder_attn_heads | 12 | -| encoder_ffn_dim | 3072 | -| decoder_layers | 6 | -| decoder_attn_heads | 12 | -| decoder_ffn_dim | 3072 | -| max_encoder_len | 512 | - - -ChineseSQuAD 数据集翻译了部分SQuAD数据集,包含约 67k 有答案的训练样本和 43k 无答案训练样本。我们做了 9:1 的训练-开发集合划分,并在公开的开发集上评测了效果。 - -The dataset is translated from SQuAD 2.0, with around 67k samples with answers and 43k samples without answers. We split the data to train-dev with ratio of 9:1 and test the performance on the public dev set. - -Table 2: 数据集样本量 -| | all | have ans | no ans | -|:------|:-------|:---------|:-------| -| train_split | 100097 | 60879 | 39128 | -| dev_split | 11089 | 6809 | 4280 | -| dev | 10836 | 6645 | 4191 | - - -## 使用 Usage - -### 环境安装 Install -``` -git clone https://github.com/IDEA-CCNL/Fengshenbang-LM.git -cd Fengshenbang-LM -git submodule init -git submodule update -# submodule是我们用来管理数据集的fs_datasets,通过ssh的方式拉取,如果用户没有在机器上配置ssh-key的话可能会拉取失败。 -# 如果拉取失败,需要到.gitmodules文件中把ssh地址改为https地址即可。 -pip install --editable . -``` - - -### 模型加载 Loading Models -```python -from transformers import AutoTokenizer, BartForConditionalGeneration -tokenizer = AutoTokenizer.from_pretrained("IDEA-CCNL/Randeng-BART-139M-QG-Chinese",additional_special_tokens=[""]) -model = BartForConditionalGeneration.from_pretrained("IDEA-CCNL/Randeng-BART-139M-QG-Chinese") - -context = "知识:1939年9月1日德国入侵波兰后,第二次世界大战开始,华沙一直被保卫到9月27日。波兰中部,包括华沙,都在德国纳粹殖民地政府总政府的统治下。所有的高等教育机构都立即关闭,华沙的犹太人口——几十万,约占城市的 ——全部涌入华沙的贫民区。回答:30%" -inputs = tokenizer.encode_plus( - context, - max_length=448, - padding="max_length", - truncation=True, - return_tensors='pt' - ) -out = model.generate( - input_ids=inputs['input_ids'], - attention_mask=inputs['attention_mask'], - do_sample=True, - num_beams=5, - max_length=64, - top_p = 0.9, - ) -print(pred = tokenizer.batch_decode(out,clean_up_tokenization_spaces=True, skip_special_tokens=True)[0]) -# 问题:华沙的犹太人口占城市的百分之多少? 
-``` - - - -### 训练 train -```python -bash finetune_bart.sh -``` - -- finetune_bart.py 定义了数据处理输入输出方式和finetune的核心代码 -- finetune_bart.sh 训练脚本,具体参数可在此修改 -- utils.py 定义了独立的工具代码,重实现的函数等 - - - -### 下游效果 Performance -| Dataset | Size | BLEU-4 | METEOR | ROUGE-L| -| ------------ | ----- | -------- |--------- | ---------- | -| ChineseSQuAD | 139M | 22.17 | 40.38 | 38.17 | diff --git a/spaces/fclong/summary/fengshen/examples/translate/prepare_dataset.py b/spaces/fclong/summary/fengshen/examples/translate/prepare_dataset.py deleted file mode 100644 index 5ce8cc74e05ab477a5863b99470c30c4073876c8..0000000000000000000000000000000000000000 --- a/spaces/fclong/summary/fengshen/examples/translate/prepare_dataset.py +++ /dev/null @@ -1,37 +0,0 @@ -#!/usr/bin/env python -# -*- coding: utf-8 -*- - -import sys -import json -import os - - -def main(file_path, src_lang, tgt_lang): - - file_list = ["train", "valid", "test"] - for filename in file_list: - sys.stderr.write("**** Start processing {} ... ****\n".format(filename)) - src_full_path = os.path.join(file_path, ".".join((filename, src_lang))) - tgt_full_path = os.path.join(file_path, ".".join((filename, tgt_lang))) - src_reader = open(src_full_path, 'r') - tgt_reader = open(tgt_full_path, "r") - - writer_full_path = os.path.join(file_path, ".".join((filename, src_lang + "_" + tgt_lang))) - writer = open(writer_full_path, "w") - # combine_dict = OrderedDict() - for row_src, row_tgt in zip(src_reader, tgt_reader): - combine_line = {} - combine_line["src"] = row_src.strip() - combine_line["tgt"] = row_tgt.strip() - json.dump(combine_line, writer, ensure_ascii=False) - writer.write('\n') - # print(row_src) - # print(row_tgt) - sys.stderr.write(f"**** Done change {filename} format **** \n") - - -if __name__ == "__main__": - file_path = sys.argv[1] - src_lang, tgt_lang = sys.argv[2].split("-") - - main(file_path, src_lang, tgt_lang) diff --git a/spaces/fclong/summary/fengshen/tokenizer/sentencepiece/shuffle_corpus.py b/spaces/fclong/summary/fengshen/tokenizer/sentencepiece/shuffle_corpus.py deleted file mode 100644 index 9b3bdf1fc55f3bdd78ca5d540f80d5b612188b68..0000000000000000000000000000000000000000 --- a/spaces/fclong/summary/fengshen/tokenizer/sentencepiece/shuffle_corpus.py +++ /dev/null @@ -1,18 +0,0 @@ -import sys -import os -from tqdm import tqdm -sys.path.append('../../') - -if __name__ == '__main__': - from data.fs_datasets import load_dataset - dataset = load_dataset('wudao_180g', num_proc=100) - print('dataset loaded', flush=True) - - shuffle_ds = dataset['train'].shuffle(seed=42, writer_batch_size=1000) - print('dataset shuffled', flush=True) - need_size = len(shuffle_ds) - - f = open('shuffle_corpus_{}.txt'.format(need_size), 'w', encoding='utf-8') - for i in tqdm(range(0, need_size)): - f.write(shuffle_ds[i]['text'] + os.linesep) - f.close() diff --git a/spaces/fengmuxi/ChatGpt-Web/app/locales/tr.ts b/spaces/fengmuxi/ChatGpt-Web/app/locales/tr.ts deleted file mode 100644 index 5d56cbae6c79612cc00355a045279ffc151f3a99..0000000000000000000000000000000000000000 --- a/spaces/fengmuxi/ChatGpt-Web/app/locales/tr.ts +++ /dev/null @@ -1,274 +0,0 @@ -import { SubmitKey } from "../store/config"; -import type { LocaleType } from "./index"; - -const tr: LocaleType = { - WIP: "Çalışma devam ediyor...", - Error: { - Unauthorized: - "Yetkisiz erişim, lütfen erişim kodunu ayarlar sayfasından giriniz.", - }, - ChatItem: { - ChatItemCount: (count: number) => `${count} mesaj`, - }, - Chat: { - SubTitle: (count: number) => `ChatGPT tarafından ${count} mesaj`, - 
Actions: { - ChatList: "Sohbet Listesine Git", - CompressedHistory: "Sıkıştırılmış Geçmiş Bellek Komutu", - Export: "Tüm Mesajları Markdown Olarak Dışa Aktar", - Copy: "Kopyala", - Stop: "Durdur", - Retry: "Tekrar Dene", - Delete: "Delete", - }, - Rename: "Sohbeti Yeniden Adlandır", - Typing: "Yazıyor…", - Input: (submitKey: string) => { - var inputHints = `Göndermek için ${submitKey}`; - if (submitKey === String(SubmitKey.Enter)) { - inputHints += ", kaydırmak için Shift + Enter"; - } - return inputHints + ", komutları aramak için / (eğik çizgi)"; - }, - Send: "Gönder", - Config: { - Reset: "Reset to Default", - SaveAs: "Save as Mask", - }, - }, - Export: { - Title: "Tüm Mesajlar", - Copy: "Tümünü Kopyala", - Download: "İndir", - MessageFromYou: "Sizin Mesajınız", - MessageFromChatGPT: "ChatGPT'nin Mesajı", - }, - Memory: { - Title: "Bellek Komutları", - EmptyContent: "Henüz değil.", - Send: "Belleği Gönder", - Copy: "Belleği Kopyala", - Reset: "Oturumu Sıfırla", - ResetConfirm: - "Sıfırlama, geçerli görüşme geçmişini ve geçmiş belleği siler. Sıfırlamak istediğinizden emin misiniz?", - }, - Home: { - NewChat: "Yeni Sohbet", - DeleteChat: "Seçili sohbeti silmeyi onaylıyor musunuz?", - DeleteToast: "Sohbet Silindi", - Revert: "Geri Al", - }, - User:{ - Title: "kullanıcı", - SubTitle: "Kullanıcı bilgileri arayüzü", - Login:"Oturum açma", - LoginTitle:"Kullanıcı oturum açar", - Register:"Kayıt", - RegisterTitle:"Yeni bir kullanıcı kaydetme", - Findpwd:"Şifreyi kurtar", - FindpwdTitle:"Hesap şifrenizi girin ve e-postanıza gönderilecektir", - Name:"Kullanıcı adı", - Wallet:"Kullanıcı Kredileri", - Mail:"Kullanıcı posta kutusu", - SigState:"Check-in durumu", - Ststus:"Oturumu Kapat", - Vip:"üye", - kami:"Ödeme kodu", - NickName:"Lakap", - User:"Hesap Numarası (Yalnızca Numaralar)", - Password:"Şifre (en az 6 haneli)", - Email:"Posta Kutusu", - Code:"Arjantin", - Pass:{ - Title:"修改密码", - OldPwd:"旧密码", - NewPwd:"新密码", - NewPwd1:"确认密码" - }, - Save:"保存" - }, - Settings: { - Title: "Ayarlar", - SubTitle: "Tüm Ayarlar", - Actions: { - ClearAll: "Tüm Verileri Temizle", - ResetAll: "Tüm Ayarları Sıfırla", - Close: "Kapat", - ConfirmResetAll: "Tüm ayarları sıfırlamak istediğinizden emin misiniz?", - ConfirmClearAll: "Tüm sohbeti sıfırlamak istediğinizden emin misiniz?", - }, - Lang: { - Name: "Language", // ATTENTION: if you wanna add a new translation, please do not translate this value, leave it as `Language` - All: "All Languages", - Options: { - cn: "简体中文", - en: "English", - tw: "繁體中文", - es: "Español", - it: "Italiano", - tr: "Türkçe", - jp: "日本語", - de: "Deutsch", - }, - }, - Avatar: "Avatar", - FontSize: { - Title: "Yazı Boyutu", - SubTitle: "Sohbet içeriğinin yazı boyutunu ayarlayın", - }, - Update: { - Version: (x: string) => `Sürüm: ${x}`, - IsLatest: "En son sürüm", - CheckUpdate: "Güncellemeyi Kontrol Et", - IsChecking: "Güncelleme kontrol ediliyor...", - FoundUpdate: (x: string) => `Yeni sürüm bulundu: ${x}`, - GoToUpdate: "Güncelle", - }, - SendKey: "Gönder Tuşu", - Theme: "Tema", - TightBorder: "Tam Ekran", - SendPreviewBubble: { - Title: "Mesaj Önizleme Balonu", - SubTitle: "Preview markdown in bubble", - }, - Mask: { - Title: "Mask Splash Screen", - SubTitle: "Show a mask splash screen before starting new chat", - }, - Prompt: { - Disable: { - Title: "Otomatik tamamlamayı devre dışı bırak", - SubTitle: "Otomatik tamamlamayı kullanmak için / (eğik çizgi) girin", - }, - List: "Komut Listesi", - ListCount: (builtin: number, custom: number) => - `${builtin} yerleşik, ${custom} kullanıcı 
tanımlı`, - Edit: "Düzenle", - Modal: { - Title: "Prompt List", - Add: "Add One", - Search: "Search Prompts", - }, - EditModal: { - Title: "Edit Prompt", - }, - }, - HistoryCount: { - Title: "Ekli Mesaj Sayısı", - SubTitle: "İstek başına ekli gönderilen mesaj sayısı", - }, - CompressThreshold: { - Title: "Geçmiş Sıkıştırma Eşiği", - SubTitle: - "Sıkıştırılmamış mesajların uzunluğu bu değeri aşarsa sıkıştırılır", - }, - Token: { - Title: "API Anahtarı", - SubTitle: "Erişim kodu sınırını yoksaymak için anahtarınızı kullanın", - Placeholder: "OpenAI API Anahtarı", - }, - Usage: { - Title: "Hesap Bakiyesi", - SubTitle(used: any, total: any) { - return `Bu ay kullanılan $${used}, abonelik $${total}`; - }, - IsChecking: "Kontrol ediliyor...", - Check: "Tekrar Kontrol Et", - NoAccess: "Bakiyeyi kontrol etmek için API anahtarını girin", - }, - AccessCode: { - Title: "Erişim Kodu", - SubTitle: "Erişim kontrolü etkinleştirme", - Placeholder: "Erişim Kodu Gerekiyor", - }, - Bot: "AI Satıcıları (bot)", - Model: "Model", - Temperature: { - Title: "Gerçeklik", - SubTitle: - "Daha büyük bir değer girildiğinde gerçeklik oranı düşer ve daha rastgele çıktılar üretir", - }, - MaxTokens: { - Title: "Maksimum Belirteç", - SubTitle: - "Girdi belirteçlerinin ve oluşturulan belirteçlerin maksimum uzunluğu", - }, - PresencePenalty: { - Title: "Varlık Cezası", - SubTitle: - "Daha büyük bir değer, yeni konular hakkında konuşma olasılığını artırır", - }, - }, - Store: { - DefaultTopic: "Yeni Konuşma", - BotHello: "Merhaba! Size bugün nasıl yardımcı olabilirim?", - Error: "Bir şeyler yanlış gitti. Lütfen daha sonra tekrar deneyiniz.", - Prompt: { - History: (content: string) => - "Bu, yapay zeka ile kullanıcı arasındaki sohbet geçmişinin bir özetidir: " + - content, - Topic: - "Lütfen herhangi bir giriş, noktalama işareti, tırnak işareti, nokta, sembol veya ek metin olmadan konuşmamızı özetleyen dört ila beş kelimelik bir başlık oluşturun. Çevreleyen tırnak işaretlerini kaldırın.", - Summarize: - "Gelecekteki bağlam için bir bilgi istemi olarak kullanmak üzere tartışmamızı en fazla 200 kelimeyle özetleyin.", - }, - }, - Copy: { - Success: "Panoya kopyalandı", - Failed: "Kopyalama başarısız oldu, lütfen panoya erişim izni verin", - }, - Context: { - Toast: (x: any) => `${x} bağlamsal bellek komutu`, - Edit: "Bağlamsal ve Bellek Komutları", - Add: "Yeni Ekle", - }, - Plugin: { - Name: "Plugin", - }, - Mask: { - Name: "Mask", - Page: { - Title: "Prompt Template", - SubTitle: (count: number) => `${count} prompt templates`, - Search: "Search Templates", - Create: "Create", - }, - Item: { - Info: (count: number) => `${count} prompts`, - Chat: "Chat", - View: "View", - Edit: "Edit", - Delete: "Delete", - DeleteConfirm: "Confirm to delete?", - }, - EditModal: { - Title: (readonly: boolean) => - `Edit Prompt Template ${readonly ? 
"(readonly)" : ""}`, - Download: "Download", - Clone: "Clone", - }, - Config: { - Avatar: "Bot Avatar", - Name: "Bot Name", - }, - }, - NewChat: { - Return: "Return", - Skip: "Skip", - Title: "Pick a Mask", - SubTitle: "Chat with the Soul behind the Mask", - More: "Find More", - NotShow: "Not Show Again", - ConfirmNoShow: "Confirm to disable?You can enable it in settings later.", - }, - - UI: { - Confirm: "Confirm", - Cancel: "Cancel", - Close: "Close", - Create: "Create", - Edit: "Edit", - }, -}; - -export default tr; diff --git a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Classic Solitaire - The Best Free Card Game with No Ads or In-App Purchases.md b/spaces/feregVcuzo/sanity-test-midi/checkpoint/Classic Solitaire - The Best Free Card Game with No Ads or In-App Purchases.md deleted file mode 100644 index 274a93dd77f39f063b7cd3ec44a18343d28ffd1f..0000000000000000000000000000000000000000 --- a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Classic Solitaire - The Best Free Card Game with No Ads or In-App Purchases.md +++ /dev/null @@ -1,114 +0,0 @@ -
        -

        Free Classic Solitaire Download No Ads: How to Enjoy the Timeless Card Game on Your Device

        -

        Solitaire is one of the most popular card games in the world. It is simple, relaxing, and addictive. You can play it anywhere, anytime, and on any device. But how can you find a free classic solitaire download without ads? In this article, we will show you how to download and play the best solitaire games for Windows, Android, and iOS devices. We will also share some tips and tricks to improve your solitaire skills and have more fun.

        -

        What is Solitaire and Why is it So Popular?

        -

        Solitaire is a card game that can be played by one person. The goal is to arrange all the cards in four piles, one for each suit, from ace to king. There are many variations of solitaire, but the most common one is called Klondike. In Klondike, you have seven columns of cards on the tableau, and you can move cards from one column to another if they are in descending order and alternating colors. You can also draw cards from the stock pile and place them on the foundations or the tableau.

        -

        free classic solitaire download no ads


        DOWNLOAD ✔✔✔ https://gohhs.com/2uPuzz



        -

        The History of Solitaire

        -

        The origin of solitaire is not clear, but some historians believe that it was invented in Germany or Scandinavia in the 18th century. It was originally called "patience" or "success" and was used as a way to pass time or practice mental skills. It became popular in France during the Napoleonic era, and then spread to other countries. In the late 19th century, solitaire was introduced to America by British settlers. In 1990, Microsoft included a version of solitaire in Windows 3.0, which made it accessible to millions of computer users around the world.

        -

        The Rules of Solitaire

        -

        The rules of solitaire are simple, but they can vary depending on the version you are playing. Here are the basic rules for Klondike solitaire, followed by a short code sketch that illustrates the placement rules:

        -
          -
        • You need a standard 52-card deck. Shuffle the cards and deal them face down into seven columns on the tableau. The first column has one card, the second has two cards, and so on. The top card of each column is face up.
        • -
        • The four empty spaces above the tableau are called foundations. This is where you need to build the four piles of cards, one for each suit, from ace to king.
        • -
        • You can move cards from one column to another if they are in descending order and alternating colors. For example, you can move a black six on top of a red seven. You can also move a group of cards as a unit if they are in sequence.
        • -
        • If a column becomes empty, you can fill it with a king or a group of cards starting with a king.
        • -
        • You can draw cards from the stock pile, one at a time or three at a time, depending on your preference. You can place them on the foundations or the tableau.
        • -
        • You win the game when you have moved all the cards to the foundations.
        • -
        -
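
        The placement rules above are easy to express in code. Below is a minimal illustrative sketch, not taken from any particular solitaire app; the (rank, suit) card representation and the helper names are assumptions made for this example, with ranks numbered 1 (ace) through 13 (king).

```python
# Minimal sketch of the Klondike placement rules described in the list above.
# Cards are modeled as (rank, suit) tuples; this representation and the helper
# names are illustrative assumptions, not part of any specific solitaire app.

RED_SUITS = {"hearts", "diamonds"}


def is_red(card):
    rank, suit = card
    return suit in RED_SUITS


def can_place_on_tableau(moving, target):
    # A card goes on a tableau card that is one rank higher and the opposite color.
    return moving[0] == target[0] - 1 and is_red(moving) != is_red(target)


def can_place_on_foundation(moving, foundation_pile):
    # Foundations build up by suit, starting from the ace (rank 1).
    if not foundation_pile:
        return moving[0] == 1
    top = foundation_pile[-1]
    return moving[1] == top[1] and moving[0] == top[0] + 1


if __name__ == "__main__":
    black_six = (6, "spades")
    red_seven = (7, "hearts")
    print(can_place_on_tableau(black_six, red_seven))             # True: black six on red seven
    print(can_place_on_foundation((1, "clubs"), []))              # True: an ace starts a foundation
    print(can_place_on_foundation((3, "clubs"), [(1, "clubs")]))  # False: the two of clubs must come first
```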

        The Benefits of Playing Solitaire

        -

        Playing solitaire is not only fun, but also beneficial for your brain and mood. Here are some of the benefits of playing solitaire:

        -
          -
        • It improves your concentration, memory, and problem-solving skills.
        • -
        • It reduces stress and anxiety by providing a distraction from negative thoughts and emotions.
        • -
        • It boosts your self-esteem and confidence by giving you a sense of achievement and satisfaction.
        • -
        • It enhances your creativity and imagination by allowing you to explore different strategies and outcomes.
        • -
        • It entertains you and keeps you from getting bored by offering a variety of challenges and levels.
        • -
        -

        How to Download Free Classic Solitaire Without Ads

        -

        If you want to enjoy solitaire without any interruptions or distractions, you need to find a free classic solitaire download without ads. There are many solitaire apps and websites available, but not all of them are ad-free. Some of them may have pop-ups, banners, or videos that can ruin your gaming experience. To help you avoid these annoying ads, we have selected the best free classic solitaire downloads for Windows, Android, and iOS devices. These are the ones that we recommend:

        -

        free klondike solitaire download without ads
        -download classic solitaire game for free no ads
        -free offline solitaire download no internet connection
        -free microsoft solitaire collection download no ads
        -free classic card games solitaire download no ads
        -free solitaire by mobilityware download without ads
        -free classic spider solitaire download no ads
        -download free solitaire for windows 10 no ads
        -free offline klondike solitaire download no ads
        -free classic solitaire app download no ads
        -free solitaire for chromebook download without ads
        -free classic freecell solitaire download no ads
        -download free solitaire for android no ads
        -free offline spider solitaire download no ads
        -free classic pyramid solitaire download no ads
        -free solitaire for mac download without ads
        -free classic tripeaks solitaire download no ads
        -download free solitaire for iphone no ads
        -free offline freecell solitaire download no ads
        -free classic golf solitaire download no ads
        -free solitaire for pc download without ads
        -free classic yukon solitaire download no ads
        -download free solitaire for ipad no ads
        -free offline pyramid solitaire download no ads
        -free classic scorpion solitaire download no ads
        -free solitaire for linux download without ads
        -free classic canfield solitaire download no ads
        -download free solitaire for windows 7 no ads
        -free offline tripeaks solitaire download no ads
        -free classic accordion solitaire download no ads
        -free solitaire for windows 8.1 download without ads
        -free classic baker's dozen solitaire download no ads
        -download free solitaire for windows xp no ads
        -free offline golf solitaire download no ads
        -free classic aces up solitaire download no ads
        -free solitaire for windows vista download without ads
        -free classic forty thieves solitaire download no ads
        -download free solitaire for windows 98 no ads
        -free offline yukon solitaire download no ads
        -free classic easthaven solitaire download no ads

        -

        Microsoft Solitaire Collection for Windows

        -

        If you have a Windows device, you can download the Microsoft Solitaire Collection for free from the Microsoft Store. This app includes five different solitaire games: Klondike, Spider, FreeCell, Pyramid, and TriPeaks. You can choose from four difficulty levels, customize your themes and card backs, and track your statistics and achievements. The app also has a daily challenge feature that gives you a new goal every day. The best part is that the app is completely ad-free, so you can play solitaire without any interruptions.

        -

        Classic Solitaire Klondike for Android

        -

        If you have an Android device, you can download the Classic Solitaire Klondike app for free from the Google Play Store. This app is a simple and elegant version of the classic solitaire game. You can play in portrait or landscape mode, adjust the card size and spacing, and choose from three scoring modes: standard, Vegas, or none. The app also has an undo button, a hint button, and an auto-complete feature. The app is ad-free, but you can support the developer by making a donation if you like the app.

        -

        Solitaire - Classic Card Games for iOS

        -

        If you have an iOS device, you can download the Solitaire - Classic Card Games app for free from the App Store. This app is a beautiful and modern version of the classic solitaire game. You can play in single tap or drag and drop mode, change the background and card design, and enable or disable sound effects. The app also has a timer, a moves counter, and a leaderboard. The app is ad-free, but you can unlock more features and themes by upgrading to the premium version.

        -

        Tips and Tricks to Improve Your Solitaire Skills

        -

        Playing solitaire is easy, but winning solitaire is not. It takes skill, strategy, and luck to beat the game. If you want to improve your solitaire skills and win more games, here are some tips and tricks that you can use:

        -

        How to Choose the Right Difficulty Level

        -

        The difficulty level of solitaire depends on how many cards you draw from the stock pile at a time. If you draw one card at a time, the game is easier because you have more options and chances to move cards. If you draw three cards at a time, the game is harder because you have fewer options and chances to move cards. You should choose the difficulty level that suits your skill level and preference. If you are a beginner, you may want to start with one card at a time until you get familiar with the game. If you are an expert, you may want to challenge yourself with three cards at a time for more excitement. -

        How to Use the Undo Button Wisely

        -

        The undo button is a useful feature that allows you to undo your last move if you make a mistake or change your mind. However, you should not rely on it too much or abuse it. Using the undo button too often can make the game less fun and less challenging. It can also make you lazy and careless with your moves. You should use the undo button only when you really need it or when you want to try a different strategy. You should also limit yourself to how many times you can use it per game. For example, you can set a rule that you can only use it three times per game. -

        How to Plan Ahead and Avoid Dead Ends

        -

        The key to winning solitaire is to plan ahead and avoid dead ends. A dead end is when you have no more moves left on the tableau or the stock pile. To avoid dead ends, you should follow these tips:

        -
          -
        • Always move an ace or a deuce to the foundation as soon as possible.
        • -
        • Try to expose hidden cards on the tableau by moving cards from longer columns to shorter columns.
        • Do not move cards to the foundation too quickly. Sometimes it is better to keep them on the tableau for more flexibility and opportunities. -
        • Do not fill an empty column with a king unless you have a good reason. It is better to keep an empty column for moving cards around.
        • -
        • Think ahead and consider the consequences of your moves. Try to anticipate what cards you will need and what cards you will expose.
        • -
        -

        Conclusion

        -

        Solitaire is a classic card game that can provide you with hours of entertainment and mental stimulation. You can play it on your device for free without any ads by downloading one of the apps we recommended. You can also improve your solitaire skills by following our tips and tricks. Whether you are a beginner or an expert, solitaire is a game that you can enjoy anytime, anywhere.

        -

        Summary of the Main Points

        -

        In this article, we have covered the following points:

        -
          -
        • What is solitaire and why is it so popular?
        • -
        • How to download free classic solitaire without ads for Windows, Android, and iOS devices.
        • -
        • Tips and tricks to improve your solitaire skills and avoid dead ends.
        • -
        -

        Call to Action

        -

        If you are ready to play solitaire, download one of the apps we suggested and start playing now. You will be amazed by how much fun and relaxation you can get from this timeless card game. And if you liked this article, please share it with your friends and family who might also enjoy playing solitaire.

        -

        FAQs

        -

        Here are some frequently asked questions about solitaire:

        -

        Q: How many cards are in a solitaire deck?

        -

        A: A solitaire deck consists of 52 cards, 13 of each suit (clubs, diamonds, hearts, and spades).

        -

        Q: How many cards are face up in solitaire?

        -

        A: In Klondike solitaire, 28 cards are dealt to the tableau (one in the first column, two in the second, and so on), but only the top card of each of the seven columns is face up, along with the card turned over from the stock pile.

        -

        Q: How do you win solitaire?

        -

        A: You win solitaire when you have moved all the cards from the tableau and the stock pile to the foundations in ascending order by suit.

        -

        Q: Is solitaire a game of luck or skill?

        -

        A: Solitaire is a game of both luck and skill. Luck determines what cards you get and what cards you expose, while skill determines how you use those cards and what moves you make.

        -

        Q: What is the best strategy for solitaire?

        -

        A: There is no definitive answer to this question, as different strategies may work better for different situations and preferences. However, some general tips are to expose hidden cards as soon as possible, move aces and deuces to the foundations quickly, keep an empty column for moving cards around, and plan ahead and avoid dead ends.

        -
        -
        \ No newline at end of file diff --git a/spaces/fffiloni/controlnet-animation-doodle/node_modules/engine.io/build/transports/index.d.ts b/spaces/fffiloni/controlnet-animation-doodle/node_modules/engine.io/build/transports/index.d.ts deleted file mode 100644 index ff66aa66ea9b953127f316da6b0cffe2d5a05755..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/controlnet-animation-doodle/node_modules/engine.io/build/transports/index.d.ts +++ /dev/null @@ -1,16 +0,0 @@ -import { Polling as XHR } from "./polling"; -import { WebSocket } from "./websocket"; -declare const _default: { - polling: typeof polling; - websocket: typeof WebSocket; -}; -export default _default; -/** - * Polling polymorphic constructor. - * - * @api private - */ -declare function polling(req: any): XHR; -declare namespace polling { - var upgradesTo: string[]; -} diff --git a/spaces/firdavsyorkulov/delivery_project_fastapi/main.py b/spaces/firdavsyorkulov/delivery_project_fastapi/main.py deleted file mode 100644 index 756e64f4d9d972685b564351fa351e49ee373162..0000000000000000000000000000000000000000 --- a/spaces/firdavsyorkulov/delivery_project_fastapi/main.py +++ /dev/null @@ -1,28 +0,0 @@ -from fastapi import FastAPI -from auth_routes import auth_router -from product_routes import product_router -from order_routes import order_router -from music_routes import music_router -from fastapi_jwt_auth import AuthJWT -from schemas import Settings, LoginModel - -app = FastAPI() -app.include_router(auth_router) -app.include_router(order_router) -app.include_router(product_router) -app.include_router(music_router) - - -@AuthJWT.load_config -def get_config(): - return Settings() - - -@app.get("/") -async def root(): - return {"message": "Hello World"} - - -@app.get("/hello/{name}") -async def say_hello(name: str): - return {"message": f"Hello {name}"} diff --git a/spaces/firzaelbuho/rvc-models/infer_pack/modules.py b/spaces/firzaelbuho/rvc-models/infer_pack/modules.py deleted file mode 100644 index 960481cedad9a6106f2bf0b9e86e82b120f7b33f..0000000000000000000000000000000000000000 --- a/spaces/firzaelbuho/rvc-models/infer_pack/modules.py +++ /dev/null @@ -1,522 +0,0 @@ -import copy -import math -import numpy as np -import scipy -import torch -from torch import nn -from torch.nn import functional as F - -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm - -from infer_pack import commons -from infer_pack.commons import init_weights, get_padding -from infer_pack.transforms import piecewise_rational_quadratic_transform - - -LRELU_SLOPE = 0.1 - - -class LayerNorm(nn.Module): - def __init__(self, channels, eps=1e-5): - super().__init__() - self.channels = channels - self.eps = eps - - self.gamma = nn.Parameter(torch.ones(channels)) - self.beta = nn.Parameter(torch.zeros(channels)) - - def forward(self, x): - x = x.transpose(1, -1) - x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps) - return x.transpose(1, -1) - - -class ConvReluNorm(nn.Module): - def __init__( - self, - in_channels, - hidden_channels, - out_channels, - kernel_size, - n_layers, - p_dropout, - ): - super().__init__() - self.in_channels = in_channels - self.hidden_channels = hidden_channels - self.out_channels = out_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - assert n_layers > 1, "Number of layers should be larger than 0." 
- - self.conv_layers = nn.ModuleList() - self.norm_layers = nn.ModuleList() - self.conv_layers.append( - nn.Conv1d( - in_channels, hidden_channels, kernel_size, padding=kernel_size // 2 - ) - ) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.relu_drop = nn.Sequential(nn.ReLU(), nn.Dropout(p_dropout)) - for _ in range(n_layers - 1): - self.conv_layers.append( - nn.Conv1d( - hidden_channels, - hidden_channels, - kernel_size, - padding=kernel_size // 2, - ) - ) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.proj = nn.Conv1d(hidden_channels, out_channels, 1) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask): - x_org = x - for i in range(self.n_layers): - x = self.conv_layers[i](x * x_mask) - x = self.norm_layers[i](x) - x = self.relu_drop(x) - x = x_org + self.proj(x) - return x * x_mask - - -class DDSConv(nn.Module): - """ - Dialted and Depth-Separable Convolution - """ - - def __init__(self, channels, kernel_size, n_layers, p_dropout=0.0): - super().__init__() - self.channels = channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - - self.drop = nn.Dropout(p_dropout) - self.convs_sep = nn.ModuleList() - self.convs_1x1 = nn.ModuleList() - self.norms_1 = nn.ModuleList() - self.norms_2 = nn.ModuleList() - for i in range(n_layers): - dilation = kernel_size**i - padding = (kernel_size * dilation - dilation) // 2 - self.convs_sep.append( - nn.Conv1d( - channels, - channels, - kernel_size, - groups=channels, - dilation=dilation, - padding=padding, - ) - ) - self.convs_1x1.append(nn.Conv1d(channels, channels, 1)) - self.norms_1.append(LayerNorm(channels)) - self.norms_2.append(LayerNorm(channels)) - - def forward(self, x, x_mask, g=None): - if g is not None: - x = x + g - for i in range(self.n_layers): - y = self.convs_sep[i](x * x_mask) - y = self.norms_1[i](y) - y = F.gelu(y) - y = self.convs_1x1[i](y) - y = self.norms_2[i](y) - y = F.gelu(y) - y = self.drop(y) - x = x + y - return x * x_mask - - -class WN(torch.nn.Module): - def __init__( - self, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0, - p_dropout=0, - ): - super(WN, self).__init__() - assert kernel_size % 2 == 1 - self.hidden_channels = hidden_channels - self.kernel_size = (kernel_size,) - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - self.p_dropout = p_dropout - - self.in_layers = torch.nn.ModuleList() - self.res_skip_layers = torch.nn.ModuleList() - self.drop = nn.Dropout(p_dropout) - - if gin_channels != 0: - cond_layer = torch.nn.Conv1d( - gin_channels, 2 * hidden_channels * n_layers, 1 - ) - self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name="weight") - - for i in range(n_layers): - dilation = dilation_rate**i - padding = int((kernel_size * dilation - dilation) / 2) - in_layer = torch.nn.Conv1d( - hidden_channels, - 2 * hidden_channels, - kernel_size, - dilation=dilation, - padding=padding, - ) - in_layer = torch.nn.utils.weight_norm(in_layer, name="weight") - self.in_layers.append(in_layer) - - # last one is not necessary - if i < n_layers - 1: - res_skip_channels = 2 * hidden_channels - else: - res_skip_channels = hidden_channels - - res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1) - res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name="weight") - self.res_skip_layers.append(res_skip_layer) - - def forward(self, x, x_mask, g=None, **kwargs): - output = torch.zeros_like(x) - 
n_channels_tensor = torch.IntTensor([self.hidden_channels]) - - if g is not None: - g = self.cond_layer(g) - - for i in range(self.n_layers): - x_in = self.in_layers[i](x) - if g is not None: - cond_offset = i * 2 * self.hidden_channels - g_l = g[:, cond_offset : cond_offset + 2 * self.hidden_channels, :] - else: - g_l = torch.zeros_like(x_in) - - acts = commons.fused_add_tanh_sigmoid_multiply(x_in, g_l, n_channels_tensor) - acts = self.drop(acts) - - res_skip_acts = self.res_skip_layers[i](acts) - if i < self.n_layers - 1: - res_acts = res_skip_acts[:, : self.hidden_channels, :] - x = (x + res_acts) * x_mask - output = output + res_skip_acts[:, self.hidden_channels :, :] - else: - output = output + res_skip_acts - return output * x_mask - - def remove_weight_norm(self): - if self.gin_channels != 0: - torch.nn.utils.remove_weight_norm(self.cond_layer) - for l in self.in_layers: - torch.nn.utils.remove_weight_norm(l) - for l in self.res_skip_layers: - torch.nn.utils.remove_weight_norm(l) - - -class ResBlock1(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)): - super(ResBlock1, self).__init__() - self.convs1 = nn.ModuleList( - [ - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[2], - padding=get_padding(kernel_size, dilation[2]), - ) - ), - ] - ) - self.convs1.apply(init_weights) - - self.convs2 = nn.ModuleList( - [ - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=1, - padding=get_padding(kernel_size, 1), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=1, - padding=get_padding(kernel_size, 1), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=1, - padding=get_padding(kernel_size, 1), - ) - ), - ] - ) - self.convs2.apply(init_weights) - - def forward(self, x, x_mask=None): - for c1, c2 in zip(self.convs1, self.convs2): - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c1(xt) - xt = F.leaky_relu(xt, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c2(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs1: - remove_weight_norm(l) - for l in self.convs2: - remove_weight_norm(l) - - -class ResBlock2(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3)): - super(ResBlock2, self).__init__() - self.convs = nn.ModuleList( - [ - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]), - ) - ), - ] - ) - self.convs.apply(init_weights) - - def forward(self, x, x_mask=None): - for c in self.convs: - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs: - remove_weight_norm(l) - - -class Log(nn.Module): - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = 
torch.log(torch.clamp_min(x, 1e-5)) * x_mask - logdet = torch.sum(-y, [1, 2]) - return y, logdet - else: - x = torch.exp(x) * x_mask - return x - - -class Flip(nn.Module): - def forward(self, x, *args, reverse=False, **kwargs): - x = torch.flip(x, [1]) - if not reverse: - logdet = torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device) - return x, logdet - else: - return x - - -class ElementwiseAffine(nn.Module): - def __init__(self, channels): - super().__init__() - self.channels = channels - self.m = nn.Parameter(torch.zeros(channels, 1)) - self.logs = nn.Parameter(torch.zeros(channels, 1)) - - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = self.m + torch.exp(self.logs) * x - y = y * x_mask - logdet = torch.sum(self.logs * x_mask, [1, 2]) - return y, logdet - else: - x = (x - self.m) * torch.exp(-self.logs) * x_mask - return x - - -class ResidualCouplingLayer(nn.Module): - def __init__( - self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - p_dropout=0, - gin_channels=0, - mean_only=False, - ): - assert channels % 2 == 0, "channels should be divisible by 2" - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.half_channels = channels // 2 - self.mean_only = mean_only - - self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1) - self.enc = WN( - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - p_dropout=p_dropout, - gin_channels=gin_channels, - ) - self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1) - self.post.weight.data.zero_() - self.post.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels] * 2, 1) - h = self.pre(x0) * x_mask - h = self.enc(h, x_mask, g=g) - stats = self.post(h) * x_mask - if not self.mean_only: - m, logs = torch.split(stats, [self.half_channels] * 2, 1) - else: - m = stats - logs = torch.zeros_like(m) - - if not reverse: - x1 = m + x1 * torch.exp(logs) * x_mask - x = torch.cat([x0, x1], 1) - logdet = torch.sum(logs, [1, 2]) - return x, logdet - else: - x1 = (x1 - m) * torch.exp(-logs) * x_mask - x = torch.cat([x0, x1], 1) - return x - - def remove_weight_norm(self): - self.enc.remove_weight_norm() - - -class ConvFlow(nn.Module): - def __init__( - self, - in_channels, - filter_channels, - kernel_size, - n_layers, - num_bins=10, - tail_bound=5.0, - ): - super().__init__() - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.num_bins = num_bins - self.tail_bound = tail_bound - self.half_channels = in_channels // 2 - - self.pre = nn.Conv1d(self.half_channels, filter_channels, 1) - self.convs = DDSConv(filter_channels, kernel_size, n_layers, p_dropout=0.0) - self.proj = nn.Conv1d( - filter_channels, self.half_channels * (num_bins * 3 - 1), 1 - ) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels] * 2, 1) - h = self.pre(x0) - h = self.convs(h, x_mask, g=g) - h = self.proj(h) * x_mask - - b, c, t = x0.shape - h = h.reshape(b, c, -1, t).permute(0, 1, 3, 2) # [b, cx?, t] -> [b, c, t, ?] 
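# [Editor's illustration, not part of the original file] The flow modules above
# (Log, Flip, ElementwiseAffine, ResidualCouplingLayer) share one contract: the
# forward pass returns (y, logdet) and reverse=True inverts it. A minimal
# round-trip check with ElementwiseAffine, assuming torch and this module are
# importable:
import torch

flow = ElementwiseAffine(channels=4)
x = torch.randn(2, 4, 10)                # [batch, channels, frames]
x_mask = torch.ones(2, 1, 10)
y, logdet = flow(x, x_mask)              # forward direction + log-determinant
x_back = flow(y, x_mask, reverse=True)   # inverse direction recovers x
assert torch.allclose(x, x_back, atol=1e-6)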
- - unnormalized_widths = h[..., : self.num_bins] / math.sqrt(self.filter_channels) - unnormalized_heights = h[..., self.num_bins : 2 * self.num_bins] / math.sqrt( - self.filter_channels - ) - unnormalized_derivatives = h[..., 2 * self.num_bins :] - - x1, logabsdet = piecewise_rational_quadratic_transform( - x1, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=reverse, - tails="linear", - tail_bound=self.tail_bound, - ) - - x = torch.cat([x0, x1], 1) * x_mask - logdet = torch.sum(logabsdet * x_mask, [1, 2]) - if not reverse: - return x, logdet - else: - return x diff --git a/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmcv/image/geometric.py b/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmcv/image/geometric.py deleted file mode 100644 index cf97c201cb4e43796c911919d03fb26a07ed817d..0000000000000000000000000000000000000000 --- a/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmcv/image/geometric.py +++ /dev/null @@ -1,728 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import numbers - -import cv2 -import numpy as np - -from ..utils import to_2tuple -from .io import imread_backend - -try: - from PIL import Image -except ImportError: - Image = None - - -def _scale_size(size, scale): - """Rescale a size by a ratio. - - Args: - size (tuple[int]): (w, h). - scale (float | tuple(float)): Scaling factor. - - Returns: - tuple[int]: scaled size. - """ - if isinstance(scale, (float, int)): - scale = (scale, scale) - w, h = size - return int(w * float(scale[0]) + 0.5), int(h * float(scale[1]) + 0.5) - - -cv2_interp_codes = { - 'nearest': cv2.INTER_NEAREST, - 'bilinear': cv2.INTER_LINEAR, - 'bicubic': cv2.INTER_CUBIC, - 'area': cv2.INTER_AREA, - 'lanczos': cv2.INTER_LANCZOS4 -} - -if Image is not None: - pillow_interp_codes = { - 'nearest': Image.NEAREST, - 'bilinear': Image.BILINEAR, - 'bicubic': Image.BICUBIC, - 'box': Image.BOX, - 'lanczos': Image.LANCZOS, - 'hamming': Image.HAMMING - } - - -def imresize(img, - size, - return_scale=False, - interpolation='bilinear', - out=None, - backend=None): - """Resize image to a given size. - - Args: - img (ndarray): The input image. - size (tuple[int]): Target size (w, h). - return_scale (bool): Whether to return `w_scale` and `h_scale`. - interpolation (str): Interpolation method, accepted values are - "nearest", "bilinear", "bicubic", "area", "lanczos" for 'cv2' - backend, "nearest", "bilinear" for 'pillow' backend. - out (ndarray): The output destination. - backend (str | None): The image resize backend type. Options are `cv2`, - `pillow`, `None`. If backend is None, the global imread_backend - specified by ``mmcv.use_backend()`` will be used. Default: None. - - Returns: - tuple | ndarray: (`resized_img`, `w_scale`, `h_scale`) or - `resized_img`. - """ - h, w = img.shape[:2] - if backend is None: - backend = imread_backend - if backend not in ['cv2', 'pillow']: - raise ValueError(f'backend: {backend} is not supported for resize.' 
- f"Supported backends are 'cv2', 'pillow'") - - if backend == 'pillow': - assert img.dtype == np.uint8, 'Pillow backend only support uint8 type' - pil_image = Image.fromarray(img) - pil_image = pil_image.resize(size, pillow_interp_codes[interpolation]) - resized_img = np.array(pil_image) - else: - resized_img = cv2.resize( - img, size, dst=out, interpolation=cv2_interp_codes[interpolation]) - if not return_scale: - return resized_img - else: - w_scale = size[0] / w - h_scale = size[1] / h - return resized_img, w_scale, h_scale - - -def imresize_to_multiple(img, - divisor, - size=None, - scale_factor=None, - keep_ratio=False, - return_scale=False, - interpolation='bilinear', - out=None, - backend=None): - """Resize image according to a given size or scale factor and then rounds - up the the resized or rescaled image size to the nearest value that can be - divided by the divisor. - - Args: - img (ndarray): The input image. - divisor (int | tuple): Resized image size will be a multiple of - divisor. If divisor is a tuple, divisor should be - (w_divisor, h_divisor). - size (None | int | tuple[int]): Target size (w, h). Default: None. - scale_factor (None | float | tuple[float]): Multiplier for spatial - size. Should match input size if it is a tuple and the 2D style is - (w_scale_factor, h_scale_factor). Default: None. - keep_ratio (bool): Whether to keep the aspect ratio when resizing the - image. Default: False. - return_scale (bool): Whether to return `w_scale` and `h_scale`. - interpolation (str): Interpolation method, accepted values are - "nearest", "bilinear", "bicubic", "area", "lanczos" for 'cv2' - backend, "nearest", "bilinear" for 'pillow' backend. - out (ndarray): The output destination. - backend (str | None): The image resize backend type. Options are `cv2`, - `pillow`, `None`. If backend is None, the global imread_backend - specified by ``mmcv.use_backend()`` will be used. Default: None. - - Returns: - tuple | ndarray: (`resized_img`, `w_scale`, `h_scale`) or - `resized_img`. - """ - h, w = img.shape[:2] - if size is not None and scale_factor is not None: - raise ValueError('only one of size or scale_factor should be defined') - elif size is None and scale_factor is None: - raise ValueError('one of size or scale_factor should be defined') - elif size is not None: - size = to_2tuple(size) - if keep_ratio: - size = rescale_size((w, h), size, return_scale=False) - else: - size = _scale_size((w, h), scale_factor) - - divisor = to_2tuple(divisor) - size = tuple([int(np.ceil(s / d)) * d for s, d in zip(size, divisor)]) - resized_img, w_scale, h_scale = imresize( - img, - size, - return_scale=True, - interpolation=interpolation, - out=out, - backend=backend) - if return_scale: - return resized_img, w_scale, h_scale - else: - return resized_img - - -def imresize_like(img, - dst_img, - return_scale=False, - interpolation='bilinear', - backend=None): - """Resize image to the same size of a given image. - - Args: - img (ndarray): The input image. - dst_img (ndarray): The target image. - return_scale (bool): Whether to return `w_scale` and `h_scale`. - interpolation (str): Same as :func:`resize`. - backend (str | None): Same as :func:`resize`. - - Returns: - tuple or ndarray: (`resized_img`, `w_scale`, `h_scale`) or - `resized_img`. - """ - h, w = dst_img.shape[:2] - return imresize(img, (w, h), return_scale, interpolation, backend=backend) - - -def rescale_size(old_size, scale, return_scale=False): - """Calculate the new size to be rescaled to. 
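# [Editor's illustration, not part of the original file] How the two resize helpers
# above behave; note that `size` is (w, h) while ndarray shapes are (h, w, c).
# Assumes numpy, cv2 and the functions from this file are importable:
import numpy as np

img = np.zeros((100, 200, 3), dtype=np.uint8)            # (h, w, c)
resized = imresize(img, (320, 240))                       # target size is (w, h)
assert resized.shape[:2] == (240, 320)

# Round the (otherwise unchanged) size up to the nearest multiple of 32 per side:
padded = imresize_to_multiple(img, divisor=32, scale_factor=1.0)
assert padded.shape[0] % 32 == 0 and padded.shape[1] % 32 == 0   # (128, 224, 3)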
- - Args: - old_size (tuple[int]): The old size (w, h) of image. - scale (float | tuple[int]): The scaling factor or maximum size. - If it is a float number, then the image will be rescaled by this - factor, else if it is a tuple of 2 integers, then the image will - be rescaled as large as possible within the scale. - return_scale (bool): Whether to return the scaling factor besides the - rescaled image size. - - Returns: - tuple[int]: The new rescaled image size. - """ - w, h = old_size - if isinstance(scale, (float, int)): - if scale <= 0: - raise ValueError(f'Invalid scale {scale}, must be positive.') - scale_factor = scale - elif isinstance(scale, tuple): - max_long_edge = max(scale) - max_short_edge = min(scale) - scale_factor = min(max_long_edge / max(h, w), - max_short_edge / min(h, w)) - else: - raise TypeError( - f'Scale must be a number or tuple of int, but got {type(scale)}') - - new_size = _scale_size((w, h), scale_factor) - - if return_scale: - return new_size, scale_factor - else: - return new_size - - -def imrescale(img, - scale, - return_scale=False, - interpolation='bilinear', - backend=None): - """Resize image while keeping the aspect ratio. - - Args: - img (ndarray): The input image. - scale (float | tuple[int]): The scaling factor or maximum size. - If it is a float number, then the image will be rescaled by this - factor, else if it is a tuple of 2 integers, then the image will - be rescaled as large as possible within the scale. - return_scale (bool): Whether to return the scaling factor besides the - rescaled image. - interpolation (str): Same as :func:`resize`. - backend (str | None): Same as :func:`resize`. - - Returns: - ndarray: The rescaled image. - """ - h, w = img.shape[:2] - new_size, scale_factor = rescale_size((w, h), scale, return_scale=True) - rescaled_img = imresize( - img, new_size, interpolation=interpolation, backend=backend) - if return_scale: - return rescaled_img, scale_factor - else: - return rescaled_img - - -def imflip(img, direction='horizontal'): - """Flip an image horizontally or vertically. - - Args: - img (ndarray): Image to be flipped. - direction (str): The flip direction, either "horizontal" or - "vertical" or "diagonal". - - Returns: - ndarray: The flipped image. - """ - assert direction in ['horizontal', 'vertical', 'diagonal'] - if direction == 'horizontal': - return np.flip(img, axis=1) - elif direction == 'vertical': - return np.flip(img, axis=0) - else: - return np.flip(img, axis=(0, 1)) - - -def imflip_(img, direction='horizontal'): - """Inplace flip an image horizontally or vertically. - - Args: - img (ndarray): Image to be flipped. - direction (str): The flip direction, either "horizontal" or - "vertical" or "diagonal". - - Returns: - ndarray: The flipped image (inplace). - """ - assert direction in ['horizontal', 'vertical', 'diagonal'] - if direction == 'horizontal': - return cv2.flip(img, 1, img) - elif direction == 'vertical': - return cv2.flip(img, 0, img) - else: - return cv2.flip(img, -1, img) - - -def imrotate(img, - angle, - center=None, - scale=1.0, - border_value=0, - interpolation='bilinear', - auto_bound=False): - """Rotate an image. - - Args: - img (ndarray): Image to be rotated. - angle (float): Rotation angle in degrees, positive values mean - clockwise rotation. - center (tuple[float], optional): Center point (w, h) of the rotation in - the source image. If not specified, the center of the image will be - used. - scale (float): Isotropic scale factor. - border_value (int): Border value. 
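# [Editor's illustration, not part of the original file] imrescale keeps the aspect
# ratio: with a (max_long_edge, max_short_edge) tuple, the scale factor is chosen so
# the image fits inside both limits. Assumes numpy and the functions above are
# importable:
import numpy as np

img = np.zeros((400, 600, 3), dtype=np.uint8)                  # (h, w, c)
small, scale = imrescale(img, (1000, 300), return_scale=True)
# scale = min(1000 / 600, 300 / 400) = 0.75  ->  new (w, h) = (450, 300)
assert small.shape[:2] == (300, 450) and abs(scale - 0.75) < 1e-6
flipped = imflip(img, direction='horizontal')                  # mirror along width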
- interpolation (str): Same as :func:`resize`. - auto_bound (bool): Whether to adjust the image size to cover the whole - rotated image. - - Returns: - ndarray: The rotated image. - """ - if center is not None and auto_bound: - raise ValueError('`auto_bound` conflicts with `center`') - h, w = img.shape[:2] - if center is None: - center = ((w - 1) * 0.5, (h - 1) * 0.5) - assert isinstance(center, tuple) - - matrix = cv2.getRotationMatrix2D(center, -angle, scale) - if auto_bound: - cos = np.abs(matrix[0, 0]) - sin = np.abs(matrix[0, 1]) - new_w = h * sin + w * cos - new_h = h * cos + w * sin - matrix[0, 2] += (new_w - w) * 0.5 - matrix[1, 2] += (new_h - h) * 0.5 - w = int(np.round(new_w)) - h = int(np.round(new_h)) - rotated = cv2.warpAffine( - img, - matrix, (w, h), - flags=cv2_interp_codes[interpolation], - borderValue=border_value) - return rotated - - -def bbox_clip(bboxes, img_shape): - """Clip bboxes to fit the image shape. - - Args: - bboxes (ndarray): Shape (..., 4*k) - img_shape (tuple[int]): (height, width) of the image. - - Returns: - ndarray: Clipped bboxes. - """ - assert bboxes.shape[-1] % 4 == 0 - cmin = np.empty(bboxes.shape[-1], dtype=bboxes.dtype) - cmin[0::2] = img_shape[1] - 1 - cmin[1::2] = img_shape[0] - 1 - clipped_bboxes = np.maximum(np.minimum(bboxes, cmin), 0) - return clipped_bboxes - - -def bbox_scaling(bboxes, scale, clip_shape=None): - """Scaling bboxes w.r.t the box center. - - Args: - bboxes (ndarray): Shape(..., 4). - scale (float): Scaling factor. - clip_shape (tuple[int], optional): If specified, bboxes that exceed the - boundary will be clipped according to the given shape (h, w). - - Returns: - ndarray: Scaled bboxes. - """ - if float(scale) == 1.0: - scaled_bboxes = bboxes.copy() - else: - w = bboxes[..., 2] - bboxes[..., 0] + 1 - h = bboxes[..., 3] - bboxes[..., 1] + 1 - dw = (w * (scale - 1)) * 0.5 - dh = (h * (scale - 1)) * 0.5 - scaled_bboxes = bboxes + np.stack((-dw, -dh, dw, dh), axis=-1) - if clip_shape is not None: - return bbox_clip(scaled_bboxes, clip_shape) - else: - return scaled_bboxes - - -def imcrop(img, bboxes, scale=1.0, pad_fill=None): - """Crop image patches. - - 3 steps: scale the bboxes -> clip bboxes -> crop and pad. - - Args: - img (ndarray): Image to be cropped. - bboxes (ndarray): Shape (k, 4) or (4, ), location of cropped bboxes. - scale (float, optional): Scale ratio of bboxes, the default value - 1.0 means no padding. - pad_fill (Number | list[Number]): Value to be filled for padding. - Default: None, which means no padding. - - Returns: - list[ndarray] | ndarray: The cropped image patches. - """ - chn = 1 if img.ndim == 2 else img.shape[2] - if pad_fill is not None: - if isinstance(pad_fill, (int, float)): - pad_fill = [pad_fill for _ in range(chn)] - assert len(pad_fill) == chn - - _bboxes = bboxes[None, ...] if bboxes.ndim == 1 else bboxes - scaled_bboxes = bbox_scaling(_bboxes, scale).astype(np.int32) - clipped_bbox = bbox_clip(scaled_bboxes, img.shape) - - patches = [] - for i in range(clipped_bbox.shape[0]): - x1, y1, x2, y2 = tuple(clipped_bbox[i, :]) - if pad_fill is None: - patch = img[y1:y2 + 1, x1:x2 + 1, ...] 
- else: - _x1, _y1, _x2, _y2 = tuple(scaled_bboxes[i, :]) - if chn == 1: - patch_shape = (_y2 - _y1 + 1, _x2 - _x1 + 1) - else: - patch_shape = (_y2 - _y1 + 1, _x2 - _x1 + 1, chn) - patch = np.array( - pad_fill, dtype=img.dtype) * np.ones( - patch_shape, dtype=img.dtype) - x_start = 0 if _x1 >= 0 else -_x1 - y_start = 0 if _y1 >= 0 else -_y1 - w = x2 - x1 + 1 - h = y2 - y1 + 1 - patch[y_start:y_start + h, x_start:x_start + w, - ...] = img[y1:y1 + h, x1:x1 + w, ...] - patches.append(patch) - - if bboxes.ndim == 1: - return patches[0] - else: - return patches - - -def impad(img, - *, - shape=None, - padding=None, - pad_val=0, - padding_mode='constant'): - """Pad the given image to a certain shape or pad on all sides with - specified padding mode and padding value. - - Args: - img (ndarray): Image to be padded. - shape (tuple[int]): Expected padding shape (h, w). Default: None. - padding (int or tuple[int]): Padding on each border. If a single int is - provided this is used to pad all borders. If tuple of length 2 is - provided this is the padding on left/right and top/bottom - respectively. If a tuple of length 4 is provided this is the - padding for the left, top, right and bottom borders respectively. - Default: None. Note that `shape` and `padding` can not be both - set. - pad_val (Number | Sequence[Number]): Values to be filled in padding - areas when padding_mode is 'constant'. Default: 0. - padding_mode (str): Type of padding. Should be: constant, edge, - reflect or symmetric. Default: constant. - - - constant: pads with a constant value, this value is specified - with pad_val. - - edge: pads with the last value at the edge of the image. - - reflect: pads with reflection of image without repeating the - last value on the edge. For example, padding [1, 2, 3, 4] - with 2 elements on both sides in reflect mode will result - in [3, 2, 1, 2, 3, 4, 3, 2]. - - symmetric: pads with reflection of image repeating the last - value on the edge. For example, padding [1, 2, 3, 4] with - 2 elements on both sides in symmetric mode will result in - [2, 1, 1, 2, 3, 4, 4, 3] - - Returns: - ndarray: The padded image. - """ - - assert (shape is not None) ^ (padding is not None) - if shape is not None: - padding = (0, 0, shape[1] - img.shape[1], shape[0] - img.shape[0]) - - # check pad_val - if isinstance(pad_val, tuple): - assert len(pad_val) == img.shape[-1] - elif not isinstance(pad_val, numbers.Number): - raise TypeError('pad_val must be a int or a tuple. ' - f'But received {type(pad_val)}') - - # check padding - if isinstance(padding, tuple) and len(padding) in [2, 4]: - if len(padding) == 2: - padding = (padding[0], padding[1], padding[0], padding[1]) - elif isinstance(padding, numbers.Number): - padding = (padding, padding, padding, padding) - else: - raise ValueError('Padding must be a int or a 2, or 4 element tuple.' - f'But received {padding}') - - # check padding mode - assert padding_mode in ['constant', 'edge', 'reflect', 'symmetric'] - - border_type = { - 'constant': cv2.BORDER_CONSTANT, - 'edge': cv2.BORDER_REPLICATE, - 'reflect': cv2.BORDER_REFLECT_101, - 'symmetric': cv2.BORDER_REFLECT - } - img = cv2.copyMakeBorder( - img, - padding[1], - padding[3], - padding[0], - padding[2], - border_type[padding_mode], - value=pad_val) - - return img - - -def impad_to_multiple(img, divisor, pad_val=0): - """Pad an image to ensure each edge to be multiple to some number. - - Args: - img (ndarray): Image to be padded. - divisor (int): Padded image edges will be multiple to divisor. 
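# [Editor's illustration, not part of the original file] impad either grows the
# image to an explicit (h, w) shape (extra pixels go to the bottom/right) or pads
# each border with a (left, top, right, bottom) tuple. Assumes numpy and impad
# from this file are importable:
import numpy as np

img = np.ones((50, 60, 3), dtype=np.uint8)
out = impad(img, shape=(64, 64), pad_val=0)
assert out.shape[:2] == (64, 64)

out2 = impad(img, padding=(2, 3, 4, 5), padding_mode='reflect')
assert out2.shape[:2] == (50 + 3 + 5, 60 + 2 + 4)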
- pad_val (Number | Sequence[Number]): Same as :func:`impad`. - - Returns: - ndarray: The padded image. - """ - pad_h = int(np.ceil(img.shape[0] / divisor)) * divisor - pad_w = int(np.ceil(img.shape[1] / divisor)) * divisor - return impad(img, shape=(pad_h, pad_w), pad_val=pad_val) - - -def cutout(img, shape, pad_val=0): - """Randomly cut out a rectangle from the original img. - - Args: - img (ndarray): Image to be cutout. - shape (int | tuple[int]): Expected cutout shape (h, w). If given as a - int, the value will be used for both h and w. - pad_val (int | float | tuple[int | float]): Values to be filled in the - cut area. Defaults to 0. - - Returns: - ndarray: The cutout image. - """ - - channels = 1 if img.ndim == 2 else img.shape[2] - if isinstance(shape, int): - cut_h, cut_w = shape, shape - else: - assert isinstance(shape, tuple) and len(shape) == 2, \ - f'shape must be a int or a tuple with length 2, but got type ' \ - f'{type(shape)} instead.' - cut_h, cut_w = shape - if isinstance(pad_val, (int, float)): - pad_val = tuple([pad_val] * channels) - elif isinstance(pad_val, tuple): - assert len(pad_val) == channels, \ - 'Expected the num of elements in tuple equals the channels' \ - 'of input image. Found {} vs {}'.format( - len(pad_val), channels) - else: - raise TypeError(f'Invalid type {type(pad_val)} for `pad_val`') - - img_h, img_w = img.shape[:2] - y0 = np.random.uniform(img_h) - x0 = np.random.uniform(img_w) - - y1 = int(max(0, y0 - cut_h / 2.)) - x1 = int(max(0, x0 - cut_w / 2.)) - y2 = min(img_h, y1 + cut_h) - x2 = min(img_w, x1 + cut_w) - - if img.ndim == 2: - patch_shape = (y2 - y1, x2 - x1) - else: - patch_shape = (y2 - y1, x2 - x1, channels) - - img_cutout = img.copy() - patch = np.array( - pad_val, dtype=img.dtype) * np.ones( - patch_shape, dtype=img.dtype) - img_cutout[y1:y2, x1:x2, ...] = patch - - return img_cutout - - -def _get_shear_matrix(magnitude, direction='horizontal'): - """Generate the shear matrix for transformation. - - Args: - magnitude (int | float): The magnitude used for shear. - direction (str): The flip direction, either "horizontal" - or "vertical". - - Returns: - ndarray: The shear matrix with dtype float32. - """ - if direction == 'horizontal': - shear_matrix = np.float32([[1, magnitude, 0], [0, 1, 0]]) - elif direction == 'vertical': - shear_matrix = np.float32([[1, 0, 0], [magnitude, 1, 0]]) - return shear_matrix - - -def imshear(img, - magnitude, - direction='horizontal', - border_value=0, - interpolation='bilinear'): - """Shear an image. - - Args: - img (ndarray): Image to be sheared with format (h, w) - or (h, w, c). - magnitude (int | float): The magnitude used for shear. - direction (str): The flip direction, either "horizontal" - or "vertical". - border_value (int | tuple[int]): Value used in case of a - constant border. - interpolation (str): Same as :func:`resize`. - - Returns: - ndarray: The sheared image. - """ - assert direction in ['horizontal', - 'vertical'], f'Invalid direction: {direction}' - height, width = img.shape[:2] - if img.ndim == 2: - channels = 1 - elif img.ndim == 3: - channels = img.shape[-1] - if isinstance(border_value, int): - border_value = tuple([border_value] * channels) - elif isinstance(border_value, tuple): - assert len(border_value) == channels, \ - 'Expected the num of elements in tuple equals the channels' \ - 'of input image. 
Found {} vs {}'.format( - len(border_value), channels) - else: - raise ValueError( - f'Invalid type {type(border_value)} for `border_value`') - shear_matrix = _get_shear_matrix(magnitude, direction) - sheared = cv2.warpAffine( - img, - shear_matrix, - (width, height), - # Note case when the number elements in `border_value` - # greater than 3 (e.g. shearing masks whose channels large - # than 3) will raise TypeError in `cv2.warpAffine`. - # Here simply slice the first 3 values in `border_value`. - borderValue=border_value[:3], - flags=cv2_interp_codes[interpolation]) - return sheared - - -def _get_translate_matrix(offset, direction='horizontal'): - """Generate the translate matrix. - - Args: - offset (int | float): The offset used for translate. - direction (str): The translate direction, either - "horizontal" or "vertical". - - Returns: - ndarray: The translate matrix with dtype float32. - """ - if direction == 'horizontal': - translate_matrix = np.float32([[1, 0, offset], [0, 1, 0]]) - elif direction == 'vertical': - translate_matrix = np.float32([[1, 0, 0], [0, 1, offset]]) - return translate_matrix - - -def imtranslate(img, - offset, - direction='horizontal', - border_value=0, - interpolation='bilinear'): - """Translate an image. - - Args: - img (ndarray): Image to be translated with format - (h, w) or (h, w, c). - offset (int | float): The offset used for translate. - direction (str): The translate direction, either "horizontal" - or "vertical". - border_value (int | tuple[int]): Value used in case of a - constant border. - interpolation (str): Same as :func:`resize`. - - Returns: - ndarray: The translated image. - """ - assert direction in ['horizontal', - 'vertical'], f'Invalid direction: {direction}' - height, width = img.shape[:2] - if img.ndim == 2: - channels = 1 - elif img.ndim == 3: - channels = img.shape[-1] - if isinstance(border_value, int): - border_value = tuple([border_value] * channels) - elif isinstance(border_value, tuple): - assert len(border_value) == channels, \ - 'Expected the num of elements in tuple equals the channels' \ - 'of input image. Found {} vs {}'.format( - len(border_value), channels) - else: - raise ValueError( - f'Invalid type {type(border_value)} for `border_value`.') - translate_matrix = _get_translate_matrix(offset, direction) - translated = cv2.warpAffine( - img, - translate_matrix, - (width, height), - # Note case when the number elements in `border_value` - # greater than 3 (e.g. translating masks whose channels - # large than 3) will raise TypeError in `cv2.warpAffine`. - # Here simply slice the first 3 values in `border_value`. - borderValue=border_value[:3], - flags=cv2_interp_codes[interpolation]) - return translated diff --git a/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmcv/runner/priority.py b/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmcv/runner/priority.py deleted file mode 100644 index 64cc4e3a05f8d5b89ab6eb32461e6e80f1d62e67..0000000000000000000000000000000000000000 --- a/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmcv/runner/priority.py +++ /dev/null @@ -1,60 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from enum import Enum - - -class Priority(Enum): - """Hook priority levels. 
- - +--------------+------------+ - | Level | Value | - +==============+============+ - | HIGHEST | 0 | - +--------------+------------+ - | VERY_HIGH | 10 | - +--------------+------------+ - | HIGH | 30 | - +--------------+------------+ - | ABOVE_NORMAL | 40 | - +--------------+------------+ - | NORMAL | 50 | - +--------------+------------+ - | BELOW_NORMAL | 60 | - +--------------+------------+ - | LOW | 70 | - +--------------+------------+ - | VERY_LOW | 90 | - +--------------+------------+ - | LOWEST | 100 | - +--------------+------------+ - """ - - HIGHEST = 0 - VERY_HIGH = 10 - HIGH = 30 - ABOVE_NORMAL = 40 - NORMAL = 50 - BELOW_NORMAL = 60 - LOW = 70 - VERY_LOW = 90 - LOWEST = 100 - - -def get_priority(priority): - """Get priority value. - - Args: - priority (int or str or :obj:`Priority`): Priority. - - Returns: - int: The priority value. - """ - if isinstance(priority, int): - if priority < 0 or priority > 100: - raise ValueError('priority must be between 0 and 100') - return priority - elif isinstance(priority, Priority): - return priority.value - elif isinstance(priority, str): - return Priority[priority.upper()].value - else: - raise TypeError('priority must be an integer or Priority enum value') diff --git a/spaces/georgefen/Face-Landmark-ControlNet/ldm/models/diffusion/dpm_solver/dpm_solver.py b/spaces/georgefen/Face-Landmark-ControlNet/ldm/models/diffusion/dpm_solver/dpm_solver.py deleted file mode 100644 index 095e5ba3ce0b1aa7f4b3f1e2e5d8fff7cfe6dc8c..0000000000000000000000000000000000000000 --- a/spaces/georgefen/Face-Landmark-ControlNet/ldm/models/diffusion/dpm_solver/dpm_solver.py +++ /dev/null @@ -1,1154 +0,0 @@ -import torch -import torch.nn.functional as F -import math -from tqdm import tqdm - - -class NoiseScheduleVP: - def __init__( - self, - schedule='discrete', - betas=None, - alphas_cumprod=None, - continuous_beta_0=0.1, - continuous_beta_1=20., - ): - """Create a wrapper class for the forward SDE (VP type). - *** - Update: We support discrete-time diffusion models by implementing a picewise linear interpolation for log_alpha_t. - We recommend to use schedule='discrete' for the discrete-time diffusion models, especially for high-resolution images. - *** - The forward SDE ensures that the condition distribution q_{t|0}(x_t | x_0) = N ( alpha_t * x_0, sigma_t^2 * I ). - We further define lambda_t = log(alpha_t) - log(sigma_t), which is the half-logSNR (described in the DPM-Solver paper). - Therefore, we implement the functions for computing alpha_t, sigma_t and lambda_t. For t in [0, T], we have: - log_alpha_t = self.marginal_log_mean_coeff(t) - sigma_t = self.marginal_std(t) - lambda_t = self.marginal_lambda(t) - Moreover, as lambda(t) is an invertible function, we also support its inverse function: - t = self.inverse_lambda(lambda_t) - =============================================================== - We support both discrete-time DPMs (trained on n = 0, 1, ..., N-1) and continuous-time DPMs (trained on t in [t_0, T]). - 1. For discrete-time DPMs: - For discrete-time DPMs trained on n = 0, 1, ..., N-1, we convert the discrete steps to continuous time steps by: - t_i = (i + 1) / N - e.g. for N = 1000, we have t_0 = 1e-3 and T = t_{N-1} = 1. - We solve the corresponding diffusion ODE from time T = 1 to time t_0 = 1e-3. - Args: - betas: A `torch.Tensor`. The beta array for the discrete-time DPM. (See the original DDPM paper for details) - alphas_cumprod: A `torch.Tensor`. The cumprod alphas for the discrete-time DPM. 
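# [Editor's illustration, not part of the original file] get_priority above accepts
# an int, a name string, or a Priority member and always returns the integer value
# from the table (smaller value = higher priority):
assert get_priority('NORMAL') == 50
assert get_priority(Priority.VERY_HIGH) == 10
assert get_priority(30) == 30            # bare ints must lie in [0, 100]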
(See the original DDPM paper for details) - Note that we always have alphas_cumprod = cumprod(betas). Therefore, we only need to set one of `betas` and `alphas_cumprod`. - **Important**: Please pay special attention for the args for `alphas_cumprod`: - The `alphas_cumprod` is the \hat{alpha_n} arrays in the notations of DDPM. Specifically, DDPMs assume that - q_{t_n | 0}(x_{t_n} | x_0) = N ( \sqrt{\hat{alpha_n}} * x_0, (1 - \hat{alpha_n}) * I ). - Therefore, the notation \hat{alpha_n} is different from the notation alpha_t in DPM-Solver. In fact, we have - alpha_{t_n} = \sqrt{\hat{alpha_n}}, - and - log(alpha_{t_n}) = 0.5 * log(\hat{alpha_n}). - 2. For continuous-time DPMs: - We support two types of VPSDEs: linear (DDPM) and cosine (improved-DDPM). The hyperparameters for the noise - schedule are the default settings in DDPM and improved-DDPM: - Args: - beta_min: A `float` number. The smallest beta for the linear schedule. - beta_max: A `float` number. The largest beta for the linear schedule. - cosine_s: A `float` number. The hyperparameter in the cosine schedule. - cosine_beta_max: A `float` number. The hyperparameter in the cosine schedule. - T: A `float` number. The ending time of the forward process. - =============================================================== - Args: - schedule: A `str`. The noise schedule of the forward SDE. 'discrete' for discrete-time DPMs, - 'linear' or 'cosine' for continuous-time DPMs. - Returns: - A wrapper object of the forward SDE (VP type). - - =============================================================== - Example: - # For discrete-time DPMs, given betas (the beta array for n = 0, 1, ..., N - 1): - >>> ns = NoiseScheduleVP('discrete', betas=betas) - # For discrete-time DPMs, given alphas_cumprod (the \hat{alpha_n} array for n = 0, 1, ..., N - 1): - >>> ns = NoiseScheduleVP('discrete', alphas_cumprod=alphas_cumprod) - # For continuous-time DPMs (VPSDE), linear schedule: - >>> ns = NoiseScheduleVP('linear', continuous_beta_0=0.1, continuous_beta_1=20.) - """ - - if schedule not in ['discrete', 'linear', 'cosine']: - raise ValueError( - "Unsupported noise schedule {}. The schedule needs to be 'discrete' or 'linear' or 'cosine'".format( - schedule)) - - self.schedule = schedule - if schedule == 'discrete': - if betas is not None: - log_alphas = 0.5 * torch.log(1 - betas).cumsum(dim=0) - else: - assert alphas_cumprod is not None - log_alphas = 0.5 * torch.log(alphas_cumprod) - self.total_N = len(log_alphas) - self.T = 1. - self.t_array = torch.linspace(0., 1., self.total_N + 1)[1:].reshape((1, -1)) - self.log_alpha_array = log_alphas.reshape((1, -1,)) - else: - self.total_N = 1000 - self.beta_0 = continuous_beta_0 - self.beta_1 = continuous_beta_1 - self.cosine_s = 0.008 - self.cosine_beta_max = 999. - self.cosine_t_max = math.atan(self.cosine_beta_max * (1. + self.cosine_s) / math.pi) * 2. * ( - 1. + self.cosine_s) / math.pi - self.cosine_s - self.cosine_log_alpha_0 = math.log(math.cos(self.cosine_s / (1. + self.cosine_s) * math.pi / 2.)) - self.schedule = schedule - if schedule == 'cosine': - # For the cosine schedule, T = 1 will have numerical issues. So we manually set the ending time T. - # Note that T = 0.9946 may be not the optimal setting. However, we find it works well. - self.T = 0.9946 - else: - self.T = 1. - - def marginal_log_mean_coeff(self, t): - """ - Compute log(alpha_t) of a given continuous-time label t in [0, T]. 
- """ - if self.schedule == 'discrete': - return interpolate_fn(t.reshape((-1, 1)), self.t_array.to(t.device), - self.log_alpha_array.to(t.device)).reshape((-1)) - elif self.schedule == 'linear': - return -0.25 * t ** 2 * (self.beta_1 - self.beta_0) - 0.5 * t * self.beta_0 - elif self.schedule == 'cosine': - log_alpha_fn = lambda s: torch.log(torch.cos((s + self.cosine_s) / (1. + self.cosine_s) * math.pi / 2.)) - log_alpha_t = log_alpha_fn(t) - self.cosine_log_alpha_0 - return log_alpha_t - - def marginal_alpha(self, t): - """ - Compute alpha_t of a given continuous-time label t in [0, T]. - """ - return torch.exp(self.marginal_log_mean_coeff(t)) - - def marginal_std(self, t): - """ - Compute sigma_t of a given continuous-time label t in [0, T]. - """ - return torch.sqrt(1. - torch.exp(2. * self.marginal_log_mean_coeff(t))) - - def marginal_lambda(self, t): - """ - Compute lambda_t = log(alpha_t) - log(sigma_t) of a given continuous-time label t in [0, T]. - """ - log_mean_coeff = self.marginal_log_mean_coeff(t) - log_std = 0.5 * torch.log(1. - torch.exp(2. * log_mean_coeff)) - return log_mean_coeff - log_std - - def inverse_lambda(self, lamb): - """ - Compute the continuous-time label t in [0, T] of a given half-logSNR lambda_t. - """ - if self.schedule == 'linear': - tmp = 2. * (self.beta_1 - self.beta_0) * torch.logaddexp(-2. * lamb, torch.zeros((1,)).to(lamb)) - Delta = self.beta_0 ** 2 + tmp - return tmp / (torch.sqrt(Delta) + self.beta_0) / (self.beta_1 - self.beta_0) - elif self.schedule == 'discrete': - log_alpha = -0.5 * torch.logaddexp(torch.zeros((1,)).to(lamb.device), -2. * lamb) - t = interpolate_fn(log_alpha.reshape((-1, 1)), torch.flip(self.log_alpha_array.to(lamb.device), [1]), - torch.flip(self.t_array.to(lamb.device), [1])) - return t.reshape((-1,)) - else: - log_alpha = -0.5 * torch.logaddexp(-2. * lamb, torch.zeros((1,)).to(lamb)) - t_fn = lambda log_alpha_t: torch.arccos(torch.exp(log_alpha_t + self.cosine_log_alpha_0)) * 2. * ( - 1. + self.cosine_s) / math.pi - self.cosine_s - t = t_fn(log_alpha) - return t - - -def model_wrapper( - model, - noise_schedule, - model_type="noise", - model_kwargs={}, - guidance_type="uncond", - condition=None, - unconditional_condition=None, - guidance_scale=1., - classifier_fn=None, - classifier_kwargs={}, -): - """Create a wrapper function for the noise prediction model. - DPM-Solver needs to solve the continuous-time diffusion ODEs. For DPMs trained on discrete-time labels, we need to - firstly wrap the model function to a noise prediction model that accepts the continuous time as the input. - We support four types of the diffusion model by setting `model_type`: - 1. "noise": noise prediction model. (Trained by predicting noise). - 2. "x_start": data prediction model. (Trained by predicting the data x_0 at time 0). - 3. "v": velocity prediction model. (Trained by predicting the velocity). - The "v" prediction is derivation detailed in Appendix D of [1], and is used in Imagen-Video [2]. - [1] Salimans, Tim, and Jonathan Ho. "Progressive distillation for fast sampling of diffusion models." - arXiv preprint arXiv:2202.00512 (2022). - [2] Ho, Jonathan, et al. "Imagen Video: High Definition Video Generation with Diffusion Models." - arXiv preprint arXiv:2210.02303 (2022). - - 4. "score": marginal score function. (Trained by denoising score matching). 
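# [Editor's illustration, not part of the original file] The schedule's half-logSNR
# lambda(t) defined above is invertible, so marginal_lambda and inverse_lambda are
# mutual inverses. A quick float32 sanity check for the linear VP schedule,
# assuming torch and NoiseScheduleVP from this file are importable:
import torch

ns = NoiseScheduleVP('linear', continuous_beta_0=0.1, continuous_beta_1=20.)
t = torch.tensor([0.25, 0.5, 0.75])
lam = ns.marginal_lambda(t)              # log(alpha_t) - log(sigma_t)
t_back = ns.inverse_lambda(lam)
assert torch.allclose(t, t_back, atol=1e-3)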
- Note that the score function and the noise prediction model follows a simple relationship: - ``` - noise(x_t, t) = -sigma_t * score(x_t, t) - ``` - We support three types of guided sampling by DPMs by setting `guidance_type`: - 1. "uncond": unconditional sampling by DPMs. - The input `model` has the following format: - `` - model(x, t_input, **model_kwargs) -> noise | x_start | v | score - `` - 2. "classifier": classifier guidance sampling [3] by DPMs and another classifier. - The input `model` has the following format: - `` - model(x, t_input, **model_kwargs) -> noise | x_start | v | score - `` - The input `classifier_fn` has the following format: - `` - classifier_fn(x, t_input, cond, **classifier_kwargs) -> logits(x, t_input, cond) - `` - [3] P. Dhariwal and A. Q. Nichol, "Diffusion models beat GANs on image synthesis," - in Advances in Neural Information Processing Systems, vol. 34, 2021, pp. 8780-8794. - 3. "classifier-free": classifier-free guidance sampling by conditional DPMs. - The input `model` has the following format: - `` - model(x, t_input, cond, **model_kwargs) -> noise | x_start | v | score - `` - And if cond == `unconditional_condition`, the model output is the unconditional DPM output. - [4] Ho, Jonathan, and Tim Salimans. "Classifier-free diffusion guidance." - arXiv preprint arXiv:2207.12598 (2022). - - The `t_input` is the time label of the model, which may be discrete-time labels (i.e. 0 to 999) - or continuous-time labels (i.e. epsilon to T). - We wrap the model function to accept only `x` and `t_continuous` as inputs, and outputs the predicted noise: - `` - def model_fn(x, t_continuous) -> noise: - t_input = get_model_input_time(t_continuous) - return noise_pred(model, x, t_input, **model_kwargs) - `` - where `t_continuous` is the continuous time labels (i.e. epsilon to T). And we use `model_fn` for DPM-Solver. - =============================================================== - Args: - model: A diffusion model with the corresponding format described above. - noise_schedule: A noise schedule object, such as NoiseScheduleVP. - model_type: A `str`. The parameterization type of the diffusion model. - "noise" or "x_start" or "v" or "score". - model_kwargs: A `dict`. A dict for the other inputs of the model function. - guidance_type: A `str`. The type of the guidance for sampling. - "uncond" or "classifier" or "classifier-free". - condition: A pytorch tensor. The condition for the guided sampling. - Only used for "classifier" or "classifier-free" guidance type. - unconditional_condition: A pytorch tensor. The condition for the unconditional sampling. - Only used for "classifier-free" guidance type. - guidance_scale: A `float`. The scale for the guided sampling. - classifier_fn: A classifier function. Only used for the classifier guidance. - classifier_kwargs: A `dict`. A dict for the other inputs of the classifier function. - Returns: - A noise prediction model that accepts the noised data and the continuous time as the inputs. - """ - - def get_model_input_time(t_continuous): - """ - Convert the continuous-time `t_continuous` (in [epsilon, T]) to the model input time. - For discrete-time DPMs, we convert `t_continuous` in [1 / N, 1] to `t_input` in [0, 1000 * (N - 1) / N]. - For continuous-time DPMs, we just use `t_continuous`. - """ - if noise_schedule.schedule == 'discrete': - return (t_continuous - 1. / noise_schedule.total_N) * 1000. 
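# [Editor's sketch, not part of the original file] Typical assembly of the wrapper
# described in the docstring above for classifier-free guidance. The names `unet`,
# `betas`, `cond` and `uncond` are placeholders that are not defined in this file:
ns = NoiseScheduleVP('discrete', betas=betas)
model_fn = model_wrapper(
    unet,
    ns,
    model_type="noise",                 # the network predicts epsilon
    guidance_type="classifier-free",
    condition=cond,
    unconditional_condition=uncond,
    guidance_scale=7.5,
)
# model_fn(x, t_continuous) now returns the guided noise prediction used by DPM-Solver.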
- else: - return t_continuous - - def noise_pred_fn(x, t_continuous, cond=None): - if t_continuous.reshape((-1,)).shape[0] == 1: - t_continuous = t_continuous.expand((x.shape[0])) - t_input = get_model_input_time(t_continuous) - if cond is None: - output = model(x, t_input, **model_kwargs) - else: - output = model(x, t_input, cond, **model_kwargs) - if model_type == "noise": - return output - elif model_type == "x_start": - alpha_t, sigma_t = noise_schedule.marginal_alpha(t_continuous), noise_schedule.marginal_std(t_continuous) - dims = x.dim() - return (x - expand_dims(alpha_t, dims) * output) / expand_dims(sigma_t, dims) - elif model_type == "v": - alpha_t, sigma_t = noise_schedule.marginal_alpha(t_continuous), noise_schedule.marginal_std(t_continuous) - dims = x.dim() - return expand_dims(alpha_t, dims) * output + expand_dims(sigma_t, dims) * x - elif model_type == "score": - sigma_t = noise_schedule.marginal_std(t_continuous) - dims = x.dim() - return -expand_dims(sigma_t, dims) * output - - def cond_grad_fn(x, t_input): - """ - Compute the gradient of the classifier, i.e. nabla_{x} log p_t(cond | x_t). - """ - with torch.enable_grad(): - x_in = x.detach().requires_grad_(True) - log_prob = classifier_fn(x_in, t_input, condition, **classifier_kwargs) - return torch.autograd.grad(log_prob.sum(), x_in)[0] - - def model_fn(x, t_continuous): - """ - The noise predicition model function that is used for DPM-Solver. - """ - if t_continuous.reshape((-1,)).shape[0] == 1: - t_continuous = t_continuous.expand((x.shape[0])) - if guidance_type == "uncond": - return noise_pred_fn(x, t_continuous) - elif guidance_type == "classifier": - assert classifier_fn is not None - t_input = get_model_input_time(t_continuous) - cond_grad = cond_grad_fn(x, t_input) - sigma_t = noise_schedule.marginal_std(t_continuous) - noise = noise_pred_fn(x, t_continuous) - return noise - guidance_scale * expand_dims(sigma_t, dims=cond_grad.dim()) * cond_grad - elif guidance_type == "classifier-free": - if guidance_scale == 1. or unconditional_condition is None: - return noise_pred_fn(x, t_continuous, cond=condition) - else: - x_in = torch.cat([x] * 2) - t_in = torch.cat([t_continuous] * 2) - c_in = torch.cat([unconditional_condition, condition]) - noise_uncond, noise = noise_pred_fn(x_in, t_in, cond=c_in).chunk(2) - return noise_uncond + guidance_scale * (noise - noise_uncond) - - assert model_type in ["noise", "x_start", "v"] - assert guidance_type in ["uncond", "classifier", "classifier-free"] - return model_fn - - -class DPM_Solver: - def __init__(self, model_fn, noise_schedule, predict_x0=False, thresholding=False, max_val=1.): - """Construct a DPM-Solver. - We support both the noise prediction model ("predicting epsilon") and the data prediction model ("predicting x0"). - If `predict_x0` is False, we use the solver for the noise prediction model (DPM-Solver). - If `predict_x0` is True, we use the solver for the data prediction model (DPM-Solver++). - In such case, we further support the "dynamic thresholding" in [1] when `thresholding` is True. - The "dynamic thresholding" can greatly improve the sample quality for pixel-space DPMs with large guidance scales. - Args: - model_fn: A noise prediction model function which accepts the continuous-time input (t in [epsilon, T]): - `` - def model_fn(x, t_continuous): - return noise - `` - noise_schedule: A noise schedule object, such as NoiseScheduleVP. - predict_x0: A `bool`. If true, use the data prediction model; else, use the noise prediction model. 
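# [Editor's sketch, not part of the original file] Building the solver around the
# wrapped model; `model_fn` and `ns` are assumed to come from model_wrapper /
# NoiseScheduleVP above. With predict_x0=True the solver integrates the
# data-prediction ("x0") ODE, i.e. DPM-Solver++; with predict_x0=False it uses the
# noise-prediction form (DPM-Solver):
dpm_solver = DPM_Solver(model_fn, ns, predict_x0=True, thresholding=False)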
- thresholding: A `bool`. Valid when `predict_x0` is True. Whether to use the "dynamic thresholding" in [1]. - max_val: A `float`. Valid when both `predict_x0` and `thresholding` are True. The max value for thresholding. - - [1] Chitwan Saharia, William Chan, Saurabh Saxena, Lala Li, Jay Whang, Emily Denton, Seyed Kamyar Seyed Ghasemipour, Burcu Karagol Ayan, S Sara Mahdavi, Rapha Gontijo Lopes, et al. Photorealistic text-to-image diffusion models with deep language understanding. arXiv preprint arXiv:2205.11487, 2022b. - """ - self.model = model_fn - self.noise_schedule = noise_schedule - self.predict_x0 = predict_x0 - self.thresholding = thresholding - self.max_val = max_val - - def noise_prediction_fn(self, x, t): - """ - Return the noise prediction model. - """ - return self.model(x, t) - - def data_prediction_fn(self, x, t): - """ - Return the data prediction model (with thresholding). - """ - noise = self.noise_prediction_fn(x, t) - dims = x.dim() - alpha_t, sigma_t = self.noise_schedule.marginal_alpha(t), self.noise_schedule.marginal_std(t) - x0 = (x - expand_dims(sigma_t, dims) * noise) / expand_dims(alpha_t, dims) - if self.thresholding: - p = 0.995 # A hyperparameter in the paper of "Imagen" [1]. - s = torch.quantile(torch.abs(x0).reshape((x0.shape[0], -1)), p, dim=1) - s = expand_dims(torch.maximum(s, self.max_val * torch.ones_like(s).to(s.device)), dims) - x0 = torch.clamp(x0, -s, s) / s - return x0 - - def model_fn(self, x, t): - """ - Convert the model to the noise prediction model or the data prediction model. - """ - if self.predict_x0: - return self.data_prediction_fn(x, t) - else: - return self.noise_prediction_fn(x, t) - - def get_time_steps(self, skip_type, t_T, t_0, N, device): - """Compute the intermediate time steps for sampling. - Args: - skip_type: A `str`. The type for the spacing of the time steps. We support three types: - - 'logSNR': uniform logSNR for the time steps. - - 'time_uniform': uniform time for the time steps. (**Recommended for high-resolutional data**.) - - 'time_quadratic': quadratic time for the time steps. (Used in DDIM for low-resolutional data.) - t_T: A `float`. The starting time of the sampling (default is T). - t_0: A `float`. The ending time of the sampling (default is epsilon). - N: A `int`. The total number of the spacing of the time steps. - device: A torch device. - Returns: - A pytorch tensor of the time steps, with the shape (N + 1,). - """ - if skip_type == 'logSNR': - lambda_T = self.noise_schedule.marginal_lambda(torch.tensor(t_T).to(device)) - lambda_0 = self.noise_schedule.marginal_lambda(torch.tensor(t_0).to(device)) - logSNR_steps = torch.linspace(lambda_T.cpu().item(), lambda_0.cpu().item(), N + 1).to(device) - return self.noise_schedule.inverse_lambda(logSNR_steps) - elif skip_type == 'time_uniform': - return torch.linspace(t_T, t_0, N + 1).to(device) - elif skip_type == 'time_quadratic': - t_order = 2 - t = torch.linspace(t_T ** (1. / t_order), t_0 ** (1. / t_order), N + 1).pow(t_order).to(device) - return t - else: - raise ValueError( - "Unsupported skip_type {}, need to be 'logSNR' or 'time_uniform' or 'time_quadratic'".format(skip_type)) - - def get_orders_and_timesteps_for_singlestep_solver(self, steps, order, skip_type, t_T, t_0, device): - """ - Get the order of each step for sampling by the singlestep DPM-Solver. - We combine both DPM-Solver-1,2,3 to use all the function evaluations, which is named as "DPM-Solver-fast". 
- Given a fixed number of function evaluations by `steps`, the sampling procedure by DPM-Solver-fast is: - - If order == 1: - We take `steps` of DPM-Solver-1 (i.e. DDIM). - - If order == 2: - - Denote K = (steps // 2). We take K or (K + 1) intermediate time steps for sampling. - - If steps % 2 == 0, we use K steps of DPM-Solver-2. - - If steps % 2 == 1, we use K steps of DPM-Solver-2 and 1 step of DPM-Solver-1. - - If order == 3: - - Denote K = (steps // 3 + 1). We take K intermediate time steps for sampling. - - If steps % 3 == 0, we use (K - 2) steps of DPM-Solver-3, and 1 step of DPM-Solver-2 and 1 step of DPM-Solver-1. - - If steps % 3 == 1, we use (K - 1) steps of DPM-Solver-3 and 1 step of DPM-Solver-1. - - If steps % 3 == 2, we use (K - 1) steps of DPM-Solver-3 and 1 step of DPM-Solver-2. - ============================================ - Args: - order: A `int`. The max order for the solver (2 or 3). - steps: A `int`. The total number of function evaluations (NFE). - skip_type: A `str`. The type for the spacing of the time steps. We support three types: - - 'logSNR': uniform logSNR for the time steps. - - 'time_uniform': uniform time for the time steps. (**Recommended for high-resolutional data**.) - - 'time_quadratic': quadratic time for the time steps. (Used in DDIM for low-resolutional data.) - t_T: A `float`. The starting time of the sampling (default is T). - t_0: A `float`. The ending time of the sampling (default is epsilon). - device: A torch device. - Returns: - orders: A list of the solver order of each step. - """ - if order == 3: - K = steps // 3 + 1 - if steps % 3 == 0: - orders = [3, ] * (K - 2) + [2, 1] - elif steps % 3 == 1: - orders = [3, ] * (K - 1) + [1] - else: - orders = [3, ] * (K - 1) + [2] - elif order == 2: - if steps % 2 == 0: - K = steps // 2 - orders = [2, ] * K - else: - K = steps // 2 + 1 - orders = [2, ] * (K - 1) + [1] - elif order == 1: - K = 1 - orders = [1, ] * steps - else: - raise ValueError("'order' must be '1' or '2' or '3'.") - if skip_type == 'logSNR': - # To reproduce the results in DPM-Solver paper - timesteps_outer = self.get_time_steps(skip_type, t_T, t_0, K, device) - else: - timesteps_outer = self.get_time_steps(skip_type, t_T, t_0, steps, device)[ - torch.cumsum(torch.tensor([0, ] + orders)).to(device)] - return timesteps_outer, orders - - def denoise_to_zero_fn(self, x, s): - """ - Denoise at the final step, which is equivalent to solve the ODE from lambda_s to infty by first-order discretization. - """ - return self.data_prediction_fn(x, s) - - def dpm_solver_first_update(self, x, s, t, model_s=None, return_intermediate=False): - """ - DPM-Solver-1 (equivalent to DDIM) from time `s` to time `t`. - Args: - x: A pytorch tensor. The initial value at time `s`. - s: A pytorch tensor. The starting time, with the shape (x.shape[0],). - t: A pytorch tensor. The ending time, with the shape (x.shape[0],). - model_s: A pytorch tensor. The model function evaluated at time `s`. - If `model_s` is None, we evaluate the model by `x` and `s`; otherwise we directly use it. - return_intermediate: A `bool`. If true, also return the model value at time `s`. - Returns: - x_t: A pytorch tensor. The approximated solution at time `t`. 
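# [Editor's worked example, not part of the original file] The order schedule
# described above, written out for a concrete budget of steps = 20 with order = 3:
steps = 20
K = steps // 3 + 1                       # 7 outer segments
orders = [3] * (K - 1) + [2]             # since steps % 3 == 2
assert sum(orders) == steps              # exactly `steps` model evaluations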
- """ - ns = self.noise_schedule - dims = x.dim() - lambda_s, lambda_t = ns.marginal_lambda(s), ns.marginal_lambda(t) - h = lambda_t - lambda_s - log_alpha_s, log_alpha_t = ns.marginal_log_mean_coeff(s), ns.marginal_log_mean_coeff(t) - sigma_s, sigma_t = ns.marginal_std(s), ns.marginal_std(t) - alpha_t = torch.exp(log_alpha_t) - - if self.predict_x0: - phi_1 = torch.expm1(-h) - if model_s is None: - model_s = self.model_fn(x, s) - x_t = ( - expand_dims(sigma_t / sigma_s, dims) * x - - expand_dims(alpha_t * phi_1, dims) * model_s - ) - if return_intermediate: - return x_t, {'model_s': model_s} - else: - return x_t - else: - phi_1 = torch.expm1(h) - if model_s is None: - model_s = self.model_fn(x, s) - x_t = ( - expand_dims(torch.exp(log_alpha_t - log_alpha_s), dims) * x - - expand_dims(sigma_t * phi_1, dims) * model_s - ) - if return_intermediate: - return x_t, {'model_s': model_s} - else: - return x_t - - def singlestep_dpm_solver_second_update(self, x, s, t, r1=0.5, model_s=None, return_intermediate=False, - solver_type='dpm_solver'): - """ - Singlestep solver DPM-Solver-2 from time `s` to time `t`. - Args: - x: A pytorch tensor. The initial value at time `s`. - s: A pytorch tensor. The starting time, with the shape (x.shape[0],). - t: A pytorch tensor. The ending time, with the shape (x.shape[0],). - r1: A `float`. The hyperparameter of the second-order solver. - model_s: A pytorch tensor. The model function evaluated at time `s`. - If `model_s` is None, we evaluate the model by `x` and `s`; otherwise we directly use it. - return_intermediate: A `bool`. If true, also return the model value at time `s` and `s1` (the intermediate time). - solver_type: either 'dpm_solver' or 'taylor'. The type for the high-order solvers. - The type slightly impacts the performance. We recommend to use 'dpm_solver' type. - Returns: - x_t: A pytorch tensor. The approximated solution at time `t`. - """ - if solver_type not in ['dpm_solver', 'taylor']: - raise ValueError("'solver_type' must be either 'dpm_solver' or 'taylor', got {}".format(solver_type)) - if r1 is None: - r1 = 0.5 - ns = self.noise_schedule - dims = x.dim() - lambda_s, lambda_t = ns.marginal_lambda(s), ns.marginal_lambda(t) - h = lambda_t - lambda_s - lambda_s1 = lambda_s + r1 * h - s1 = ns.inverse_lambda(lambda_s1) - log_alpha_s, log_alpha_s1, log_alpha_t = ns.marginal_log_mean_coeff(s), ns.marginal_log_mean_coeff( - s1), ns.marginal_log_mean_coeff(t) - sigma_s, sigma_s1, sigma_t = ns.marginal_std(s), ns.marginal_std(s1), ns.marginal_std(t) - alpha_s1, alpha_t = torch.exp(log_alpha_s1), torch.exp(log_alpha_t) - - if self.predict_x0: - phi_11 = torch.expm1(-r1 * h) - phi_1 = torch.expm1(-h) - - if model_s is None: - model_s = self.model_fn(x, s) - x_s1 = ( - expand_dims(sigma_s1 / sigma_s, dims) * x - - expand_dims(alpha_s1 * phi_11, dims) * model_s - ) - model_s1 = self.model_fn(x_s1, s1) - if solver_type == 'dpm_solver': - x_t = ( - expand_dims(sigma_t / sigma_s, dims) * x - - expand_dims(alpha_t * phi_1, dims) * model_s - - (0.5 / r1) * expand_dims(alpha_t * phi_1, dims) * (model_s1 - model_s) - ) - elif solver_type == 'taylor': - x_t = ( - expand_dims(sigma_t / sigma_s, dims) * x - - expand_dims(alpha_t * phi_1, dims) * model_s - + (1. / r1) * expand_dims(alpha_t * ((torch.exp(-h) - 1.) 
/ h + 1.), dims) * ( - model_s1 - model_s) - ) - else: - phi_11 = torch.expm1(r1 * h) - phi_1 = torch.expm1(h) - - if model_s is None: - model_s = self.model_fn(x, s) - x_s1 = ( - expand_dims(torch.exp(log_alpha_s1 - log_alpha_s), dims) * x - - expand_dims(sigma_s1 * phi_11, dims) * model_s - ) - model_s1 = self.model_fn(x_s1, s1) - if solver_type == 'dpm_solver': - x_t = ( - expand_dims(torch.exp(log_alpha_t - log_alpha_s), dims) * x - - expand_dims(sigma_t * phi_1, dims) * model_s - - (0.5 / r1) * expand_dims(sigma_t * phi_1, dims) * (model_s1 - model_s) - ) - elif solver_type == 'taylor': - x_t = ( - expand_dims(torch.exp(log_alpha_t - log_alpha_s), dims) * x - - expand_dims(sigma_t * phi_1, dims) * model_s - - (1. / r1) * expand_dims(sigma_t * ((torch.exp(h) - 1.) / h - 1.), dims) * (model_s1 - model_s) - ) - if return_intermediate: - return x_t, {'model_s': model_s, 'model_s1': model_s1} - else: - return x_t - - def singlestep_dpm_solver_third_update(self, x, s, t, r1=1. / 3., r2=2. / 3., model_s=None, model_s1=None, - return_intermediate=False, solver_type='dpm_solver'): - """ - Singlestep solver DPM-Solver-3 from time `s` to time `t`. - Args: - x: A pytorch tensor. The initial value at time `s`. - s: A pytorch tensor. The starting time, with the shape (x.shape[0],). - t: A pytorch tensor. The ending time, with the shape (x.shape[0],). - r1: A `float`. The hyperparameter of the third-order solver. - r2: A `float`. The hyperparameter of the third-order solver. - model_s: A pytorch tensor. The model function evaluated at time `s`. - If `model_s` is None, we evaluate the model by `x` and `s`; otherwise we directly use it. - model_s1: A pytorch tensor. The model function evaluated at time `s1` (the intermediate time given by `r1`). - If `model_s1` is None, we evaluate the model at `s1`; otherwise we directly use it. - return_intermediate: A `bool`. If true, also return the model value at time `s`, `s1` and `s2` (the intermediate times). - solver_type: either 'dpm_solver' or 'taylor'. The type for the high-order solvers. - The type slightly impacts the performance. We recommend to use 'dpm_solver' type. - Returns: - x_t: A pytorch tensor. The approximated solution at time `t`. - """ - if solver_type not in ['dpm_solver', 'taylor']: - raise ValueError("'solver_type' must be either 'dpm_solver' or 'taylor', got {}".format(solver_type)) - if r1 is None: - r1 = 1. / 3. - if r2 is None: - r2 = 2. / 3. - ns = self.noise_schedule - dims = x.dim() - lambda_s, lambda_t = ns.marginal_lambda(s), ns.marginal_lambda(t) - h = lambda_t - lambda_s - lambda_s1 = lambda_s + r1 * h - lambda_s2 = lambda_s + r2 * h - s1 = ns.inverse_lambda(lambda_s1) - s2 = ns.inverse_lambda(lambda_s2) - log_alpha_s, log_alpha_s1, log_alpha_s2, log_alpha_t = ns.marginal_log_mean_coeff( - s), ns.marginal_log_mean_coeff(s1), ns.marginal_log_mean_coeff(s2), ns.marginal_log_mean_coeff(t) - sigma_s, sigma_s1, sigma_s2, sigma_t = ns.marginal_std(s), ns.marginal_std(s1), ns.marginal_std( - s2), ns.marginal_std(t) - alpha_s1, alpha_s2, alpha_t = torch.exp(log_alpha_s1), torch.exp(log_alpha_s2), torch.exp(log_alpha_t) - - if self.predict_x0: - phi_11 = torch.expm1(-r1 * h) - phi_12 = torch.expm1(-r2 * h) - phi_1 = torch.expm1(-h) - phi_22 = torch.expm1(-r2 * h) / (r2 * h) + 1. - phi_2 = phi_1 / h + 1. 
- phi_3 = phi_2 / h - 0.5 - - if model_s is None: - model_s = self.model_fn(x, s) - if model_s1 is None: - x_s1 = ( - expand_dims(sigma_s1 / sigma_s, dims) * x - - expand_dims(alpha_s1 * phi_11, dims) * model_s - ) - model_s1 = self.model_fn(x_s1, s1) - x_s2 = ( - expand_dims(sigma_s2 / sigma_s, dims) * x - - expand_dims(alpha_s2 * phi_12, dims) * model_s - + r2 / r1 * expand_dims(alpha_s2 * phi_22, dims) * (model_s1 - model_s) - ) - model_s2 = self.model_fn(x_s2, s2) - if solver_type == 'dpm_solver': - x_t = ( - expand_dims(sigma_t / sigma_s, dims) * x - - expand_dims(alpha_t * phi_1, dims) * model_s - + (1. / r2) * expand_dims(alpha_t * phi_2, dims) * (model_s2 - model_s) - ) - elif solver_type == 'taylor': - D1_0 = (1. / r1) * (model_s1 - model_s) - D1_1 = (1. / r2) * (model_s2 - model_s) - D1 = (r2 * D1_0 - r1 * D1_1) / (r2 - r1) - D2 = 2. * (D1_1 - D1_0) / (r2 - r1) - x_t = ( - expand_dims(sigma_t / sigma_s, dims) * x - - expand_dims(alpha_t * phi_1, dims) * model_s - + expand_dims(alpha_t * phi_2, dims) * D1 - - expand_dims(alpha_t * phi_3, dims) * D2 - ) - else: - phi_11 = torch.expm1(r1 * h) - phi_12 = torch.expm1(r2 * h) - phi_1 = torch.expm1(h) - phi_22 = torch.expm1(r2 * h) / (r2 * h) - 1. - phi_2 = phi_1 / h - 1. - phi_3 = phi_2 / h - 0.5 - - if model_s is None: - model_s = self.model_fn(x, s) - if model_s1 is None: - x_s1 = ( - expand_dims(torch.exp(log_alpha_s1 - log_alpha_s), dims) * x - - expand_dims(sigma_s1 * phi_11, dims) * model_s - ) - model_s1 = self.model_fn(x_s1, s1) - x_s2 = ( - expand_dims(torch.exp(log_alpha_s2 - log_alpha_s), dims) * x - - expand_dims(sigma_s2 * phi_12, dims) * model_s - - r2 / r1 * expand_dims(sigma_s2 * phi_22, dims) * (model_s1 - model_s) - ) - model_s2 = self.model_fn(x_s2, s2) - if solver_type == 'dpm_solver': - x_t = ( - expand_dims(torch.exp(log_alpha_t - log_alpha_s), dims) * x - - expand_dims(sigma_t * phi_1, dims) * model_s - - (1. / r2) * expand_dims(sigma_t * phi_2, dims) * (model_s2 - model_s) - ) - elif solver_type == 'taylor': - D1_0 = (1. / r1) * (model_s1 - model_s) - D1_1 = (1. / r2) * (model_s2 - model_s) - D1 = (r2 * D1_0 - r1 * D1_1) / (r2 - r1) - D2 = 2. * (D1_1 - D1_0) / (r2 - r1) - x_t = ( - expand_dims(torch.exp(log_alpha_t - log_alpha_s), dims) * x - - expand_dims(sigma_t * phi_1, dims) * model_s - - expand_dims(sigma_t * phi_2, dims) * D1 - - expand_dims(sigma_t * phi_3, dims) * D2 - ) - - if return_intermediate: - return x_t, {'model_s': model_s, 'model_s1': model_s1, 'model_s2': model_s2} - else: - return x_t - - def multistep_dpm_solver_second_update(self, x, model_prev_list, t_prev_list, t, solver_type="dpm_solver"): - """ - Multistep solver DPM-Solver-2 from time `t_prev_list[-1]` to time `t`. - Args: - x: A pytorch tensor. The initial value at time `s`. - model_prev_list: A list of pytorch tensor. The previous computed model values. - t_prev_list: A list of pytorch tensor. The previous times, each time has the shape (x.shape[0],) - t: A pytorch tensor. The ending time, with the shape (x.shape[0],). - solver_type: either 'dpm_solver' or 'taylor'. The type for the high-order solvers. - The type slightly impacts the performance. We recommend to use 'dpm_solver' type. - Returns: - x_t: A pytorch tensor. The approximated solution at time `t`. 
- """ - if solver_type not in ['dpm_solver', 'taylor']: - raise ValueError("'solver_type' must be either 'dpm_solver' or 'taylor', got {}".format(solver_type)) - ns = self.noise_schedule - dims = x.dim() - model_prev_1, model_prev_0 = model_prev_list - t_prev_1, t_prev_0 = t_prev_list - lambda_prev_1, lambda_prev_0, lambda_t = ns.marginal_lambda(t_prev_1), ns.marginal_lambda( - t_prev_0), ns.marginal_lambda(t) - log_alpha_prev_0, log_alpha_t = ns.marginal_log_mean_coeff(t_prev_0), ns.marginal_log_mean_coeff(t) - sigma_prev_0, sigma_t = ns.marginal_std(t_prev_0), ns.marginal_std(t) - alpha_t = torch.exp(log_alpha_t) - - h_0 = lambda_prev_0 - lambda_prev_1 - h = lambda_t - lambda_prev_0 - r0 = h_0 / h - D1_0 = expand_dims(1. / r0, dims) * (model_prev_0 - model_prev_1) - if self.predict_x0: - if solver_type == 'dpm_solver': - x_t = ( - expand_dims(sigma_t / sigma_prev_0, dims) * x - - expand_dims(alpha_t * (torch.exp(-h) - 1.), dims) * model_prev_0 - - 0.5 * expand_dims(alpha_t * (torch.exp(-h) - 1.), dims) * D1_0 - ) - elif solver_type == 'taylor': - x_t = ( - expand_dims(sigma_t / sigma_prev_0, dims) * x - - expand_dims(alpha_t * (torch.exp(-h) - 1.), dims) * model_prev_0 - + expand_dims(alpha_t * ((torch.exp(-h) - 1.) / h + 1.), dims) * D1_0 - ) - else: - if solver_type == 'dpm_solver': - x_t = ( - expand_dims(torch.exp(log_alpha_t - log_alpha_prev_0), dims) * x - - expand_dims(sigma_t * (torch.exp(h) - 1.), dims) * model_prev_0 - - 0.5 * expand_dims(sigma_t * (torch.exp(h) - 1.), dims) * D1_0 - ) - elif solver_type == 'taylor': - x_t = ( - expand_dims(torch.exp(log_alpha_t - log_alpha_prev_0), dims) * x - - expand_dims(sigma_t * (torch.exp(h) - 1.), dims) * model_prev_0 - - expand_dims(sigma_t * ((torch.exp(h) - 1.) / h - 1.), dims) * D1_0 - ) - return x_t - - def multistep_dpm_solver_third_update(self, x, model_prev_list, t_prev_list, t, solver_type='dpm_solver'): - """ - Multistep solver DPM-Solver-3 from time `t_prev_list[-1]` to time `t`. - Args: - x: A pytorch tensor. The initial value at time `s`. - model_prev_list: A list of pytorch tensor. The previous computed model values. - t_prev_list: A list of pytorch tensor. The previous times, each time has the shape (x.shape[0],) - t: A pytorch tensor. The ending time, with the shape (x.shape[0],). - solver_type: either 'dpm_solver' or 'taylor'. The type for the high-order solvers. - The type slightly impacts the performance. We recommend to use 'dpm_solver' type. - Returns: - x_t: A pytorch tensor. The approximated solution at time `t`. - """ - ns = self.noise_schedule - dims = x.dim() - model_prev_2, model_prev_1, model_prev_0 = model_prev_list - t_prev_2, t_prev_1, t_prev_0 = t_prev_list - lambda_prev_2, lambda_prev_1, lambda_prev_0, lambda_t = ns.marginal_lambda(t_prev_2), ns.marginal_lambda( - t_prev_1), ns.marginal_lambda(t_prev_0), ns.marginal_lambda(t) - log_alpha_prev_0, log_alpha_t = ns.marginal_log_mean_coeff(t_prev_0), ns.marginal_log_mean_coeff(t) - sigma_prev_0, sigma_t = ns.marginal_std(t_prev_0), ns.marginal_std(t) - alpha_t = torch.exp(log_alpha_t) - - h_1 = lambda_prev_1 - lambda_prev_2 - h_0 = lambda_prev_0 - lambda_prev_1 - h = lambda_t - lambda_prev_0 - r0, r1 = h_0 / h, h_1 / h - D1_0 = expand_dims(1. / r0, dims) * (model_prev_0 - model_prev_1) - D1_1 = expand_dims(1. / r1, dims) * (model_prev_1 - model_prev_2) - D1 = D1_0 + expand_dims(r0 / (r0 + r1), dims) * (D1_0 - D1_1) - D2 = expand_dims(1. 
/ (r0 + r1), dims) * (D1_0 - D1_1) - if self.predict_x0: - x_t = ( - expand_dims(sigma_t / sigma_prev_0, dims) * x - - expand_dims(alpha_t * (torch.exp(-h) - 1.), dims) * model_prev_0 - + expand_dims(alpha_t * ((torch.exp(-h) - 1.) / h + 1.), dims) * D1 - - expand_dims(alpha_t * ((torch.exp(-h) - 1. + h) / h ** 2 - 0.5), dims) * D2 - ) - else: - x_t = ( - expand_dims(torch.exp(log_alpha_t - log_alpha_prev_0), dims) * x - - expand_dims(sigma_t * (torch.exp(h) - 1.), dims) * model_prev_0 - - expand_dims(sigma_t * ((torch.exp(h) - 1.) / h - 1.), dims) * D1 - - expand_dims(sigma_t * ((torch.exp(h) - 1. - h) / h ** 2 - 0.5), dims) * D2 - ) - return x_t - - def singlestep_dpm_solver_update(self, x, s, t, order, return_intermediate=False, solver_type='dpm_solver', r1=None, - r2=None): - """ - Singlestep DPM-Solver with the order `order` from time `s` to time `t`. - Args: - x: A pytorch tensor. The initial value at time `s`. - s: A pytorch tensor. The starting time, with the shape (x.shape[0],). - t: A pytorch tensor. The ending time, with the shape (x.shape[0],). - order: A `int`. The order of DPM-Solver. We only support order == 1 or 2 or 3. - return_intermediate: A `bool`. If true, also return the model value at time `s`, `s1` and `s2` (the intermediate times). - solver_type: either 'dpm_solver' or 'taylor'. The type for the high-order solvers. - The type slightly impacts the performance. We recommend to use 'dpm_solver' type. - r1: A `float`. The hyperparameter of the second-order or third-order solver. - r2: A `float`. The hyperparameter of the third-order solver. - Returns: - x_t: A pytorch tensor. The approximated solution at time `t`. - """ - if order == 1: - return self.dpm_solver_first_update(x, s, t, return_intermediate=return_intermediate) - elif order == 2: - return self.singlestep_dpm_solver_second_update(x, s, t, return_intermediate=return_intermediate, - solver_type=solver_type, r1=r1) - elif order == 3: - return self.singlestep_dpm_solver_third_update(x, s, t, return_intermediate=return_intermediate, - solver_type=solver_type, r1=r1, r2=r2) - else: - raise ValueError("Solver order must be 1 or 2 or 3, got {}".format(order)) - - def multistep_dpm_solver_update(self, x, model_prev_list, t_prev_list, t, order, solver_type='dpm_solver'): - """ - Multistep DPM-Solver with the order `order` from time `t_prev_list[-1]` to time `t`. - Args: - x: A pytorch tensor. The initial value at time `s`. - model_prev_list: A list of pytorch tensor. The previous computed model values. - t_prev_list: A list of pytorch tensor. The previous times, each time has the shape (x.shape[0],) - t: A pytorch tensor. The ending time, with the shape (x.shape[0],). - order: A `int`. The order of DPM-Solver. We only support order == 1 or 2 or 3. - solver_type: either 'dpm_solver' or 'taylor'. The type for the high-order solvers. - The type slightly impacts the performance. We recommend to use 'dpm_solver' type. - Returns: - x_t: A pytorch tensor. The approximated solution at time `t`. 
- """ - if order == 1: - return self.dpm_solver_first_update(x, t_prev_list[-1], t, model_s=model_prev_list[-1]) - elif order == 2: - return self.multistep_dpm_solver_second_update(x, model_prev_list, t_prev_list, t, solver_type=solver_type) - elif order == 3: - return self.multistep_dpm_solver_third_update(x, model_prev_list, t_prev_list, t, solver_type=solver_type) - else: - raise ValueError("Solver order must be 1 or 2 or 3, got {}".format(order)) - - def dpm_solver_adaptive(self, x, order, t_T, t_0, h_init=0.05, atol=0.0078, rtol=0.05, theta=0.9, t_err=1e-5, - solver_type='dpm_solver'): - """ - The adaptive step size solver based on singlestep DPM-Solver. - Args: - x: A pytorch tensor. The initial value at time `t_T`. - order: A `int`. The (higher) order of the solver. We only support order == 2 or 3. - t_T: A `float`. The starting time of the sampling (default is T). - t_0: A `float`. The ending time of the sampling (default is epsilon). - h_init: A `float`. The initial step size (for logSNR). - atol: A `float`. The absolute tolerance of the solver. For image data, the default setting is 0.0078, followed [1]. - rtol: A `float`. The relative tolerance of the solver. The default setting is 0.05. - theta: A `float`. The safety hyperparameter for adapting the step size. The default setting is 0.9, followed [1]. - t_err: A `float`. The tolerance for the time. We solve the diffusion ODE until the absolute error between the - current time and `t_0` is less than `t_err`. The default setting is 1e-5. - solver_type: either 'dpm_solver' or 'taylor'. The type for the high-order solvers. - The type slightly impacts the performance. We recommend to use 'dpm_solver' type. - Returns: - x_0: A pytorch tensor. The approximated solution at time `t_0`. - [1] A. Jolicoeur-Martineau, K. Li, R. Piché-Taillefer, T. Kachman, and I. Mitliagkas, "Gotta go fast when generating data with score-based models," arXiv preprint arXiv:2105.14080, 2021. - """ - ns = self.noise_schedule - s = t_T * torch.ones((x.shape[0],)).to(x) - lambda_s = ns.marginal_lambda(s) - lambda_0 = ns.marginal_lambda(t_0 * torch.ones_like(s).to(x)) - h = h_init * torch.ones_like(s).to(x) - x_prev = x - nfe = 0 - if order == 2: - r1 = 0.5 - lower_update = lambda x, s, t: self.dpm_solver_first_update(x, s, t, return_intermediate=True) - higher_update = lambda x, s, t, **kwargs: self.singlestep_dpm_solver_second_update(x, s, t, r1=r1, - solver_type=solver_type, - **kwargs) - elif order == 3: - r1, r2 = 1. / 3., 2. / 3. - lower_update = lambda x, s, t: self.singlestep_dpm_solver_second_update(x, s, t, r1=r1, - return_intermediate=True, - solver_type=solver_type) - higher_update = lambda x, s, t, **kwargs: self.singlestep_dpm_solver_third_update(x, s, t, r1=r1, r2=r2, - solver_type=solver_type, - **kwargs) - else: - raise ValueError("For adaptive step size solver, order must be 2 or 3, got {}".format(order)) - while torch.abs((s - t_0)).mean() > t_err: - t = ns.inverse_lambda(lambda_s + h) - x_lower, lower_noise_kwargs = lower_update(x, s, t) - x_higher = higher_update(x, s, t, **lower_noise_kwargs) - delta = torch.max(torch.ones_like(x).to(x) * atol, rtol * torch.max(torch.abs(x_lower), torch.abs(x_prev))) - norm_fn = lambda v: torch.sqrt(torch.square(v.reshape((v.shape[0], -1))).mean(dim=-1, keepdim=True)) - E = norm_fn((x_higher - x_lower) / delta).max() - if torch.all(E <= 1.): - x = x_higher - s = t - x_prev = x_lower - lambda_s = ns.marginal_lambda(s) - h = torch.min(theta * h * torch.float_power(E, -1. 
/ order).float(), lambda_0 - lambda_s) - nfe += order - print('adaptive solver nfe', nfe) - return x - - def sample(self, x, steps=20, t_start=None, t_end=None, order=3, skip_type='time_uniform', - method='singlestep', lower_order_final=True, denoise_to_zero=False, solver_type='dpm_solver', - atol=0.0078, rtol=0.05, - ): - """ - Compute the sample at time `t_end` by DPM-Solver, given the initial `x` at time `t_start`. - ===================================================== - We support the following algorithms for both noise prediction model and data prediction model: - - 'singlestep': - Singlestep DPM-Solver (i.e. "DPM-Solver-fast" in the paper), which combines different orders of singlestep DPM-Solver. - We combine all the singlestep solvers with order <= `order` to use up all the function evaluations (steps). - The total number of function evaluations (NFE) == `steps`. - Given a fixed NFE == `steps`, the sampling procedure is: - - If `order` == 1: - - Denote K = steps. We use K steps of DPM-Solver-1 (i.e. DDIM). - - If `order` == 2: - - Denote K = (steps // 2) + (steps % 2). We take K intermediate time steps for sampling. - - If steps % 2 == 0, we use K steps of singlestep DPM-Solver-2. - - If steps % 2 == 1, we use (K - 1) steps of singlestep DPM-Solver-2 and 1 step of DPM-Solver-1. - - If `order` == 3: - - Denote K = (steps // 3 + 1). We take K intermediate time steps for sampling. - - If steps % 3 == 0, we use (K - 2) steps of singlestep DPM-Solver-3, and 1 step of singlestep DPM-Solver-2 and 1 step of DPM-Solver-1. - - If steps % 3 == 1, we use (K - 1) steps of singlestep DPM-Solver-3 and 1 step of DPM-Solver-1. - - If steps % 3 == 2, we use (K - 1) steps of singlestep DPM-Solver-3 and 1 step of singlestep DPM-Solver-2. - - 'multistep': - Multistep DPM-Solver with the order of `order`. The total number of function evaluations (NFE) == `steps`. - We initialize the first `order` values by lower order multistep solvers. - Given a fixed NFE == `steps`, the sampling procedure is: - Denote K = steps. - - If `order` == 1: - - We use K steps of DPM-Solver-1 (i.e. DDIM). - - If `order` == 2: - - We firstly use 1 step of DPM-Solver-1, then use (K - 1) step of multistep DPM-Solver-2. - - If `order` == 3: - - We firstly use 1 step of DPM-Solver-1, then 1 step of multistep DPM-Solver-2, then (K - 2) step of multistep DPM-Solver-3. - - 'singlestep_fixed': - Fixed order singlestep DPM-Solver (i.e. DPM-Solver-1 or singlestep DPM-Solver-2 or singlestep DPM-Solver-3). - We use singlestep DPM-Solver-`order` for `order`=1 or 2 or 3, with total [`steps` // `order`] * `order` NFE. - - 'adaptive': - Adaptive step size DPM-Solver (i.e. "DPM-Solver-12" and "DPM-Solver-23" in the paper). - We ignore `steps` and use adaptive step size DPM-Solver with a higher order of `order`. - You can adjust the absolute tolerance `atol` and the relative tolerance `rtol` to balance the computatation costs - (NFE) and the sample quality. - - If `order` == 2, we use DPM-Solver-12 which combines DPM-Solver-1 and singlestep DPM-Solver-2. - - If `order` == 3, we use DPM-Solver-23 which combines singlestep DPM-Solver-2 and singlestep DPM-Solver-3. - ===================================================== - Some advices for choosing the algorithm: - - For **unconditional sampling** or **guided sampling with small guidance scale** by DPMs: - Use singlestep DPM-Solver ("DPM-Solver-fast" in the paper) with `order = 3`. - e.g. 
- >>> dpm_solver = DPM_Solver(model_fn, noise_schedule, predict_x0=False) - >>> x_sample = dpm_solver.sample(x, steps=steps, t_start=t_start, t_end=t_end, order=3, - skip_type='time_uniform', method='singlestep') - - For **guided sampling with large guidance scale** by DPMs: - Use multistep DPM-Solver with `predict_x0 = True` and `order = 2`. - e.g. - >>> dpm_solver = DPM_Solver(model_fn, noise_schedule, predict_x0=True) - >>> x_sample = dpm_solver.sample(x, steps=steps, t_start=t_start, t_end=t_end, order=2, - skip_type='time_uniform', method='multistep') - We support three types of `skip_type`: - - 'logSNR': uniform logSNR for the time steps. **Recommended for low-resolutional images** - - 'time_uniform': uniform time for the time steps. **Recommended for high-resolutional images**. - - 'time_quadratic': quadratic time for the time steps. - ===================================================== - Args: - x: A pytorch tensor. The initial value at time `t_start` - e.g. if `t_start` == T, then `x` is a sample from the standard normal distribution. - steps: A `int`. The total number of function evaluations (NFE). - t_start: A `float`. The starting time of the sampling. - If `T` is None, we use self.noise_schedule.T (default is 1.0). - t_end: A `float`. The ending time of the sampling. - If `t_end` is None, we use 1. / self.noise_schedule.total_N. - e.g. if total_N == 1000, we have `t_end` == 1e-3. - For discrete-time DPMs: - - We recommend `t_end` == 1. / self.noise_schedule.total_N. - For continuous-time DPMs: - - We recommend `t_end` == 1e-3 when `steps` <= 15; and `t_end` == 1e-4 when `steps` > 15. - order: A `int`. The order of DPM-Solver. - skip_type: A `str`. The type for the spacing of the time steps. 'time_uniform' or 'logSNR' or 'time_quadratic'. - method: A `str`. The method for sampling. 'singlestep' or 'multistep' or 'singlestep_fixed' or 'adaptive'. - denoise_to_zero: A `bool`. Whether to denoise to time 0 at the final step. - Default is `False`. If `denoise_to_zero` is `True`, the total NFE is (`steps` + 1). - This trick is firstly proposed by DDPM (https://arxiv.org/abs/2006.11239) and - score_sde (https://arxiv.org/abs/2011.13456). Such trick can improve the FID - for diffusion models sampling by diffusion SDEs for low-resolutional images - (such as CIFAR-10). However, we observed that such trick does not matter for - high-resolutional images. As it needs an additional NFE, we do not recommend - it for high-resolutional images. - lower_order_final: A `bool`. Whether to use lower order solvers at the final steps. - Only valid for `method=multistep` and `steps < 15`. We empirically find that - this trick is a key to stabilizing the sampling by DPM-Solver with very few steps - (especially for steps <= 10). So we recommend to set it to be `True`. - solver_type: A `str`. The taylor expansion type for the solver. `dpm_solver` or `taylor`. We recommend `dpm_solver`. - atol: A `float`. The absolute tolerance of the adaptive step size solver. Valid when `method` == 'adaptive'. - rtol: A `float`. The relative tolerance of the adaptive step size solver. Valid when `method` == 'adaptive'. - Returns: - x_end: A pytorch tensor. The approximated solution at time `t_end`. - """ - t_0 = 1. 
/ self.noise_schedule.total_N if t_end is None else t_end - t_T = self.noise_schedule.T if t_start is None else t_start - device = x.device - if method == 'adaptive': - with torch.no_grad(): - x = self.dpm_solver_adaptive(x, order=order, t_T=t_T, t_0=t_0, atol=atol, rtol=rtol, - solver_type=solver_type) - elif method == 'multistep': - assert steps >= order - timesteps = self.get_time_steps(skip_type=skip_type, t_T=t_T, t_0=t_0, N=steps, device=device) - assert timesteps.shape[0] - 1 == steps - with torch.no_grad(): - vec_t = timesteps[0].expand((x.shape[0])) - model_prev_list = [self.model_fn(x, vec_t)] - t_prev_list = [vec_t] - # Init the first `order` values by lower order multistep DPM-Solver. - for init_order in tqdm(range(1, order), desc="DPM init order"): - vec_t = timesteps[init_order].expand(x.shape[0]) - x = self.multistep_dpm_solver_update(x, model_prev_list, t_prev_list, vec_t, init_order, - solver_type=solver_type) - model_prev_list.append(self.model_fn(x, vec_t)) - t_prev_list.append(vec_t) - # Compute the remaining values by `order`-th order multistep DPM-Solver. - for step in tqdm(range(order, steps + 1), desc="DPM multistep"): - vec_t = timesteps[step].expand(x.shape[0]) - if lower_order_final and steps < 15: - step_order = min(order, steps + 1 - step) - else: - step_order = order - x = self.multistep_dpm_solver_update(x, model_prev_list, t_prev_list, vec_t, step_order, - solver_type=solver_type) - for i in range(order - 1): - t_prev_list[i] = t_prev_list[i + 1] - model_prev_list[i] = model_prev_list[i + 1] - t_prev_list[-1] = vec_t - # We do not need to evaluate the final model value. - if step < steps: - model_prev_list[-1] = self.model_fn(x, vec_t) - elif method in ['singlestep', 'singlestep_fixed']: - if method == 'singlestep': - timesteps_outer, orders = self.get_orders_and_timesteps_for_singlestep_solver(steps=steps, order=order, - skip_type=skip_type, - t_T=t_T, t_0=t_0, - device=device) - elif method == 'singlestep_fixed': - K = steps // order - orders = [order, ] * K - timesteps_outer = self.get_time_steps(skip_type=skip_type, t_T=t_T, t_0=t_0, N=K, device=device) - for i, order in enumerate(orders): - t_T_inner, t_0_inner = timesteps_outer[i], timesteps_outer[i + 1] - timesteps_inner = self.get_time_steps(skip_type=skip_type, t_T=t_T_inner.item(), t_0=t_0_inner.item(), - N=order, device=device) - lambda_inner = self.noise_schedule.marginal_lambda(timesteps_inner) - vec_s, vec_t = t_T_inner.tile(x.shape[0]), t_0_inner.tile(x.shape[0]) - h = lambda_inner[-1] - lambda_inner[0] - r1 = None if order <= 1 else (lambda_inner[1] - lambda_inner[0]) / h - r2 = None if order <= 2 else (lambda_inner[2] - lambda_inner[0]) / h - x = self.singlestep_dpm_solver_update(x, vec_s, vec_t, order, solver_type=solver_type, r1=r1, r2=r2) - if denoise_to_zero: - x = self.denoise_to_zero_fn(x, torch.ones((x.shape[0],)).to(device) * t_0) - return x - - -############################################################# -# other utility functions -############################################################# - -def interpolate_fn(x, xp, yp): - """ - A piecewise linear function y = f(x), using xp and yp as keypoints. - We implement f(x) in a differentiable way (i.e. applicable for autograd). - The function f(x) is well-defined for all x-axis. (For x beyond the bounds of xp, we use the outmost points of xp to define the linear function.) - Args: - x: PyTorch tensor with shape [N, C], where N is the batch size, C is the number of channels (we use C = 1 for DPM-Solver). 
- xp: PyTorch tensor with shape [C, K], where K is the number of keypoints. - yp: PyTorch tensor with shape [C, K]. - Returns: - The function values f(x), with shape [N, C]. - """ - N, K = x.shape[0], xp.shape[1] - all_x = torch.cat([x.unsqueeze(2), xp.unsqueeze(0).repeat((N, 1, 1))], dim=2) - sorted_all_x, x_indices = torch.sort(all_x, dim=2) - x_idx = torch.argmin(x_indices, dim=2) - cand_start_idx = x_idx - 1 - start_idx = torch.where( - torch.eq(x_idx, 0), - torch.tensor(1, device=x.device), - torch.where( - torch.eq(x_idx, K), torch.tensor(K - 2, device=x.device), cand_start_idx, - ), - ) - end_idx = torch.where(torch.eq(start_idx, cand_start_idx), start_idx + 2, start_idx + 1) - start_x = torch.gather(sorted_all_x, dim=2, index=start_idx.unsqueeze(2)).squeeze(2) - end_x = torch.gather(sorted_all_x, dim=2, index=end_idx.unsqueeze(2)).squeeze(2) - start_idx2 = torch.where( - torch.eq(x_idx, 0), - torch.tensor(0, device=x.device), - torch.where( - torch.eq(x_idx, K), torch.tensor(K - 2, device=x.device), cand_start_idx, - ), - ) - y_positions_expanded = yp.unsqueeze(0).expand(N, -1, -1) - start_y = torch.gather(y_positions_expanded, dim=2, index=start_idx2.unsqueeze(2)).squeeze(2) - end_y = torch.gather(y_positions_expanded, dim=2, index=(start_idx2 + 1).unsqueeze(2)).squeeze(2) - cand = start_y + (x - start_x) * (end_y - start_y) / (end_x - start_x) - return cand - - -def expand_dims(v, dims): - """ - Expand the tensor `v` to the dim `dims`. - Args: - `v`: a PyTorch tensor with shape [N]. - `dim`: a `int`. - Returns: - a PyTorch tensor with shape [N, 1, 1, ..., 1] and the total dimension is `dims`. - """ - return v[(...,) + (None,) * (dims - 1)] \ No newline at end of file diff --git a/spaces/georgefen/Face-Landmark-ControlNet/ldm/modules/diffusionmodules/__init__.py b/spaces/georgefen/Face-Landmark-ControlNet/ldm/modules/diffusionmodules/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/gotiQspiryo/whisper-ui/examples/Jeppesen Vfr Nav Log Pdf Free What You Need to Know Before You Fly.md b/spaces/gotiQspiryo/whisper-ui/examples/Jeppesen Vfr Nav Log Pdf Free What You Need to Know Before You Fly.md deleted file mode 100644 index b01dea1c1423b729cef6095053c3cde6f42babe7..0000000000000000000000000000000000000000 --- a/spaces/gotiQspiryo/whisper-ui/examples/Jeppesen Vfr Nav Log Pdf Free What You Need to Know Before You Fly.md +++ /dev/null @@ -1,5 +0,0 @@ -
        -

        The User Charts feature supports MBTiles, an open-source file format developed by MapBox that allows large charts to be compressed and distributed efficiently. The feature does not yet support vector charts, which can also use the .mbtiles extension. A great tool for creating your own MBTiles is MapTiler, which is free to download and lets you quickly georeference and export raster charts.
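        Since an .mbtiles file is just a SQLite database, you can inspect a chart set with a few lines of Python. The snippet below is only a minimal sketch that assumes the standard MBTiles layout (a metadata table plus a tiles table keyed by zoom/column/row); the file name charts.mbtiles is a placeholder.

        import sqlite3

        con = sqlite3.connect("charts.mbtiles")  # placeholder path to an exported chart set

        # The metadata table describes the tile set: name, format, bounds, zoom range, ...
        for name, value in con.execute("SELECT name, value FROM metadata"):
            print(f"{name}: {value}")

        # Raster tiles are stored as blobs keyed by zoom_level / tile_column / tile_row.
        first = con.execute(
            "SELECT zoom_level, tile_column, tile_row, length(tile_data) FROM tiles LIMIT 1"
        ).fetchone()
        print("first tile (z, x, y, bytes):", first)

        con.close()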

        -

        Jeppesen Vfr Nav Log Pdf Free


        Download File » https://urlgoal.com/2uyNvH



        aaccfb2cb3
        -
        -
        \ No newline at end of file diff --git a/spaces/h2oai/wave-tour/examples/graphics_spline.py b/spaces/h2oai/wave-tour/examples/graphics_spline.py deleted file mode 100644 index 4b218ce7f13f9d20f568de236d10cee0c6ea4553..0000000000000000000000000000000000000000 --- a/spaces/h2oai/wave-tour/examples/graphics_spline.py +++ /dev/null @@ -1,73 +0,0 @@ -# Graphics / Spline -# Use the #graphics module to render splines. -# --- - -import random -from h2o_wave import site, ui, graphics as g - -x = [i * 20 for i in range(50)] -y = [ - 88, 100, 116, 128, 126, 128, 118, 108, 121, 120, 99, 113, 117, 103, 98, 90, 104, 98, 82, 102, 104, 89, 87, 69, - 88, 97, 91, 105, 98, 86, 90, 107, 97, 107, 108, 128, 144, 148, 126, 106, 89, 99, 78, 70, 69, 64, 45, 29, 27, 38 -] -y0 = [v - random.randint(5, min(y)) for v in y] - -line_style = dict(fill='none', stroke='crimson', stroke_width=4) -area_style = dict(fill='crimson') - -splines = [ - # Lines - g.spline(x=x, y=y, **line_style), # same as curve='linear' - g.spline(x=x, y=y, curve='basis', **line_style), - g.spline(x=x, y=y, curve='basis-closed', **line_style), - g.spline(x=x, y=y, curve='basis-open', **line_style), - g.spline(x=x, y=y, curve='cardinal', **line_style), - g.spline(x=x, y=y, curve='cardinal-closed', **line_style), - g.spline(x=x, y=y, curve='cardinal-open', **line_style), - g.spline(x=x, y=y, curve='smooth', **line_style), - g.spline(x=x, y=y, curve='smooth-closed', **line_style), - g.spline(x=x, y=y, curve='smooth-open', **line_style), - g.spline(x=x, y=y, curve='linear', **line_style), - g.spline(x=x, y=y, curve='linear-closed', **line_style), - g.spline(x=x, y=y, curve='monotone-x', **line_style), - g.spline(x=x, y=y, curve='monotone-y', **line_style), - g.spline(x=x, y=y, curve='natural', **line_style), - g.spline(x=x, y=y, curve='step', **line_style), - g.spline(x=x, y=y, curve='step-after', **line_style), - g.spline(x=x, y=y, curve='step-before', **line_style), - # Areas - g.spline(x=x, y=y, y0=y0, **area_style), # same as curve='linear' - g.spline(x=x, y=y, y0=y0, curve='basis', **area_style), - g.spline(x=x, y=y, y0=[], curve='basis', **area_style), - g.spline(x=x, y=y, y0=y0, curve='basis-open', **area_style), - g.spline(x=x, y=y, y0=y0, curve='cardinal', **area_style), - g.spline(x=x, y=y, y0=[], curve='cardinal', **area_style), - g.spline(x=x, y=y, y0=y0, curve='cardinal-open', **area_style), - g.spline(x=x, y=y, y0=y0, curve='smooth', **area_style), - g.spline(x=x, y=y, y0=[], curve='smooth', **area_style), - g.spline(x=x, y=y, y0=y0, curve='smooth-open', **area_style), - g.spline(x=x, y=y, y0=y0, curve='linear', **area_style), - g.spline(x=x, y=y, y0=[], curve='linear', **area_style), - g.spline(x=x, y=y, y0=y0, curve='monotone-x', **area_style), - g.spline(x=x, y=y, y0=y0, curve='monotone-y', **area_style), - g.spline(x=x, y=y, y0=y0, curve='natural', **area_style), - g.spline(x=x, y=y, y0=y0, curve='step', **area_style), - g.spline(x=x, y=y, y0=y0, curve='step-after', **area_style), - g.spline(x=x, y=y, y0=y0, curve='step-before', **area_style), -] - -page = site['/demo'] -row, col = 1, 1 -for spline in splines: - page[f'spline_{col}_{row}'] = ui.graphics_card( - box=f'{col} {row} 3 1', view_box='0 0 1000 150', width='100%', height='100%', - stage=g.stage( - text=g.text(text=spline.curve or '', y=40, font_size=40), - spline=spline, - ), - ) - col += 3 - if col > 11: - row, col = row + 1, 1 - -page.save() diff --git a/spaces/h2oai/wave-tour/examples/plot_line_labels_stroked.py 
b/spaces/h2oai/wave-tour/examples/plot_line_labels_stroked.py deleted file mode 100644 index e66de6d3ebab71db113a7791c7642aef3ed49a23..0000000000000000000000000000000000000000 --- a/spaces/h2oai/wave-tour/examples/plot_line_labels_stroked.py +++ /dev/null @@ -1,30 +0,0 @@ -# Plot / Line / Labels / Stroked -# Customize label rendering: add a subtle outline to labels to improve readability. -# #plot -# --- -from h2o_wave import site, data, ui - -page = site['/demo'] - -page.add('example', ui.plot_card( - box='1 1 4 5', - title='Line, labels less messy', - data=data('year price', 9, rows=[ - ('1991', 3), - ('1992', 4), - ('1993', 3.5), - ('1994', 5), - ('1995', 4.9), - ('1996', 6), - ('1997', 7), - ('1998', 9), - ('1999', 13), - ]), - plot=ui.plot([ - ui.mark(type='line', x_scale='time', x='=year', y='=price', y_min=0, - label='=${{intl price minimum_fraction_digits=2 maximum_fraction_digits=2}}', - label_fill_color='rgba(0,0,0,0.65)', label_stroke_color='#fff', label_stroke_size=2) - ]) -)) - -page.save() diff --git a/spaces/h2oai/wave-tour/examples/tour-assets/pyodide/pyodide.asm.js b/spaces/h2oai/wave-tour/examples/tour-assets/pyodide/pyodide.asm.js deleted file mode 100644 index dc9551b50999e3c9abb0c822bf4935278ddf1020..0000000000000000000000000000000000000000 --- a/spaces/h2oai/wave-tour/examples/tour-assets/pyodide/pyodide.asm.js +++ /dev/null @@ -1,21 +0,0 @@ - "use strict"; - let setImmediate = globalThis.setImmediate; - let clearImmediate = globalThis.clearImmediate; - let baseName, fpcGOT, dyncallGOT, fpVal, dcVal; - - -var _createPyodideModule = (function() { - var _scriptDir = typeof document !== 'undefined' && document.currentScript ? document.currentScript.src : undefined; - if (typeof __filename !== 'undefined') _scriptDir = _scriptDir || __filename; - return ( -function(_createPyodideModule) { - _createPyodideModule = _createPyodideModule || {}; - -var Module=typeof _createPyodideModule!=="undefined"?_createPyodideModule:{};var readyPromiseResolve,readyPromiseReject;Module["ready"]=new Promise(function(resolve,reject){readyPromiseResolve=resolve;readyPromiseReject=reject});if(!Module.expectedDataFileDownloads){Module.expectedDataFileDownloads=0}Module.expectedDataFileDownloads++;(function(){var loadPackage=function(metadata){var PACKAGE_PATH="";if(typeof window==="object"){PACKAGE_PATH=window["encodeURIComponent"](window.location.pathname.toString().substring(0,window.location.pathname.toString().lastIndexOf("/"))+"/")}else if(typeof process==="undefined"&&typeof location!=="undefined"){PACKAGE_PATH=encodeURIComponent(location.pathname.toString().substring(0,location.pathname.toString().lastIndexOf("/"))+"/")}var PACKAGE_NAME="build/pyodide.asm.data";var REMOTE_PACKAGE_BASE="pyodide.asm.data";if(typeof Module["locateFilePackage"]==="function"&&!Module["locateFile"]){Module["locateFile"]=Module["locateFilePackage"];err("warning: you defined Module.locateFilePackage, that has been renamed to Module.locateFile (using your locateFilePackage for now)")}var REMOTE_PACKAGE_NAME=Module["locateFile"]?Module["locateFile"](REMOTE_PACKAGE_BASE,""):REMOTE_PACKAGE_BASE;var REMOTE_PACKAGE_SIZE=metadata["remote_package_size"];var PACKAGE_UUID=metadata["package_uuid"];function fetchRemotePackage(packageName,packageSize,callback,errback){if(typeof process==="object"){require("fs").readFile(packageName,function(err,contents){if(err){errback(err)}else{callback(contents.buffer)}});return}var xhr=new 
XMLHttpRequest;xhr.open("GET",packageName,true);xhr.responseType="arraybuffer";xhr.onprogress=function(event){var url=packageName;var size=packageSize;if(event.total)size=event.total;if(event.loaded){if(!xhr.addedTotal){xhr.addedTotal=true;if(!Module.dataFileDownloads)Module.dataFileDownloads={};Module.dataFileDownloads[url]={loaded:event.loaded,total:size}}else{Module.dataFileDownloads[url].loaded=event.loaded}var total=0;var loaded=0;var num=0;for(var download in Module.dataFileDownloads){var data=Module.dataFileDownloads[download];total+=data.total;loaded+=data.loaded;num++}total=Math.ceil(total*Module.expectedDataFileDownloads/num);if(Module["setStatus"])Module["setStatus"]("Downloading data... ("+loaded+"/"+total+")")}else if(!Module.dataFileDownloads){if(Module["setStatus"])Module["setStatus"]("Downloading data...")}};xhr.onerror=function(event){throw new Error("NetworkError for: "+packageName)};xhr.onload=function(event){if(xhr.status==200||xhr.status==304||xhr.status==206||xhr.status==0&&xhr.response){var packageData=xhr.response;callback(packageData)}else{throw new Error(xhr.statusText+" : "+xhr.responseURL)}};xhr.send(null)}function handleError(error){console.error("package error:",error)}var fetchedCallback=null;var fetched=Module["getPreloadedPackage"]?Module["getPreloadedPackage"](REMOTE_PACKAGE_NAME,REMOTE_PACKAGE_SIZE):null;if(!fetched)fetchRemotePackage(REMOTE_PACKAGE_NAME,REMOTE_PACKAGE_SIZE,function(data){if(fetchedCallback){fetchedCallback(data);fetchedCallback=null}else{fetched=data}},handleError);function runWithFS(){function assert(check,msg){if(!check)throw msg+(new Error).stack}Module["FS_createPath"]("/","lib",true,true);Module["FS_createPath"]("/lib","python3.10",true,true);Module["FS_createPath"]("/lib/python3.10","asyncio",true,true);Module["FS_createPath"]("/lib/python3.10","collections",true,true);Module["FS_createPath"]("/lib/python3.10","concurrent",true,true);Module["FS_createPath"]("/lib/python3.10/concurrent","futures",true,true);Module["FS_createPath"]("/lib/python3.10","ctypes",true,true);Module["FS_createPath"]("/lib/python3.10/ctypes","macholib",true,true);Module["FS_createPath"]("/lib/python3.10","tzdata-2022.1.dist-info",true,true);Module["FS_createPath"]("/lib/python3.10","email",true,true);Module["FS_createPath"]("/lib/python3.10/email","mime",true,true);Module["FS_createPath"]("/lib/python3.10","encodings",true,true);Module["FS_createPath"]("/lib/python3.10","html",true,true);Module["FS_createPath"]("/lib/python3.10","http",true,true);Module["FS_createPath"]("/lib/python3.10","importlib",true,true);Module["FS_createPath"]("/lib/python3.10/importlib","metadata",true,true);Module["FS_createPath"]("/lib/python3.10","json",true,true);Module["FS_createPath"]("/lib/python3.10","logging",true,true);Module["FS_createPath"]("/lib/python3.10","multiprocessing",true,true);Module["FS_createPath"]("/lib/python3.10/multiprocessing","dummy",true,true);Module["FS_createPath"]("/lib/python3.10","pydoc_data",true,true);Module["FS_createPath"]("/lib/python3.10","site-packages",true,true);Module["FS_createPath"]("/lib/python3.10","sqlite3",true,true);Module["FS_createPath"]("/lib/python3.10","tzdata",true,true);Module["FS_createPath"]("/lib/python3.10/tzdata","zoneinfo",true,true);Module["FS_createPath"]("/lib/python3.10/tzdata/zoneinfo","Africa",true,true);Module["FS_createPath"]("/lib/python3.10/tzdata/zoneinfo","America",true,true);Module["FS_createPath"]("/lib/python3.10/tzdata/zoneinfo/America","Argentina",true,true);Module["FS_createPath"]("/lib/python3.10/tzda
ta/zoneinfo/America","Indiana",true,true);Module["FS_createPath"]("/lib/python3.10/tzdata/zoneinfo/America","Kentucky",true,true);Module["FS_createPath"]("/lib/python3.10/tzdata/zoneinfo/America","North_Dakota",true,true);Module["FS_createPath"]("/lib/python3.10/tzdata/zoneinfo","Antarctica",true,true);Module["FS_createPath"]("/lib/python3.10/tzdata/zoneinfo","Arctic",true,true);Module["FS_createPath"]("/lib/python3.10/tzdata/zoneinfo","Asia",true,true);Module["FS_createPath"]("/lib/python3.10/tzdata/zoneinfo","Atlantic",true,true);Module["FS_createPath"]("/lib/python3.10/tzdata/zoneinfo","Australia",true,true);Module["FS_createPath"]("/lib/python3.10/tzdata/zoneinfo","Brazil",true,true);Module["FS_createPath"]("/lib/python3.10/tzdata/zoneinfo","Canada",true,true);Module["FS_createPath"]("/lib/python3.10/tzdata/zoneinfo","Chile",true,true);Module["FS_createPath"]("/lib/python3.10/tzdata/zoneinfo","Etc",true,true);Module["FS_createPath"]("/lib/python3.10/tzdata/zoneinfo","Europe",true,true);Module["FS_createPath"]("/lib/python3.10/tzdata/zoneinfo","Indian",true,true);Module["FS_createPath"]("/lib/python3.10/tzdata/zoneinfo","Mexico",true,true);Module["FS_createPath"]("/lib/python3.10/tzdata/zoneinfo","Pacific",true,true);Module["FS_createPath"]("/lib/python3.10/tzdata/zoneinfo","US",true,true);Module["FS_createPath"]("/lib/python3.10","unittest",true,true);Module["FS_createPath"]("/lib/python3.10","urllib",true,true);Module["FS_createPath"]("/lib/python3.10","wsgiref",true,true);Module["FS_createPath"]("/lib/python3.10","xml",true,true);Module["FS_createPath"]("/lib/python3.10/xml","dom",true,true);Module["FS_createPath"]("/lib/python3.10/xml","etree",true,true);Module["FS_createPath"]("/lib/python3.10/xml","parsers",true,true);Module["FS_createPath"]("/lib/python3.10/xml","sax",true,true);Module["FS_createPath"]("/lib/python3.10","xmlrpc",true,true);Module["FS_createPath"]("/lib/python3.10","zoneinfo",true,true);function processPackageData(arrayBuffer){assert(arrayBuffer,"Loading data file failed.");assert(arrayBuffer instanceof ArrayBuffer,"bad input to processPackageData");var byteArray=new Uint8Array(arrayBuffer);var curr;var 
compressedData={"data":null,"cachedOffset":5390149,"cachedIndexes":[-1,-1],"cachedChunks":[null,null],"offsets":[0,1427,2589,3776,5322,6637,8053,9276,10176,11137,12037,13094,14287,15472,16494,17323,18687,19737,20743,21819,23063,24239,25564,26462,27523,28434,29550,30720,31889,33260,34562,35714,36757,37642,38556,39405,40881,42064,43063,44511,45497,46782,48199,49345,50556,51705,52993,54186,55445,56403,57319,58451,59379,60319,61186,62220,63435,64443,65350,66410,67290,68152,69248,70240,70978,72073,73044,74230,75235,76159,77200,78140,79045,79823,81151,82438,83559,84511,85683,86886,87862,88983,90211,91364,92318,93465,94498,95846,97112,97969,99164,99954,101058,102163,103080,104391,105594,106853,107597,108449,109217,110251,111158,112222,113491,114697,115935,116752,117792,118857,120082,121036,121961,122714,123447,124077,124771,125704,126285,126853,127634,128257,128767,129516,130347,131218,132051,132685,133827,134670,135352,136305,137212,137913,138757,139639,140404,141476,142880,144347,145447,146812,148275,149759,151035,152365,153818,155039,156248,157580,158904,160457,161772,163231,164657,165771,166740,168099,169547,170802,171678,172949,174299,175657,176688,177894,178942,179971,180976,182125,183331,184479,185632,186701,187802,188819,190113,191113,192241,193335,194561,195558,196718,197809,199167,200293,201409,202328,203387,204599,205864,206896,208016,209064,210096,210963,212164,213454,214797,216111,217205,218323,219455,220622,221904,223178,224320,225196,226346,227347,228599,230007,231429,232655,233807,234810,235532,236763,237939,238968,240353,241680,243024,244070,245249,246365,247345,248336,249377,250475,251332,252127,253112,254067,254818,255713,257145,258394,259698,260873,261858,262885,263781,264697,265605,266793,267875,268833,269948,270897,272095,273214,274147,274800,275551,276257,277314,278561,279770,280747,281817,282862,283938,284994,285982,286950,288083,289170,290196,291255,292336,293438,294291,295302,296315,297405,298488,299556,300404,301418,302454,303415,304427,305458,306686,308063,309083,310073,311333,312450,313776,315160,316388,317578,318615,319791,321046,322218,323363,324099,324837,325621,326554,327696,328859,329930,330716,331743,332623,333675,334571,335423,336286,337591,339188,340471,341374,342388,343446,345026,346586,347470,348703,349798,350777,351908,352990,354121,355451,356852,358009,359255,360509,361780,363165,364490,365769,367115,368258,369681,370881,371831,372875,374112,375288,376404,377670,378639,379566,380757,381718,383076,384217,385466,386597,388240,389307,390442,391536,392636,393826,394797,396024,397312,398622,399711,400760,401906,403141,404316,405520,406687,408039,409203,410389,411278,412215,413156,414155,415232,416153,417307,418578,419560,420718,422230,423604,424850,426253,427724,429046,430069,431130,432244,433256,434404,435525,436486,437487,438762,439928,441431,442524,443879,445181,446477,447678,449156,450211,451294,452741,454156,455070,456239,457435,458447,459517,460942,462208,463398,464577,465951,467319,468502,469760,470919,472123,473397,474650,476010,477191,478368,479361,480585,481739,482748,484116,485334,486032,487222,488810,490141,491382,492824,494024,495379,496720,497642,498940,500062,500960,502053,502940,504060,504942,506226,507562,508770,509943,510840,511766,512881,514058,515151,516095,517210,518226,519229,520426,521534,522584,523644,524748,525759,527062,528066,528992,530014,531274,532461,533547,534754,535698,536840,537943,539093,540196,541206,542471,543658,544989,546213,547358,548499,549769,551207,552359,553500,554713,556034,557358,558619,560022,561329,562662,563734,564
915,566028,567061,568330,569364,570636,571730,572448,573517,574893,576013,577280,578587,579976,581138,582434,583657,584765,586101,587554,588755,589914,590994,592367,593576,594777,595712,597082,598407,599705,600961,602247,603544,604958,606195,607481,608840,609922,610972,612145,613236,614738,615817,616754,617559,618292,619114,620329,621554,622788,623678,624883,626021,627193,628240,629094,630186,631448,632584,633801,634992,636022,637147,638126,639350,640493,641275,642351,643422,644643,645574,646664,648087,649543,651012,652618,654063,655473,656895,658245,659461,660737,662145,663447,664719,666047,667264,668491,669694,670835,672174,673531,674719,676011,677312,678447,679732,681017,682399,683664,684853,686135,687550,688817,690082,691347,692060,693274,694341,695471,696764,698056,699237,700377,701630,702902,704040,705481,706683,708037,709047,710123,711344,712504,713651,714861,715946,717452,718833,719764,721042,722484,723908,725212,726438,727644,728759,730029,731460,732712,734007,735256,736406,737691,738830,739961,741010,742098,743208,744362,745647,746530,747750,748875,749923,751184,752329,753509,754852,756098,757197,758272,759326,760679,762063,763314,764729,765988,767020,768212,769249,770502,771795,773001,774006,775123,776090,777254,778449,779782,780831,782072,783336,784288,785401,786754,787996,789154,790372,791545,792720,793884,795020,796308,797418,798254,799148,800407,801786,802972,803891,804848,806260,807745,809198,810062,811352,812461,813399,814258,815508,816838,817972,819458,820779,821736,822711,823912,825093,826132,827281,828579,829566,830534,831354,832648,833832,834967,836537,837898,839133,840254,841420,842634,843695,844829,845879,847123,848420,849587,850595,851702,852900,854179,855492,856709,857852,858299,859270,860309,861462,862702,863913,865447,866700,867675,868809,870091,871403,872706,873913,875021,876102,877284,878583,879963,881485,882847,883983,885335,886599,887836,889356,890921,892072,893254,894372,895003,896250,897361,898497,899294,900575,901549,902208,903318,904374,905560,906565,907931,909291,910545,911742,912807,914268,915482,916883,918146,919327,920300,921360,922626,923779,925079,926247,927695,928946,930103,931359,932602,933902,935534,937207,938503,940024,941473,942834,943866,945106,946479,947567,948911,949916,951037,952276,953813,954618,956014,957391,958478,959695,960939,962226,963240,964209,965233,966204,967326,968411,969471,970621,971735,972793,973780,974935,976050,977098,978206,979591,980924,982268,983571,984475,985762,986978,988283,989506,990681,992047,993531,994822,995890,997035,997937,999087,1000089,1001210,1002499,1003829,1005064,1006199,1007549,1008624,1009791,1011032,1012176,1013398,1014564,1015572,1016762,1018138,1019273,1020478,1021604,1022677,1023827,1025013,1026232,1027462,1028715,1029794,1030908,1031986,1033193,1034310,1035491,1036758,1037877,1039005,1040150,1041256,1042096,1043129,1044233,1045256,1046448,1047508,1048585,1049625,1050554,1051662,1052773,1053628,1054641,1055582,1056560,1057677,1058762,1060007,1061493,1062824,1063982,1064949,1066179,1067300,1068382,1069719,1070673,1071685,1072893,1074060,1075125,1076004,1077121,1078173,1079287,1080329,1081378,1082041,1083268,1084403,1085619,1086478,1087394,1088573,1089552,1090819,1091897,1092916,1093989,1095127,1096307,1097246,1098150,1099225,1100240,1101361,1102457,1103689,1104836,1106047,1107472,1108672,1109882,1111002,1112251,1113385,1114491,1115475,1116836,1118060,1119256,1120099,1121302,1122092,1122815,1123570,1124084,1124657,1125219,1125722,1126242,1126812,1127367,1127916,1128503,1129085,1129625,1130175,1130774,113
1328,1131921,1132512,1133127,1134411,1135563,1136845,1137970,1138967,1140384,1141678,1142715,1143762,1144995,1146232,1147523,1148503,1149275,1150359,1151297,1152512,1153627,1154667,1155712,1156904,1158140,1159340,1160485,1161451,1162594,1163634,1164589,1165681,1166502,1167160,1168155,1169217,1170211,1171183,1172238,1173173,1173985,1174744,1175672,1176694,1177587,1178496,1179362,1180095,1181100,1181902,1182844,1184009,1185086,1186509,1187751,1189020,1190192,1191391,1192659,1193724,1194701,1195794,1197001,1198290,1199237,1199908,1200717,1201903,1203391,1204524,1205495,1206277,1207370,1208305,1209308,1210360,1211446,1212517,1213603,1214779,1216058,1217004,1218092,1219550,1220942,1222299,1223653,1225091,1226481,1227813,1229028,1230156,1231127,1232284,1233554,1234727,1235661,1236588,1237671,1238823,1239971,1241138,1242421,1243854,1245037,1246280,1247612,1249009,1250253,1251071,1252180,1253445,1254569,1255762,1256942,1258253,1259640,1260919,1262498,1263147,1264193,1265131,1265929,1267287,1268376,1269495,1270728,1271694,1272955,1273888,1274911,1276206,1277780,1278884,1280125,1281099,1282271,1283266,1284519,1285490,1286796,1287882,1288874,1289759,1290713,1291687,1292854,1294014,1294953,1295877,1297126,1298541,1299678,1300811,1301870,1303090,1304260,1305440,1306561,1307531,1308902,1310310,1311361,1312819,1314130,1315562,1317055,1318144,1319525,1320688,1321567,1322867,1323889,1324936,1325967,1327090,1327847,1328926,1330200,1331402,1332768,1333880,1335278,1336456,1337785,1338878,1339973,1340841,1342117,1343392,1344487,1345374,1346442,1347579,1348698,1349823,1351113,1352241,1353383,1354470,1355340,1356333,1357124,1357793,1359222,1360804,1362226,1363338,1364571,1365749,1366974,1368162,1369385,1370719,1372014,1373242,1374216,1375419,1376689,1377667,1378739,1379745,1380898,1382204,1383397,1384401,1385581,1386614,1387688,1388998,1390206,1391422,1392848,1394253,1395524,1396817,1398238,1399481,1400775,1401853,1402707,1403935,1405306,1406544,1407736,1408776,1409923,1411075,1412344,1413278,1414216,1415236,1416335,1417275,1418379,1419480,1420822,1421905,1422882,1423715,1424680,1425791,1426730,1427735,1428722,1429772,1431002,1432587,1434125,1435653,1437060,1438096,1438896,1439963,1440987,1441836,1442692,1443889,1444801,1445916,1446971,1448245,1449591,1450493,1451743,1453060,1454104,1454980,1455807,1456534,1457794,1458706,1459712,1460737,1461833,1462664,1463864,1465171,1466543,1467897,1469132,1470469,1471566,1472817,1474027,1475530,1476545,1477809,1478854,1479906,1480863,1482147,1483528,1484916,1485794,1486857,1488028,1489453,1490624,1491867,1492962,1494016,1495047,1496325,1497626,1499129,1500481,1501941,1503349,1504940,1506366,1507965,1509286,1510495,1511716,1512842,1514140,1515314,1516498,1517874,1519167,1520500,1521738,1522770,1524028,1524815,1525715,1526965,1528229,1529622,1531109,1532440,1533441,1534499,1535530,1536705,1537874,1538872,1540013,1541112,1541964,1542816,1543939,1545527,1546850,1547951,1549046,1550326,1551639,1552711,1554196,1555634,1556838,1558083,1559295,1560619,1561779,1563096,1564618,1565708,1566927,1567928,1568720,1569753,1570688,1571576,1572573,1573303,1574197,1575151,1576678,1577744,1579108,1580282,1581283,1582177,1583421,1584653,1586123,1587362,1588649,1590100,1591528,1592878,1594055,1594968,1595972,1597093,1598168,1598971,1599978,1601207,1602319,1603178,1604059,1605255,1606490,1607729,1608982,1610274,1611662,1612881,1614182,1615400,1616619,1617817,1618725,1620029,1621565,1622641,1623879,1625244,1626599,1627831,1629030,1630301,1631510,1632742,1633920,1635094,1636241,1637385,1638509,163948
4,1640563,1641380,1642281,1643453,1644656,1645969,1647175,1648245,1649135,1650250,1651035,1651976,1653091,1654244,1655545,1656785,1657909,1659095,1660306,1661432,1662495,1663620,1664825,1666037,1667057,1668376,1669509,1670823,1672052,1673150,1674342,1675522,1676424,1677425,1678610,1679841,1681010,1682410,1683564,1684679,1685696,1686923,1688147,1689462,1690694,1691914,1693220,1694651,1695902,1697135,1698221,1699357,1700435,1701698,1702957,1704164,1705466,1706781,1708132,1709301,1710615,1711809,1712990,1714264,1715285,1716718,1717981,1719022,1720199,1721388,1722548,1723493,1724540,1726099,1727266,1728463,1729661,1730833,1732087,1733316,1734624,1735977,1737384,1738806,1740001,1741333,1742561,1743818,1744941,1746155,1747191,1748297,1749243,1750240,1751044,1752172,1753570,1755050,1756105,1757296,1758669,1759823,1760831,1761621,1762417,1763078,1764352,1765710,1767052,1768478,1769726,1770836,1772030,1773321,1774483,1775553,1776938,1778017,1779005,1780157,1781365,1782710,1783874,1784941,1785915,1787216,1788479,1789819,1791001,1792066,1793375,1794725,1796013,1797067,1798391,1799879,1801199,1802351,1803661,1804858,1806131,1807440,1808737,1809754,1811019,1812523,1813993,1815189,1815758,1816327,1817423,1818383,1819490,1820342,1821394,1822491,1823763,1825004,1826204,1827460,1828523,1829607,1831213,1832435,1833683,1834965,1836142,1837360,1838507,1839708,1840950,1842136,1843373,1844560,1845634,1846865,1848006,1849307,1850561,1851817,1853037,1854330,1855371,1856653,1858129,1859389,1860521,1861740,1863133,1864346,1865344,1866326,1867485,1868752,1869926,1870948,1872049,1873201,1874326,1875500,1876915,1877872,1879276,1880557,1881805,1883307,1884421,1885452,1886313,1887341,1888298,1889105,1890091,1891284,1892288,1893091,1893993,1894927,1895637,1896428,1897405,1898472,1899390,1900659,1901477,1902920,1904330,1905870,1907066,1908364,1909457,1910518,1911569,1912742,1913817,1914877,1916031,1917339,1918749,1919866,1920690,1921441,1922284,1923169,1924269,1925707,1926880,1927978,1929259,1930158,1930912,1932222,1933478,1934385,1935620,1936854,1938038,1939125,1940644,1941842,1943173,1944312,1945500,1946484,1947530,1948701,1949694,1950916,1952292,1953517,1954552,1955839,1956899,1958037,1958814,1959825,1960620,1961709,1962789,1964191,1965560,1966538,1967740,1968943,1969972,1971351,1972703,1974017,1975352,1976659,1977718,1979117,1980668,1981999,1983481,1984676,1985771,1986970,1988428,1989614,1990964,1992196,1993389,1994218,1995390,1996753,1997609,1998718,1999749,2001180,2002048,2003282,2004284,2005302,2006332,2007353,2008683,2009656,2010747,2012087,2013350,2014540,2015931,2017196,2018454,2019723,2020915,2022327,2023633,2024671,2025955,2026806,2027728,2028766,2030022,2031229,2032350,2033458,2034374,2035024,2036098,2037035,2038087,2039139,2040119,2041141,2042047,2043036,2044089,2045074,2046077,2047167,2048237,2049274,2050140,2051129,2052234,2053558,2054809,2055929,2056738,2057648,2058620,2059613,2060395,2061563,2062522,2063487,2064347,2065163,2066452,2067406,2068134,2069389,2070622,2071777,2072884,2074186,2075553,2076732,2077866,2079172,2080555,2081956,2083461,2084648,2085818,2087051,2088268,2089588,2091123,2092145,2093380,2094784,2095856,2096992,2098118,2099162,2100197,2101284,2102321,2103541,2104664,2105578,2106721,2107770,2109021,2110228,2111443,2112596,2113888,2115066,2116099,2117173,2118318,2119469,2120592,2121537,2122518,2123376,2124233,2125399,2126633,2127681,2128783,2130045,2131225,2132379,2133403,2134432,2135522,2136507,2137492,2138722,2139955,2140881,2142498,2143760,2145245,2146461,2147676,2148615,2149462,2150414,2
151345,2152285,2153561,2154980,2156335,2157656,2158974,2160186,2161397,2162772,2164046,2165384,2166704,2167843,2168894,2169845,2171003,2172498,2173835,2175071,2176135,2177468,2178835,2179919,2180900,2182217,2183506,2185084,2186227,2187400,2188637,2189782,2191026,2192121,2193422,2194681,2195963,2197332,2198584,2199647,2201032,2202371,2203598,2204914,2206046,2207414,2208677,2209818,2210817,2211915,2213064,2214261,2215408,2216793,2218122,2219565,2221086,2222220,2223570,2224804,2226043,2227283,2228595,2230101,2231412,2232640,2233637,2234823,2236215,2237250,2238639,2239619,2240614,2241578,2242764,2244002,2245511,2246624,2247751,2248834,2249962,2251214,2252508,2253491,2254573,2255543,2256377,2257440,2258655,2259800,2261018,2262401,2263639,2264838,2266042,2267257,2268435,2269464,2270575,2271670,2272737,2273903,2275145,2276107,2277005,2278035,2279121,2280145,2281121,2282118,2282843,2284037,2285483,2286754,2288013,2288973,2290213,2291652,2293001,2294307,2295547,2296761,2297848,2299128,2300374,2301798,2303240,2304429,2305416,2306798,2307916,2309233,2310532,2311759,2312902,2314101,2315150,2316189,2317439,2318444,2319781,2321065,2322317,2323525,2324703,2325908,2327330,2328759,2329907,2330996,2332136,2333373,2334540,2335720,2336820,2337889,2339198,2340326,2341708,2343129,2343972,2345298,2346634,2347811,2349086,2350531,2351761,2352988,2354117,2355086,2355995,2357193,2358675,2359957,2361026,2362211,2363614,2364912,2366141,2367279,2368430,2369683,2370827,2372061,2373272,2374471,2375656,2376976,2378173,2379410,2380490,2381464,2382590,2383563,2384394,2385367,2386567,2387751,2389010,2389764,2390691,2391878,2393141,2394062,2395173,2396367,2397482,2398720,2399724,2400532,2401934,2403208,2404600,2405799,2407253,2408367,2409652,2410933,2412209,2413423,2414493,2415462,2416796,2418118,2419138,2420095,2421253,2422473,2423503,2424499,2425463,2426537,2427512,2428794,2430005,2430997,2432127,2433307,2434533,2435777,2436820,2438090,2439282,2440369,2441474,2442568,2443453,2444340,2445529,2446255,2447251,2447900,2449032,2450288,2451658,2452755,2453759,2455092,2456312,2457436,2458612,2459688,2460842,2462114,2463345,2464170,2465035,2466635,2467661,2468987,2470287,2471632,2472998,2474434,2475996,2477601,2479178,2480716,2482037,2483016,2484054,2485059,2485939,2486837,2487718,2488595,2489437,2490617,2491497,2492680,2493414,2494736,2495857,2496924,2498324,2499696,2500935,2502187,2503229,2504188,2505360,2506546,2507462,2508573,2509730,2510902,2512022,2513064,2514150,2515190,2516230,2517448,2518493,2519639,2520882,2521841,2522978,2524109,2525148,2526128,2527206,2528383,2529513,2530400,2531338,2532378,2533513,2534554,2535674,2536775,2537920,2539195,2540211,2541318,2542185,2543321,2544435,2545830,2547112,2548389,2549648,2550687,2551801,2553005,2553950,2555025,2556010,2557372,2558482,2559735,2560582,2561436,2562621,2563692,2564646,2565840,2567060,2568435,2569772,2570943,2571938,2572959,2574092,2575125,2576474,2577668,2578994,2580299,2581459,2582809,2584054,2585319,2586322,2587431,2588604,2589579,2590695,2591880,2592937,2594206,2595210,2596141,2597195,2598126,2599125,2600400,2601369,2602634,2604050,2605433,2606801,2607903,2608990,2610192,2611536,2612772,2613803,2614948,2616231,2617206,2617994,2619e3,2620065,2621280,2622529,2623690,2624851,2625871,2627007,2628092,2628733,2629918,2630884,2632107,2633106,2634532,2635581,2636772,2637852,2639099,2640217,2641446,2642672,2643744,2644775,2645787,2646957,2647982,2649355,2650684,2651987,2653256,2654313,2655354,2656533,2657460,2658615,2659786,2660843,2662197,2663517,2664731,2666002,2667159,26679
25,2668997,2669942,2671304,2672626,2673799,2675143,2676256,2677167,2678554,2679781,2680949,2682314,2683568,2684938,2686103,2687313,2688534,2689724,2690892,2692135,2693372,2694387,2695239,2696462,2697180,2697741,2698973,2700160,2701172,2702110,2703122,2704280,2705344,2706382,2707532,2708556,2709724,2710869,2711920,2712964,2714229,2715382,2716610,2717784,2719019,2720232,2721341,2722604,2723808,2725027,2726194,2727419,2728585,2729639,2730753,2732005,2733163,2734168,2735348,2735953,2736854,2737993,2739217,2740520,2741817,2742981,2744173,2745442,2746548,2747606,2748667,2749980,2751050,2752146,2753218,2754483,2755652,2756911,2758207,2759502,2760906,2762013,2762829,2764092,2764722,2765559,2766832,2768006,2768994,2769949,2770615,2771404,2772164,2772956,2773769,2774743,2775706,2776618,2777981,2779132,2780358,2781171,2782230,2783312,2784324,2785649,2786938,2788096,2789470,2790528,2791733,2793008,2794172,2795278,2796454,2797572,2798718,2800062,2801165,2802345,2803561,2804821,2806018,2807274,2808347,2809400,2810603,2811649,2812792,2813754,2815071,2816245,2817501,2818629,2819973,2821483,2822841,2824243,2825628,2826972,2828485,2829702,2830898,2832310,2833322,2834366,2835487,2836600,2837599,2838550,2839305,2840604,2841621,2842662,2843714,2844653,2846066,2847418,2848709,2849910,2851395,2852943,2854391,2855276,2856036,2857238,2858575,2859894,2861216,2862436,2863737,2865066,2866194,2867393,2868759,2870188,2871561,2872950,2874261,2875310,2876621,2877868,2879123,2880384,2881649,2882845,2884277,2885618,2886911,2888242,2889429,2890965,2892366,2893650,2894970,2896535,2897965,2899429,2900451,2901216,2901808,2902495,2903461,2904525,2905562,2906637,2907842,2908882,2910271,2911597,2912932,2914177,2915351,2916426,2917374,2918326,2919657,2920629,2921478,2922636,2923641,2924680,2925702,2926756,2927919,2928872,2930070,2931352,2932232,2933258,2934499,2935173,2936450,2937500,2938731,2939976,2941047,2942012,2943208,2944295,2945306,2946217,2947655,2948815,2950044,2951223,2952580,2954016,2955370,2956613,2958070,2959015,2959907,2960953,2962071,2963012,2964377,2965541,2966921,2968197,2969394,2970606,2971634,2972964,2974502,2975764,2977220,2978577,2980136,2981549,2982658,2983875,2985272,2986476,2987628,2988713,2990052,2991107,2992138,2993239,2994494,2995587,2996727,2997991,2999282,3000500,3002076,3003377,3004664,3005875,3007086,3008050,3009220,3010151,3011125,3012302,3013472,3014963,3016223,3017508,3018822,3020048,3021274,3022541,3023956,3025335,3026424,3027886,3029136,3030270,3031547,3032727,3033994,3035464,3036597,3037698,3038990,3040348,3041479,3042763,3043860,3045137,3046516,3047856,3048962,3050015,3051023,3051876,3052991,3054352,3055893,3057176,3058508,3059685,3060882,3062020,3063071,3064269,3065111,3066269,3067521,3068573,3069717,3071015,3071921,3073145,3074290,3075444,3076738,3077848,3079079,3080188,3081137,3082544,3083777,3085014,3086175,3087478,3088783,3090084,3091468,3092877,3094185,3095490,3096725,3098169,3099632,3101082,3102519,3103819,3105106,3106516,3107796,3109206,3110484,3111751,3113e3,3114424,3115873,3116951,3118259,3119027,3119825,3120612,3121333,3122e3,3122951,3123767,3124926,3125787,3126817,3127978,3128813,3129830,3130666,3131520,3132307,3133193,3133948,3135064,3136072,3136748,3137467,3138192,3138799,3139660,3140691,3141569,3142424,3143265,3144192,3144988,3145935,3146572,3147194,3147968,3148996,3149821,3150509,3151122,3151798,3152508,3153382,3154360,3155022,3155744,3156354,3157074,3157802,3158825,3159673,3160516,3161341,3162266,3162969,3164022,3165037,3165804,3166600,3167568,3168334,3169074,3170180,3171178,31
71850,3172696,3173594,3174176,3175019,3176111,3176885,3177695,3178676,3179462,3180092,3181168,3182195,3182868,3183666,3184620,3185296,3186193,3187280,3188043,3188863,3189837,3190620,3191286,3192395,3193420,3194086,3194926,3195904,3196680,3197747,3198790,3199507,3200366,3201422,3202166,3203260,3204302,3205054,3205862,3206838,3207630,3208351,3209464,3210463,3211144,3211993,3212926,3213676,3214624,3215756,3216817,3217650,3218450,3219355,3220078,3221124,3222217,3223249,3224134,3224964,3225610,3226655,3227311,3228128,3228879,3229958,3230894,3231560,3232238,3233101,3233722,3234777,3235740,3236489,3237265,3237982,3238827,3239496,3240581,3241439,3242227,3243082,3243957,3244693,3245666,3246392,3247241,3247931,3248776,3249482,3250264,3251405,3252027,3252722,3253468,3254511,3255438,3256103,3256810,3257505,3258160,3258997,3259962,3260710,3261431,3262060,3262809,3263483,3264564,3265290,3266094,3266848,3267961,3268838,3269503,3270261,3271132,3271839,3272814,3273793,3274493,3275286,3275977,3276596,3277318,3278341,3279087,3279837,3280666,3281699,3282478,3283182,3283813,3284673,3285451,3286498,3287435,3288111,3289038,3289673,3290361,3291186,3292092,3292852,3293573,3294430,3295409,3296135,3296847,3297586,3298381,3299188,3300232,3301146,3301822,3302605,3303247,3303870,3304694,3305561,3306274,3306989,3307762,3308751,3309486,3310171,3310789,3311510,3312207,3313219,3314010,3314755,3315374,3315978,3316653,3317664,3318683,3319441,3320181,3320980,3321637,3322743,3323443,3324253,3325051,3326131,3327044,3327712,3328420,3329281,3330079,3331048,3332029,3332739,3333562,3334356,3334989,3335732,3336627,3337436,3338161,3339029,3340052,3340768,3341502,3342141,3342996,3343743,3344821,3345710,3346385,3347338,3347966,3348655,3349522,3350324,3351180,3351859,3352880,3353861,3354549,3355224,3355974,3356602,3357484,3358555,3359367,3360089,3360878,3361663,3362374,3363316,3364081,3364896,3365589,3366624,3367587,3368238,3368924,3369747,3370360,3371282,3372335,3373100,3373881,3374651,3375511,3376195,3377155,3377965,3378732,3379514,3380615,3381494,3382160,3382952,3383780,3384516,3385595,3386568,3387232,3388083,3388833,3389664,3390545,3391451,3392352,3392992,3394e3,3394997,3395716,3396397,3397211,3397842,3398726,3399799,3400612,3401334,3402144,3402975,3403661,3404619,3405574,3406328,3407025,3407970,3408891,3409561,3410481,3411307,3411978,3412735,3413712,3414406,3415237,3416028,3416640,3417376,3418249,3419106,3419725,3420668,3421694,3422417,3423156,3423794,3424618,3425370,3426479,3427398,3428079,3428974,3429670,3430523,3431392,3432215,3432927,3433559,3434406,3435378,3436115,3436786,3437367,3438019,3438741,3439779,3440638,3441323,3441992,3442571,3443322,3444219,3445105,3445868,3446609,3447575,3448535,3449194,3449986,3450780,3451498,3452376,3453362,3454041,3454870,3455499,3456210,3457060,3458162,3459009,3459760,3460454,3461191,3462141,3463144,3464046,3464879,3465657,3466405,3467315,3468031,3468610,3469180,3469756,3470304,3471320,3472373,3473347,3474031,3474734,3475581,3476242,3477358,3478415,3479390,3480467,3481440,3482312,3482895,3483484,3484084,3485135,3486141,3486820,3487541,3488413,3489067,3490002,3491045,3491795,3492510,3493254,3493986,3494701,3495818,3496835,3497502,3498183,3498902,3499666,3500755,3501769,3502428,3503090,3504043,3504776,3505713,3506799,3507598,3508343,3509079,3509752,3510402,3511414,3512451,3513184,3513845,3514788,3515408,3516238,3517327,3518202,3518912,3519703,3520552,3521255,3522274,3523291,3524053,3524720,3525510,3526222,3527065,3528090,3529027,3529709,3530493,3531317,3532021,3533061,3534098,3534835,3535496,353630
8,3537037,3537954,3539005,3539917,3540606,3541342,3541943,3542545,3543624,3544650,3545327,3545987,3546711,3547634,3548682,3549442,3550149,3551055,3551716,3552577,3553666,3554538,3555246,3556025,3556756,3557856,3558867,3559529,3560205,3561114,3561776,3562684,3563734,3564732,3565410,3566256,3567011,3567670,3568464,3569518,3570415,3571098,3571966,3572814,3573416,3574449,3575480,3576181,3576930,3577821,3578536,3579225,3580233,3581190,3581864,3582711,3583604,3584173,3585070,3586006,3586728,3587512,3588279,3588928,3589691,3590441,3591158,3591922,3592721,3593465,3594095,3594831,3595551,3596231,3596957,3597710,3598421,3599384,3600215,3600877,3601598,3602508,3603472,3604433,3605184,3606081,3606772,3607460,3608407,3609124,3610092,3610790,3611510,3612201,3613018,3613819,3614699,3615648,3616398,3617279,3617983,3618882,3619811,3620562,3621548,3622227,3622986,3623720,3624427,3625496,3626407,3627550,3628550,3629229,3629980,3630884,3631721,3632618,3633388,3634251,3634936,3635643,3636651,3637550,3638617,3639322,3640069,3640838,3641541,3642598,3643501,3644492,3645288,3645977,3646694,3647567,3648558,3649661,3650576,3651623,3652523,3653208,3654256,3655111,3655765,3656774,3657823,3658582,3659380,3660142,3660982,3661583,3662767,3663994,3665167,3666118,3667093,3668120,3668943,3669760,3670830,3671711,3672415,3673149,3673882,3674938,3675875,3676909,3677883,3678889,3679771,3680736,3681665,3682259,3683010,3684034,3685019,3686255,3687230,3688578,3689741,3691024,3691901,3692811,3694064,3695217,3696475,3697704,3698940,3700125,3701272,3702176,3703024,3703986,3704898,3705823,3706734,3707695,3708577,3709525,3710412,3711339,3712227,3713168,3714086,3714927,3715724,3716571,3717515,3718430,3719335,3720247,3721147,3721996,3722901,3723832,3724731,3725595,3726860,3728219,3729526,3730508,3731254,3732275,3733323,3734405,3735397,3736786,3738124,3739529,3740817,3742170,3743609,3745124,3746565,3747784,3749e3,3750204,3751336,3752565,3753578,3754687,3755735,3757035,3758370,3759543,3760899,3762027,3763230,3764453,3765750,3766823,3767826,3769043,3770230,3771393,3772393,3773438,3774831,3776179,3777542,3778882,3780169,3781546,3782683,3784057,3785354,3786572,3788084,3789544,3790669,3791873,3792817,3793901,3794804,3795879,3797026,3797865,3798909,3800194,3801423,3802448,3803616,3804548,3805640,3806717,3807834,3808858,3809850,3810920,3812133,3813334,3814184,3815606,3816567,3817716,3819282,3820621,3822127,3823533,3824619,3825762,3826952,3828150,3829500,3830591,3832078,3833508,3834965,3836439,3837794,3838731,3839923,3841098,3842288,3843454,3844802,3846075,3847251,3848413,3849653,3850966,3852342,3853599,3854774,3855765,3857012,3858166,3859416,3860448,3861731,3862887,3864182,3865492,3866898,3867993,3869137,3870352,3871618,3872822,3873778,3874849,3875840,3876958,3878019,3879323,3880314,3881399,3882590,3883759,3884958,3886308,3887552,3888726,3890104,3891206,3892442,3893839,3894902,3896106,3897388,3898869,3900185,3901360,3902728,3904169,3905202,3906416,3907733,3908763,3909963,3911114,3912057,3912993,3914267,3915477,3916626,3917768,3918933,3920026,3921127,3922428,3923621,3924819,3926026,3927332,3928690,3929830,3930982,3932169,3933290,3934427,3935536,3936581,3937729,3938924,3940177,3941400,3942501,3943729,3944846,3945863,3947170,3948431,3949647,3950839,3952042,3953002,3954134,3955297,3956495,3957727,3958851,3960116,3961401,3962522,3963685,3964746,3966030,3967333,3968702,3970063,3971240,3972552,3973864,3975316,3976633,3977928,3978935,3980122,3981452,3982638,3983935,3984822,3985923,3987112,3988287,3989458,3990786,3992034,3993075,3994023,3995014,3995698,39
96801,3997950,3999094,4000676,4001891,4003290,4004640,4005867,4007169,4008405,4009421,4010474,4011746,4013107,4014391,4015591,4016737,4017921,4019207,4020233,4021594,4022730,4024017,4025220,4026427,4027736,4028725,4030053,4030858,4031927,4033156,4034295,4035411,4036681,4037676,4038649,4039933,4041104,4042026,4042846,4043954,4045208,4046751,4048077,4049417,4050708,4052038,4052969,4054190,4055146,4056272,4056888,4057786,4058843,4059912,4060918,4062025,4063282,4064366,4065433,4066989,4068079,4069445,4070590,4071651,4072934,4074110,4075247,4076461,4077714,4078994,4080314,4081503,4082735,4084124,4085241,4086467,4087548,4088818,4090011,4091119,4092306,4093488,4094714,4095965,4097237,4098436,4099710,4100845,4102231,4103617,4104717,4105650,4106811,4107755,4108838,4110104,4111144,4112294,4113460,4114549,4115640,4116848,4117844,4119043,4120068,4120983,4121864,4123097,4124079,4125080,4126295,4127469,4128581,4129800,4130701,4131832,4133163,4134480,4135700,4136605,4137866,4139165,4140513,4141840,4142893,4143920,4144900,4146054,4147129,4148338,4149449,4150492,4151586,4152748,4153922,4155048,4156268,4157551,4158542,4159329,4160457,4161549,4162648,4163791,4164840,4166165,4167444,4168755,4169703,4170711,4171820,4172885,4173689,4174826,4175769,4176752,4177860,4178915,4179926,4180892,4182073,4183110,4184380,4185538,4186951,4188007,4189296,4190320,4191374,4192436,4193600,4194943,4196048,4197034,4198082,4199107,4200351,4201613,4202738,4203812,4204912,4206115,4207323,4208414,4209485,4210915,4212126,4213214,4214359,4215747,4216853,4217789,4218859,4220258,4221262,4222155,4223085,4224066,4225328,4226315,4227484,4228734,4229985,4231361,4232556,4233803,4235126,4236168,4236990,4238041,4239001,4240335,4241484,4242793,4243824,4245009,4246298,4247547,4248823,4250131,4251261,4252337,4253405,4254354,4255381,4256412,4257471,4258459,4259476,4260418,4261495,4262433,4263461,4264227,4265265,4266306,4267216,4268086,4268914,4269833,4270694,4271600,4272534,4273469,4274392,4275304,4276123,4277086,4278044,4279059,4280151,4281198,4282319,4283248,4284204,4285227,4286288,4287275,4288482,4289515,4290642,4291727,4292868,4294020,4295098,4296124,4297191,4298262,4299262,4300285,4301242,4302274,4303309,4304237,4305254,4306239,4307241,4308183,4309226,4310103,4311006,4311992,4312974,4313817,4314752,4315820,4316802,4317708,4318591,4319500,4320508,4321502,4322457,4323299,4324184,4325135,4326075,4327003,4327797,4328677,4329654,4330791,4331760,4332872,4333885,4334882,4335928,4336900,4337766,4338703,4339633,4340569,4341599,4342618,4343590,4344531,4345526,4346529,4347452,4348365,4349404,4350392,4351384,4352410,4353484,4354541,4355518,4356556,4357643,4358707,4359861,4360915,4361923,4362869,4363955,4364938,4365872,4366710,4367744,4368954,4370183,4371373,4372504,4373605,4374620,4375562,4376579,4377613,4378713,4379758,4380865,4381948,4383048,4384279,4385466,4386487,4387484,4388482,4389425,4390461,4391249,4391990,4392986,4393947,4394696,4395210,4395951,4396687,4397501,4398346,4399105,4400015,4400858,4401686,4402624,4403434,4404478,4405597,4406615,4407684,4408765,4409863,4411009,4411931,4412958,4413973,4415088,4416107,4417269,4418287,4419469,4420545,4421707,4422825,4424033,4425095,4426142,4427199,4428288,4429349,4430427,4431486,4432364,4433182,4434121,4435046,4435883,4437020,4438144,4439253,4440285,4440826,4441242,4441733,4442749,4443728,4444899,4446040,4446933,4447958,4449039,4450025,4450964,4451896,4452644,4453532,4454552,4455579,4456485,4457529,4458651,4459601,4460632,4461632,4462642,4463613,4464570,4465536,4466559,4467608,4468622,4469753,4470681,44716
92,4472520,4473488,4474442,4475386,4476419,4477371,4478361,4479271,4480260,4481236,4482209,4483125,4483988,4484892,4485837,4486836,4487884,4488918,4489783,4490723,4491706,4492594,4493493,4494543,4495528,4496521,4497461,4498245,4499233,4500062,4500880,4501719,4502711,4503635,4504634,4505639,4506699,4507648,4508676,4509676,4510665,4511641,4512602,4513594,4514453,4515269,4516088,4517015,4517812,4518688,4519585,4520418,4521329,4522255,4523061,4523627,4524267,4525117,4526064,4527019,4527978,4528837,4529967,4530950,4531374,4532142,4533254,4534203,4535379,4536435,4537628,4538615,4539643,4540831,4541922,4543021,4544199,4545276,4546385,4547543,4548724,4549604,4550257,4550956,4551979,4552890,4553962,4554861,4555933,4557014,4558207,4559184,4560280,4561465,4562412,4563533,4564696,4565768,4566905,4567891,4569001,4569976,4570910,4571774,4572639,4573458,4574324,4575194,4576126,4576994,4577977,4578997,4580025,4581122,4581961,4582493,4583582,4584561,4585724,4586758,4587508,4588095,4588727,4589752,4590804,4591916,4593e3,4594086,4595100,4596038,4597027,4597781,4598276,4598842,4599468,4600498,4601623,4602675,4603655,4604864,4606387,4607822,4609189,4610404,4611855,4613174,4614435,4615543,4616962,4618309,4619655,4621002,4622182,4623192,4624563,4625962,4627221,4628544,4629819,4631152,4632402,4633739,4634904,4636720,4638605,4640238,4641300,4642473,4643533,4644369,4645266,4646410,4647609,4648705,4649920,4650548,4651523,4652565,4653381,4654562,4655738,4656864,4657933,4658795,4660037,4661099,4662378,4663243,4664297,4665350,4666587,4667672,4668878,4669982,4671092,4672180,4673340,4674236,4674965,4675870,4677001,4678118,4679142,4680226,4681214,4682275,4683278,4684268,4685257,4686227,4687347,4688482,4689570,4690450,4691141,4692176,4693091,4694131,4695276,4696392,4697545,4698979,4700576,4702149,4703541,4705164,4706880,4708440,4710047,4711680,4713025,4714646,4716185,4717668,4719236,4720955,4722517,4724142,4725761,4727106,4727793,4729147,4730375,4731591,4732338,4732669,4733402,4734772,4735905,4737245,4738369,4739674,4740785,4742094,4743483,4744623,4745957,4747348,4748668,4749906,4751298,4752669,4754030,4755319,4756539,4757808,4759206,4760540,4761829,4763021,4764341,4765661,4767073,4768462,4769801,4770976,4772327,4773598,4774773,4776009,4777216,4778580,4779906,4781268,4782596,4783828,4785007,4786429,4787800,4789073,4790350,4791140,4791748,4792369,4793011,4793968,4794971,4795697,4796985,4798005,4799132,4800432,4801743,4802882,4804208,4804917,4806115,4807345,4808463,4809553,4810924,4812090,4813380,4814723,4816038,4817214,4818354,4819600,4820715,4821922,4822870,4824143,4825505,4826884,4828124,4828961,4830283,4831587,4833033,4833749,4834987,4836252,4837238,4838329,4839387,4840297,4841217,4842453,4843454,4844737,4846136,4847481,4848830,4850338,4851630,4853067,4853973,4854334,4854812,4855994,4857337,4858125,4859493,4860478,4861620,4862795,4864114,4865443,4866856,4868260,4869461,4870695,4871697,4873016,4874412,4875416,4876610,4877782,4879149,4880416,4881695,4882777,4884110,4885211,4885878,4887176,4888139,4889342,4890299,4891130,4892038,4893126,4894590,4895904,4897224,4898526,4900003,4901530,4902970,4903986,4905464,4906485,4907474,4908747,4909919,4911029,4912138,4913285,4914771,4915969,4917068,4918302,4919252,4920054,4921125,4922283,4923398,4924575,4925697,4926616,4927707,4928565,4929518,4930488,4931299,4932407,4933307,4934148,4935222,4936396,4937368,4938545,4939927,4941165,4942180,4943290,4944665,4945712,4946698,4947835,4948876,4949959,4951138,4952427,4953424,4954278,4955252,4956219,4957600,4958847,4959875,4960955,4962088,4963340,4
964408,4965431,4966464,4967632,4968704,4969749,4970965,4972212,4973162,4974283,4975533,4976844,4977836,4979111,4980444,4981560,4982631,4983701,4984592,4985557,4986581,4987926,4989198,4990613,4992108,4993281,4994400,4995715,4996662,4997767,4999007,5000050,5001021,5002123,5003318,5004459,5005424,5006410,5007761,5009067,5010172,5011360,5012494,5013923,5015045,5016081,5017335,5018582,5019157,5020345,5021407,5022548,5023683,5025031,5026034,5026897,5027746,5028450,5029675,5030847,5032027,5033329,5034662,5036008,5037437,5038669,5039390,5040390,5041331,5042719,5044133,5045321,5046496,5047897,5049128,5050420,5051768,5053223,5054514,5055753,5056845,5057742,5058695,5060077,5061522,5062944,5064382,5065655,5066960,5068051,5069210,5070301,5071283,5072575,5073912,5075310,5076634,5077912,5079093,5080142,5081439,5082526,5083779,5085126,5086278,5087357,5088416,5089696,5090875,5092072,5093234,5094401,5095699,5097162,5098336,5099382,5100595,5101701,5102803,5103949,5105150,5106453,5107504,5108175,5108896,5109690,5110792,5111847,5113248,5114613,5115730,5116838,5117965,5119161,5120448,5121476,5122610,5123538,5124842,5126359,5127576,5129021,5130301,5131578,5132850,5134103,5135470,5136602,5138047,5139630,5140828,5141629,5142963,5144222,5145423,5146872,5148091,5149382,5150857,5152263,5153538,5154449,5155596,5156720,5157944,5159423,5160681,5161680,5162988,5164250,5165707,5166687,5167859,5168791,5169813,5170768,5171741,5172847,5173994,5174954,5175976,5177083,5178011,5179120,5180102,5181257,5182211,5183410,5184471,5185865,5186866,5187789,5188708,5189936,5191072,5192161,5193158,5194205,5194985,5196095,5197233,5198141,5199017,5200202,5201372,5202527,5203170,5204082,5205197,5206163,5207158,5208067,5209203,5210392,5211502,5212505,5213406,5214232,5215354,5216426,5217245,5218386,5219543,5220565,5221316,5222392,5223492,5224390,5225620,5226545,5227232,5228394,5229451,5230621,5232073,5233458,5234656,5235795,5237374,5238451,5239337,5240390,5241148,5242338,5243608,5245058,5246412,5247675,5248855,5249659,5250781,5251912,5252880,5254039,5254778,5255781,5256878,5257927,5259005,5259967,5260993,5262276,5263148,5264146,5265487,5266735,5268041,5269055,5270080,5271210,5272317,5273014,5274147,5275171,5276440,5277391,5278463,5279518,5280636,5281627,5282909,5284345,5285536,5286844,5288086,5289244,5290332,5291423,5292200,5293362,5294147,5295177,5296184,5297499,5298768,5299982,5301078,5302331,5303167,5304227,5305436,5306445,5307658,5308736,5309770,5310539,5311565,5312727,5313909,5315089,5316251,5317351,5318140,5319614,5321082,5322505,5323838,5325043,5325854,5327058,5328353,5329554,5330479,5331777,5332732,5333648,5334828,5335988,5337201,5338335,5339653,5340838,5342096,5343195,5344423,5345815,5346934,5348229,5349461,5350792,5352052,5353221,5354259,5355408,5356607,5357888,5359061,5360324,5361457,5362584,5363824,5365012,5366130,5367122,5368214,5369539,5370798,5372206,5373470,5374815,5376198,5377328,5378337,5379502,5380702,5382013,5383134,5384252,5385442,5386603,5388042,5389105],"sizes":[1427,1162,1187,1546,1315,1416,1223,900,961,900,1057,1193,1185,1022,829,1364,1050,1006,1076,1244,1176,1325,898,1061,911,1116,1170,1169,1371,1302,1152,1043,885,914,849,1476,1183,999,1448,986,1285,1417,1146,1211,1149,1288,1193,1259,958,916,1132,928,940,867,1034,1215,1008,907,1060,880,862,1096,992,738,1095,971,1186,1005,924,1041,940,905,778,1328,1287,1121,952,1172,1203,976,1121,1228,1153,954,1147,1033,1348,1266,857,1195,790,1104,1105,917,1311,1203,1259,744,852,768,1034,907,1064,1269,1206,1238,817,1040,1065,1225,954,925,753,733,630,694,933,581,568,781,623,510,749,831,8
71,833,634,1142,843,682,953,907,701,844,882,765,1072,1404,1467,1100,1365,1463,1484,1276,1330,1453,1221,1209,1332,1324,1553,1315,1459,1426,1114,969,1359,1448,1255,876,1271,1350,1358,1031,1206,1048,1029,1005,1149,1206,1148,1153,1069,1101,1017,1294,1e3,1128,1094,1226,997,1160,1091,1358,1126,1116,919,1059,1212,1265,1032,1120,1048,1032,867,1201,1290,1343,1314,1094,1118,1132,1167,1282,1274,1142,876,1150,1001,1252,1408,1422,1226,1152,1003,722,1231,1176,1029,1385,1327,1344,1046,1179,1116,980,991,1041,1098,857,795,985,955,751,895,1432,1249,1304,1175,985,1027,896,916,908,1188,1082,958,1115,949,1198,1119,933,653,751,706,1057,1247,1209,977,1070,1045,1076,1056,988,968,1133,1087,1026,1059,1081,1102,853,1011,1013,1090,1083,1068,848,1014,1036,961,1012,1031,1228,1377,1020,990,1260,1117,1326,1384,1228,1190,1037,1176,1255,1172,1145,736,738,784,933,1142,1163,1071,786,1027,880,1052,896,852,863,1305,1597,1283,903,1014,1058,1580,1560,884,1233,1095,979,1131,1082,1131,1330,1401,1157,1246,1254,1271,1385,1325,1279,1346,1143,1423,1200,950,1044,1237,1176,1116,1266,969,927,1191,961,1358,1141,1249,1131,1643,1067,1135,1094,1100,1190,971,1227,1288,1310,1089,1049,1146,1235,1175,1204,1167,1352,1164,1186,889,937,941,999,1077,921,1154,1271,982,1158,1512,1374,1246,1403,1471,1322,1023,1061,1114,1012,1148,1121,961,1001,1275,1166,1503,1093,1355,1302,1296,1201,1478,1055,1083,1447,1415,914,1169,1196,1012,1070,1425,1266,1190,1179,1374,1368,1183,1258,1159,1204,1274,1253,1360,1181,1177,993,1224,1154,1009,1368,1218,698,1190,1588,1331,1241,1442,1200,1355,1341,922,1298,1122,898,1093,887,1120,882,1284,1336,1208,1173,897,926,1115,1177,1093,944,1115,1016,1003,1197,1108,1050,1060,1104,1011,1303,1004,926,1022,1260,1187,1086,1207,944,1142,1103,1150,1103,1010,1265,1187,1331,1224,1145,1141,1270,1438,1152,1141,1213,1321,1324,1261,1403,1307,1333,1072,1181,1113,1033,1269,1034,1272,1094,718,1069,1376,1120,1267,1307,1389,1162,1296,1223,1108,1336,1453,1201,1159,1080,1373,1209,1201,935,1370,1325,1298,1256,1286,1297,1414,1237,1286,1359,1082,1050,1173,1091,1502,1079,937,805,733,822,1215,1225,1234,890,1205,1138,1172,1047,854,1092,1262,1136,1217,1191,1030,1125,979,1224,1143,782,1076,1071,1221,931,1090,1423,1456,1469,1606,1445,1410,1422,1350,1216,1276,1408,1302,1272,1328,1217,1227,1203,1141,1339,1357,1188,1292,1301,1135,1285,1285,1382,1265,1189,1282,1415,1267,1265,1265,713,1214,1067,1130,1293,1292,1181,1140,1253,1272,1138,1441,1202,1354,1010,1076,1221,1160,1147,1210,1085,1506,1381,931,1278,1442,1424,1304,1226,1206,1115,1270,1431,1252,1295,1249,1150,1285,1139,1131,1049,1088,1110,1154,1285,883,1220,1125,1048,1261,1145,1180,1343,1246,1099,1075,1054,1353,1384,1251,1415,1259,1032,1192,1037,1253,1293,1206,1005,1117,967,1164,1195,1333,1049,1241,1264,952,1113,1353,1242,1158,1218,1173,1175,1164,1136,1288,1110,836,894,1259,1379,1186,919,957,1412,1485,1453,864,1290,1109,938,859,1250,1330,1134,1486,1321,957,975,1201,1181,1039,1149,1298,987,968,820,1294,1184,1135,1570,1361,1235,1121,1166,1214,1061,1134,1050,1244,1297,1167,1008,1107,1198,1279,1313,1217,1143,447,971,1039,1153,1240,1211,1534,1253,975,1134,1282,1312,1303,1207,1108,1081,1182,1299,1380,1522,1362,1136,1352,1264,1237,1520,1565,1151,1182,1118,631,1247,1111,1136,797,1281,974,659,1110,1056,1186,1005,1366,1360,1254,1197,1065,1461,1214,1401,1263,1181,973,1060,1266,1153,1300,1168,1448,1251,1157,1256,1243,1300,1632,1673,1296,1521,1449,1361,1032,1240,1373,1088,1344,1005,1121,1239,1537,805,1396,1377,1087,1217,1244,1287,1014,969,1024,971,1122,1085,1060,1150,1114,1058,987,1155,1115,1048,1108,1385,1333,1344,1303,904,1287,12
16,1305,1223,1175,1366,1484,1291,1068,1145,902,1150,1002,1121,1289,1330,1235,1135,1350,1075,1167,1241,1144,1222,1166,1008,1190,1376,1135,1205,1126,1073,1150,1186,1219,1230,1253,1079,1114,1078,1207,1117,1181,1267,1119,1128,1145,1106,840,1033,1104,1023,1192,1060,1077,1040,929,1108,1111,855,1013,941,978,1117,1085,1245,1486,1331,1158,967,1230,1121,1082,1337,954,1012,1208,1167,1065,879,1117,1052,1114,1042,1049,663,1227,1135,1216,859,916,1179,979,1267,1078,1019,1073,1138,1180,939,904,1075,1015,1121,1096,1232,1147,1211,1425,1200,1210,1120,1249,1134,1106,984,1361,1224,1196,843,1203,790,723,755,514,573,562,503,520,570,555,549,587,582,540,550,599,554,593,591,615,1284,1152,1282,1125,997,1417,1294,1037,1047,1233,1237,1291,980,772,1084,938,1215,1115,1040,1045,1192,1236,1200,1145,966,1143,1040,955,1092,821,658,995,1062,994,972,1055,935,812,759,928,1022,893,909,866,733,1005,802,942,1165,1077,1423,1242,1269,1172,1199,1268,1065,977,1093,1207,1289,947,671,809,1186,1488,1133,971,782,1093,935,1003,1052,1086,1071,1086,1176,1279,946,1088,1458,1392,1357,1354,1438,1390,1332,1215,1128,971,1157,1270,1173,934,927,1083,1152,1148,1167,1283,1433,1183,1243,1332,1397,1244,818,1109,1265,1124,1193,1180,1311,1387,1279,1579,649,1046,938,798,1358,1089,1119,1233,966,1261,933,1023,1295,1574,1104,1241,974,1172,995,1253,971,1306,1086,992,885,954,974,1167,1160,939,924,1249,1415,1137,1133,1059,1220,1170,1180,1121,970,1371,1408,1051,1458,1311,1432,1493,1089,1381,1163,879,1300,1022,1047,1031,1123,757,1079,1274,1202,1366,1112,1398,1178,1329,1093,1095,868,1276,1275,1095,887,1068,1137,1119,1125,1290,1128,1142,1087,870,993,791,669,1429,1582,1422,1112,1233,1178,1225,1188,1223,1334,1295,1228,974,1203,1270,978,1072,1006,1153,1306,1193,1004,1180,1033,1074,1310,1208,1216,1426,1405,1271,1293,1421,1243,1294,1078,854,1228,1371,1238,1192,1040,1147,1152,1269,934,938,1020,1099,940,1104,1101,1342,1083,977,833,965,1111,939,1005,987,1050,1230,1585,1538,1528,1407,1036,800,1067,1024,849,856,1197,912,1115,1055,1274,1346,902,1250,1317,1044,876,827,727,1260,912,1006,1025,1096,831,1200,1307,1372,1354,1235,1337,1097,1251,1210,1503,1015,1264,1045,1052,957,1284,1381,1388,878,1063,1171,1425,1171,1243,1095,1054,1031,1278,1301,1503,1352,1460,1408,1591,1426,1599,1321,1209,1221,1126,1298,1174,1184,1376,1293,1333,1238,1032,1258,787,900,1250,1264,1393,1487,1331,1001,1058,1031,1175,1169,998,1141,1099,852,852,1123,1588,1323,1101,1095,1280,1313,1072,1485,1438,1204,1245,1212,1324,1160,1317,1522,1090,1219,1001,792,1033,935,888,997,730,894,954,1527,1066,1364,1174,1001,894,1244,1232,1470,1239,1287,1451,1428,1350,1177,913,1004,1121,1075,803,1007,1229,1112,859,881,1196,1235,1239,1253,1292,1388,1219,1301,1218,1219,1198,908,1304,1536,1076,1238,1365,1355,1232,1199,1271,1209,1232,1178,1174,1147,1144,1124,975,1079,817,901,1172,1203,1313,1206,1070,890,1115,785,941,1115,1153,1301,1240,1124,1186,1211,1126,1063,1125,1205,1212,1020,1319,1133,1314,1229,1098,1192,1180,902,1001,1185,1231,1169,1400,1154,1115,1017,1227,1224,1315,1232,1220,1306,1431,1251,1233,1086,1136,1078,1263,1259,1207,1302,1315,1351,1169,1314,1194,1181,1274,1021,1433,1263,1041,1177,1189,1160,945,1047,1559,1167,1197,1198,1172,1254,1229,1308,1353,1407,1422,1195,1332,1228,1257,1123,1214,1036,1106,946,997,804,1128,1398,1480,1055,1191,1373,1154,1008,790,796,661,1274,1358,1342,1426,1248,1110,1194,1291,1162,1070,1385,1079,988,1152,1208,1345,1164,1067,974,1301,1263,1340,1182,1065,1309,1350,1288,1054,1324,1488,1320,1152,1310,1197,1273,1309,1297,1017,1265,1504,1470,1196,569,569,1096,960,1107,852,1052,1097,1272,1241,1200,1256,1063,1
084,1606,1222,1248,1282,1177,1218,1147,1201,1242,1186,1237,1187,1074,1231,1141,1301,1254,1256,1220,1293,1041,1282,1476,1260,1132,1219,1393,1213,998,982,1159,1267,1174,1022,1101,1152,1125,1174,1415,957,1404,1281,1248,1502,1114,1031,861,1028,957,807,986,1193,1004,803,902,934,710,791,977,1067,918,1269,818,1443,1410,1540,1196,1298,1093,1061,1051,1173,1075,1060,1154,1308,1410,1117,824,751,843,885,1100,1438,1173,1098,1281,899,754,1310,1256,907,1235,1234,1184,1087,1519,1198,1331,1139,1188,984,1046,1171,993,1222,1376,1225,1035,1287,1060,1138,777,1011,795,1089,1080,1402,1369,978,1202,1203,1029,1379,1352,1314,1335,1307,1059,1399,1551,1331,1482,1195,1095,1199,1458,1186,1350,1232,1193,829,1172,1363,856,1109,1031,1431,868,1234,1002,1018,1030,1021,1330,973,1091,1340,1263,1190,1391,1265,1258,1269,1192,1412,1306,1038,1284,851,922,1038,1256,1207,1121,1108,916,650,1074,937,1052,1052,980,1022,906,989,1053,985,1003,1090,1070,1037,866,989,1105,1324,1251,1120,809,910,972,993,782,1168,959,965,860,816,1289,954,728,1255,1233,1155,1107,1302,1367,1179,1134,1306,1383,1401,1505,1187,1170,1233,1217,1320,1535,1022,1235,1404,1072,1136,1126,1044,1035,1087,1037,1220,1123,914,1143,1049,1251,1207,1215,1153,1292,1178,1033,1074,1145,1151,1123,945,981,858,857,1166,1234,1048,1102,1262,1180,1154,1024,1029,1090,985,985,1230,1233,926,1617,1262,1485,1216,1215,939,847,952,931,940,1276,1419,1355,1321,1318,1212,1211,1375,1274,1338,1320,1139,1051,951,1158,1495,1337,1236,1064,1333,1367,1084,981,1317,1289,1578,1143,1173,1237,1145,1244,1095,1301,1259,1282,1369,1252,1063,1385,1339,1227,1316,1132,1368,1263,1141,999,1098,1149,1197,1147,1385,1329,1443,1521,1134,1350,1234,1239,1240,1312,1506,1311,1228,997,1186,1392,1035,1389,980,995,964,1186,1238,1509,1113,1127,1083,1128,1252,1294,983,1082,970,834,1063,1215,1145,1218,1383,1238,1199,1204,1215,1178,1029,1111,1095,1067,1166,1242,962,898,1030,1086,1024,976,997,725,1194,1446,1271,1259,960,1240,1439,1349,1306,1240,1214,1087,1280,1246,1424,1442,1189,987,1382,1118,1317,1299,1227,1143,1199,1049,1039,1250,1005,1337,1284,1252,1208,1178,1205,1422,1429,1148,1089,1140,1237,1167,1180,1100,1069,1309,1128,1382,1421,843,1326,1336,1177,1275,1445,1230,1227,1129,969,909,1198,1482,1282,1069,1185,1403,1298,1229,1138,1151,1253,1144,1234,1211,1199,1185,1320,1197,1237,1080,974,1126,973,831,973,1200,1184,1259,754,927,1187,1263,921,1111,1194,1115,1238,1004,808,1402,1274,1392,1199,1454,1114,1285,1281,1276,1214,1070,969,1334,1322,1020,957,1158,1220,1030,996,964,1074,975,1282,1211,992,1130,1180,1226,1244,1043,1270,1192,1087,1105,1094,885,887,1189,726,996,649,1132,1256,1370,1097,1004,1333,1220,1124,1176,1076,1154,1272,1231,825,865,1600,1026,1326,1300,1345,1366,1436,1562,1605,1577,1538,1321,979,1038,1005,880,898,881,877,842,1180,880,1183,734,1322,1121,1067,1400,1372,1239,1252,1042,959,1172,1186,916,1111,1157,1172,1120,1042,1086,1040,1040,1218,1045,1146,1243,959,1137,1131,1039,980,1078,1177,1130,887,938,1040,1135,1041,1120,1101,1145,1275,1016,1107,867,1136,1114,1395,1282,1277,1259,1039,1114,1204,945,1075,985,1362,1110,1253,847,854,1185,1071,954,1194,1220,1375,1337,1171,995,1021,1133,1033,1349,1194,1326,1305,1160,1350,1245,1265,1003,1109,1173,975,1116,1185,1057,1269,1004,931,1054,931,999,1275,969,1265,1416,1383,1368,1102,1087,1202,1344,1236,1031,1145,1283,975,788,1006,1065,1215,1249,1161,1161,1020,1136,1085,641,1185,966,1223,999,1426,1049,1191,1080,1247,1118,1229,1226,1072,1031,1012,1170,1025,1373,1329,1303,1269,1057,1041,1179,927,1155,1171,1057,1354,1320,1214,1271,1157,766,1072,945,1362,1322,1173,1344,1113,911,1387,1227,1168,136
5,1254,1370,1165,1210,1221,1190,1168,1243,1237,1015,852,1223,718,561,1232,1187,1012,938,1012,1158,1064,1038,1150,1024,1168,1145,1051,1044,1265,1153,1228,1174,1235,1213,1109,1263,1204,1219,1167,1225,1166,1054,1114,1252,1158,1005,1180,605,901,1139,1224,1303,1297,1164,1192,1269,1106,1058,1061,1313,1070,1096,1072,1265,1169,1259,1296,1295,1404,1107,816,1263,630,837,1273,1174,988,955,666,789,760,792,813,974,963,912,1363,1151,1226,813,1059,1082,1012,1325,1289,1158,1374,1058,1205,1275,1164,1106,1176,1118,1146,1344,1103,1180,1216,1260,1197,1256,1073,1053,1203,1046,1143,962,1317,1174,1256,1128,1344,1510,1358,1402,1385,1344,1513,1217,1196,1412,1012,1044,1121,1113,999,951,755,1299,1017,1041,1052,939,1413,1352,1291,1201,1485,1548,1448,885,760,1202,1337,1319,1322,1220,1301,1329,1128,1199,1366,1429,1373,1389,1311,1049,1311,1247,1255,1261,1265,1196,1432,1341,1293,1331,1187,1536,1401,1284,1320,1565,1430,1464,1022,765,592,687,966,1064,1037,1075,1205,1040,1389,1326,1335,1245,1174,1075,948,952,1331,972,849,1158,1005,1039,1022,1054,1163,953,1198,1282,880,1026,1241,674,1277,1050,1231,1245,1071,965,1196,1087,1011,911,1438,1160,1229,1179,1357,1436,1354,1243,1457,945,892,1046,1118,941,1365,1164,1380,1276,1197,1212,1028,1330,1538,1262,1456,1357,1559,1413,1109,1217,1397,1204,1152,1085,1339,1055,1031,1101,1255,1093,1140,1264,1291,1218,1576,1301,1287,1211,1211,964,1170,931,974,1177,1170,1491,1260,1285,1314,1226,1226,1267,1415,1379,1089,1462,1250,1134,1277,1180,1267,1470,1133,1101,1292,1358,1131,1284,1097,1277,1379,1340,1106,1053,1008,853,1115,1361,1541,1283,1332,1177,1197,1138,1051,1198,842,1158,1252,1052,1144,1298,906,1224,1145,1154,1294,1110,1231,1109,949,1407,1233,1237,1161,1303,1305,1301,1384,1409,1308,1305,1235,1444,1463,1450,1437,1300,1287,1410,1280,1410,1278,1267,1249,1424,1449,1078,1308,768,798,787,721,667,951,816,1159,861,1030,1161,835,1017,836,854,787,886,755,1116,1008,676,719,725,607,861,1031,878,855,841,927,796,947,637,622,774,1028,825,688,613,676,710,874,978,662,722,610,720,728,1023,848,843,825,925,703,1053,1015,767,796,968,766,740,1106,998,672,846,898,582,843,1092,774,810,981,786,630,1076,1027,673,798,954,676,897,1087,763,820,974,783,666,1109,1025,666,840,978,776,1067,1043,717,859,1056,744,1094,1042,752,808,976,792,721,1113,999,681,849,933,750,948,1132,1061,833,800,905,723,1046,1093,1032,885,830,646,1045,656,817,751,1079,936,666,678,863,621,1055,963,749,776,717,845,669,1085,858,788,855,875,736,973,726,849,690,845,706,782,1141,622,695,746,1043,927,665,707,695,655,837,965,748,721,629,749,674,1081,726,804,754,1113,877,665,758,871,707,975,979,700,793,691,619,722,1023,746,750,829,1033,779,704,631,860,778,1047,937,676,927,635,688,825,906,760,721,857,979,726,712,739,795,807,1044,914,676,783,642,623,824,867,713,715,773,989,735,685,618,721,697,1012,791,745,619,604,675,1011,1019,758,740,799,657,1106,700,810,798,1080,913,668,708,861,798,969,981,710,823,794,633,743,895,809,725,868,1023,716,734,639,855,747,1078,889,675,953,628,689,867,802,856,679,1021,981,688,675,750,628,882,1071,812,722,789,785,711,942,765,815,693,1035,963,651,686,823,613,922,1053,765,781,770,860,684,960,810,767,782,1101,879,666,792,828,736,1079,973,664,851,750,831,881,906,901,640,1008,997,719,681,814,631,884,1073,813,722,810,831,686,958,955,754,697,945,921,670,920,826,671,757,977,694,831,791,612,736,873,857,619,943,1026,723,739,638,824,752,1109,919,681,895,696,853,869,823,712,632,847,972,737,671,581,652,722,1038,859,685,669,579,751,897,886,763,741,966,960,659,792,794,718,878,986,679,829,629,711,850,1102,847,751,694,737,950,1003,902,833,778,748,910,7
16,579,570,576,548,1016,1053,974,684,703,847,661,1116,1057,975,1077,973,872,583,589,600,1051,1006,679,721,872,654,935,1043,750,715,744,732,715,1117,1017,667,681,719,764,1089,1014,659,662,953,733,937,1086,799,745,736,673,650,1012,1037,733,661,943,620,830,1089,875,710,791,849,703,1019,1017,762,667,790,712,843,1025,937,682,784,824,704,1040,1037,737,661,812,729,917,1051,912,689,736,601,602,1079,1026,677,660,724,923,1048,760,707,906,661,861,1089,872,708,779,731,1100,1011,662,676,909,662,908,1050,998,678,846,755,659,794,1054,897,683,868,848,602,1033,1031,701,749,891,715,689,1008,957,674,847,893,569,897,936,722,784,767,649,763,750,717,764,799,744,630,736,720,680,726,753,711,963,831,662,721,910,964,961,751,897,691,688,947,717,968,698,720,691,817,801,880,949,750,881,704,899,929,751,986,679,759,734,707,1069,911,1143,1e3,679,751,904,837,897,770,863,685,707,1008,899,1067,705,747,769,703,1057,903,991,796,689,717,873,991,1103,915,1047,900,685,1048,855,654,1009,1049,759,798,762,840,601,1184,1227,1173,951,975,1027,823,817,1070,881,704,734,733,1056,937,1034,974,1006,882,965,929,594,751,1024,985,1236,975,1348,1163,1283,877,910,1253,1153,1258,1229,1236,1185,1147,904,848,962,912,925,911,961,882,948,887,927,888,941,918,841,797,847,944,915,905,912,900,849,905,931,899,864,1265,1359,1307,982,746,1021,1048,1082,992,1389,1338,1405,1288,1353,1439,1515,1441,1219,1216,1204,1132,1229,1013,1109,1048,1300,1335,1173,1356,1128,1203,1223,1297,1073,1003,1217,1187,1163,1e3,1045,1393,1348,1363,1340,1287,1377,1137,1374,1297,1218,1512,1460,1125,1204,944,1084,903,1075,1147,839,1044,1285,1229,1025,1168,932,1092,1077,1117,1024,992,1070,1213,1201,850,1422,961,1149,1566,1339,1506,1406,1086,1143,1190,1198,1350,1091,1487,1430,1457,1474,1355,937,1192,1175,1190,1166,1348,1273,1176,1162,1240,1313,1376,1257,1175,991,1247,1154,1250,1032,1283,1156,1295,1310,1406,1095,1144,1215,1266,1204,956,1071,991,1118,1061,1304,991,1085,1191,1169,1199,1350,1244,1174,1378,1102,1236,1397,1063,1204,1282,1481,1316,1175,1368,1441,1033,1214,1317,1030,1200,1151,943,936,1274,1210,1149,1142,1165,1093,1101,1301,1193,1198,1207,1306,1358,1140,1152,1187,1121,1137,1109,1045,1148,1195,1253,1223,1101,1228,1117,1017,1307,1261,1216,1192,1203,960,1132,1163,1198,1232,1124,1265,1285,1121,1163,1061,1284,1303,1369,1361,1177,1312,1312,1452,1317,1295,1007,1187,1330,1186,1297,887,1101,1189,1175,1171,1328,1248,1041,948,991,684,1103,1149,1144,1582,1215,1399,1350,1227,1302,1236,1016,1053,1272,1361,1284,1200,1146,1184,1286,1026,1361,1136,1287,1203,1207,1309,989,1328,805,1069,1229,1139,1116,1270,995,973,1284,1171,922,820,1108,1254,1543,1326,1340,1291,1330,931,1221,956,1126,616,898,1057,1069,1006,1107,1257,1084,1067,1556,1090,1366,1145,1061,1283,1176,1137,1214,1253,1280,1320,1189,1232,1389,1117,1226,1081,1270,1193,1108,1187,1182,1226,1251,1272,1199,1274,1135,1386,1386,1100,933,1161,944,1083,1266,1040,1150,1166,1089,1091,1208,996,1199,1025,915,881,1233,982,1001,1215,1174,1112,1219,901,1131,1331,1317,1220,905,1261,1299,1348,1327,1053,1027,980,1154,1075,1209,1111,1043,1094,1162,1174,1126,1220,1283,991,787,1128,1092,1099,1143,1049,1325,1279,1311,948,1008,1109,1065,804,1137,943,983,1108,1055,1011,966,1181,1037,1270,1158,1413,1056,1289,1024,1054,1062,1164,1343,1105,986,1048,1025,1244,1262,1125,1074,1100,1203,1208,1091,1071,1430,1211,1088,1145,1388,1106,936,1070,1399,1004,893,930,981,1262,987,1169,1250,1251,1376,1195,1247,1323,1042,822,1051,960,1334,1149,1309,1031,1185,1289,1249,1276,1308,1130,1076,1068,949,1027,1031,1059,988,1017,942,1077,938,1028,766,1038,1041,910,870,828,919,861,906,934,935,
923,912,819,963,958,1015,1092,1047,1121,929,956,1023,1061,987,1207,1033,1127,1085,1141,1152,1078,1026,1067,1071,1e3,1023,957,1032,1035,928,1017,985,1002,942,1043,877,903,986,982,843,935,1068,982,906,883,909,1008,994,955,842,885,951,940,928,794,880,977,1137,969,1112,1013,997,1046,972,866,937,930,936,1030,1019,972,941,995,1003,923,913,1039,988,992,1026,1074,1057,977,1038,1087,1064,1154,1054,1008,946,1086,983,934,838,1034,1210,1229,1190,1131,1101,1015,942,1017,1034,1100,1045,1107,1083,1100,1231,1187,1021,997,998,943,1036,788,741,996,961,749,514,741,736,814,845,759,910,843,828,938,810,1044,1119,1018,1069,1081,1098,1146,922,1027,1015,1115,1019,1162,1018,1182,1076,1162,1118,1208,1062,1047,1057,1089,1061,1078,1059,878,818,939,925,837,1137,1124,1109,1032,541,416,491,1016,979,1171,1141,893,1025,1081,986,939,932,748,888,1020,1027,906,1044,1122,950,1031,1e3,1010,971,957,966,1023,1049,1014,1131,928,1011,828,968,954,944,1033,952,990,910,989,976,973,916,863,904,945,999,1048,1034,865,940,983,888,899,1050,985,993,940,784,988,829,818,839,992,924,999,1005,1060,949,1028,1e3,989,976,961,992,859,816,819,927,797,876,897,833,911,926,806,566,640,850,947,955,959,859,1130,983,424,768,1112,949,1176,1056,1193,987,1028,1188,1091,1099,1178,1077,1109,1158,1181,880,653,699,1023,911,1072,899,1072,1081,1193,977,1096,1185,947,1121,1163,1072,1137,986,1110,975,934,864,865,819,866,870,932,868,983,1020,1028,1097,839,532,1089,979,1163,1034,750,587,632,1025,1052,1112,1084,1086,1014,938,989,754,495,566,626,1030,1125,1052,980,1209,1523,1435,1367,1215,1451,1319,1261,1108,1419,1347,1346,1347,1180,1010,1371,1399,1259,1323,1275,1333,1250,1337,1165,1816,1885,1633,1062,1173,1060,836,897,1144,1199,1096,1215,628,975,1042,816,1181,1176,1126,1069,862,1242,1062,1279,865,1054,1053,1237,1085,1206,1104,1110,1088,1160,896,729,905,1131,1117,1024,1084,988,1061,1003,990,989,970,1120,1135,1088,880,691,1035,915,1040,1145,1116,1153,1434,1597,1573,1392,1623,1716,1560,1607,1633,1345,1621,1539,1483,1568,1719,1562,1625,1619,1345,687,1354,1228,1216,747,331,733,1370,1133,1340,1124,1305,1111,1309,1389,1140,1334,1391,1320,1238,1392,1371,1361,1289,1220,1269,1398,1334,1289,1192,1320,1320,1412,1389,1339,1175,1351,1271,1175,1236,1207,1364,1326,1362,1328,1232,1179,1422,1371,1273,1277,790,608,621,642,957,1003,726,1288,1020,1127,1300,1311,1139,1326,709,1198,1230,1118,1090,1371,1166,1290,1343,1315,1176,1140,1246,1115,1207,948,1273,1362,1379,1240,837,1322,1304,1446,716,1238,1265,986,1091,1058,910,920,1236,1001,1283,1399,1345,1349,1508,1292,1437,906,361,478,1182,1343,788,1368,985,1142,1175,1319,1329,1413,1404,1201,1234,1002,1319,1396,1004,1194,1172,1367,1267,1279,1082,1333,1101,667,1298,963,1203,957,831,908,1088,1464,1314,1320,1302,1477,1527,1440,1016,1478,1021,989,1273,1172,1110,1109,1147,1486,1198,1099,1234,950,802,1071,1158,1115,1177,1122,919,1091,858,953,970,811,1108,900,841,1074,1174,972,1177,1382,1238,1015,1110,1375,1047,986,1137,1041,1083,1179,1289,997,854,974,967,1381,1247,1028,1080,1133,1252,1068,1023,1033,1168,1072,1045,1216,1247,950,1121,1250,1311,992,1275,1333,1116,1071,1070,891,965,1024,1345,1272,1415,1495,1173,1119,1315,947,1105,1240,1043,971,1102,1195,1141,965,986,1351,1306,1105,1188,1134,1429,1122,1036,1254,1247,575,1188,1062,1141,1135,1348,1003,863,849,704,1225,1172,1180,1302,1333,1346,1429,1232,721,1e3,941,1388,1414,1188,1175,1401,1231,1292,1348,1455,1291,1239,1092,897,953,1382,1445,1422,1438,1273,1305,1091,1159,1091,982,1292,1337,1398,1324,1278,1181,1049,1297,1087,1253,1347,1152,1079,1059,1280,1179,1197,1162,1167,1298,1463,1174,1046,1213,1106,1102,1146
,1201,1303,1051,671,721,794,1102,1055,1401,1365,1117,1108,1127,1196,1287,1028,1134,928,1304,1517,1217,1445,1280,1277,1272,1253,1367,1132,1445,1583,1198,801,1334,1259,1201,1449,1219,1291,1475,1406,1275,911,1147,1124,1224,1479,1258,999,1308,1262,1457,980,1172,932,1022,955,973,1106,1147,960,1022,1107,928,1109,982,1155,954,1199,1061,1394,1001,923,919,1228,1136,1089,997,1047,780,1110,1138,908,876,1185,1170,1155,643,912,1115,966,995,909,1136,1189,1110,1003,901,826,1122,1072,819,1141,1157,1022,751,1076,1100,898,1230,925,687,1162,1057,1170,1452,1385,1198,1139,1579,1077,886,1053,758,1190,1270,1450,1354,1263,1180,804,1122,1131,968,1159,739,1003,1097,1049,1078,962,1026,1283,872,998,1341,1248,1306,1014,1025,1130,1107,697,1133,1024,1269,951,1072,1055,1118,991,1282,1436,1191,1308,1242,1158,1088,1091,777,1162,785,1030,1007,1315,1269,1214,1096,1253,836,1060,1209,1009,1213,1078,1034,769,1026,1162,1182,1180,1162,1100,789,1474,1468,1423,1333,1205,811,1204,1295,1201,925,1298,955,916,1180,1160,1213,1134,1318,1185,1258,1099,1228,1392,1119,1295,1232,1331,1260,1169,1038,1149,1199,1281,1173,1263,1133,1127,1240,1188,1118,992,1092,1325,1259,1408,1264,1345,1383,1130,1009,1165,1200,1311,1121,1118,1190,1161,1439,1063,1044],"successes":[1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1
,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,
1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1
,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1]};compressedData["data"]=byteArray;assert(typeof Module["LZ4"]==="object","LZ4 not present - was your app build with -s LZ4=1 ?");Module["LZ4"].loadPackage({"metadata":metadata,"compressedData":compressedData},true);Module["removeRunDependency"]("datafile_build/pyodide.asm.data")}Module["addRunDependency"]("datafile_build/pyodide.asm.data");if(!Module.preloadResults)Module.preloadResults={};Module.preloadResults[PACKAGE_NAME]={fromCache:false};if(fetched){processPackageData(fetched);fetched=null}else{fetchedCallback=processPackageData}}if(Module["calledRun"]){runWithFS()}else{if(!Module["preRun"])Module["preRun"]=[];Module["preRun"].push(runWithFS)}};loadPackage({"files":[{"filename":"/lib/python3.10/__future__.py","start":0,"end":5155},{"filename":"/lib/python3.10/__phello__.foo.py","start":5155,"end":5219},{"filename":"/lib/python3.10/_aix_support.py","start":5219,"end":8489},{"filename":"/lib/python3.10/_bootsubprocess.py","start":8489,"end":11164},{"filename":"/lib/python3.10/_collections_abc.py","start":11164,"end":43113},{"filename":"/lib/python3.10/_compat_pickle.py","start":43113,"end":51862},{"filename":"/lib/python3.10/_compression.py","start":51862,"end":57543},{"filename":"/lib/python3.10/_markupbase.py","start":57543,"end":72196},{"filename":"/lib/python3.10/_py_abc.py","start":72196,"end":78385},{"filename":"/lib/python3.10/_pydecimal.py","start":78385,"end":307061},{"filename":"/lib/python3.10/_pyio.py","start":307061,"end":401624},{"filename":"/lib/python3.10/_sitebuiltins.py","start":401624,"end":404752},{"filename":"/lib/python3.10/_strptime.py","start":404752,"end":430029},{"filename":"/lib/python3.10/_threading_local.py","start":430029,"end":437249},{"filename":"/lib/python3.10/_weakrefset.py","start":437249,"end":443172},{"filename":"/lib/python3.10/abc.py","start":443172,"end":449694},{"filename":"/lib/python3.10/aifc.py","start":449694,"end":482299},{"filename":"/lib/python3.10/antigravity.py","start":482299,"end":482799},{"filename":"/lib/python3.10/argparse.py","start":482799,"end":580783},{"filename":"/lib/python3.10/ast.py","start":580783,"end":640355},{"filename":"/lib/python3.10/asynchat.py","start":640355,"end":651840},{"filename":"/lib/python3.10/asyncore.py","start":651840,"end":672073},{"filename":"/lib/python3.10/base64.py","start":672073,"end":692920},{"filename":"/lib/python3.10/bdb.py","start":692920,"end":725147},{"filename":"/lib/python3.10/binhex.py","start":725147,"end":739931},{"filename":"/lib/python3.10/bisect.py","start":739931,"end":743066},{"filename":"/lib/python3.10/bz2.py","start":743066,"end":754913},{"filename":"/lib/python3.10/cProfile.py","start":754913,"end":761248},{"filename":"/lib/python3.10/calendar.py","start":761248,"end":785723},{"filename":"/lib/python3.10/cgi.py","start":785723,"end":819822},{"filename":"/lib/python3.10/cgitb.py","start":819822,"end":831918},{"filename":"/lib/python3.10/chunk.py","start":831918,"end":837353},{"filename":"/lib/python3.10/cmd.py","start":837353,"end":852213},{"filename":"/lib/python3.10/code.py","start":852213,"end":862835},{"filename":"/lib/python3.10/codecs.py","start":862835,"end":899507},{"f
ilename":"/lib/python3.10/codeop.py","start":899507,"end":906075},{"filename":"/lib/python3.10/colorsys.py","start":906075,"end":910092},{"filename":"/lib/python3.10/compileall.py","start":910092,"end":930335},{"filename":"/lib/python3.10/configparser.py","start":930335,"end":984965},{"filename":"/lib/python3.10/contextlib.py","start":984965,"end":1010847},{"filename":"/lib/python3.10/contextvars.py","start":1010847,"end":1010976},{"filename":"/lib/python3.10/copy.py","start":1010976,"end":1019654},{"filename":"/lib/python3.10/copyreg.py","start":1019654,"end":1027080},{"filename":"/lib/python3.10/crypt.py","start":1027080,"end":1030899},{"filename":"/lib/python3.10/csv.py","start":1030899,"end":1046929},{"filename":"/lib/python3.10/dataclasses.py","start":1046929,"end":1103316},{"filename":"/lib/python3.10/datetime.py","start":1103316,"end":1191375},{"filename":"/lib/python3.10/decimal.py","start":1191375,"end":1191695},{"filename":"/lib/python3.10/difflib.py","start":1191695,"end":1275002},{"filename":"/lib/python3.10/dis.py","start":1275002,"end":1295022},{"filename":"/lib/python3.10/doctest.py","start":1295022,"end":1399813},{"filename":"/lib/python3.10/enum.py","start":1399813,"end":1439644},{"filename":"/lib/python3.10/filecmp.py","start":1439644,"end":1449822},{"filename":"/lib/python3.10/fileinput.py","start":1449822,"end":1466062},{"filename":"/lib/python3.10/fnmatch.py","start":1466062,"end":1472066},{"filename":"/lib/python3.10/fractions.py","start":1472066,"end":1500308},{"filename":"/lib/python3.10/ftplib.py","start":1500308,"end":1535804},{"filename":"/lib/python3.10/functools.py","start":1535804,"end":1573880},{"filename":"/lib/python3.10/genericpath.py","start":1573880,"end":1578855},{"filename":"/lib/python3.10/getopt.py","start":1578855,"end":1586344},{"filename":"/lib/python3.10/getpass.py","start":1586344,"end":1592334},{"filename":"/lib/python3.10/gettext.py","start":1592334,"end":1619600},{"filename":"/lib/python3.10/glob.py","start":1619600,"end":1627488},{"filename":"/lib/python3.10/graphlib.py","start":1627488,"end":1637061},{"filename":"/lib/python3.10/gzip.py","start":1637061,"end":1658910},{"filename":"/lib/python3.10/hashlib.py","start":1658910,"end":1669139},{"filename":"/lib/python3.10/heapq.py","start":1669139,"end":1692016},{"filename":"/lib/python3.10/hmac.py","start":1692016,"end":1699733},{"filename":"/lib/python3.10/imaplib.py","start":1699733,"end":1754599},{"filename":"/lib/python3.10/imghdr.py","start":1754599,"end":1758407},{"filename":"/lib/python3.10/imp.py","start":1758407,"end":1768998},{"filename":"/lib/python3.10/inspect.py","start":1768998,"end":1892738},{"filename":"/lib/python3.10/io.py","start":1892738,"end":1896934},{"filename":"/lib/python3.10/ipaddress.py","start":1896934,"end":1971720},{"filename":"/lib/python3.10/keyword.py","start":1971720,"end":1972781},{"filename":"/lib/python3.10/linecache.py","start":1972781,"end":1978436},{"filename":"/lib/python3.10/locale.py","start":1978436,"end":2056560},{"filename":"/lib/python3.10/lzma.py","start":2056560,"end":2069837},{"filename":"/lib/python3.10/mailbox.py","start":2069837,"end":2148631},{"filename":"/lib/python3.10/mailcap.py","start":2148631,"end":2156784},{"filename":"/lib/python3.10/mimetypes.py","start":2156784,"end":2179408},{"filename":"/lib/python3.10/modulefinder.py","start":2179408,"end":2203809},{"filename":"/lib/python3.10/netrc.py","start":2203809,"end":2209556},{"filename":"/lib/python3.10/nntplib.py","start":2209556,"end":2250579},{"filename":"/lib/python3.10/ntpath.py","s
tart":2250579,"end":2278985},{"filename":"/lib/python3.10/nturl2path.py","start":2278985,"end":2281872},{"filename":"/lib/python3.10/numbers.py","start":2281872,"end":2292210},{"filename":"/lib/python3.10/opcode.py","start":2292210,"end":2298112},{"filename":"/lib/python3.10/operator.py","start":2298112,"end":2308863},{"filename":"/lib/python3.10/optparse.py","start":2308863,"end":2369232},{"filename":"/lib/python3.10/os.py","start":2369232,"end":2408746},{"filename":"/lib/python3.10/pathlib.py","start":2408746,"end":2458272},{"filename":"/lib/python3.10/pdb.py","start":2458272,"end":2521373},{"filename":"/lib/python3.10/pickle.py","start":2521373,"end":2586319},{"filename":"/lib/python3.10/pickletools.py","start":2586319,"end":2679805},{"filename":"/lib/python3.10/pipes.py","start":2679805,"end":2688719},{"filename":"/lib/python3.10/pkgutil.py","start":2688719,"end":2713295},{"filename":"/lib/python3.10/platform.py","start":2713295,"end":2755243},{"filename":"/lib/python3.10/plistlib.py","start":2755243,"end":2783491},{"filename":"/lib/python3.10/poplib.py","start":2783491,"end":2798689},{"filename":"/lib/python3.10/posixpath.py","start":2798689,"end":2814911},{"filename":"/lib/python3.10/pprint.py","start":2814911,"end":2839355},{"filename":"/lib/python3.10/profile.py","start":2839355,"end":2862226},{"filename":"/lib/python3.10/pstats.py","start":2862226,"end":2891552},{"filename":"/lib/python3.10/pty.py","start":2891552,"end":2896765},{"filename":"/lib/python3.10/py_compile.py","start":2896765,"end":2904602},{"filename":"/lib/python3.10/pyclbr.py","start":2904602,"end":2915998},{"filename":"/lib/python3.10/pydoc.py","start":2915998,"end":3025532},{"filename":"/lib/python3.10/queue.py","start":3025532,"end":3037015},{"filename":"/lib/python3.10/quopri.py","start":3037015,"end":3044283},{"filename":"/lib/python3.10/random.py","start":3044283,"end":3077504},{"filename":"/lib/python3.10/re.py","start":3077504,"end":3093364},{"filename":"/lib/python3.10/reprlib.py","start":3093364,"end":3098631},{"filename":"/lib/python3.10/rlcompleter.py","start":3098631,"end":3106448},{"filename":"/lib/python3.10/runpy.py","start":3106448,"end":3118686},{"filename":"/lib/python3.10/sched.py","start":3118686,"end":3125037},{"filename":"/lib/python3.10/secrets.py","start":3125037,"end":3127073},{"filename":"/lib/python3.10/selectors.py","start":3127073,"end":3146609},{"filename":"/lib/python3.10/shelve.py","start":3146609,"end":3155169},{"filename":"/lib/python3.10/shlex.py","start":3155169,"end":3168670},{"filename":"/lib/python3.10/shutil.py","start":3168670,"end":3220918},{"filename":"/lib/python3.10/signal.py","start":3220918,"end":3223356},{"filename":"/lib/python3.10/site.py","start":3223356,"end":3245943},{"filename":"/lib/python3.10/smtpd.py","start":3245943,"end":3281066},{"filename":"/lib/python3.10/smtplib.py","start":3281066,"end":3326484},{"filename":"/lib/python3.10/sndhdr.py","start":3326484,"end":3333583},{"filename":"/lib/python3.10/socket.py","start":3333583,"end":3370316},{"filename":"/lib/python3.10/sre_parse.py","start":3370316,"end":3410546},{"filename":"/lib/python3.10/socketserver.py","start":3410546,"end":3437842},{"filename":"/lib/python3.10/sre_compile.py","start":3437842,"end":3464537},{"filename":"/lib/python3.10/sre_constants.py","start":3464537,"end":3471691},{"filename":"/lib/python3.10/ssl.py","start":3471691,"end":3523807},{"filename":"/lib/python3.10/stat.py","start":3523807,"end":3529292},{"filename":"/lib/python3.10/statistics.py","start":3529292,"end":3572357},{"filename"
:"/lib/python3.10/string.py","start":3572357,"end":3582923},{"filename":"/lib/python3.10/stringprep.py","start":3582923,"end":3595840},{"filename":"/lib/python3.10/struct.py","start":3595840,"end":3596097},{"filename":"/lib/python3.10/subprocess.py","start":3596097,"end":3679831},{"filename":"/lib/python3.10/sunau.py","start":3679831,"end":3697989},{"filename":"/lib/python3.10/symtable.py","start":3697989,"end":3708187},{"filename":"/lib/python3.10/sysconfig.py","start":3708187,"end":3735796},{"filename":"/lib/python3.10/tabnanny.py","start":3735796,"end":3747204},{"filename":"/lib/python3.10/tarfile.py","start":3747204,"end":3842378},{"filename":"/lib/python3.10/telnetlib.py","start":3842378,"end":3865632},{"filename":"/lib/python3.10/tempfile.py","start":3865632,"end":3894089},{"filename":"/lib/python3.10/textwrap.py","start":3894089,"end":3913861},{"filename":"/lib/python3.10/this.py","start":3913861,"end":3914864},{"filename":"/lib/python3.10/threading.py","start":3914864,"end":3971764},{"filename":"/lib/python3.10/timeit.py","start":3971764,"end":3985259},{"filename":"/lib/python3.10/token.py","start":3985259,"end":3987645},{"filename":"/lib/python3.10/tokenize.py","start":3987645,"end":4013566},{"filename":"/lib/python3.10/trace.py","start":4013566,"end":4042761},{"filename":"/lib/python3.10/traceback.py","start":4042761,"end":4068979},{"filename":"/lib/python3.10/tracemalloc.py","start":4068979,"end":4087026},{"filename":"/lib/python3.10/tty.py","start":4087026,"end":4087905},{"filename":"/lib/python3.10/types.py","start":4087905,"end":4098022},{"filename":"/lib/python3.10/typing.py","start":4098022,"end":4189299},{"filename":"/lib/python3.10/uu.py","start":4189299,"end":4196258},{"filename":"/lib/python3.10/uuid.py","start":4196258,"end":4223582},{"filename":"/lib/python3.10/warnings.py","start":4223582,"end":4243270},{"filename":"/lib/python3.10/wave.py","start":4243270,"end":4261274},{"filename":"/lib/python3.10/weakref.py","start":4261274,"end":4282834},{"filename":"/lib/python3.10/xdrlib.py","start":4282834,"end":4288747},{"filename":"/lib/python3.10/zipapp.py","start":4288747,"end":4296282},{"filename":"/lib/python3.10/zipfile.py","start":4296282,"end":4384813},{"filename":"/lib/python3.10/zipimport.py","start":4384813,"end":4414954},{"filename":"/lib/python3.10/LICENSE.txt","start":4414954,"end":4428885},{"filename":"/lib/python3.10/_sysconfigdata__emscripten_wasm32-emscripten.py","start":4428885,"end":4457667},{"filename":"/lib/python3.10/asyncio/__init__.py","start":4457667,"end":4458775},{"filename":"/lib/python3.10/asyncio/__main__.py","start":4458775,"end":4462118},{"filename":"/lib/python3.10/asyncio/base_events.py","start":4462118,"end":4535569},{"filename":"/lib/python3.10/asyncio/base_futures.py","start":4535569,"end":4538143},{"filename":"/lib/python3.10/asyncio/base_subprocess.py","start":4538143,"end":4546986},{"filename":"/lib/python3.10/asyncio/base_tasks.py","start":4546986,"end":4549453},{"filename":"/lib/python3.10/asyncio/constants.py","start":4549453,"end":4550341},{"filename":"/lib/python3.10/asyncio/coroutines.py","start":4550341,"end":4559138},{"filename":"/lib/python3.10/asyncio/events.py","start":4559138,"end":4586362},{"filename":"/lib/python3.10/asyncio/exceptions.py","start":4586362,"end":4587995},{"filename":"/lib/python3.10/asyncio/format_helpers.py","start":4587995,"end":4590399},{"filename":"/lib/python3.10/asyncio/futures.py","start":4590399,"end":4604413},{"filename":"/lib/python3.10/asyncio/locks.py","start":4604413,"end":4618e3},{"filename":"
/lib/python3.10/asyncio/log.py","start":4618e3,"end":4618124},{"filename":"/lib/python3.10/asyncio/mixins.py","start":4618124,"end":4618927},{"filename":"/lib/python3.10/asyncio/proactor_events.py","start":4618927,"end":4651183},{"filename":"/lib/python3.10/asyncio/protocols.py","start":4651183,"end":4658319},{"filename":"/lib/python3.10/asyncio/queues.py","start":4658319,"end":4666329},{"filename":"/lib/python3.10/asyncio/runners.py","start":4666329,"end":4668433},{"filename":"/lib/python3.10/asyncio/selector_events.py","start":4668433,"end":4707935},{"filename":"/lib/python3.10/asyncio/sslproto.py","start":4707935,"end":4735119},{"filename":"/lib/python3.10/asyncio/staggered.py","start":4735119,"end":4741111},{"filename":"/lib/python3.10/asyncio/streams.py","start":4741111,"end":4766883},{"filename":"/lib/python3.10/asyncio/subprocess.py","start":4766883,"end":4774288},{"filename":"/lib/python3.10/asyncio/tasks.py","start":4774288,"end":4806128},{"filename":"/lib/python3.10/asyncio/threads.py","start":4806128,"end":4806918},{"filename":"/lib/python3.10/asyncio/transports.py","start":4806918,"end":4817404},{"filename":"/lib/python3.10/asyncio/trsock.py","start":4817404,"end":4823280},{"filename":"/lib/python3.10/asyncio/unix_events.py","start":4823280,"end":4874880},{"filename":"/lib/python3.10/asyncio/windows_events.py","start":4874880,"end":4907884},{"filename":"/lib/python3.10/asyncio/windows_utils.py","start":4907884,"end":4912944},{"filename":"/lib/python3.10/collections/__init__.py","start":4912944,"end":4964172},{"filename":"/lib/python3.10/collections/abc.py","start":4964172,"end":4964291},{"filename":"/lib/python3.10/concurrent/__init__.py","start":4964291,"end":4964329},{"filename":"/lib/python3.10/concurrent/futures/__init__.py","start":4964329,"end":4965883},{"filename":"/lib/python3.10/concurrent/futures/_base.py","start":4965883,"end":4988460},{"filename":"/lib/python3.10/concurrent/futures/process.py","start":4988460,"end":5018667},{"filename":"/lib/python3.10/concurrent/futures/thread.py","start":5018667,"end":5027438},{"filename":"/lib/python3.10/ctypes/__init__.py","start":5027438,"end":5045426},{"filename":"/lib/python3.10/ctypes/_aix.py","start":5045426,"end":5058001},{"filename":"/lib/python3.10/ctypes/_endian.py","start":5058001,"end":5060001},{"filename":"/lib/python3.10/ctypes/util.py","start":5060001,"end":5073880},{"filename":"/lib/python3.10/ctypes/wintypes.py","start":5073880,"end":5079508},{"filename":"/lib/python3.10/ctypes/macholib/README.ctypes","start":5079508,"end":5079804},{"filename":"/lib/python3.10/ctypes/macholib/__init__.py","start":5079804,"end":5079958},{"filename":"/lib/python3.10/ctypes/macholib/dyld.py","start":5079958,"end":5085241},{"filename":"/lib/python3.10/ctypes/macholib/dylib.py","start":5085241,"end":5087069},{"filename":"/lib/python3.10/ctypes/macholib/fetch_macholib","start":5087069,"end":5087153},{"filename":"/lib/python3.10/ctypes/macholib/fetch_macholib.bat","start":5087153,"end":5087228},{"filename":"/lib/python3.10/ctypes/macholib/framework.py","start":5087228,"end":5089429},{"filename":"/lib/python3.10/tzdata-2022.1.dist-info/LICENSE","start":5089429,"end":5090021},{"filename":"/lib/python3.10/tzdata-2022.1.dist-info/LICENSE_APACHE","start":5090021,"end":5101378},{"filename":"/lib/python3.10/tzdata-2022.1.dist-info/METADATA","start":5101378,"end":5102791},{"filename":"/lib/python3.10/tzdata-2022.1.dist-info/WHEEL","start":5102791,"end":5102901},{"filename":"/lib/python3.10/tzdata-2022.1.dist-info/top_level.txt","start":5102901,"
end":5102908},{"filename":"/lib/python3.10/tzdata-2022.1.dist-info/RECORD","start":5102908,"end":5159434},{"filename":"/lib/python3.10/tzdata-2022.1.dist-info/INSTALLER","start":5159434,"end":5159438},{"filename":"/lib/python3.10/tzdata-2022.1.dist-info/REQUESTED","start":5159438,"end":5159438},{"filename":"/lib/python3.10/email/__init__.py","start":5159438,"end":5161204},{"filename":"/lib/python3.10/email/_encoded_words.py","start":5161204,"end":5169728},{"filename":"/lib/python3.10/email/_header_value_parser.py","start":5169728,"end":5276687},{"filename":"/lib/python3.10/email/_parseaddr.py","start":5276687,"end":5294414},{"filename":"/lib/python3.10/email/_policybase.py","start":5294414,"end":5309487},{"filename":"/lib/python3.10/email/architecture.rst","start":5309487,"end":5319048},{"filename":"/lib/python3.10/email/base64mime.py","start":5319048,"end":5322607},{"filename":"/lib/python3.10/email/charset.py","start":5322607,"end":5339735},{"filename":"/lib/python3.10/email/contentmanager.py","start":5339735,"end":5350290},{"filename":"/lib/python3.10/email/encoders.py","start":5350290,"end":5352076},{"filename":"/lib/python3.10/email/errors.py","start":5352076,"end":5355811},{"filename":"/lib/python3.10/email/feedparser.py","start":5355811,"end":5378591},{"filename":"/lib/python3.10/email/generator.py","start":5378591,"end":5398787},{"filename":"/lib/python3.10/email/header.py","start":5398787,"end":5422889},{"filename":"/lib/python3.10/email/headerregistry.py","start":5422889,"end":5443702},{"filename":"/lib/python3.10/email/iterators.py","start":5443702,"end":5445837},{"filename":"/lib/python3.10/email/message.py","start":5445837,"end":5492897},{"filename":"/lib/python3.10/email/parser.py","start":5492897,"end":5497938},{"filename":"/lib/python3.10/email/policy.py","start":5497938,"end":5508321},{"filename":"/lib/python3.10/email/quoprimime.py","start":5508321,"end":5518179},{"filename":"/lib/python3.10/email/utils.py","start":5518179,"end":5531601},{"filename":"/lib/python3.10/email/mime/__init__.py","start":5531601,"end":5531601},{"filename":"/lib/python3.10/email/mime/application.py","start":5531601,"end":5532922},{"filename":"/lib/python3.10/email/mime/audio.py","start":5532922,"end":5535661},{"filename":"/lib/python3.10/email/mime/base.py","start":5535661,"end":5536577},{"filename":"/lib/python3.10/email/mime/image.py","start":5536577,"end":5538406},{"filename":"/lib/python3.10/email/mime/message.py","start":5538406,"end":5539723},{"filename":"/lib/python3.10/email/mime/multipart.py","start":5539723,"end":5541344},{"filename":"/lib/python3.10/email/mime/nonmultipart.py","start":5541344,"end":5542035},{"filename":"/lib/python3.10/email/mime/text.py","start":5542035,"end":5543472},{"filename":"/lib/python3.10/encodings/__init__.py","start":5543472,"end":5549092},{"filename":"/lib/python3.10/encodings/aliases.py","start":5549092,"end":5564769},{"filename":"/lib/python3.10/encodings/ascii.py","start":5564769,"end":5566017},{"filename":"/lib/python3.10/encodings/base64_codec.py","start":5566017,"end":5567550},{"filename":"/lib/python3.10/encodings/big5.py","start":5567550,"end":5568569},{"filename":"/lib/python3.10/encodings/big5hkscs.py","start":5568569,"end":5569608},{"filename":"/lib/python3.10/encodings/bz2_codec.py","start":5569608,"end":5571857},{"filename":"/lib/python3.10/encodings/charmap.py","start":5571857,"end":5573941},{"filename":"/lib/python3.10/encodings/cp037.py","start":5573941,"end":5587062},{"filename":"/lib/python3.10/encodings/cp1006.py","start":5587062,"end":560
0630},{"filename":"/lib/python3.10/encodings/cp1026.py","start":5600630,"end":5613743},{"filename":"/lib/python3.10/encodings/cp1125.py","start":5613743,"end":5648340},{"filename":"/lib/python3.10/encodings/cp1140.py","start":5648340,"end":5661445},{"filename":"/lib/python3.10/encodings/cp1250.py","start":5661445,"end":5675131},{"filename":"/lib/python3.10/encodings/cp1251.py","start":5675131,"end":5688492},{"filename":"/lib/python3.10/encodings/cp1252.py","start":5688492,"end":5702003},{"filename":"/lib/python3.10/encodings/cp1253.py","start":5702003,"end":5715097},{"filename":"/lib/python3.10/encodings/cp1254.py","start":5715097,"end":5728599},{"filename":"/lib/python3.10/encodings/cp1255.py","start":5728599,"end":5741065},{"filename":"/lib/python3.10/encodings/cp1256.py","start":5741065,"end":5753879},{"filename":"/lib/python3.10/encodings/cp1257.py","start":5753879,"end":5767253},{"filename":"/lib/python3.10/encodings/cp1258.py","start":5767253,"end":5780617},{"filename":"/lib/python3.10/encodings/cp273.py","start":5780617,"end":5794749},{"filename":"/lib/python3.10/encodings/cp424.py","start":5794749,"end":5806804},{"filename":"/lib/python3.10/encodings/cp437.py","start":5806804,"end":5841368},{"filename":"/lib/python3.10/encodings/cp500.py","start":5841368,"end":5854489},{"filename":"/lib/python3.10/encodings/cp720.py","start":5854489,"end":5868175},{"filename":"/lib/python3.10/encodings/cp737.py","start":5868175,"end":5902856},{"filename":"/lib/python3.10/encodings/cp775.py","start":5902856,"end":5937332},{"filename":"/lib/python3.10/encodings/cp850.py","start":5937332,"end":5971437},{"filename":"/lib/python3.10/encodings/cp852.py","start":5971437,"end":6006439},{"filename":"/lib/python3.10/encodings/cp855.py","start":6006439,"end":6040289},{"filename":"/lib/python3.10/encodings/cp856.py","start":6040289,"end":6052712},{"filename":"/lib/python3.10/encodings/cp857.py","start":6052712,"end":6086620},{"filename":"/lib/python3.10/encodings/cp858.py","start":6086620,"end":6120635},{"filename":"/lib/python3.10/encodings/cp860.py","start":6120635,"end":6155316},{"filename":"/lib/python3.10/encodings/cp861.py","start":6155316,"end":6189949},{"filename":"/lib/python3.10/encodings/cp862.py","start":6189949,"end":6223319},{"filename":"/lib/python3.10/encodings/cp863.py","start":6223319,"end":6257571},{"filename":"/lib/python3.10/encodings/cp864.py","start":6257571,"end":6291234},{"filename":"/lib/python3.10/encodings/cp865.py","start":6291234,"end":6325852},{"filename":"/lib/python3.10/encodings/cp866.py","start":6325852,"end":6360248},{"filename":"/lib/python3.10/encodings/cp869.py","start":6360248,"end":6393213},{"filename":"/lib/python3.10/encodings/cp874.py","start":6393213,"end":6405808},{"filename":"/lib/python3.10/encodings/cp875.py","start":6405808,"end":6418662},{"filename":"/lib/python3.10/encodings/cp932.py","start":6418662,"end":6419685},{"filename":"/lib/python3.10/encodings/cp949.py","start":6419685,"end":6420708},{"filename":"/lib/python3.10/encodings/cp950.py","start":6420708,"end":6421731},{"filename":"/lib/python3.10/encodings/euc_jis_2004.py","start":6421731,"end":6422782},{"filename":"/lib/python3.10/encodings/euc_jisx0213.py","start":6422782,"end":6423833},{"filename":"/lib/python3.10/encodings/euc_jp.py","start":6423833,"end":6424860},{"filename":"/lib/python3.10/encodings/euc_kr.py","start":6424860,"end":6425887},{"filename":"/lib/python3.10/encodings/gb18030.py","start":6425887,"end":6426918},{"filename":"/lib/python3.10/encodings/gb2312.py","start":6426918,"end":6427945
},{"filename":"/lib/python3.10/encodings/gbk.py","start":6427945,"end":6428960},{"filename":"/lib/python3.10/encodings/hex_codec.py","start":6428960,"end":6430468},{"filename":"/lib/python3.10/encodings/hp_roman8.py","start":6430468,"end":6443943},{"filename":"/lib/python3.10/encodings/hz.py","start":6443943,"end":6444954},{"filename":"/lib/python3.10/encodings/idna.py","start":6444954,"end":6454124},{"filename":"/lib/python3.10/encodings/iso2022_jp.py","start":6454124,"end":6455177},{"filename":"/lib/python3.10/encodings/iso2022_jp_1.py","start":6455177,"end":6456238},{"filename":"/lib/python3.10/encodings/iso2022_jp_2.py","start":6456238,"end":6457299},{"filename":"/lib/python3.10/encodings/iso2022_jp_2004.py","start":6457299,"end":6458372},{"filename":"/lib/python3.10/encodings/iso2022_jp_3.py","start":6458372,"end":6459433},{"filename":"/lib/python3.10/encodings/iso2022_jp_ext.py","start":6459433,"end":6460502},{"filename":"/lib/python3.10/encodings/iso2022_kr.py","start":6460502,"end":6461555},{"filename":"/lib/python3.10/encodings/iso8859_1.py","start":6461555,"end":6474731},{"filename":"/lib/python3.10/encodings/iso8859_10.py","start":6474731,"end":6488320},{"filename":"/lib/python3.10/encodings/iso8859_11.py","start":6488320,"end":6500655},{"filename":"/lib/python3.10/encodings/iso8859_13.py","start":6500655,"end":6513926},{"filename":"/lib/python3.10/encodings/iso8859_14.py","start":6513926,"end":6527578},{"filename":"/lib/python3.10/encodings/iso8859_15.py","start":6527578,"end":6540790},{"filename":"/lib/python3.10/encodings/iso8859_16.py","start":6540790,"end":6554347},{"filename":"/lib/python3.10/encodings/iso8859_2.py","start":6554347,"end":6567751},{"filename":"/lib/python3.10/encodings/iso8859_3.py","start":6567751,"end":6580840},{"filename":"/lib/python3.10/encodings/iso8859_4.py","start":6580840,"end":6594216},{"filename":"/lib/python3.10/encodings/iso8859_5.py","start":6594216,"end":6607231},{"filename":"/lib/python3.10/encodings/iso8859_6.py","start":6607231,"end":6618064},{"filename":"/lib/python3.10/encodings/iso8859_7.py","start":6618064,"end":6630908},{"filename":"/lib/python3.10/encodings/iso8859_8.py","start":6630908,"end":6641944},{"filename":"/lib/python3.10/encodings/iso8859_9.py","start":6641944,"end":6655100},{"filename":"/lib/python3.10/encodings/johab.py","start":6655100,"end":6656123},{"filename":"/lib/python3.10/encodings/koi8_r.py","start":6656123,"end":6669902},{"filename":"/lib/python3.10/encodings/koi8_t.py","start":6669902,"end":6683095},{"filename":"/lib/python3.10/encodings/koi8_u.py","start":6683095,"end":6696857},{"filename":"/lib/python3.10/encodings/kz1048.py","start":6696857,"end":6710580},{"filename":"/lib/python3.10/encodings/latin_1.py","start":6710580,"end":6711844},{"filename":"/lib/python3.10/encodings/mac_arabic.py","start":6711844,"end":6748311},{"filename":"/lib/python3.10/encodings/mac_croatian.py","start":6748311,"end":6761944},{"filename":"/lib/python3.10/encodings/mac_cyrillic.py","start":6761944,"end":6775398},{"filename":"/lib/python3.10/encodings/mac_farsi.py","start":6775398,"end":6790568},{"filename":"/lib/python3.10/encodings/mac_greek.py","start":6790568,"end":6804289},{"filename":"/lib/python3.10/encodings/mac_iceland.py","start":6804289,"end":6817787},{"filename":"/lib/python3.10/encodings/mac_latin2.py","start":6817787,"end":6831905},{"filename":"/lib/python3.10/encodings/mac_roman.py","start":6831905,"end":6845385},{"filename":"/lib/python3.10/encodings/mac_romanian.py","start":6845385,"end":6859046},{"filename":"/lib/py
thon3.10/encodings/mac_turkish.py","start":6859046,"end":6872559},{"filename":"/lib/python3.10/encodings/mbcs.py","start":6872559,"end":6873770},{"filename":"/lib/python3.10/encodings/oem.py","start":6873770,"end":6874789},{"filename":"/lib/python3.10/encodings/palmos.py","start":6874789,"end":6888308},{"filename":"/lib/python3.10/encodings/ptcp154.py","start":6888308,"end":6902323},{"filename":"/lib/python3.10/encodings/punycode.py","start":6902323,"end":6909206},{"filename":"/lib/python3.10/encodings/quopri_codec.py","start":6909206,"end":6910731},{"filename":"/lib/python3.10/encodings/raw_unicode_escape.py","start":6910731,"end":6912063},{"filename":"/lib/python3.10/encodings/rot_13.py","start":6912063,"end":6914511},{"filename":"/lib/python3.10/encodings/shift_jis.py","start":6914511,"end":6915550},{"filename":"/lib/python3.10/encodings/shift_jis_2004.py","start":6915550,"end":6916609},{"filename":"/lib/python3.10/encodings/shift_jisx0213.py","start":6916609,"end":6917668},{"filename":"/lib/python3.10/encodings/tis_620.py","start":6917668,"end":6929968},{"filename":"/lib/python3.10/encodings/undefined.py","start":6929968,"end":6931267},{"filename":"/lib/python3.10/encodings/unicode_escape.py","start":6931267,"end":6932571},{"filename":"/lib/python3.10/encodings/utf_16.py","start":6932571,"end":6937807},{"filename":"/lib/python3.10/encodings/utf_16_be.py","start":6937807,"end":6938844},{"filename":"/lib/python3.10/encodings/utf_16_le.py","start":6938844,"end":6939881},{"filename":"/lib/python3.10/encodings/utf_32.py","start":6939881,"end":6945010},{"filename":"/lib/python3.10/encodings/utf_32_be.py","start":6945010,"end":6945940},{"filename":"/lib/python3.10/encodings/utf_32_le.py","start":6945940,"end":6946870},{"filename":"/lib/python3.10/encodings/utf_7.py","start":6946870,"end":6947816},{"filename":"/lib/python3.10/encodings/utf_8.py","start":6947816,"end":6948821},{"filename":"/lib/python3.10/encodings/utf_8_sig.py","start":6948821,"end":6952954},{"filename":"/lib/python3.10/encodings/uu_codec.py","start":6952954,"end":6955805},{"filename":"/lib/python3.10/encodings/zlib_codec.py","start":6955805,"end":6958009},{"filename":"/lib/python3.10/html/__init__.py","start":6958009,"end":6962765},{"filename":"/lib/python3.10/html/entities.py","start":6962765,"end":7038080},{"filename":"/lib/python3.10/html/parser.py","start":7038080,"end":7055472},{"filename":"/lib/python3.10/http/__init__.py","start":7055472,"end":7062205},{"filename":"/lib/python3.10/http/client.py","start":7062205,"end":7118923},{"filename":"/lib/python3.10/http/cookiejar.py","start":7118923,"end":7196330},{"filename":"/lib/python3.10/http/cookies.py","start":7196330,"end":7216812},{"filename":"/lib/python3.10/http/server.py","start":7216812,"end":7264083},{"filename":"/lib/python3.10/importlib/__init__.py","start":7264083,"end":7270172},{"filename":"/lib/python3.10/importlib/_abc.py","start":7270172,"end":7272024},{"filename":"/lib/python3.10/importlib/_adapters.py","start":7272024,"end":7273932},{"filename":"/lib/python3.10/importlib/_bootstrap.py","start":7273932,"end":7315399},{"filename":"/lib/python3.10/importlib/_bootstrap_external.py","start":7315399,"end":7379871},{"filename":"/lib/python3.10/importlib/_common.py","start":7379871,"end":7382945},{"filename":"/lib/python3.10/importlib/abc.py","start":7382945,"end":7397366},{"filename":"/lib/python3.10/importlib/machinery.py","start":7397366,"end":7398197},{"filename":"/lib/python3.10/importlib/readers.py","start":7398197,"end":7401784},{"filename":"/lib/python3.10/
importlib/resources.py","start":7401784,"end":7407489},{"filename":"/lib/python3.10/importlib/util.py","start":7407489,"end":7418976},{"filename":"/lib/python3.10/importlib/metadata/__init__.py","start":7418976,"end":7448449},{"filename":"/lib/python3.10/importlib/metadata/_adapters.py","start":7448449,"end":7450311},{"filename":"/lib/python3.10/importlib/metadata/_collections.py","start":7450311,"end":7451054},{"filename":"/lib/python3.10/importlib/metadata/_functools.py","start":7451054,"end":7453555},{"filename":"/lib/python3.10/importlib/metadata/_itertools.py","start":7453555,"end":7454162},{"filename":"/lib/python3.10/importlib/metadata/_meta.py","start":7454162,"end":7455292},{"filename":"/lib/python3.10/importlib/metadata/_text.py","start":7455292,"end":7457490},{"filename":"/lib/python3.10/json/__init__.py","start":7457490,"end":7471509},{"filename":"/lib/python3.10/json/decoder.py","start":7471509,"end":7483981},{"filename":"/lib/python3.10/json/encoder.py","start":7483981,"end":7500054},{"filename":"/lib/python3.10/json/scanner.py","start":7500054,"end":7502479},{"filename":"/lib/python3.10/json/tool.py","start":7502479,"end":7505818},{"filename":"/lib/python3.10/logging/__init__.py","start":7505818,"end":7586023},{"filename":"/lib/python3.10/logging/config.py","start":7586023,"end":7622486},{"filename":"/lib/python3.10/logging/handlers.py","start":7622486,"end":7683115},{"filename":"/lib/python3.10/multiprocessing/__init__.py","start":7683115,"end":7684031},{"filename":"/lib/python3.10/multiprocessing/connection.py","start":7684031,"end":7716055},{"filename":"/lib/python3.10/multiprocessing/context.py","start":7716055,"end":7727312},{"filename":"/lib/python3.10/multiprocessing/forkserver.py","start":7727312,"end":7739454},{"filename":"/lib/python3.10/multiprocessing/heap.py","start":7739454,"end":7751080},{"filename":"/lib/python3.10/multiprocessing/managers.py","start":7751080,"end":7798582},{"filename":"/lib/python3.10/multiprocessing/pool.py","start":7798582,"end":7831137},{"filename":"/lib/python3.10/multiprocessing/popen_fork.py","start":7831137,"end":7833514},{"filename":"/lib/python3.10/multiprocessing/popen_forkserver.py","start":7833514,"end":7835744},{"filename":"/lib/python3.10/multiprocessing/popen_spawn_posix.py","start":7835744,"end":7837773},{"filename":"/lib/python3.10/multiprocessing/popen_spawn_win32.py","start":7837773,"end":7841784},{"filename":"/lib/python3.10/multiprocessing/process.py","start":7841784,"end":7853784},{"filename":"/lib/python3.10/multiprocessing/queues.py","start":7853784,"end":7865777},{"filename":"/lib/python3.10/multiprocessing/reduction.py","start":7865777,"end":7875289},{"filename":"/lib/python3.10/multiprocessing/resource_sharer.py","start":7875289,"end":7880421},{"filename":"/lib/python3.10/multiprocessing/resource_tracker.py","start":7880421,"end":7889396},{"filename":"/lib/python3.10/multiprocessing/shared_memory.py","start":7889396,"end":7907792},{"filename":"/lib/python3.10/multiprocessing/sharedctypes.py","start":7907792,"end":7914098},{"filename":"/lib/python3.10/multiprocessing/spawn.py","start":7914098,"end":7923394},{"filename":"/lib/python3.10/multiprocessing/synchronize.py","start":7923394,"end":7935004},{"filename":"/lib/python3.10/multiprocessing/util.py","start":7935004,"end":7949027},{"filename":"/lib/python3.10/multiprocessing/dummy/__init__.py","start":7949027,"end":7952088},{"filename":"/lib/python3.10/multiprocessing/dummy/connection.py","start":7952088,"end":7953686},{"filename":"/lib/python3.10/pydoc_data/__init__
.py","start":7953686,"end":7953686},{"filename":"/lib/python3.10/pydoc_data/_pydoc.css","start":7953686,"end":7953782},{"filename":"/lib/python3.10/pydoc_data/topics.py","start":7953782,"end":8697388},{"filename":"/lib/python3.10/site-packages/README.txt","start":8697388,"end":8697507},{"filename":"/lib/python3.10/sqlite3/__init__.py","start":8697507,"end":8700043},{"filename":"/lib/python3.10/sqlite3/dbapi2.py","start":8700043,"end":8703366},{"filename":"/lib/python3.10/sqlite3/dump.py","start":8703366,"end":8706191},{"filename":"/lib/python3.10/tzdata/__init__.py","start":8706191,"end":8706443},{"filename":"/lib/python3.10/tzdata/zones","start":8706443,"end":8715493},{"filename":"/lib/python3.10/tzdata/zoneinfo/CET","start":8715493,"end":8716114},{"filename":"/lib/python3.10/tzdata/zoneinfo/CST6CDT","start":8716114,"end":8717065},{"filename":"/lib/python3.10/tzdata/zoneinfo/Cuba","start":8717065,"end":8718182},{"filename":"/lib/python3.10/tzdata/zoneinfo/EET","start":8718182,"end":8718679},{"filename":"/lib/python3.10/tzdata/zoneinfo/EST","start":8718679,"end":8718790},{"filename":"/lib/python3.10/tzdata/zoneinfo/EST5EDT","start":8718790,"end":8719741},{"filename":"/lib/python3.10/tzdata/zoneinfo/Egypt","start":8719741,"end":8721017},{"filename":"/lib/python3.10/tzdata/zoneinfo/Eire","start":8721017,"end":8722513},{"filename":"/lib/python3.10/tzdata/zoneinfo/Factory","start":8722513,"end":8722626},{"filename":"/lib/python3.10/tzdata/zoneinfo/GB","start":8722626,"end":8724225},{"filename":"/lib/python3.10/tzdata/zoneinfo/GB-Eire","start":8724225,"end":8725824},{"filename":"/lib/python3.10/tzdata/zoneinfo/GMT","start":8725824,"end":8725935},{"filename":"/lib/python3.10/tzdata/zoneinfo/GMT+0","start":8725935,"end":8726046},{"filename":"/lib/python3.10/tzdata/zoneinfo/GMT-0","start":8726046,"end":8726157},{"filename":"/lib/python3.10/tzdata/zoneinfo/GMT0","start":8726157,"end":8726268},{"filename":"/lib/python3.10/tzdata/zoneinfo/Greenwich","start":8726268,"end":8726379},{"filename":"/lib/python3.10/tzdata/zoneinfo/HST","start":8726379,"end":8726491},{"filename":"/lib/python3.10/tzdata/zoneinfo/Hongkong","start":8726491,"end":8727266},{"filename":"/lib/python3.10/tzdata/zoneinfo/Iceland","start":8727266,"end":8728019},{"filename":"/lib/python3.10/tzdata/zoneinfo/Iran","start":8728019,"end":8730023},{"filename":"/lib/python3.10/tzdata/zoneinfo/Israel","start":8730023,"end":8731097},{"filename":"/lib/python3.10/tzdata/zoneinfo/Jamaica","start":8731097,"end":8731436},{"filename":"/lib/python3.10/tzdata/zoneinfo/Japan","start":8731436,"end":8731649},{"filename":"/lib/python3.10/tzdata/zoneinfo/Kwajalein","start":8731649,"end":8731868},{"filename":"/lib/python3.10/tzdata/zoneinfo/Libya","start":8731868,"end":8732299},{"filename":"/lib/python3.10/tzdata/zoneinfo/MET","start":8732299,"end":8732920},{"filename":"/lib/python3.10/tzdata/zoneinfo/MST","start":8732920,"end":8733031},{"filename":"/lib/python3.10/tzdata/zoneinfo/MST7MDT","start":8733031,"end":8733982},{"filename":"/lib/python3.10/tzdata/zoneinfo/NZ","start":8733982,"end":8735025},{"filename":"/lib/python3.10/tzdata/zoneinfo/NZ-CHAT","start":8735025,"end":8735833},{"filename":"/lib/python3.10/tzdata/zoneinfo/Navajo","start":8735833,"end":8736875},{"filename":"/lib/python3.10/tzdata/zoneinfo/PRC","start":8736875,"end":8737268},{"filename":"/lib/python3.10/tzdata/zoneinfo/PST8PDT","start":8737268,"end":8738219},{"filename":"/lib/python3.10/tzdata/zoneinfo/Poland","start":8738219,"end":8739142},{"filename":"/lib/python3.10/tzdata/zoneinfo/Port
ugal","start":8739142,"end":8740596},{"filename":"/lib/python3.10/tzdata/zoneinfo/ROC","start":8740596,"end":8741107},{"filename":"/lib/python3.10/tzdata/zoneinfo/ROK","start":8741107,"end":8741522},{"filename":"/lib/python3.10/tzdata/zoneinfo/Singapore","start":8741522,"end":8741778},{"filename":"/lib/python3.10/tzdata/zoneinfo/Turkey","start":8741778,"end":8742978},{"filename":"/lib/python3.10/tzdata/zoneinfo/UCT","start":8742978,"end":8743089},{"filename":"/lib/python3.10/tzdata/zoneinfo/UTC","start":8743089,"end":8743200},{"filename":"/lib/python3.10/tzdata/zoneinfo/Universal","start":8743200,"end":8743311},{"filename":"/lib/python3.10/tzdata/zoneinfo/W-SU","start":8743311,"end":8744219},{"filename":"/lib/python3.10/tzdata/zoneinfo/WET","start":8744219,"end":8744713},{"filename":"/lib/python3.10/tzdata/zoneinfo/Zulu","start":8744713,"end":8744824},{"filename":"/lib/python3.10/tzdata/zoneinfo/__init__.py","start":8744824,"end":8744824},{"filename":"/lib/python3.10/tzdata/zoneinfo/iso3166.tab","start":8744824,"end":8749287},{"filename":"/lib/python3.10/tzdata/zoneinfo/leapseconds","start":8749287,"end":8752679},{"filename":"/lib/python3.10/tzdata/zoneinfo/tzdata.zi","start":8752679,"end":8865464},{"filename":"/lib/python3.10/tzdata/zoneinfo/zone.tab","start":8865464,"end":8884883},{"filename":"/lib/python3.10/tzdata/zoneinfo/zone1970.tab","start":8884883,"end":8902476},{"filename":"/lib/python3.10/tzdata/zoneinfo/Africa/Abidjan","start":8902476,"end":8902606},{"filename":"/lib/python3.10/tzdata/zoneinfo/Africa/Accra","start":8902606,"end":8902736},{"filename":"/lib/python3.10/tzdata/zoneinfo/Africa/Addis_Ababa","start":8902736,"end":8902927},{"filename":"/lib/python3.10/tzdata/zoneinfo/Africa/Algiers","start":8902927,"end":8903397},{"filename":"/lib/python3.10/tzdata/zoneinfo/Africa/Asmara","start":8903397,"end":8903588},{"filename":"/lib/python3.10/tzdata/zoneinfo/Africa/Asmera","start":8903588,"end":8903779},{"filename":"/lib/python3.10/tzdata/zoneinfo/Africa/Bamako","start":8903779,"end":8903909},{"filename":"/lib/python3.10/tzdata/zoneinfo/Africa/Bangui","start":8903909,"end":8904089},{"filename":"/lib/python3.10/tzdata/zoneinfo/Africa/Banjul","start":8904089,"end":8904219},{"filename":"/lib/python3.10/tzdata/zoneinfo/Africa/Bissau","start":8904219,"end":8904368},{"filename":"/lib/python3.10/tzdata/zoneinfo/Africa/Blantyre","start":8904368,"end":8904499},{"filename":"/lib/python3.10/tzdata/zoneinfo/Africa/Brazzaville","start":8904499,"end":8904679},{"filename":"/lib/python3.10/tzdata/zoneinfo/Africa/Bujumbura","start":8904679,"end":8904810},{"filename":"/lib/python3.10/tzdata/zoneinfo/Africa/Cairo","start":8904810,"end":8906086},{"filename":"/lib/python3.10/tzdata/zoneinfo/Africa/Casablanca","start":8906086,"end":8908005},{"filename":"/lib/python3.10/tzdata/zoneinfo/Africa/Ceuta","start":8908005,"end":8908567},{"filename":"/lib/python3.10/tzdata/zoneinfo/Africa/Conakry","start":8908567,"end":8908697},{"filename":"/lib/python3.10/tzdata/zoneinfo/Africa/Dakar","start":8908697,"end":8908827},{"filename":"/lib/python3.10/tzdata/zoneinfo/Africa/Dar_es_Salaam","start":8908827,"end":8909018},{"filename":"/lib/python3.10/tzdata/zoneinfo/Africa/Djibouti","start":8909018,"end":8909209},{"filename":"/lib/python3.10/tzdata/zoneinfo/Africa/Douala","start":8909209,"end":8909389},{"filename":"/lib/python3.10/tzdata/zoneinfo/Africa/El_Aaiun","start":8909389,"end":8911219},{"filename":"/lib/python3.10/tzdata/zoneinfo/Africa/Freetown","start":8911219,"end":8911349},{"filename":"/lib/python3.10/tzdata/z
oneinfo/Africa/Gaborone","start":8911349,"end":8911480},{"filename":"/lib/python3.10/tzdata/zoneinfo/Africa/Harare","start":8911480,"end":8911611},{"filename":"/lib/python3.10/tzdata/zoneinfo/Africa/Johannesburg","start":8911611,"end":8911801},{"filename":"/lib/python3.10/tzdata/zoneinfo/Africa/Juba","start":8911801,"end":8912259},{"filename":"/lib/python3.10/tzdata/zoneinfo/Africa/Kampala","start":8912259,"end":8912450},{"filename":"/lib/python3.10/tzdata/zoneinfo/Africa/Khartoum","start":8912450,"end":8912908},{"filename":"/lib/python3.10/tzdata/zoneinfo/Africa/Kigali","start":8912908,"end":8913039},{"filename":"/lib/python3.10/tzdata/zoneinfo/Africa/Kinshasa","start":8913039,"end":8913219},{"filename":"/lib/python3.10/tzdata/zoneinfo/Africa/Lagos","start":8913219,"end":8913399},{"filename":"/lib/python3.10/tzdata/zoneinfo/Africa/Libreville","start":8913399,"end":8913579},{"filename":"/lib/python3.10/tzdata/zoneinfo/Africa/Lome","start":8913579,"end":8913709},{"filename":"/lib/python3.10/tzdata/zoneinfo/Africa/Luanda","start":8913709,"end":8913889},{"filename":"/lib/python3.10/tzdata/zoneinfo/Africa/Lubumbashi","start":8913889,"end":8914020},{"filename":"/lib/python3.10/tzdata/zoneinfo/Africa/Lusaka","start":8914020,"end":8914151},{"filename":"/lib/python3.10/tzdata/zoneinfo/Africa/Malabo","start":8914151,"end":8914331},{"filename":"/lib/python3.10/tzdata/zoneinfo/Africa/Maputo","start":8914331,"end":8914462},{"filename":"/lib/python3.10/tzdata/zoneinfo/Africa/Maseru","start":8914462,"end":8914652},{"filename":"/lib/python3.10/tzdata/zoneinfo/Africa/Mbabane","start":8914652,"end":8914842},{"filename":"/lib/python3.10/tzdata/zoneinfo/Africa/Mogadishu","start":8914842,"end":8915033},{"filename":"/lib/python3.10/tzdata/zoneinfo/Africa/Monrovia","start":8915033,"end":8915197},{"filename":"/lib/python3.10/tzdata/zoneinfo/Africa/Nairobi","start":8915197,"end":8915388},{"filename":"/lib/python3.10/tzdata/zoneinfo/Africa/Ndjamena","start":8915388,"end":8915548},{"filename":"/lib/python3.10/tzdata/zoneinfo/Africa/Niamey","start":8915548,"end":8915728},{"filename":"/lib/python3.10/tzdata/zoneinfo/Africa/Nouakchott","start":8915728,"end":8915858},{"filename":"/lib/python3.10/tzdata/zoneinfo/Africa/Ouagadougou","start":8915858,"end":8915988},{"filename":"/lib/python3.10/tzdata/zoneinfo/Africa/Porto-Novo","start":8915988,"end":8916168},{"filename":"/lib/python3.10/tzdata/zoneinfo/Africa/Sao_Tome","start":8916168,"end":8916341},{"filename":"/lib/python3.10/tzdata/zoneinfo/Africa/Timbuktu","start":8916341,"end":8916471},{"filename":"/lib/python3.10/tzdata/zoneinfo/Africa/Tripoli","start":8916471,"end":8916902},{"filename":"/lib/python3.10/tzdata/zoneinfo/Africa/Tunis","start":8916902,"end":8917351},{"filename":"/lib/python3.10/tzdata/zoneinfo/Africa/Windhoek","start":8917351,"end":8917989},{"filename":"/lib/python3.10/tzdata/zoneinfo/Africa/__init__.py","start":8917989,"end":8917989},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Adak","start":8917989,"end":8918958},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Anchorage","start":8918958,"end":8919935},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Anguilla","start":8919935,"end":8920112},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Antigua","start":8920112,"end":8920289},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Araguaina","start":8920289,"end":8920881},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Aruba","start":8920881,"end":8921058},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Asuncion","start":89
21058,"end":8921942},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Atikokan","start":8921942,"end":8922091},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Atka","start":8922091,"end":8923060},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Bahia","start":8923060,"end":8923742},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Bahia_Banderas","start":8923742,"end":8924272},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Barbados","start":8924272,"end":8924550},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Belem","start":8924550,"end":8924944},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Belize","start":8924944,"end":8925989},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Blanc-Sablon","start":8925989,"end":8926166},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Boa_Vista","start":8926166,"end":8926596},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Bogota","start":8926596,"end":8926775},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Boise","start":8926775,"end":8927774},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Buenos_Aires","start":8927774,"end":8928482},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Cambridge_Bay","start":8928482,"end":8929250},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Campo_Grande","start":8929250,"end":8930202},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Cancun","start":8930202,"end":8930731},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Caracas","start":8930731,"end":8930921},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Catamarca","start":8930921,"end":8931629},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Cayenne","start":8931629,"end":8931780},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Cayman","start":8931780,"end":8931929},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Chicago","start":8931929,"end":8933683},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Chihuahua","start":8933683,"end":8934023},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Coral_Harbour","start":8934023,"end":8934172},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Cordoba","start":8934172,"end":8934880},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Costa_Rica","start":8934880,"end":8935112},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Creston","start":8935112,"end":8935352},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Cuiaba","start":8935352,"end":8936286},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Curacao","start":8936286,"end":8936463},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Danmarkshavn","start":8936463,"end":8936910},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Dawson","start":8936910,"end":8937939},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Dawson_Creek","start":8937939,"end":8938622},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Denver","start":8938622,"end":8939664},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Detroit","start":8939664,"end":8940563},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Dominica","start":8940563,"end":8940740},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Edmonton","start":8940740,"end":8941710},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Eirunepe","start":8941710,"end":8942146},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/El_Salvador","start":8942146,"end":8942322},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Ensenada","start":8942322,"end":8943347},{"filename":"/lib/python3.10/tzdata/zoneinfo/A
merica/Fort_Nelson","start":8943347,"end":8944795},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Fort_Wayne","start":8944795,"end":8945326},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Fortaleza","start":8945326,"end":8945810},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Glace_Bay","start":8945810,"end":8946690},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Godthab","start":8946690,"end":8947155},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Goose_Bay","start":8947155,"end":8948735},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Grand_Turk","start":8948735,"end":8949588},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Grenada","start":8949588,"end":8949765},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Guadeloupe","start":8949765,"end":8949942},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Guatemala","start":8949942,"end":8950154},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Guayaquil","start":8950154,"end":8950333},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Guyana","start":8950333,"end":8950514},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Halifax","start":8950514,"end":8952186},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Havana","start":8952186,"end":8953303},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Hermosillo","start":8953303,"end":8953589},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Indianapolis","start":8953589,"end":8954120},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Inuvik","start":8954120,"end":8954821},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Iqaluit","start":8954821,"end":8955561},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Jamaica","start":8955561,"end":8955900},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Jujuy","start":8955900,"end":8956590},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Juneau","start":8956590,"end":8957556},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Knox_IN","start":8957556,"end":8958572},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Kralendijk","start":8958572,"end":8958749},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/La_Paz","start":8958749,"end":8958919},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Lima","start":8958919,"end":8959202},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Los_Angeles","start":8959202,"end":8960496},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Louisville","start":8960496,"end":8961738},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Lower_Princes","start":8961738,"end":8961915},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Maceio","start":8961915,"end":8962417},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Managua","start":8962417,"end":8962712},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Manaus","start":8962712,"end":8963124},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Marigot","start":8963124,"end":8963301},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Martinique","start":8963301,"end":8963479},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Matamoros","start":8963479,"end":8963916},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Mazatlan","start":8963916,"end":8964283},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Mendoza","start":8964283,"end":8964991},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Menominee","start":8964991,"end":8965908},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Merida","start":8965908,"end":8966211},{"filename":"/lib/python3
.10/tzdata/zoneinfo/America/Metlakatla","start":8966211,"end":8966806},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Mexico_City","start":8966806,"end":8967218},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Miquelon","start":8967218,"end":8967768},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Moncton","start":8967768,"end":8969261},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Monterrey","start":8969261,"end":8969554},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Montevideo","start":8969554,"end":8970523},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Montreal","start":8970523,"end":8972240},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Montserrat","start":8972240,"end":8972417},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Nassau","start":8972417,"end":8974134},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/New_York","start":8974134,"end":8975878},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Nipigon","start":8975878,"end":8976713},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Nome","start":8976713,"end":8977688},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Noronha","start":8977688,"end":8978172},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Nuuk","start":8978172,"end":8978637},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Ojinaga","start":8978637,"end":8979121},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Panama","start":8979121,"end":8979270},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Pangnirtung","start":8979270,"end":8980039},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Paramaribo","start":8980039,"end":8980226},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Phoenix","start":8980226,"end":8980466},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Port-au-Prince","start":8980466,"end":8981031},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Port_of_Spain","start":8981031,"end":8981208},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Porto_Acre","start":8981208,"end":8981626},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Porto_Velho","start":8981626,"end":8982020},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Puerto_Rico","start":8982020,"end":8982197},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Punta_Arenas","start":8982197,"end":8983406},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Rainy_River","start":8983406,"end":8984241},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Rankin_Inlet","start":8984241,"end":8984933},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Recife","start":8984933,"end":8985417},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Regina","start":8985417,"end":8986055},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Resolute","start":8986055,"end":8986747},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Rio_Branco","start":8986747,"end":8987165},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Rosario","start":8987165,"end":8987873},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Santa_Isabel","start":8987873,"end":8988898},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Santarem","start":8988898,"end":8989307},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Santiago","start":8989307,"end":8990589},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Santo_Domingo","start":8990589,"end":8990906},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Sao_Paulo","start":8990906,"end":8991858},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Scoresbysund",
"start":8991858,"end":8992337},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Shiprock","start":8992337,"end":8993379},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Sitka","start":8993379,"end":8994335},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/St_Barthelemy","start":8994335,"end":8994512},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/St_Johns","start":8994512,"end":8996390},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/St_Kitts","start":8996390,"end":8996567},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/St_Lucia","start":8996567,"end":8996744},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/St_Thomas","start":8996744,"end":8996921},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/St_Vincent","start":8996921,"end":8997098},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Swift_Current","start":8997098,"end":8997466},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Tegucigalpa","start":8997466,"end":8997660},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Thule","start":8997660,"end":8998115},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Thunder_Bay","start":8998115,"end":8998996},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Tijuana","start":8998996,"end":9000021},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Toronto","start":9000021,"end":9001738},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Tortola","start":9001738,"end":9001915},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Vancouver","start":9001915,"end":9003245},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Virgin","start":9003245,"end":9003422},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Whitehorse","start":9003422,"end":9004451},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Winnipeg","start":9004451,"end":9005745},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Yakutat","start":9005745,"end":9006691},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Yellowknife","start":9006691,"end":9007420},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/__init__.py","start":9007420,"end":9007420},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Argentina/Buenos_Aires","start":9007420,"end":9008128},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Argentina/Catamarca","start":9008128,"end":9008836},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Argentina/ComodRivadavia","start":9008836,"end":9009544},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Argentina/Cordoba","start":9009544,"end":9010252},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Argentina/Jujuy","start":9010252,"end":9010942},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Argentina/La_Rioja","start":9010942,"end":9011659},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Argentina/Mendoza","start":9011659,"end":9012367},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Argentina/Rio_Gallegos","start":9012367,"end":9013075},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Argentina/Salta","start":9013075,"end":9013765},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Argentina/San_Juan","start":9013765,"end":9014482},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Argentina/San_Luis","start":9014482,"end":9015199},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Argentina/Tucuman","start":9015199,"end":9015925},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Argentina/Ushuaia","start":9015925,"end":9016633},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Argentina/__init__.py","start"
:9016633,"end":9016633},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Indiana/Indianapolis","start":9016633,"end":9017164},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Indiana/Knox","start":9017164,"end":9018180},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Indiana/Marengo","start":9018180,"end":9018747},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Indiana/Petersburg","start":9018747,"end":9019430},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Indiana/Tell_City","start":9019430,"end":9019952},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Indiana/Vevay","start":9019952,"end":9020321},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Indiana/Vincennes","start":9020321,"end":9020879},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Indiana/Winamac","start":9020879,"end":9021491},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Indiana/__init__.py","start":9021491,"end":9021491},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Kentucky/Louisville","start":9021491,"end":9022733},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Kentucky/Monticello","start":9022733,"end":9023705},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/Kentucky/__init__.py","start":9023705,"end":9023705},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/North_Dakota/Beulah","start":9023705,"end":9024748},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/North_Dakota/Center","start":9024748,"end":9025738},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/North_Dakota/New_Salem","start":9025738,"end":9026728},{"filename":"/lib/python3.10/tzdata/zoneinfo/America/North_Dakota/__init__.py","start":9026728,"end":9026728},{"filename":"/lib/python3.10/tzdata/zoneinfo/Antarctica/Casey","start":9026728,"end":9026971},{"filename":"/lib/python3.10/tzdata/zoneinfo/Antarctica/Davis","start":9026971,"end":9027168},{"filename":"/lib/python3.10/tzdata/zoneinfo/Antarctica/DumontDUrville","start":9027168,"end":9027322},{"filename":"/lib/python3.10/tzdata/zoneinfo/Antarctica/Macquarie","start":9027322,"end":9028298},{"filename":"/lib/python3.10/tzdata/zoneinfo/Antarctica/Mawson","start":9028298,"end":9028450},{"filename":"/lib/python3.10/tzdata/zoneinfo/Antarctica/McMurdo","start":9028450,"end":9029493},{"filename":"/lib/python3.10/tzdata/zoneinfo/Antarctica/Palmer","start":9029493,"end":9030380},{"filename":"/lib/python3.10/tzdata/zoneinfo/Antarctica/Rothera","start":9030380,"end":9030512},{"filename":"/lib/python3.10/tzdata/zoneinfo/Antarctica/South_Pole","start":9030512,"end":9031555},{"filename":"/lib/python3.10/tzdata/zoneinfo/Antarctica/Syowa","start":9031555,"end":9031688},{"filename":"/lib/python3.10/tzdata/zoneinfo/Antarctica/Troll","start":9031688,"end":9031865},{"filename":"/lib/python3.10/tzdata/zoneinfo/Antarctica/Vostok","start":9031865,"end":9031998},{"filename":"/lib/python3.10/tzdata/zoneinfo/Antarctica/__init__.py","start":9031998,"end":9031998},{"filename":"/lib/python3.10/tzdata/zoneinfo/Arctic/Longyearbyen","start":9031998,"end":9032674},{"filename":"/lib/python3.10/tzdata/zoneinfo/Arctic/__init__.py","start":9032674,"end":9032674},{"filename":"/lib/python3.10/tzdata/zoneinfo/Asia/Aden","start":9032674,"end":9032807},{"filename":"/lib/python3.10/tzdata/zoneinfo/Asia/Almaty","start":9032807,"end":9033416},{"filename":"/lib/python3.10/tzdata/zoneinfo/Asia/Amman","start":9033416,"end":9034338},{"filename":"/lib/python3.10/tzdata/zoneinfo/Asia/Anadyr","start":9034338,"end":9035081},{"filename":"/lib/python3.10/tzdata/zoneinfo/Asia/Aqtau","start":903
5081,"end":9035687},{"filename":"/lib/python3.10/tzdata/zoneinfo/Asia/Aqtobe","start":9035687,"end":9036302},{"filename":"/lib/python3.10/tzdata/zoneinfo/Asia/Ashgabat","start":9036302,"end":9036677},{"filename":"/lib/python3.10/tzdata/zoneinfo/Asia/Ashkhabad","start":9036677,"end":9037052},{"filename":"/lib/python3.10/tzdata/zoneinfo/Asia/Atyrau","start":9037052,"end":9037668},{"filename":"/lib/python3.10/tzdata/zoneinfo/Asia/Baghdad","start":9037668,"end":9038298},{"filename":"/lib/python3.10/tzdata/zoneinfo/Asia/Bahrain","start":9038298,"end":9038450},{"filename":"/lib/python3.10/tzdata/zoneinfo/Asia/Baku","start":9038450,"end":9039194},{"filename":"/lib/python3.10/tzdata/zoneinfo/Asia/Bangkok","start":9039194,"end":9039346},{"filename":"/lib/python3.10/tzdata/zoneinfo/Asia/Barnaul","start":9039346,"end":9040099},{"filename":"/lib/python3.10/tzdata/zoneinfo/Asia/Beirut","start":9040099,"end":9040831},{"filename":"/lib/python3.10/tzdata/zoneinfo/Asia/Bishkek","start":9040831,"end":9041449},{"filename":"/lib/python3.10/tzdata/zoneinfo/Asia/Brunei","start":9041449,"end":9041603},{"filename":"/lib/python3.10/tzdata/zoneinfo/Asia/Calcutta","start":9041603,"end":9041823},{"filename":"/lib/python3.10/tzdata/zoneinfo/Asia/Chita","start":9041823,"end":9042573},{"filename":"/lib/python3.10/tzdata/zoneinfo/Asia/Choibalsan","start":9042573,"end":9043192},{"filename":"/lib/python3.10/tzdata/zoneinfo/Asia/Chongqing","start":9043192,"end":9043585},{"filename":"/lib/python3.10/tzdata/zoneinfo/Asia/Chungking","start":9043585,"end":9043978},{"filename":"/lib/python3.10/tzdata/zoneinfo/Asia/Colombo","start":9043978,"end":9044225},{"filename":"/lib/python3.10/tzdata/zoneinfo/Asia/Dacca","start":9044225,"end":9044456},{"filename":"/lib/python3.10/tzdata/zoneinfo/Asia/Damascus","start":9044456,"end":9045503},{"filename":"/lib/python3.10/tzdata/zoneinfo/Asia/Dhaka","start":9045503,"end":9045734},{"filename":"/lib/python3.10/tzdata/zoneinfo/Asia/Dili","start":9045734,"end":9045904},{"filename":"/lib/python3.10/tzdata/zoneinfo/Asia/Dubai","start":9045904,"end":9046037},{"filename":"/lib/python3.10/tzdata/zoneinfo/Asia/Dushanbe","start":9046037,"end":9046403},{"filename":"/lib/python3.10/tzdata/zoneinfo/Asia/Famagusta","start":9046403,"end":9047343},{"filename":"/lib/python3.10/tzdata/zoneinfo/Asia/Gaza","start":9047343,"end":9048583},{"filename":"/lib/python3.10/tzdata/zoneinfo/Asia/Harbin","start":9048583,"end":9048976},{"filename":"/lib/python3.10/tzdata/zoneinfo/Asia/Hebron","start":9048976,"end":9050234},{"filename":"/lib/python3.10/tzdata/zoneinfo/Asia/Ho_Chi_Minh","start":9050234,"end":9050470},{"filename":"/lib/python3.10/tzdata/zoneinfo/Asia/Hong_Kong","start":9050470,"end":9051245},{"filename":"/lib/python3.10/tzdata/zoneinfo/Asia/Hovd","start":9051245,"end":9051839},{"filename":"/lib/python3.10/tzdata/zoneinfo/Asia/Irkutsk","start":9051839,"end":9052599},{"filename":"/lib/python3.10/tzdata/zoneinfo/Asia/Istanbul","start":9052599,"end":9053799},{"filename":"/lib/python3.10/tzdata/zoneinfo/Asia/Jakarta","start":9053799,"end":9054047},{"filename":"/lib/python3.10/tzdata/zoneinfo/Asia/Jayapura","start":9054047,"end":9054218},{"filename":"/lib/python3.10/tzdata/zoneinfo/Asia/Jerusalem","start":9054218,"end":9055292},{"filename":"/lib/python3.10/tzdata/zoneinfo/Asia/Kabul","start":9055292,"end":9055451},{"filename":"/lib/python3.10/tzdata/zoneinfo/Asia/Kamchatka","start":9055451,"end":9056178},{"filename":"/lib/python3.10/tzdata/zoneinfo/Asia/Karachi","start":9056178,"end":9056444},{"filename":"/lib/python3.
10/tzdata/zoneinfo/Asia/Kashgar","start":9056444,"end":9056577},{"filename":"/lib/python3.10/tzdata/zoneinfo/Asia/Kathmandu","start":9056577,"end":9056738},{"filename":"/lib/python3.10/tzdata/zoneinfo/Asia/Katmandu","start":9056738,"end":9056899},{"filename":"/lib/python3.10/tzdata/zoneinfo/Asia/Khandyga","start":9056899,"end":9057674},{"filename":"/lib/python3.10/tzdata/zoneinfo/Asia/Kolkata","start":9057674,"end":9057894},{"filename":"/lib/python3.10/tzdata/zoneinfo/Asia/Krasnoyarsk","start":9057894,"end":9058635},{"filename":"/lib/python3.10/tzdata/zoneinfo/Asia/Kuala_Lumpur","start":9058635,"end":9058891},{"filename":"/lib/python3.10/tzdata/zoneinfo/Asia/Kuching","start":9058891,"end":9059211},{"filename":"/lib/python3.10/tzdata/zoneinfo/Asia/Kuwait","start":9059211,"end":9059344},{"filename":"/lib/python3.10/tzdata/zoneinfo/Asia/Macao","start":9059344,"end":9060135},{"filename":"/lib/python3.10/tzdata/zoneinfo/Asia/Macau","start":9060135,"end":9060926},{"filename":"/lib/python3.10/tzdata/zoneinfo/Asia/Magadan","start":9060926,"end":9061677},{"filename":"/lib/python3.10/tzdata/zoneinfo/Asia/Makassar","start":9061677,"end":9061867},{"filename":"/lib/python3.10/tzdata/zoneinfo/Asia/Manila","start":9061867,"end":9062105},{"filename":"/lib/python3.10/tzdata/zoneinfo/Asia/Muscat","start":9062105,"end":9062238},{"filename":"/lib/python3.10/tzdata/zoneinfo/Asia/Nicosia","start":9062238,"end":9062835},{"filename":"/lib/python3.10/tzdata/zoneinfo/Asia/Novokuznetsk","start":9062835,"end":9063561},{"filename":"/lib/python3.10/tzdata/zoneinfo/Asia/Novosibirsk","start":9063561,"end":9064314},{"filename":"/lib/python3.10/tzdata/zoneinfo/Asia/Omsk","start":9064314,"end":9065055},{"filename":"/lib/python3.10/tzdata/zoneinfo/Asia/Oral","start":9065055,"end":9065680},{"filename":"/lib/python3.10/tzdata/zoneinfo/Asia/Phnom_Penh","start":9065680,"end":9065832},{"filename":"/lib/python3.10/tzdata/zoneinfo/Asia/Pontianak","start":9065832,"end":9066079},{"filename":"/lib/python3.10/tzdata/zoneinfo/Asia/Pyongyang","start":9066079,"end":9066262},{"filename":"/lib/python3.10/tzdata/zoneinfo/Asia/Qatar","start":9066262,"end":9066414},{"filename":"/lib/python3.10/tzdata/zoneinfo/Asia/Qostanay","start":9066414,"end":9067029},{"filename":"/lib/python3.10/tzdata/zoneinfo/Asia/Qyzylorda","start":9067029,"end":9067653},{"filename":"/lib/python3.10/tzdata/zoneinfo/Asia/Rangoon","start":9067653,"end":9067840},{"filename":"/lib/python3.10/tzdata/zoneinfo/Asia/Riyadh","start":9067840,"end":9067973},{"filename":"/lib/python3.10/tzdata/zoneinfo/Asia/Saigon","start":9067973,"end":9068209},{"filename":"/lib/python3.10/tzdata/zoneinfo/Asia/Sakhalin","start":9068209,"end":9068964},{"filename":"/lib/python3.10/tzdata/zoneinfo/Asia/Samarkand","start":9068964,"end":9069330},{"filename":"/lib/python3.10/tzdata/zoneinfo/Asia/Seoul","start":9069330,"end":9069745},{"filename":"/lib/python3.10/tzdata/zoneinfo/Asia/Shanghai","start":9069745,"end":9070138},{"filename":"/lib/python3.10/tzdata/zoneinfo/Asia/Singapore","start":9070138,"end":9070394},{"filename":"/lib/python3.10/tzdata/zoneinfo/Asia/Srednekolymsk","start":9070394,"end":9071136},{"filename":"/lib/python3.10/tzdata/zoneinfo/Asia/Taipei","start":9071136,"end":9071647},{"filename":"/lib/python3.10/tzdata/zoneinfo/Asia/Tashkent","start":9071647,"end":9072013},{"filename":"/lib/python3.10/tzdata/zoneinfo/Asia/Tbilisi","start":9072013,"end":9072642},{"filename":"/lib/python3.10/tzdata/zoneinfo/Asia/Tehran","start":9072642,"end":9074646},{"filename":"/lib/python3.10/tzdata/zoneinfo/A
sia/Tel_Aviv","start":9074646,"end":9075720},{"filename":"/lib/python3.10/tzdata/zoneinfo/Asia/Thimbu","start":9075720,"end":9075874},{"filename":"/lib/python3.10/tzdata/zoneinfo/Asia/Thimphu","start":9075874,"end":9076028},{"filename":"/lib/python3.10/tzdata/zoneinfo/Asia/Tokyo","start":9076028,"end":9076241},{"filename":"/lib/python3.10/tzdata/zoneinfo/Asia/Tomsk","start":9076241,"end":9076994},{"filename":"/lib/python3.10/tzdata/zoneinfo/Asia/Ujung_Pandang","start":9076994,"end":9077184},{"filename":"/lib/python3.10/tzdata/zoneinfo/Asia/Ulaanbaatar","start":9077184,"end":9077778},{"filename":"/lib/python3.10/tzdata/zoneinfo/Asia/Ulan_Bator","start":9077778,"end":9078372},{"filename":"/lib/python3.10/tzdata/zoneinfo/Asia/Urumqi","start":9078372,"end":9078505},{"filename":"/lib/python3.10/tzdata/zoneinfo/Asia/Ust-Nera","start":9078505,"end":9079276},{"filename":"/lib/python3.10/tzdata/zoneinfo/Asia/Vientiane","start":9079276,"end":9079428},{"filename":"/lib/python3.10/tzdata/zoneinfo/Asia/Vladivostok","start":9079428,"end":9080170},{"filename":"/lib/python3.10/tzdata/zoneinfo/Asia/Yakutsk","start":9080170,"end":9080911},{"filename":"/lib/python3.10/tzdata/zoneinfo/Asia/Yangon","start":9080911,"end":9081098},{"filename":"/lib/python3.10/tzdata/zoneinfo/Asia/Yekaterinburg","start":9081098,"end":9081858},{"filename":"/lib/python3.10/tzdata/zoneinfo/Asia/Yerevan","start":9081858,"end":9082566},{"filename":"/lib/python3.10/tzdata/zoneinfo/Asia/__init__.py","start":9082566,"end":9082566},{"filename":"/lib/python3.10/tzdata/zoneinfo/Atlantic/Azores","start":9082566,"end":9084019},{"filename":"/lib/python3.10/tzdata/zoneinfo/Atlantic/Bermuda","start":9084019,"end":9085043},{"filename":"/lib/python3.10/tzdata/zoneinfo/Atlantic/Canary","start":9085043,"end":9085521},{"filename":"/lib/python3.10/tzdata/zoneinfo/Atlantic/Cape_Verde","start":9085521,"end":9085696},{"filename":"/lib/python3.10/tzdata/zoneinfo/Atlantic/Faeroe","start":9085696,"end":9086137},{"filename":"/lib/python3.10/tzdata/zoneinfo/Atlantic/Faroe","start":9086137,"end":9086578},{"filename":"/lib/python3.10/tzdata/zoneinfo/Atlantic/Jan_Mayen","start":9086578,"end":9087254},{"filename":"/lib/python3.10/tzdata/zoneinfo/Atlantic/Madeira","start":9087254,"end":9088707},{"filename":"/lib/python3.10/tzdata/zoneinfo/Atlantic/Reykjavik","start":9088707,"end":9089460},{"filename":"/lib/python3.10/tzdata/zoneinfo/Atlantic/South_Georgia","start":9089460,"end":9089592},{"filename":"/lib/python3.10/tzdata/zoneinfo/Atlantic/St_Helena","start":9089592,"end":9089722},{"filename":"/lib/python3.10/tzdata/zoneinfo/Atlantic/Stanley","start":9089722,"end":9090511},{"filename":"/lib/python3.10/tzdata/zoneinfo/Atlantic/__init__.py","start":9090511,"end":9090511},{"filename":"/lib/python3.10/tzdata/zoneinfo/Australia/ACT","start":9090511,"end":9091415},{"filename":"/lib/python3.10/tzdata/zoneinfo/Australia/Adelaide","start":9091415,"end":9092336},{"filename":"/lib/python3.10/tzdata/zoneinfo/Australia/Brisbane","start":9092336,"end":9092625},{"filename":"/lib/python3.10/tzdata/zoneinfo/Australia/Broken_Hill","start":9092625,"end":9093566},{"filename":"/lib/python3.10/tzdata/zoneinfo/Australia/Canberra","start":9093566,"end":9094470},{"filename":"/lib/python3.10/tzdata/zoneinfo/Australia/Currie","start":9094470,"end":9095473},{"filename":"/lib/python3.10/tzdata/zoneinfo/Australia/Darwin","start":9095473,"end":9095707},{"filename":"/lib/python3.10/tzdata/zoneinfo/Australia/Eucla","start":9095707,"end":9096021},{"filename":"/lib/python3.10/tzdata/zoneinfo/Austral
ia/Hobart","start":9096021,"end":9097024},{"filename":"/lib/python3.10/tzdata/zoneinfo/Australia/LHI","start":9097024,"end":9097716},{"filename":"/lib/python3.10/tzdata/zoneinfo/Australia/Lindeman","start":9097716,"end":9098041},{"filename":"/lib/python3.10/tzdata/zoneinfo/Australia/Lord_Howe","start":9098041,"end":9098733},{"filename":"/lib/python3.10/tzdata/zoneinfo/Australia/Melbourne","start":9098733,"end":9099637},{"filename":"/lib/python3.10/tzdata/zoneinfo/Australia/NSW","start":9099637,"end":9100541},{"filename":"/lib/python3.10/tzdata/zoneinfo/Australia/North","start":9100541,"end":9100775},{"filename":"/lib/python3.10/tzdata/zoneinfo/Australia/Perth","start":9100775,"end":9101081},{"filename":"/lib/python3.10/tzdata/zoneinfo/Australia/Queensland","start":9101081,"end":9101370},{"filename":"/lib/python3.10/tzdata/zoneinfo/Australia/South","start":9101370,"end":9102291},{"filename":"/lib/python3.10/tzdata/zoneinfo/Australia/Sydney","start":9102291,"end":9103195},{"filename":"/lib/python3.10/tzdata/zoneinfo/Australia/Tasmania","start":9103195,"end":9104198},{"filename":"/lib/python3.10/tzdata/zoneinfo/Australia/Victoria","start":9104198,"end":9105102},{"filename":"/lib/python3.10/tzdata/zoneinfo/Australia/West","start":9105102,"end":9105408},{"filename":"/lib/python3.10/tzdata/zoneinfo/Australia/Yancowinna","start":9105408,"end":9106349},{"filename":"/lib/python3.10/tzdata/zoneinfo/Australia/__init__.py","start":9106349,"end":9106349},{"filename":"/lib/python3.10/tzdata/zoneinfo/Brazil/Acre","start":9106349,"end":9106767},{"filename":"/lib/python3.10/tzdata/zoneinfo/Brazil/DeNoronha","start":9106767,"end":9107251},{"filename":"/lib/python3.10/tzdata/zoneinfo/Brazil/East","start":9107251,"end":9108203},{"filename":"/lib/python3.10/tzdata/zoneinfo/Brazil/West","start":9108203,"end":9108615},{"filename":"/lib/python3.10/tzdata/zoneinfo/Brazil/__init__.py","start":9108615,"end":9108615},{"filename":"/lib/python3.10/tzdata/zoneinfo/Canada/Atlantic","start":9108615,"end":9110287},{"filename":"/lib/python3.10/tzdata/zoneinfo/Canada/Central","start":9110287,"end":9111581},{"filename":"/lib/python3.10/tzdata/zoneinfo/Canada/Eastern","start":9111581,"end":9113298},{"filename":"/lib/python3.10/tzdata/zoneinfo/Canada/Mountain","start":9113298,"end":9114268},{"filename":"/lib/python3.10/tzdata/zoneinfo/Canada/Newfoundland","start":9114268,"end":9116146},{"filename":"/lib/python3.10/tzdata/zoneinfo/Canada/Pacific","start":9116146,"end":9117476},{"filename":"/lib/python3.10/tzdata/zoneinfo/Canada/Saskatchewan","start":9117476,"end":9118114},{"filename":"/lib/python3.10/tzdata/zoneinfo/Canada/Yukon","start":9118114,"end":9119143},{"filename":"/lib/python3.10/tzdata/zoneinfo/Canada/__init__.py","start":9119143,"end":9119143},{"filename":"/lib/python3.10/tzdata/zoneinfo/Chile/Continental","start":9119143,"end":9120425},{"filename":"/lib/python3.10/tzdata/zoneinfo/Chile/EasterIsland","start":9120425,"end":9121527},{"filename":"/lib/python3.10/tzdata/zoneinfo/Chile/__init__.py","start":9121527,"end":9121527},{"filename":"/lib/python3.10/tzdata/zoneinfo/Etc/GMT","start":9121527,"end":9121638},{"filename":"/lib/python3.10/tzdata/zoneinfo/Etc/GMT+0","start":9121638,"end":9121749},{"filename":"/lib/python3.10/tzdata/zoneinfo/Etc/GMT+1","start":9121749,"end":9121862},{"filename":"/lib/python3.10/tzdata/zoneinfo/Etc/GMT+10","start":9121862,"end":9121976},{"filename":"/lib/python3.10/tzdata/zoneinfo/Etc/GMT+11","start":9121976,"end":9122090},{"filename":"/lib/python3.10/tzdata/zoneinfo/Etc/GMT+12","start":9122
090,"end":9122204},{"filename":"/lib/python3.10/tzdata/zoneinfo/Etc/GMT+2","start":9122204,"end":9122317},{"filename":"/lib/python3.10/tzdata/zoneinfo/Etc/GMT+3","start":9122317,"end":9122430},{"filename":"/lib/python3.10/tzdata/zoneinfo/Etc/GMT+4","start":9122430,"end":9122543},{"filename":"/lib/python3.10/tzdata/zoneinfo/Etc/GMT+5","start":9122543,"end":9122656},{"filename":"/lib/python3.10/tzdata/zoneinfo/Etc/GMT+6","start":9122656,"end":9122769},{"filename":"/lib/python3.10/tzdata/zoneinfo/Etc/GMT+7","start":9122769,"end":9122882},{"filename":"/lib/python3.10/tzdata/zoneinfo/Etc/GMT+8","start":9122882,"end":9122995},{"filename":"/lib/python3.10/tzdata/zoneinfo/Etc/GMT+9","start":9122995,"end":9123108},{"filename":"/lib/python3.10/tzdata/zoneinfo/Etc/GMT-0","start":9123108,"end":9123219},{"filename":"/lib/python3.10/tzdata/zoneinfo/Etc/GMT-1","start":9123219,"end":9123333},{"filename":"/lib/python3.10/tzdata/zoneinfo/Etc/GMT-10","start":9123333,"end":9123448},{"filename":"/lib/python3.10/tzdata/zoneinfo/Etc/GMT-11","start":9123448,"end":9123563},{"filename":"/lib/python3.10/tzdata/zoneinfo/Etc/GMT-12","start":9123563,"end":9123678},{"filename":"/lib/python3.10/tzdata/zoneinfo/Etc/GMT-13","start":9123678,"end":9123793},{"filename":"/lib/python3.10/tzdata/zoneinfo/Etc/GMT-14","start":9123793,"end":9123908},{"filename":"/lib/python3.10/tzdata/zoneinfo/Etc/GMT-2","start":9123908,"end":9124022},{"filename":"/lib/python3.10/tzdata/zoneinfo/Etc/GMT-3","start":9124022,"end":9124136},{"filename":"/lib/python3.10/tzdata/zoneinfo/Etc/GMT-4","start":9124136,"end":9124250},{"filename":"/lib/python3.10/tzdata/zoneinfo/Etc/GMT-5","start":9124250,"end":9124364},{"filename":"/lib/python3.10/tzdata/zoneinfo/Etc/GMT-6","start":9124364,"end":9124478},{"filename":"/lib/python3.10/tzdata/zoneinfo/Etc/GMT-7","start":9124478,"end":9124592},{"filename":"/lib/python3.10/tzdata/zoneinfo/Etc/GMT-8","start":9124592,"end":9124706},{"filename":"/lib/python3.10/tzdata/zoneinfo/Etc/GMT-9","start":9124706,"end":9124820},{"filename":"/lib/python3.10/tzdata/zoneinfo/Etc/GMT0","start":9124820,"end":9124931},{"filename":"/lib/python3.10/tzdata/zoneinfo/Etc/Greenwich","start":9124931,"end":9125042},{"filename":"/lib/python3.10/tzdata/zoneinfo/Etc/UCT","start":9125042,"end":9125153},{"filename":"/lib/python3.10/tzdata/zoneinfo/Etc/UTC","start":9125153,"end":9125264},{"filename":"/lib/python3.10/tzdata/zoneinfo/Etc/Universal","start":9125264,"end":9125375},{"filename":"/lib/python3.10/tzdata/zoneinfo/Etc/Zulu","start":9125375,"end":9125486},{"filename":"/lib/python3.10/tzdata/zoneinfo/Etc/__init__.py","start":9125486,"end":9125486},{"filename":"/lib/python3.10/tzdata/zoneinfo/Europe/Amsterdam","start":9125486,"end":9126557},{"filename":"/lib/python3.10/tzdata/zoneinfo/Europe/Andorra","start":9126557,"end":9126946},{"filename":"/lib/python3.10/tzdata/zoneinfo/Europe/Astrakhan","start":9126946,"end":9127672},{"filename":"/lib/python3.10/tzdata/zoneinfo/Europe/Athens","start":9127672,"end":9128354},{"filename":"/lib/python3.10/tzdata/zoneinfo/Europe/Belfast","start":9128354,"end":9129953},{"filename":"/lib/python3.10/tzdata/zoneinfo/Europe/Belgrade","start":9129953,"end":9130431},{"filename":"/lib/python3.10/tzdata/zoneinfo/Europe/Berlin","start":9130431,"end":9131136},{"filename":"/lib/python3.10/tzdata/zoneinfo/Europe/Bratislava","start":9131136,"end":9131859},{"filename":"/lib/python3.10/tzdata/zoneinfo/Europe/Brussels","start":9131859,"end":9132962},{"filename":"/lib/python3.10/tzdata/zoneinfo/Europe/Bucharest","start":9132962
,"end":9133623},{"filename":"/lib/python3.10/tzdata/zoneinfo/Europe/Budapest","start":9133623,"end":9134389},{"filename":"/lib/python3.10/tzdata/zoneinfo/Europe/Busingen","start":9134389,"end":9134886},{"filename":"/lib/python3.10/tzdata/zoneinfo/Europe/Chisinau","start":9134886,"end":9135641},{"filename":"/lib/python3.10/tzdata/zoneinfo/Europe/Copenhagen","start":9135641,"end":9136264},{"filename":"/lib/python3.10/tzdata/zoneinfo/Europe/Dublin","start":9136264,"end":9137760},{"filename":"/lib/python3.10/tzdata/zoneinfo/Europe/Gibraltar","start":9137760,"end":9138980},{"filename":"/lib/python3.10/tzdata/zoneinfo/Europe/Guernsey","start":9138980,"end":9140579},{"filename":"/lib/python3.10/tzdata/zoneinfo/Europe/Helsinki","start":9140579,"end":9141060},{"filename":"/lib/python3.10/tzdata/zoneinfo/Europe/Isle_of_Man","start":9141060,"end":9142659},{"filename":"/lib/python3.10/tzdata/zoneinfo/Europe/Istanbul","start":9142659,"end":9143859},{"filename":"/lib/python3.10/tzdata/zoneinfo/Europe/Jersey","start":9143859,"end":9145458},{"filename":"/lib/python3.10/tzdata/zoneinfo/Europe/Kaliningrad","start":9145458,"end":9146362},{"filename":"/lib/python3.10/tzdata/zoneinfo/Europe/Kiev","start":9146362,"end":9146920},{"filename":"/lib/python3.10/tzdata/zoneinfo/Europe/Kirov","start":9146920,"end":9147637},{"filename":"/lib/python3.10/tzdata/zoneinfo/Europe/Lisbon","start":9147637,"end":9149091},{"filename":"/lib/python3.10/tzdata/zoneinfo/Europe/Ljubljana","start":9149091,"end":9149569},{"filename":"/lib/python3.10/tzdata/zoneinfo/Europe/London","start":9149569,"end":9151168},{"filename":"/lib/python3.10/tzdata/zoneinfo/Europe/Luxembourg","start":9151168,"end":9152255},{"filename":"/lib/python3.10/tzdata/zoneinfo/Europe/Madrid","start":9152255,"end":9153152},{"filename":"/lib/python3.10/tzdata/zoneinfo/Europe/Malta","start":9153152,"end":9154080},{"filename":"/lib/python3.10/tzdata/zoneinfo/Europe/Mariehamn","start":9154080,"end":9154561},{"filename":"/lib/python3.10/tzdata/zoneinfo/Europe/Minsk","start":9154561,"end":9155369},{"filename":"/lib/python3.10/tzdata/zoneinfo/Europe/Monaco","start":9155369,"end":9156483},{"filename":"/lib/python3.10/tzdata/zoneinfo/Europe/Moscow","start":9156483,"end":9157391},{"filename":"/lib/python3.10/tzdata/zoneinfo/Europe/Nicosia","start":9157391,"end":9157988},{"filename":"/lib/python3.10/tzdata/zoneinfo/Europe/Oslo","start":9157988,"end":9158664},{"filename":"/lib/python3.10/tzdata/zoneinfo/Europe/Paris","start":9158664,"end":9159769},{"filename":"/lib/python3.10/tzdata/zoneinfo/Europe/Podgorica","start":9159769,"end":9160247},{"filename":"/lib/python3.10/tzdata/zoneinfo/Europe/Prague","start":9160247,"end":9160970},{"filename":"/lib/python3.10/tzdata/zoneinfo/Europe/Riga","start":9160970,"end":9161664},{"filename":"/lib/python3.10/tzdata/zoneinfo/Europe/Rome","start":9161664,"end":9162611},{"filename":"/lib/python3.10/tzdata/zoneinfo/Europe/Samara","start":9162611,"end":9163343},{"filename":"/lib/python3.10/tzdata/zoneinfo/Europe/San_Marino","start":9163343,"end":9164290},{"filename":"/lib/python3.10/tzdata/zoneinfo/Europe/Sarajevo","start":9164290,"end":9164768},{"filename":"/lib/python3.10/tzdata/zoneinfo/Europe/Saratov","start":9164768,"end":9165494},{"filename":"/lib/python3.10/tzdata/zoneinfo/Europe/Simferopol","start":9165494,"end":9166359},{"filename":"/lib/python3.10/tzdata/zoneinfo/Europe/Skopje","start":9166359,"end":9166837},{"filename":"/lib/python3.10/tzdata/zoneinfo/Europe/Sofia","start":9166837,"end":9167429},{"filename":"/lib/python3.10/tzdata/zone
info/Europe/Stockholm","start":9167429,"end":9167926},{"filename":"/lib/python3.10/tzdata/zoneinfo/Europe/Tallinn","start":9167926,"end":9168601},{"filename":"/lib/python3.10/tzdata/zoneinfo/Europe/Tirane","start":9168601,"end":9169205},{"filename":"/lib/python3.10/tzdata/zoneinfo/Europe/Tiraspol","start":9169205,"end":9169960},{"filename":"/lib/python3.10/tzdata/zoneinfo/Europe/Ulyanovsk","start":9169960,"end":9170720},{"filename":"/lib/python3.10/tzdata/zoneinfo/Europe/Uzhgorod","start":9170720,"end":9171259},{"filename":"/lib/python3.10/tzdata/zoneinfo/Europe/Vaduz","start":9171259,"end":9171756},{"filename":"/lib/python3.10/tzdata/zoneinfo/Europe/Vatican","start":9171756,"end":9172703},{"filename":"/lib/python3.10/tzdata/zoneinfo/Europe/Vienna","start":9172703,"end":9173361},{"filename":"/lib/python3.10/tzdata/zoneinfo/Europe/Vilnius","start":9173361,"end":9174037},{"filename":"/lib/python3.10/tzdata/zoneinfo/Europe/Volgograd","start":9174037,"end":9174772},{"filename":"/lib/python3.10/tzdata/zoneinfo/Europe/Warsaw","start":9174772,"end":9175695},{"filename":"/lib/python3.10/tzdata/zoneinfo/Europe/Zagreb","start":9175695,"end":9176173},{"filename":"/lib/python3.10/tzdata/zoneinfo/Europe/Zaporozhye","start":9176173,"end":9176742},{"filename":"/lib/python3.10/tzdata/zoneinfo/Europe/Zurich","start":9176742,"end":9177239},{"filename":"/lib/python3.10/tzdata/zoneinfo/Europe/__init__.py","start":9177239,"end":9177239},{"filename":"/lib/python3.10/tzdata/zoneinfo/Indian/Antananarivo","start":9177239,"end":9177430},{"filename":"/lib/python3.10/tzdata/zoneinfo/Indian/Chagos","start":9177430,"end":9177582},{"filename":"/lib/python3.10/tzdata/zoneinfo/Indian/Christmas","start":9177582,"end":9177715},{"filename":"/lib/python3.10/tzdata/zoneinfo/Indian/Cocos","start":9177715,"end":9177855},{"filename":"/lib/python3.10/tzdata/zoneinfo/Indian/Comoro","start":9177855,"end":9178046},{"filename":"/lib/python3.10/tzdata/zoneinfo/Indian/Kerguelen","start":9178046,"end":9178179},{"filename":"/lib/python3.10/tzdata/zoneinfo/Indian/Mahe","start":9178179,"end":9178312},{"filename":"/lib/python3.10/tzdata/zoneinfo/Indian/Maldives","start":9178312,"end":9178464},{"filename":"/lib/python3.10/tzdata/zoneinfo/Indian/Mauritius","start":9178464,"end":9178643},{"filename":"/lib/python3.10/tzdata/zoneinfo/Indian/Mayotte","start":9178643,"end":9178834},{"filename":"/lib/python3.10/tzdata/zoneinfo/Indian/Reunion","start":9178834,"end":9178967},{"filename":"/lib/python3.10/tzdata/zoneinfo/Indian/__init__.py","start":9178967,"end":9178967},{"filename":"/lib/python3.10/tzdata/zoneinfo/Mexico/BajaNorte","start":9178967,"end":9179992},{"filename":"/lib/python3.10/tzdata/zoneinfo/Mexico/BajaSur","start":9179992,"end":9180359},{"filename":"/lib/python3.10/tzdata/zoneinfo/Mexico/General","start":9180359,"end":9180771},{"filename":"/lib/python3.10/tzdata/zoneinfo/Mexico/__init__.py","start":9180771,"end":9180771},{"filename":"/lib/python3.10/tzdata/zoneinfo/Pacific/Apia","start":9180771,"end":9181178},{"filename":"/lib/python3.10/tzdata/zoneinfo/Pacific/Auckland","start":9181178,"end":9182221},{"filename":"/lib/python3.10/tzdata/zoneinfo/Pacific/Bougainville","start":9182221,"end":9182422},{"filename":"/lib/python3.10/tzdata/zoneinfo/Pacific/Chatham","start":9182422,"end":9183230},{"filename":"/lib/python3.10/tzdata/zoneinfo/Pacific/Chuuk","start":9183230,"end":9183425},{"filename":"/lib/python3.10/tzdata/zoneinfo/Pacific/Easter","start":9183425,"end":9184527},{"filename":"/lib/python3.10/tzdata/zoneinfo/Pacific/Efate","start":91
84527,"end":9184869},{"filename":"/lib/python3.10/tzdata/zoneinfo/Pacific/Enderbury","start":9184869,"end":9185041},{"filename":"/lib/python3.10/tzdata/zoneinfo/Pacific/Fakaofo","start":9185041,"end":9185194},{"filename":"/lib/python3.10/tzdata/zoneinfo/Pacific/Fiji","start":9185194,"end":9185622},{"filename":"/lib/python3.10/tzdata/zoneinfo/Pacific/Funafuti","start":9185622,"end":9185756},{"filename":"/lib/python3.10/tzdata/zoneinfo/Pacific/Galapagos","start":9185756,"end":9185931},{"filename":"/lib/python3.10/tzdata/zoneinfo/Pacific/Gambier","start":9185931,"end":9186063},{"filename":"/lib/python3.10/tzdata/zoneinfo/Pacific/Guadalcanal","start":9186063,"end":9186197},{"filename":"/lib/python3.10/tzdata/zoneinfo/Pacific/Guam","start":9186197,"end":9186547},{"filename":"/lib/python3.10/tzdata/zoneinfo/Pacific/Honolulu","start":9186547,"end":9186768},{"filename":"/lib/python3.10/tzdata/zoneinfo/Pacific/Johnston","start":9186768,"end":9186989},{"filename":"/lib/python3.10/tzdata/zoneinfo/Pacific/Kanton","start":9186989,"end":9187161},{"filename":"/lib/python3.10/tzdata/zoneinfo/Pacific/Kiritimati","start":9187161,"end":9187335},{"filename":"/lib/python3.10/tzdata/zoneinfo/Pacific/Kosrae","start":9187335,"end":9187577},{"filename":"/lib/python3.10/tzdata/zoneinfo/Pacific/Kwajalein","start":9187577,"end":9187796},{"filename":"/lib/python3.10/tzdata/zoneinfo/Pacific/Majuro","start":9187796,"end":9188014},{"filename":"/lib/python3.10/tzdata/zoneinfo/Pacific/Marquesas","start":9188014,"end":9188153},{"filename":"/lib/python3.10/tzdata/zoneinfo/Pacific/Midway","start":9188153,"end":9188299},{"filename":"/lib/python3.10/tzdata/zoneinfo/Pacific/Nauru","start":9188299,"end":9188482},{"filename":"/lib/python3.10/tzdata/zoneinfo/Pacific/Niue","start":9188482,"end":9188636},{"filename":"/lib/python3.10/tzdata/zoneinfo/Pacific/Norfolk","start":9188636,"end":9188883},{"filename":"/lib/python3.10/tzdata/zoneinfo/Pacific/Noumea","start":9188883,"end":9189081},{"filename":"/lib/python3.10/tzdata/zoneinfo/Pacific/Pago_Pago","start":9189081,"end":9189227},{"filename":"/lib/python3.10/tzdata/zoneinfo/Pacific/Palau","start":9189227,"end":9189375},{"filename":"/lib/python3.10/tzdata/zoneinfo/Pacific/Pitcairn","start":9189375,"end":9189528},{"filename":"/lib/python3.10/tzdata/zoneinfo/Pacific/Pohnpei","start":9189528,"end":9189742},{"filename":"/lib/python3.10/tzdata/zoneinfo/Pacific/Ponape","start":9189742,"end":9189956},{"filename":"/lib/python3.10/tzdata/zoneinfo/Pacific/Port_Moresby","start":9189956,"end":9190110},{"filename":"/lib/python3.10/tzdata/zoneinfo/Pacific/Rarotonga","start":9190110,"end":9190516},{"filename":"/lib/python3.10/tzdata/zoneinfo/Pacific/Saipan","start":9190516,"end":9190866},{"filename":"/lib/python3.10/tzdata/zoneinfo/Pacific/Samoa","start":9190866,"end":9191012},{"filename":"/lib/python3.10/tzdata/zoneinfo/Pacific/Tahiti","start":9191012,"end":9191145},{"filename":"/lib/python3.10/tzdata/zoneinfo/Pacific/Tarawa","start":9191145,"end":9191279},{"filename":"/lib/python3.10/tzdata/zoneinfo/Pacific/Tongatapu","start":9191279,"end":9191516},{"filename":"/lib/python3.10/tzdata/zoneinfo/Pacific/Truk","start":9191516,"end":9191711},{"filename":"/lib/python3.10/tzdata/zoneinfo/Pacific/Wake","start":9191711,"end":9191845},{"filename":"/lib/python3.10/tzdata/zoneinfo/Pacific/Wallis","start":9191845,"end":9191979},{"filename":"/lib/python3.10/tzdata/zoneinfo/Pacific/Yap","start":9191979,"end":9192174},{"filename":"/lib/python3.10/tzdata/zoneinfo/Pacific/__init__.py","start":9192174,"end":9192174},{
"filename":"/lib/python3.10/tzdata/zoneinfo/US/Alaska","start":9192174,"end":9193151},{"filename":"/lib/python3.10/tzdata/zoneinfo/US/Aleutian","start":9193151,"end":9194120},{"filename":"/lib/python3.10/tzdata/zoneinfo/US/Arizona","start":9194120,"end":9194360},{"filename":"/lib/python3.10/tzdata/zoneinfo/US/Central","start":9194360,"end":9196114},{"filename":"/lib/python3.10/tzdata/zoneinfo/US/East-Indiana","start":9196114,"end":9196645},{"filename":"/lib/python3.10/tzdata/zoneinfo/US/Eastern","start":9196645,"end":9198389},{"filename":"/lib/python3.10/tzdata/zoneinfo/US/Hawaii","start":9198389,"end":9198610},{"filename":"/lib/python3.10/tzdata/zoneinfo/US/Indiana-Starke","start":9198610,"end":9199626},{"filename":"/lib/python3.10/tzdata/zoneinfo/US/Michigan","start":9199626,"end":9200525},{"filename":"/lib/python3.10/tzdata/zoneinfo/US/Mountain","start":9200525,"end":9201567},{"filename":"/lib/python3.10/tzdata/zoneinfo/US/Pacific","start":9201567,"end":9202861},{"filename":"/lib/python3.10/tzdata/zoneinfo/US/Samoa","start":9202861,"end":9203007},{"filename":"/lib/python3.10/tzdata/zoneinfo/US/__init__.py","start":9203007,"end":9203007},{"filename":"/lib/python3.10/unittest/__init__.py","start":9203007,"end":9206768},{"filename":"/lib/python3.10/unittest/__main__.py","start":9206768,"end":9207240},{"filename":"/lib/python3.10/unittest/_log.py","start":9207240,"end":9209986},{"filename":"/lib/python3.10/unittest/async_case.py","start":9209986,"end":9216207},{"filename":"/lib/python3.10/unittest/case.py","start":9216207,"end":9273769},{"filename":"/lib/python3.10/unittest/loader.py","start":9273769,"end":9296471},{"filename":"/lib/python3.10/unittest/main.py","start":9296471,"end":9307709},{"filename":"/lib/python3.10/unittest/mock.py","start":9307709,"end":9409795},{"filename":"/lib/python3.10/unittest/result.py","start":9409795,"end":9417264},{"filename":"/lib/python3.10/unittest/runner.py","start":9417264,"end":9425315},{"filename":"/lib/python3.10/unittest/signals.py","start":9425315,"end":9427718},{"filename":"/lib/python3.10/unittest/suite.py","start":9427718,"end":9441230},{"filename":"/lib/python3.10/unittest/util.py","start":9441230,"end":9446445},{"filename":"/lib/python3.10/urllib/__init__.py","start":9446445,"end":9446445},{"filename":"/lib/python3.10/urllib/error.py","start":9446445,"end":9449077},{"filename":"/lib/python3.10/urllib/parse.py","start":9449077,"end":9491355},{"filename":"/lib/python3.10/urllib/request.py","start":9491355,"end":9592745},{"filename":"/lib/python3.10/urllib/response.py","start":9592745,"end":9595106},{"filename":"/lib/python3.10/urllib/robotparser.py","start":9595106,"end":9604530},{"filename":"/lib/python3.10/wsgiref/__init__.py","start":9604530,"end":9605117},{"filename":"/lib/python3.10/wsgiref/handlers.py","start":9605117,"end":9626786},{"filename":"/lib/python3.10/wsgiref/headers.py","start":9626786,"end":9633552},{"filename":"/lib/python3.10/wsgiref/simple_server.py","start":9633552,"end":9638723},{"filename":"/lib/python3.10/wsgiref/util.py","start":9638723,"end":9644574},{"filename":"/lib/python3.10/wsgiref/validate.py","start":9644574,"end":9659673},{"filename":"/lib/python3.10/xml/__init__.py","start":9659673,"end":9660230},{"filename":"/lib/python3.10/xml/dom/NodeFilter.py","start":9660230,"end":9661166},{"filename":"/lib/python3.10/xml/dom/__init__.py","start":9661166,"end":9665185},{"filename":"/lib/python3.10/xml/dom/domreg.py","start":9665185,"end":9668636},{"filename":"/lib/python3.10/xml/dom/expatbuilder.py","start":9668636,"end":9
704403},{"filename":"/lib/python3.10/xml/dom/minicompat.py","start":9704403,"end":9707770},{"filename":"/lib/python3.10/xml/dom/minidom.py","start":9707770,"end":9775836},{"filename":"/lib/python3.10/xml/dom/pulldom.py","start":9775836,"end":9787833},{"filename":"/lib/python3.10/xml/dom/xmlbuilder.py","start":9787833,"end":9800220},{"filename":"/lib/python3.10/xml/etree/ElementInclude.py","start":9800220,"end":9807102},{"filename":"/lib/python3.10/xml/etree/ElementPath.py","start":9807102,"end":9821109},{"filename":"/lib/python3.10/xml/etree/ElementTree.py","start":9821109,"end":9895045},{"filename":"/lib/python3.10/xml/etree/__init__.py","start":9895045,"end":9896650},{"filename":"/lib/python3.10/xml/etree/cElementTree.py","start":9896650,"end":9896732},{"filename":"/lib/python3.10/xml/parsers/__init__.py","start":9896732,"end":9896899},{"filename":"/lib/python3.10/xml/parsers/expat.py","start":9896899,"end":9897147},{"filename":"/lib/python3.10/xml/sax/__init__.py","start":9897147,"end":9900789},{"filename":"/lib/python3.10/xml/sax/_exceptions.py","start":9900789,"end":9905574},{"filename":"/lib/python3.10/xml/sax/expatreader.py","start":9905574,"end":9921301},{"filename":"/lib/python3.10/xml/sax/handler.py","start":9921301,"end":9936918},{"filename":"/lib/python3.10/xml/sax/saxutils.py","start":9936918,"end":9949173},{"filename":"/lib/python3.10/xml/sax/xmlreader.py","start":9949173,"end":9961857},{"filename":"/lib/python3.10/xmlrpc/__init__.py","start":9961857,"end":9961895},{"filename":"/lib/python3.10/xmlrpc/client.py","start":9961895,"end":10011286},{"filename":"/lib/python3.10/xmlrpc/server.py","start":10011286,"end":10047958},{"filename":"/lib/python3.10/zoneinfo/__init__.py","start":10047958,"end":10048661},{"filename":"/lib/python3.10/zoneinfo/_common.py","start":10048661,"end":10053946},{"filename":"/lib/python3.10/zoneinfo/_tzpath.py","start":10053946,"end":10059027},{"filename":"/lib/python3.10/zoneinfo/_zoneinfo.py","start":10059027,"end":10083345},{"filename":"/lib/python3.10/webbrowser.py","start":10083345,"end":10083712}],"remote_package_size":5394245,"package_uuid":"b90888e8-366e-4159-955c-b9020d3ab63f"})})();const API=Module.API;const Hiwire=Module.hiwire;const Tests=API.tests;var moduleOverrides={};var key;for(key in Module){if(Module.hasOwnProperty(key)){moduleOverrides[key]=Module[key]}}var arguments_=[];var thisProgram="./this.program";var quit_=function(status,toThrow){throw toThrow};var ENVIRONMENT_IS_WEB=typeof window==="object";var ENVIRONMENT_IS_WORKER=typeof importScripts==="function";var ENVIRONMENT_IS_NODE=typeof process==="object"&&typeof process.versions==="object"&&typeof process.versions.node==="string";var ENVIRONMENT_IS_SHELL=!ENVIRONMENT_IS_WEB&&!ENVIRONMENT_IS_NODE&&!ENVIRONMENT_IS_WORKER;var scriptDirectory="";function locateFile(path){if(Module["locateFile"]){return Module["locateFile"](path,scriptDirectory)}return scriptDirectory+path}var read_,readAsync,readBinary,setWindowTitle;var nodeFS;var nodePath;if(ENVIRONMENT_IS_NODE){if(ENVIRONMENT_IS_WORKER){scriptDirectory=require("path").dirname(scriptDirectory)+"/"}else{scriptDirectory=__dirname+"/"}read_=function shell_read(filename,binary){if(!nodeFS)nodeFS=require("fs");if(!nodePath)nodePath=require("path");filename=nodePath["normalize"](filename);return nodeFS["readFileSync"](filename,binary?null:"utf8")};readBinary=function readBinary(filename){var ret=read_(filename,true);if(!ret.buffer){ret=new Uint8Array(ret)}assert(ret.buffer);return ret};readAsync=function 
readAsync(filename,onload,onerror){if(!nodeFS)nodeFS=require("fs");if(!nodePath)nodePath=require("path");filename=nodePath["normalize"](filename);nodeFS["readFile"](filename,function(err,data){if(err)onerror(err);else onload(data.buffer)})};if(process["argv"].length>1){thisProgram=process["argv"][1].replace(/\\/g,"/")}arguments_=process["argv"].slice(2);process["on"]("uncaughtException",function(ex){if(!(ex instanceof ExitStatus)){throw ex}});process["on"]("unhandledRejection",abort);quit_=function(status,toThrow){if(keepRuntimeAlive()){process["exitCode"]=status;throw toThrow}process["exit"](status)};Module["inspect"]=function(){return"[Emscripten Module object]"}}else if(ENVIRONMENT_IS_WEB||ENVIRONMENT_IS_WORKER){if(ENVIRONMENT_IS_WORKER){scriptDirectory=self.location.href}else if(typeof document!=="undefined"&&document.currentScript){scriptDirectory=document.currentScript.src}if(_scriptDir){scriptDirectory=_scriptDir}if(scriptDirectory.indexOf("blob:")!==0){scriptDirectory=scriptDirectory.substr(0,scriptDirectory.lastIndexOf("/")+1)}else{scriptDirectory=""}{read_=function(url){var xhr=new XMLHttpRequest;xhr.open("GET",url,false);xhr.send(null);return xhr.responseText};if(ENVIRONMENT_IS_WORKER){readBinary=function(url){var xhr=new XMLHttpRequest;xhr.open("GET",url,false);xhr.responseType="arraybuffer";xhr.send(null);return new Uint8Array(xhr.response)}}readAsync=function(url,onload,onerror){var xhr=new XMLHttpRequest;xhr.open("GET",url,true);xhr.responseType="arraybuffer";xhr.onload=function(){if(xhr.status==200||xhr.status==0&&xhr.response){onload(xhr.response);return}onerror()};xhr.onerror=onerror;xhr.send(null)}}setWindowTitle=function(title){document.title=title}}else{}var out=Module["print"]||console.log.bind(console);var err=Module["printErr"]||console.warn.bind(console);for(key in moduleOverrides){if(moduleOverrides.hasOwnProperty(key)){Module[key]=moduleOverrides[key]}}moduleOverrides=null;if(Module["arguments"])arguments_=Module["arguments"];if(Module["thisProgram"])thisProgram=Module["thisProgram"];if(Module["quit"])quit_=Module["quit"];var STACK_ALIGN=16;function getNativeTypeSize(type){switch(type){case"i1":case"i8":return 1;case"i16":return 2;case"i32":return 4;case"i64":return 8;case"float":return 4;case"double":return 8;default:{if(type[type.length-1]==="*"){return 4}else if(type[0]==="i"){var bits=Number(type.substr(1));assert(bits%8===0,"getNativeTypeSize invalid bits "+bits+", type "+type);return bits/8}else{return 0}}}}function warnOnce(text){if(!warnOnce.shown)warnOnce.shown={};if(!warnOnce.shown[text]){warnOnce.shown[text]=1;err(text)}}function convertJsFunctionToWasm(func,sig){if(typeof WebAssembly.Function==="function"){var typeNames={"i":"i32","j":"i64","f":"f32","d":"f64"};var type={parameters:[],results:sig[0]=="v"?[]:[typeNames[sig[0]]]};for(var i=1;i>0]=value;break;case"i8":HEAP8[ptr>>0]=value;break;case"i16":HEAP16[ptr>>1]=value;break;case"i32":HEAP32[ptr>>2]=value;break;case"i64":tempI64=[value>>>0,(tempDouble=value,+Math.abs(tempDouble)>=1?tempDouble>0?(Math.min(+Math.floor(tempDouble/4294967296),4294967295)|0)>>>0:~~+Math.ceil((tempDouble-+(~~tempDouble>>>0))/4294967296)>>>0:0)],HEAP32[ptr>>2]=tempI64[0],HEAP32[ptr+4>>2]=tempI64[1];break;case"float":HEAPF32[ptr>>2]=value;break;case"double":HEAPF64[ptr>>3]=value;break;default:abort("invalid type for setValue: "+type)}}function getValue(ptr,type,noSafe){type=type||"i8";if(type.charAt(type.length-1)==="*")type="i32";switch(type){case"i1":return HEAP8[ptr>>0];case"i8":return HEAP8[ptr>>0];case"i16":return 
HEAP16[ptr>>1];case"i32":return HEAP32[ptr>>2];case"i64":return HEAP32[ptr>>2];case"float":return HEAPF32[ptr>>2];case"double":return HEAPF64[ptr>>3];default:abort("invalid type for getValue: "+type)}return null}var wasmMemory;var ABORT=false;var EXITSTATUS;function assert(condition,text){if(!condition){abort("Assertion failed: "+text)}}function getCFunc(ident){var func=Module["_"+ident];assert(func,"Cannot call unknown function "+ident+", make sure it is exported");return func}function ccall(ident,returnType,argTypes,args,opts){var toC={"string":function(str){var ret=0;if(str!==null&&str!==undefined&&str!==0){var len=(str.length<<2)+1;ret=stackAlloc(len);stringToUTF8(str,ret,len)}return ret},"array":function(arr){var ret=stackAlloc(arr.length);writeArrayToMemory(arr,ret);return ret}};function convertReturnValue(ret){if(returnType==="string")return UTF8ToString(ret);if(returnType==="boolean")return Boolean(ret);return ret}var func=getCFunc(ident);var cArgs=[];var stack=0;if(args){for(var i=0;i=endIdx))++endPtr;if(endPtr-idx>16&&heap.subarray&&UTF8Decoder){return UTF8Decoder.decode(heap.subarray(idx,endPtr))}else{var str="";while(idx>10,56320|ch&1023)}}}return str}function UTF8ToString(ptr,maxBytesToRead){return ptr?UTF8ArrayToString(HEAPU8,ptr,maxBytesToRead):""}function stringToUTF8Array(str,heap,outIdx,maxBytesToWrite){if(!(maxBytesToWrite>0))return 0;var startIdx=outIdx;var endIdx=outIdx+maxBytesToWrite-1;for(var i=0;i=55296&&u<=57343){var u1=str.charCodeAt(++i);u=65536+((u&1023)<<10)|u1&1023}if(u<=127){if(outIdx>=endIdx)break;heap[outIdx++]=u}else if(u<=2047){if(outIdx+1>=endIdx)break;heap[outIdx++]=192|u>>6;heap[outIdx++]=128|u&63}else if(u<=65535){if(outIdx+2>=endIdx)break;heap[outIdx++]=224|u>>12;heap[outIdx++]=128|u>>6&63;heap[outIdx++]=128|u&63}else{if(outIdx+3>=endIdx)break;heap[outIdx++]=240|u>>18;heap[outIdx++]=128|u>>12&63;heap[outIdx++]=128|u>>6&63;heap[outIdx++]=128|u&63}}heap[outIdx]=0;return outIdx-startIdx}function stringToUTF8(str,outPtr,maxBytesToWrite){return stringToUTF8Array(str,HEAPU8,outPtr,maxBytesToWrite)}function lengthBytesUTF8(str){var len=0;for(var i=0;i=55296&&u<=57343)u=65536+((u&1023)<<10)|str.charCodeAt(++i)&1023;if(u<=127)++len;else if(u<=2047)len+=2;else if(u<=65535)len+=3;else len+=4}return len}function AsciiToString(ptr){var str="";while(1){var ch=HEAPU8[ptr++>>0];if(!ch)return str;str+=String.fromCharCode(ch)}}function stringToAscii(str,outPtr){return writeAsciiToMemory(str,outPtr,false)}var UTF16Decoder=typeof TextDecoder!=="undefined"?new TextDecoder("utf-16le"):undefined;function UTF16ToString(ptr,maxBytesToRead){var endPtr=ptr;var idx=endPtr>>1;var maxIdx=idx+maxBytesToRead/2;while(!(idx>=maxIdx)&&HEAPU16[idx])++idx;endPtr=idx<<1;if(endPtr-ptr>32&&UTF16Decoder){return UTF16Decoder.decode(HEAPU8.subarray(ptr,endPtr))}else{var str="";for(var i=0;!(i>=maxBytesToRead/2);++i){var codeUnit=HEAP16[ptr+i*2>>1];if(codeUnit==0)break;str+=String.fromCharCode(codeUnit)}return str}}function stringToUTF16(str,outPtr,maxBytesToWrite){if(maxBytesToWrite===undefined){maxBytesToWrite=2147483647}if(maxBytesToWrite<2)return 0;maxBytesToWrite-=2;var startPtr=outPtr;var numCharsToWrite=maxBytesToWrite>1]=codeUnit;outPtr+=2}HEAP16[outPtr>>1]=0;return outPtr-startPtr}function lengthBytesUTF16(str){return str.length*2}function UTF32ToString(ptr,maxBytesToRead){var i=0;var str="";while(!(i>=maxBytesToRead/4)){var utf32=HEAP32[ptr+i*4>>2];if(utf32==0)break;++i;if(utf32>=65536){var 
ch=utf32-65536;str+=String.fromCharCode(55296|ch>>10,56320|ch&1023)}else{str+=String.fromCharCode(utf32)}}return str}function stringToUTF32(str,outPtr,maxBytesToWrite){if(maxBytesToWrite===undefined){maxBytesToWrite=2147483647}if(maxBytesToWrite<4)return 0;var startPtr=outPtr;var endPtr=startPtr+maxBytesToWrite-4;for(var i=0;i=55296&&codeUnit<=57343){var trailSurrogate=str.charCodeAt(++i);codeUnit=65536+((codeUnit&1023)<<10)|trailSurrogate&1023}HEAP32[outPtr>>2]=codeUnit;outPtr+=4;if(outPtr+4>endPtr)break}HEAP32[outPtr>>2]=0;return outPtr-startPtr}function lengthBytesUTF32(str){var len=0;for(var i=0;i=55296&&codeUnit<=57343)++i;len+=4}return len}function allocateUTF8(str){var size=lengthBytesUTF8(str)+1;var ret=_malloc(size);if(ret)stringToUTF8Array(str,HEAP8,ret,size);return ret}function allocateUTF8OnStack(str){var size=lengthBytesUTF8(str)+1;var ret=stackAlloc(size);stringToUTF8Array(str,HEAP8,ret,size);return ret}function writeStringToMemory(string,buffer,dontAddNull){warnOnce("writeStringToMemory is deprecated and should not be called! Use stringToUTF8() instead!");var lastChar,end;if(dontAddNull){end=buffer+lengthBytesUTF8(string);lastChar=HEAP8[end]}stringToUTF8(string,buffer,Infinity);if(dontAddNull)HEAP8[end]=lastChar}function writeArrayToMemory(array,buffer){HEAP8.set(array,buffer)}function writeAsciiToMemory(str,buffer,dontAddNull){for(var i=0;i>0]=str.charCodeAt(i)}if(!dontAddNull)HEAP8[buffer>>0]=0}function alignUp(x,multiple){if(x%multiple>0){x+=multiple-x%multiple}return x}var HEAP,buffer,HEAP8,HEAPU8,HEAP16,HEAPU16,HEAP32,HEAPU32,HEAPF32,HEAPF64;function updateGlobalBufferAndViews(buf){buffer=buf;Module["HEAP8"]=HEAP8=new Int8Array(buf);Module["HEAP16"]=HEAP16=new Int16Array(buf);Module["HEAP32"]=HEAP32=new Int32Array(buf);Module["HEAPU8"]=HEAPU8=new Uint8Array(buf);Module["HEAPU16"]=HEAPU16=new Uint16Array(buf);Module["HEAPU32"]=HEAPU32=new Uint32Array(buf);Module["HEAPF32"]=HEAPF32=new Float32Array(buf);Module["HEAPF64"]=HEAPF64=new Float64Array(buf)}var TOTAL_STACK=5242880;var INITIAL_MEMORY=Module["INITIAL_MEMORY"]||20971520;if(Module["wasmMemory"]){wasmMemory=Module["wasmMemory"]}else{wasmMemory=new WebAssembly.Memory({"initial":INITIAL_MEMORY/65536,"maximum":2147483648/65536})}if(wasmMemory){buffer=wasmMemory.buffer}INITIAL_MEMORY=buffer.byteLength;updateGlobalBufferAndViews(buffer);var wasmTable=new WebAssembly.Table({"initial":7364,"element":"anyfunc"});var __ATPRERUN__=[];var __ATINIT__=[];var __ATMAIN__=[];var __ATEXIT__=[];var __ATPOSTRUN__=[];var runtimeInitialized=false;var runtimeExited=false;var runtimeKeepaliveCounter=0;function keepRuntimeAlive(){return noExitRuntime||runtimeKeepaliveCounter>0}function preRun(){if(Module["preRun"]){if(typeof Module["preRun"]=="function")Module["preRun"]=[Module["preRun"]];while(Module["preRun"].length){addOnPreRun(Module["preRun"].shift())}}callRuntimeCallbacks(__ATPRERUN__)}function initRuntime(){runtimeInitialized=true;if(!Module["noFSInit"]&&!FS.init.initialized)FS.init();FS.ignorePermissions=false;TTY.init();SOCKFS.root=FS.mount(SOCKFS,{},null);PIPEFS.root=FS.mount(PIPEFS,{},null);callRuntimeCallbacks(__ATINIT__)}function preMain(){callRuntimeCallbacks(__ATMAIN__)}function exitRuntime(){runtimeExited=true}function postRun(){if(Module["postRun"]){if(typeof Module["postRun"]=="function")Module["postRun"]=[Module["postRun"]];while(Module["postRun"].length){addOnPostRun(Module["postRun"].shift())}}callRuntimeCallbacks(__ATPOSTRUN__)}function addOnPreRun(cb){__ATPRERUN__.unshift(cb)}function 
addOnInit(cb){__ATINIT__.unshift(cb)}function addOnPreMain(cb){__ATMAIN__.unshift(cb)}function addOnExit(cb){}function addOnPostRun(cb){__ATPOSTRUN__.unshift(cb)}var runDependencies=0;var runDependencyWatcher=null;var dependenciesFulfilled=null;function getUniqueRunDependency(id){return id}function addRunDependency(id){runDependencies++;if(Module["monitorRunDependencies"]){Module["monitorRunDependencies"](runDependencies)}}function removeRunDependency(id){runDependencies--;if(Module["monitorRunDependencies"]){Module["monitorRunDependencies"](runDependencies)}if(runDependencies==0){if(runDependencyWatcher!==null){clearInterval(runDependencyWatcher);runDependencyWatcher=null}if(dependenciesFulfilled){var callback=dependenciesFulfilled;dependenciesFulfilled=null;callback()}}}Module["preloadedImages"]={};Module["preloadedAudios"]={};Module["preloadedWasm"]={};function abort(what){if(Module["onAbort"]){Module["onAbort"](what)}what+="";err(what);ABORT=true;EXITSTATUS=1;what="abort("+what+"). Build with -s ASSERTIONS=1 for more info.";var e=new WebAssembly.RuntimeError(what);readyPromiseReject(e);throw e}var dataURIPrefix="data:application/octet-stream;base64,";function isDataURI(filename){return filename.startsWith(dataURIPrefix)}function isFileURI(filename){return filename.startsWith("file://")}var wasmBinaryFile;wasmBinaryFile="pyodide.asm.wasm";if(!isDataURI(wasmBinaryFile)){wasmBinaryFile=locateFile(wasmBinaryFile)}function getBinary(file){try{if(file==wasmBinaryFile&&wasmBinary){return new Uint8Array(wasmBinary)}if(readBinary){return readBinary(file)}else{throw"both async and sync fetching of the wasm failed"}}catch(err){abort(err)}}function getBinaryPromise(){if(!wasmBinary&&(ENVIRONMENT_IS_WEB||ENVIRONMENT_IS_WORKER)){if(typeof fetch==="function"&&!isFileURI(wasmBinaryFile)){return fetch(wasmBinaryFile,{credentials:"same-origin"}).then(function(response){if(!response["ok"]){throw"failed to load wasm binary file at '"+wasmBinaryFile+"'"}return response["arrayBuffer"]()}).catch(function(){return getBinary(wasmBinaryFile)})}else{if(readAsync){return new Promise(function(resolve,reject){readAsync(wasmBinaryFile,function(response){resolve(new Uint8Array(response))},reject)})}}}return Promise.resolve().then(function(){return getBinary(wasmBinaryFile)})}function createWasm(){var info={"env":asmLibraryArg,"wasi_snapshot_preview1":asmLibraryArg,"GOT.mem":new Proxy(asmLibraryArg,GOTHandler),"GOT.func":new Proxy(asmLibraryArg,GOTHandler)};function receiveInstance(instance,module){var exports=instance.exports;exports=relocateExports(exports,1024);Module["asm"]=exports;var metadata=getDylinkMetadata(module);if(metadata.neededDynlibs){dynamicLibraries=metadata.neededDynlibs.concat(dynamicLibraries)}mergeLibSymbols(exports,"main");addOnInit(Module["asm"]["__wasm_call_ctors"]);removeRunDependency("wasm-instantiate")}addRunDependency("wasm-instantiate");function receiveInstantiationResult(result){receiveInstance(result["instance"],result["module"])}function instantiateArrayBuffer(receiver){return getBinaryPromise().then(function(binary){return WebAssembly.instantiate(binary,info)}).then(function(instance){return instance}).then(receiver,function(reason){err("failed to asynchronously prepare wasm: "+reason);abort(reason)})}function instantiateAsync(){if(!wasmBinary&&typeof WebAssembly.instantiateStreaming==="function"&&!isDataURI(wasmBinaryFile)&&!isFileURI(wasmBinaryFile)&&typeof fetch==="function"){return fetch(wasmBinaryFile,{credentials:"same-origin"}).then(function(response){var 
result=WebAssembly.instantiateStreaming(response,info);return result.then(receiveInstantiationResult,function(reason){err("wasm streaming compile failed: "+reason);err("falling back to ArrayBuffer instantiation");return instantiateArrayBuffer(receiveInstantiationResult)})})}else{return instantiateArrayBuffer(receiveInstantiationResult)}}if(Module["instantiateWasm"]){try{var exports=Module["instantiateWasm"](info,receiveInstance);return exports}catch(e){err("Module.instantiateWasm callback failed with error: "+e);return false}}instantiateAsync().catch(readyPromiseReject);return{}}var tempDouble;var tempI64;var ASM_CONSTS={3106167:function(){throw new Error("intentionally triggered fatal error!")},3106224:function($0){Hiwire.get_value($0)()},3106247:function(){Module.UTF8ToString=UTF8ToString;Module.wasmTable=wasmTable},3106317:function(){throw new Error("Fatal pyodide error")},3106356:function(){throw new Error("Fatal pyodide error")},3106395:function(){throw new Error("Fatal pyodide error")},3106434:function(){throw new Error("Fatal pyodide error")},3106473:function(){throw new Error("Fatal pyodide error")},3106512:function(){throw new Error("Fatal pyodide error")},3106551:function(){throw new Error("Fatal pyodide error")},3106590:function(){throw new Error("Fatal pyodide error")},3106629:function(){throw new Error("Fatal pyodide error")},3106668:function(){throw new Error("Fatal pyodide error")},3106707:function(){throw new Error("Fatal pyodide error")},3106746:function(){throw new Error("Fatal pyodide error")},3106785:function($0){API._pyodide=Hiwire.pop_value($0)},3106826:function($0){if(!$0){AL.alcErr=40964;return 1}},3106874:function($0){err("bad name in alcGetProcAddress: "+UTF8ToString($0))},3106937:function($0){if(!AL.currentCtx){err("alGetProcAddress() called without a valid context");return 1}if(!$0){AL.currentCtx.err=40963;return 1}},3107085:function($0){err("bad name in alGetProcAddress: "+UTF8ToString($0))}};function JsArray_Check(idobj){let obj=Hiwire.get_value(idobj);if(Array.isArray(obj)){return true}let typeTag=Object.prototype.toString.call(obj);if(typeTag==="[object HTMLCollection]"||typeTag==="[object NodeList]"){return true}if(ArrayBuffer.isView(obj)&&obj.constructor.name!=="DataView"){return true}return false}function JsArray_Delete(idobj,idx){"use strict";try{let obj=Hiwire.get_value(idobj);if(idx<0||idx>=obj.length){return-1}obj.splice(idx,1)}catch(e){Module.handle_js_error(e);return-1}return 0}function JsArray_Get(idobj,idx){"use strict";try{let obj=Hiwire.get_value(idobj);let result=obj[idx];if(result===undefined&&!(idx in obj)){return 0}return Hiwire.new_value(result)}catch(e){Module.handle_js_error(e);return 0}throw new Error("Assertion error: control reached end of function without return")}function JsArray_New(){"use strict";try{return Hiwire.new_value([])}catch(e){Module.handle_js_error(e);return 0}throw new Error("Assertion error: control reached end of function without return")}function JsArray_Push(idarr,idval){"use strict";try{Hiwire.get_value(idarr).push(Hiwire.get_value(idval))}catch(e){Module.handle_js_error(e);return-1}return 0}function JsArray_Push_unchecked(idarr,idval){Hiwire.get_value(idarr).push(Hiwire.get_value(idval))}function JsArray_Set(idobj,idx,idval){"use strict";try{Hiwire.get_value(idobj)[idx]=Hiwire.get_value(idval)}catch(e){Module.handle_js_error(e);return-1}return 0}function JsBuffer_DecodeString_js(jsbuffer_id,encoding){"use strict";try{let buffer=Hiwire.get_value(jsbuffer_id);let 
encoding_js;if(encoding){encoding_js=UTF8ToString(encoding)}let decoder=new TextDecoder(encoding_js,{fatal:!!1});let res;try{res=decoder.decode(buffer)}catch(e){if(e instanceof TypeError){return 0}throw e}return Hiwire.new_value(res)}catch(e){Module.handle_js_error(e);return 0}throw new Error("Assertion error: control reached end of function without return")}function JsMap_New(){"use strict";try{return Hiwire.new_value(new Map)}catch(e){Module.handle_js_error(e);return 0}throw new Error("Assertion error: control reached end of function without return")}function JsMap_Set(mapid,keyid,valueid){"use strict";try{let map=Hiwire.get_value(mapid);let key=Hiwire.get_value(keyid);let value=Hiwire.get_value(valueid);map.set(key,value)}catch(e){Module.handle_js_error(e);return-1}return 0}function JsObject_DeleteString(idobj,ptrkey){"use strict";try{let jsobj=Hiwire.get_value(idobj);let jskey=UTF8ToString(ptrkey);delete jsobj[jskey]}catch(e){Module.handle_js_error(e);return-1}return 0}function JsObject_Dir(idobj){"use strict";try{let jsobj=Hiwire.get_value(idobj);let result=[];do{result.push(...Object.getOwnPropertyNames(jsobj).filter(s=>{let c=s.charCodeAt(0);return c<48||c>57}))}while(jsobj=Object.getPrototypeOf(jsobj));return Hiwire.new_value(result)}catch(e){Module.handle_js_error(e);return 0}throw new Error("Assertion error: control reached end of function without return")}function JsObject_Entries(idobj){"use strict";try{let jsobj=Hiwire.get_value(idobj);return Hiwire.new_value(Object.entries(jsobj))}catch(e){Module.handle_js_error(e);return 0}throw new Error("Assertion error: control reached end of function without return")}function JsObject_GetString(idobj,ptrkey){"use strict";try{let jsobj=Hiwire.get_value(idobj);let jskey=UTF8ToString(ptrkey);let result=jsobj[jskey];if(result===undefined&&!(jskey in jsobj)){return 0}return Hiwire.new_value(result)}catch(e){Module.handle_js_error(e);return 0}throw new Error("Assertion error: control reached end of function without return")}function JsObject_Keys(idobj){"use strict";try{let jsobj=Hiwire.get_value(idobj);return Hiwire.new_value(Object.keys(jsobj))}catch(e){Module.handle_js_error(e);return 0}throw new Error("Assertion error: control reached end of function without return")}function JsObject_New(){"use strict";try{return Hiwire.new_value({})}catch(e){Module.handle_js_error(e);return 0}throw new Error("Assertion error: control reached end of function without return")}function JsObject_SetString(idobj,ptrkey,idval){"use strict";try{let jsobj=Hiwire.get_value(idobj);let jskey=UTF8ToString(ptrkey);let jsval=Hiwire.get_value(idval);jsobj[jskey]=jsval}catch(e){Module.handle_js_error(e);return-1}return 0}function JsObject_Values(idobj){"use strict";try{let jsobj=Hiwire.get_value(idobj);return Hiwire.new_value(Object.values(jsobj))}catch(e){Module.handle_js_error(e);return 0}throw new Error("Assertion error: control reached end of function without return")}function JsProxy_subscript_js(idobj,idkey){"use strict";try{let obj=Hiwire.get_value(idobj);let key=Hiwire.get_value(idkey);let result=obj.get(key);if(result===undefined){if(obj.has&&typeof obj.has==="function"&&!obj.has(key)){return 0}}return Hiwire.new_value(result)}catch(e){Module.handle_js_error(e);return 0}throw new Error("Assertion error: control reached end of function without return")}function JsSet_Add(mapid,keyid){"use strict";try{let set=Hiwire.get_value(mapid);let key=Hiwire.get_value(keyid);set.add(key)}catch(e){Module.handle_js_error(e);return-1}return 0}function JsSet_New(){"use 
strict";try{return Hiwire.new_value(new Set)}catch(e){Module.handle_js_error(e);return 0}throw new Error("Assertion error: control reached end of function without return")}function JsString_InternFromCString(str){"use strict";try{let jsstring=UTF8ToString(str);return Hiwire.intern_object(jsstring)}catch(e){Module.handle_js_error(e);return 0}throw new Error("Assertion error: control reached end of function without return")}function _JsArray_PostProcess_helper(jscontext,array){"use strict";try{return Hiwire.new_value(Hiwire.get_value(jscontext).dict_converter(Hiwire.get_value(array)))}catch(e){Module.handle_js_error(e);return 0}throw new Error("Assertion error: control reached end of function without return")}function _JsArray_PushEntry_helper(array,key,value){"use strict";try{Hiwire.get_value(array).push([Hiwire.get_value(key),Hiwire.get_value(value)])}catch(e){Module.handle_js_error(e);return-1}return 0}function _Py_CheckEmscriptenSignals_Helper(){if(!Module.Py_EmscriptenSignalBuffer){return 0}try{let result=Module.Py_EmscriptenSignalBuffer[0];Module.Py_EmscriptenSignalBuffer[0]=0;return result}catch(e){return 0}}function _python2js_buffer_inner(buf,itemsize,ndim,format,shape,strides,suboffsets){"use strict";try{let converter=Module.get_converter(format,itemsize);let result=Module._python2js_buffer_recursive(buf,0,{ndim:ndim,format:format,itemsize:itemsize,shape:shape,strides:strides,suboffsets:suboffsets,converter:converter});return Hiwire.new_value(result)}catch(e){Module.handle_js_error(e);return 0}throw new Error("Assertion error: control reached end of function without return")}function array_to_js(array,len){"use strict";try{return Hiwire.new_value(Array.from(HEAP32.subarray(array/4,array/4+len)))}catch(e){Module.handle_js_error(e);return 0}throw new Error("Assertion error: control reached end of function without return")}function console_error(msg){let jsmsg=UTF8ToString(msg);console.error(jsmsg)}function console_error_obj(obj){console.error(Hiwire.get_value(obj))}function create_once_callable(obj){"use strict";try{_Py_IncRef(obj);let alreadyCalled=!!0;function wrapper(...args){if(alreadyCalled){throw new Error("OnceProxy can only be called once")}try{return Module.callPyObject(obj,...args)}finally{wrapper.destroy()}}wrapper.destroy=function(){if(alreadyCalled){throw new Error("OnceProxy has already been destroyed")}alreadyCalled=!!1;Module.finalizationRegistry.unregister(wrapper);_Py_DecRef(obj)};Module.finalizationRegistry.register(wrapper,[obj,undefined],wrapper);return Hiwire.new_value(wrapper)}catch(e){Module.handle_js_error(e);return 0}throw new Error("Assertion error: control reached end of function without return")}function create_promise_handles(handle_result,handle_exception,done_callback_id){"use strict";try{if(handle_result){_Py_IncRef(handle_result)}if(handle_exception){_Py_IncRef(handle_exception)}let done_callback=x=>{};if(done_callback_id){done_callback=Hiwire.get_value(done_callback_id)}let used=!!0;function checkUsed(){if(used){throw new Error("One of the promise handles has already been called.")}}function destroy(){checkUsed();used=!!1;if(handle_result){_Py_DecRef(handle_result)}if(handle_exception){_Py_DecRef(handle_exception)}}function onFulfilled(res){checkUsed();try{if(handle_result){return Module.callPyObject(handle_result,res)}}finally{done_callback(res);destroy()}}function onRejected(err){checkUsed();try{if(handle_exception){return 
Module.callPyObject(handle_exception,err)}}finally{done_callback(undefined);destroy()}}onFulfilled.destroy=destroy;onRejected.destroy=destroy;return Hiwire.new_value([onFulfilled,onRejected])}catch(e){Module.handle_js_error(e);return 0}throw new Error("Assertion error: control reached end of function without return")}function destroy_proxies(proxies_id,msg_ptr){let msg=undefined;if(msg_ptr){msg=UTF8ToString(msg_ptr)}let proxies=Hiwire.get_value(proxies_id);for(let px of proxies){Module.pyproxy_destroy(px,msg)}}function destroy_proxies_js(proxies_id){"use strict";try{for(let proxy of Hiwire.get_value(proxies_id)){proxy.destroy()}}catch(e){Module.handle_js_error(e);return-1}return 0}function destroy_proxy(proxy_id,msg_ptr){let msg=undefined;if(msg_ptr){msg=UTF8ToString(msg_ptr)}Module.pyproxy_destroy(Module.hiwire.get_value(proxy_id),msg)}function em_call_init_function(f){return Module.wasmTable.get(f)()}function fail_test(){API.fail_test=true}function ffi_call(cif,fn,rvalue,avalue){var abi=HEAPU32[(cif>>2)+0];var nargs=HEAPU32[(cif>>2)+1];var nfixedargs=HEAPU32[(cif>>2)+6];var arg_types_ptr=HEAPU32[(cif>>2)+2];var rtype_unboxed=unbox_small_structs(HEAPU32[(cif>>2)+3]);var rtype_ptr=rtype_unboxed[0];var rtype_id=rtype_unboxed[1];var orig_stack_ptr=stackSave();var cur_stack_ptr=orig_stack_ptr;var args=[];var ret_by_arg=false;if(rtype_id===15){throw new Error("complex ret marshalling nyi")}if(rtype_id<0||rtype_id>15){throw new Error("Unexpected rtype "+rtype_id)}if(rtype_id===4||rtype_id===13){args.push(rvalue);ret_by_arg=true}for(var i=0;i>2)+i];var arg_unboxed=unbox_small_structs(HEAPU32[(arg_types_ptr>>2)+i]);var arg_type_ptr=arg_unboxed[0];var arg_type_id=arg_unboxed[1];switch(arg_type_id){case 1:case 10:case 9:case 14:args.push(HEAPU32[(arg_ptr>>2)+0]);break;case 2:args.push(HEAPF32[(arg_ptr>>2)+0]);break;case 3:args.push(HEAPF64[(arg_ptr>>3)+0]);break;case 5:args.push(HEAPU8[arg_ptr+0]);break;case 6:args.push(HEAP8[arg_ptr+0]);break;case 7:args.push(HEAPU16[(arg_ptr>>1)+0]);break;case 8:args.push(HEAP16[(arg_ptr>>1)+0]);break;case 11:case 12:args.push(BigInt(HEAPU32[(arg_ptr>>2)+0*2])|BigInt(HEAPU32[(arg_ptr>>2)+0*2+1])<>2)+0*2])|BigInt(HEAPU32[(arg_ptr>>2)+0*2+1])<>2)+1*2])|BigInt(HEAPU32[(arg_ptr>>2)+1*2+1])<>2)+0];var align=HEAPU16[(arg_type_ptr+4>>1)+0];cur_stack_ptr-=size,cur_stack_ptr&=~(align-1);HEAP8.subarray(cur_stack_ptr,cur_stack_ptr+size).set(HEAP8.subarray(arg_ptr,arg_ptr+size));args.push(cur_stack_ptr);break;case 15:throw new Error("complex marshalling nyi");default:throw new Error("Unexpected type "+arg_type_id)}}if(nfixedargs!=nargs){var struct_arg_info=[];for(var i=nargs-1;i>=nfixedargs;i--){var arg_ptr=HEAPU32[(avalue>>2)+i];var arg_unboxed=unbox_small_structs(HEAPU32[(arg_types_ptr>>2)+i]);var arg_type_ptr=arg_unboxed[0];var arg_type_id=arg_unboxed[1];switch(arg_type_id){case 5:case 6:cur_stack_ptr-=1,cur_stack_ptr&=~(1-1);HEAPU8[cur_stack_ptr+0]=HEAPU8[arg_ptr+0];break;case 7:case 8:cur_stack_ptr-=2,cur_stack_ptr&=~(2-1);HEAPU16[(cur_stack_ptr>>1)+0]=HEAPU16[(arg_ptr>>1)+0];break;case 1:case 9:case 10:case 14:case 2:cur_stack_ptr-=4,cur_stack_ptr&=~(4-1);HEAPU32[(cur_stack_ptr>>2)+0]=HEAPU32[(arg_ptr>>2)+0];break;case 3:case 11:case 12:cur_stack_ptr-=8,cur_stack_ptr&=~(8-1);HEAPU32[(cur_stack_ptr>>2)+0]=HEAPU32[(arg_ptr>>2)+0];HEAPU32[(cur_stack_ptr>>2)+1]=HEAPU32[(arg_ptr>>2)+1];break;case 
4:cur_stack_ptr-=16,cur_stack_ptr&=~(8-1);HEAPU32[(cur_stack_ptr>>2)+0]=HEAPU32[(arg_ptr>>2)+0];HEAPU32[(cur_stack_ptr>>2)+1]=HEAPU32[(arg_ptr>>2)+1];HEAPU32[(cur_stack_ptr>>2)+2]=HEAPU32[(arg_ptr>>2)+1];HEAPU32[(cur_stack_ptr>>2)+3]=HEAPU32[(arg_ptr>>2)+1];break;case 13:cur_stack_ptr-=4,cur_stack_ptr&=~(4-1);struct_arg_info.push([cur_stack_ptr,arg_ptr,HEAPU32[(arg_type_ptr>>2)+0],HEAPU16[(arg_type_ptr+4>>1)+0]]);break;case 15:throw new Error("complex arg marshalling nyi");default:throw new Error("Unexpected argtype "+arg_type_id)}}args.push(cur_stack_ptr);for(var i=0;i>2)+0]=cur_stack_ptr}}cur_stack_ptr-=0,cur_stack_ptr&=~(8-1);stackRestore(cur_stack_ptr);var result=wasmTable.get(fn).apply(null,args);stackRestore(orig_stack_ptr);if(ret_by_arg){return}switch(rtype_id){case 0:break;case 1:case 9:case 10:case 14:HEAPU32[(rvalue>>2)+0]=result;break;case 2:HEAPF32[(rvalue>>2)+0]=result;break;case 3:HEAPF64[(rvalue>>3)+0]=result;break;case 5:case 6:HEAPU8[rvalue+0]=result;break;case 7:case 8:HEAPU16[(rvalue>>1)+0]=result;break;case 11:case 12:HEAPU32[(rvalue>>2)+0*2]=Number(result&BigInt(4294967295))|0,HEAPU32[(rvalue>>2)+0*2+1]=Number(result>>BigInt(32))|0;break;case 15:throw new Error("complex ret marshalling nyi");default:throw new Error("Unexpected rtype "+rtype_id)}}function ffi_closure_alloc_helper(size,code){var closure=_malloc(size);var index=getEmptyTableSlot();HEAPU32[(code>>2)+0]=index;HEAPU32[(closure>>2)+0]=index;return closure}function ffi_closure_free_helper(closure){var index=HEAPU32[(closure>>2)+0];freeTableIndexes.push(index);_free(closure)}function ffi_prep_closure_loc_helper(closure,cif,fun,user_data,codeloc){var abi=HEAPU32[(cif>>2)+0];var nargs=HEAPU32[(cif>>2)+1];var nfixedargs=HEAPU32[(cif>>2)+6];var arg_types_ptr=HEAPU32[(cif>>2)+2];var rtype_unboxed=unbox_small_structs(HEAPU32[(cif>>2)+3]);var rtype_ptr=rtype_unboxed[0];var rtype_id=rtype_unboxed[1];var sig;var ret_by_arg=false;switch(rtype_id){case 0:sig="v";break;case 13:case 4:sig="vi";ret_by_arg=true;break;case 1:case 5:case 6:case 7:case 8:case 9:case 10:case 14:sig="i";break;case 2:sig="f";break;case 3:sig="d";break;case 11:case 12:sig="j";break;case 15:throw new Error("complex ret marshalling nyi");default:throw new Error("Unexpected rtype "+rtype_id)}var unboxed_arg_type_id_list=[];var unboxed_arg_type_info_list=[];for(var i=0;i>2)+i]);var arg_type_ptr=arg_unboxed[0];var arg_type_id=arg_unboxed[1];unboxed_arg_type_id_list.push(arg_type_id);unboxed_arg_type_info_list.push([HEAPU32[(arg_type_ptr>>2)+0],HEAPU16[(arg_type_ptr+4>>1)+0]])}for(var i=0;i>2)+carg_idx]=cur_ptr;HEAPU8[cur_ptr+0]=cur_arg;break;case 7:case 8:cur_ptr-=2,cur_ptr&=~(4-1);HEAPU32[(args_ptr>>2)+carg_idx]=cur_ptr;HEAPU16[(cur_ptr>>1)+0]=cur_arg;break;case 1:case 9:case 10:case 14:cur_ptr-=4,cur_ptr&=~(4-1);HEAPU32[(args_ptr>>2)+carg_idx]=cur_ptr;HEAPU32[(cur_ptr>>2)+0]=cur_arg;break;case 13:cur_ptr-=arg_size,cur_ptr&=~(arg_align-1);HEAP8.subarray(cur_ptr,cur_ptr+arg_size).set(HEAP8.subarray(cur_arg,cur_arg+arg_size));HEAPU32[(args_ptr>>2)+carg_idx]=cur_ptr;break;case 2:cur_ptr-=4,cur_ptr&=~(4-1);HEAPU32[(args_ptr>>2)+carg_idx]=cur_ptr;HEAPF32[(cur_ptr>>2)+0]=cur_arg;break;case 3:cur_ptr-=8,cur_ptr&=~(8-1);HEAPU32[(args_ptr>>2)+carg_idx]=cur_ptr;HEAPF64[(cur_ptr>>3)+0]=cur_arg;break;case 11:case 12:cur_ptr-=8,cur_ptr&=~(8-1);HEAPU32[(args_ptr>>2)+carg_idx]=cur_ptr;HEAPU32[(cur_ptr>>2)+0*2]=Number(cur_arg&BigInt(4294967295))|0,HEAPU32[(cur_ptr>>2)+0*2+1]=Number(cur_arg>>BigInt(32))|0;break;case 
4:cur_ptr-=16,cur_ptr&=~(8-1);HEAPU32[(args_ptr>>2)+carg_idx]=cur_ptr;HEAPU32[(cur_ptr>>2)+0*2]=Number(cur_arg&BigInt(4294967295))|0,HEAPU32[(cur_ptr>>2)+0*2+1]=Number(cur_arg>>BigInt(32))|0;cur_arg=args[jsarg_idx++];HEAPU32[(cur_ptr>>2)+1*2]=Number(cur_arg&BigInt(4294967295))|0,HEAPU32[(cur_ptr>>2)+1*2+1]=Number(cur_arg>>BigInt(32))|0;break}}var varargs=args[args.length-1];for(var carg_idx=nfixedargs;carg_idx>2)+0];cur_ptr-=arg_size,cur_ptr&=~(arg_align-1);HEAP8.subarray(cur_ptr,cur_ptr+arg_size).set(HEAP8.subarray(struct_ptr,struct_ptr+arg_size));HEAPU32[(args_ptr>>2)+carg_idx]=cur_ptr}else{HEAPU32[(args_ptr>>2)+carg_idx]=varargs}varargs+=4}cur_ptr-=0,cur_ptr&=~(8-1);stackRestore(cur_ptr);wasmTable.get(HEAPU32[(closure>>2)+2]).apply(null,[HEAPU32[(closure>>2)+1],ret_ptr,args_ptr,HEAPU32[(closure>>2)+3]]);stackRestore(orig_stack_ptr);if(!ret_by_arg){switch(sig[0]){case"i":return HEAPU32[(ret_ptr>>2)+0];case"j":return BigInt(HEAPU32[(ret_ptr>>2)+0*2])|BigInt(HEAPU32[(ret_ptr>>2)+0*2+1])<>3)+0];case"f":return HEAPF32[(ret_ptr>>2)+0]}}}try{var wasm_trampoline=convertJsFunctionToWasm(trampoline,sig)}catch(e){return FFI_BAD_TYPEDEF}wasmTable.set(codeloc,wasm_trampoline);HEAPU32[(closure>>2)+1]=cif;HEAPU32[(closure>>2)+2]=fun;HEAPU32[(closure>>2)+3]=user_data;return 0}function get_async_js_call_done_callback(proxies_id){"use strict";try{let proxies=Hiwire.get_value(proxies_id);return Hiwire.new_value(function(result){let msg="This borrowed proxy was automatically destroyed "+"at the end of an asynchronous function call. Try "+"using create_proxy or create_once_callable.";for(let px of proxies){Module.pyproxy_destroy(px,msg)}if(API.isPyProxy(result)){Module.pyproxy_destroy(result,msg)}})}catch(e){Module.handle_js_error(e);return 0}throw new Error("Assertion error: control reached end of function without return")}function getter_call_trampoline(get,obj,closure){return wasmTable.get(get)(obj,closure)}function hiwire_CallMethod(idobj,name,idargs){"use strict";try{let jsobj=Hiwire.get_value(idobj);let jsname=Hiwire.get_value(name);let jsargs=Hiwire.get_value(idargs);return Hiwire.new_value(jsobj[jsname](...jsargs))}catch(e){Module.handle_js_error(e);return 0}throw new Error("Assertion error: control reached end of function without return")}function hiwire_CallMethodString(idobj,name,idargs){"use strict";try{let jsobj=Hiwire.get_value(idobj);let jsname=UTF8ToString(name);let jsargs=Hiwire.get_value(idargs);return Hiwire.new_value(jsobj[jsname](...jsargs))}catch(e){Module.handle_js_error(e);return 0}throw new Error("Assertion error: control reached end of function without return")}function hiwire_CallMethod_OneArg(idobj,name,idarg){"use strict";try{let jsobj=Hiwire.get_value(idobj);let jsname=Hiwire.get_value(name);let jsarg=Hiwire.get_value(idarg);return Hiwire.new_value(jsobj[jsname](jsarg))}catch(e){Module.handle_js_error(e);return 0}throw new Error("Assertion error: control reached end of function without return")}function hiwire_HasMethod(obj_id,name){let obj=Hiwire.get_value(obj_id);return obj&&typeof obj[Hiwire.get_value(name)]==="function"}function hiwire_assign_from_ptr(idobj,ptr){"use strict";try{let jsobj=Hiwire.get_value(idobj);Module.typedArrayAsUint8Array(jsobj).set(Module.HEAPU8.subarray(ptr,ptr+jsobj.byteLength))}catch(e){Module.handle_js_error(e);return-1}return 0}function hiwire_assign_to_ptr(idobj,ptr){"use strict";try{let jsobj=Hiwire.get_value(idobj);Module.HEAPU8.set(Module.typedArrayAsUint8Array(jsobj),ptr)}catch(e){Module.handle_js_error(e);return-1}return 0}function 
hiwire_call(idfunc,idargs){"use strict";try{let jsfunc=Hiwire.get_value(idfunc);let jsargs=Hiwire.get_value(idargs);return Hiwire.new_value(jsfunc(...jsargs))}catch(e){Module.handle_js_error(e);return 0}throw new Error("Assertion error: control reached end of function without return")}function hiwire_call_OneArg(idfunc,idarg){"use strict";try{let jsfunc=Hiwire.get_value(idfunc);let jsarg=Hiwire.get_value(idarg);return Hiwire.new_value(jsfunc(jsarg))}catch(e){Module.handle_js_error(e);return 0}throw new Error("Assertion error: control reached end of function without return")}function hiwire_call_bound(idfunc,idthis,idargs){"use strict";try{let func=Hiwire.get_value(idfunc);let this_;if(idthis===0){this_=null}else{this_=Hiwire.get_value(idthis)}let args=Hiwire.get_value(idargs);return Hiwire.new_value(func.apply(this_,args))}catch(e){Module.handle_js_error(e);return 0}throw new Error("Assertion error: control reached end of function without return")}function hiwire_construct(idobj,idargs){"use strict";try{let jsobj=Hiwire.get_value(idobj);let jsargs=Hiwire.get_value(idargs);return Hiwire.new_value(Reflect.construct(jsobj,jsargs))}catch(e){Module.handle_js_error(e);return 0}throw new Error("Assertion error: control reached end of function without return")}function hiwire_constructor_name(idobj){"use strict";try{return stringToNewUTF8(Hiwire.get_value(idobj).constructor.name)}catch(e){Module.handle_js_error(e);return 0}throw new Error("Assertion error: control reached end of function without return")}function hiwire_decref(idval){Hiwire.decref(idval)}function hiwire_double(val){"use strict";try{return Hiwire.new_value(val)}catch(e){Module.handle_js_error(e);return 0}throw new Error("Assertion error: control reached end of function without return")}function hiwire_equal(ida,idb){return!!(Hiwire.get_value(ida)===Hiwire.get_value(idb))}function hiwire_get_bool(idobj){let val=Hiwire.get_value(idobj);if(!val){return false}if(val.size===0){return false}if(Array.isArray(val)&&val.length===0){return false}return true}function hiwire_get_buffer_info(idobj,byteLength_ptr,format_ptr,size_ptr,checked_ptr){let jsobj=Hiwire.get_value(idobj);let byteLength=jsobj.byteLength;let[format_utf8,size,checked]=Module.get_buffer_datatype(jsobj);HEAPU32[(byteLength_ptr>>2)+0]=byteLength;HEAPU32[(format_ptr>>2)+0]=format_utf8;HEAPU32[(size_ptr>>2)+0]=size;HEAPU8[checked_ptr+0]=checked}function hiwire_get_iterator(idobj){"use strict";try{let jsobj=Hiwire.get_value(idobj);return Hiwire.new_value(jsobj[Symbol.iterator]())}catch(e){Module.handle_js_error(e);return 0}throw new Error("Assertion error: control reached end of function without return")}function hiwire_get_length(idobj){"use strict";try{let val=Hiwire.get_value(idobj);if(typeof val.size==="number"){return val.size}if(typeof val.length==="number"){return val.length}return-1}catch(e){Module.handle_js_error(e);return-1}return 0}function hiwire_greater_than(ida,idb){return!!(Hiwire.get_value(ida)>Hiwire.get_value(idb))}function hiwire_greater_than_equal(ida,idb){return!!(Hiwire.get_value(ida)>=Hiwire.get_value(idb))}function hiwire_has_length(idobj){let val=Hiwire.get_value(idobj);return typeof val.size==="number"||typeof val.length==="number"&&typeof val!=="function"}function hiwire_incref(idval){if(idval&1){Hiwire.incref(idval)}return idval}function hiwire_init(){"use strict";try{let _hiwire={objects:new Map,counter:new 
Uint32Array([1])};Hiwire.UNDEFINED=HEAPU8[_Js_undefined+0];Hiwire.JSNULL=HEAPU8[_Js_null+0];Hiwire.TRUE=HEAPU8[_Js_true+0];Hiwire.FALSE=HEAPU8[_Js_false+0];_hiwire.objects.set(Hiwire.UNDEFINED,[undefined,-1]);_hiwire.objects.set(Hiwire.JSNULL,[null,-1]);_hiwire.objects.set(Hiwire.TRUE,[!!1,-1]);_hiwire.objects.set(Hiwire.FALSE,[!!0,-1]);let hiwire_next_permanent=Hiwire.FALSE+2;Hiwire.new_value=function(jsval){while(_hiwire.objects.has(_hiwire.counter[0])){_hiwire.counter[0]+=2}let idval=_hiwire.counter[0];_hiwire.objects.set(idval,[jsval,1]);_hiwire.counter[0]+=2;return idval};Hiwire.intern_object=function(obj){let id=hiwire_next_permanent;hiwire_next_permanent+=2;_hiwire.objects.set(id,[obj,-1]);return id};Hiwire.num_keys=function(){return Array.from(_hiwire.objects.keys()).filter(x=>x%2).length};Hiwire.get_value=function(idval){if(!idval){API.fail_test=!!1;if(_PyErr_Occurred()){let exc=_wrap_exception();let e=Hiwire.pop_value(exc);console.error(`Internal error: Argument '${idval}' to hiwire.get_value is falsy. `+"This was probably because the Python error indicator was set when get_value was called. "+"The Python error that caused this was:",e);throw e}else{console.error(`Internal error: Argument '${idval}' to hiwire.get_value is falsy`+" (but error indicator is not set).");throw new Error(`Internal error: Argument '${idval}' to hiwire.get_value is falsy`+" (but error indicator is not set).")}}if(!_hiwire.objects.has(idval)){console.error(`Undefined id ${idval}`);throw new Error(`Undefined id ${idval}`)}return _hiwire.objects.get(idval)[0]};Hiwire.decref=function(idval){if((idval&1)===0){return}let new_refcnt=--_hiwire.objects.get(idval)[1];if(new_refcnt===0){_hiwire.objects.delete(idval)}};Hiwire.incref=function(idval){_hiwire.objects.get(idval)[1]++};Hiwire.pop_value=function(idval){let result=Hiwire.get_value(idval);Hiwire.decref(idval);return result};Hiwire.isPromise=function(obj){try{return!!obj&&typeof obj.then==="function"}catch(e){return!!0}};Module.typedArrayAsUint8Array=function(arg){if(arg.buffer!==undefined){return new Uint8Array(arg.buffer,arg.byteOffset,arg.byteLength)}else{return new Uint8Array(arg)}};{let dtypes_str=["b","B","h","H","i","I","f","d"].join(String.fromCharCode(0));let dtypes_ptr=stringToNewUTF8(dtypes_str);let dtypes_map={};for(let[idx,val]of Object.entries(dtypes_str)){dtypes_map[val]=dtypes_ptr+Number(idx)}let buffer_datatype_map=new Map([["Int8Array",[dtypes_map["b"],1,!!1]],["Uint8Array",[dtypes_map["B"],1,!!1]],["Uint8ClampedArray",[dtypes_map["B"],1,!!1]],["Int16Array",[dtypes_map["h"],2,!!1]],["Uint16Array",[dtypes_map["H"],2,!!1]],["Int32Array",[dtypes_map["i"],4,!!1]],["Uint32Array",[dtypes_map["I"],4,!!1]],["Float32Array",[dtypes_map["f"],4,!!1]],["Float64Array",[dtypes_map["d"],8,!!1]],["DataView",[dtypes_map["B"],1,!!0]],["ArrayBuffer",[dtypes_map["B"],1,!!0]]]);Module.get_buffer_datatype=function(jsobj){return buffer_datatype_map.get(jsobj.constructor.name)||[0,0,!!0]}}if(globalThis.BigInt){Module.BigInt=BigInt}else{Module.BigInt=Number}return 0}catch(e){Module.handle_js_error(e);return-1}return 0}function hiwire_int(val){"use strict";try{return Hiwire.new_value(val)}catch(e){Module.handle_js_error(e);return 0}throw new Error("Assertion error: control reached end of function without return")}function hiwire_int_from_digits(digits,ndigits){"use strict";try{let result=BigInt(0);for(let i=0;i>2)+i])<>2)+ndigits-1]&2147483648)<>2)+0]=result_id;return done}catch(e){Module.handle_js_error(e);return-1}return 0}function 
hiwire_not_equal(ida,idb){return!!(Hiwire.get_value(ida)!==Hiwire.get_value(idb))}function hiwire_read_from_file(idobj,fd){"use strict";try{let jsobj=Hiwire.get_value(idobj);let uint8_buffer=Module.typedArrayAsUint8Array(jsobj);let stream=Module.FS.streams[fd];Module.FS.read(stream,uint8_buffer,0,uint8_buffer.byteLength)}catch(e){Module.handle_js_error(e);return-1}return 0}function hiwire_resolve_promise(idobj){"use strict";try{let obj=Hiwire.get_value(idobj);let result=Promise.resolve(obj);return Hiwire.new_value(result)}catch(e){Module.handle_js_error(e);return 0}throw new Error("Assertion error: control reached end of function without return")}function hiwire_string_ascii(ptr){"use strict";try{return Hiwire.new_value(AsciiToString(ptr))}catch(e){Module.handle_js_error(e);return 0}throw new Error("Assertion error: control reached end of function without return")}function hiwire_string_ucs1(ptr,len){"use strict";try{let jsstr="";for(let i=0;i>1)+i])}return Hiwire.new_value(jsstr)}catch(e){Module.handle_js_error(e);return 0}throw new Error("Assertion error: control reached end of function without return")}function hiwire_string_ucs4(ptr,len){"use strict";try{let jsstr="";for(let i=0;i>2)+i])}return Hiwire.new_value(jsstr)}catch(e){Module.handle_js_error(e);return 0}throw new Error("Assertion error: control reached end of function without return")}function hiwire_string_utf8(ptr){"use strict";try{return Hiwire.new_value(UTF8ToString(ptr))}catch(e){Module.handle_js_error(e);return 0}throw new Error("Assertion error: control reached end of function without return")}function hiwire_subarray(idarr,start,end){"use strict";try{let jsarr=Hiwire.get_value(idarr);let jssub=jsarr.subarray(start,end);return Hiwire.new_value(jssub)}catch(e){Module.handle_js_error(e);return 0}throw new Error("Assertion error: control reached end of function without return")}function hiwire_throw_error(iderr){throw Hiwire.pop_value(iderr)}function hiwire_to_bool(val){return!!Hiwire.get_value(val)}function hiwire_to_string(idobj){"use strict";try{return Hiwire.new_value(Hiwire.get_value(idobj).toString())}catch(e){Module.handle_js_error(e);return 0}throw new Error("Assertion error: control reached end of function without return")}function hiwire_typeof(idobj){"use strict";try{return Hiwire.new_value(typeof Hiwire.get_value(idobj))}catch(e){Module.handle_js_error(e);return 0}throw new Error("Assertion error: control reached end of function without return")}function hiwire_write_to_file(idobj,fd){"use strict";try{let jsobj=Hiwire.get_value(idobj);let uint8_buffer=Module.typedArrayAsUint8Array(jsobj);let stream=Module.FS.streams[fd];Module.FS.write(stream,uint8_buffer,0,uint8_buffer.byteLength)}catch(e){Module.handle_js_error(e);return-1}return 0}function js2python(id){"use strict";try{let value=Hiwire.get_value(id);let result=Module.js2python_convertImmutable(value);if(result!==undefined){return result}return _JsProxy_create(id)}catch(e){Module.handle_js_error(e);return 0}throw new Error("Assertion error: control reached end of function without return")}function js2python_convert(id,depth,default_converter){"use strict";try{let defaultConverter=default_converter?Module.hiwire.get_value(default_converter):undefined;return Module.js2python_convert(id,{depth:depth,defaultConverter:defaultConverter})}catch(e){Module.handle_js_error(e);return 0}throw new Error("Assertion error: control reached end of function without return")}function js2python_init(){"use strict";try{{0;let PropagateError=Module._PropagatePythonError;function 
js2python_string(value){let max_code_point=0;let num_code_points=0;for(let c of value){num_code_points++;let code_point=c.codePointAt(0);max_code_point=code_point>max_code_point?code_point:max_code_point}let result=_PyUnicode_New(num_code_points,max_code_point);if(result===0){throw new PropagateError}let ptr=_PyUnicode_Data(result);if(max_code_point>65535){for(let c of value){HEAPU32[ptr/4]=c.codePointAt(0);ptr+=4}}else if(max_code_point>255){for(let c of value){HEAPU16[ptr/2]=c.codePointAt(0);ptr+=2}}else{for(let c of value){HEAPU8[ptr]=c.codePointAt(0);ptr+=1}}return result}function js2python_bigint(value){let value_orig=value;let length=0;if(value<0){value=-value}while(value){length++;value>>=BigInt(32)}let stackTop=stackSave();let ptr=stackAlloc(length*4);value=value_orig;for(let i=0;i>2)+i]=Number(value&BigInt(4294967295));value>>=BigInt(32)}let result=__PyLong_FromByteArray(ptr,length*4,!!1,!!1);stackRestore(stackTop);return result}function js2python_convertImmutable(value){let result=js2python_convertImmutableInner(value);if(result===0){throw new PropagateError}return result}Module.js2python_convertImmutable=js2python_convertImmutable;function js2python_convertImmutableInner(value){let type=typeof value;if(type==="string"){return js2python_string(value)}else if(type==="number"){if(Number.isSafeInteger(value)){return _PyLong_FromDouble(value)}else{return _PyFloat_FromDouble(value)}}else if(type==="bigint"){return js2python_bigint(value)}else if(value===undefined||value===null){return __js2python_none()}else if(value===!!1){return __js2python_true()}else if(value===!!0){return __js2python_false()}else if(API.isPyProxy(value)){return __js2python_pyproxy(Module.PyProxy_getPtr(value))}return undefined}function js2python_convertList(obj,context){let list=_PyList_New(obj.length);if(list===0){return 0}let entryid=0;let item=0;try{context.cache.set(obj,list);for(let i=0;i2){throw new Error("Expected format string to have length <= 2, "+`got '${formatStr}'.`+errorMessage)}let formatChar=formatStr.slice(-1);let alignChar=formatStr.slice(0,-1);let bigEndian;switch(alignChar){case"!":case">":bigEndian=!!1;break;case"<":case"@":case"=":case"":bigEndian=!!0;break;default:throw new Error(`Unrecognized alignment character ${alignChar}.`+errorMessage)}let arrayType;switch(formatChar){case"b":arrayType=Int8Array;break;case"s":case"p":case"c":case"B":case"?":arrayType=Uint8Array;break;case"h":arrayType=Int16Array;break;case"H":arrayType=Uint16Array;break;case"i":case"l":case"n":arrayType=Int32Array;break;case"I":case"L":case"N":case"P":arrayType=Uint32Array;break;case"q":if(globalThis.BigInt64Array===undefined){throw new Error("BigInt64Array is not supported on this browser."+errorMessage)}arrayType=BigInt64Array;break;case"Q":if(globalThis.BigUint64Array===undefined){throw new Error("BigUint64Array is not supported on this browser."+errorMessage)}arrayType=BigUint64Array;break;case"f":arrayType=Float32Array;break;case"d":arrayType=Float64Array;break;case"e":throw new Error("Javascript has no Float16 support.");default:throw new Error(`Unrecognized format character '${formatChar}'.`+errorMessage)}return[arrayType,bigEndian]};Module.python2js_buffer_1d_contiguous=function(ptr,stride,n){"use strict";let byteLength=stride*n;return HEAP8.slice(ptr,ptr+byteLength).buffer};Module.python2js_buffer_1d_noncontiguous=function(ptr,stride,suboffset,n,itemsize){"use strict";let byteLength=itemsize*n;let buffer=new Uint8Array(byteLength);for(let 
i=0;i=0){curptr=HEAPU32[(curptr>>2)+0]+suboffset}buffer.set(HEAP8.subarray(curptr,curptr+itemsize),i*itemsize)}return buffer.buffer};Module._python2js_buffer_recursive=function(ptr,curdim,bufferData){"use strict";let n=HEAPU32[(bufferData.shape>>2)+curdim];let stride=HEAP32[(bufferData.strides>>2)+curdim];let suboffset=-1;if(bufferData.suboffsets!==0){suboffset=HEAP32[(bufferData.suboffsets>>2)+curdim]}if(curdim===bufferData.ndim-1){let arraybuffer;if(stride===bufferData.itemsize&&suboffset<0){arraybuffer=Module.python2js_buffer_1d_contiguous(ptr,stride,n)}else{arraybuffer=Module.python2js_buffer_1d_noncontiguous(ptr,stride,suboffset,n,bufferData.itemsize)}return bufferData.converter(arraybuffer)}let result=[];for(let i=0;i=0){curptr=HEAPU32[(curptr>>2)+0]+suboffset}result.push(Module._python2js_buffer_recursive(curPtr,curdim+1,bufferData))}return result};Module.get_converter=function(format,itemsize){"use strict";let formatStr=UTF8ToString(format);let[ArrayType,bigEndian]=Module.processBufferFormatString(formatStr);let formatChar=formatStr.slice(-1);switch(formatChar){case"s":let decoder=new TextDecoder("utf8");return buff=>decoder.decode(buff);case"?":return buff=>Array.from(new Uint8Array(buff),x=>!!x)}if(!bigEndian){return buff=>new ArrayType(buff)}let getFuncName;let setFuncName;switch(itemsize){case 2:getFuncName="getUint16";setFuncName="setUint16";break;case 4:getFuncName="getUint32";setFuncName="setUint32";break;case 8:getFuncName="getFloat64";setFuncName="setFloat64";break;default:throw new Error(`Unexpected size ${itemsize}`)}function swapFunc(buff){let dataview=new DataView(buff);let getFunc=dataview[getFuncName].bind(dataview);let setFunc=dataview[setFuncName].bind(dataview);for(let byte=0;bytenew ArrayType(swapFunc(buff))}}return 0}catch(e){Module.handle_js_error(e);return-1}return 0}function python2js_custom__create_jscontext(context,cache,dict_converter,default_converter){"use strict";try{let jscontext={};if(dict_converter!==0){jscontext.dict_converter=Hiwire.get_value(dict_converter)}if(default_converter!==0){jscontext.default_converter=Hiwire.get_value(default_converter);jscontext.cacheConversion=function(input,output){if(!API.isPyProxy(input)){throw new TypeError("The first argument to cacheConversion must be a PyProxy.")}let input_ptr=Module.PyProxy_getPtr(input);let output_key=Hiwire.new_value(output);__python2js_add_to_cache(cache,input_ptr,output_key);Hiwire.decref(output_key)};jscontext.converter=function(x){if(!API.isPyProxy(x)){return x}let ptr=Module.PyProxy_getPtr(x);let res=__python2js(context,ptr);return Hiwire.pop_value(res)}}return Hiwire.new_value(jscontext)}catch(e){Module.handle_js_error(e);return 0}throw new Error("Assertion error: control reached end of function without return")}function setter_call_trampoline(set,obj,value,closure){return wasmTable.get(set)(obj,value,closure)}function unbox_small_structs(type_ptr){var type_id=HEAPU16[(type_ptr+6>>1)+0];while(type_id===13){var elements=HEAPU32[(type_ptr+8>>2)+0];var first_element=HEAPU32[(elements>>2)+0];if(first_element===0){type_id=0;break}else if(HEAPU32[(elements>>2)+1]===0){type_ptr=first_element;type_id=HEAPU16[(first_element+6>>1)+0]}else{break}}return[type_ptr,type_id]}function _emscripten_set_main_loop_timing(mode,value){Browser.mainLoop.timingMode=mode;Browser.mainLoop.timingValue=value;if(!Browser.mainLoop.func){return 1}if(!Browser.mainLoop.running){Browser.mainLoop.running=true}if(mode==0){Browser.mainLoop.scheduler=function Browser_mainLoop_scheduler_setTimeout(){var 
timeUntilNextTick=Math.max(0,Browser.mainLoop.tickStartTime+value-_emscripten_get_now())|0;setTimeout(Browser.mainLoop.runner,timeUntilNextTick)};Browser.mainLoop.method="timeout"}else if(mode==1){Browser.mainLoop.scheduler=function Browser_mainLoop_scheduler_rAF(){Browser.requestAnimationFrame(Browser.mainLoop.runner)};Browser.mainLoop.method="rAF"}else if(mode==2){if(typeof setImmediate==="undefined"){var setImmediates=[];var emscriptenMainLoopMessageId="setimmediate";var Browser_setImmediate_messageHandler=function(event){if(event.data===emscriptenMainLoopMessageId||event.data.target===emscriptenMainLoopMessageId){event.stopPropagation();setImmediates.shift()()}};addEventListener("message",Browser_setImmediate_messageHandler,true);setImmediate=function Browser_emulated_setImmediate(func){setImmediates.push(func);if(ENVIRONMENT_IS_WORKER){if(Module["setImmediates"]===undefined)Module["setImmediates"]=[];Module["setImmediates"].push(func);postMessage({target:emscriptenMainLoopMessageId})}else postMessage(emscriptenMainLoopMessageId,"*")}}Browser.mainLoop.scheduler=function Browser_mainLoop_scheduler_setImmediate(){setImmediate(Browser.mainLoop.runner)};Browser.mainLoop.method="immediate"}return 0}Module["_emscripten_set_main_loop_timing"]=_emscripten_set_main_loop_timing;_emscripten_set_main_loop_timing.sig="iii";var _emscripten_get_now;if(ENVIRONMENT_IS_NODE){_emscripten_get_now=function(){var t=process["hrtime"]();return t[0]*1e3+t[1]/1e6}}else _emscripten_get_now=function(){return performance.now()};Module["_emscripten_get_now"]=_emscripten_get_now;function runtimeKeepalivePush(){runtimeKeepaliveCounter+=1}Module["runtimeKeepalivePush"]=runtimeKeepalivePush;runtimeKeepalivePush.sig="v";function _exit(status){exit(status)}Module["_exit"]=_exit;_exit.sig="vi";function handleException(e){if(e instanceof ExitStatus||e==="unwind"){return}if(e&&typeof e==="object"&&e.stack)err("exception thrown: "+[e,e.stack]);throw e}Module["handleException"]=handleException;function maybeExit(){if(!keepRuntimeAlive()){try{_exit(EXITSTATUS)}catch(e){handleException(e)}}}Module["maybeExit"]=maybeExit;function setMainLoop(browserIterationFunc,fps,simulateInfiniteLoop,arg,noSetTiming){assert(!Browser.mainLoop.func,"emscripten_set_main_loop: there can only be one main loop function at once: call emscripten_cancel_main_loop to cancel the previous one before setting a new one with different parameters.");Browser.mainLoop.func=browserIterationFunc;Browser.mainLoop.arg=arg;var thisMainLoopId=Browser.mainLoop.currentlyRunningMainloop;function checkIsRunning(){if(thisMainLoopId0){var start=Date.now();var blocker=Browser.mainLoop.queue.shift();blocker.func(blocker.arg);if(Browser.mainLoop.remainingBlockers){var remaining=Browser.mainLoop.remainingBlockers;var next=remaining%1==0?remaining-1:Math.floor(remaining);if(blocker.counted){Browser.mainLoop.remainingBlockers=next}else{next=next+.5;Browser.mainLoop.remainingBlockers=(8*remaining+next)/9}}out('main loop blocker "'+blocker.name+'" took '+(Date.now()-start)+" ms");Browser.mainLoop.updateStatus();if(!checkIsRunning())return;setTimeout(Browser.mainLoop.runner,0);return}if(!checkIsRunning())return;Browser.mainLoop.currentFrameNumber=Browser.mainLoop.currentFrameNumber+1|0;if(Browser.mainLoop.timingMode==1&&Browser.mainLoop.timingValue>1&&Browser.mainLoop.currentFrameNumber%Browser.mainLoop.timingValue!=0){Browser.mainLoop.scheduler();return}else 
if(Browser.mainLoop.timingMode==0){Browser.mainLoop.tickStartTime=_emscripten_get_now()}Browser.mainLoop.runIter(browserIterationFunc);if(!checkIsRunning())return;if(typeof SDL==="object"&&SDL.audio&&SDL.audio.queueNewAudioData)SDL.audio.queueNewAudioData();Browser.mainLoop.scheduler()};if(!noSetTiming){if(fps&&fps>0)_emscripten_set_main_loop_timing(0,1e3/fps);else _emscripten_set_main_loop_timing(1,1);Browser.mainLoop.scheduler()}if(simulateInfiniteLoop){throw"unwind"}}Module["setMainLoop"]=setMainLoop;function callUserCallback(func,synchronous){if(ABORT){return}if(synchronous){func();return}try{func()}catch(e){handleException(e)}}Module["callUserCallback"]=callUserCallback;function safeSetTimeout(func,timeout){return setTimeout(function(){callUserCallback(func)},timeout)}Module["safeSetTimeout"]=safeSetTimeout;function runtimeKeepalivePop(){runtimeKeepaliveCounter-=1}Module["runtimeKeepalivePop"]=runtimeKeepalivePop;runtimeKeepalivePop.sig="v";var Browser={mainLoop:{running:false,scheduler:null,method:"",currentlyRunningMainloop:0,func:null,arg:0,timingMode:0,timingValue:0,currentFrameNumber:0,queue:[],pause:function(){Browser.mainLoop.scheduler=null;Browser.mainLoop.currentlyRunningMainloop++},resume:function(){Browser.mainLoop.currentlyRunningMainloop++;var timingMode=Browser.mainLoop.timingMode;var timingValue=Browser.mainLoop.timingValue;var func=Browser.mainLoop.func;Browser.mainLoop.func=null;setMainLoop(func,0,false,Browser.mainLoop.arg,true);_emscripten_set_main_loop_timing(timingMode,timingValue);Browser.mainLoop.scheduler()},updateStatus:function(){if(Module["setStatus"]){var message=Module["statusMessage"]||"Please wait...";var remaining=Browser.mainLoop.remainingBlockers;var expected=Browser.mainLoop.expectedBlockers;if(remaining){if(remaining=6){var curr=leftchar>>leftbits-6&63;leftbits-=6;ret+=BASE[curr]}}if(leftbits==2){ret+=BASE[(leftchar&3)<<4];ret+=PAD+PAD}else if(leftbits==4){ret+=BASE[(leftchar&15)<<2];ret+=PAD}return ret}audio.src="data:audio/x-"+name.substr(-3)+";base64,"+encode64(byteArray);finish(audio)};audio.src=url;safeSetTimeout(function(){finish(audio)},1e4)}else{return fail()}};Module["preloadPlugins"].push(audioPlugin);var wasmPlugin={"asyncWasmLoadPromise":new Promise(function(resolve,reject){return resolve()}),"canHandle":function(name){return!Module.noWasmDecoding&&name.endsWith(".so")},"handle":function(byteArray,name,onload,onerror){this["asyncWasmLoadPromise"]=this["asyncWasmLoadPromise"].then(function(){return loadWebAssemblyModule(byteArray,{loadAsync:true,nodelete:true})}).then(function(module){Module["preloadedWasm"][name]=module;onload()},function(err){console.warn("Couldn't instantiate wasm: "+name+" '"+err+"'");onerror()})}};Module["preloadPlugins"].push(wasmPlugin);function pointerLockChange(){Browser.pointerLock=document["pointerLockElement"]===Module["canvas"]||document["mozPointerLockElement"]===Module["canvas"]||document["webkitPointerLockElement"]===Module["canvas"]||document["msPointerLockElement"]===Module["canvas"]}var 
canvas=Module["canvas"];if(canvas){canvas.requestPointerLock=canvas["requestPointerLock"]||canvas["mozRequestPointerLock"]||canvas["webkitRequestPointerLock"]||canvas["msRequestPointerLock"]||function(){};canvas.exitPointerLock=document["exitPointerLock"]||document["mozExitPointerLock"]||document["webkitExitPointerLock"]||document["msExitPointerLock"]||function(){};canvas.exitPointerLock=canvas.exitPointerLock.bind(document);document.addEventListener("pointerlockchange",pointerLockChange,false);document.addEventListener("mozpointerlockchange",pointerLockChange,false);document.addEventListener("webkitpointerlockchange",pointerLockChange,false);document.addEventListener("mspointerlockchange",pointerLockChange,false);if(Module["elementPointerLock"]){canvas.addEventListener("click",function(ev){if(!Browser.pointerLock&&Module["canvas"].requestPointerLock){Module["canvas"].requestPointerLock();ev.preventDefault()}},false)}}},createContext:function(canvas,useWebGL,setInModule,webGLContextAttributes){if(useWebGL&&Module.ctx&&canvas==Module.canvas)return Module.ctx;var ctx;var contextHandle;if(useWebGL){var contextAttributes={antialias:false,alpha:false,majorVersion:1};if(webGLContextAttributes){for(var attribute in webGLContextAttributes){contextAttributes[attribute]=webGLContextAttributes[attribute]}}if(typeof GL!=="undefined"){contextHandle=GL.createContext(canvas,contextAttributes);if(contextHandle){ctx=GL.getContext(contextHandle).GLctx}}}else{ctx=canvas.getContext("2d")}if(!ctx)return null;if(setInModule){if(!useWebGL)assert(typeof GLctx==="undefined","cannot set in module if GLctx is used, but we are a non-GL context that would replace it");Module.ctx=ctx;if(useWebGL)GL.makeContextCurrent(contextHandle);Module.useWebGL=useWebGL;Browser.moduleContextCreatedCallbacks.forEach(function(callback){callback()});Browser.init()}return ctx},destroyContext:function(canvas,useWebGL,setInModule){},fullscreenHandlersInstalled:false,lockPointer:undefined,resizeCanvas:undefined,requestFullscreen:function(lockPointer,resizeCanvas){Browser.lockPointer=lockPointer;Browser.resizeCanvas=resizeCanvas;if(typeof Browser.lockPointer==="undefined")Browser.lockPointer=true;if(typeof Browser.resizeCanvas==="undefined")Browser.resizeCanvas=false;var canvas=Module["canvas"];function fullscreenChange(){Browser.isFullscreen=false;var canvasContainer=canvas.parentNode;if((document["fullscreenElement"]||document["mozFullScreenElement"]||document["msFullscreenElement"]||document["webkitFullscreenElement"]||document["webkitCurrentFullScreenElement"])===canvasContainer){canvas.exitFullscreen=Browser.exitFullscreen;if(Browser.lockPointer)canvas.requestPointerLock();Browser.isFullscreen=true;if(Browser.resizeCanvas){Browser.setFullscreenCanvasSize()}else{Browser.updateCanvasDimensions(canvas)}}else{canvasContainer.parentNode.insertBefore(canvas,canvasContainer);canvasContainer.parentNode.removeChild(canvasContainer);if(Browser.resizeCanvas){Browser.setWindowedCanvasSize()}else{Browser.updateCanvasDimensions(canvas)}}if(Module["onFullScreen"])Module["onFullScreen"](Browser.isFullscreen);if(Module["onFullscreen"])Module["onFullscreen"](Browser.isFullscreen)}if(!Browser.fullscreenHandlersInstalled){Browser.fullscreenHandlersInstalled=true;document.addEventListener("fullscreenchange",fullscreenChange,false);document.addEventListener("mozfullscreenchange",fullscreenChange,false);document.addEventListener("webkitfullscreenchange",fullscreenChange,false);document.addEventListener("MSFullscreenChange",fullscreenChange,false)}var 
canvasContainer=document.createElement("div");canvas.parentNode.insertBefore(canvasContainer,canvas);canvasContainer.appendChild(canvas);canvasContainer.requestFullscreen=canvasContainer["requestFullscreen"]||canvasContainer["mozRequestFullScreen"]||canvasContainer["msRequestFullscreen"]||(canvasContainer["webkitRequestFullscreen"]?function(){canvasContainer["webkitRequestFullscreen"](Element["ALLOW_KEYBOARD_INPUT"])}:null)||(canvasContainer["webkitRequestFullScreen"]?function(){canvasContainer["webkitRequestFullScreen"](Element["ALLOW_KEYBOARD_INPUT"])}:null);canvasContainer.requestFullscreen()},exitFullscreen:function(){if(!Browser.isFullscreen){return false}var CFS=document["exitFullscreen"]||document["cancelFullScreen"]||document["mozCancelFullScreen"]||document["msExitFullscreen"]||document["webkitCancelFullScreen"]||function(){};CFS.apply(document,[]);return true},nextRAF:0,fakeRequestAnimationFrame:function(func){var now=Date.now();if(Browser.nextRAF===0){Browser.nextRAF=now+1e3/60}else{while(now+2>=Browser.nextRAF){Browser.nextRAF+=1e3/60}}var delay=Math.max(Browser.nextRAF-now,0);setTimeout(func,delay)},requestAnimationFrame:function(func){if(typeof requestAnimationFrame==="function"){requestAnimationFrame(func);return}var RAF=Browser.fakeRequestAnimationFrame;RAF(func)},safeSetTimeout:function(func){return safeSetTimeout(func)},safeRequestAnimationFrame:function(func){return Browser.requestAnimationFrame(function(){callUserCallback(func)})},getMimetype:function(name){return{"jpg":"image/jpeg","jpeg":"image/jpeg","png":"image/png","bmp":"image/bmp","ogg":"audio/ogg","wav":"audio/wav","mp3":"audio/mpeg"}[name.substr(name.lastIndexOf(".")+1)]},getUserMedia:function(func){if(!window.getUserMedia){window.getUserMedia=navigator["getUserMedia"]||navigator["mozGetUserMedia"]}window.getUserMedia(func)},getMovementX:function(event){return event["movementX"]||event["mozMovementX"]||event["webkitMovementX"]||0},getMovementY:function(event){return event["movementY"]||event["mozMovementY"]||event["webkitMovementY"]||0},getMouseWheelDelta:function(event){var delta=0;switch(event.type){case"DOMMouseScroll":delta=event.detail/3;break;case"mousewheel":delta=event.wheelDelta/120;break;case"wheel":delta=event.deltaY;switch(event.deltaMode){case 0:delta/=100;break;case 1:delta/=3;break;case 2:delta*=80;break;default:throw"unrecognized mouse wheel delta mode: "+event.deltaMode}break;default:throw"unrecognized mouse wheel event: "+event.type}return delta},mouseX:0,mouseY:0,mouseMovementX:0,mouseMovementY:0,touches:{},lastTouches:{},calculateMouseEvent:function(event){if(Browser.pointerLock){if(event.type!="mousemove"&&"mozMovementX"in event){Browser.mouseMovementX=Browser.mouseMovementY=0}else{Browser.mouseMovementX=Browser.getMovementX(event);Browser.mouseMovementY=Browser.getMovementY(event)}if(typeof SDL!="undefined"){Browser.mouseX=SDL.mouseX+Browser.mouseMovementX;Browser.mouseY=SDL.mouseY+Browser.mouseMovementY}else{Browser.mouseX+=Browser.mouseMovementX;Browser.mouseY+=Browser.mouseMovementY}}else{var rect=Module["canvas"].getBoundingClientRect();var cw=Module["canvas"].width;var ch=Module["canvas"].height;var scrollX=typeof window.scrollX!=="undefined"?window.scrollX:window.pageXOffset;var scrollY=typeof window.scrollY!=="undefined"?window.scrollY:window.pageYOffset;if(event.type==="touchstart"||event.type==="touchend"||event.type==="touchmove"){var touch=event.touch;if(touch===undefined){return}var adjustedX=touch.pageX-(scrollX+rect.left);var 
adjustedY=touch.pageY-(scrollY+rect.top);adjustedX=adjustedX*(cw/rect.width);adjustedY=adjustedY*(ch/rect.height);var coords={x:adjustedX,y:adjustedY};if(event.type==="touchstart"){Browser.lastTouches[touch.identifier]=coords;Browser.touches[touch.identifier]=coords}else if(event.type==="touchend"||event.type==="touchmove"){var last=Browser.touches[touch.identifier];if(!last)last=coords;Browser.lastTouches[touch.identifier]=last;Browser.touches[touch.identifier]=coords}return}var x=event.pageX-(scrollX+rect.left);var y=event.pageY-(scrollY+rect.top);x=x*(cw/rect.width);y=y*(ch/rect.height);Browser.mouseMovementX=x-Browser.mouseX;Browser.mouseMovementY=y-Browser.mouseY;Browser.mouseX=x;Browser.mouseY=y}},resizeListeners:[],updateResizeListeners:function(){var canvas=Module["canvas"];Browser.resizeListeners.forEach(function(listener){listener(canvas.width,canvas.height)})},setCanvasSize:function(width,height,noUpdates){var canvas=Module["canvas"];Browser.updateCanvasDimensions(canvas,width,height);if(!noUpdates)Browser.updateResizeListeners()},windowedWidth:0,windowedHeight:0,setFullscreenCanvasSize:function(){if(typeof SDL!="undefined"){var flags=HEAPU32[SDL.screen>>2];flags=flags|8388608;HEAP32[SDL.screen>>2]=flags}Browser.updateCanvasDimensions(Module["canvas"]);Browser.updateResizeListeners()},setWindowedCanvasSize:function(){if(typeof SDL!="undefined"){var flags=HEAPU32[SDL.screen>>2];flags=flags&~8388608;HEAP32[SDL.screen>>2]=flags}Browser.updateCanvasDimensions(Module["canvas"]);Browser.updateResizeListeners()},updateCanvasDimensions:function(canvas,wNative,hNative){if(wNative&&hNative){canvas.widthNative=wNative;canvas.heightNative=hNative}else{wNative=canvas.widthNative;hNative=canvas.heightNative}var w=wNative;var h=hNative;if(Module["forcedAspectRatio"]&&Module["forcedAspectRatio"]>0){if(w/h0){var callback=callbacks.shift();if(typeof callback=="function"){callback(Module);continue}var func=callback.func;if(typeof func==="number"){if(callback.arg===undefined){wasmTable.get(func)()}else{wasmTable.get(func)(callback.arg)}}else{func(callback.arg===undefined?null:callback.arg)}}}Module["callRuntimeCallbacks"]=callRuntimeCallbacks;function demangle(func){demangle.recursionGuard=(demangle.recursionGuard|0)+1;if(demangle.recursionGuard>1)return func;var __cxa_demangle_func=Module["___cxa_demangle"]||Module["__cxa_demangle"];assert(__cxa_demangle_func);var stackTop=stackSave();try{var s=func;if(s.startsWith("__Z"))s=s.substr(1);var len=lengthBytesUTF8(s)+1;var buf=stackAlloc(len);stringToUTF8(s,buf,len);var status=stackAlloc(4);var ret=__cxa_demangle_func(buf,0,0,status);if(HEAP32[status>>2]===0&&ret){return UTF8ToString(ret)}}catch(e){}finally{_free(ret);stackRestore(stackTop);if(demangle.recursionGuard<2)--demangle.recursionGuard}return func}Module["demangle"]=demangle;function demangleAll(text){var regex=/\b_Z[\w\d_]+/g;return text.replace(regex,function(x){var y=demangle(x);return x===y?x:y+" ["+x+"]"})}Module["demangleAll"]=demangleAll;function getDylinkMetadata(binary){var next=0;function getLEB(){var ret=0;var mul=1;while(1){var byte=binary[next++];ret+=(byte&127)*mul;mul*=128;if(!(byte&128))break}return ret}if(binary instanceof WebAssembly.Module){var dylinkSection=WebAssembly.Module.customSections(binary,"dylink");assert(dylinkSection.length!=0,"need dylink section");binary=new Int8Array(dylinkSection[0])}else{var int32View=new Uint32Array(new Uint8Array(binary.subarray(0,24)).buffer);assert(int32View[0]==1836278016,"need to see wasm magic number");assert(binary[8]===0,"need the 
dylink section to be first");next=9;getLEB();assert(binary[next]===6);next++;assert(binary[next]==="d".charCodeAt(0));next++;assert(binary[next]==="y".charCodeAt(0));next++;assert(binary[next]==="l".charCodeAt(0));next++;assert(binary[next]==="i".charCodeAt(0));next++;assert(binary[next]==="n".charCodeAt(0));next++;assert(binary[next]==="k".charCodeAt(0));next++}var customSection={};customSection.memorySize=getLEB();customSection.memoryAlign=getLEB();customSection.tableSize=getLEB();customSection.tableAlign=getLEB();var neededDynlibsCount=getLEB();customSection.neededDynlibs=[];for(var i=0;i>2]=stdTimezoneOffset*60;HEAP32[__get_daylight()>>2]=Number(winterOffset!=summerOffset);function extractZone(date){var match=date.toTimeString().match(/\(([A-Za-z ]+)\)$/);return match?match[1]:"GMT"}var winterName=extractZone(winter);var summerName=extractZone(summer);var winterNamePtr=allocateUTF8(winterName);var summerNamePtr=allocateUTF8(summerName);if(summerOffset>2]=winterNamePtr;HEAP32[__get_tzname()+4>>2]=summerNamePtr}else{HEAP32[__get_tzname()>>2]=summerNamePtr;HEAP32[__get_tzname()+4>>2]=winterNamePtr}}Module["_tzset_impl"]=_tzset_impl;_tzset_impl.sig="v";function _tzset(){if(_tzset.called)return;_tzset.called=true;_tzset_impl()}Module["_tzset"]=_tzset;_tzset.sig="v";function _mktime(tmPtr){_tzset();var date=new Date(HEAP32[tmPtr+20>>2]+1900,HEAP32[tmPtr+16>>2],HEAP32[tmPtr+12>>2],HEAP32[tmPtr+8>>2],HEAP32[tmPtr+4>>2],HEAP32[tmPtr>>2],0);var dst=HEAP32[tmPtr+32>>2];var guessedOffset=date.getTimezoneOffset();var start=new Date(date.getFullYear(),0,1);var summerOffset=new Date(date.getFullYear(),6,1).getTimezoneOffset();var winterOffset=start.getTimezoneOffset();var dstOffset=Math.min(winterOffset,summerOffset);if(dst<0){HEAP32[tmPtr+32>>2]=Number(summerOffset!=winterOffset&&dstOffset==guessedOffset)}else if(dst>0!=(dstOffset==guessedOffset)){var nonDstOffset=Math.max(winterOffset,summerOffset);var trueOffset=dst>0?dstOffset:nonDstOffset;date.setTime(date.getTime()+(trueOffset-guessedOffset)*6e4)}HEAP32[tmPtr+24>>2]=date.getDay();var yday=(date.getTime()-start.getTime())/(1e3*60*60*24)|0;HEAP32[tmPtr+28>>2]=yday;HEAP32[tmPtr>>2]=date.getSeconds();HEAP32[tmPtr+4>>2]=date.getMinutes();HEAP32[tmPtr+8>>2]=date.getHours();HEAP32[tmPtr+12>>2]=date.getDate();HEAP32[tmPtr+16>>2]=date.getMonth();return date.getTime()/1e3|0}Module["_mktime"]=_mktime;_mktime.sig="ii";function ___asctime(tmPtr,buf){var date={tm_sec:HEAP32[tmPtr>>2],tm_min:HEAP32[tmPtr+4>>2],tm_hour:HEAP32[tmPtr+8>>2],tm_mday:HEAP32[tmPtr+12>>2],tm_mon:HEAP32[tmPtr+16>>2],tm_year:HEAP32[tmPtr+20>>2],tm_wday:HEAP32[tmPtr+24>>2]};var days=["Sun","Mon","Tue","Wed","Thu","Fri","Sat"];var months=["Jan","Feb","Mar","Apr","May","Jun","Jul","Aug","Sep","Oct","Nov","Dec"];var s=days[date.tm_wday]+" "+months[date.tm_mon]+(date.tm_mday<10?" ":" ")+date.tm_mday+(date.tm_hour<10?" 
0":" ")+date.tm_hour+(date.tm_min<10?":0":":")+date.tm_min+(date.tm_sec<10?":0":":")+date.tm_sec+" "+(1900+date.tm_year)+"\n";stringToUTF8(s,buf,26);return buf}Module["___asctime"]=___asctime;___asctime.sig="iii";function ___assert_fail(condition,filename,line,func){abort("Assertion failed: "+UTF8ToString(condition)+", at: "+[filename?UTF8ToString(filename):"unknown filename",line,func?UTF8ToString(func):"unknown function"])}Module["___assert_fail"]=___assert_fail;___assert_fail.sig="viiii";var _emscripten_get_now_is_monotonic=true;Module["_emscripten_get_now_is_monotonic"]=_emscripten_get_now_is_monotonic;function setErrNo(value){HEAP32[___errno_location()>>2]=value;return value}Module["setErrNo"]=setErrNo;function _clock_gettime(clk_id,tp){var now;if(clk_id===0){now=Date.now()}else if((clk_id===1||clk_id===4)&&_emscripten_get_now_is_monotonic){now=_emscripten_get_now()}else{setErrNo(28);return-1}HEAP32[tp>>2]=now/1e3|0;HEAP32[tp+4>>2]=now%1e3*1e3*1e3|0;return 0}Module["_clock_gettime"]=_clock_gettime;_clock_gettime.sig="iii";function ___clock_gettime(a0,a1){return _clock_gettime(a0,a1)}Module["___clock_gettime"]=___clock_gettime;___clock_gettime.sig="iii";function ___cxa_allocate_exception(size){return _malloc(size+16)+16}Module["___cxa_allocate_exception"]=___cxa_allocate_exception;___cxa_allocate_exception.sig="vi";function _atexit(func,arg){}Module["_atexit"]=_atexit;_atexit.sig="iii";function ___cxa_atexit(a0,a1){return _atexit(a0,a1)}Module["___cxa_atexit"]=___cxa_atexit;___cxa_atexit.sig="iii";function ExceptionInfo(excPtr){this.excPtr=excPtr;this.ptr=excPtr-16;this.set_type=function(type){HEAP32[this.ptr+4>>2]=type};this.get_type=function(){return HEAP32[this.ptr+4>>2]};this.set_destructor=function(destructor){HEAP32[this.ptr+8>>2]=destructor};this.get_destructor=function(){return HEAP32[this.ptr+8>>2]};this.set_refcount=function(refcount){HEAP32[this.ptr>>2]=refcount};this.set_caught=function(caught){caught=caught?1:0;HEAP8[this.ptr+12>>0]=caught};this.get_caught=function(){return HEAP8[this.ptr+12>>0]!=0};this.set_rethrown=function(rethrown){rethrown=rethrown?1:0;HEAP8[this.ptr+13>>0]=rethrown};this.get_rethrown=function(){return HEAP8[this.ptr+13>>0]!=0};this.init=function(type,destructor){this.set_type(type);this.set_destructor(destructor);this.set_refcount(0);this.set_caught(false);this.set_rethrown(false)};this.add_ref=function(){var value=HEAP32[this.ptr>>2];HEAP32[this.ptr>>2]=value+1};this.release_ref=function(){var prev=HEAP32[this.ptr>>2];HEAP32[this.ptr>>2]=prev-1;return prev===1}}Module["ExceptionInfo"]=ExceptionInfo;function CatchInfo(ptr){this.free=function(){_free(this.ptr);this.ptr=0};this.set_base_ptr=function(basePtr){HEAP32[this.ptr>>2]=basePtr};this.get_base_ptr=function(){return HEAP32[this.ptr>>2]};this.set_adjusted_ptr=function(adjustedPtr){HEAP32[this.ptr+4>>2]=adjustedPtr};this.get_adjusted_ptr_addr=function(){return this.ptr+4};this.get_adjusted_ptr=function(){return HEAP32[this.ptr+4>>2]};this.get_exception_ptr=function(){var isPointer=Module["___cxa_is_pointer_type"](this.get_exception_info().get_type());if(isPointer){return HEAP32[this.get_base_ptr()>>2]}var adjusted=this.get_adjusted_ptr();if(adjusted!==0)return adjusted;return this.get_base_ptr()};this.get_exception_info=function(){return new ExceptionInfo(this.get_base_ptr())};if(ptr===undefined){this.ptr=_malloc(8);this.set_adjusted_ptr(0)}else{this.ptr=ptr}}Module["CatchInfo"]=CatchInfo;var exceptionCaught=[];Module["exceptionCaught"]=exceptionCaught;function 
exception_addRef(info){info.add_ref()}Module["exception_addRef"]=exception_addRef;var uncaughtExceptionCount=0;Module["uncaughtExceptionCount"]=uncaughtExceptionCount;function ___cxa_begin_catch(ptr){var catchInfo=new CatchInfo(ptr);var info=catchInfo.get_exception_info();if(!info.get_caught()){info.set_caught(true);uncaughtExceptionCount--}info.set_rethrown(false);exceptionCaught.push(catchInfo);exception_addRef(info);return catchInfo.get_exception_ptr()}Module["___cxa_begin_catch"]=___cxa_begin_catch;function ___cxa_current_primary_exception(){if(!exceptionCaught.length){return 0}var catchInfo=exceptionCaught[exceptionCaught.length-1];exception_addRef(catchInfo.get_exception_info());return catchInfo.get_base_ptr()}Module["___cxa_current_primary_exception"]=___cxa_current_primary_exception;function ___cxa_free_exception(ptr){return _free(new ExceptionInfo(ptr).ptr)}Module["___cxa_free_exception"]=___cxa_free_exception;___cxa_free_exception.sig="vi";function exception_decRef(info){if(info.release_ref()&&!info.get_rethrown()){var destructor=info.get_destructor();if(destructor){wasmTable.get(destructor)(info.excPtr)}___cxa_free_exception(info.excPtr)}}Module["exception_decRef"]=exception_decRef;function ___cxa_decrement_exception_refcount(ptr){if(!ptr)return;exception_decRef(new ExceptionInfo(ptr))}Module["___cxa_decrement_exception_refcount"]=___cxa_decrement_exception_refcount;var exceptionLast=0;Module["exceptionLast"]=exceptionLast;function ___cxa_end_catch(){_setThrew(0);var catchInfo=exceptionCaught.pop();exception_decRef(catchInfo.get_exception_info());catchInfo.free();exceptionLast=0}Module["___cxa_end_catch"]=___cxa_end_catch;___cxa_end_catch.sig="v";function ___resumeException(catchInfoPtr){var catchInfo=new CatchInfo(catchInfoPtr);var ptr=catchInfo.get_base_ptr();if(!exceptionLast){exceptionLast=ptr}catchInfo.free();throw ptr}Module["___resumeException"]=___resumeException;function ___cxa_find_matching_catch_2(){var thrown=exceptionLast;if(!thrown){setTempRet0(0);return 0|0}var info=new ExceptionInfo(thrown);var thrownType=info.get_type();var catchInfo=new CatchInfo;catchInfo.set_base_ptr(thrown);catchInfo.set_adjusted_ptr(thrown);if(!thrownType){setTempRet0(0);return catchInfo.ptr|0}var typeArray=Array.prototype.slice.call(arguments);for(var i=0;i>2]*1e3);HEAP32[tmPtr>>2]=date.getUTCSeconds();HEAP32[tmPtr+4>>2]=date.getUTCMinutes();HEAP32[tmPtr+8>>2]=date.getUTCHours();HEAP32[tmPtr+12>>2]=date.getUTCDate();HEAP32[tmPtr+16>>2]=date.getUTCMonth();HEAP32[tmPtr+20>>2]=date.getUTCFullYear()-1900;HEAP32[tmPtr+24>>2]=date.getUTCDay();HEAP32[tmPtr+36>>2]=0;HEAP32[tmPtr+32>>2]=0;var start=Date.UTC(date.getUTCFullYear(),0,1,0,0,0,0);var yday=(date.getTime()-start)/(1e3*60*60*24)|0;HEAP32[tmPtr+28>>2]=yday;if(!_gmtime_r.GMTString)_gmtime_r.GMTString=allocateUTF8("GMT");HEAP32[tmPtr+40>>2]=_gmtime_r.GMTString;return tmPtr}Module["_gmtime_r"]=_gmtime_r;_gmtime_r.sig="iii";function ___gmtime_r(a0,a1){return _gmtime_r(a0,a1)}Module["___gmtime_r"]=___gmtime_r;___gmtime_r.sig="iii";function _localtime_r(time,tmPtr){_tzset();var date=new Date(HEAP32[time>>2]*1e3);HEAP32[tmPtr>>2]=date.getSeconds();HEAP32[tmPtr+4>>2]=date.getMinutes();HEAP32[tmPtr+8>>2]=date.getHours();HEAP32[tmPtr+12>>2]=date.getDate();HEAP32[tmPtr+16>>2]=date.getMonth();HEAP32[tmPtr+20>>2]=date.getFullYear()-1900;HEAP32[tmPtr+24>>2]=date.getDay();var start=new Date(date.getFullYear(),0,1);var yday=(date.getTime()-start.getTime())/(1e3*60*60*24)|0;HEAP32[tmPtr+28>>2]=yday;HEAP32[tmPtr+36>>2]=-(date.getTimezoneOffset()*60);var 
summerOffset=new Date(date.getFullYear(),6,1).getTimezoneOffset();var winterOffset=start.getTimezoneOffset();var dst=(summerOffset!=winterOffset&&date.getTimezoneOffset()==Math.min(winterOffset,summerOffset))|0;HEAP32[tmPtr+32>>2]=dst;var zonePtr=HEAP32[__get_tzname()+(dst?4:0)>>2];HEAP32[tmPtr+40>>2]=zonePtr;return tmPtr}Module["_localtime_r"]=_localtime_r;_localtime_r.sig="iii";function ___localtime_r(a0,a1){return _localtime_r(a0,a1)}Module["___localtime_r"]=___localtime_r;___localtime_r.sig="iii";function ___map_file(pathname,size){setErrNo(63);return-1}Module["___map_file"]=___map_file;var __sigalrm_handler=0;Module["__sigalrm_handler"]=__sigalrm_handler;function ___sigaction(sig,act,oldact){if(sig==14){__sigalrm_handler=HEAP32[act>>2];return 0}return 0}Module["___sigaction"]=___sigaction;___sigaction.sig="viii";var ___stack_pointer=new WebAssembly.Global({"value":"i32","mutable":true},8404752);Module["___stack_pointer"]=___stack_pointer;var PATH={splitPath:function(filename){var splitPathRe=/^(\/?|)([\s\S]*?)((?:\.{1,2}|[^\/]+?|)(\.[^.\/]*|))(?:[\/]*)$/;return splitPathRe.exec(filename).slice(1)},normalizeArray:function(parts,allowAboveRoot){var up=0;for(var i=parts.length-1;i>=0;i--){var last=parts[i];if(last==="."){parts.splice(i,1)}else if(last===".."){parts.splice(i,1);up++}else if(up){parts.splice(i,1);up--}}if(allowAboveRoot){for(;up;up--){parts.unshift("..")}}return parts},normalize:function(path){var isAbsolute=path.charAt(0)==="/",trailingSlash=path.substr(-1)==="/";path=PATH.normalizeArray(path.split("/").filter(function(p){return!!p}),!isAbsolute).join("/");if(!path&&!isAbsolute){path="."}if(path&&trailingSlash){path+="/"}return(isAbsolute?"/":"")+path},dirname:function(path){var result=PATH.splitPath(path),root=result[0],dir=result[1];if(!root&&!dir){return"."}if(dir){dir=dir.substr(0,dir.length-1)}return root+dir},basename:function(path){if(path==="/")return"/";path=PATH.normalize(path);path=path.replace(/\/$/,"");var lastSlash=path.lastIndexOf("/");if(lastSlash===-1)return path;return path.substr(lastSlash+1)},extname:function(path){return PATH.splitPath(path)[3]},join:function(){var paths=Array.prototype.slice.call(arguments,0);return PATH.normalize(paths.join("/"))},join2:function(l,r){return PATH.normalize(l+"/"+r)}};Module["PATH"]=PATH;function getRandomDevice(){if(typeof crypto==="object"&&typeof crypto["getRandomValues"]==="function"){var randomBuffer=new Uint8Array(1);return function(){crypto.getRandomValues(randomBuffer);return randomBuffer[0]}}else if(ENVIRONMENT_IS_NODE){try{var crypto_module=require("crypto");return function(){return crypto_module["randomBytes"](1)[0]}}catch(e){}}return function(){abort("randomDevice")}}Module["getRandomDevice"]=getRandomDevice;var PATH_FS={resolve:function(){var resolvedPath="",resolvedAbsolute=false;for(var i=arguments.length-1;i>=-1&&!resolvedAbsolute;i--){var path=i>=0?arguments[i]:FS.cwd();if(typeof path!=="string"){throw new TypeError("Arguments to path.resolve must be strings")}else if(!path){return""}resolvedPath=path+"/"+resolvedPath;resolvedAbsolute=path.charAt(0)==="/"}resolvedPath=PATH.normalizeArray(resolvedPath.split("/").filter(function(p){return!!p}),!resolvedAbsolute).join("/");return(resolvedAbsolute?"/":"")+resolvedPath||"."},relative:function(from,to){from=PATH_FS.resolve(from).substr(1);to=PATH_FS.resolve(to).substr(1);function trim(arr){var start=0;for(;start=0;end--){if(arr[end]!=="")break}if(start>end)return[];return arr.slice(start,end-start+1)}var fromParts=trim(from.split("/"));var 
toParts=trim(to.split("/"));var length=Math.min(fromParts.length,toParts.length);var samePartsLength=length;for(var i=0;i0){result=buf.slice(0,bytesRead).toString("utf-8")}else{result=null}}else if(typeof window!="undefined"&&typeof window.prompt=="function"){result=window.prompt("Input: ");if(result!==null){result+="\n"}}else if(typeof readline=="function"){result=readline();if(result!==null){result+="\n"}}if(!result){return null}tty.input=intArrayFromString(result,true)}return tty.input.shift()},put_char:function(tty,val){if(val===null||val===10){out(UTF8ArrayToString(tty.output,0));tty.output=[]}else{if(val!=0)tty.output.push(val)}},flush:function(tty){if(tty.output&&tty.output.length>0){out(UTF8ArrayToString(tty.output,0));tty.output=[]}}},default_tty1_ops:{put_char:function(tty,val){if(val===null||val===10){err(UTF8ArrayToString(tty.output,0));tty.output=[]}else{if(val!=0)tty.output.push(val)}},flush:function(tty){if(tty.output&&tty.output.length>0){err(UTF8ArrayToString(tty.output,0));tty.output=[]}}}};Module["TTY"]=TTY;function zeroMemory(address,size){HEAPU8.fill(0,address,address+size)}Module["zeroMemory"]=zeroMemory;function mmapAlloc(size){size=alignMemory(size,65536);var ptr=_memalign(65536,size);if(!ptr)return 0;zeroMemory(ptr,size);return ptr}Module["mmapAlloc"]=mmapAlloc;var MEMFS={ops_table:null,mount:function(mount){return MEMFS.createNode(null,"/",16384|511,0)},createNode:function(parent,name,mode,dev){if(FS.isBlkdev(mode)||FS.isFIFO(mode)){throw new FS.ErrnoError(63)}if(!MEMFS.ops_table){MEMFS.ops_table={dir:{node:{getattr:MEMFS.node_ops.getattr,setattr:MEMFS.node_ops.setattr,lookup:MEMFS.node_ops.lookup,mknod:MEMFS.node_ops.mknod,rename:MEMFS.node_ops.rename,unlink:MEMFS.node_ops.unlink,rmdir:MEMFS.node_ops.rmdir,readdir:MEMFS.node_ops.readdir,symlink:MEMFS.node_ops.symlink},stream:{llseek:MEMFS.stream_ops.llseek}},file:{node:{getattr:MEMFS.node_ops.getattr,setattr:MEMFS.node_ops.setattr},stream:{llseek:MEMFS.stream_ops.llseek,read:MEMFS.stream_ops.read,write:MEMFS.stream_ops.write,allocate:MEMFS.stream_ops.allocate,mmap:MEMFS.stream_ops.mmap,msync:MEMFS.stream_ops.msync}},link:{node:{getattr:MEMFS.node_ops.getattr,setattr:MEMFS.node_ops.setattr,readlink:MEMFS.node_ops.readlink},stream:{}},chrdev:{node:{getattr:MEMFS.node_ops.getattr,setattr:MEMFS.node_ops.setattr},stream:FS.chrdev_stream_ops}}}var node=FS.createNode(parent,name,mode,dev);if(FS.isDir(node.mode)){node.node_ops=MEMFS.ops_table.dir.node;node.stream_ops=MEMFS.ops_table.dir.stream;node.contents={}}else if(FS.isFile(node.mode)){node.node_ops=MEMFS.ops_table.file.node;node.stream_ops=MEMFS.ops_table.file.stream;node.usedBytes=0;node.contents=null}else if(FS.isLink(node.mode)){node.node_ops=MEMFS.ops_table.link.node;node.stream_ops=MEMFS.ops_table.link.stream}else if(FS.isChrdev(node.mode)){node.node_ops=MEMFS.ops_table.chrdev.node;node.stream_ops=MEMFS.ops_table.chrdev.stream}node.timestamp=Date.now();if(parent){parent.contents[name]=node;parent.timestamp=node.timestamp}return node},getFileDataAsTypedArray:function(node){if(!node.contents)return new Uint8Array(0);if(node.contents.subarray)return node.contents.subarray(0,node.usedBytes);return new Uint8Array(node.contents)},expandFileStorage:function(node,newCapacity){var prevCapacity=node.contents?node.contents.length:0;if(prevCapacity>=newCapacity)return;var CAPACITY_DOUBLING_MAX=1024*1024;newCapacity=Math.max(newCapacity,prevCapacity*(prevCapacity>>0);if(prevCapacity!=0)newCapacity=Math.max(newCapacity,256);var oldContents=node.contents;node.contents=new 
Uint8Array(newCapacity);if(node.usedBytes>0)node.contents.set(oldContents.subarray(0,node.usedBytes),0)},resizeFileStorage:function(node,newSize){if(node.usedBytes==newSize)return;if(newSize==0){node.contents=null;node.usedBytes=0}else{var oldContents=node.contents;node.contents=new Uint8Array(newSize);if(oldContents){node.contents.set(oldContents.subarray(0,Math.min(newSize,node.usedBytes)))}node.usedBytes=newSize}},node_ops:{getattr:function(node){var attr={};attr.dev=FS.isChrdev(node.mode)?node.id:1;attr.ino=node.id;attr.mode=node.mode;attr.nlink=1;attr.uid=0;attr.gid=0;attr.rdev=node.rdev;if(FS.isDir(node.mode)){attr.size=4096}else if(FS.isFile(node.mode)){attr.size=node.usedBytes}else if(FS.isLink(node.mode)){attr.size=node.link.length}else{attr.size=0}attr.atime=new Date(node.timestamp);attr.mtime=new Date(node.timestamp);attr.ctime=new Date(node.timestamp);attr.blksize=4096;attr.blocks=Math.ceil(attr.size/attr.blksize);return attr},setattr:function(node,attr){if(attr.mode!==undefined){node.mode=attr.mode}if(attr.timestamp!==undefined){node.timestamp=attr.timestamp}if(attr.size!==undefined){MEMFS.resizeFileStorage(node,attr.size)}},lookup:function(parent,name){throw FS.genericErrors[44]},mknod:function(parent,name,mode,dev){return MEMFS.createNode(parent,name,mode,dev)},rename:function(old_node,new_dir,new_name){if(FS.isDir(old_node.mode)){var new_node;try{new_node=FS.lookupNode(new_dir,new_name)}catch(e){}if(new_node){for(var i in new_node.contents){throw new FS.ErrnoError(55)}}}delete old_node.parent.contents[old_node.name];old_node.parent.timestamp=Date.now();old_node.name=new_name;new_dir.contents[new_name]=old_node;new_dir.timestamp=old_node.parent.timestamp;old_node.parent=new_dir},unlink:function(parent,name){delete parent.contents[name];parent.timestamp=Date.now()},rmdir:function(parent,name){var node=FS.lookupNode(parent,name);for(var i in node.contents){throw new FS.ErrnoError(55)}delete parent.contents[name];parent.timestamp=Date.now()},readdir:function(node){var entries=[".",".."];for(var key in node.contents){if(!node.contents.hasOwnProperty(key)){continue}entries.push(key)}return entries},symlink:function(parent,newname,oldpath){var node=MEMFS.createNode(parent,newname,511|40960,0);node.link=oldpath;return node},readlink:function(node){if(!FS.isLink(node.mode)){throw new FS.ErrnoError(28)}return node.link}},stream_ops:{read:function(stream,buffer,offset,length,position){var contents=stream.node.contents;if(position>=stream.node.usedBytes)return 0;var size=Math.min(stream.node.usedBytes-position,length);if(size>8&&contents.subarray){buffer.set(contents.subarray(position,position+size),offset)}else{for(var i=0;i0||position+length>2}}catch(e){if(!e.code)throw e;throw new FS.ErrnoError(NODEFS.convertNodeCode(e))}return stat.mode},realPath:function(node){var parts=[];while(node.parent!==node){parts.push(node.name);node=node.parent}parts.push(node.mount.opts.root);parts.reverse();return PATH.join.apply(null,parts)},flagsForNode:function(flags){flags&=~2097152;flags&=~2048;flags&=~32768;flags&=~524288;var newFlags=0;for(var k in NODEFS.flagsForNodeMap){if(flags&k){newFlags|=NODEFS.flagsForNodeMap[k];flags^=k}}if(!flags){return newFlags}else{throw new FS.ErrnoError(28)}},node_ops:{getattr:function(node){var path=NODEFS.realPath(node);var stat;try{stat=fs.lstatSync(path)}catch(e){if(!e.code)throw e;throw new 
FS.ErrnoError(NODEFS.convertNodeCode(e))}if(NODEFS.isWindows&&!stat.blksize){stat.blksize=4096}if(NODEFS.isWindows&&!stat.blocks){stat.blocks=(stat.size+stat.blksize-1)/stat.blksize|0}return{dev:stat.dev,ino:stat.ino,mode:stat.mode,nlink:stat.nlink,uid:stat.uid,gid:stat.gid,rdev:stat.rdev,size:stat.size,atime:stat.atime,mtime:stat.mtime,ctime:stat.ctime,blksize:stat.blksize,blocks:stat.blocks}},setattr:function(node,attr){var path=NODEFS.realPath(node);try{if(attr.mode!==undefined){fs.chmodSync(path,attr.mode);node.mode=attr.mode}if(attr.timestamp!==undefined){var date=new Date(attr.timestamp);fs.utimesSync(path,date,date)}if(attr.size!==undefined){fs.truncateSync(path,attr.size)}}catch(e){if(!e.code)throw e;throw new FS.ErrnoError(NODEFS.convertNodeCode(e))}},lookup:function(parent,name){var path=PATH.join2(NODEFS.realPath(parent),name);var mode=NODEFS.getMode(path);return NODEFS.createNode(parent,name,mode)},mknod:function(parent,name,mode,dev){var node=NODEFS.createNode(parent,name,mode,dev);var path=NODEFS.realPath(node);try{if(FS.isDir(node.mode)){fs.mkdirSync(path,node.mode)}else{fs.writeFileSync(path,"",{mode:node.mode})}}catch(e){if(!e.code)throw e;throw new FS.ErrnoError(NODEFS.convertNodeCode(e))}return node},rename:function(oldNode,newDir,newName){var oldPath=NODEFS.realPath(oldNode);var newPath=PATH.join2(NODEFS.realPath(newDir),newName);try{fs.renameSync(oldPath,newPath)}catch(e){if(!e.code)throw e;throw new FS.ErrnoError(NODEFS.convertNodeCode(e))}oldNode.name=newName},unlink:function(parent,name){var path=PATH.join2(NODEFS.realPath(parent),name);try{fs.unlinkSync(path)}catch(e){if(!e.code)throw e;throw new FS.ErrnoError(NODEFS.convertNodeCode(e))}},rmdir:function(parent,name){var path=PATH.join2(NODEFS.realPath(parent),name);try{fs.rmdirSync(path)}catch(e){if(!e.code)throw e;throw new FS.ErrnoError(NODEFS.convertNodeCode(e))}},readdir:function(node){var path=NODEFS.realPath(node);try{return fs.readdirSync(path)}catch(e){if(!e.code)throw e;throw new FS.ErrnoError(NODEFS.convertNodeCode(e))}},symlink:function(parent,newName,oldPath){var newPath=PATH.join2(NODEFS.realPath(parent),newName);try{fs.symlinkSync(oldPath,newPath)}catch(e){if(!e.code)throw e;throw new FS.ErrnoError(NODEFS.convertNodeCode(e))}},readlink:function(node){var path=NODEFS.realPath(node);try{path=fs.readlinkSync(path);path=NODEJS_PATH.relative(NODEJS_PATH.resolve(node.mount.opts.root),path);return path}catch(e){if(!e.code)throw e;throw new FS.ErrnoError(NODEFS.convertNodeCode(e))}}},stream_ops:{open:function(stream){var path=NODEFS.realPath(stream.node);try{if(FS.isFile(stream.node.mode)){stream.nfd=fs.openSync(path,NODEFS.flagsForNode(stream.flags))}}catch(e){if(!e.code)throw e;throw new FS.ErrnoError(NODEFS.convertNodeCode(e))}},close:function(stream){try{if(FS.isFile(stream.node.mode)&&stream.nfd){fs.closeSync(stream.nfd)}}catch(e){if(!e.code)throw e;throw new FS.ErrnoError(NODEFS.convertNodeCode(e))}},read:function(stream,buffer,offset,length,position){if(length===0)return 0;try{return fs.readSync(stream.nfd,Buffer.from(buffer.buffer),offset,length,position)}catch(e){throw new FS.ErrnoError(NODEFS.convertNodeCode(e))}},write:function(stream,buffer,offset,length,position){try{return fs.writeSync(stream.nfd,Buffer.from(buffer.buffer),offset,length,position)}catch(e){throw new FS.ErrnoError(NODEFS.convertNodeCode(e))}},llseek:function(stream,offset,whence){var position=offset;if(whence===1){position+=stream.position}else if(whence===2){if(FS.isFile(stream.node.mode)){try{var 
stat=fs.fstatSync(stream.nfd);position+=stat.size}catch(e){throw new FS.ErrnoError(NODEFS.convertNodeCode(e))}}}if(position<0){throw new FS.ErrnoError(28)}return position},mmap:function(stream,address,length,position,prot,flags){if(address!==0){throw new FS.ErrnoError(28)}if(!FS.isFile(stream.node.mode)){throw new FS.ErrnoError(43)}var ptr=mmapAlloc(length);NODEFS.stream_ops.read(stream,HEAP8,ptr,length,position);return{ptr:ptr,allocated:true}},msync:function(stream,buffer,offset,length,mmapFlags){if(!FS.isFile(stream.node.mode)){throw new FS.ErrnoError(43)}if(mmapFlags&2){return 0}var bytesWritten=NODEFS.stream_ops.write(stream,buffer,0,length,offset,false);return 0}}};Module["NODEFS"]=NODEFS;var WORKERFS={DIR_MODE:16895,FILE_MODE:33279,reader:null,mount:function(mount){assert(ENVIRONMENT_IS_WORKER);if(!WORKERFS.reader)WORKERFS.reader=new FileReaderSync;var root=WORKERFS.createNode(null,"/",WORKERFS.DIR_MODE,0);var createdParents={};function ensureParent(path){var parts=path.split("/");var parent=root;for(var i=0;i=stream.node.size)return 0;var chunk=stream.node.contents.slice(position,position+length);var ab=WORKERFS.reader.readAsArrayBuffer(chunk);buffer.set(new Uint8Array(ab),offset);return chunk.size},write:function(stream,buffer,offset,length,position){throw new FS.ErrnoError(29)},llseek:function(stream,offset,whence){var position=offset;if(whence===1){position+=stream.position}else if(whence===2){if(FS.isFile(stream.node.mode)){position+=stream.node.size}}if(position<0){throw new FS.ErrnoError(28)}return position}}};Module["WORKERFS"]=WORKERFS;var PROXYFS={mount:function(mount){return PROXYFS.createNode(null,"/",mount.opts.fs.lstat(mount.opts.root).mode,0)},createNode:function(parent,name,mode,dev){if(!FS.isDir(mode)&&!FS.isFile(mode)&&!FS.isLink(mode)){throw new FS.ErrnoError(ERRNO_CODES.EINVAL)}var node=FS.createNode(parent,name,mode);node.node_ops=PROXYFS.node_ops;node.stream_ops=PROXYFS.stream_ops;return node},realPath:function(node){var parts=[];while(node.parent!==node){parts.push(node.name);node=node.parent}parts.push(node.mount.opts.root);parts.reverse();return PATH.join.apply(null,parts)},node_ops:{getattr:function(node){var path=PROXYFS.realPath(node);var stat;try{stat=node.mount.opts.fs.lstat(path)}catch(e){if(!e.code)throw e;throw new FS.ErrnoError(ERRNO_CODES[e.code])}return{dev:stat.dev,ino:stat.ino,mode:stat.mode,nlink:stat.nlink,uid:stat.uid,gid:stat.gid,rdev:stat.rdev,size:stat.size,atime:stat.atime,mtime:stat.mtime,ctime:stat.ctime,blksize:stat.blksize,blocks:stat.blocks}},setattr:function(node,attr){var path=PROXYFS.realPath(node);try{if(attr.mode!==undefined){node.mount.opts.fs.chmod(path,attr.mode);node.mode=attr.mode}if(attr.timestamp!==undefined){var date=new Date(attr.timestamp);node.mount.opts.fs.utime(path,date,date)}if(attr.size!==undefined){node.mount.opts.fs.truncate(path,attr.size)}}catch(e){if(!e.code)throw e;throw new FS.ErrnoError(ERRNO_CODES[e.code])}},lookup:function(parent,name){try{var path=PATH.join2(PROXYFS.realPath(parent),name);var mode=parent.mount.opts.fs.lstat(path).mode;var node=PROXYFS.createNode(parent,name,mode);return node}catch(e){if(!e.code)throw e;throw new FS.ErrnoError(ERRNO_CODES[e.code])}},mknod:function(parent,name,mode,dev){var node=PROXYFS.createNode(parent,name,mode,dev);var path=PROXYFS.realPath(node);try{if(FS.isDir(node.mode)){node.mount.opts.fs.mkdir(path,node.mode)}else{node.mount.opts.fs.writeFile(path,"",{mode:node.mode})}}catch(e){if(!e.code)throw e;throw new FS.ErrnoError(ERRNO_CODES[e.code])}return 
node},rename:function(oldNode,newDir,newName){var oldPath=PROXYFS.realPath(oldNode);var newPath=PATH.join2(PROXYFS.realPath(newDir),newName);try{oldNode.mount.opts.fs.rename(oldPath,newPath)}catch(e){if(!e.code)throw e;throw new FS.ErrnoError(ERRNO_CODES[e.code])}},unlink:function(parent,name){var path=PATH.join2(PROXYFS.realPath(parent),name);try{parent.mount.opts.fs.unlink(path)}catch(e){if(!e.code)throw e;throw new FS.ErrnoError(ERRNO_CODES[e.code])}},rmdir:function(parent,name){var path=PATH.join2(PROXYFS.realPath(parent),name);try{parent.mount.opts.fs.rmdir(path)}catch(e){if(!e.code)throw e;throw new FS.ErrnoError(ERRNO_CODES[e.code])}},readdir:function(node){var path=PROXYFS.realPath(node);try{return node.mount.opts.fs.readdir(path)}catch(e){if(!e.code)throw e;throw new FS.ErrnoError(ERRNO_CODES[e.code])}},symlink:function(parent,newName,oldPath){var newPath=PATH.join2(PROXYFS.realPath(parent),newName);try{parent.mount.opts.fs.symlink(oldPath,newPath)}catch(e){if(!e.code)throw e;throw new FS.ErrnoError(ERRNO_CODES[e.code])}},readlink:function(node){var path=PROXYFS.realPath(node);try{return node.mount.opts.fs.readlink(path)}catch(e){if(!e.code)throw e;throw new FS.ErrnoError(ERRNO_CODES[e.code])}}},stream_ops:{open:function(stream){var path=PROXYFS.realPath(stream.node);try{stream.nfd=stream.node.mount.opts.fs.open(path,stream.flags)}catch(e){if(!e.code)throw e;throw new FS.ErrnoError(ERRNO_CODES[e.code])}},close:function(stream){try{stream.node.mount.opts.fs.close(stream.nfd)}catch(e){if(!e.code)throw e;throw new FS.ErrnoError(ERRNO_CODES[e.code])}},read:function(stream,buffer,offset,length,position){try{return stream.node.mount.opts.fs.read(stream.nfd,buffer,offset,length,position)}catch(e){if(!e.code)throw e;throw new FS.ErrnoError(ERRNO_CODES[e.code])}},write:function(stream,buffer,offset,length,position){try{return stream.node.mount.opts.fs.write(stream.nfd,buffer,offset,length,position)}catch(e){if(!e.code)throw e;throw new FS.ErrnoError(ERRNO_CODES[e.code])}},llseek:function(stream,offset,whence){var position=offset;if(whence===1){position+=stream.position}else if(whence===2){if(FS.isFile(stream.node.mode)){try{var stat=stream.node.node_ops.getattr(stream.node);position+=stat.size}catch(e){throw new FS.ErrnoError(ERRNO_CODES[e.code])}}}if(position<0){throw new FS.ErrnoError(ERRNO_CODES.EINVAL)}return position}}};Module["PROXYFS"]=PROXYFS;var LZ4={DIR_MODE:16895,FILE_MODE:33279,CHUNK_SIZE:-1,codec:null,init:function(){if(LZ4.codec)return;LZ4.codec=function(){var MiniLZ4=function(){var exports={};exports.uncompress=function(input,output,sIdx,eIdx){sIdx=sIdx||0;eIdx=eIdx||input.length-sIdx;for(var i=sIdx,n=eIdx,j=0;i>4;if(literals_length>0){var l=literals_length+240;while(l===255){l=input[i++];literals_length+=l}var end=i+literals_length;while(ij)return-(i-2);var match_length=token&15;var l=match_length+240;while(l===255){l=input[i++];match_length+=l}var pos=j-offset;var end=j+match_length+4;while(jmaxInputSize?0:isize+isize/255+16|0};exports.compress=function(src,dst,sIdx,eIdx){hashTable.set(empty);return compressBlock(src,dst,0,sIdx||0,eIdx||dst.length)};function compressBlock(src,dst,pos,sIdx,eIdx){var dpos=sIdx;var dlen=eIdx-sIdx;var anchor=0;if(src.length>=maxInputSize)throw new Error("input too large");if(src.length>mfLimit){var n=exports.compressBound(src.length);if(dlen>>hashShift;var 
ref=hashTable[hash]-1;hashTable[hash]=pos+1;if(ref<0||pos-ref>>>16>0||((src[ref+3]<<8|src[ref+2])!=sequenceHighBits||(src[ref+1]<<8|src[ref])!=sequenceLowBits)){step=findMatchAttempts++>>skipStrength;pos+=step;continue}findMatchAttempts=(1<=runMask){dst[dpos++]=(runMask<254;len-=255){dst[dpos++]=255}dst[dpos++]=len}else{dst[dpos++]=(literals_length<>8;if(match_length>=mlMask){match_length-=mlMask;while(match_length>=255){match_length-=255;dst[dpos++]=255}dst[dpos++]=match_length}anchor=pos}}if(anchor==0)return 0;literals_length=src.length-anchor;if(literals_length>=runMask){dst[dpos++]=runMask<254;ln-=255){dst[dpos++]=255}dst[dpos++]=ln}else{dst[dpos++]=literals_length<0){assert(compressedSize<=bound);compressed=compressed.subarray(0,compressedSize);compressedChunks.push(compressed);total+=compressedSize;successes.push(1);if(verify){var back=exports.uncompress(compressed,temp);assert(back===chunk.length,[back,chunk.length]);for(var i=0;i=0){currChunk=compressedData["cachedChunks"][found]}else{compressedData["cachedIndexes"].pop();compressedData["cachedIndexes"].unshift(chunkIndex);currChunk=compressedData["cachedChunks"].pop();compressedData["cachedChunks"].unshift(currChunk);if(compressedData["debug"]){out("decompressing chunk "+chunkIndex);Module["decompressedChunks"]=(Module["decompressedChunks"]||0)+1}var compressed=compressedData["data"].subarray(compressedStart,compressedStart+compressedSize);var originalSize=LZ4.codec.uncompress(compressed,currChunk);if(chunkIndex8){throw new FS.ErrnoError(32)}var parts=PATH.normalizeArray(path.split("/").filter(function(p){return!!p}),false);var current=FS.root;var current_path="/";for(var i=0;i40){throw new FS.ErrnoError(32)}}}}return{path:current_path,node:current}},getPath:function(node){var path;while(true){if(FS.isRoot(node)){var mount=node.mount.mountpoint;if(!path)return mount;return mount[mount.length-1]!=="/"?mount+"/"+path:mount+path}path=path?node.name+"/"+path:node.name;node=node.parent}},hashName:function(parentid,name){var hash=0;for(var i=0;i>>0)%FS.nameTable.length},hashAddNode:function(node){var hash=FS.hashName(node.parent.id,node.name);node.name_next=FS.nameTable[hash];FS.nameTable[hash]=node},hashRemoveNode:function(node){var hash=FS.hashName(node.parent.id,node.name);if(FS.nameTable[hash]===node){FS.nameTable[hash]=node.name_next}else{var current=FS.nameTable[hash];while(current){if(current.name_next===node){current.name_next=node.name_next;break}current=current.name_next}}},lookupNode:function(parent,name){var errCode=FS.mayLookup(parent);if(errCode){throw new FS.ErrnoError(errCode,parent)}var hash=FS.hashName(parent.id,name);for(var node=FS.nameTable[hash];node;node=node.name_next){var nodeName=node.name;if(node.parent.id===parent.id&&nodeName===name){return node}}return FS.lookup(parent,name)},createNode:function(parent,name,mode,rdev){var node=new FS.FSNode(parent,name,mode,rdev);FS.hashAddNode(node);return node},destroyNode:function(node){FS.hashRemoveNode(node)},isRoot:function(node){return node===node.parent},isMountpoint:function(node){return!!node.mounted},isFile:function(mode){return(mode&61440)===32768},isDir:function(mode){return(mode&61440)===16384},isLink:function(mode){return(mode&61440)===40960},isChrdev:function(mode){return(mode&61440)===8192},isBlkdev:function(mode){return(mode&61440)===24576},isFIFO:function(mode){return(mode&61440)===4096},isSocket:function(mode){return(mode&49152)===49152},flagModes:{"r":0,"r+":2,"w":577,"w+":578,"a":1089,"a+":1090},modeStringToFlags:function(str){var 
flags=FS.flagModes[str];if(typeof flags==="undefined"){throw new Error("Unknown file open mode: "+str)}return flags},flagsToPermissionString:function(flag){var perms=["r","w","rw"][flag&3];if(flag&512){perms+="w"}return perms},nodePermissions:function(node,perms){if(FS.ignorePermissions){return 0}if(perms.includes("r")&&!(node.mode&292)){return 2}else if(perms.includes("w")&&!(node.mode&146)){return 2}else if(perms.includes("x")&&!(node.mode&73)){return 2}return 0},mayLookup:function(dir){var errCode=FS.nodePermissions(dir,"x");if(errCode)return errCode;if(!dir.node_ops.lookup)return 2;return 0},mayCreate:function(dir,name){try{var node=FS.lookupNode(dir,name);return 20}catch(e){}return FS.nodePermissions(dir,"wx")},mayDelete:function(dir,name,isdir){var node;try{node=FS.lookupNode(dir,name)}catch(e){return e.errno}var errCode=FS.nodePermissions(dir,"wx");if(errCode){return errCode}if(isdir){if(!FS.isDir(node.mode)){return 54}if(FS.isRoot(node)||FS.getPath(node)===FS.cwd()){return 10}}else{if(FS.isDir(node.mode)){return 31}}return 0},mayOpen:function(node,flags){if(!node){return 44}if(FS.isLink(node.mode)){return 32}else if(FS.isDir(node.mode)){if(FS.flagsToPermissionString(flags)!=="r"||flags&512){return 31}}return FS.nodePermissions(node,FS.flagsToPermissionString(flags))},MAX_OPEN_FDS:4096,nextfd:function(fd_start,fd_end){fd_start=fd_start||0;fd_end=fd_end||FS.MAX_OPEN_FDS;for(var fd=fd_start;fd<=fd_end;fd++){if(!FS.streams[fd]){return fd}}throw new FS.ErrnoError(33)},getStream:function(fd){return FS.streams[fd]},createStream:function(stream,fd_start,fd_end){if(!FS.FSStream){FS.FSStream=function(){};FS.FSStream.prototype={object:{get:function(){return this.node},set:function(val){this.node=val}},isRead:{get:function(){return(this.flags&2097155)!==1}},isWrite:{get:function(){return(this.flags&2097155)!==0}},isAppend:{get:function(){return this.flags&1024}},get flags(){return this.shared.flags},set flags(value){this.shared.flags=value},get position(){return this.shared.position},set position(value){this.shared.position=value}}}var newStream=new FS.FSStream;newStream.shared={};for(var p in stream){newStream[p]=stream[p]}stream=newStream;var fd=FS.nextfd(fd_start,fd_end);stream.fd=fd;FS.streams[fd]=stream;return stream},closeStream:function(fd){FS.streams[fd]=null},chrdev_stream_ops:{open:function(stream){var device=FS.getDevice(stream.node.rdev);stream.stream_ops=device.stream_ops;if(stream.stream_ops.open){stream.stream_ops.open(stream)}},llseek:function(){throw new FS.ErrnoError(70)}},major:function(dev){return dev>>8},minor:function(dev){return dev&255},makedev:function(ma,mi){return ma<<8|mi},registerDevice:function(dev,ops){FS.devices[dev]={stream_ops:ops}},getDevice:function(dev){return FS.devices[dev]},getMounts:function(mount){var mounts=[];var check=[mount];while(check.length){var m=check.pop();mounts.push(m);check.push.apply(check,m.mounts)}return mounts},syncfs:function(populate,callback){if(typeof populate==="function"){callback=populate;populate=false}FS.syncFSRequests++;if(FS.syncFSRequests>1){err("warning: "+FS.syncFSRequests+" FS.syncfs operations in flight at once, probably just doing extra work")}var mounts=FS.getMounts(FS.root.mount);var completed=0;function doCallback(errCode){FS.syncFSRequests--;return callback(errCode)}function done(errCode){if(errCode){if(!done.errored){done.errored=true;return doCallback(errCode)}return}if(++completed>=mounts.length){doCallback(null)}}mounts.forEach(function(mount){if(!mount.type.syncfs){return 
done(null)}mount.type.syncfs(mount,populate,done)})},mount:function(type,opts,mountpoint){var root=mountpoint==="/";var pseudo=!mountpoint;var node;if(root&&FS.root){throw new FS.ErrnoError(10)}else if(!root&&!pseudo){var lookup=FS.lookupPath(mountpoint,{follow_mount:false});mountpoint=lookup.path;node=lookup.node;if(FS.isMountpoint(node)){throw new FS.ErrnoError(10)}if(!FS.isDir(node.mode)){throw new FS.ErrnoError(54)}}var mount={type:type,opts:opts,mountpoint:mountpoint,mounts:[]};var mountRoot=type.mount(mount);mountRoot.mount=mount;mount.root=mountRoot;if(root){FS.root=mountRoot}else if(node){node.mounted=mount;if(node.mount){node.mount.mounts.push(mount)}}return mountRoot},unmount:function(mountpoint){var lookup=FS.lookupPath(mountpoint,{follow_mount:false});if(!FS.isMountpoint(lookup.node)){throw new FS.ErrnoError(28)}var node=lookup.node;var mount=node.mounted;var mounts=FS.getMounts(mount);Object.keys(FS.nameTable).forEach(function(hash){var current=FS.nameTable[hash];while(current){var next=current.name_next;if(mounts.includes(current.mount)){FS.destroyNode(current)}current=next}});node.mounted=null;var idx=node.mount.mounts.indexOf(mount);node.mount.mounts.splice(idx,1)},lookup:function(parent,name){return parent.node_ops.lookup(parent,name)},mknod:function(path,mode,dev){var lookup=FS.lookupPath(path,{parent:true});var parent=lookup.node;var name=PATH.basename(path);if(!name||name==="."||name===".."){throw new FS.ErrnoError(28)}var errCode=FS.mayCreate(parent,name);if(errCode){throw new FS.ErrnoError(errCode)}if(!parent.node_ops.mknod){throw new FS.ErrnoError(63)}return parent.node_ops.mknod(parent,name,mode,dev)},create:function(path,mode){mode=mode!==undefined?mode:438;mode&=4095;mode|=32768;return FS.mknod(path,mode,0)},mkdir:function(path,mode){mode=mode!==undefined?mode:511;mode&=511|512;mode|=16384;return FS.mknod(path,mode,0)},mkdirTree:function(path,mode){var dirs=path.split("/");var d="";for(var i=0;ithis.length-1||idx<0){return undefined}var chunkOffset=idx%this.chunkSize;var chunkNum=idx/this.chunkSize|0;return this.getter(chunkNum)[chunkOffset]};LazyUint8Array.prototype.setDataGetter=function LazyUint8Array_setDataGetter(getter){this.getter=getter};LazyUint8Array.prototype.cacheLength=function LazyUint8Array_cacheLength(){var xhr=new XMLHttpRequest;xhr.open("HEAD",url,false);xhr.send(null);if(!(xhr.status>=200&&xhr.status<300||xhr.status===304))throw new Error("Couldn't load "+url+". Status: "+xhr.status);var datalength=Number(xhr.getResponseHeader("Content-length"));var header;var hasByteServing=(header=xhr.getResponseHeader("Accept-Ranges"))&&header==="bytes";var usesGzip=(header=xhr.getResponseHeader("Content-Encoding"))&&header==="gzip";var chunkSize=1024*1024;if(!hasByteServing)chunkSize=datalength;var doXHR=function(from,to){if(from>to)throw new Error("invalid range ("+from+", "+to+") or no bytes requested!");if(to>datalength-1)throw new Error("only "+datalength+" bytes available! programmer error!");var xhr=new XMLHttpRequest;xhr.open("GET",url,false);if(datalength!==chunkSize)xhr.setRequestHeader("Range","bytes="+from+"-"+to);if(typeof Uint8Array!="undefined")xhr.responseType="arraybuffer";if(xhr.overrideMimeType){xhr.overrideMimeType("text/plain; charset=x-user-defined")}xhr.send(null);if(!(xhr.status>=200&&xhr.status<300||xhr.status===304))throw new Error("Couldn't load "+url+". 
Status: "+xhr.status);if(xhr.response!==undefined){return new Uint8Array(xhr.response||[])}else{return intArrayFromString(xhr.responseText||"",true)}};var lazyArray=this;lazyArray.setDataGetter(function(chunkNum){var start=chunkNum*chunkSize;var end=(chunkNum+1)*chunkSize-1;end=Math.min(end,datalength-1);if(typeof lazyArray.chunks[chunkNum]==="undefined"){lazyArray.chunks[chunkNum]=doXHR(start,end)}if(typeof lazyArray.chunks[chunkNum]==="undefined")throw new Error("doXHR failed!");return lazyArray.chunks[chunkNum]});if(usesGzip||!datalength){chunkSize=datalength=1;datalength=this.getter(0).length;chunkSize=datalength;out("LazyFiles on gzip forces download of the whole file when length is accessed")}this._length=datalength;this._chunkSize=chunkSize;this.lengthKnown=true};if(typeof XMLHttpRequest!=="undefined"){if(!ENVIRONMENT_IS_WORKER)throw"Cannot do synchronous binary XHRs outside webworkers in modern browsers. Use --embed-file or --preload-file in emcc";var lazyArray=new LazyUint8Array;Object.defineProperties(lazyArray,{length:{get:function(){if(!this.lengthKnown){this.cacheLength()}return this._length}},chunkSize:{get:function(){if(!this.lengthKnown){this.cacheLength()}return this._chunkSize}}});var properties={isDevice:false,contents:lazyArray}}else{var properties={isDevice:false,url:url}}var node=FS.createFile(parent,name,properties,canRead,canWrite);if(properties.contents){node.contents=properties.contents}else if(properties.url){node.contents=null;node.url=properties.url}Object.defineProperties(node,{usedBytes:{get:function(){return this.contents.length}}});var stream_ops={};var keys=Object.keys(node.stream_ops);keys.forEach(function(key){var fn=node.stream_ops[key];stream_ops[key]=function forceLoadLazyFile(){FS.forceLoadFile(node);return fn.apply(null,arguments)}});stream_ops.read=function stream_ops_read(stream,buffer,offset,length,position){FS.forceLoadFile(node);var contents=stream.node.contents;if(position>=contents.length)return 0;var size=Math.min(contents.length-position,length);if(contents.slice){for(var i=0;i>2]=stat.dev;HEAP32[buf+4>>2]=0;HEAP32[buf+8>>2]=stat.ino;HEAP32[buf+12>>2]=stat.mode;HEAP32[buf+16>>2]=stat.nlink;HEAP32[buf+20>>2]=stat.uid;HEAP32[buf+24>>2]=stat.gid;HEAP32[buf+28>>2]=stat.rdev;HEAP32[buf+32>>2]=0;tempI64=[stat.size>>>0,(tempDouble=stat.size,+Math.abs(tempDouble)>=1?tempDouble>0?(Math.min(+Math.floor(tempDouble/4294967296),4294967295)|0)>>>0:~~+Math.ceil((tempDouble-+(~~tempDouble>>>0))/4294967296)>>>0:0)],HEAP32[buf+40>>2]=tempI64[0],HEAP32[buf+44>>2]=tempI64[1];HEAP32[buf+48>>2]=4096;HEAP32[buf+52>>2]=stat.blocks;HEAP32[buf+56>>2]=stat.atime.getTime()/1e3|0;HEAP32[buf+60>>2]=0;HEAP32[buf+64>>2]=stat.mtime.getTime()/1e3|0;HEAP32[buf+68>>2]=0;HEAP32[buf+72>>2]=stat.ctime.getTime()/1e3|0;HEAP32[buf+76>>2]=0;tempI64=[stat.ino>>>0,(tempDouble=stat.ino,+Math.abs(tempDouble)>=1?tempDouble>0?(Math.min(+Math.floor(tempDouble/4294967296),4294967295)|0)>>>0:~~+Math.ceil((tempDouble-+(~~tempDouble>>>0))/4294967296)>>>0:0)],HEAP32[buf+80>>2]=tempI64[0],HEAP32[buf+84>>2]=tempI64[1];return 0},doMsync:function(addr,stream,len,flags,offset){var buffer=HEAPU8.slice(addr,addr+len);FS.msync(stream,buffer,offset,len,flags)},doMkdir:function(path,mode){path=PATH.normalize(path);if(path[path.length-1]==="/")path=path.substr(0,path.length-1);FS.mkdir(path,mode,0);return 0},doMknod:function(path,mode,dev){switch(mode&61440){case 32768:case 8192:case 24576:case 4096:case 49152:break;default:return-28}FS.mknod(path,mode,dev);return 
0},doReadlink:function(path,buf,bufsize){if(bufsize<=0)return-28;var ret=FS.readlink(path);var len=Math.min(bufsize,lengthBytesUTF8(ret));var endChar=HEAP8[buf+len];stringToUTF8(ret,buf,bufsize+1);HEAP8[buf+len]=endChar;return len},doAccess:function(path,amode){if(amode&~7){return-28}var node;var lookup=FS.lookupPath(path,{follow:true});node=lookup.node;if(!node){return-44}var perms="";if(amode&4)perms+="r";if(amode&2)perms+="w";if(amode&1)perms+="x";if(perms&&FS.nodePermissions(node,perms)){return-2}return 0},doDup:function(stream,suggestFD){var suggest=FS.getStream(suggestFD);if(suggest)FS.close(suggest);return FS.createStream(stream,suggestFD,suggestFD).fd},doReadv:function(stream,iov,iovcnt,offset){var ret=0;for(var i=0;i>2];var len=HEAP32[iov+(i*8+4)>>2];var curr=FS.read(stream,HEAP8,ptr,len,offset);if(curr<0)return-1;ret+=curr;if(curr>2];var len=HEAP32[iov+(i*8+4)>>2];var curr=FS.write(stream,HEAP8,ptr,len,offset);if(curr<0)return-1;ret+=curr}return ret},varargs:undefined,get:function(){SYSCALLS.varargs+=4;var ret=HEAP32[SYSCALLS.varargs-4>>2];return ret},getStr:function(ptr){var ret=UTF8ToString(ptr);return ret},getStreamFromFD:function(fd){var stream=FS.getStream(fd);if(!stream)throw new FS.ErrnoError(8);return stream},get64:function(low,high){return low}};Module["SYSCALLS"]=SYSCALLS;function ___sys__newselect(nfds,readfds,writefds,exceptfds,timeout){try{var total=0;var srcReadLow=readfds?HEAP32[readfds>>2]:0,srcReadHigh=readfds?HEAP32[readfds+4>>2]:0;var srcWriteLow=writefds?HEAP32[writefds>>2]:0,srcWriteHigh=writefds?HEAP32[writefds+4>>2]:0;var srcExceptLow=exceptfds?HEAP32[exceptfds>>2]:0,srcExceptHigh=exceptfds?HEAP32[exceptfds+4>>2]:0;var dstReadLow=0,dstReadHigh=0;var dstWriteLow=0,dstWriteHigh=0;var dstExceptLow=0,dstExceptHigh=0;var allLow=(readfds?HEAP32[readfds>>2]:0)|(writefds?HEAP32[writefds>>2]:0)|(exceptfds?HEAP32[exceptfds>>2]:0);var allHigh=(readfds?HEAP32[readfds+4>>2]:0)|(writefds?HEAP32[writefds+4>>2]:0)|(exceptfds?HEAP32[exceptfds+4>>2]:0);var check=function(fd,low,high,val){return fd<32?low&val:high&val};for(var fd=0;fd>2]=dstReadLow;HEAP32[readfds+4>>2]=dstReadHigh}if(writefds){HEAP32[writefds>>2]=dstWriteLow;HEAP32[writefds+4>>2]=dstWriteHigh}if(exceptfds){HEAP32[exceptfds>>2]=dstExceptLow;HEAP32[exceptfds+4>>2]=dstExceptHigh}return total}catch(e){if(typeof FS==="undefined"||!(e instanceof FS.ErrnoError))abort(e);return-e.errno}}Module["___sys__newselect"]=___sys__newselect;var SOCKFS={mount:function(mount){Module["websocket"]=Module["websocket"]&&"object"===typeof Module["websocket"]?Module["websocket"]:{};Module["websocket"]._callbacks={};Module["websocket"]["on"]=function(event,callback){if("function"===typeof callback){this._callbacks[event]=callback}return this};Module["websocket"].emit=function(event,param){if("function"===typeof this._callbacks[event]){this._callbacks[event].call(this,param)}};return FS.createNode(null,"/",16384|511,0)},createSocket:function(family,type,protocol){type&=~526336;var streaming=type==1;if(protocol){assert(streaming==(protocol==6))}var sock={family:family,type:type,protocol:protocol,server:null,error:null,peers:{},pending:[],recv_queue:[],sock_ops:SOCKFS.websocket_sock_ops};var name=SOCKFS.nextname();var node=FS.createNode(SOCKFS.root,name,49152,0);node.sock=sock;var stream=FS.createStream({path:name,node:node,flags:2,seekable:false,stream_ops:SOCKFS.stream_ops});sock.stream=stream;return sock},getSocket:function(fd){var stream=FS.getStream(fd);if(!stream||!FS.isSocket(stream.node.mode)){return null}return 
stream.node.sock},stream_ops:{poll:function(stream){var sock=stream.node.sock;return sock.sock_ops.poll(sock)},ioctl:function(stream,request,varargs){var sock=stream.node.sock;return sock.sock_ops.ioctl(sock,request,varargs)},read:function(stream,buffer,offset,length,position){var sock=stream.node.sock;var msg=sock.sock_ops.recvmsg(sock,length);if(!msg){return 0}buffer.set(msg.buffer,offset);return msg.buffer.length},write:function(stream,buffer,offset,length,position){var sock=stream.node.sock;return sock.sock_ops.sendmsg(sock,buffer,offset,length)},close:function(stream){var sock=stream.node.sock;sock.sock_ops.close(sock)}},nextname:function(){if(!SOCKFS.nextname.current){SOCKFS.nextname.current=0}return"socket["+SOCKFS.nextname.current+++"]"},websocket_sock_ops:{createPeer:function(sock,addr,port){var ws;if(typeof addr==="object"){ws=addr;addr=null;port=null}if(ws){if(ws._socket){addr=ws._socket.remoteAddress;port=ws._socket.remotePort}else{var result=/ws[s]?:\/\/([^:]+):(\d+)/.exec(ws.url);if(!result){throw new Error("WebSocket URL must be in the format ws(s)://address:port")}addr=result[1];port=parseInt(result[2],10)}}else{try{var runtimeConfig=Module["websocket"]&&"object"===typeof Module["websocket"];var url="ws:#".replace("#","//");if(runtimeConfig){if("string"===typeof Module["websocket"]["url"]){url=Module["websocket"]["url"]}}if(url==="ws://"||url==="wss://"){var parts=addr.split("/");url=url+parts[0]+":"+port+"/"+parts.slice(1).join("/")}var subProtocols="binary";if(runtimeConfig){if("string"===typeof Module["websocket"]["subprotocol"]){subProtocols=Module["websocket"]["subprotocol"]}}var opts=undefined;if(subProtocols!=="null"){subProtocols=subProtocols.replace(/^ +| +$/g,"").split(/ *, */);opts=ENVIRONMENT_IS_NODE?{"protocol":subProtocols.toString()}:subProtocols}if(runtimeConfig&&null===Module["websocket"]["subprotocol"]){subProtocols="null";opts=undefined}var WebSocketConstructor;if(ENVIRONMENT_IS_NODE){WebSocketConstructor=require("ws")}else{WebSocketConstructor=WebSocket}ws=new WebSocketConstructor(url,opts);ws.binaryType="arraybuffer"}catch(e){throw new FS.ErrnoError(23)}}var peer={addr:addr,port:port,socket:ws,dgram_send_queue:[]};SOCKFS.websocket_sock_ops.addPeer(sock,peer);SOCKFS.websocket_sock_ops.handlePeerEvents(sock,peer);if(sock.type===2&&typeof sock.sport!=="undefined"){peer.dgram_send_queue.push(new Uint8Array([255,255,255,255,"p".charCodeAt(0),"o".charCodeAt(0),"r".charCodeAt(0),"t".charCodeAt(0),(sock.sport&65280)>>8,sock.sport&255]))}return peer},getPeer:function(sock,addr,port){return sock.peers[addr+":"+port]},addPeer:function(sock,peer){sock.peers[peer.addr+":"+peer.port]=peer},removePeer:function(sock,peer){delete sock.peers[peer.addr+":"+peer.port]},handlePeerEvents:function(sock,peer){var first=true;var handleOpen=function(){Module["websocket"].emit("open",sock.stream.fd);try{var queued=peer.dgram_send_queue.shift();while(queued){peer.socket.send(queued);queued=peer.dgram_send_queue.shift()}}catch(e){peer.socket.close()}};function handleMessage(data){if(typeof data==="string"){var encoder=new TextEncoder;data=encoder.encode(data)}else{assert(data.byteLength!==undefined);if(data.byteLength==0){return}else{data=new Uint8Array(data)}}var wasfirst=first;first=false;if(wasfirst&&data.length===10&&data[0]===255&&data[1]===255&&data[2]===255&&data[3]===255&&data[4]==="p".charCodeAt(0)&&data[5]==="o".charCodeAt(0)&&data[6]==="r".charCodeAt(0)&&data[7]==="t".charCodeAt(0)){var 
newport=data[8]<<8|data[9];SOCKFS.websocket_sock_ops.removePeer(sock,peer);peer.port=newport;SOCKFS.websocket_sock_ops.addPeer(sock,peer);return}sock.recv_queue.push({addr:peer.addr,port:peer.port,data:data});Module["websocket"].emit("message",sock.stream.fd)}if(ENVIRONMENT_IS_NODE){peer.socket.on("open",handleOpen);peer.socket.on("message",function(data,flags){if(!flags.binary){return}handleMessage(new Uint8Array(data).buffer)});peer.socket.on("close",function(){Module["websocket"].emit("close",sock.stream.fd)});peer.socket.on("error",function(error){sock.error=14;Module["websocket"].emit("error",[sock.stream.fd,sock.error,"ECONNREFUSED: Connection refused"])})}else{peer.socket.onopen=handleOpen;peer.socket.onclose=function(){Module["websocket"].emit("close",sock.stream.fd)};peer.socket.onmessage=function peer_socket_onmessage(event){handleMessage(event.data)};peer.socket.onerror=function(error){sock.error=14;Module["websocket"].emit("error",[sock.stream.fd,sock.error,"ECONNREFUSED: Connection refused"])}}},poll:function(sock){if(sock.type===1&&sock.server){return sock.pending.length?64|1:0}var mask=0;var dest=sock.type===1?SOCKFS.websocket_sock_ops.getPeer(sock,sock.daddr,sock.dport):null;if(sock.recv_queue.length||!dest||dest&&dest.socket.readyState===dest.socket.CLOSING||dest&&dest.socket.readyState===dest.socket.CLOSED){mask|=64|1}if(!dest||dest&&dest.socket.readyState===dest.socket.OPEN){mask|=4}if(dest&&dest.socket.readyState===dest.socket.CLOSING||dest&&dest.socket.readyState===dest.socket.CLOSED){mask|=16}return mask},ioctl:function(sock,request,arg){switch(request){case 21531:var bytes=0;if(sock.recv_queue.length){bytes=sock.recv_queue[0].data.length}HEAP32[arg>>2]=bytes;return 0;default:return 28}},close:function(sock){if(sock.server){try{sock.server.close()}catch(e){}sock.server=null}var peers=Object.keys(sock.peers);for(var i=0;i>>0}Module["inetPton4"]=inetPton4;function jstoi_q(str){return parseInt(str)}Module["jstoi_q"]=jstoi_q;function inetPton6(str){var words;var w,offset,z,i;var valid6regx=/^((?=.*::)(?!.*::.+::)(::)?([\dA-F]{1,4}:(:|\b)|){5}|([\dA-F]{1,4}:){6})((([\dA-F]{1,4}((?!\3)::|:\b|$))|(?!\2\3)){2}|(((2[0-4]|1\d|[1-9])?\d|25[0-5])\.?\b){4})$/i;var parts=[];if(!valid6regx.test(str)){return null}if(str==="::"){return[0,0,0,0,0,0,0,0]}if(str.startsWith("::")){str=str.replace("::","Z:")}else{str=str.replace("::",":Z:")}if(str.indexOf(".")>0){str=str.replace(new RegExp("[.]","g"),":");words=str.split(":");words[words.length-4]=jstoi_q(words[words.length-4])+jstoi_q(words[words.length-3])*256;words[words.length-3]=jstoi_q(words[words.length-2])+jstoi_q(words[words.length-1])*256;words=words.slice(0,words.length-2)}else{words=str.split(":")}offset=0;z=0;for(w=0;w>2]=16}HEAP16[sa>>1]=family;HEAP32[sa+4>>2]=addr;HEAP16[sa+2>>1]=_htons(port);break;case 10:addr=inetPton6(addr);zeroMemory(sa,28);if(addrlen){HEAP32[addrlen>>2]=28}HEAP32[sa>>2]=family;HEAP32[sa+8>>2]=addr[0];HEAP32[sa+12>>2]=addr[1];HEAP32[sa+16>>2]=addr[2];HEAP32[sa+20>>2]=addr[3];HEAP16[sa+2>>1]=_htons(port);break;default:return 5}return 0}Module["writeSockaddr"]=writeSockaddr;var DNS={address_map:{id:1,addrs:{},names:{}},lookup_name:function(name){var res=inetPton4(name);if(res!==null){return name}res=inetPton6(name);if(res!==null){return name}var addr;if(DNS.address_map.addrs[name]){addr=DNS.address_map.addrs[name]}else{var id=DNS.address_map.id++;assert(id<65535,"exceeded max address mappings of 
65535");addr="172.29."+(id&255)+"."+(id&65280);DNS.address_map.names[addr]=name;DNS.address_map.addrs[name]=addr}return addr},lookup_addr:function(addr){if(DNS.address_map.names[addr]){return DNS.address_map.names[addr]}return null}};Module["DNS"]=DNS;function ___sys_accept4(fd,addr,addrlen,flags){try{var sock=getSocketFromFD(fd);var newsock=sock.sock_ops.accept(sock);if(addr){var errno=writeSockaddr(addr,newsock.family,DNS.lookup_name(newsock.daddr),newsock.dport,addrlen)}return newsock.stream.fd}catch(e){if(typeof FS==="undefined"||!(e instanceof FS.ErrnoError))abort(e);return-e.errno}}Module["___sys_accept4"]=___sys_accept4;function ___sys_access(path,amode){try{path=SYSCALLS.getStr(path);return SYSCALLS.doAccess(path,amode)}catch(e){if(typeof FS==="undefined"||!(e instanceof FS.ErrnoError))abort(e);return-e.errno}}Module["___sys_access"]=___sys_access;var ___sys_acct=function(){return-52};Module["___sys_acct"]=___sys_acct;function inetNtop4(addr){return(addr&255)+"."+(addr>>8&255)+"."+(addr>>16&255)+"."+(addr>>24&255)}Module["inetNtop4"]=inetNtop4;function inetNtop6(ints){var str="";var word=0;var longest=0;var lastzero=0;var zstart=0;var len=0;var i=0;var parts=[ints[0]&65535,ints[0]>>16,ints[1]&65535,ints[1]>>16,ints[2]&65535,ints[2]>>16,ints[3]&65535,ints[3]>>16];var hasipv4=true;var v4part="";for(i=0;i<5;i++){if(parts[i]!==0){hasipv4=false;break}}if(hasipv4){v4part=inetNtop4(parts[6]|parts[7]<<16);if(parts[5]===-1){str="::ffff:";str+=v4part;return str}if(parts[5]===0){str="::";if(v4part==="0.0.0.0")v4part="";if(v4part==="0.0.0.1")v4part="1";str+=v4part;return str}}for(word=0;word<8;word++){if(parts[word]===0){if(word-lastzero>1){len=0}lastzero=word;len++}if(len>longest){longest=len;zstart=word-longest+1}}for(word=0;word<8;word++){if(longest>1){if(parts[word]===0&&word>=zstart&&word>1];var port=_ntohs(HEAPU16[sa+2>>1]);var addr;switch(family){case 2:if(salen!==16){return{errno:28}}addr=HEAP32[sa+4>>2];addr=inetNtop4(addr);break;case 10:if(salen!==28){return{errno:28}}addr=[HEAP32[sa+8>>2],HEAP32[sa+12>>2],HEAP32[sa+16>>2],HEAP32[sa+20>>2]];addr=inetNtop6(addr);break;default:return{errno:5}}return{family:family,addr:addr,port:port}}Module["readSockaddr"]=readSockaddr;function getSocketAddress(addrp,addrlen,allowNull){if(allowNull&&addrp===0)return null;var info=readSockaddr(addrp,addrlen);if(info.errno)throw new FS.ErrnoError(info.errno);info.addr=DNS.lookup_addr(info.addr)||info.addr;return info}Module["getSocketAddress"]=getSocketAddress;function ___sys_bind(fd,addr,addrlen){try{var sock=getSocketFromFD(fd);var info=getSocketAddress(addr,addrlen);sock.sock_ops.bind(sock,info.addr,info.port);return 0}catch(e){if(typeof FS==="undefined"||!(e instanceof FS.ErrnoError))abort(e);return-e.errno}}Module["___sys_bind"]=___sys_bind;function ___sys_chdir(path){try{path=SYSCALLS.getStr(path);FS.chdir(path);return 0}catch(e){if(typeof FS==="undefined"||!(e instanceof FS.ErrnoError))abort(e);return-e.errno}}Module["___sys_chdir"]=___sys_chdir;function ___sys_chmod(path,mode){try{path=SYSCALLS.getStr(path);FS.chmod(path,mode);return 0}catch(e){if(typeof FS==="undefined"||!(e instanceof FS.ErrnoError))abort(e);return-e.errno}}Module["___sys_chmod"]=___sys_chmod;function ___sys_chown32(path,owner,group){try{path=SYSCALLS.getStr(path);FS.chown(path,owner,group);return 0}catch(e){if(typeof FS==="undefined"||!(e instanceof FS.ErrnoError))abort(e);return-e.errno}}Module["___sys_chown32"]=___sys_chown32;function ___sys_connect(fd,addr,addrlen){try{var sock=getSocketFromFD(fd);var 
info=getSocketAddress(addr,addrlen);sock.sock_ops.connect(sock,info.addr,info.port);return 0}catch(e){if(typeof FS==="undefined"||!(e instanceof FS.ErrnoError))abort(e);return-e.errno}}Module["___sys_connect"]=___sys_connect;function ___sys_dup(fd){try{var old=SYSCALLS.getStreamFromFD(fd);return FS.createStream(old,0).fd}catch(e){if(typeof FS==="undefined"||!(e instanceof FS.ErrnoError))abort(e);return-e.errno}}Module["___sys_dup"]=___sys_dup;function ___sys_dup2(oldfd,suggestFD){try{var old=SYSCALLS.getStreamFromFD(oldfd);if(old.fd===suggestFD)return suggestFD;return SYSCALLS.doDup(old,suggestFD)}catch(e){if(typeof FS==="undefined"||!(e instanceof FS.ErrnoError))abort(e);return-e.errno}}Module["___sys_dup2"]=___sys_dup2;function ___sys_dup3(fd,suggestFD,flags){try{var old=SYSCALLS.getStreamFromFD(fd);if(old.fd===suggestFD)return-28;return SYSCALLS.doDup(old,suggestFD)}catch(e){if(typeof FS==="undefined"||!(e instanceof FS.ErrnoError))abort(e);return-e.errno}}Module["___sys_dup3"]=___sys_dup3;function ___sys_fadvise64_64(fd,offset,len,advice){return 0}Module["___sys_fadvise64_64"]=___sys_fadvise64_64;function ___sys_fallocate(fd,mode,off_low,off_high,len_low,len_high){try{var stream=SYSCALLS.getStreamFromFD(fd);var offset=SYSCALLS.get64(off_low,off_high);var len=SYSCALLS.get64(len_low,len_high);FS.allocate(stream,offset,len);return 0}catch(e){if(typeof FS==="undefined"||!(e instanceof FS.ErrnoError))abort(e);return-e.errno}}Module["___sys_fallocate"]=___sys_fallocate;function ___sys_fchdir(fd){try{var stream=SYSCALLS.getStreamFromFD(fd);FS.chdir(stream.path);return 0}catch(e){if(typeof FS==="undefined"||!(e instanceof FS.ErrnoError))abort(e);return-e.errno}}Module["___sys_fchdir"]=___sys_fchdir;function ___sys_fchmod(fd,mode){try{FS.fchmod(fd,mode);return 0}catch(e){if(typeof FS==="undefined"||!(e instanceof FS.ErrnoError))abort(e);return-e.errno}}Module["___sys_fchmod"]=___sys_fchmod;function ___sys_fchmodat(dirfd,path,mode,varargs){SYSCALLS.varargs=varargs;try{path=SYSCALLS.getStr(path);path=SYSCALLS.calculateAt(dirfd,path);FS.chmod(path,mode);return 0}catch(e){if(typeof FS==="undefined"||!(e instanceof FS.ErrnoError))abort(e);return-e.errno}}Module["___sys_fchmodat"]=___sys_fchmodat;function ___sys_fchown32(fd,owner,group){try{FS.fchown(fd,owner,group);return 0}catch(e){if(typeof FS==="undefined"||!(e instanceof FS.ErrnoError))abort(e);return-e.errno}}Module["___sys_fchown32"]=___sys_fchown32;function ___sys_fchownat(dirfd,path,owner,group,flags){try{path=SYSCALLS.getStr(path);path=SYSCALLS.calculateAt(dirfd,path);FS.chown(path,owner,group);return 0}catch(e){if(typeof FS==="undefined"||!(e instanceof FS.ErrnoError))abort(e);return-e.errno}}Module["___sys_fchownat"]=___sys_fchownat;function ___sys_fcntl64(fd,cmd,varargs){SYSCALLS.varargs=varargs;try{var stream=SYSCALLS.getStreamFromFD(fd);switch(cmd){case 0:{var arg=SYSCALLS.get();if(arg<0){return-28}var newStream;newStream=FS.createStream(stream,arg);return newStream.fd}case 1:case 2:return 0;case 3:return stream.flags;case 4:{var arg=SYSCALLS.get();stream.flags|=arg;return 0}case 12:{var arg=SYSCALLS.get();var offset=0;HEAP16[arg+offset>>1]=2;return 0}case 13:case 14:return 0;case 16:case 8:return-28;case 9:setErrNo(28);return-1;default:{return-28}}}catch(e){if(typeof FS==="undefined"||!(e instanceof FS.ErrnoError))abort(e);return-e.errno}}Module["___sys_fcntl64"]=___sys_fcntl64;function ___sys_fdatasync(fd){try{var stream=SYSCALLS.getStreamFromFD(fd);return 0}catch(e){if(typeof FS==="undefined"||!(e instanceof 
FS.ErrnoError))abort(e);return-e.errno}}Module["___sys_fdatasync"]=___sys_fdatasync;function ___sys_fstat64(fd,buf){try{var stream=SYSCALLS.getStreamFromFD(fd);return SYSCALLS.doStat(FS.stat,stream.path,buf)}catch(e){if(typeof FS==="undefined"||!(e instanceof FS.ErrnoError))abort(e);return-e.errno}}Module["___sys_fstat64"]=___sys_fstat64;function ___sys_fstatat64(dirfd,path,buf,flags){try{path=SYSCALLS.getStr(path);var nofollow=flags&256;var allowEmpty=flags&4096;flags=flags&~4352;path=SYSCALLS.calculateAt(dirfd,path,allowEmpty);return SYSCALLS.doStat(nofollow?FS.lstat:FS.stat,path,buf)}catch(e){if(typeof FS==="undefined"||!(e instanceof FS.ErrnoError))abort(e);return-e.errno}}Module["___sys_fstatat64"]=___sys_fstatat64;function ___sys_statfs64(path,size,buf){try{path=SYSCALLS.getStr(path);HEAP32[buf+4>>2]=4096;HEAP32[buf+40>>2]=4096;HEAP32[buf+8>>2]=1e6;HEAP32[buf+12>>2]=5e5;HEAP32[buf+16>>2]=5e5;HEAP32[buf+20>>2]=FS.nextInode;HEAP32[buf+24>>2]=1e6;HEAP32[buf+28>>2]=42;HEAP32[buf+44>>2]=2;HEAP32[buf+36>>2]=255;return 0}catch(e){if(typeof FS==="undefined"||!(e instanceof FS.ErrnoError))abort(e);return-e.errno}}Module["___sys_statfs64"]=___sys_statfs64;function ___sys_fstatfs64(fd,size,buf){try{var stream=SYSCALLS.getStreamFromFD(fd);return ___sys_statfs64(0,size,buf)}catch(e){if(typeof FS==="undefined"||!(e instanceof FS.ErrnoError))abort(e);return-e.errno}}Module["___sys_fstatfs64"]=___sys_fstatfs64;function ___sys_ftruncate64(fd,zero,low,high){try{var length=SYSCALLS.get64(low,high);FS.ftruncate(fd,length);return 0}catch(e){if(typeof FS==="undefined"||!(e instanceof FS.ErrnoError))abort(e);return-e.errno}}Module["___sys_ftruncate64"]=___sys_ftruncate64;function ___sys_getcwd(buf,size){try{if(size===0)return-28;var cwd=FS.cwd();var cwdLengthInBytes=lengthBytesUTF8(cwd);if(size>>0,(tempDouble=id,+Math.abs(tempDouble)>=1?tempDouble>0?(Math.min(+Math.floor(tempDouble/4294967296),4294967295)|0)>>>0:~~+Math.ceil((tempDouble-+(~~tempDouble>>>0))/4294967296)>>>0:0)],HEAP32[dirp+pos>>2]=tempI64[0],HEAP32[dirp+pos+4>>2]=tempI64[1];tempI64=[(idx+1)*struct_size>>>0,(tempDouble=(idx+1)*struct_size,+Math.abs(tempDouble)>=1?tempDouble>0?(Math.min(+Math.floor(tempDouble/4294967296),4294967295)|0)>>>0:~~+Math.ceil((tempDouble-+(~~tempDouble>>>0))/4294967296)>>>0:0)],HEAP32[dirp+pos+8>>2]=tempI64[0],HEAP32[dirp+pos+12>>2]=tempI64[1];HEAP16[dirp+pos+16>>1]=280;HEAP8[dirp+pos+18>>0]=type;stringToUTF8(name,dirp+pos+19,256);pos+=struct_size;idx+=1}FS.llseek(stream,idx*struct_size,0);return pos}catch(e){if(typeof FS==="undefined"||!(e instanceof FS.ErrnoError))abort(e);return-e.errno}}Module["___sys_getdents64"]=___sys_getdents64;function ___sys_getegid32(){return 0}Module["___sys_getegid32"]=___sys_getegid32;___sys_getegid32.sig="i";function ___sys_geteuid32(){return ___sys_getegid32()}Module["___sys_geteuid32"]=___sys_geteuid32;___sys_geteuid32.sig="i";function ___sys_getgid32(){return ___sys_getegid32()}Module["___sys_getgid32"]=___sys_getgid32;___sys_getgid32.sig="i";function ___sys_getgroups32(size,list){if(size<1)return-28;HEAP32[list>>2]=0;return 1}Module["___sys_getgroups32"]=___sys_getgroups32;var ___sys_getitimer=function(){return-52};Module["___sys_getitimer"]=___sys_getitimer;function ___sys_getpeername(fd,addr,addrlen){try{var sock=getSocketFromFD(fd);if(!sock.daddr){return-53}var errno=writeSockaddr(addr,sock.family,DNS.lookup_name(sock.daddr),sock.dport,addrlen);return 0}catch(e){if(typeof FS==="undefined"||!(e instanceof 
FS.ErrnoError))abort(e);return-e.errno}}Module["___sys_getpeername"]=___sys_getpeername;function ___sys_getpgid(pid){if(pid&&pid!==42)return-71;return 42}Module["___sys_getpgid"]=___sys_getpgid;function ___sys_getpid(){return 42}Module["___sys_getpid"]=___sys_getpid;function ___sys_getppid(){return 1}Module["___sys_getppid"]=___sys_getppid;function ___sys_getpriority(){return 0}Module["___sys_getpriority"]=___sys_getpriority;function ___sys_getresgid32(ruid,euid,suid){try{HEAP32[ruid>>2]=0;HEAP32[euid>>2]=0;HEAP32[suid>>2]=0;return 0}catch(e){if(typeof FS==="undefined"||!(e instanceof FS.ErrnoError))abort(e);return-e.errno}}Module["___sys_getresgid32"]=___sys_getresgid32;___sys_getresgid32.sig="iiii";function ___sys_getresuid32(a0,a1,a2){return ___sys_getresgid32(a0,a1,a2)}Module["___sys_getresuid32"]=___sys_getresuid32;___sys_getresuid32.sig="iiii";function ___sys_getrusage(who,usage){try{zeroMemory(usage,136);HEAP32[usage>>2]=1;HEAP32[usage+4>>2]=2;HEAP32[usage+8>>2]=3;HEAP32[usage+12>>2]=4;return 0}catch(e){if(typeof FS==="undefined"||!(e instanceof FS.ErrnoError))abort(e);return-e.errno}}Module["___sys_getrusage"]=___sys_getrusage;function ___sys_getsid(pid){if(pid&&pid!==42)return-71;return 42}Module["___sys_getsid"]=___sys_getsid;function ___sys_getsockname(fd,addr,addrlen){try{err("__sys_getsockname "+fd);var sock=getSocketFromFD(fd);var errno=writeSockaddr(addr,sock.family,DNS.lookup_name(sock.saddr||"0.0.0.0"),sock.sport,addrlen);return 0}catch(e){if(typeof FS==="undefined"||!(e instanceof FS.ErrnoError))abort(e);return-e.errno}}Module["___sys_getsockname"]=___sys_getsockname;function ___sys_getsockopt(fd,level,optname,optval,optlen){try{var sock=getSocketFromFD(fd);if(level===1){if(optname===4){HEAP32[optval>>2]=sock.error;HEAP32[optlen>>2]=4;sock.error=null;return 0}}return-50}catch(e){if(typeof FS==="undefined"||!(e instanceof FS.ErrnoError))abort(e);return-e.errno}}Module["___sys_getsockopt"]=___sys_getsockopt;function ___sys_getuid32(){return ___sys_getegid32()}Module["___sys_getuid32"]=___sys_getuid32;___sys_getuid32.sig="i";function ___sys_ioctl(fd,op,varargs){SYSCALLS.varargs=varargs;try{var stream=SYSCALLS.getStreamFromFD(fd);switch(op){case 21509:case 21505:{if(!stream.tty)return-59;return 0}case 21510:case 21511:case 21512:case 21506:case 21507:case 21508:{if(!stream.tty)return-59;return 0}case 21519:{if(!stream.tty)return-59;var argp=SYSCALLS.get();HEAP32[argp>>2]=0;return 0}case 21520:{if(!stream.tty)return-59;return-28}case 21531:{var argp=SYSCALLS.get();return FS.ioctl(stream,op,argp)}case 21523:{if(!stream.tty)return-59;return 0}case 21524:{if(!stream.tty)return-59;return 0}default:abort("bad ioctl syscall "+op)}}catch(e){if(typeof FS==="undefined"||!(e instanceof FS.ErrnoError))abort(e);return-e.errno}}Module["___sys_ioctl"]=___sys_ioctl;function ___sys_lchown32(path,owner,group){try{path=SYSCALLS.getStr(path);FS.chown(path,owner,group);return 0}catch(e){if(typeof FS==="undefined"||!(e instanceof FS.ErrnoError))abort(e);return-e.errno}}Module["___sys_lchown32"]=___sys_lchown32;function ___sys_link(oldpath,newpath){return-34}Module["___sys_link"]=___sys_link;function ___sys_linkat(olddirfd,oldpath,newdirfd,newpath,flags){return-34}Module["___sys_linkat"]=___sys_linkat;function ___sys_listen(fd,backlog){try{var sock=getSocketFromFD(fd);sock.sock_ops.listen(sock,backlog);return 0}catch(e){if(typeof FS==="undefined"||!(e instanceof FS.ErrnoError))abort(e);return-e.errno}}Module["___sys_listen"]=___sys_listen;function 
___sys_lstat64(path,buf){try{path=SYSCALLS.getStr(path);return SYSCALLS.doStat(FS.lstat,path,buf)}catch(e){if(typeof FS==="undefined"||!(e instanceof FS.ErrnoError))abort(e);return-e.errno}}Module["___sys_lstat64"]=___sys_lstat64;function ___sys_madvise1(addr,length,advice){try{return 0}catch(e){if(typeof FS==="undefined"||!(e instanceof FS.ErrnoError))abort(e);return-e.errno}}Module["___sys_madvise1"]=___sys_madvise1;var ___sys_mincore=function(){return-52};Module["___sys_mincore"]=___sys_mincore;function ___sys_mkdir(path,mode){try{path=SYSCALLS.getStr(path);return SYSCALLS.doMkdir(path,mode)}catch(e){if(typeof FS==="undefined"||!(e instanceof FS.ErrnoError))abort(e);return-e.errno}}Module["___sys_mkdir"]=___sys_mkdir;function ___sys_mkdirat(dirfd,path,mode){try{path=SYSCALLS.getStr(path);path=SYSCALLS.calculateAt(dirfd,path);return SYSCALLS.doMkdir(path,mode)}catch(e){if(typeof FS==="undefined"||!(e instanceof FS.ErrnoError))abort(e);return-e.errno}}Module["___sys_mkdirat"]=___sys_mkdirat;function ___sys_mknod(path,mode,dev){try{path=SYSCALLS.getStr(path);return SYSCALLS.doMknod(path,mode,dev)}catch(e){if(typeof FS==="undefined"||!(e instanceof FS.ErrnoError))abort(e);return-e.errno}}Module["___sys_mknod"]=___sys_mknod;function ___sys_mknodat(dirfd,path,mode,dev){try{path=SYSCALLS.getStr(path);path=SYSCALLS.calculateAt(dirfd,path);return SYSCALLS.doMknod(path,mode,dev)}catch(e){if(typeof FS==="undefined"||!(e instanceof FS.ErrnoError))abort(e);return-e.errno}}Module["___sys_mknodat"]=___sys_mknodat;function ___sys_mlock(addr,len){try{return 0}catch(e){if(typeof FS==="undefined"||!(e instanceof FS.ErrnoError))abort(e);return-e.errno}}Module["___sys_mlock"]=___sys_mlock;___sys_mlock.sig="iii";function ___sys_mlockall(flags){try{return 0}catch(e){if(typeof FS==="undefined"||!(e instanceof FS.ErrnoError))abort(e);return-e.errno}}Module["___sys_mlockall"]=___sys_mlockall;___sys_mlockall.sig="ii";function syscallMmap2(addr,len,prot,flags,fd,off){off<<=12;var ptr;var allocated=false;if((flags&16)!==0&&addr%65536!==0){return-28}if((flags&32)!==0){ptr=mmapAlloc(len);if(!ptr)return-48;allocated=true}else{var info=FS.getStream(fd);if(!info)return-8;var res=FS.mmap(info,addr,len,off,prot,flags);ptr=res.ptr;allocated=res.allocated}SYSCALLS.mappings[ptr]={malloc:ptr,len:len,allocated:allocated,fd:fd,prot:prot,flags:flags,offset:off};return ptr}Module["syscallMmap2"]=syscallMmap2;function ___sys_mmap2(addr,len,prot,flags,fd,off){try{return syscallMmap2(addr,len,prot,flags,fd,off)}catch(e){if(typeof FS==="undefined"||!(e instanceof FS.ErrnoError))abort(e);return-e.errno}}Module["___sys_mmap2"]=___sys_mmap2;function ___sys_mprotect(addr,len,size){try{return 0}catch(e){if(typeof FS==="undefined"||!(e instanceof FS.ErrnoError))abort(e);return-e.errno}}Module["___sys_mprotect"]=___sys_mprotect;function ___sys_mremap(old_addr,old_size,new_size,flags){try{return-48}catch(e){if(typeof FS==="undefined"||!(e instanceof FS.ErrnoError))abort(e);return-e.errno}}Module["___sys_mremap"]=___sys_mremap;function ___sys_msync(addr,len,flags){try{var info=SYSCALLS.mappings[addr];if(!info)return 0;SYSCALLS.doMsync(addr,FS.getStream(info.fd),len,info.flags,0);return 0}catch(e){if(typeof FS==="undefined"||!(e instanceof FS.ErrnoError))abort(e);return-e.errno}}Module["___sys_msync"]=___sys_msync;function ___sys_munlock(addr,len){try{return 0}catch(e){if(typeof FS==="undefined"||!(e instanceof FS.ErrnoError))abort(e);return-e.errno}}Module["___sys_munlock"]=___sys_munlock;___sys_munlock.sig="iii";function 
___sys_munlockall(){try{return 0}catch(e){if(typeof FS==="undefined"||!(e instanceof FS.ErrnoError))abort(e);return-e.errno}}Module["___sys_munlockall"]=___sys_munlockall;___sys_munlockall.sig="i";function syscallMunmap(addr,len){var info=SYSCALLS.mappings[addr];if(len===0||!info){return-28}if(len===info.len){var stream=FS.getStream(info.fd);if(stream){if(info.prot&2){SYSCALLS.doMsync(addr,stream,len,info.flags,info.offset)}FS.munmap(stream)}SYSCALLS.mappings[addr]=null;if(info.allocated){_free(info.malloc)}}return 0}Module["syscallMunmap"]=syscallMunmap;function ___sys_munmap(addr,len){try{return syscallMunmap(addr,len)}catch(e){if(typeof FS==="undefined"||!(e instanceof FS.ErrnoError))abort(e);return-e.errno}}Module["___sys_munmap"]=___sys_munmap;function ___sys_nice(inc){return-63}Module["___sys_nice"]=___sys_nice;function ___sys_open(path,flags,varargs){SYSCALLS.varargs=varargs;try{var pathname=SYSCALLS.getStr(path);var mode=varargs?SYSCALLS.get():0;var stream=FS.open(pathname,flags,mode);return stream.fd}catch(e){if(typeof FS==="undefined"||!(e instanceof FS.ErrnoError))abort(e);return-e.errno}}Module["___sys_open"]=___sys_open;function ___sys_openat(dirfd,path,flags,varargs){SYSCALLS.varargs=varargs;try{path=SYSCALLS.getStr(path);path=SYSCALLS.calculateAt(dirfd,path);var mode=varargs?SYSCALLS.get():0;return FS.open(path,flags,mode).fd}catch(e){if(typeof FS==="undefined"||!(e instanceof FS.ErrnoError))abort(e);return-e.errno}}Module["___sys_openat"]=___sys_openat;function ___sys_pause(){return-27}Module["___sys_pause"]=___sys_pause;var PIPEFS={BUCKET_BUFFER_SIZE:8192,mount:function(mount){return FS.createNode(null,"/",16384|511,0)},createPipe:function(){var pipe={buckets:[],refcnt:2};pipe.buckets.push({buffer:new Uint8Array(PIPEFS.BUCKET_BUFFER_SIZE),offset:0,roffset:0});var rName=PIPEFS.nextname();var wName=PIPEFS.nextname();var rNode=FS.createNode(PIPEFS.root,rName,4096,0);var wNode=FS.createNode(PIPEFS.root,wName,4096,0);rNode.pipe=pipe;wNode.pipe=pipe;var readableStream=FS.createStream({path:rName,node:rNode,flags:0,seekable:false,stream_ops:PIPEFS.stream_ops});rNode.stream=readableStream;var writableStream=FS.createStream({path:wName,node:wNode,flags:1,seekable:false,stream_ops:PIPEFS.stream_ops});wNode.stream=writableStream;return{readable_fd:readableStream.fd,writable_fd:writableStream.fd}},stream_ops:{poll:function(stream){var pipe=stream.node.pipe;if((stream.flags&2097155)===1){return 256|4}else{if(pipe.buckets.length>0){for(var i=0;i0){return 64|1}}}}return 0},ioctl:function(stream,request,varargs){return 28},fsync:function(stream){return 28},read:function(stream,buffer,offset,length,position){var pipe=stream.node.pipe;var currentLength=0;for(var i=0;i=dataLen){currBucket.buffer.set(data,currBucket.offset);currBucket.offset+=dataLen;return dataLen}else if(freeBytesInCurrBuffer>0){currBucket.buffer.set(data.subarray(0,freeBytesInCurrBuffer),currBucket.offset);currBucket.offset+=freeBytesInCurrBuffer;data=data.subarray(freeBytesInCurrBuffer,data.byteLength)}var numBuckets=data.byteLength/PIPEFS.BUCKET_BUFFER_SIZE|0;var remElements=data.byteLength%PIPEFS.BUCKET_BUFFER_SIZE;for(var i=0;i0){var newBucket={buffer:new Uint8Array(PIPEFS.BUCKET_BUFFER_SIZE),offset:data.byteLength,roffset:0};pipe.buckets.push(newBucket);newBucket.buffer.set(data)}return dataLen},close:function(stream){var 
pipe=stream.node.pipe;pipe.refcnt--;if(pipe.refcnt===0){pipe.buckets=null}}},nextname:function(){if(!PIPEFS.nextname.current){PIPEFS.nextname.current=0}return"pipe["+PIPEFS.nextname.current+++"]"}};Module["PIPEFS"]=PIPEFS;function ___sys_pipe(fdPtr){try{if(fdPtr==0){throw new FS.ErrnoError(21)}var res=PIPEFS.createPipe();HEAP32[fdPtr>>2]=res.readable_fd;HEAP32[fdPtr+4>>2]=res.writable_fd;return 0}catch(e){if(typeof FS==="undefined"||!(e instanceof FS.ErrnoError))abort(e);return-e.errno}}Module["___sys_pipe"]=___sys_pipe;var ___sys_pipe2=function(){return-52};Module["___sys_pipe2"]=___sys_pipe2;function ___sys_poll(fds,nfds,timeout){try{var nonzero=0;for(var i=0;i>2];var events=HEAP16[pollfd+4>>1];var mask=32;var stream=FS.getStream(fd);if(stream){mask=SYSCALLS.DEFAULT_POLLMASK;if(stream.stream_ops.poll){mask=stream.stream_ops.poll(stream)}}mask&=events|8|16;if(mask)nonzero++;HEAP16[pollfd+6>>1]=mask}return nonzero}catch(e){if(typeof FS==="undefined"||!(e instanceof FS.ErrnoError))abort(e);return-e.errno}}Module["___sys_poll"]=___sys_poll;function ___sys_prlimit64(pid,resource,new_limit,old_limit){try{if(old_limit){HEAP32[old_limit>>2]=-1;HEAP32[old_limit+4>>2]=-1;HEAP32[old_limit+8>>2]=-1;HEAP32[old_limit+12>>2]=-1}return 0}catch(e){if(typeof FS==="undefined"||!(e instanceof FS.ErrnoError))abort(e);return-e.errno}}Module["___sys_prlimit64"]=___sys_prlimit64;var ___sys_pselect6=function(){return-52};Module["___sys_pselect6"]=___sys_pselect6;function ___sys_readlink(path,buf,bufsize){try{path=SYSCALLS.getStr(path);return SYSCALLS.doReadlink(path,buf,bufsize)}catch(e){if(typeof FS==="undefined"||!(e instanceof FS.ErrnoError))abort(e);return-e.errno}}Module["___sys_readlink"]=___sys_readlink;function ___sys_readlinkat(dirfd,path,buf,bufsize){try{path=SYSCALLS.getStr(path);path=SYSCALLS.calculateAt(dirfd,path);return SYSCALLS.doReadlink(path,buf,bufsize)}catch(e){if(typeof FS==="undefined"||!(e instanceof FS.ErrnoError))abort(e);return-e.errno}}Module["___sys_readlinkat"]=___sys_readlinkat;function ___sys_recvfrom(fd,buf,len,flags,addr,addrlen){try{var sock=getSocketFromFD(fd);var msg=sock.sock_ops.recvmsg(sock,len);if(!msg)return 0;if(addr){var errno=writeSockaddr(addr,sock.family,DNS.lookup_name(msg.addr),msg.port,addrlen)}HEAPU8.set(msg.buffer,buf);return msg.buffer.byteLength}catch(e){if(typeof FS==="undefined"||!(e instanceof FS.ErrnoError))abort(e);return-e.errno}}Module["___sys_recvfrom"]=___sys_recvfrom;var ___sys_recvmmsg=function(){return-52};Module["___sys_recvmmsg"]=___sys_recvmmsg;function ___sys_recvmsg(fd,message,flags){try{var sock=getSocketFromFD(fd);var iov=HEAP32[message+8>>2];var num=HEAP32[message+12>>2];var total=0;for(var i=0;i>2]}var msg=sock.sock_ops.recvmsg(sock,total);if(!msg)return 0;var name=HEAP32[message>>2];if(name){var errno=writeSockaddr(name,sock.family,DNS.lookup_name(msg.addr),msg.port)}var bytesRead=0;var bytesRemaining=msg.buffer.byteLength;for(var i=0;bytesRemaining>0&&i>2];var iovlen=HEAP32[iov+(8*i+4)>>2];if(!iovlen){continue}var length=Math.min(iovlen,bytesRemaining);var buf=msg.buffer.subarray(bytesRead,bytesRead+length);HEAPU8.set(buf,iovbase+bytesRead);bytesRead+=length;bytesRemaining-=length}return bytesRead}catch(e){if(typeof FS==="undefined"||!(e instanceof FS.ErrnoError))abort(e);return-e.errno}}Module["___sys_recvmsg"]=___sys_recvmsg;function ___sys_rename(old_path,new_path){try{old_path=SYSCALLS.getStr(old_path);new_path=SYSCALLS.getStr(new_path);FS.rename(old_path,new_path);return 0}catch(e){if(typeof FS==="undefined"||!(e instanceof 
FS.ErrnoError))abort(e);return-e.errno}}Module["___sys_rename"]=___sys_rename;function ___sys_renameat(olddirfd,oldpath,newdirfd,newpath){try{oldpath=SYSCALLS.getStr(oldpath);newpath=SYSCALLS.getStr(newpath);oldpath=SYSCALLS.calculateAt(olddirfd,oldpath);newpath=SYSCALLS.calculateAt(newdirfd,newpath);FS.rename(oldpath,newpath);return 0}catch(e){if(typeof FS==="undefined"||!(e instanceof FS.ErrnoError))abort(e);return-e.errno}}Module["___sys_renameat"]=___sys_renameat;function ___sys_rmdir(path){try{path=SYSCALLS.getStr(path);FS.rmdir(path);return 0}catch(e){if(typeof FS==="undefined"||!(e instanceof FS.ErrnoError))abort(e);return-e.errno}}Module["___sys_rmdir"]=___sys_rmdir;var ___sys_sendmmsg=function(){return-52};Module["___sys_sendmmsg"]=___sys_sendmmsg;function ___sys_sendmsg(fd,message,flags){try{var sock=getSocketFromFD(fd);var iov=HEAP32[message+8>>2];var num=HEAP32[message+12>>2];var addr,port;var name=HEAP32[message>>2];var namelen=HEAP32[message+4>>2];if(name){var info=readSockaddr(name,namelen);if(info.errno)return-info.errno;port=info.port;addr=DNS.lookup_addr(info.addr)||info.addr}var total=0;for(var i=0;i>2]}var view=new Uint8Array(total);var offset=0;for(var i=0;i>2];var iovlen=HEAP32[iov+(8*i+4)>>2];for(var j=0;j>0]}}return sock.sock_ops.sendmsg(sock,view,0,total,addr,port)}catch(e){if(typeof FS==="undefined"||!(e instanceof FS.ErrnoError))abort(e);return-e.errno}}Module["___sys_sendmsg"]=___sys_sendmsg;function ___sys_sendto(fd,message,length,flags,addr,addr_len){try{var sock=getSocketFromFD(fd);var dest=getSocketAddress(addr,addr_len,true);if(!dest){return FS.write(sock.stream,HEAP8,message,length)}else{return sock.sock_ops.sendmsg(sock,HEAP8,message,length,dest.addr,dest.port)}}catch(e){if(typeof FS==="undefined"||!(e instanceof FS.ErrnoError))abort(e);return-e.errno}}Module["___sys_sendto"]=___sys_sendto;function ___sys_setdomainname(name,size){return-63}Module["___sys_setdomainname"]=___sys_setdomainname;var ___sys_setitimer=function(){return-52};Module["___sys_setitimer"]=___sys_setitimer;function ___sys_setpgid(pid,pgid){if(pid&&pid!==42)return-71;if(pgid&&pgid!==42)return-63;return 0}Module["___sys_setpgid"]=___sys_setpgid;function ___sys_setpriority(){return-63}Module["___sys_setpriority"]=___sys_setpriority;function ___sys_setrlimit(varargs){return 0}Module["___sys_setrlimit"]=___sys_setrlimit;function ___sys_setsid(){return 0}Module["___sys_setsid"]=___sys_setsid;function ___sys_setsockopt(fd){try{return-50}catch(e){if(typeof FS==="undefined"||!(e instanceof FS.ErrnoError))abort(e);return-e.errno}}Module["___sys_setsockopt"]=___sys_setsockopt;function ___sys_shutdown(fd,how){try{getSocketFromFD(fd);return-52}catch(e){if(typeof FS==="undefined"||!(e instanceof FS.ErrnoError))abort(e);return-e.errno}}Module["___sys_shutdown"]=___sys_shutdown;function ___sys_socket(domain,type,protocol){try{var sock=SOCKFS.createSocket(domain,type,protocol);return sock.stream.fd}catch(e){if(typeof FS==="undefined"||!(e instanceof FS.ErrnoError))abort(e);return-e.errno}}Module["___sys_socket"]=___sys_socket;var ___sys_socketpair=function(){return-52};Module["___sys_socketpair"]=___sys_socketpair;function ___sys_stat64(path,buf){try{path=SYSCALLS.getStr(path);return SYSCALLS.doStat(FS.stat,path,buf)}catch(e){if(typeof FS==="undefined"||!(e instanceof FS.ErrnoError))abort(e);return-e.errno}}Module["___sys_stat64"]=___sys_stat64;function ___sys_symlink(target,linkpath){try{target=SYSCALLS.getStr(target);linkpath=SYSCALLS.getStr(linkpath);FS.symlink(target,linkpath);return 
0}catch(e){if(typeof FS==="undefined"||!(e instanceof FS.ErrnoError))abort(e);return-e.errno}}Module["___sys_symlink"]=___sys_symlink;function ___sys_symlinkat(target,newdirfd,linkpath){try{linkpath=SYSCALLS.calculateAt(newdirfd,linkpath);FS.symlink(target,linkpath);return 0}catch(e){if(typeof FS==="undefined"||!(e instanceof FS.ErrnoError))abort(e);return-e.errno}}Module["___sys_symlinkat"]=___sys_symlinkat;function ___sys_sync(){return 0}Module["___sys_sync"]=___sys_sync;function ___sys_truncate64(path,zero,low,high){try{path=SYSCALLS.getStr(path);var length=SYSCALLS.get64(low,high);FS.truncate(path,length);return 0}catch(e){if(typeof FS==="undefined"||!(e instanceof FS.ErrnoError))abort(e);return-e.errno}}Module["___sys_truncate64"]=___sys_truncate64;function ___sys_ugetrlimit(resource,rlim){try{HEAP32[rlim>>2]=-1;HEAP32[rlim+4>>2]=-1;HEAP32[rlim+8>>2]=-1;HEAP32[rlim+12>>2]=-1;return 0}catch(e){if(typeof FS==="undefined"||!(e instanceof FS.ErrnoError))abort(e);return-e.errno}}Module["___sys_ugetrlimit"]=___sys_ugetrlimit;function ___sys_umask(mask){try{var old=SYSCALLS.umask;SYSCALLS.umask=mask;return old}catch(e){if(typeof FS==="undefined"||!(e instanceof FS.ErrnoError))abort(e);return-e.errno}}Module["___sys_umask"]=___sys_umask;function ___sys_uname(buf){try{if(!buf)return-21;var layout={"__size__":390,"domainname":325,"machine":260,"nodename":65,"release":130,"sysname":0,"version":195};var copyString=function(element,value){var offset=layout[element];writeAsciiToMemory(value,buf+offset)};copyString("sysname","Emscripten");copyString("nodename","emscripten");copyString("release","1.0");copyString("version","#1");copyString("machine","wasm32");return 0}catch(e){if(typeof FS==="undefined"||!(e instanceof FS.ErrnoError))abort(e);return-e.errno}}Module["___sys_uname"]=___sys_uname;function ___sys_unlink(path){try{path=SYSCALLS.getStr(path);FS.unlink(path);return 0}catch(e){if(typeof FS==="undefined"||!(e instanceof FS.ErrnoError))abort(e);return-e.errno}}Module["___sys_unlink"]=___sys_unlink;function ___sys_unlinkat(dirfd,path,flags){try{path=SYSCALLS.getStr(path);path=SYSCALLS.calculateAt(dirfd,path);if(flags===0){FS.unlink(path)}else if(flags===512){FS.rmdir(path)}else{abort("Invalid flags passed to unlinkat")}return 0}catch(e){if(typeof FS==="undefined"||!(e instanceof FS.ErrnoError))abort(e);return-e.errno}}Module["___sys_unlinkat"]=___sys_unlinkat;function ___sys_utimensat(dirfd,path,times,flags){try{path=SYSCALLS.getStr(path);path=SYSCALLS.calculateAt(dirfd,path,true);var seconds=HEAP32[times>>2];var nanoseconds=HEAP32[times+4>>2];var atime=seconds*1e3+nanoseconds/(1e3*1e3);times+=8;seconds=HEAP32[times>>2];nanoseconds=HEAP32[times+4>>2];var mtime=seconds*1e3+nanoseconds/(1e3*1e3);FS.utime(path,atime,mtime);return 0}catch(e){if(typeof FS==="undefined"||!(e instanceof FS.ErrnoError))abort(e);return-e.errno}}Module["___sys_utimensat"]=___sys_utimensat;var ___sys_wait4=function(){return-52};Module["___sys_wait4"]=___sys_wait4;function __emscripten_throw_longjmp(){throw"longjmp"}Module["__emscripten_throw_longjmp"]=__emscripten_throw_longjmp;__emscripten_throw_longjmp.sig="v";function _abort(){abort()}Module["_abort"]=_abort;_abort.sig="v";var AL={QUEUE_INTERVAL:25,QUEUE_LOOKAHEAD:.1,DEVICE_NAME:"Emscripten OpenAL",CAPTURE_DEVICE_NAME:"Emscripten OpenAL 
capture",ALC_EXTENSIONS:{ALC_SOFT_pause_device:true,ALC_SOFT_HRTF:true},AL_EXTENSIONS:{AL_EXT_float32:true,AL_SOFT_loop_points:true,AL_SOFT_source_length:true,AL_EXT_source_distance_model:true,AL_SOFT_source_spatialize:true},_alcErr:0,alcErr:0,deviceRefCounts:{},alcStringCache:{},paused:false,stringCache:{},contexts:{},currentCtx:null,buffers:{0:{id:0,refCount:0,audioBuf:null,frequency:0,bytesPerSample:2,channels:1,length:0}},paramArray:[],_nextId:1,newId:function(){return AL.freeIds.length>0?AL.freeIds.pop():AL._nextId++},freeIds:[],scheduleContextAudio:function(ctx){if(Browser.mainLoop.timingMode===1&&document["visibilityState"]!="visible"){return}for(var i in ctx.sources){AL.scheduleSourceAudio(ctx.sources[i])}},scheduleSourceAudio:function(src,lookahead){if(Browser.mainLoop.timingMode===1&&document["visibilityState"]!="visible"){return}if(src.state!==4114){return}var currentTime=AL.updateSourceTime(src);var startTime=src.bufStartTime;var startOffset=src.bufOffset;var bufCursor=src.bufsProcessed;for(var i=0;i=src.bufQueue.length){if(src.looping){bufCursor%=src.bufQueue.length}else{break}}var buf=src.bufQueue[bufCursor%src.bufQueue.length];if(buf.length===0){skipCount++;if(skipCount===src.bufQueue.length){break}}else{var audioSrc=src.context.audioCtx.createBufferSource();audioSrc.buffer=buf.audioBuf;audioSrc.playbackRate.value=src.playbackRate;if(buf.audioBuf._loopStart||buf.audioBuf._loopEnd){audioSrc.loopStart=buf.audioBuf._loopStart;audioSrc.loopEnd=buf.audioBuf._loopEnd}var duration=0;if(src.type===4136&&src.looping){duration=Number.POSITIVE_INFINITY;audioSrc.loop=true;if(buf.audioBuf._loopStart){audioSrc.loopStart=buf.audioBuf._loopStart}if(buf.audioBuf._loopEnd){audioSrc.loopEnd=buf.audioBuf._loopEnd}}else{duration=(buf.audioBuf.duration-startOffset)/src.playbackRate}audioSrc._startOffset=startOffset;audioSrc._duration=duration;audioSrc._skipCount=skipCount;skipCount=0;audioSrc.connect(src.gain);if(typeof audioSrc.start!=="undefined"){startTime=Math.max(startTime,src.context.audioCtx.currentTime);audioSrc.start(startTime,startOffset)}else if(typeof audioSrc.noteOn!=="undefined"){startTime=Math.max(startTime,src.context.audioCtx.currentTime);audioSrc.noteOn(startTime)}audioSrc._startTime=startTime;src.audioQueue.push(audioSrc);startTime+=duration}startOffset=0;bufCursor++}},updateSourceTime:function(src){var currentTime=src.context.audioCtx.currentTime;if(src.state!==4114){return currentTime}if(!isFinite(src.bufStartTime)){src.bufStartTime=currentTime-src.bufOffset/src.playbackRate;src.bufOffset=0}var nextStartTime=0;while(src.audioQueue.length){var audioSrc=src.audioQueue[0];src.bufsProcessed+=audioSrc._skipCount;nextStartTime=audioSrc._startTime+audioSrc._duration;if(currentTime=src.bufQueue.length&&!src.looping){AL.setSourceState(src,4116)}else if(src.type===4136&&src.looping){var buf=src.bufQueue[0];if(buf.length===0){src.bufOffset=0}else{var delta=(currentTime-src.bufStartTime)*src.playbackRate;var loopStart=buf.audioBuf._loopStart||0;var loopEnd=buf.audioBuf._loopEnd||buf.audioBuf.duration;if(loopEnd<=loopStart){loopEnd=buf.audioBuf.duration}if(delta0){src.bufStartTime+=Math.floor((currentTime-src.bufStartTime)/srcDuration)*srcDuration}}for(var i=0;i=src.bufQueue.length){if(src.looping){src.bufsProcessed%=src.bufQueue.length}else{AL.setSourceState(src,4116);break}}var buf=src.bufQueue[src.bufsProcessed];if(buf.length>0){nextStartTime=src.bufStartTime+buf.audioBuf.duration/src.playbackRate;if(currentTime1){src.audioQueue.length=1}},stopSourceAudio:function(src){for(var 
i=0;isrc.bufQueue[src.bufsProcessed].audioBuf.duration){offset-=src.bufQueue[src.bufsProcessed].audiobuf.duration;src.bufsProcessed++}src.bufOffset=offset}if(playing){AL.setSourceState(src,4114)}},getGlobalParam:function(funcname,param){if(!AL.currentCtx){return null}switch(param){case 49152:return AL.currentCtx.dopplerFactor;case 49155:return AL.currentCtx.speedOfSound;case 53248:return AL.currentCtx.distanceModel;default:AL.currentCtx.err=40962;return null}},setGlobalParam:function(funcname,param,value){if(!AL.currentCtx){return}switch(param){case 49152:if(!Number.isFinite(value)||value<0){AL.currentCtx.err=40963;return}AL.currentCtx.dopplerFactor=value;AL.updateListenerSpace(AL.currentCtx);break;case 49155:if(!Number.isFinite(value)||value<=0){AL.currentCtx.err=40963;return}AL.currentCtx.speedOfSound=value;AL.updateListenerSpace(AL.currentCtx);break;case 53248:switch(value){case 0:case 53249:case 53250:case 53251:case 53252:case 53253:case 53254:AL.currentCtx.distanceModel=value;AL.updateContextGlobal(AL.currentCtx);break;default:AL.currentCtx.err=40963;return}break;default:AL.currentCtx.err=40962;return}},getListenerParam:function(funcname,param){if(!AL.currentCtx){return null}switch(param){case 4100:return AL.currentCtx.listener.position;case 4102:return AL.currentCtx.listener.velocity;case 4111:return AL.currentCtx.listener.direction.concat(AL.currentCtx.listener.up);case 4106:return AL.currentCtx.gain.gain.value;default:AL.currentCtx.err=40962;return null}},setListenerParam:function(funcname,param,value){if(!AL.currentCtx){return}if(value===null){AL.currentCtx.err=40962;return}var listener=AL.currentCtx.listener;switch(param){case 4100:if(!Number.isFinite(value[0])||!Number.isFinite(value[1])||!Number.isFinite(value[2])){AL.currentCtx.err=40963;return}listener.position[0]=value[0];listener.position[1]=value[1];listener.position[2]=value[2];AL.updateListenerSpace(AL.currentCtx);break;case 4102:if(!Number.isFinite(value[0])||!Number.isFinite(value[1])||!Number.isFinite(value[2])){AL.currentCtx.err=40963;return}listener.velocity[0]=value[0];listener.velocity[1]=value[1];listener.velocity[2]=value[2];AL.updateListenerSpace(AL.currentCtx);break;case 4106:if(!Number.isFinite(value)||value<0){AL.currentCtx.err=40963;return}AL.currentCtx.gain.gain.value=value;break;case 4111:if(!Number.isFinite(value[0])||!Number.isFinite(value[1])||!Number.isFinite(value[2])||!Number.isFinite(value[3])||!Number.isFinite(value[4])||!Number.isFinite(value[5])){AL.currentCtx.err=40963;return}listener.direction[0]=value[0];listener.direction[1]=value[1];listener.direction[2]=value[2];listener.up[0]=value[3];listener.up[1]=value[4];listener.up[2]=value[5];AL.updateListenerSpace(AL.currentCtx);break;default:AL.currentCtx.err=40962;return}},getBufferParam:function(funcname,bufferId,param){if(!AL.currentCtx){return}var buf=AL.buffers[bufferId];if(!buf||bufferId===0){AL.currentCtx.err=40961;return}switch(param){case 8193:return buf.frequency;case 8194:return buf.bytesPerSample*8;case 8195:return buf.channels;case 8196:return buf.length*buf.bytesPerSample*buf.channels;case 8213:if(buf.length===0){return[0,0]}else{return[(buf.audioBuf._loopStart||0)*buf.frequency,(buf.audioBuf._loopEnd||buf.length)*buf.frequency]}default:AL.currentCtx.err=40962;return null}},setBufferParam:function(funcname,bufferId,param,value){if(!AL.currentCtx){return}var buf=AL.buffers[bufferId];if(!buf||bufferId===0){AL.currentCtx.err=40961;return}if(value===null){AL.currentCtx.err=40962;return}switch(param){case 
8196:if(value!==0){AL.currentCtx.err=40963;return}break;case 8213:if(value[0]<0||value[0]>buf.length||value[1]<0||value[1]>buf.Length||value[0]>=value[1]){AL.currentCtx.err=40963;return}if(buf.refCount>0){AL.currentCtx.err=40964;return}if(buf.audioBuf){buf.audioBuf._loopStart=value[0]/buf.frequency;buf.audioBuf._loopEnd=value[1]/buf.frequency}break;default:AL.currentCtx.err=40962;return}},getSourceParam:function(funcname,sourceId,param){if(!AL.currentCtx){return null}var src=AL.currentCtx.sources[sourceId];if(!src){AL.currentCtx.err=40961;return null}switch(param){case 514:return src.relative;case 4097:return src.coneInnerAngle;case 4098:return src.coneOuterAngle;case 4099:return src.pitch;case 4100:return src.position;case 4101:return src.direction;case 4102:return src.velocity;case 4103:return src.looping;case 4105:if(src.type===4136){return src.bufQueue[0].id}else{return 0}case 4106:return src.gain.gain.value;case 4109:return src.minGain;case 4110:return src.maxGain;case 4112:return src.state;case 4117:if(src.bufQueue.length===1&&src.bufQueue[0].id===0){return 0}else{return src.bufQueue.length}case 4118:if(src.bufQueue.length===1&&src.bufQueue[0].id===0||src.looping){return 0}else{return src.bufsProcessed}case 4128:return src.refDistance;case 4129:return src.rolloffFactor;case 4130:return src.coneOuterGain;case 4131:return src.maxDistance;case 4132:return AL.sourceTell(src);case 4133:var offset=AL.sourceTell(src);if(offset>0){offset*=src.bufQueue[0].frequency}return offset;case 4134:var offset=AL.sourceTell(src);if(offset>0){offset*=src.bufQueue[0].frequency*src.bufQueue[0].bytesPerSample}return offset;case 4135:return src.type;case 4628:return src.spatialize;case 8201:var length=0;var bytesPerFrame=0;for(var i=0;i0){var audioSrc=src.audioQueue[0];audioSrc.loop=true;audioSrc._duration=Number.POSITIVE_INFINITY}}else if(value===0){src.looping=false;var currentTime=AL.updateSourceTime(src);if(src.type===4136&&src.audioQueue.length>0){var audioSrc=src.audioQueue[0];audioSrc.loop=false;audioSrc._duration=src.bufQueue[0].audioBuf.duration/src.playbackRate;audioSrc._startTime=currentTime-src.bufOffset/src.playbackRate}}else{AL.currentCtx.err=40963;return}break;case 4105:if(src.state===4114||src.state===4115){AL.currentCtx.err=40964;return}if(value===0){for(var i in src.bufQueue){src.bufQueue[i].refCount--}src.bufQueue.length=1;src.bufQueue[0]=AL.buffers[0];src.bufsProcessed=0;src.type=4144}else{var buf=AL.buffers[value];if(!buf){AL.currentCtx.err=40963;return}for(var i in src.bufQueue){src.bufQueue[i].refCount--}src.bufQueue.length=0;buf.refCount++;src.bufQueue=[buf];src.bufsProcessed=0;src.type=4136}AL.initSourcePanner(src);AL.scheduleSourceAudio(src);break;case 4106:if(!Number.isFinite(value)||value<0){AL.currentCtx.err=40963;return}src.gain.gain.value=value;break;case 4109:if(!Number.isFinite(value)||value<0||value>Math.min(src.maxGain,1)){AL.currentCtx.err=40963;return}src.minGain=value;break;case 4110:if(!Number.isFinite(value)||value1){AL.currentCtx.err=40963;return}src.maxGain=value;break;case 4128:if(!Number.isFinite(value)||value<0){AL.currentCtx.err=40963;return}src.refDistance=value;if(src.panner){src.panner.refDistance=value}break;case 4129:if(!Number.isFinite(value)||value<0){AL.currentCtx.err=40963;return}src.rolloffFactor=value;if(src.panner){src.panner.rolloffFactor=value}break;case 4130:if(!Number.isFinite(value)||value<0||value>1){AL.currentCtx.err=40963;return}src.coneOuterGain=value;if(src.panner){src.panner.coneOuterGain=value}break;case 
4131:if(!Number.isFinite(value)||value<0){AL.currentCtx.err=40963;return}src.maxDistance=value;if(src.panner){src.panner.maxDistance=value}break;case 4132:if(value<0||value>AL.sourceDuration(src)){AL.currentCtx.err=40963;return}AL.sourceSeek(src,value);break;case 4133:var srcLen=AL.sourceDuration(src);if(srcLen>0){var frequency;for(var bufId in src.bufQueue){if(bufId){frequency=src.bufQueue[bufId].frequency;break}}value/=frequency}if(value<0||value>srcLen){AL.currentCtx.err=40963;return}AL.sourceSeek(src,value);break;case 4134:var srcLen=AL.sourceDuration(src);if(srcLen>0){var bytesPerSec;for(var bufId in src.bufQueue){if(bufId){var buf=src.bufQueue[bufId];bytesPerSec=buf.frequency*buf.bytesPerSample*buf.channels;break}}value/=bytesPerSec}if(value<0||value>srcLen){AL.currentCtx.err=40963;return}AL.sourceSeek(src,value);break;case 4628:if(value!==0&&value!==1&&value!==2){AL.currentCtx.err=40963;return}src.spatialize=value;AL.initSourcePanner(src);break;case 8201:case 8202:case 8203:AL.currentCtx.err=40964;break;case 53248:switch(value){case 0:case 53249:case 53250:case 53251:case 53252:case 53253:case 53254:src.distanceModel=value;if(AL.currentCtx.sourceDistanceModel){AL.updateContextGlobal(AL.currentCtx)}break;default:AL.currentCtx.err=40963;return}break;default:AL.currentCtx.err=40962;return}},captures:{},sharedCaptureAudioCtx:null,requireValidCaptureDevice:function(deviceId,funcname){if(deviceId===0){AL.alcErr=40961;return null}var c=AL.captures[deviceId];if(!c){AL.alcErr=40961;return null}var err=c.mediaStreamError;if(err){AL.alcErr=40961;return null}return c}};Module["AL"]=AL;function _alBuffer3f(bufferId,param,value0,value1,value2){AL.setBufferParam("alBuffer3f",bufferId,param,null)}Module["_alBuffer3f"]=_alBuffer3f;_alBuffer3f.sig="viifff";function _alBuffer3i(bufferId,param,value0,value1,value2){AL.setBufferParam("alBuffer3i",bufferId,param,null)}Module["_alBuffer3i"]=_alBuffer3i;_alBuffer3i.sig="viiiii";function _alBufferData(bufferId,format,pData,size,freq){if(!AL.currentCtx){return}var buf=AL.buffers[bufferId];if(!buf){AL.currentCtx.err=40963;return}if(freq<=0){AL.currentCtx.err=40963;return}var audioBuf=null;try{switch(format){case 4352:if(size>0){audioBuf=AL.currentCtx.audioCtx.createBuffer(1,size,freq);var channel0=audioBuf.getChannelData(0);for(var i=0;i0){audioBuf=AL.currentCtx.audioCtx.createBuffer(1,size>>1,freq);var channel0=audioBuf.getChannelData(0);pData>>=1;for(var i=0;i>1;++i){channel0[i]=HEAP16[pData++]*30517578125e-15}}buf.bytesPerSample=2;buf.channels=1;buf.length=size>>1;break;case 4354:if(size>0){audioBuf=AL.currentCtx.audioCtx.createBuffer(2,size>>1,freq);var channel0=audioBuf.getChannelData(0);var channel1=audioBuf.getChannelData(1);for(var i=0;i>1;++i){channel0[i]=HEAPU8[pData++]*.0078125-1;channel1[i]=HEAPU8[pData++]*.0078125-1}}buf.bytesPerSample=1;buf.channels=2;buf.length=size>>1;break;case 4355:if(size>0){audioBuf=AL.currentCtx.audioCtx.createBuffer(2,size>>2,freq);var channel0=audioBuf.getChannelData(0);var channel1=audioBuf.getChannelData(1);pData>>=1;for(var i=0;i>2;++i){channel0[i]=HEAP16[pData++]*30517578125e-15;channel1[i]=HEAP16[pData++]*30517578125e-15}}buf.bytesPerSample=2;buf.channels=2;buf.length=size>>2;break;case 65552:if(size>0){audioBuf=AL.currentCtx.audioCtx.createBuffer(1,size>>2,freq);var channel0=audioBuf.getChannelData(0);pData>>=2;for(var i=0;i>2;++i){channel0[i]=HEAPF32[pData++]}}buf.bytesPerSample=4;buf.channels=1;buf.length=size>>2;break;case 65553:if(size>0){audioBuf=AL.currentCtx.audioCtx.createBuffer(2,size>>3,freq);var 
channel0=audioBuf.getChannelData(0);var channel1=audioBuf.getChannelData(1);pData>>=2;for(var i=0;i>3;++i){channel0[i]=HEAPF32[pData++];channel1[i]=HEAPF32[pData++]}}buf.bytesPerSample=4;buf.channels=2;buf.length=size>>3;break;default:AL.currentCtx.err=40963;return}buf.frequency=freq;buf.audioBuf=audioBuf}catch(e){AL.currentCtx.err=40963;return}}Module["_alBufferData"]=_alBufferData;_alBufferData.sig="viiiii";function _alBufferf(bufferId,param,value){AL.setBufferParam("alBufferf",bufferId,param,null)}Module["_alBufferf"]=_alBufferf;_alBufferf.sig="viif";function _alBufferfv(bufferId,param,pValues){if(!AL.currentCtx){return}if(!pValues){AL.currentCtx.err=40963;return}AL.setBufferParam("alBufferfv",bufferId,param,null)}Module["_alBufferfv"]=_alBufferfv;_alBufferfv.sig="viii";function _alBufferi(bufferId,param,value){AL.setBufferParam("alBufferi",bufferId,param,null)}Module["_alBufferi"]=_alBufferi;_alBufferi.sig="viii";function _alBufferiv(bufferId,param,pValues){if(!AL.currentCtx){return}if(!pValues){AL.currentCtx.err=40963;return}switch(param){case 8213:AL.paramArray[0]=HEAP32[pValues>>2];AL.paramArray[1]=HEAP32[pValues+4>>2];AL.setBufferParam("alBufferiv",bufferId,param,AL.paramArray);break;default:AL.setBufferParam("alBufferiv",bufferId,param,null);break}}Module["_alBufferiv"]=_alBufferiv;_alBufferiv.sig="viii";function _alDeleteBuffers(count,pBufferIds){if(!AL.currentCtx){return}for(var i=0;i>2];if(bufId===0){continue}if(!AL.buffers[bufId]){AL.currentCtx.err=40961;return}if(AL.buffers[bufId].refCount){AL.currentCtx.err=40964;return}}for(var i=0;i>2];if(bufId===0){continue}AL.deviceRefCounts[AL.buffers[bufId].deviceId]--;delete AL.buffers[bufId];AL.freeIds.push(bufId)}}Module["_alDeleteBuffers"]=_alDeleteBuffers;_alDeleteBuffers.sig="vii";function _alSourcei(sourceId,param,value){switch(param){case 514:case 4097:case 4098:case 4103:case 4105:case 4128:case 4129:case 4131:case 4132:case 4133:case 4134:case 4628:case 8201:case 8202:case 53248:AL.setSourceParam("alSourcei",sourceId,param,value);break;default:AL.setSourceParam("alSourcei",sourceId,param,null);break}}Module["_alSourcei"]=_alSourcei;_alSourcei.sig="viii";function _alDeleteSources(count,pSourceIds){if(!AL.currentCtx){return}for(var i=0;i>2];if(!AL.currentCtx.sources[srcId]){AL.currentCtx.err=40961;return}}for(var i=0;i>2];AL.setSourceState(AL.currentCtx.sources[srcId],4116);_alSourcei(srcId,4105,0);delete AL.currentCtx.sources[srcId];AL.freeIds.push(srcId)}}Module["_alDeleteSources"]=_alDeleteSources;_alDeleteSources.sig="vii";function _alDisable(param){if(!AL.currentCtx){return}switch(param){case"AL_SOURCE_DISTANCE_MODEL":AL.currentCtx.sourceDistanceModel=false;AL.updateContextGlobal(AL.currentCtx);break;default:AL.currentCtx.err=40962;return}}Module["_alDisable"]=_alDisable;_alDisable.sig="vi";function _alDistanceModel(model){AL.setGlobalParam("alDistanceModel",53248,model)}Module["_alDistanceModel"]=_alDistanceModel;_alDistanceModel.sig="vi";function _alDopplerFactor(value){AL.setGlobalParam("alDopplerFactor",49152,value)}Module["_alDopplerFactor"]=_alDopplerFactor;_alDopplerFactor.sig="vi";function _alDopplerVelocity(value){warnOnce("alDopplerVelocity() is deprecated, and only kept for compatibility with OpenAL 1.0. 
Use alSpeedOfSound() instead.");if(!AL.currentCtx){return}if(value<=0){AL.currentCtx.err=40963;return}}Module["_alDopplerVelocity"]=_alDopplerVelocity;_alDopplerVelocity.sig="vi";function _alEnable(param){if(!AL.currentCtx){return}switch(param){case"AL_SOURCE_DISTANCE_MODEL":AL.currentCtx.sourceDistanceModel=true;AL.updateContextGlobal(AL.currentCtx);break;default:AL.currentCtx.err=40962;return}}Module["_alEnable"]=_alEnable;_alEnable.sig="vi";function _alGenBuffers(count,pBufferIds){if(!AL.currentCtx){return}for(var i=0;i>2]=buf.id}}Module["_alGenBuffers"]=_alGenBuffers;_alGenBuffers.sig="vii";function _alGenSources(count,pSourceIds){if(!AL.currentCtx){return}for(var i=0;i>2]=src.id}}Module["_alGenSources"]=_alGenSources;_alGenSources.sig="vii";function _alGetBoolean(param){var val=AL.getGlobalParam("alGetBoolean",param);if(val===null){return 0}switch(param){case 49152:case 49155:case 53248:return val!==0?1:0;default:AL.currentCtx.err=40962;return 0}}Module["_alGetBoolean"]=_alGetBoolean;_alGetBoolean.sig="ii";function _alGetBooleanv(param,pValues){var val=AL.getGlobalParam("alGetBooleanv",param);if(val===null||!pValues){return}switch(param){case 49152:case 49155:case 53248:HEAP8[pValues>>0]=val;break;default:AL.currentCtx.err=40962;return}}Module["_alGetBooleanv"]=_alGetBooleanv;_alGetBooleanv.sig="vii";function _alGetBuffer3f(bufferId,param,pValue0,pValue1,pValue2){var val=AL.getBufferParam("alGetBuffer3f",bufferId,param);if(val===null){return}if(!pValue0||!pValue1||!pValue2){AL.currentCtx.err=40963;return}AL.currentCtx.err=40962}Module["_alGetBuffer3f"]=_alGetBuffer3f;_alGetBuffer3f.sig="viiiii";function _alGetBuffer3i(bufferId,param,pValue0,pValue1,pValue2){var val=AL.getBufferParam("alGetBuffer3i",bufferId,param);if(val===null){return}if(!pValue0||!pValue1||!pValue2){AL.currentCtx.err=40963;return}AL.currentCtx.err=40962}Module["_alGetBuffer3i"]=_alGetBuffer3i;_alGetBuffer3i.sig="viiiii";function _alGetBufferf(bufferId,param,pValue){var val=AL.getBufferParam("alGetBufferf",bufferId,param);if(val===null){return}if(!pValue){AL.currentCtx.err=40963;return}AL.currentCtx.err=40962}Module["_alGetBufferf"]=_alGetBufferf;_alGetBufferf.sig="viii";function _alGetBufferfv(bufferId,param,pValues){var val=AL.getBufferParam("alGetBufferfv",bufferId,param);if(val===null){return}if(!pValues){AL.currentCtx.err=40963;return}AL.currentCtx.err=40962}Module["_alGetBufferfv"]=_alGetBufferfv;_alGetBufferfv.sig="viii";function _alGetBufferi(bufferId,param,pValue){var val=AL.getBufferParam("alGetBufferi",bufferId,param);if(val===null){return}if(!pValue){AL.currentCtx.err=40963;return}switch(param){case 8193:case 8194:case 8195:case 8196:HEAP32[pValue>>2]=val;break;default:AL.currentCtx.err=40962;return}}Module["_alGetBufferi"]=_alGetBufferi;_alGetBufferi.sig="viii";function _alGetBufferiv(bufferId,param,pValues){var val=AL.getBufferParam("alGetBufferiv",bufferId,param);if(val===null){return}if(!pValues){AL.currentCtx.err=40963;return}switch(param){case 8193:case 8194:case 8195:case 8196:HEAP32[pValues>>2]=val;break;case 8213:HEAP32[pValues>>2]=val[0];HEAP32[pValues+4>>2]=val[1];break;default:AL.currentCtx.err=40962;return}}Module["_alGetBufferiv"]=_alGetBufferiv;_alGetBufferiv.sig="viii";function _alGetDouble(param){var val=AL.getGlobalParam("alGetDouble",param);if(val===null){return 0}switch(param){case 49152:case 49155:case 53248:return val;default:AL.currentCtx.err=40962;return 0}}Module["_alGetDouble"]=_alGetDouble;_alGetDouble.sig="di";function _alGetDoublev(param,pValues){var 
val=AL.getGlobalParam("alGetDoublev",param);if(val===null||!pValues){return}switch(param){case 49152:case 49155:case 53248:HEAPF64[pValues>>3]=val;break;default:AL.currentCtx.err=40962;return}}Module["_alGetDoublev"]=_alGetDoublev;_alGetDoublev.sig="vii";function _alGetEnumValue(pEnumName){if(!AL.currentCtx){return 0}if(!pEnumName){AL.currentCtx.err=40963;return 0}var name=UTF8ToString(pEnumName);switch(name){case"AL_BITS":return 8194;case"AL_BUFFER":return 4105;case"AL_BUFFERS_PROCESSED":return 4118;case"AL_BUFFERS_QUEUED":return 4117;case"AL_BYTE_OFFSET":return 4134;case"AL_CHANNELS":return 8195;case"AL_CONE_INNER_ANGLE":return 4097;case"AL_CONE_OUTER_ANGLE":return 4098;case"AL_CONE_OUTER_GAIN":return 4130;case"AL_DIRECTION":return 4101;case"AL_DISTANCE_MODEL":return 53248;case"AL_DOPPLER_FACTOR":return 49152;case"AL_DOPPLER_VELOCITY":return 49153;case"AL_EXPONENT_DISTANCE":return 53253;case"AL_EXPONENT_DISTANCE_CLAMPED":return 53254;case"AL_EXTENSIONS":return 45060;case"AL_FORMAT_MONO16":return 4353;case"AL_FORMAT_MONO8":return 4352;case"AL_FORMAT_STEREO16":return 4355;case"AL_FORMAT_STEREO8":return 4354;case"AL_FREQUENCY":return 8193;case"AL_GAIN":return 4106;case"AL_INITIAL":return 4113;case"AL_INVALID":return-1;case"AL_ILLEGAL_ENUM":case"AL_INVALID_ENUM":return 40962;case"AL_INVALID_NAME":return 40961;case"AL_ILLEGAL_COMMAND":case"AL_INVALID_OPERATION":return 40964;case"AL_INVALID_VALUE":return 40963;case"AL_INVERSE_DISTANCE":return 53249;case"AL_INVERSE_DISTANCE_CLAMPED":return 53250;case"AL_LINEAR_DISTANCE":return 53251;case"AL_LINEAR_DISTANCE_CLAMPED":return 53252;case"AL_LOOPING":return 4103;case"AL_MAX_DISTANCE":return 4131;case"AL_MAX_GAIN":return 4110;case"AL_MIN_GAIN":return 4109;case"AL_NONE":return 0;case"AL_NO_ERROR":return 0;case"AL_ORIENTATION":return 4111;case"AL_OUT_OF_MEMORY":return 40965;case"AL_PAUSED":return 4115;case"AL_PENDING":return 8209;case"AL_PITCH":return 4099;case"AL_PLAYING":return 4114;case"AL_POSITION":return 4100;case"AL_PROCESSED":return 8210;case"AL_REFERENCE_DISTANCE":return 4128;case"AL_RENDERER":return 45059;case"AL_ROLLOFF_FACTOR":return 4129;case"AL_SAMPLE_OFFSET":return 4133;case"AL_SEC_OFFSET":return 4132;case"AL_SIZE":return 8196;case"AL_SOURCE_RELATIVE":return 514;case"AL_SOURCE_STATE":return 4112;case"AL_SOURCE_TYPE":return 4135;case"AL_SPEED_OF_SOUND":return 49155;case"AL_STATIC":return 4136;case"AL_STOPPED":return 4116;case"AL_STREAMING":return 4137;case"AL_UNDETERMINED":return 4144;case"AL_UNUSED":return 8208;case"AL_VELOCITY":return 4102;case"AL_VENDOR":return 45057;case"AL_VERSION":return 45058;case"AL_AUTO_SOFT":return 2;case"AL_SOURCE_DISTANCE_MODEL":return 512;case"AL_SOURCE_SPATIALIZE_SOFT":return 4628;case"AL_LOOP_POINTS_SOFT":return 8213;case"AL_BYTE_LENGTH_SOFT":return 8201;case"AL_SAMPLE_LENGTH_SOFT":return 8202;case"AL_SEC_LENGTH_SOFT":return 8203;case"AL_FORMAT_MONO_FLOAT32":return 65552;case"AL_FORMAT_STEREO_FLOAT32":return 65553;default:AL.currentCtx.err=40963;return 0}}Module["_alGetEnumValue"]=_alGetEnumValue;_alGetEnumValue.sig="ii";function _alGetError(){if(!AL.currentCtx){return 40964}else{var err=AL.currentCtx.err;AL.currentCtx.err=0;return err}}Module["_alGetError"]=_alGetError;_alGetError.sig="i";function _alGetFloat(param){var val=AL.getGlobalParam("alGetFloat",param);if(val===null){return 0}switch(param){case 49152:case 49155:case 53248:return val;default:return 0}}Module["_alGetFloat"]=_alGetFloat;_alGetFloat.sig="fi";function _alGetFloatv(param,pValues){var 
val=AL.getGlobalParam("alGetFloatv",param);if(val===null||!pValues){return}switch(param){case 49152:case 49155:case 53248:HEAPF32[pValues>>2]=val;break;default:AL.currentCtx.err=40962;return}}Module["_alGetFloatv"]=_alGetFloatv;_alGetFloatv.sig="vii";function _alGetInteger(param){var val=AL.getGlobalParam("alGetInteger",param);if(val===null){return 0}switch(param){case 49152:case 49155:case 53248:return val;default:AL.currentCtx.err=40962;return 0}}Module["_alGetInteger"]=_alGetInteger;_alGetInteger.sig="ii";function _alGetIntegerv(param,pValues){var val=AL.getGlobalParam("alGetIntegerv",param);if(val===null||!pValues){return}switch(param){case 49152:case 49155:case 53248:HEAP32[pValues>>2]=val;break;default:AL.currentCtx.err=40962;return}}Module["_alGetIntegerv"]=_alGetIntegerv;_alGetIntegerv.sig="vii";function _alGetListener3f(param,pValue0,pValue1,pValue2){var val=AL.getListenerParam("alGetListener3f",param);if(val===null){return}if(!pValue0||!pValue1||!pValue2){AL.currentCtx.err=40963;return}switch(param){case 4100:case 4102:HEAPF32[pValue0>>2]=val[0];HEAPF32[pValue1>>2]=val[1];HEAPF32[pValue2>>2]=val[2];break;default:AL.currentCtx.err=40962;return}}Module["_alGetListener3f"]=_alGetListener3f;_alGetListener3f.sig="viiii";function _alGetListener3i(param,pValue0,pValue1,pValue2){var val=AL.getListenerParam("alGetListener3i",param);if(val===null){return}if(!pValue0||!pValue1||!pValue2){AL.currentCtx.err=40963;return}switch(param){case 4100:case 4102:HEAP32[pValue0>>2]=val[0];HEAP32[pValue1>>2]=val[1];HEAP32[pValue2>>2]=val[2];break;default:AL.currentCtx.err=40962;return}}Module["_alGetListener3i"]=_alGetListener3i;_alGetListener3i.sig="viiii";function _alGetListenerf(param,pValue){var val=AL.getListenerParam("alGetListenerf",param);if(val===null){return}if(!pValue){AL.currentCtx.err=40963;return}switch(param){case 4106:HEAPF32[pValue>>2]=val;break;default:AL.currentCtx.err=40962;return}}Module["_alGetListenerf"]=_alGetListenerf;_alGetListenerf.sig="vii";function _alGetListenerfv(param,pValues){var val=AL.getListenerParam("alGetListenerfv",param);if(val===null){return}if(!pValues){AL.currentCtx.err=40963;return}switch(param){case 4100:case 4102:HEAPF32[pValues>>2]=val[0];HEAPF32[pValues+4>>2]=val[1];HEAPF32[pValues+8>>2]=val[2];break;case 4111:HEAPF32[pValues>>2]=val[0];HEAPF32[pValues+4>>2]=val[1];HEAPF32[pValues+8>>2]=val[2];HEAPF32[pValues+12>>2]=val[3];HEAPF32[pValues+16>>2]=val[4];HEAPF32[pValues+20>>2]=val[5];break;default:AL.currentCtx.err=40962;return}}Module["_alGetListenerfv"]=_alGetListenerfv;_alGetListenerfv.sig="vii";function _alGetListeneri(param,pValue){var val=AL.getListenerParam("alGetListeneri",param);if(val===null){return}if(!pValue){AL.currentCtx.err=40963;return}AL.currentCtx.err=40962}Module["_alGetListeneri"]=_alGetListeneri;_alGetListeneri.sig="vii";function _alGetListeneriv(param,pValues){var val=AL.getListenerParam("alGetListeneriv",param);if(val===null){return}if(!pValues){AL.currentCtx.err=40963;return}switch(param){case 4100:case 4102:HEAP32[pValues>>2]=val[0];HEAP32[pValues+4>>2]=val[1];HEAP32[pValues+8>>2]=val[2];break;case 4111:HEAP32[pValues>>2]=val[0];HEAP32[pValues+4>>2]=val[1];HEAP32[pValues+8>>2]=val[2];HEAP32[pValues+12>>2]=val[3];HEAP32[pValues+16>>2]=val[4];HEAP32[pValues+20>>2]=val[5];break;default:AL.currentCtx.err=40962;return}}Module["_alGetListeneriv"]=_alGetListeneriv;_alGetListeneriv.sig="vii";function _alGetSource3f(sourceId,param,pValue0,pValue1,pValue2){var 
val=AL.getSourceParam("alGetSource3f",sourceId,param);if(val===null){return}if(!pValue0||!pValue1||!pValue2){AL.currentCtx.err=40963;return}switch(param){case 4100:case 4101:case 4102:HEAPF32[pValue0>>2]=val[0];HEAPF32[pValue1>>2]=val[1];HEAPF32[pValue2>>2]=val[2];break;default:AL.currentCtx.err=40962;return}}Module["_alGetSource3f"]=_alGetSource3f;_alGetSource3f.sig="viiiii";function _alGetSource3i(sourceId,param,pValue0,pValue1,pValue2){var val=AL.getSourceParam("alGetSource3i",sourceId,param);if(val===null){return}if(!pValue0||!pValue1||!pValue2){AL.currentCtx.err=40963;return}switch(param){case 4100:case 4101:case 4102:HEAP32[pValue0>>2]=val[0];HEAP32[pValue1>>2]=val[1];HEAP32[pValue2>>2]=val[2];break;default:AL.currentCtx.err=40962;return}}Module["_alGetSource3i"]=_alGetSource3i;_alGetSource3i.sig="viiiii";function _alGetSourcef(sourceId,param,pValue){var val=AL.getSourceParam("alGetSourcef",sourceId,param);if(val===null){return}if(!pValue){AL.currentCtx.err=40963;return}switch(param){case 4097:case 4098:case 4099:case 4106:case 4109:case 4110:case 4128:case 4129:case 4130:case 4131:case 4132:case 4133:case 4134:case 8203:HEAPF32[pValue>>2]=val;break;default:AL.currentCtx.err=40962;return}}Module["_alGetSourcef"]=_alGetSourcef;_alGetSourcef.sig="viii";function _alGetSourcefv(sourceId,param,pValues){var val=AL.getSourceParam("alGetSourcefv",sourceId,param);if(val===null){return}if(!pValues){AL.currentCtx.err=40963;return}switch(param){case 4097:case 4098:case 4099:case 4106:case 4109:case 4110:case 4128:case 4129:case 4130:case 4131:case 4132:case 4133:case 4134:case 8203:HEAPF32[pValues>>2]=val[0];break;case 4100:case 4101:case 4102:HEAPF32[pValues>>2]=val[0];HEAPF32[pValues+4>>2]=val[1];HEAPF32[pValues+8>>2]=val[2];break;default:AL.currentCtx.err=40962;return}}Module["_alGetSourcefv"]=_alGetSourcefv;_alGetSourcefv.sig="viii";function _alGetSourcei(sourceId,param,pValue){var val=AL.getSourceParam("alGetSourcei",sourceId,param);if(val===null){return}if(!pValue){AL.currentCtx.err=40963;return}switch(param){case 514:case 4097:case 4098:case 4103:case 4105:case 4112:case 4117:case 4118:case 4128:case 4129:case 4131:case 4132:case 4133:case 4134:case 4135:case 4628:case 8201:case 8202:case 53248:HEAP32[pValue>>2]=val;break;default:AL.currentCtx.err=40962;return}}Module["_alGetSourcei"]=_alGetSourcei;_alGetSourcei.sig="viii";function _alGetSourceiv(sourceId,param,pValues){var val=AL.getSourceParam("alGetSourceiv",sourceId,param);if(val===null){return}if(!pValues){AL.currentCtx.err=40963;return}switch(param){case 514:case 4097:case 4098:case 4103:case 4105:case 4112:case 4117:case 4118:case 4128:case 4129:case 4131:case 4132:case 4133:case 4134:case 4135:case 4628:case 8201:case 8202:case 53248:HEAP32[pValues>>2]=val;break;case 4100:case 4101:case 4102:HEAP32[pValues>>2]=val[0];HEAP32[pValues+4>>2]=val[1];HEAP32[pValues+8>>2]=val[2];break;default:AL.currentCtx.err=40962;return}}Module["_alGetSourceiv"]=_alGetSourceiv;_alGetSourceiv.sig="viii";function _alGetString(param){if(!AL.currentCtx){return 0}if(AL.stringCache[param]){return AL.stringCache[param]}var ret;switch(param){case 0:ret="No Error";break;case 40961:ret="Invalid Name";break;case 40962:ret="Invalid Enum";break;case 40963:ret="Invalid Value";break;case 40964:ret="Invalid Operation";break;case 40965:ret="Out of Memory";break;case 45057:ret="Emscripten";break;case 45058:ret="1.1";break;case 45059:ret="WebAudio";break;case 45060:ret="";for(var ext in AL.AL_EXTENSIONS){ret=ret.concat(ext);ret=ret.concat(" 
")}ret=ret.trim();break;default:AL.currentCtx.err=40962;return 0}ret=allocate(intArrayFromString(ret),ALLOC_NORMAL);AL.stringCache[param]=ret;return ret}Module["_alGetString"]=_alGetString;_alGetString.sig="ii";function _alIsBuffer(bufferId){if(!AL.currentCtx){return false}if(bufferId>AL.buffers.length){return false}if(!AL.buffers[bufferId]){return false}else{return true}}Module["_alIsBuffer"]=_alIsBuffer;_alIsBuffer.sig="ii";function _alIsEnabled(param){if(!AL.currentCtx){return 0}switch(param){case"AL_SOURCE_DISTANCE_MODEL":return AL.currentCtx.sourceDistanceModel?0:1;default:AL.currentCtx.err=40962;return 0}}Module["_alIsEnabled"]=_alIsEnabled;_alIsEnabled.sig="ii";function _alIsExtensionPresent(pExtName){var name=UTF8ToString(pExtName);return AL.AL_EXTENSIONS[name]?1:0}Module["_alIsExtensionPresent"]=_alIsExtensionPresent;_alIsExtensionPresent.sig="ii";function _alIsSource(sourceId){if(!AL.currentCtx){return false}if(!AL.currentCtx.sources[sourceId]){return false}else{return true}}Module["_alIsSource"]=_alIsSource;_alIsSource.sig="ii";function _alListener3f(param,value0,value1,value2){switch(param){case 4100:case 4102:AL.paramArray[0]=value0;AL.paramArray[1]=value1;AL.paramArray[2]=value2;AL.setListenerParam("alListener3f",param,AL.paramArray);break;default:AL.setListenerParam("alListener3f",param,null);break}}Module["_alListener3f"]=_alListener3f;_alListener3f.sig="vifff";function _alListener3i(param,value0,value1,value2){switch(param){case 4100:case 4102:AL.paramArray[0]=value0;AL.paramArray[1]=value1;AL.paramArray[2]=value2;AL.setListenerParam("alListener3i",param,AL.paramArray);break;default:AL.setListenerParam("alListener3i",param,null);break}}Module["_alListener3i"]=_alListener3i;_alListener3i.sig="viiii";function _alListenerf(param,value){switch(param){case 4106:AL.setListenerParam("alListenerf",param,value);break;default:AL.setListenerParam("alListenerf",param,null);break}}Module["_alListenerf"]=_alListenerf;_alListenerf.sig="vif";function _alListenerfv(param,pValues){if(!AL.currentCtx){return}if(!pValues){AL.currentCtx.err=40963;return}switch(param){case 4100:case 4102:AL.paramArray[0]=HEAPF32[pValues>>2];AL.paramArray[1]=HEAPF32[pValues+4>>2];AL.paramArray[2]=HEAPF32[pValues+8>>2];AL.setListenerParam("alListenerfv",param,AL.paramArray);break;case 4111:AL.paramArray[0]=HEAPF32[pValues>>2];AL.paramArray[1]=HEAPF32[pValues+4>>2];AL.paramArray[2]=HEAPF32[pValues+8>>2];AL.paramArray[3]=HEAPF32[pValues+12>>2];AL.paramArray[4]=HEAPF32[pValues+16>>2];AL.paramArray[5]=HEAPF32[pValues+20>>2];AL.setListenerParam("alListenerfv",param,AL.paramArray);break;default:AL.setListenerParam("alListenerfv",param,null);break}}Module["_alListenerfv"]=_alListenerfv;_alListenerfv.sig="vii";function _alListeneri(param,value){AL.setListenerParam("alListeneri",param,null)}Module["_alListeneri"]=_alListeneri;_alListeneri.sig="vii";function _alListeneriv(param,pValues){if(!AL.currentCtx){return}if(!pValues){AL.currentCtx.err=40963;return}switch(param){case 4100:case 4102:AL.paramArray[0]=HEAP32[pValues>>2];AL.paramArray[1]=HEAP32[pValues+4>>2];AL.paramArray[2]=HEAP32[pValues+8>>2];AL.setListenerParam("alListeneriv",param,AL.paramArray);break;case 
4111:AL.paramArray[0]=HEAP32[pValues>>2];AL.paramArray[1]=HEAP32[pValues+4>>2];AL.paramArray[2]=HEAP32[pValues+8>>2];AL.paramArray[3]=HEAP32[pValues+12>>2];AL.paramArray[4]=HEAP32[pValues+16>>2];AL.paramArray[5]=HEAP32[pValues+20>>2];AL.setListenerParam("alListeneriv",param,AL.paramArray);break;default:AL.setListenerParam("alListeneriv",param,null);break}}Module["_alListeneriv"]=_alListeneriv;_alListeneriv.sig="vii";function _alSource3f(sourceId,param,value0,value1,value2){switch(param){case 4100:case 4101:case 4102:AL.paramArray[0]=value0;AL.paramArray[1]=value1;AL.paramArray[2]=value2;AL.setSourceParam("alSource3f",sourceId,param,AL.paramArray);break;default:AL.setSourceParam("alSource3f",sourceId,param,null);break}}Module["_alSource3f"]=_alSource3f;_alSource3f.sig="viifff";function _alSource3i(sourceId,param,value0,value1,value2){switch(param){case 4100:case 4101:case 4102:AL.paramArray[0]=value0;AL.paramArray[1]=value1;AL.paramArray[2]=value2;AL.setSourceParam("alSource3i",sourceId,param,AL.paramArray);break;default:AL.setSourceParam("alSource3i",sourceId,param,null);break}}Module["_alSource3i"]=_alSource3i;_alSource3i.sig="viiiii";function _alSourcePause(sourceId){if(!AL.currentCtx){return}var src=AL.currentCtx.sources[sourceId];if(!src){AL.currentCtx.err=40961;return}AL.setSourceState(src,4115)}Module["_alSourcePause"]=_alSourcePause;_alSourcePause.sig="vi";function _alSourcePausev(count,pSourceIds){if(!AL.currentCtx){return}if(!pSourceIds){AL.currentCtx.err=40963}for(var i=0;i>2]]){AL.currentCtx.err=40961;return}}for(var i=0;i>2],4115)}}Module["_alSourcePausev"]=_alSourcePausev;_alSourcePausev.sig="vii";function _alSourcePlay(sourceId){if(!AL.currentCtx){return}var src=AL.currentCtx.sources[sourceId];if(!src){AL.currentCtx.err=40961;return}AL.setSourceState(src,4114)}Module["_alSourcePlay"]=_alSourcePlay;_alSourcePlay.sig="vi";function _alSourcePlayv(count,pSourceIds){if(!AL.currentCtx){return}if(!pSourceIds){AL.currentCtx.err=40963}for(var i=0;i>2]]){AL.currentCtx.err=40961;return}}for(var i=0;i>2],4114)}}Module["_alSourcePlayv"]=_alSourcePlayv;_alSourcePlayv.sig="vii";function _alSourceQueueBuffers(sourceId,count,pBufferIds){if(!AL.currentCtx){return}var src=AL.currentCtx.sources[sourceId];if(!src){AL.currentCtx.err=40961;return}if(src.type===4136){AL.currentCtx.err=40964;return}if(count===0){return}var templateBuf=AL.buffers[0];for(var i=0;i>2];var buf=AL.buffers[bufId];if(!buf){AL.currentCtx.err=40961;return}if(templateBuf.id!==0&&(buf.frequency!==templateBuf.frequency||buf.bytesPerSample!==templateBuf.bytesPerSample||buf.channels!==templateBuf.channels)){AL.currentCtx.err=40964}}if(src.bufQueue.length===1&&src.bufQueue[0].id===0){src.bufQueue.length=0}src.type=4137;for(var i=0;i>2];var buf=AL.buffers[bufId];buf.refCount++;src.bufQueue.push(buf)}if(src.looping){AL.cancelPendingSourceAudio(src)}AL.initSourcePanner(src);AL.scheduleSourceAudio(src)}Module["_alSourceQueueBuffers"]=_alSourceQueueBuffers;_alSourceQueueBuffers.sig="viii";function _alSourceRewind(sourceId){if(!AL.currentCtx){return}var src=AL.currentCtx.sources[sourceId];if(!src){AL.currentCtx.err=40961;return}AL.setSourceState(src,4116);AL.setSourceState(src,4113)}Module["_alSourceRewind"]=_alSourceRewind;_alSourceRewind.sig="vi";function _alSourceRewindv(count,pSourceIds){if(!AL.currentCtx){return}if(!pSourceIds){AL.currentCtx.err=40963}for(var i=0;i>2]]){AL.currentCtx.err=40961;return}}for(var i=0;i>2],4113)}}Module["_alSourceRewindv"]=_alSourceRewindv;_alSourceRewindv.sig="vii";function 
_alSourceStop(sourceId){if(!AL.currentCtx){return}var src=AL.currentCtx.sources[sourceId];if(!src){AL.currentCtx.err=40961;return}AL.setSourceState(src,4116)}Module["_alSourceStop"]=_alSourceStop;_alSourceStop.sig="vi";function _alSourceStopv(count,pSourceIds){if(!AL.currentCtx){return}if(!pSourceIds){AL.currentCtx.err=40963}for(var i=0;i>2]]){AL.currentCtx.err=40961;return}}for(var i=0;i>2],4116)}}Module["_alSourceStopv"]=_alSourceStopv;_alSourceStopv.sig="vii";function _alSourceUnqueueBuffers(sourceId,count,pBufferIds){if(!AL.currentCtx){return}var src=AL.currentCtx.sources[sourceId];if(!src){AL.currentCtx.err=40961;return}if(count>(src.bufQueue.length===1&&src.bufQueue[0].id===0?0:src.bufsProcessed)){AL.currentCtx.err=40963;return}if(count===0){return}for(var i=0;i>2]=buf.id;src.bufsProcessed--}if(src.bufQueue.length===0){src.bufQueue.push(AL.buffers[0])}AL.initSourcePanner(src);AL.scheduleSourceAudio(src)}Module["_alSourceUnqueueBuffers"]=_alSourceUnqueueBuffers;_alSourceUnqueueBuffers.sig="viii";function _alSourcef(sourceId,param,value){switch(param){case 4097:case 4098:case 4099:case 4106:case 4109:case 4110:case 4128:case 4129:case 4130:case 4131:case 4132:case 4133:case 4134:case 8203:AL.setSourceParam("alSourcef",sourceId,param,value);break;default:AL.setSourceParam("alSourcef",sourceId,param,null);break}}Module["_alSourcef"]=_alSourcef;_alSourcef.sig="viif";function _alSourcefv(sourceId,param,pValues){if(!AL.currentCtx){return}if(!pValues){AL.currentCtx.err=40963;return}switch(param){case 4097:case 4098:case 4099:case 4106:case 4109:case 4110:case 4128:case 4129:case 4130:case 4131:case 4132:case 4133:case 4134:case 8203:var val=HEAPF32[pValues>>2];AL.setSourceParam("alSourcefv",sourceId,param,val);break;case 4100:case 4101:case 4102:AL.paramArray[0]=HEAPF32[pValues>>2];AL.paramArray[1]=HEAPF32[pValues+4>>2];AL.paramArray[2]=HEAPF32[pValues+8>>2];AL.setSourceParam("alSourcefv",sourceId,param,AL.paramArray);break;default:AL.setSourceParam("alSourcefv",sourceId,param,null);break}}Module["_alSourcefv"]=_alSourcefv;_alSourcefv.sig="viii";function _alSourceiv(sourceId,param,pValues){if(!AL.currentCtx){return}if(!pValues){AL.currentCtx.err=40963;return}switch(param){case 514:case 4097:case 4098:case 4103:case 4105:case 4128:case 4129:case 4131:case 4132:case 4133:case 4134:case 4628:case 8201:case 8202:case 53248:var val=HEAP32[pValues>>2];AL.setSourceParam("alSourceiv",sourceId,param,val);break;case 4100:case 4101:case 4102:AL.paramArray[0]=HEAP32[pValues>>2];AL.paramArray[1]=HEAP32[pValues+4>>2];AL.paramArray[2]=HEAP32[pValues+8>>2];AL.setSourceParam("alSourceiv",sourceId,param,AL.paramArray);break;default:AL.setSourceParam("alSourceiv",sourceId,param,null);break}}Module["_alSourceiv"]=_alSourceiv;_alSourceiv.sig="viii";function _alSpeedOfSound(value){AL.setGlobalParam("alSpeedOfSound",49155,value)}Module["_alSpeedOfSound"]=_alSpeedOfSound;_alSpeedOfSound.sig="vi";function _alarm(seconds){setTimeout(function(){if(__sigalrm_handler)wasmTable.get(__sigalrm_handler)(0)},seconds*1e3)}Module["_alarm"]=_alarm;function _alcCaptureCloseDevice(deviceId){var c=AL.requireValidCaptureDevice(deviceId,"alcCaptureCloseDevice");if(!c)return false;delete AL.captures[deviceId];AL.freeIds.push(deviceId);if(c.mediaStreamSourceNode)c.mediaStreamSourceNode.disconnect();if(c.mergerNode)c.mergerNode.disconnect();if(c.splitterNode)c.splitterNode.disconnect();if(c.scriptProcessorNode)c.scriptProcessorNode.disconnect();if(c.mediaStream){c.mediaStream.getTracks().forEach(function(track){track.stop()})}delete 
c.buffers;c.capturedFrameCount=0;c.isCapturing=false;return true}Module["_alcCaptureCloseDevice"]=_alcCaptureCloseDevice;_alcCaptureCloseDevice.sig="ii";function listenOnce(object,event,func){object.addEventListener(event,func,{"once":true})}Module["listenOnce"]=listenOnce;function autoResumeAudioContext(ctx,elements){if(!elements){elements=[document,document.getElementById("canvas")]}["keydown","mousedown","touchstart"].forEach(function(event){elements.forEach(function(element){if(element){listenOnce(element,event,function(){if(ctx.state==="suspended")ctx.resume()})}})})}Module["autoResumeAudioContext"]=autoResumeAudioContext;function _alcCaptureOpenDevice(pDeviceName,requestedSampleRate,format,bufferFrameCapacity){var resolvedDeviceName=AL.CAPTURE_DEVICE_NAME;if(pDeviceName!==0){resolvedDeviceName=UTF8ToString(pDeviceName);if(resolvedDeviceName!==AL.CAPTURE_DEVICE_NAME){AL.alcErr=40965;return 0}}if(bufferFrameCapacity<0){AL.alcErr=40964;return 0}navigator.getUserMedia=navigator.getUserMedia||navigator.webkitGetUserMedia||navigator.mozGetUserMedia||navigator.msGetUserMedia;var has_getUserMedia=navigator.getUserMedia||navigator.mediaDevices&&navigator.mediaDevices.getUserMedia;if(!has_getUserMedia){AL.alcErr=40965;return 0}var AudioContext=window.AudioContext||window.webkitAudioContext;if(!AL.sharedCaptureAudioCtx){try{AL.sharedCaptureAudioCtx=new AudioContext}catch(e){AL.alcErr=40965;return 0}}autoResumeAudioContext(AL.sharedCaptureAudioCtx);var outputChannelCount;switch(format){case 65552:case 4353:case 4352:outputChannelCount=1;break;case 65553:case 4355:case 4354:outputChannelCount=2;break;default:AL.alcErr=40964;return 0}function newF32Array(cap){return new Float32Array(cap)}function newI16Array(cap){return new Int16Array(cap)}function newU8Array(cap){return new Uint8Array(cap)}var requestedSampleType;var newSampleArray;switch(format){case 65552:case 65553:requestedSampleType="f32";newSampleArray=newF32Array;break;case 4353:case 4355:requestedSampleType="i16";newSampleArray=newI16Array;break;case 4352:case 4354:requestedSampleType="u8";newSampleArray=newU8Array;break}var buffers=[];try{for(var chan=0;chanoutputChannelCount){newCapture.mergerNode=newCapture.audioCtx.createChannelMerger(inputChannelCount);newCapture.mediaStreamSourceNode.connect(newCapture.mergerNode);newCapture.mergerNode.connect(newCapture.scriptProcessorNode)}else if(inputChannelCountc.capturedFrameCount/fratio){err("alcCaptureSamples() with invalid bufferSize");AL.alcErr=40964;return}function setF32Sample(i,sample){HEAPF32[pFrames+4*i>>2]=sample}function setI16Sample(i,sample){HEAP16[pFrames+2*i>>1]=sample}function setU8Sample(i,sample){HEAP8[pFrames+i>>0]=sample}var setSample;switch(c.requestedSampleType){case"f32":setSample=setF32Sample;break;case"i16":setSample=setI16Sample;break;case"u8":setSample=setU8Sample;break;default:return}if(Math.floor(fratio)==fratio){for(var i=0,frame_i=0;frame_i0){return 0}delete AL.deviceRefCounts[deviceId];AL.freeIds.push(deviceId);return 1}Module["_alcCloseDevice"]=_alcCloseDevice;_alcCloseDevice.sig="ii";function _alcCreateContext(deviceId,pAttrList){if(!(deviceId in AL.deviceRefCounts)){AL.alcErr=40961;return 0}var options=null;var attrs=[];var hrtf=null;pAttrList>>=2;if(pAttrList){var attr=0;var val=0;while(true){attr=HEAP32[pAttrList++];attrs.push(attr);if(attr===0){break}val=HEAP32[pAttrList++];attrs.push(val);switch(attr){case 4103:if(!options){options={}}options.sampleRate=val;break;case 4112:case 4113:break;case 6546:switch(val){case 0:hrtf=false;break;case 
1:hrtf=true;break;case 2:break;default:AL.alcErr=40964;return 0}break;case 6550:if(val!==0){AL.alcErr=40964;return 0}break;default:AL.alcErr=40964;return 0}}}var AudioContext=window.AudioContext||window.webkitAudioContext;var ac=null;try{if(options){ac=new AudioContext(options)}else{ac=new AudioContext}}catch(e){if(e.name==="NotSupportedError"){AL.alcErr=40964}else{AL.alcErr=40961}return 0}autoResumeAudioContext(ac);if(typeof ac.createGain==="undefined"){ac.createGain=ac.createGainNode}var gain=ac.createGain();gain.connect(ac.destination);var ctx={deviceId:deviceId,id:AL.newId(),attrs:attrs,audioCtx:ac,listener:{position:[0,0,0],velocity:[0,0,0],direction:[0,0,0],up:[0,0,0]},sources:[],interval:setInterval(function(){AL.scheduleContextAudio(ctx)},AL.QUEUE_INTERVAL),gain:gain,distanceModel:53250,speedOfSound:343.3,dopplerFactor:1,sourceDistanceModel:false,hrtf:hrtf||false,_err:0,get err(){return this._err},set err(val){if(this._err===0||val===0){this._err=val}}};AL.deviceRefCounts[deviceId]++;AL.contexts[ctx.id]=ctx;if(hrtf!==null){for(var ctxId in AL.contexts){var c=AL.contexts[ctxId];if(c.deviceId===deviceId){c.hrtf=hrtf;AL.updateContextGlobal(c)}}}return ctx.id}Module["_alcCreateContext"]=_alcCreateContext;_alcCreateContext.sig="iii";function _alcDestroyContext(contextId){var ctx=AL.contexts[contextId];if(AL.currentCtx===ctx){AL.alcErr=40962;return}if(AL.contexts[contextId].interval){clearInterval(AL.contexts[contextId].interval)}AL.deviceRefCounts[ctx.deviceId]--;delete AL.contexts[contextId];AL.freeIds.push(contextId)}Module["_alcDestroyContext"]=_alcDestroyContext;_alcDestroyContext.sig="vi";function _alcGetContextsDevice(contextId){if(contextId in AL.contexts){return AL.contexts[contextId].deviceId}else{return 0}}Module["_alcGetContextsDevice"]=_alcGetContextsDevice;_alcGetContextsDevice.sig="ii";function _alcGetCurrentContext(){if(AL.currentCtx!==null){return AL.currentCtx.id}else{return 0}}Module["_alcGetCurrentContext"]=_alcGetCurrentContext;_alcGetCurrentContext.sig="i";function _alcGetEnumValue(deviceId,pEnumName){if(deviceId!==0&&!(deviceId in AL.deviceRefCounts)){return 0}else if(!pEnumName){AL.alcErr=40964;return 0}var name=UTF8ToString(pEnumName);switch(name){case"ALC_NO_ERROR":return 0;case"ALC_INVALID_DEVICE":return 40961;case"ALC_INVALID_CONTEXT":return 40962;case"ALC_INVALID_ENUM":return 40963;case"ALC_INVALID_VALUE":return 40964;case"ALC_OUT_OF_MEMORY":return 40965;case"ALC_MAJOR_VERSION":return 4096;case"ALC_MINOR_VERSION":return 4097;case"ALC_ATTRIBUTES_SIZE":return 4098;case"ALC_ALL_ATTRIBUTES":return 4099;case"ALC_DEFAULT_DEVICE_SPECIFIER":return 4100;case"ALC_DEVICE_SPECIFIER":return 4101;case"ALC_EXTENSIONS":return 4102;case"ALC_FREQUENCY":return 4103;case"ALC_REFRESH":return 4104;case"ALC_SYNC":return 4105;case"ALC_MONO_SOURCES":return 4112;case"ALC_STEREO_SOURCES":return 4113;case"ALC_CAPTURE_DEVICE_SPECIFIER":return 784;case"ALC_CAPTURE_DEFAULT_DEVICE_SPECIFIER":return 785;case"ALC_CAPTURE_SAMPLES":return 786;case"ALC_HRTF_SOFT":return 6546;case"ALC_HRTF_ID_SOFT":return 6550;case"ALC_DONT_CARE_SOFT":return 2;case"ALC_HRTF_STATUS_SOFT":return 6547;case"ALC_NUM_HRTF_SPECIFIERS_SOFT":return 6548;case"ALC_HRTF_SPECIFIER_SOFT":return 6549;case"ALC_HRTF_DISABLED_SOFT":return 0;case"ALC_HRTF_ENABLED_SOFT":return 1;case"ALC_HRTF_DENIED_SOFT":return 2;case"ALC_HRTF_REQUIRED_SOFT":return 3;case"ALC_HRTF_HEADPHONES_DETECTED_SOFT":return 4;case"ALC_HRTF_UNSUPPORTED_FORMAT_SOFT":return 5;default:AL.alcErr=40964;return 
0}}Module["_alcGetEnumValue"]=_alcGetEnumValue;_alcGetEnumValue.sig="iii";function _alcGetError(deviceId){var err=AL.alcErr;AL.alcErr=0;return err}Module["_alcGetError"]=_alcGetError;_alcGetError.sig="ii";function _alcGetIntegerv(deviceId,param,size,pValues){if(size===0||!pValues){return}switch(param){case 4096:HEAP32[pValues>>2]=1;break;case 4097:HEAP32[pValues>>2]=1;break;case 4098:if(!(deviceId in AL.deviceRefCounts)){AL.alcErr=40961;return}if(!AL.currentCtx){AL.alcErr=40962;return}HEAP32[pValues>>2]=AL.currentCtx.attrs.length;break;case 4099:if(!(deviceId in AL.deviceRefCounts)){AL.alcErr=40961;return}if(!AL.currentCtx){AL.alcErr=40962;return}for(var i=0;i>2]=AL.currentCtx.attrs[i]}break;case 4103:if(!(deviceId in AL.deviceRefCounts)){AL.alcErr=40961;return}if(!AL.currentCtx){AL.alcErr=40962;return}HEAP32[pValues>>2]=AL.currentCtx.audioCtx.sampleRate;break;case 4112:case 4113:if(!(deviceId in AL.deviceRefCounts)){AL.alcErr=40961;return}if(!AL.currentCtx){AL.alcErr=40962;return}HEAP32[pValues>>2]=2147483647;break;case 6546:case 6547:if(!(deviceId in AL.deviceRefCounts)){AL.alcErr=40961;return}var hrtfStatus=0;for(var ctxId in AL.contexts){var ctx=AL.contexts[ctxId];if(ctx.deviceId===deviceId){hrtfStatus=ctx.hrtf?1:0}}HEAP32[pValues>>2]=hrtfStatus;break;case 6548:if(!(deviceId in AL.deviceRefCounts)){AL.alcErr=40961;return}HEAP32[pValues>>2]=1;break;case 131075:if(!(deviceId in AL.deviceRefCounts)){AL.alcErr=40961;return}if(!AL.currentCtx){AL.alcErr=40962;return}HEAP32[pValues>>2]=1;case 786:var c=AL.requireValidCaptureDevice(deviceId,"alcGetIntegerv");if(!c){return}var n=c.capturedFrameCount;var dstfreq=c.requestedSampleRate;var srcfreq=c.audioCtx.sampleRate;var nsamples=Math.floor(n*(dstfreq/srcfreq));HEAP32[pValues>>2]=nsamples;break;default:AL.alcErr=40963;return}}Module["_alcGetIntegerv"]=_alcGetIntegerv;_alcGetIntegerv.sig="viiii";function _alcGetString(deviceId,param){if(AL.alcStringCache[param]){return AL.alcStringCache[param]}var ret;switch(param){case 0:ret="No Error";break;case 40961:ret="Invalid Device";break;case 40962:ret="Invalid Context";break;case 40963:ret="Invalid Enum";break;case 40964:ret="Invalid Value";break;case 40965:ret="Out of Memory";break;case 4100:if(typeof AudioContext!=="undefined"||typeof webkitAudioContext!=="undefined"){ret=AL.DEVICE_NAME}else{return 0}break;case 4101:if(typeof AudioContext!=="undefined"||typeof webkitAudioContext!=="undefined"){ret=AL.DEVICE_NAME.concat("\0")}else{ret="\0"}break;case 785:ret=AL.CAPTURE_DEVICE_NAME;break;case 784:if(deviceId===0)ret=AL.CAPTURE_DEVICE_NAME.concat("\0");else{var c=AL.requireValidCaptureDevice(deviceId,"alcGetString");if(!c){return 0}ret=c.deviceName}break;case 4102:if(!deviceId){AL.alcErr=40961;return 0}ret="";for(var ext in AL.ALC_EXTENSIONS){ret=ret.concat(ext);ret=ret.concat(" ")}ret=ret.trim();break;default:AL.alcErr=40963;return 0}ret=allocate(intArrayFromString(ret),ALLOC_NORMAL);AL.alcStringCache[param]=ret;return ret}Module["_alcGetString"]=_alcGetString;_alcGetString.sig="iii";function _alcIsExtensionPresent(deviceId,pExtName){var name=UTF8ToString(pExtName);return AL.ALC_EXTENSIONS[name]?1:0}Module["_alcIsExtensionPresent"]=_alcIsExtensionPresent;_alcIsExtensionPresent.sig="iii";function _alcMakeContextCurrent(contextId){if(contextId===0){AL.currentCtx=null;return 0}else{AL.currentCtx=AL.contexts[contextId];return 1}}Module["_alcMakeContextCurrent"]=_alcMakeContextCurrent;_alcMakeContextCurrent.sig="ii";function _alcOpenDevice(pDeviceName){if(pDeviceName){var 
name=UTF8ToString(pDeviceName);if(name!==AL.DEVICE_NAME){return 0}}if(typeof AudioContext!=="undefined"||typeof webkitAudioContext!=="undefined"){var deviceId=AL.newId();AL.deviceRefCounts[deviceId]=0;return deviceId}else{return 0}}Module["_alcOpenDevice"]=_alcOpenDevice;_alcOpenDevice.sig="ii";function _alcProcessContext(contextId){}Module["_alcProcessContext"]=_alcProcessContext;_alcProcessContext.sig="vi";function _alcSuspendContext(contextId){}Module["_alcSuspendContext"]=_alcSuspendContext;_alcSuspendContext.sig="vi";function _chroot(path){setErrNo(2);return-1}Module["_chroot"]=_chroot;_chroot.sig="ii";function _clock(){if(_clock.start===undefined)_clock.start=Date.now();return(Date.now()-_clock.start)*(1e6/1e3)|0}Module["_clock"]=_clock;_clock.sig="i";function _emscripten_get_now_res(){if(ENVIRONMENT_IS_NODE){return 1}else return 1e3}Module["_emscripten_get_now_res"]=_emscripten_get_now_res;function _clock_getres(clk_id,res){var nsec;if(clk_id===0){nsec=1e3*1e3}else if(clk_id===1&&_emscripten_get_now_is_monotonic){nsec=_emscripten_get_now_res()}else{setErrNo(28);return-1}HEAP32[res>>2]=nsec/1e9|0;HEAP32[res+4>>2]=nsec;return 0}Module["_clock_getres"]=_clock_getres;var DLFCN={error:null,errorMsg:null};Module["DLFCN"]=DLFCN;function _dlclose(handle){var lib=LDSO.loadedLibs[handle];if(!lib){DLFCN.errorMsg="Tried to dlclose() unopened handle: "+handle;return 1}if(--lib.refcount==0){delete LDSO.loadedLibNames[lib.name];delete LDSO.loadedLibs[handle]}return 0}Module["_dlclose"]=_dlclose;_dlclose.sig="ii";function stringToNewUTF8(jsString){var length=lengthBytesUTF8(jsString)+1;var cString=_malloc(length);stringToUTF8(jsString,cString,length);return cString}Module["stringToNewUTF8"]=stringToNewUTF8;function _dlerror(){if(DLFCN.errorMsg===null){return 0}if(DLFCN.error)_free(DLFCN.error);DLFCN.error=stringToNewUTF8(DLFCN.errorMsg);DLFCN.errorMsg=null;return DLFCN.error}Module["_dlerror"]=_dlerror;_dlerror.sig="i";var ENV={};Module["ENV"]=ENV;function dlopenInternal(filenameAddr,flags,jsflags){var searchpaths=[];var filename;if(filenameAddr===0){filename="__main__"}else{filename=UTF8ToString(filenameAddr);var isValidFile=function(filename){var target=FS.findObject(filename);return target&&!target.isFolder&&!target.isDevice};if(!isValidFile(filename)){if(ENV["LD_LIBRARY_PATH"]){searchpaths=ENV["LD_LIBRARY_PATH"].split(":")}for(var ident in searchpaths){var searchfile=PATH.join2(searchpaths[ident],filename);if(isValidFile(searchfile)){filename=searchfile;break}}}}if(!(flags&(1|2))){DLFCN.errorMsg="invalid mode for dlopen(): Either RTLD_LAZY or RTLD_NOW is required";return 0}var combinedFlags={global:Boolean(flags&256),nodelete:Boolean(flags&4096),loadAsync:jsflags.loadAsync,fs:jsflags.fs};if(jsflags.loadAsync){return loadDynamicLibrary(filename,combinedFlags)}try{return loadDynamicLibrary(filename,combinedFlags)}catch(e){DLFCN.errorMsg="Could not load dynamic lib: "+filename+"\n"+e;return 0}}Module["dlopenInternal"]=dlopenInternal;function _dlopen(filename,flags){var jsflags={loadAsync:false,fs:FS};return dlopenInternal(filename,flags,jsflags)}Module["_dlopen"]=_dlopen;_dlopen.sig="iii";function _dlsym(handle,symbol){symbol=UTF8ToString(symbol);var result;if(handle==0){result=resolveGlobalSymbol(symbol,true);if(!result){DLFCN.errorMsg='Tried to lookup unknown symbol "'+symbol+'" in dynamic lib: RTLD_DEFAULT';return 0}}else{var lib=LDSO.loadedLibs[handle];if(!lib){DLFCN.errorMsg="Tried to dlsym() from an unopened handle: "+handle;return 
0}if(!lib.module.hasOwnProperty(symbol)){DLFCN.errorMsg='Tried to lookup unknown symbol "'+symbol+'" in dynamic lib: '+lib.name;return 0}result=lib.module["orig$"+symbol];if(!result)result=lib.module[symbol]}if(typeof result==="function"){return addFunctionWasm(result,result.sig)}else{return result}}Module["_dlsym"]=_dlsym;_dlsym.sig="iii";function _emscripten_alcDevicePauseSOFT(deviceId){if(!(deviceId in AL.deviceRefCounts)){AL.alcErr=40961;return}if(AL.paused){return}AL.paused=true;for(var ctxId in AL.contexts){var ctx=AL.contexts[ctxId];if(ctx.deviceId!==deviceId){continue}ctx.audioCtx.suspend();clearInterval(ctx.interval);ctx.interval=null}}Module["_emscripten_alcDevicePauseSOFT"]=_emscripten_alcDevicePauseSOFT;_emscripten_alcDevicePauseSOFT.sig="vi";function _emscripten_alcDeviceResumeSOFT(deviceId){if(!(deviceId in AL.deviceRefCounts)){AL.alcErr=40961;return}if(!AL.paused){return}AL.paused=false;for(var ctxId in AL.contexts){var ctx=AL.contexts[ctxId];if(ctx.deviceId!==deviceId){continue}ctx.interval=setInterval(function(){AL.scheduleContextAudio(ctx)},AL.QUEUE_INTERVAL);ctx.audioCtx.resume()}}Module["_emscripten_alcDeviceResumeSOFT"]=_emscripten_alcDeviceResumeSOFT;_emscripten_alcDeviceResumeSOFT.sig="vi";function _emscripten_alcGetStringiSOFT(deviceId,param,index){if(!(deviceId in AL.deviceRefCounts)){AL.alcErr=40961;return 0}if(AL.alcStringCache[param]){return AL.alcStringCache[param]}var ret;switch(param){case 6549:if(index===0){ret="Web Audio HRTF"}else{AL.alcErr=40964;return 0}break;default:if(index===0){return _alcGetString(deviceId,param)}else{AL.alcErr=40963;return 0}}ret=allocate(intArrayFromString(ret),ALLOC_NORMAL);AL.alcStringCache[param]=ret;return ret}Module["_emscripten_alcGetStringiSOFT"]=_emscripten_alcGetStringiSOFT;_emscripten_alcGetStringiSOFT.sig="iiii";function _emscripten_alcResetDeviceSOFT(deviceId,pAttrList){if(!(deviceId in AL.deviceRefCounts)){AL.alcErr=40961;return 0}var hrtf=null;pAttrList>>=2;if(pAttrList){var attr=0;var val=0;while(true){attr=HEAP32[pAttrList++];if(attr===0){break}val=HEAP32[pAttrList++];switch(attr){case 6546:if(val===1){hrtf=true}else if(val===0){hrtf=false}break}}}if(hrtf!==null){for(var ctxId in AL.contexts){var ctx=AL.contexts[ctxId];if(ctx.deviceId===deviceId){ctx.hrtf=hrtf;AL.updateContextGlobal(ctx)}}}return 1}Module["_emscripten_alcResetDeviceSOFT"]=_emscripten_alcResetDeviceSOFT;_emscripten_alcResetDeviceSOFT.sig="iii";var readAsmConstArgsArray=[];Module["readAsmConstArgsArray"]=readAsmConstArgsArray;function readAsmConstArgs(sigPtr,buf){readAsmConstArgsArray.length=0;var ch;buf>>=2;while(ch=HEAPU8[sigPtr++]){var double=ch<105;if(double&&buf&1)buf++;readAsmConstArgsArray.push(double?HEAPF64[buf++>>1]:HEAP32[buf]);++buf}return readAsmConstArgsArray}Module["readAsmConstArgs"]=readAsmConstArgs;function _emscripten_asm_const_int(code,sigPtr,argbuf){code-=1024;var args=readAsmConstArgs(sigPtr,argbuf);return ASM_CONSTS[code].apply(null,args)}Module["_emscripten_asm_const_int"]=_emscripten_asm_const_int;_emscripten_asm_const_int.sig="iiii";function _emscripten_exit_with_live_runtime(){throw"unwind"}Module["_emscripten_exit_with_live_runtime"]=_emscripten_exit_with_live_runtime;_emscripten_exit_with_live_runtime.sig="v";function _emscripten_get_heap_max(){return 2147483648}Module["_emscripten_get_heap_max"]=_emscripten_get_heap_max;function __webgl_enable_ANGLE_instanced_arrays(ctx){var 
ext=ctx.getExtension("ANGLE_instanced_arrays");if(ext){ctx["vertexAttribDivisor"]=function(index,divisor){ext["vertexAttribDivisorANGLE"](index,divisor)};ctx["drawArraysInstanced"]=function(mode,first,count,primcount){ext["drawArraysInstancedANGLE"](mode,first,count,primcount)};ctx["drawElementsInstanced"]=function(mode,count,type,indices,primcount){ext["drawElementsInstancedANGLE"](mode,count,type,indices,primcount)};return 1}}Module["__webgl_enable_ANGLE_instanced_arrays"]=__webgl_enable_ANGLE_instanced_arrays;function __webgl_enable_OES_vertex_array_object(ctx){var ext=ctx.getExtension("OES_vertex_array_object");if(ext){ctx["createVertexArray"]=function(){return ext["createVertexArrayOES"]()};ctx["deleteVertexArray"]=function(vao){ext["deleteVertexArrayOES"](vao)};ctx["bindVertexArray"]=function(vao){ext["bindVertexArrayOES"](vao)};ctx["isVertexArray"]=function(vao){return ext["isVertexArrayOES"](vao)};return 1}}Module["__webgl_enable_OES_vertex_array_object"]=__webgl_enable_OES_vertex_array_object;function __webgl_enable_WEBGL_draw_buffers(ctx){var ext=ctx.getExtension("WEBGL_draw_buffers");if(ext){ctx["drawBuffers"]=function(n,bufs){ext["drawBuffersWEBGL"](n,bufs)};return 1}}Module["__webgl_enable_WEBGL_draw_buffers"]=__webgl_enable_WEBGL_draw_buffers;function __webgl_enable_WEBGL_multi_draw(ctx){return!!(ctx.multiDrawWebgl=ctx.getExtension("WEBGL_multi_draw"))}Module["__webgl_enable_WEBGL_multi_draw"]=__webgl_enable_WEBGL_multi_draw;var GL={counter:1,buffers:[],programs:[],framebuffers:[],renderbuffers:[],textures:[],shaders:[],vaos:[],contexts:[],offscreenCanvases:{},queries:[],stringCache:{},unpackAlignment:4,recordError:function recordError(errorCode){if(!GL.lastError){GL.lastError=errorCode}},getNewId:function(table){var ret=GL.counter++;for(var i=table.length;i>2]:-1;source+=UTF8ToString(HEAP32[string+i*4>>2],len<0?undefined:len)}return source},createContext:function(canvas,webGLContextAttributes){if(!canvas.getContextSafariWebGL2Fixed){canvas.getContextSafariWebGL2Fixed=canvas.getContext;canvas.getContext=function(ver,attrs){var gl=canvas.getContextSafariWebGL2Fixed(ver,attrs);return ver=="webgl"==gl instanceof WebGLRenderingContext?gl:null}}var ctx=canvas.getContext("webgl",webGLContextAttributes);if(!ctx)return 0;var handle=GL.registerContext(ctx,webGLContextAttributes);return handle},registerContext:function(ctx,webGLContextAttributes){var handle=GL.getNewId(GL.contexts);var context={handle:handle,attributes:webGLContextAttributes,version:webGLContextAttributes.majorVersion,GLctx:ctx};if(ctx.canvas)ctx.canvas.GLctxObject=context;GL.contexts[handle]=context;if(typeof webGLContextAttributes.enableExtensionsByDefault==="undefined"||webGLContextAttributes.enableExtensionsByDefault){GL.initExtensions(context)}return handle},makeContextCurrent:function(contextHandle){GL.currentContext=GL.contexts[contextHandle];Module.ctx=GLctx=GL.currentContext&&GL.currentContext.GLctx;return!(contextHandle&&!GLctx)},getContext:function(contextHandle){return GL.contexts[contextHandle]},deleteContext:function(contextHandle){if(GL.currentContext===GL.contexts[contextHandle])GL.currentContext=null;if(typeof 
JSEvents==="object")JSEvents.removeAllHandlersOnTarget(GL.contexts[contextHandle].GLctx.canvas);if(GL.contexts[contextHandle]&&GL.contexts[contextHandle].GLctx.canvas)GL.contexts[contextHandle].GLctx.canvas.GLctxObject=undefined;GL.contexts[contextHandle]=null},initExtensions:function(context){if(!context)context=GL.currentContext;if(context.initExtensionsDone)return;context.initExtensionsDone=true;var GLctx=context.GLctx;__webgl_enable_ANGLE_instanced_arrays(GLctx);__webgl_enable_OES_vertex_array_object(GLctx);__webgl_enable_WEBGL_draw_buffers(GLctx);{GLctx.disjointTimerQueryExt=GLctx.getExtension("EXT_disjoint_timer_query")}__webgl_enable_WEBGL_multi_draw(GLctx);var exts=GLctx.getSupportedExtensions()||[];exts.forEach(function(ext){if(!ext.includes("lose_context")&&!ext.includes("debug")){GLctx.getExtension(ext)}})}};Module["GL"]=GL;function _emscripten_glActiveTexture(x0){GLctx["activeTexture"](x0)}Module["_emscripten_glActiveTexture"]=_emscripten_glActiveTexture;_emscripten_glActiveTexture.sig="vi";function _emscripten_glAttachShader(program,shader){GLctx.attachShader(GL.programs[program],GL.shaders[shader])}Module["_emscripten_glAttachShader"]=_emscripten_glAttachShader;_emscripten_glAttachShader.sig="vii";function _emscripten_glBeginQueryEXT(target,id){GLctx.disjointTimerQueryExt["beginQueryEXT"](target,GL.queries[id])}Module["_emscripten_glBeginQueryEXT"]=_emscripten_glBeginQueryEXT;_emscripten_glBeginQueryEXT.sig="vii";function _emscripten_glBindAttribLocation(program,index,name){GLctx.bindAttribLocation(GL.programs[program],index,UTF8ToString(name))}Module["_emscripten_glBindAttribLocation"]=_emscripten_glBindAttribLocation;_emscripten_glBindAttribLocation.sig="viii";function _emscripten_glBindBuffer(target,buffer){GLctx.bindBuffer(target,GL.buffers[buffer])}Module["_emscripten_glBindBuffer"]=_emscripten_glBindBuffer;_emscripten_glBindBuffer.sig="vii";function _emscripten_glBindFramebuffer(target,framebuffer){GLctx.bindFramebuffer(target,GL.framebuffers[framebuffer])}Module["_emscripten_glBindFramebuffer"]=_emscripten_glBindFramebuffer;_emscripten_glBindFramebuffer.sig="vii";function _emscripten_glBindRenderbuffer(target,renderbuffer){GLctx.bindRenderbuffer(target,GL.renderbuffers[renderbuffer])}Module["_emscripten_glBindRenderbuffer"]=_emscripten_glBindRenderbuffer;_emscripten_glBindRenderbuffer.sig="vii";function _emscripten_glBindTexture(target,texture){GLctx.bindTexture(target,GL.textures[texture])}Module["_emscripten_glBindTexture"]=_emscripten_glBindTexture;_emscripten_glBindTexture.sig="vii";function _emscripten_glBindVertexArrayOES(vao){GLctx["bindVertexArray"](GL.vaos[vao])}Module["_emscripten_glBindVertexArrayOES"]=_emscripten_glBindVertexArrayOES;_emscripten_glBindVertexArrayOES.sig="vi";function _emscripten_glBlendColor(x0,x1,x2,x3){GLctx["blendColor"](x0,x1,x2,x3)}Module["_emscripten_glBlendColor"]=_emscripten_glBlendColor;_emscripten_glBlendColor.sig="vffff";function _emscripten_glBlendEquation(x0){GLctx["blendEquation"](x0)}Module["_emscripten_glBlendEquation"]=_emscripten_glBlendEquation;_emscripten_glBlendEquation.sig="vi";function _emscripten_glBlendEquationSeparate(x0,x1){GLctx["blendEquationSeparate"](x0,x1)}Module["_emscripten_glBlendEquationSeparate"]=_emscripten_glBlendEquationSeparate;_emscripten_glBlendEquationSeparate.sig="vii";function _emscripten_glBlendFunc(x0,x1){GLctx["blendFunc"](x0,x1)}Module["_emscripten_glBlendFunc"]=_emscripten_glBlendFunc;_emscripten_glBlendFunc.sig="vii";function 
_emscripten_glBlendFuncSeparate(x0,x1,x2,x3){GLctx["blendFuncSeparate"](x0,x1,x2,x3)}Module["_emscripten_glBlendFuncSeparate"]=_emscripten_glBlendFuncSeparate;_emscripten_glBlendFuncSeparate.sig="viiii";function _emscripten_glBufferData(target,size,data,usage){GLctx.bufferData(target,data?HEAPU8.subarray(data,data+size):size,usage)}Module["_emscripten_glBufferData"]=_emscripten_glBufferData;_emscripten_glBufferData.sig="viiii";function _emscripten_glBufferSubData(target,offset,size,data){GLctx.bufferSubData(target,offset,HEAPU8.subarray(data,data+size))}Module["_emscripten_glBufferSubData"]=_emscripten_glBufferSubData;_emscripten_glBufferSubData.sig="viiii";function _emscripten_glCheckFramebufferStatus(x0){return GLctx["checkFramebufferStatus"](x0)}Module["_emscripten_glCheckFramebufferStatus"]=_emscripten_glCheckFramebufferStatus;_emscripten_glCheckFramebufferStatus.sig="ii";function _emscripten_glClear(x0){GLctx["clear"](x0)}Module["_emscripten_glClear"]=_emscripten_glClear;_emscripten_glClear.sig="vi";function _emscripten_glClearColor(x0,x1,x2,x3){GLctx["clearColor"](x0,x1,x2,x3)}Module["_emscripten_glClearColor"]=_emscripten_glClearColor;_emscripten_glClearColor.sig="viiii";function _emscripten_glClearDepthf(x0){GLctx["clearDepth"](x0)}Module["_emscripten_glClearDepthf"]=_emscripten_glClearDepthf;_emscripten_glClearDepthf.sig="vi";function _emscripten_glClearStencil(x0){GLctx["clearStencil"](x0)}Module["_emscripten_glClearStencil"]=_emscripten_glClearStencil;_emscripten_glClearStencil.sig="vi";function _emscripten_glColorMask(red,green,blue,alpha){GLctx.colorMask(!!red,!!green,!!blue,!!alpha)}Module["_emscripten_glColorMask"]=_emscripten_glColorMask;_emscripten_glColorMask.sig="viiii";function _emscripten_glCompileShader(shader){GLctx.compileShader(GL.shaders[shader])}Module["_emscripten_glCompileShader"]=_emscripten_glCompileShader;_emscripten_glCompileShader.sig="vi";function _emscripten_glCompressedTexImage2D(target,level,internalFormat,width,height,border,imageSize,data){GLctx["compressedTexImage2D"](target,level,internalFormat,width,height,border,data?HEAPU8.subarray(data,data+imageSize):null)}Module["_emscripten_glCompressedTexImage2D"]=_emscripten_glCompressedTexImage2D;_emscripten_glCompressedTexImage2D.sig="viiiiiiii";function _emscripten_glCompressedTexSubImage2D(target,level,xoffset,yoffset,width,height,format,imageSize,data){GLctx["compressedTexSubImage2D"](target,level,xoffset,yoffset,width,height,format,data?HEAPU8.subarray(data,data+imageSize):null)}Module["_emscripten_glCompressedTexSubImage2D"]=_emscripten_glCompressedTexSubImage2D;_emscripten_glCompressedTexSubImage2D.sig="viiiiiiiii";function _emscripten_glCopyTexImage2D(x0,x1,x2,x3,x4,x5,x6,x7){GLctx["copyTexImage2D"](x0,x1,x2,x3,x4,x5,x6,x7)}Module["_emscripten_glCopyTexImage2D"]=_emscripten_glCopyTexImage2D;_emscripten_glCopyTexImage2D.sig="viiiiiiii";function _emscripten_glCopyTexSubImage2D(x0,x1,x2,x3,x4,x5,x6,x7){GLctx["copyTexSubImage2D"](x0,x1,x2,x3,x4,x5,x6,x7)}Module["_emscripten_glCopyTexSubImage2D"]=_emscripten_glCopyTexSubImage2D;_emscripten_glCopyTexSubImage2D.sig="viiiiiiii";function _emscripten_glCreateProgram(){var id=GL.getNewId(GL.programs);var program=GLctx.createProgram();program.name=id;program.maxUniformLength=program.maxAttributeLength=program.maxUniformBlockNameLength=0;program.uniformIdCounter=1;GL.programs[id]=program;return id}Module["_emscripten_glCreateProgram"]=_emscripten_glCreateProgram;_emscripten_glCreateProgram.sig="i";function _emscripten_glCreateShader(shaderType){var 
id=GL.getNewId(GL.shaders);GL.shaders[id]=GLctx.createShader(shaderType);return id}Module["_emscripten_glCreateShader"]=_emscripten_glCreateShader;_emscripten_glCreateShader.sig="ii";function _emscripten_glCullFace(x0){GLctx["cullFace"](x0)}Module["_emscripten_glCullFace"]=_emscripten_glCullFace;_emscripten_glCullFace.sig="vi";function _emscripten_glDeleteBuffers(n,buffers){for(var i=0;i>2];var buffer=GL.buffers[id];if(!buffer)continue;GLctx.deleteBuffer(buffer);buffer.name=0;GL.buffers[id]=null}}Module["_emscripten_glDeleteBuffers"]=_emscripten_glDeleteBuffers;_emscripten_glDeleteBuffers.sig="vii";function _emscripten_glDeleteFramebuffers(n,framebuffers){for(var i=0;i>2];var framebuffer=GL.framebuffers[id];if(!framebuffer)continue;GLctx.deleteFramebuffer(framebuffer);framebuffer.name=0;GL.framebuffers[id]=null}}Module["_emscripten_glDeleteFramebuffers"]=_emscripten_glDeleteFramebuffers;_emscripten_glDeleteFramebuffers.sig="vii";function _emscripten_glDeleteProgram(id){if(!id)return;var program=GL.programs[id];if(!program){GL.recordError(1281);return}GLctx.deleteProgram(program);program.name=0;GL.programs[id]=null}Module["_emscripten_glDeleteProgram"]=_emscripten_glDeleteProgram;_emscripten_glDeleteProgram.sig="vi";function _emscripten_glDeleteQueriesEXT(n,ids){for(var i=0;i>2];var query=GL.queries[id];if(!query)continue;GLctx.disjointTimerQueryExt["deleteQueryEXT"](query);GL.queries[id]=null}}Module["_emscripten_glDeleteQueriesEXT"]=_emscripten_glDeleteQueriesEXT;_emscripten_glDeleteQueriesEXT.sig="vii";function _emscripten_glDeleteRenderbuffers(n,renderbuffers){for(var i=0;i>2];var renderbuffer=GL.renderbuffers[id];if(!renderbuffer)continue;GLctx.deleteRenderbuffer(renderbuffer);renderbuffer.name=0;GL.renderbuffers[id]=null}}Module["_emscripten_glDeleteRenderbuffers"]=_emscripten_glDeleteRenderbuffers;_emscripten_glDeleteRenderbuffers.sig="vii";function _emscripten_glDeleteShader(id){if(!id)return;var shader=GL.shaders[id];if(!shader){GL.recordError(1281);return}GLctx.deleteShader(shader);GL.shaders[id]=null}Module["_emscripten_glDeleteShader"]=_emscripten_glDeleteShader;_emscripten_glDeleteShader.sig="vi";function _emscripten_glDeleteTextures(n,textures){for(var i=0;i>2];var texture=GL.textures[id];if(!texture)continue;GLctx.deleteTexture(texture);texture.name=0;GL.textures[id]=null}}Module["_emscripten_glDeleteTextures"]=_emscripten_glDeleteTextures;_emscripten_glDeleteTextures.sig="vii";function _emscripten_glDeleteVertexArraysOES(n,vaos){for(var i=0;i>2];GLctx["deleteVertexArray"](GL.vaos[id]);GL.vaos[id]=null}}Module["_emscripten_glDeleteVertexArraysOES"]=_emscripten_glDeleteVertexArraysOES;_emscripten_glDeleteVertexArraysOES.sig="vii";function _emscripten_glDepthFunc(x0){GLctx["depthFunc"](x0)}Module["_emscripten_glDepthFunc"]=_emscripten_glDepthFunc;_emscripten_glDepthFunc.sig="vi";function _emscripten_glDepthMask(flag){GLctx.depthMask(!!flag)}Module["_emscripten_glDepthMask"]=_emscripten_glDepthMask;_emscripten_glDepthMask.sig="vi";function _emscripten_glDepthRangef(x0,x1){GLctx["depthRange"](x0,x1)}Module["_emscripten_glDepthRangef"]=_emscripten_glDepthRangef;_emscripten_glDepthRangef.sig="vii";function _emscripten_glDetachShader(program,shader){GLctx.detachShader(GL.programs[program],GL.shaders[shader])}Module["_emscripten_glDetachShader"]=_emscripten_glDetachShader;_emscripten_glDetachShader.sig="vii";function _emscripten_glDisable(x0){GLctx["disable"](x0)}Module["_emscripten_glDisable"]=_emscripten_glDisable;_emscripten_glDisable.sig="vi";function 
_emscripten_glDisableVertexAttribArray(index){GLctx.disableVertexAttribArray(index)}Module["_emscripten_glDisableVertexAttribArray"]=_emscripten_glDisableVertexAttribArray;_emscripten_glDisableVertexAttribArray.sig="vi";function _emscripten_glDrawArrays(mode,first,count){GLctx.drawArrays(mode,first,count)}Module["_emscripten_glDrawArrays"]=_emscripten_glDrawArrays;_emscripten_glDrawArrays.sig="viii";function _emscripten_glDrawArraysInstancedANGLE(mode,first,count,primcount){GLctx["drawArraysInstanced"](mode,first,count,primcount)}Module["_emscripten_glDrawArraysInstancedANGLE"]=_emscripten_glDrawArraysInstancedANGLE;_emscripten_glDrawArraysInstancedANGLE.sig="viiii";var tempFixedLengthArray=[];Module["tempFixedLengthArray"]=tempFixedLengthArray;function _emscripten_glDrawBuffersWEBGL(n,bufs){var bufArray=tempFixedLengthArray[n];for(var i=0;i>2]}GLctx["drawBuffers"](bufArray)}Module["_emscripten_glDrawBuffersWEBGL"]=_emscripten_glDrawBuffersWEBGL;_emscripten_glDrawBuffersWEBGL.sig="vii";function _emscripten_glDrawElements(mode,count,type,indices){GLctx.drawElements(mode,count,type,indices)}Module["_emscripten_glDrawElements"]=_emscripten_glDrawElements;_emscripten_glDrawElements.sig="viiii";function _emscripten_glDrawElementsInstancedANGLE(mode,count,type,indices,primcount){GLctx["drawElementsInstanced"](mode,count,type,indices,primcount)}Module["_emscripten_glDrawElementsInstancedANGLE"]=_emscripten_glDrawElementsInstancedANGLE;_emscripten_glDrawElementsInstancedANGLE.sig="viiiii";function _emscripten_glEnable(x0){GLctx["enable"](x0)}Module["_emscripten_glEnable"]=_emscripten_glEnable;_emscripten_glEnable.sig="vi";function _emscripten_glEnableVertexAttribArray(index){GLctx.enableVertexAttribArray(index)}Module["_emscripten_glEnableVertexAttribArray"]=_emscripten_glEnableVertexAttribArray;_emscripten_glEnableVertexAttribArray.sig="vi";function _emscripten_glEndQueryEXT(target){GLctx.disjointTimerQueryExt["endQueryEXT"](target)}Module["_emscripten_glEndQueryEXT"]=_emscripten_glEndQueryEXT;_emscripten_glEndQueryEXT.sig="vi";function _emscripten_glFinish(){GLctx["finish"]()}Module["_emscripten_glFinish"]=_emscripten_glFinish;_emscripten_glFinish.sig="v";function _emscripten_glFlush(){GLctx["flush"]()}Module["_emscripten_glFlush"]=_emscripten_glFlush;_emscripten_glFlush.sig="v";function _emscripten_glFramebufferRenderbuffer(target,attachment,renderbuffertarget,renderbuffer){GLctx.framebufferRenderbuffer(target,attachment,renderbuffertarget,GL.renderbuffers[renderbuffer])}Module["_emscripten_glFramebufferRenderbuffer"]=_emscripten_glFramebufferRenderbuffer;_emscripten_glFramebufferRenderbuffer.sig="viiii";function _emscripten_glFramebufferTexture2D(target,attachment,textarget,texture,level){GLctx.framebufferTexture2D(target,attachment,textarget,GL.textures[texture],level)}Module["_emscripten_glFramebufferTexture2D"]=_emscripten_glFramebufferTexture2D;_emscripten_glFramebufferTexture2D.sig="viiiii";function _emscripten_glFrontFace(x0){GLctx["frontFace"](x0)}Module["_emscripten_glFrontFace"]=_emscripten_glFrontFace;_emscripten_glFrontFace.sig="vi";function __glGenObject(n,buffers,createFunction,objectTable){for(var i=0;i>2]=id}}Module["__glGenObject"]=__glGenObject;__glGenObject.sig="vii";function _emscripten_glGenBuffers(n,buffers){__glGenObject(n,buffers,"createBuffer",GL.buffers)}Module["_emscripten_glGenBuffers"]=_emscripten_glGenBuffers;_emscripten_glGenBuffers.sig="vii";function 
_emscripten_glGenFramebuffers(n,ids){__glGenObject(n,ids,"createFramebuffer",GL.framebuffers)}Module["_emscripten_glGenFramebuffers"]=_emscripten_glGenFramebuffers;_emscripten_glGenFramebuffers.sig="vii";function _emscripten_glGenQueriesEXT(n,ids){for(var i=0;i>2]=0;return}var id=GL.getNewId(GL.queries);query.name=id;GL.queries[id]=query;HEAP32[ids+i*4>>2]=id}}Module["_emscripten_glGenQueriesEXT"]=_emscripten_glGenQueriesEXT;_emscripten_glGenQueriesEXT.sig="vii";function _emscripten_glGenRenderbuffers(n,renderbuffers){__glGenObject(n,renderbuffers,"createRenderbuffer",GL.renderbuffers)}Module["_emscripten_glGenRenderbuffers"]=_emscripten_glGenRenderbuffers;_emscripten_glGenRenderbuffers.sig="vii";function _emscripten_glGenTextures(n,textures){__glGenObject(n,textures,"createTexture",GL.textures)}Module["_emscripten_glGenTextures"]=_emscripten_glGenTextures;_emscripten_glGenTextures.sig="vii";function _emscripten_glGenVertexArraysOES(n,arrays){__glGenObject(n,arrays,"createVertexArray",GL.vaos)}Module["_emscripten_glGenVertexArraysOES"]=_emscripten_glGenVertexArraysOES;_emscripten_glGenVertexArraysOES.sig="vii";function _emscripten_glGenerateMipmap(x0){GLctx["generateMipmap"](x0)}Module["_emscripten_glGenerateMipmap"]=_emscripten_glGenerateMipmap;_emscripten_glGenerateMipmap.sig="vi";function __glGetActiveAttribOrUniform(funcName,program,index,bufSize,length,size,type,name){program=GL.programs[program];var info=GLctx[funcName](program,index);if(info){var numBytesWrittenExclNull=name&&stringToUTF8(info.name,name,bufSize);if(length)HEAP32[length>>2]=numBytesWrittenExclNull;if(size)HEAP32[size>>2]=info.size;if(type)HEAP32[type>>2]=info.type}}Module["__glGetActiveAttribOrUniform"]=__glGetActiveAttribOrUniform;function _emscripten_glGetActiveAttrib(program,index,bufSize,length,size,type,name){__glGetActiveAttribOrUniform("getActiveAttrib",program,index,bufSize,length,size,type,name)}Module["_emscripten_glGetActiveAttrib"]=_emscripten_glGetActiveAttrib;_emscripten_glGetActiveAttrib.sig="viiiiiii";function _emscripten_glGetActiveUniform(program,index,bufSize,length,size,type,name){__glGetActiveAttribOrUniform("getActiveUniform",program,index,bufSize,length,size,type,name)}Module["_emscripten_glGetActiveUniform"]=_emscripten_glGetActiveUniform;_emscripten_glGetActiveUniform.sig="viiiiiii";function _emscripten_glGetAttachedShaders(program,maxCount,count,shaders){var result=GLctx.getAttachedShaders(GL.programs[program]);var len=result.length;if(len>maxCount){len=maxCount}HEAP32[count>>2]=len;for(var i=0;i>2]=id}}Module["_emscripten_glGetAttachedShaders"]=_emscripten_glGetAttachedShaders;_emscripten_glGetAttachedShaders.sig="viiii";function _emscripten_glGetAttribLocation(program,name){return GLctx.getAttribLocation(GL.programs[program],UTF8ToString(name))}Module["_emscripten_glGetAttribLocation"]=_emscripten_glGetAttribLocation;_emscripten_glGetAttribLocation.sig="iii";function writeI53ToI64(ptr,num){HEAPU32[ptr>>2]=num;HEAPU32[ptr+4>>2]=(num-HEAPU32[ptr>>2])/4294967296}Module["writeI53ToI64"]=writeI53ToI64;function emscriptenWebGLGet(name_,p,type){if(!p){GL.recordError(1281);return}var ret=undefined;switch(name_){case 36346:ret=1;break;case 36344:if(type!=0&&type!=1){GL.recordError(1280)}return;case 36345:ret=0;break;case 34466:var formats=GLctx.getParameter(34467);ret=formats?formats.length:0;break}if(ret===undefined){var result=GLctx.getParameter(name_);switch(typeof 
result){case"number":ret=result;break;case"boolean":ret=result?1:0;break;case"string":GL.recordError(1280);return;case"object":if(result===null){switch(name_){case 34964:case 35725:case 34965:case 36006:case 36007:case 32873:case 34229:case 34068:{ret=0;break}default:{GL.recordError(1280);return}}}else if(result instanceof Float32Array||result instanceof Uint32Array||result instanceof Int32Array||result instanceof Array){for(var i=0;i>2]=result[i];break;case 2:HEAPF32[p+i*4>>2]=result[i];break;case 4:HEAP8[p+i>>0]=result[i]?1:0;break}}return}else{try{ret=result.name|0}catch(e){GL.recordError(1280);err("GL_INVALID_ENUM in glGet"+type+"v: Unknown object returned from WebGL getParameter("+name_+")! (error: "+e+")");return}}break;default:GL.recordError(1280);err("GL_INVALID_ENUM in glGet"+type+"v: Native code calling glGet"+type+"v("+name_+") and it returns "+result+" of type "+typeof result+"!");return}}switch(type){case 1:writeI53ToI64(p,ret);break;case 0:HEAP32[p>>2]=ret;break;case 2:HEAPF32[p>>2]=ret;break;case 4:HEAP8[p>>0]=ret?1:0;break}}Module["emscriptenWebGLGet"]=emscriptenWebGLGet;function _emscripten_glGetBooleanv(name_,p){emscriptenWebGLGet(name_,p,4)}Module["_emscripten_glGetBooleanv"]=_emscripten_glGetBooleanv;_emscripten_glGetBooleanv.sig="vii";function _emscripten_glGetBufferParameteriv(target,value,data){if(!data){GL.recordError(1281);return}HEAP32[data>>2]=GLctx.getBufferParameter(target,value)}Module["_emscripten_glGetBufferParameteriv"]=_emscripten_glGetBufferParameteriv;_emscripten_glGetBufferParameteriv.sig="viii";function _emscripten_glGetError(){var error=GLctx.getError()||GL.lastError;GL.lastError=0;return error}Module["_emscripten_glGetError"]=_emscripten_glGetError;_emscripten_glGetError.sig="i";function _emscripten_glGetFloatv(name_,p){emscriptenWebGLGet(name_,p,2)}Module["_emscripten_glGetFloatv"]=_emscripten_glGetFloatv;_emscripten_glGetFloatv.sig="vii";function _emscripten_glGetFramebufferAttachmentParameteriv(target,attachment,pname,params){var result=GLctx.getFramebufferAttachmentParameter(target,attachment,pname);if(result instanceof WebGLRenderbuffer||result instanceof WebGLTexture){result=result.name|0}HEAP32[params>>2]=result}Module["_emscripten_glGetFramebufferAttachmentParameteriv"]=_emscripten_glGetFramebufferAttachmentParameteriv;_emscripten_glGetFramebufferAttachmentParameteriv.sig="viiii";function _emscripten_glGetIntegerv(name_,p){emscriptenWebGLGet(name_,p,0)}Module["_emscripten_glGetIntegerv"]=_emscripten_glGetIntegerv;_emscripten_glGetIntegerv.sig="vii";function _emscripten_glGetProgramInfoLog(program,maxLength,length,infoLog){var log=GLctx.getProgramInfoLog(GL.programs[program]);if(log===null)log="(unknown error)";var numBytesWrittenExclNull=maxLength>0&&infoLog?stringToUTF8(log,infoLog,maxLength):0;if(length)HEAP32[length>>2]=numBytesWrittenExclNull}Module["_emscripten_glGetProgramInfoLog"]=_emscripten_glGetProgramInfoLog;_emscripten_glGetProgramInfoLog.sig="viiii";function _emscripten_glGetProgramiv(program,pname,p){if(!p){GL.recordError(1281);return}if(program>=GL.counter){GL.recordError(1281);return}program=GL.programs[program];if(pname==35716){var log=GLctx.getProgramInfoLog(program);if(log===null)log="(unknown error)";HEAP32[p>>2]=log.length+1}else if(pname==35719){if(!program.maxUniformLength){for(var i=0;i>2]=program.maxUniformLength}else if(pname==35722){if(!program.maxAttributeLength){for(var i=0;i>2]=program.maxAttributeLength}else if(pname==35381){if(!program.maxUniformBlockNameLength){for(var 
i=0;i>2]=program.maxUniformBlockNameLength}else{HEAP32[p>>2]=GLctx.getProgramParameter(program,pname)}}Module["_emscripten_glGetProgramiv"]=_emscripten_glGetProgramiv;_emscripten_glGetProgramiv.sig="viii";function _emscripten_glGetQueryObjecti64vEXT(id,pname,params){if(!params){GL.recordError(1281);return}var query=GL.queries[id];var param;{param=GLctx.disjointTimerQueryExt["getQueryObjectEXT"](query,pname)}var ret;if(typeof param=="boolean"){ret=param?1:0}else{ret=param}writeI53ToI64(params,ret)}Module["_emscripten_glGetQueryObjecti64vEXT"]=_emscripten_glGetQueryObjecti64vEXT;_emscripten_glGetQueryObjecti64vEXT.sig="viii";function _emscripten_glGetQueryObjectivEXT(id,pname,params){if(!params){GL.recordError(1281);return}var query=GL.queries[id];var param=GLctx.disjointTimerQueryExt["getQueryObjectEXT"](query,pname);var ret;if(typeof param=="boolean"){ret=param?1:0}else{ret=param}HEAP32[params>>2]=ret}Module["_emscripten_glGetQueryObjectivEXT"]=_emscripten_glGetQueryObjectivEXT;_emscripten_glGetQueryObjectivEXT.sig="viii";function _emscripten_glGetQueryObjectui64vEXT(id,pname,params){if(!params){GL.recordError(1281);return}var query=GL.queries[id];var param;{param=GLctx.disjointTimerQueryExt["getQueryObjectEXT"](query,pname)}var ret;if(typeof param=="boolean"){ret=param?1:0}else{ret=param}writeI53ToI64(params,ret)}Module["_emscripten_glGetQueryObjectui64vEXT"]=_emscripten_glGetQueryObjectui64vEXT;_emscripten_glGetQueryObjectui64vEXT.sig="viii";function _emscripten_glGetQueryObjectuivEXT(id,pname,params){if(!params){GL.recordError(1281);return}var query=GL.queries[id];var param=GLctx.disjointTimerQueryExt["getQueryObjectEXT"](query,pname);var ret;if(typeof param=="boolean"){ret=param?1:0}else{ret=param}HEAP32[params>>2]=ret}Module["_emscripten_glGetQueryObjectuivEXT"]=_emscripten_glGetQueryObjectuivEXT;_emscripten_glGetQueryObjectuivEXT.sig="viii";function _emscripten_glGetQueryivEXT(target,pname,params){if(!params){GL.recordError(1281);return}HEAP32[params>>2]=GLctx.disjointTimerQueryExt["getQueryEXT"](target,pname)}Module["_emscripten_glGetQueryivEXT"]=_emscripten_glGetQueryivEXT;_emscripten_glGetQueryivEXT.sig="viii";function _emscripten_glGetRenderbufferParameteriv(target,pname,params){if(!params){GL.recordError(1281);return}HEAP32[params>>2]=GLctx.getRenderbufferParameter(target,pname)}Module["_emscripten_glGetRenderbufferParameteriv"]=_emscripten_glGetRenderbufferParameteriv;_emscripten_glGetRenderbufferParameteriv.sig="viii";function _emscripten_glGetShaderInfoLog(shader,maxLength,length,infoLog){var log=GLctx.getShaderInfoLog(GL.shaders[shader]);if(log===null)log="(unknown error)";var numBytesWrittenExclNull=maxLength>0&&infoLog?stringToUTF8(log,infoLog,maxLength):0;if(length)HEAP32[length>>2]=numBytesWrittenExclNull}Module["_emscripten_glGetShaderInfoLog"]=_emscripten_glGetShaderInfoLog;_emscripten_glGetShaderInfoLog.sig="viiii";function _emscripten_glGetShaderPrecisionFormat(shaderType,precisionType,range,precision){var result=GLctx.getShaderPrecisionFormat(shaderType,precisionType);HEAP32[range>>2]=result.rangeMin;HEAP32[range+4>>2]=result.rangeMax;HEAP32[precision>>2]=result.precision}Module["_emscripten_glGetShaderPrecisionFormat"]=_emscripten_glGetShaderPrecisionFormat;_emscripten_glGetShaderPrecisionFormat.sig="viiii";function _emscripten_glGetShaderSource(shader,bufSize,length,source){var result=GLctx.getShaderSource(GL.shaders[shader]);if(!result)return;var 
numBytesWrittenExclNull=bufSize>0&&source?stringToUTF8(result,source,bufSize):0;if(length)HEAP32[length>>2]=numBytesWrittenExclNull}Module["_emscripten_glGetShaderSource"]=_emscripten_glGetShaderSource;_emscripten_glGetShaderSource.sig="viiii";function _emscripten_glGetShaderiv(shader,pname,p){if(!p){GL.recordError(1281);return}if(pname==35716){var log=GLctx.getShaderInfoLog(GL.shaders[shader]);if(log===null)log="(unknown error)";var logLength=log?log.length+1:0;HEAP32[p>>2]=logLength}else if(pname==35720){var source=GLctx.getShaderSource(GL.shaders[shader]);var sourceLength=source?source.length+1:0;HEAP32[p>>2]=sourceLength}else{HEAP32[p>>2]=GLctx.getShaderParameter(GL.shaders[shader],pname)}}Module["_emscripten_glGetShaderiv"]=_emscripten_glGetShaderiv;_emscripten_glGetShaderiv.sig="viii";function _emscripten_glGetString(name_){var ret=GL.stringCache[name_];if(!ret){switch(name_){case 7939:var exts=GLctx.getSupportedExtensions()||[];exts=exts.concat(exts.map(function(e){return"GL_"+e}));ret=stringToNewUTF8(exts.join(" "));break;case 7936:case 7937:case 37445:case 37446:var s=GLctx.getParameter(name_);if(!s){GL.recordError(1280)}ret=s&&stringToNewUTF8(s);break;case 7938:var glVersion=GLctx.getParameter(7938);{glVersion="OpenGL ES 2.0 ("+glVersion+")"}ret=stringToNewUTF8(glVersion);break;case 35724:var glslVersion=GLctx.getParameter(35724);var ver_re=/^WebGL GLSL ES ([0-9]\.[0-9][0-9]?)(?:$| .*)/;var ver_num=glslVersion.match(ver_re);if(ver_num!==null){if(ver_num[1].length==3)ver_num[1]=ver_num[1]+"0";glslVersion="OpenGL ES GLSL ES "+ver_num[1]+" ("+glslVersion+")"}ret=stringToNewUTF8(glslVersion);break;default:GL.recordError(1280)}GL.stringCache[name_]=ret}return ret}Module["_emscripten_glGetString"]=_emscripten_glGetString;_emscripten_glGetString.sig="ii";function _emscripten_glGetTexParameterfv(target,pname,params){if(!params){GL.recordError(1281);return}HEAPF32[params>>2]=GLctx.getTexParameter(target,pname)}Module["_emscripten_glGetTexParameterfv"]=_emscripten_glGetTexParameterfv;_emscripten_glGetTexParameterfv.sig="viii";function _emscripten_glGetTexParameteriv(target,pname,params){if(!params){GL.recordError(1281);return}HEAP32[params>>2]=GLctx.getTexParameter(target,pname)}Module["_emscripten_glGetTexParameteriv"]=_emscripten_glGetTexParameteriv;_emscripten_glGetTexParameteriv.sig="viii";function webglGetLeftBracePos(name){return name.slice(-1)=="]"&&name.lastIndexOf("[")}Module["webglGetLeftBracePos"]=webglGetLeftBracePos;function webglPrepareUniformLocationsBeforeFirstUse(program){var uniformLocsById=program.uniformLocsById,uniformSizeAndIdsByName=program.uniformSizeAndIdsByName,i,j;if(!uniformLocsById){program.uniformLocsById=uniformLocsById={};program.uniformArrayNamesById={};for(i=0;i0?nm.slice(0,lb):nm;var id=program.uniformIdCounter;program.uniformIdCounter+=sz;uniformSizeAndIdsByName[arrayName]=[sz,id];for(j=0;j0){arrayIndex=jstoi_q(name.slice(leftBrace+1))>>>0;uniformBaseName=name.slice(0,leftBrace)}var sizeAndId=program.uniformSizeAndIdsByName[uniformBaseName];if(sizeAndId&&arrayIndex0?"["+webglLoc+"]":""))}return webglLoc}else{GL.recordError(1282)}}Module["webglGetUniformLocation"]=webglGetUniformLocation;function emscriptenWebGLGetUniform(program,location,params,type){if(!params){GL.recordError(1281);return}program=GL.programs[program];webglPrepareUniformLocationsBeforeFirstUse(program);var data=GLctx.getUniform(program,webglGetUniformLocation(location));if(typeof data=="number"||typeof data=="boolean"){switch(type){case 0:HEAP32[params>>2]=data;break;case 
2:HEAPF32[params>>2]=data;break}}else{for(var i=0;i>2]=data[i];break;case 2:HEAPF32[params+i*4>>2]=data[i];break}}}}Module["emscriptenWebGLGetUniform"]=emscriptenWebGLGetUniform;function _emscripten_glGetUniformfv(program,location,params){emscriptenWebGLGetUniform(program,location,params,2)}Module["_emscripten_glGetUniformfv"]=_emscripten_glGetUniformfv;_emscripten_glGetUniformfv.sig="viii";function _emscripten_glGetUniformiv(program,location,params){emscriptenWebGLGetUniform(program,location,params,0)}Module["_emscripten_glGetUniformiv"]=_emscripten_glGetUniformiv;_emscripten_glGetUniformiv.sig="viii";function _emscripten_glGetVertexAttribPointerv(index,pname,pointer){if(!pointer){GL.recordError(1281);return}HEAP32[pointer>>2]=GLctx.getVertexAttribOffset(index,pname)}Module["_emscripten_glGetVertexAttribPointerv"]=_emscripten_glGetVertexAttribPointerv;_emscripten_glGetVertexAttribPointerv.sig="viii";function emscriptenWebGLGetVertexAttrib(index,pname,params,type){if(!params){GL.recordError(1281);return}var data=GLctx.getVertexAttrib(index,pname);if(pname==34975){HEAP32[params>>2]=data&&data["name"]}else if(typeof data=="number"||typeof data=="boolean"){switch(type){case 0:HEAP32[params>>2]=data;break;case 2:HEAPF32[params>>2]=data;break;case 5:HEAP32[params>>2]=Math.fround(data);break}}else{for(var i=0;i>2]=data[i];break;case 2:HEAPF32[params+i*4>>2]=data[i];break;case 5:HEAP32[params+i*4>>2]=Math.fround(data[i]);break}}}}Module["emscriptenWebGLGetVertexAttrib"]=emscriptenWebGLGetVertexAttrib;function _emscripten_glGetVertexAttribfv(index,pname,params){emscriptenWebGLGetVertexAttrib(index,pname,params,2)}Module["_emscripten_glGetVertexAttribfv"]=_emscripten_glGetVertexAttribfv;_emscripten_glGetVertexAttribfv.sig="viii";function _emscripten_glGetVertexAttribiv(index,pname,params){emscriptenWebGLGetVertexAttrib(index,pname,params,5)}Module["_emscripten_glGetVertexAttribiv"]=_emscripten_glGetVertexAttribiv;_emscripten_glGetVertexAttribiv.sig="viii";function _emscripten_glHint(x0,x1){GLctx["hint"](x0,x1)}Module["_emscripten_glHint"]=_emscripten_glHint;_emscripten_glHint.sig="vii";function _emscripten_glIsBuffer(buffer){var b=GL.buffers[buffer];if(!b)return 0;return GLctx.isBuffer(b)}Module["_emscripten_glIsBuffer"]=_emscripten_glIsBuffer;_emscripten_glIsBuffer.sig="ii";function _emscripten_glIsEnabled(x0){return GLctx["isEnabled"](x0)}Module["_emscripten_glIsEnabled"]=_emscripten_glIsEnabled;_emscripten_glIsEnabled.sig="ii";function _emscripten_glIsFramebuffer(framebuffer){var fb=GL.framebuffers[framebuffer];if(!fb)return 0;return GLctx.isFramebuffer(fb)}Module["_emscripten_glIsFramebuffer"]=_emscripten_glIsFramebuffer;_emscripten_glIsFramebuffer.sig="ii";function _emscripten_glIsProgram(program){program=GL.programs[program];if(!program)return 0;return GLctx.isProgram(program)}Module["_emscripten_glIsProgram"]=_emscripten_glIsProgram;_emscripten_glIsProgram.sig="ii";function _emscripten_glIsQueryEXT(id){var query=GL.queries[id];if(!query)return 0;return GLctx.disjointTimerQueryExt["isQueryEXT"](query)}Module["_emscripten_glIsQueryEXT"]=_emscripten_glIsQueryEXT;_emscripten_glIsQueryEXT.sig="ii";function _emscripten_glIsRenderbuffer(renderbuffer){var rb=GL.renderbuffers[renderbuffer];if(!rb)return 0;return GLctx.isRenderbuffer(rb)}Module["_emscripten_glIsRenderbuffer"]=_emscripten_glIsRenderbuffer;_emscripten_glIsRenderbuffer.sig="ii";function _emscripten_glIsShader(shader){var s=GL.shaders[shader];if(!s)return 0;return 
GLctx.isShader(s)}Module["_emscripten_glIsShader"]=_emscripten_glIsShader;_emscripten_glIsShader.sig="ii";function _emscripten_glIsTexture(id){var texture=GL.textures[id];if(!texture)return 0;return GLctx.isTexture(texture)}Module["_emscripten_glIsTexture"]=_emscripten_glIsTexture;_emscripten_glIsTexture.sig="ii";function _emscripten_glIsVertexArrayOES(array){var vao=GL.vaos[array];if(!vao)return 0;return GLctx["isVertexArray"](vao)}Module["_emscripten_glIsVertexArrayOES"]=_emscripten_glIsVertexArrayOES;_emscripten_glIsVertexArrayOES.sig="ii";function _emscripten_glLineWidth(x0){GLctx["lineWidth"](x0)}Module["_emscripten_glLineWidth"]=_emscripten_glLineWidth;_emscripten_glLineWidth.sig="vi";function _emscripten_glLinkProgram(program){program=GL.programs[program];GLctx.linkProgram(program);program.uniformLocsById=0;program.uniformSizeAndIdsByName={}}Module["_emscripten_glLinkProgram"]=_emscripten_glLinkProgram;_emscripten_glLinkProgram.sig="vi";function _emscripten_glPixelStorei(pname,param){if(pname==3317){GL.unpackAlignment=param}GLctx.pixelStorei(pname,param)}Module["_emscripten_glPixelStorei"]=_emscripten_glPixelStorei;_emscripten_glPixelStorei.sig="vii";function _emscripten_glPolygonOffset(x0,x1){GLctx["polygonOffset"](x0,x1)}Module["_emscripten_glPolygonOffset"]=_emscripten_glPolygonOffset;_emscripten_glPolygonOffset.sig="vii";function _emscripten_glQueryCounterEXT(id,target){GLctx.disjointTimerQueryExt["queryCounterEXT"](GL.queries[id],target)}Module["_emscripten_glQueryCounterEXT"]=_emscripten_glQueryCounterEXT;_emscripten_glQueryCounterEXT.sig="vii";function computeUnpackAlignedImageSize(width,height,sizePerPixel,alignment){function roundedToNextMultipleOf(x,y){return x+y-1&-y}var plainRowSize=width*sizePerPixel;var alignedRowSize=roundedToNextMultipleOf(plainRowSize,alignment);return height*alignedRowSize}Module["computeUnpackAlignedImageSize"]=computeUnpackAlignedImageSize;function __colorChannelsInGlTextureFormat(format){var colorChannels={5:3,6:4,8:2,29502:3,29504:4};return colorChannels[format-6402]||1}Module["__colorChannelsInGlTextureFormat"]=__colorChannelsInGlTextureFormat;function heapObjectForWebGLType(type){type-=5120;if(type==1)return HEAPU8;if(type==4)return HEAP32;if(type==6)return HEAPF32;if(type==5||type==28922)return HEAPU32;return HEAPU16}Module["heapObjectForWebGLType"]=heapObjectForWebGLType;function heapAccessShiftForWebGLHeap(heap){return 31-Math.clz32(heap.BYTES_PER_ELEMENT)}Module["heapAccessShiftForWebGLHeap"]=heapAccessShiftForWebGLHeap;function emscriptenWebGLGetTexPixelData(type,format,width,height,pixels,internalFormat){var heap=heapObjectForWebGLType(type);var shift=heapAccessShiftForWebGLHeap(heap);var byteSize=1<>shift,pixels+bytes>>shift)}Module["emscriptenWebGLGetTexPixelData"]=emscriptenWebGLGetTexPixelData;function _emscripten_glReadPixels(x,y,width,height,format,type,pixels){var pixelData=emscriptenWebGLGetTexPixelData(type,format,width,height,pixels,format);if(!pixelData){GL.recordError(1280);return}GLctx.readPixels(x,y,width,height,format,type,pixelData)}Module["_emscripten_glReadPixels"]=_emscripten_glReadPixels;_emscripten_glReadPixels.sig="viiiiiii";function _emscripten_glReleaseShaderCompiler(){}Module["_emscripten_glReleaseShaderCompiler"]=_emscripten_glReleaseShaderCompiler;_emscripten_glReleaseShaderCompiler.sig="v";function 
_emscripten_glRenderbufferStorage(x0,x1,x2,x3){GLctx["renderbufferStorage"](x0,x1,x2,x3)}Module["_emscripten_glRenderbufferStorage"]=_emscripten_glRenderbufferStorage;_emscripten_glRenderbufferStorage.sig="viiii";function _emscripten_glSampleCoverage(value,invert){GLctx.sampleCoverage(value,!!invert)}Module["_emscripten_glSampleCoverage"]=_emscripten_glSampleCoverage;_emscripten_glSampleCoverage.sig="vii";function _emscripten_glScissor(x0,x1,x2,x3){GLctx["scissor"](x0,x1,x2,x3)}Module["_emscripten_glScissor"]=_emscripten_glScissor;_emscripten_glScissor.sig="viiii";function _emscripten_glShaderBinary(){GL.recordError(1280)}Module["_emscripten_glShaderBinary"]=_emscripten_glShaderBinary;_emscripten_glShaderBinary.sig="v";function _emscripten_glShaderSource(shader,count,string,length){var source=GL.getSource(shader,count,string,length);GLctx.shaderSource(GL.shaders[shader],source)}Module["_emscripten_glShaderSource"]=_emscripten_glShaderSource;_emscripten_glShaderSource.sig="viiii";function _emscripten_glStencilFunc(x0,x1,x2){GLctx["stencilFunc"](x0,x1,x2)}Module["_emscripten_glStencilFunc"]=_emscripten_glStencilFunc;_emscripten_glStencilFunc.sig="viii";function _emscripten_glStencilFuncSeparate(x0,x1,x2,x3){GLctx["stencilFuncSeparate"](x0,x1,x2,x3)}Module["_emscripten_glStencilFuncSeparate"]=_emscripten_glStencilFuncSeparate;_emscripten_glStencilFuncSeparate.sig="viiii";function _emscripten_glStencilMask(x0){GLctx["stencilMask"](x0)}Module["_emscripten_glStencilMask"]=_emscripten_glStencilMask;_emscripten_glStencilMask.sig="vi";function _emscripten_glStencilMaskSeparate(x0,x1){GLctx["stencilMaskSeparate"](x0,x1)}Module["_emscripten_glStencilMaskSeparate"]=_emscripten_glStencilMaskSeparate;_emscripten_glStencilMaskSeparate.sig="vii";function _emscripten_glStencilOp(x0,x1,x2){GLctx["stencilOp"](x0,x1,x2)}Module["_emscripten_glStencilOp"]=_emscripten_glStencilOp;_emscripten_glStencilOp.sig="viii";function _emscripten_glStencilOpSeparate(x0,x1,x2,x3){GLctx["stencilOpSeparate"](x0,x1,x2,x3)}Module["_emscripten_glStencilOpSeparate"]=_emscripten_glStencilOpSeparate;_emscripten_glStencilOpSeparate.sig="viiii";function _emscripten_glTexImage2D(target,level,internalFormat,width,height,border,format,type,pixels){GLctx.texImage2D(target,level,internalFormat,width,height,border,format,type,pixels?emscriptenWebGLGetTexPixelData(type,format,width,height,pixels,internalFormat):null)}Module["_emscripten_glTexImage2D"]=_emscripten_glTexImage2D;_emscripten_glTexImage2D.sig="viiiiiiiii";function _emscripten_glTexParameterf(x0,x1,x2){GLctx["texParameterf"](x0,x1,x2)}Module["_emscripten_glTexParameterf"]=_emscripten_glTexParameterf;_emscripten_glTexParameterf.sig="viii";function _emscripten_glTexParameterfv(target,pname,params){var param=HEAPF32[params>>2];GLctx.texParameterf(target,pname,param)}Module["_emscripten_glTexParameterfv"]=_emscripten_glTexParameterfv;_emscripten_glTexParameterfv.sig="viii";function _emscripten_glTexParameteri(x0,x1,x2){GLctx["texParameteri"](x0,x1,x2)}Module["_emscripten_glTexParameteri"]=_emscripten_glTexParameteri;_emscripten_glTexParameteri.sig="viii";function _emscripten_glTexParameteriv(target,pname,params){var param=HEAP32[params>>2];GLctx.texParameteri(target,pname,param)}Module["_emscripten_glTexParameteriv"]=_emscripten_glTexParameteriv;_emscripten_glTexParameteriv.sig="viii";function _emscripten_glTexSubImage2D(target,level,xoffset,yoffset,width,height,format,type,pixels){var 
pixelData=null;if(pixels)pixelData=emscriptenWebGLGetTexPixelData(type,format,width,height,pixels,0);GLctx.texSubImage2D(target,level,xoffset,yoffset,width,height,format,type,pixelData)}Module["_emscripten_glTexSubImage2D"]=_emscripten_glTexSubImage2D;_emscripten_glTexSubImage2D.sig="viiiiiiiii";function _emscripten_glUniform1f(location,v0){GLctx.uniform1f(webglGetUniformLocation(location),v0)}Module["_emscripten_glUniform1f"]=_emscripten_glUniform1f;_emscripten_glUniform1f.sig="vif";var miniTempWebGLFloatBuffers=[];Module["miniTempWebGLFloatBuffers"]=miniTempWebGLFloatBuffers;function _emscripten_glUniform1fv(location,count,value){if(count<=288){var view=miniTempWebGLFloatBuffers[count-1];for(var i=0;i<count;++i){view[i]=HEAPF32[value+4*i>>2]}}else{var view=HEAPF32.subarray(value>>2,value+count*4>>2)}GLctx.uniform1fv(webglGetUniformLocation(location),view)}Module["_emscripten_glUniform1fv"]=_emscripten_glUniform1fv;_emscripten_glUniform1fv.sig="viii";function _emscripten_glUniform1i(location,v0){GLctx.uniform1i(webglGetUniformLocation(location),v0)}Module["_emscripten_glUniform1i"]=_emscripten_glUniform1i;_emscripten_glUniform1i.sig="vii";var __miniTempWebGLIntBuffers=[];Module["__miniTempWebGLIntBuffers"]=__miniTempWebGLIntBuffers;function _emscripten_glUniform1iv(location,count,value){if(count<=288){var view=__miniTempWebGLIntBuffers[count-1];for(var i=0;i<count;++i){view[i]=HEAP32[value+4*i>>2]}}else{var view=HEAP32.subarray(value>>2,value+count*4>>2)}GLctx.uniform1iv(webglGetUniformLocation(location),view)}Module["_emscripten_glUniform1iv"]=_emscripten_glUniform1iv;_emscripten_glUniform1iv.sig="viii";function _emscripten_glUniform2f(location,v0,v1){GLctx.uniform2f(webglGetUniformLocation(location),v0,v1)}Module["_emscripten_glUniform2f"]=_emscripten_glUniform2f;_emscripten_glUniform2f.sig="viff";function _emscripten_glUniform2fv(location,count,value){if(count<=144){var view=miniTempWebGLFloatBuffers[2*count-1];for(var i=0;i<2*count;i+=2){view[i]=HEAPF32[value+4*i>>2];view[i+1]=HEAPF32[value+(4*i+4)>>2]}}else{var view=HEAPF32.subarray(value>>2,value+count*8>>2)}GLctx.uniform2fv(webglGetUniformLocation(location),view)}Module["_emscripten_glUniform2fv"]=_emscripten_glUniform2fv;_emscripten_glUniform2fv.sig="viii";function _emscripten_glUniform2i(location,v0,v1){GLctx.uniform2i(webglGetUniformLocation(location),v0,v1)}Module["_emscripten_glUniform2i"]=_emscripten_glUniform2i;_emscripten_glUniform2i.sig="viii";function _emscripten_glUniform2iv(location,count,value){if(count<=144){var view=__miniTempWebGLIntBuffers[2*count-1];for(var i=0;i<2*count;i+=2){view[i]=HEAP32[value+4*i>>2];view[i+1]=HEAP32[value+(4*i+4)>>2]}}else{var view=HEAP32.subarray(value>>2,value+count*8>>2)}GLctx.uniform2iv(webglGetUniformLocation(location),view)}Module["_emscripten_glUniform2iv"]=_emscripten_glUniform2iv;_emscripten_glUniform2iv.sig="viii";function _emscripten_glUniform3f(location,v0,v1,v2){GLctx.uniform3f(webglGetUniformLocation(location),v0,v1,v2)}Module["_emscripten_glUniform3f"]=_emscripten_glUniform3f;_emscripten_glUniform3f.sig="vifff";function _emscripten_glUniform3fv(location,count,value){if(count<=96){var view=miniTempWebGLFloatBuffers[3*count-1];for(var i=0;i<3*count;i+=3){view[i]=HEAPF32[value+4*i>>2];view[i+1]=HEAPF32[value+(4*i+4)>>2];view[i+2]=HEAPF32[value+(4*i+8)>>2]}}else{var view=HEAPF32.subarray(value>>2,value+count*12>>2)}GLctx.uniform3fv(webglGetUniformLocation(location),view)}Module["_emscripten_glUniform3fv"]=_emscripten_glUniform3fv;_emscripten_glUniform3fv.sig="viii";function 
_emscripten_glUniform3i(location,v0,v1,v2){GLctx.uniform3i(webglGetUniformLocation(location),v0,v1,v2)}Module["_emscripten_glUniform3i"]=_emscripten_glUniform3i;_emscripten_glUniform3i.sig="viiii";function _emscripten_glUniform3iv(location,count,value){if(count<=96){var view=__miniTempWebGLIntBuffers[3*count-1];for(var i=0;i<3*count;i+=3){view[i]=HEAP32[value+4*i>>2];view[i+1]=HEAP32[value+(4*i+4)>>2];view[i+2]=HEAP32[value+(4*i+8)>>2]}}else{var view=HEAP32.subarray(value>>2,value+count*12>>2)}GLctx.uniform3iv(webglGetUniformLocation(location),view)}Module["_emscripten_glUniform3iv"]=_emscripten_glUniform3iv;_emscripten_glUniform3iv.sig="viii";function _emscripten_glUniform4f(location,v0,v1,v2,v3){GLctx.uniform4f(webglGetUniformLocation(location),v0,v1,v2,v3)}Module["_emscripten_glUniform4f"]=_emscripten_glUniform4f;_emscripten_glUniform4f.sig="viffff";function _emscripten_glUniform4fv(location,count,value){if(count<=72){var view=miniTempWebGLFloatBuffers[4*count-1];var heap=HEAPF32;value>>=2;for(var i=0;i<4*count;i+=4){var dst=value+i;view[i]=heap[dst];view[i+1]=heap[dst+1];view[i+2]=heap[dst+2];view[i+3]=heap[dst+3]}}else{var view=HEAPF32.subarray(value>>2,value+count*16>>2)}GLctx.uniform4fv(webglGetUniformLocation(location),view)}Module["_emscripten_glUniform4fv"]=_emscripten_glUniform4fv;_emscripten_glUniform4fv.sig="viii";function _emscripten_glUniform4i(location,v0,v1,v2,v3){GLctx.uniform4i(webglGetUniformLocation(location),v0,v1,v2,v3)}Module["_emscripten_glUniform4i"]=_emscripten_glUniform4i;_emscripten_glUniform4i.sig="viiiii";function _emscripten_glUniform4iv(location,count,value){if(count<=72){var view=__miniTempWebGLIntBuffers[4*count-1];for(var i=0;i<4*count;i+=4){view[i]=HEAP32[value+4*i>>2];view[i+1]=HEAP32[value+(4*i+4)>>2];view[i+2]=HEAP32[value+(4*i+8)>>2];view[i+3]=HEAP32[value+(4*i+12)>>2]}}else{var view=HEAP32.subarray(value>>2,value+count*16>>2)}GLctx.uniform4iv(webglGetUniformLocation(location),view)}Module["_emscripten_glUniform4iv"]=_emscripten_glUniform4iv;_emscripten_glUniform4iv.sig="viii";function _emscripten_glUniformMatrix2fv(location,count,transpose,value){if(count<=72){var view=miniTempWebGLFloatBuffers[4*count-1];for(var i=0;i<4*count;i+=4){view[i]=HEAPF32[value+4*i>>2];view[i+1]=HEAPF32[value+(4*i+4)>>2];view[i+2]=HEAPF32[value+(4*i+8)>>2];view[i+3]=HEAPF32[value+(4*i+12)>>2]}}else{var view=HEAPF32.subarray(value>>2,value+count*16>>2)}GLctx.uniformMatrix2fv(webglGetUniformLocation(location),!!transpose,view)}Module["_emscripten_glUniformMatrix2fv"]=_emscripten_glUniformMatrix2fv;_emscripten_glUniformMatrix2fv.sig="viiii";function _emscripten_glUniformMatrix3fv(location,count,transpose,value){if(count<=32){var view=miniTempWebGLFloatBuffers[9*count-1];for(var i=0;i<9*count;i+=9){view[i]=HEAPF32[value+4*i>>2];view[i+1]=HEAPF32[value+(4*i+4)>>2];view[i+2]=HEAPF32[value+(4*i+8)>>2];view[i+3]=HEAPF32[value+(4*i+12)>>2];view[i+4]=HEAPF32[value+(4*i+16)>>2];view[i+5]=HEAPF32[value+(4*i+20)>>2];view[i+6]=HEAPF32[value+(4*i+24)>>2];view[i+7]=HEAPF32[value+(4*i+28)>>2];view[i+8]=HEAPF32[value+(4*i+32)>>2]}}else{var view=HEAPF32.subarray(value>>2,value+count*36>>2)}GLctx.uniformMatrix3fv(webglGetUniformLocation(location),!!transpose,view)}Module["_emscripten_glUniformMatrix3fv"]=_emscripten_glUniformMatrix3fv;_emscripten_glUniformMatrix3fv.sig="viiii";function _emscripten_glUniformMatrix4fv(location,count,transpose,value){if(count<=18){var view=miniTempWebGLFloatBuffers[16*count-1];var heap=HEAPF32;value>>=2;for(var i=0;i<16*count;i+=16){var 
dst=value+i;view[i]=heap[dst];view[i+1]=heap[dst+1];view[i+2]=heap[dst+2];view[i+3]=heap[dst+3];view[i+4]=heap[dst+4];view[i+5]=heap[dst+5];view[i+6]=heap[dst+6];view[i+7]=heap[dst+7];view[i+8]=heap[dst+8];view[i+9]=heap[dst+9];view[i+10]=heap[dst+10];view[i+11]=heap[dst+11];view[i+12]=heap[dst+12];view[i+13]=heap[dst+13];view[i+14]=heap[dst+14];view[i+15]=heap[dst+15]}}else{var view=HEAPF32.subarray(value>>2,value+count*64>>2)}GLctx.uniformMatrix4fv(webglGetUniformLocation(location),!!transpose,view)}Module["_emscripten_glUniformMatrix4fv"]=_emscripten_glUniformMatrix4fv;_emscripten_glUniformMatrix4fv.sig="viiii";function _emscripten_glUseProgram(program){program=GL.programs[program];GLctx.useProgram(program);GLctx.currentProgram=program}Module["_emscripten_glUseProgram"]=_emscripten_glUseProgram;_emscripten_glUseProgram.sig="vi";function _emscripten_glValidateProgram(program){GLctx.validateProgram(GL.programs[program])}Module["_emscripten_glValidateProgram"]=_emscripten_glValidateProgram;_emscripten_glValidateProgram.sig="vi";function _emscripten_glVertexAttrib1f(x0,x1){GLctx["vertexAttrib1f"](x0,x1)}Module["_emscripten_glVertexAttrib1f"]=_emscripten_glVertexAttrib1f;_emscripten_glVertexAttrib1f.sig="vii";function _emscripten_glVertexAttrib1fv(index,v){GLctx.vertexAttrib1f(index,HEAPF32[v>>2])}Module["_emscripten_glVertexAttrib1fv"]=_emscripten_glVertexAttrib1fv;_emscripten_glVertexAttrib1fv.sig="vii";function _emscripten_glVertexAttrib2f(x0,x1,x2){GLctx["vertexAttrib2f"](x0,x1,x2)}Module["_emscripten_glVertexAttrib2f"]=_emscripten_glVertexAttrib2f;_emscripten_glVertexAttrib2f.sig="viii";function _emscripten_glVertexAttrib2fv(index,v){GLctx.vertexAttrib2f(index,HEAPF32[v>>2],HEAPF32[v+4>>2])}Module["_emscripten_glVertexAttrib2fv"]=_emscripten_glVertexAttrib2fv;_emscripten_glVertexAttrib2fv.sig="vii";function _emscripten_glVertexAttrib3f(x0,x1,x2,x3){GLctx["vertexAttrib3f"](x0,x1,x2,x3)}Module["_emscripten_glVertexAttrib3f"]=_emscripten_glVertexAttrib3f;_emscripten_glVertexAttrib3f.sig="viiii";function _emscripten_glVertexAttrib3fv(index,v){GLctx.vertexAttrib3f(index,HEAPF32[v>>2],HEAPF32[v+4>>2],HEAPF32[v+8>>2])}Module["_emscripten_glVertexAttrib3fv"]=_emscripten_glVertexAttrib3fv;_emscripten_glVertexAttrib3fv.sig="vii";function _emscripten_glVertexAttrib4f(x0,x1,x2,x3,x4){GLctx["vertexAttrib4f"](x0,x1,x2,x3,x4)}Module["_emscripten_glVertexAttrib4f"]=_emscripten_glVertexAttrib4f;_emscripten_glVertexAttrib4f.sig="viiiii";function _emscripten_glVertexAttrib4fv(index,v){GLctx.vertexAttrib4f(index,HEAPF32[v>>2],HEAPF32[v+4>>2],HEAPF32[v+8>>2],HEAPF32[v+12>>2])}Module["_emscripten_glVertexAttrib4fv"]=_emscripten_glVertexAttrib4fv;_emscripten_glVertexAttrib4fv.sig="vii";function _emscripten_glVertexAttribDivisorANGLE(index,divisor){GLctx["vertexAttribDivisor"](index,divisor)}Module["_emscripten_glVertexAttribDivisorANGLE"]=_emscripten_glVertexAttribDivisorANGLE;_emscripten_glVertexAttribDivisorANGLE.sig="vii";function _emscripten_glVertexAttribPointer(index,size,type,normalized,stride,ptr){GLctx.vertexAttribPointer(index,size,type,!!normalized,stride,ptr)}Module["_emscripten_glVertexAttribPointer"]=_emscripten_glVertexAttribPointer;_emscripten_glVertexAttribPointer.sig="viiiiii";function _emscripten_glViewport(x0,x1,x2,x3){GLctx["viewport"](x0,x1,x2,x3)}Module["_emscripten_glViewport"]=_emscripten_glViewport;_emscripten_glViewport.sig="viiii";function _emscripten_memcpy_big(dest,src,num){HEAPU8.copyWithin(dest,src,src+num)}Module["_emscripten_memcpy_big"]=_emscripten_memcpy_big;function 
emscripten_realloc_buffer(size){try{wasmMemory.grow(size-buffer.byteLength+65535>>>16);updateGlobalBufferAndViews(wasmMemory.buffer);return 1}catch(e){}}Module["emscripten_realloc_buffer"]=emscripten_realloc_buffer;function _emscripten_resize_heap(requestedSize){var oldSize=HEAPU8.length;requestedSize=requestedSize>>>0;var maxHeapSize=2147483648;if(requestedSize>maxHeapSize){return false}for(var cutDown=1;cutDown<=4;cutDown*=2){var overGrownHeapSize=oldSize*(1+.2/cutDown);overGrownHeapSize=Math.min(overGrownHeapSize,requestedSize+100663296);var newSize=Math.min(maxHeapSize,alignUp(Math.max(requestedSize,overGrownHeapSize),65536));var replacement=emscripten_realloc_buffer(newSize);if(replacement){return true}}return false}Module["_emscripten_resize_heap"]=_emscripten_resize_heap;function _emscripten_thread_sleep(msecs){var start=_emscripten_get_now();while(_emscripten_get_now()-start>2]=ptr;writeAsciiToMemory(string,ptr);bufSize+=string.length+1});return 0}Module["_environ_get"]=_environ_get;_environ_get.sig="iii";function _environ_sizes_get(penviron_count,penviron_buf_size){var strings=getEnvStrings();HEAP32[penviron_count>>2]=strings.length;var bufSize=0;strings.forEach(function(string){bufSize+=string.length+1});HEAP32[penviron_buf_size>>2]=bufSize;return 0}Module["_environ_sizes_get"]=_environ_sizes_get;_environ_sizes_get.sig="iii";function _execve(path,argv,envp){setErrNo(45);return-1}Module["_execve"]=_execve;_execve.sig="iiii";function _fd_close(fd){try{var stream=SYSCALLS.getStreamFromFD(fd);FS.close(stream);return 0}catch(e){if(typeof FS==="undefined"||!(e instanceof FS.ErrnoError))abort(e);return e.errno}}Module["_fd_close"]=_fd_close;_fd_close.sig="ii";function _fd_fdstat_get(fd,pbuf){try{var stream=SYSCALLS.getStreamFromFD(fd);var type=stream.tty?2:FS.isDir(stream.mode)?3:FS.isLink(stream.mode)?7:4;HEAP8[pbuf>>0]=type;return 0}catch(e){if(typeof FS==="undefined"||!(e instanceof FS.ErrnoError))abort(e);return e.errno}}Module["_fd_fdstat_get"]=_fd_fdstat_get;_fd_fdstat_get.sig="iii";function _fd_pread(fd,iov,iovcnt,offset_low,offset_high,pnum){try{var stream=SYSCALLS.getStreamFromFD(fd);var num=SYSCALLS.doReadv(stream,iov,iovcnt,offset_low);HEAP32[pnum>>2]=num;return 0}catch(e){if(typeof FS==="undefined"||!(e instanceof FS.ErrnoError))abort(e);return e.errno}}Module["_fd_pread"]=_fd_pread;function _fd_pwrite(fd,iov,iovcnt,offset_low,offset_high,pnum){try{var stream=SYSCALLS.getStreamFromFD(fd);var num=SYSCALLS.doWritev(stream,iov,iovcnt,offset_low);HEAP32[pnum>>2]=num;return 0}catch(e){if(typeof FS==="undefined"||!(e instanceof FS.ErrnoError))abort(e);return e.errno}}Module["_fd_pwrite"]=_fd_pwrite;function _fd_read(fd,iov,iovcnt,pnum){try{var stream=SYSCALLS.getStreamFromFD(fd);var num=SYSCALLS.doReadv(stream,iov,iovcnt);HEAP32[pnum>>2]=num;return 0}catch(e){if(typeof FS==="undefined"||!(e instanceof FS.ErrnoError))abort(e);return e.errno}}Module["_fd_read"]=_fd_read;_fd_read.sig="iiiii";function _fd_seek(fd,offset_low,offset_high,whence,newOffset){try{var stream=SYSCALLS.getStreamFromFD(fd);var HIGH_OFFSET=4294967296;var offset=offset_high*HIGH_OFFSET+(offset_low>>>0);var 
DOUBLE_LIMIT=9007199254740992;if(offset<=-DOUBLE_LIMIT||offset>=DOUBLE_LIMIT){return-61}FS.llseek(stream,offset,whence);tempI64=[stream.position>>>0,(tempDouble=stream.position,+Math.abs(tempDouble)>=1?tempDouble>0?(Math.min(+Math.floor(tempDouble/4294967296),4294967295)|0)>>>0:~~+Math.ceil((tempDouble-+(~~tempDouble>>>0))/4294967296)>>>0:0)],HEAP32[newOffset>>2]=tempI64[0],HEAP32[newOffset+4>>2]=tempI64[1];if(stream.getdents&&offset===0&&whence===0)stream.getdents=null;return 0}catch(e){if(typeof FS==="undefined"||!(e instanceof FS.ErrnoError))abort(e);return e.errno}}Module["_fd_seek"]=_fd_seek;function _fd_sync(fd){try{var stream=SYSCALLS.getStreamFromFD(fd);if(stream.stream_ops&&stream.stream_ops.fsync){return-stream.stream_ops.fsync(stream)}return 0}catch(e){if(typeof FS==="undefined"||!(e instanceof FS.ErrnoError))abort(e);return e.errno}}Module["_fd_sync"]=_fd_sync;_fd_sync.sig="ii";function _fd_write(fd,iov,iovcnt,pnum){try{var stream=SYSCALLS.getStreamFromFD(fd);var num=SYSCALLS.doWritev(stream,iov,iovcnt);HEAP32[pnum>>2]=num;return 0}catch(e){if(typeof FS==="undefined"||!(e instanceof FS.ErrnoError))abort(e);return e.errno}}Module["_fd_write"]=_fd_write;_fd_write.sig="iiiii";function _fork(){setErrNo(52);return-1}Module["_fork"]=_fork;_fork.sig="i";var GAI_ERRNO_MESSAGES={};Module["GAI_ERRNO_MESSAGES"]=GAI_ERRNO_MESSAGES;function _gai_strerror(val){var buflen=256;if(!_gai_strerror.buffer){_gai_strerror.buffer=_malloc(buflen);GAI_ERRNO_MESSAGES["0"]="Success";GAI_ERRNO_MESSAGES[""+-1]="Invalid value for 'ai_flags' field";GAI_ERRNO_MESSAGES[""+-2]="NAME or SERVICE is unknown";GAI_ERRNO_MESSAGES[""+-3]="Temporary failure in name resolution";GAI_ERRNO_MESSAGES[""+-4]="Non-recoverable failure in name res";GAI_ERRNO_MESSAGES[""+-6]="'ai_family' not supported";GAI_ERRNO_MESSAGES[""+-7]="'ai_socktype' not supported";GAI_ERRNO_MESSAGES[""+-8]="SERVICE not supported for 'ai_socktype'";GAI_ERRNO_MESSAGES[""+-10]="Memory allocation failure";GAI_ERRNO_MESSAGES[""+-11]="System error returned in 'errno'";GAI_ERRNO_MESSAGES[""+-12]="Argument buffer overflow"}var msg="Unknown error";if(val in GAI_ERRNO_MESSAGES){if(GAI_ERRNO_MESSAGES[val].length>buflen-1){msg="Message too long"}else{msg=GAI_ERRNO_MESSAGES[val]}}writeAsciiToMemory(msg,_gai_strerror.buffer);return _gai_strerror.buffer}Module["_gai_strerror"]=_gai_strerror;function _getTempRet0(){return getTempRet0()}Module["_getTempRet0"]=_getTempRet0;_getTempRet0.sig="i";function _getaddrinfo(node,service,hint,out){var addrs=[];var canon=null;var addr=0;var port=0;var flags=0;var family=0;var type=0;var proto=0;var ai,last;function allocaddrinfo(family,type,proto,canon,addr,port){var sa,salen,ai;var errno;salen=family===10?28:16;addr=family===10?inetNtop6(addr):inetNtop4(addr);sa=_malloc(salen);errno=writeSockaddr(sa,family,addr,port);assert(!errno);ai=_malloc(32);HEAP32[ai+4>>2]=family;HEAP32[ai+8>>2]=type;HEAP32[ai+12>>2]=proto;HEAP32[ai+24>>2]=canon;HEAP32[ai+20>>2]=sa;if(family===10){HEAP32[ai+16>>2]=28}else{HEAP32[ai+16>>2]=16}HEAP32[ai+28>>2]=0;return 
ai}if(hint){flags=HEAP32[hint>>2];family=HEAP32[hint+4>>2];type=HEAP32[hint+8>>2];proto=HEAP32[hint+12>>2]}if(type&&!proto){proto=type===2?17:6}if(!type&&proto){type=proto===17?2:1}if(proto===0){proto=6}if(type===0){type=1}if(!node&&!service){return-2}if(flags&~(1|2|4|1024|8|16|32)){return-1}if(hint!==0&&HEAP32[hint>>2]&2&&!node){return-1}if(flags&32){return-2}if(type!==0&&type!==1&&type!==2){return-7}if(family!==0&&family!==2&&family!==10){return-6}if(service){service=UTF8ToString(service);port=parseInt(service,10);if(isNaN(port)){if(flags&1024){return-2}return-8}}if(!node){if(family===0){family=2}if((flags&1)===0){if(family===2){addr=_htonl(2130706433)}else{addr=[0,0,0,1]}}ai=allocaddrinfo(family,type,proto,null,addr,port);HEAP32[out>>2]=ai;return 0}node=UTF8ToString(node);addr=inetPton4(node);if(addr!==null){if(family===0||family===2){family=2}else if(family===10&&flags&8){addr=[0,0,_htonl(65535),addr];family=10}else{return-2}}else{addr=inetPton6(node);if(addr!==null){if(family===0||family===10){family=10}else{return-2}}}if(addr!=null){ai=allocaddrinfo(family,type,proto,node,addr,port);HEAP32[out>>2]=ai;return 0}if(flags&4){return-2}node=DNS.lookup_name(node);addr=inetPton4(node);if(family===0){family=2}else if(family===10){addr=[0,0,_htonl(65535),addr]}ai=allocaddrinfo(family,type,proto,null,addr,port);HEAP32[out>>2]=ai;return 0}Module["_getaddrinfo"]=_getaddrinfo;_getaddrinfo.sig="iiiii";function _getentropy(buffer,size){if(!_getentropy.randomDevice){_getentropy.randomDevice=getRandomDevice()}for(var i=0;i<size;i++){HEAP8[buffer+i>>0]=_getentropy.randomDevice()}return 0}Module["_getentropy"]=_getentropy;function getHostByName(name){var ret=_malloc(20);var nameBuf=_malloc(name.length+1);stringToUTF8(name,nameBuf,name.length+1);HEAP32[ret>>2]=nameBuf;var aliasesBuf=_malloc(4);HEAP32[aliasesBuf>>2]=0;HEAP32[ret+4>>2]=aliasesBuf;var afinet=2;HEAP32[ret+8>>2]=afinet;HEAP32[ret+12>>2]=4;var addrListBuf=_malloc(12);HEAP32[addrListBuf>>2]=addrListBuf+8;HEAP32[addrListBuf+4>>2]=0;HEAP32[addrListBuf+8>>2]=inetPton4(DNS.lookup_name(name));HEAP32[ret+16>>2]=addrListBuf;return ret}Module["getHostByName"]=getHostByName;function _gethostbyaddr(addr,addrlen,type){if(type!==2){setErrNo(5);return null}addr=HEAP32[addr>>2];var host=inetNtop4(addr);var lookup=DNS.lookup_addr(host);if(lookup){host=lookup}return getHostByName(host)}Module["_gethostbyaddr"]=_gethostbyaddr;_gethostbyaddr.sig="iiii";function _gethostbyname(name){return getHostByName(UTF8ToString(name))}Module["_gethostbyname"]=_gethostbyname;_gethostbyname.sig="ii";function _getloadavg(loadavg,nelem){var limit=Math.min(nelem,3);var doubleSize=8;for(var i=0;i<limit;i++){HEAPF64[loadavg+i*doubleSize>>3]=.1}return limit}Module["_getloadavg"]=_getloadavg;function _getnameinfo(sa,salen,node,nodelen,serv,servlen,flags){var info=readSockaddr(sa,salen);if(info.errno){return-6}var port=info.port;var addr=info.addr;var overflowed=false;if(node&&nodelen){var lookup;if(flags&1||!(lookup=DNS.lookup_addr(addr))){if(flags&8){return-2}}else{addr=lookup}var numBytesWrittenExclNull=stringToUTF8(addr,node,nodelen);if(numBytesWrittenExclNull+1>=nodelen){overflowed=true}}if(serv&&servlen){port=""+port;var numBytesWrittenExclNull=stringToUTF8(port,serv,servlen);if(numBytesWrittenExclNull+1>=servlen){overflowed=true}}if(overflowed){return-12}return 0}Module["_getnameinfo"]=_getnameinfo;var Protocols={list:[],map:{}};Module["Protocols"]=Protocols;function _setprotoent(stayopen){function allocprotoent(name,proto,aliases){var nameBuf=_malloc(name.length+1);writeAsciiToMemory(name,nameBuf);var j=0;var length=aliases.length;var 
aliasListBuf=_malloc((length+1)*4);for(var i=0;i<length;i++,j+=4){var alias=aliases[i];var aliasBuf=_malloc(alias.length+1);writeAsciiToMemory(alias,aliasBuf);HEAP32[aliasListBuf+j>>2]=aliasBuf}HEAP32[aliasListBuf+j>>2]=0;var pe=_malloc(12);HEAP32[pe>>2]=nameBuf;HEAP32[pe+4>>2]=aliasListBuf;HEAP32[pe+8>>2]=proto;return pe}var list=Protocols.list;var map=Protocols.map;if(list.length===0){var entry=allocprotoent("tcp",6,["TCP"]);list.push(entry);map["tcp"]=map["6"]=entry;entry=allocprotoent("udp",17,["UDP"]);list.push(entry);map["udp"]=map["17"]=entry}_setprotoent.index=0}Module["_setprotoent"]=_setprotoent;function _getprotobyname(name){name=UTF8ToString(name);_setprotoent(true);var result=Protocols.map[name];return result}Module["_getprotobyname"]=_getprotobyname;function _gettimeofday(ptr){var now=Date.now();HEAP32[ptr>>2]=now/1e3|0;HEAP32[ptr+4>>2]=now%1e3*1e3|0;return 0}Module["_gettimeofday"]=_gettimeofday;function _kill(pid,sig){setErrNo(63);return-1}Module["_kill"]=_kill;function _posix_spawn(){return _fork()}Module["_posix_spawn"]=_posix_spawn;_posix_spawn.sig="i";function _proc_exit(code){procExit(code)}Module["_proc_exit"]=_proc_exit;_proc_exit.sig="vi";function _pthread_sigmask(how,set,oldset){err("pthread_sigmask() is not supported: this is a no-op.");return 0}Module["_pthread_sigmask"]=_pthread_sigmask;function _raise(sig){setErrNo(52);return-1}Module["_raise"]=_raise;function _setTempRet0(val){setTempRet0(val)}Module["_setTempRet0"]=_setTempRet0;_setTempRet0.sig="vi";function _setgroups(ngroups,gidset){if(ngroups<1||ngroups>_sysconf(3)){setErrNo(28);return-1}setErrNo(63);return-1}Module["_setgroups"]=_setgroups;function _siginterrupt(){return 0}Module["_siginterrupt"]=_siginterrupt;function _sigpending(set){HEAP32[set>>2]=0;return 0}Module["_sigpending"]=_sigpending;function _sigtimedwait(set,sig,timeout){return 28}Module["_sigtimedwait"]=_sigtimedwait;function __isLeapYear(year){return year%4===0&&(year%100!==0||year%400===0)}Module["__isLeapYear"]=__isLeapYear;function __arraySum(array,index){var sum=0;for(var i=0;i<=index;sum+=array[i++]){}return sum}Module["__arraySum"]=__arraySum;var __MONTH_DAYS_LEAP=[31,29,31,30,31,30,31,31,30,31,30,31];Module["__MONTH_DAYS_LEAP"]=__MONTH_DAYS_LEAP;var __MONTH_DAYS_REGULAR=[31,28,31,30,31,30,31,31,30,31,30,31];Module["__MONTH_DAYS_REGULAR"]=__MONTH_DAYS_REGULAR;function __addDays(date,days){var newDate=new Date(date.getTime());while(days>0){var leap=__isLeapYear(newDate.getFullYear());var currentMonth=newDate.getMonth();var daysInCurrentMonth=(leap?__MONTH_DAYS_LEAP:__MONTH_DAYS_REGULAR)[currentMonth];if(days>daysInCurrentMonth-newDate.getDate()){days-=daysInCurrentMonth-newDate.getDate()+1;newDate.setDate(1);if(currentMonth<11){newDate.setMonth(currentMonth+1)}else{newDate.setMonth(0);newDate.setFullYear(newDate.getFullYear()+1)}}else{newDate.setDate(newDate.getDate()+days);return newDate}}return newDate}Module["__addDays"]=__addDays;function _strftime(s,maxsize,format,tm){var tm_zone=HEAP32[tm+40>>2];var date={tm_sec:HEAP32[tm>>2],tm_min:HEAP32[tm+4>>2],tm_hour:HEAP32[tm+8>>2],tm_mday:HEAP32[tm+12>>2],tm_mon:HEAP32[tm+16>>2],tm_year:HEAP32[tm+20>>2],tm_wday:HEAP32[tm+24>>2],tm_yday:HEAP32[tm+28>>2],tm_isdst:HEAP32[tm+32>>2],tm_gmtoff:HEAP32[tm+36>>2],tm_zone:tm_zone?UTF8ToString(tm_zone):""};var pattern=UTF8ToString(format);var EXPANSION_RULES_1={"%c":"%a %b %d %H:%M:%S %Y","%D":"%m/%d/%y","%F":"%Y-%m-%d","%h":"%b","%r":"%I:%M:%S 
%p","%R":"%H:%M","%T":"%H:%M:%S","%x":"%m/%d/%y","%X":"%H:%M:%S","%Ec":"%c","%EC":"%C","%Ex":"%m/%d/%y","%EX":"%H:%M:%S","%Ey":"%y","%EY":"%Y","%Od":"%d","%Oe":"%e","%OH":"%H","%OI":"%I","%Om":"%m","%OM":"%M","%OS":"%S","%Ou":"%u","%OU":"%U","%OV":"%V","%Ow":"%w","%OW":"%W","%Oy":"%y"};for(var rule in EXPANSION_RULES_1){pattern=pattern.replace(new RegExp(rule,"g"),EXPANSION_RULES_1[rule])}var WEEKDAYS=["Sunday","Monday","Tuesday","Wednesday","Thursday","Friday","Saturday"];var MONTHS=["January","February","March","April","May","June","July","August","September","October","November","December"];function leadingSomething(value,digits,character){var str=typeof value==="number"?value.toString():value||"";while(str.length0?1:0}var compare;if((compare=sgn(date1.getFullYear()-date2.getFullYear()))===0){if((compare=sgn(date1.getMonth()-date2.getMonth()))===0){compare=sgn(date1.getDate()-date2.getDate())}}return compare}function getFirstWeekStartDate(janFourth){switch(janFourth.getDay()){case 0:return new Date(janFourth.getFullYear()-1,11,29);case 1:return janFourth;case 2:return new Date(janFourth.getFullYear(),0,3);case 3:return new Date(janFourth.getFullYear(),0,2);case 4:return new Date(janFourth.getFullYear(),0,1);case 5:return new Date(janFourth.getFullYear()-1,11,31);case 6:return new Date(janFourth.getFullYear()-1,11,30)}}function getWeekBasedYear(date){var thisDate=__addDays(new Date(date.tm_year+1900,0,1),date.tm_yday);var janFourthThisYear=new Date(thisDate.getFullYear(),0,4);var janFourthNextYear=new Date(thisDate.getFullYear()+1,0,4);var firstWeekStartThisYear=getFirstWeekStartDate(janFourthThisYear);var firstWeekStartNextYear=getFirstWeekStartDate(janFourthNextYear);if(compareByDay(firstWeekStartThisYear,thisDate)<=0){if(compareByDay(firstWeekStartNextYear,thisDate)<=0){return thisDate.getFullYear()+1}else{return thisDate.getFullYear()}}else{return thisDate.getFullYear()-1}}var EXPANSION_RULES_2={"%a":function(date){return WEEKDAYS[date.tm_wday].substring(0,3)},"%A":function(date){return WEEKDAYS[date.tm_wday]},"%b":function(date){return MONTHS[date.tm_mon].substring(0,3)},"%B":function(date){return MONTHS[date.tm_mon]},"%C":function(date){var year=date.tm_year+1900;return leadingNulls(year/100|0,2)},"%d":function(date){return leadingNulls(date.tm_mday,2)},"%e":function(date){return leadingSomething(date.tm_mday,2," ")},"%g":function(date){return getWeekBasedYear(date).toString().substring(2)},"%G":function(date){return getWeekBasedYear(date)},"%H":function(date){return leadingNulls(date.tm_hour,2)},"%I":function(date){var twelveHour=date.tm_hour;if(twelveHour==0)twelveHour=12;else if(twelveHour>12)twelveHour-=12;return leadingNulls(twelveHour,2)},"%j":function(date){return leadingNulls(date.tm_mday+__arraySum(__isLeapYear(date.tm_year+1900)?__MONTH_DAYS_LEAP:__MONTH_DAYS_REGULAR,date.tm_mon-1),3)},"%m":function(date){return leadingNulls(date.tm_mon+1,2)},"%M":function(date){return leadingNulls(date.tm_min,2)},"%n":function(){return"\n"},"%p":function(date){if(date.tm_hour>=0&&date.tm_hour<12){return"AM"}else{return"PM"}},"%S":function(date){return leadingNulls(date.tm_sec,2)},"%t":function(){return"\t"},"%u":function(date){return date.tm_wday||7},"%U":function(date){var janFirst=new Date(date.tm_year+1900,0,1);var firstSunday=janFirst.getDay()===0?janFirst:__addDays(janFirst,7-janFirst.getDay());var endDate=new Date(date.tm_year+1900,date.tm_mon,date.tm_mday);if(compareByDay(firstSunday,endDate)<0){var 
februaryFirstUntilEndMonth=__arraySum(__isLeapYear(endDate.getFullYear())?__MONTH_DAYS_LEAP:__MONTH_DAYS_REGULAR,endDate.getMonth()-1)-31;var firstSundayUntilEndJanuary=31-firstSunday.getDate();var days=firstSundayUntilEndJanuary+februaryFirstUntilEndMonth+endDate.getDate();return leadingNulls(Math.ceil(days/7),2)}return compareByDay(firstSunday,janFirst)===0?"01":"00"},"%V":function(date){var janFourthThisYear=new Date(date.tm_year+1900,0,4);var janFourthNextYear=new Date(date.tm_year+1901,0,4);var firstWeekStartThisYear=getFirstWeekStartDate(janFourthThisYear);var firstWeekStartNextYear=getFirstWeekStartDate(janFourthNextYear);var endDate=__addDays(new Date(date.tm_year+1900,0,1),date.tm_yday);if(compareByDay(endDate,firstWeekStartThisYear)<0){return"53"}if(compareByDay(firstWeekStartNextYear,endDate)<=0){return"01"}var daysDifference;if(firstWeekStartThisYear.getFullYear()<date.tm_year+1900){daysDifference=date.tm_yday+32-firstWeekStartThisYear.getDate()}else{daysDifference=date.tm_yday+1-firstWeekStartThisYear.getDate()}return leadingNulls(Math.ceil(daysDifference/7),2)},"%w":function(date){return date.tm_wday},"%W":function(date){var janFirst=new Date(date.tm_year,0,1);var firstMonday=janFirst.getDay()===1?janFirst:__addDays(janFirst,janFirst.getDay()===0?1:7-janFirst.getDay()+1);var endDate=new Date(date.tm_year+1900,date.tm_mon,date.tm_mday);if(compareByDay(firstMonday,endDate)<0){var februaryFirstUntilEndMonth=__arraySum(__isLeapYear(endDate.getFullYear())?__MONTH_DAYS_LEAP:__MONTH_DAYS_REGULAR,endDate.getMonth()-1)-31;var firstMondayUntilEndJanuary=31-firstMonday.getDate();var days=firstMondayUntilEndJanuary+februaryFirstUntilEndMonth+endDate.getDate();return leadingNulls(Math.ceil(days/7),2)}return compareByDay(firstMonday,janFirst)===0?"01":"00"},"%y":function(date){return(date.tm_year+1900).toString().substring(2)},"%Y":function(date){return date.tm_year+1900},"%z":function(date){var off=date.tm_gmtoff;var ahead=off>=0;off=Math.abs(off)/60;off=off/60*100+off%60;return(ahead?"+":"-")+String("0000"+off).slice(-4)},"%Z":function(date){return date.tm_zone},"%%":function(){return"%"}};for(var rule in EXPANSION_RULES_2){if(pattern.includes(rule)){pattern=pattern.replace(new RegExp(rule,"g"),EXPANSION_RULES_2[rule](date))}}var bytes=intArrayFromString(pattern,false);if(bytes.length>maxsize){return 0}writeArrayToMemory(bytes,s);return bytes.length-1}Module["_strftime"]=_strftime;_strftime.sig="iiiii";function _strftime_l(s,maxsize,format,tm){return _strftime(s,maxsize,format,tm)}Module["_strftime_l"]=_strftime_l;function _system(command){if(ENVIRONMENT_IS_NODE){if(!command)return 1;var cmdstr=UTF8ToString(command);if(!cmdstr.length)return 0;var cp=require("child_process");var ret=cp.spawnSync(cmdstr,[],{shell:true,stdio:"inherit"});var _W_EXITCODE=function(ret,sig){return ret<<8|sig};if(ret.status===null){var signalToNumber=function(sig){switch(sig){case"SIGHUP":return 1;case"SIGINT":return 2;case"SIGQUIT":return 3;case"SIGFPE":return 8;case"SIGKILL":return 9;case"SIGALRM":return 14;case"SIGTERM":return 15}return 2};return _W_EXITCODE(0,signalToNumber(ret.signal))}return _W_EXITCODE(ret.status,0)}if(!command)return 0;setErrNo(52);return-1}Module["_system"]=_system;function _time(ptr){var ret=Date.now()/1e3|0;if(ptr){HEAP32[ptr>>2]=ret}return ret}Module["_time"]=_time;_time.sig="ii";function _times(buffer){if(buffer!==0){zeroMemory(buffer,16)}return 0}Module["_times"]=_times;function setFileTime(path,time){path=UTF8ToString(path);try{FS.utime(path,time,time);return 0}catch(e){if(!(e instanceof FS.ErrnoError))throw e+" : "+stackTrace();setErrNo(e.errno);return-1}}Module["setFileTime"]=setFileTime;function _utimes(path,times){var time;if(times){var mtime=times+8;time=HEAP32[mtime>>2]*1e3;time+=HEAP32[mtime+4>>2]/1e3}else{time=Date.now()}return setFileTime(path,time)}Module["_utimes"]=_utimes;_utimes.sig="iii";function _wait3(a0){return _wait(a0)}Module["_wait3"]=_wait3;_wait3.sig="ii";function _wait4(a0){return _wait(a0)}Module["_wait4"]=_wait4;_wait4.sig="ii";function _waitid(a0){return _wait(a0)}Module["_waitid"]=_waitid;_waitid.sig="ii";var ___memory_base=1024;Module["___memory_base"]=___memory_base;var ___table_base=1;Module["___table_base"]=___table_base;function _utime(path,times){var time;if(times){time=HEAP32[times+4>>2]*1e3}else{time=Date.now()}return setFileTime(path,time)}Module["_utime"]=_utime;_utime.sig="iii";function _flock(fd,operation){return 0}Module["_flock"]=_flock;function _vfork(){return _fork()}Module["_vfork"]=_vfork;_vfork.sig="i";function 
_emscripten_notify_memory_growth(memoryIndex){updateGlobalBufferAndViews(wasmMemory.buffer)}Module["_emscripten_notify_memory_growth"]=_emscripten_notify_memory_growth;function _difftime(time1,time0){return time1-time0}Module["_difftime"]=_difftime;_difftime.sig="dii";function _timelocal(a0){return _mktime(a0)}Module["_timelocal"]=_timelocal;_timelocal.sig="ii";function _timegm(tmPtr){_tzset();var time=Date.UTC(HEAP32[tmPtr+20>>2]+1900,HEAP32[tmPtr+16>>2],HEAP32[tmPtr+12>>2],HEAP32[tmPtr+8>>2],HEAP32[tmPtr+4>>2],HEAP32[tmPtr>>2],0);var date=new Date(time);HEAP32[tmPtr+24>>2]=date.getUTCDay();var start=Date.UTC(date.getUTCFullYear(),0,1,0,0,0,0);var yday=(date.getTime()-start)/(1e3*60*60*24)|0;HEAP32[tmPtr+28>>2]=yday;return date.getTime()/1e3|0}Module["_timegm"]=_timegm;_timegm.sig="ii";function _ctime_r(time,buf){var stack=stackSave();var rv=___asctime(_localtime_r(time,stackAlloc(44)),buf);stackRestore(stack);return rv}Module["_ctime_r"]=_ctime_r;_ctime_r.sig="iii";function _dysize(year){var leap=year%4==0&&(year%100!=0||year%400==0);return leap?366:365}Module["_dysize"]=_dysize;function _stime(when){setErrNo(63);return-1}Module["_stime"]=_stime;function _strptime(buf,format,tm){var pattern=UTF8ToString(format);var SPECIAL_CHARS="\\!@#$^&*()+=-[]/{}|:<>?,.";for(var i=0,ii=SPECIAL_CHARS.length;i=0;i=pattern.indexOf("%")){capture.push(pattern[i+1]);pattern=pattern.replace(new RegExp("\\%"+pattern[i+1],"g"),"")}var matches=new RegExp("^"+pattern,"i").exec(UTF8ToString(buf));function initDate(){function fixup(value,min,max){return typeof value!=="number"||isNaN(value)?min:value>=min?value<=max?value:max:min}return{year:fixup(HEAP32[tm+20>>2]+1900,1970,9999),month:fixup(HEAP32[tm+16>>2],0,11),day:fixup(HEAP32[tm+12>>2],1,31),hour:fixup(HEAP32[tm+8>>2],0,23),min:fixup(HEAP32[tm+4>>2],0,59),sec:fixup(HEAP32[tm>>2],0,59)}}if(matches){var date=initDate();var value;var getMatch=function(symbol){var pos=capture.indexOf(symbol);if(pos>=0){return matches[pos+1]}return};if(value=getMatch("S")){date.sec=jstoi_q(value)}if(value=getMatch("M")){date.min=jstoi_q(value)}if(value=getMatch("H")){date.hour=jstoi_q(value)}else if(value=getMatch("I")){var hour=jstoi_q(value);if(value=getMatch("p")){hour+=value.toUpperCase()[0]==="P"?12:0}date.hour=hour}if(value=getMatch("Y")){date.year=jstoi_q(value)}else if(value=getMatch("y")){var year=jstoi_q(value);if(value=getMatch("C")){year+=jstoi_q(value)*100}else{year+=year<69?2e3:1900}date.year=year}if(value=getMatch("m")){date.month=jstoi_q(value)-1}else if(value=getMatch("b")){date.month=MONTH_NUMBERS[value.substring(0,3).toUpperCase()]||0}if(value=getMatch("d")){date.day=jstoi_q(value)}else if(value=getMatch("j")){var day=jstoi_q(value);var leapYear=__isLeapYear(date.year);for(var month=0;month<12;++month){var daysUntilMonth=__arraySum(leapYear?__MONTH_DAYS_LEAP:__MONTH_DAYS_REGULAR,month-1);if(day<=daysUntilMonth+(leapYear?__MONTH_DAYS_LEAP:__MONTH_DAYS_REGULAR)[month]){date.day=day-daysUntilMonth}}}else if(value=getMatch("a")){var weekDay=value.substring(0,3).toUpperCase();if(value=getMatch("U")){var weekDayNumber=DAY_NUMBERS_SUN_FIRST[weekDay];var weekNumber=jstoi_q(value);var janFirst=new Date(date.year,0,1);var endDate;if(janFirst.getDay()===0){endDate=__addDays(janFirst,weekDayNumber+7*(weekNumber-1))}else{endDate=__addDays(janFirst,7-janFirst.getDay()+weekDayNumber+7*(weekNumber-1))}date.day=endDate.getDate();date.month=endDate.getMonth()}else if(value=getMatch("W")){var weekDayNumber=DAY_NUMBERS_MON_FIRST[weekDay];var weekNumber=jstoi_q(value);var 
janFirst=new Date(date.year,0,1);var endDate;if(janFirst.getDay()===1){endDate=__addDays(janFirst,weekDayNumber+7*(weekNumber-1))}else{endDate=__addDays(janFirst,7-janFirst.getDay()+1+weekDayNumber+7*(weekNumber-1))}date.day=endDate.getDate();date.month=endDate.getMonth()}}var fullDate=new Date(date.year,date.month,date.day,date.hour,date.min,date.sec,0);HEAP32[tm>>2]=fullDate.getSeconds();HEAP32[tm+4>>2]=fullDate.getMinutes();HEAP32[tm+8>>2]=fullDate.getHours();HEAP32[tm+12>>2]=fullDate.getDate();HEAP32[tm+16>>2]=fullDate.getMonth();HEAP32[tm+20>>2]=fullDate.getFullYear()-1900;HEAP32[tm+24>>2]=fullDate.getDay();HEAP32[tm+28>>2]=__arraySum(__isLeapYear(fullDate.getFullYear())?__MONTH_DAYS_LEAP:__MONTH_DAYS_REGULAR,fullDate.getMonth()-1)+fullDate.getDate()-1;HEAP32[tm+32>>2]=0;return buf+intArrayFromString(matches[0]).length-1}return 0}Module["_strptime"]=_strptime;function _strptime_l(buf,format,tm){return _strptime(buf,format,tm)}Module["_strptime_l"]=_strptime_l;function _getdate(string){return 0}Module["_getdate"]=_getdate;function _timespec_get(ts,base){if(base!==1){setErrNo(28);return 0}var ret=_clock_gettime(0,ts);return ret<0?0:base}Module["_timespec_get"]=_timespec_get;function _clock_getcpuclockid(pid,clk_id){if(pid<0)return 71;if(pid!==0&&pid!==42)return 52;if(clk_id)HEAP32[clk_id>>2]=2;return 0}Module["_clock_getcpuclockid"]=_clock_getcpuclockid;function _ftime(p){var millis=Date.now();HEAP32[p>>2]=millis/1e3|0;HEAP16[p+4>>1]=millis%1e3;HEAP16[p+6>>1]=0;HEAP16[p+8>>1]=0;return 0}Module["_ftime"]=_ftime;var ERRNO_MESSAGES={0:"Success",1:"Arg list too long",2:"Permission denied",3:"Address already in use",4:"Address not available",5:"Address family not supported by protocol family",6:"No more processes",7:"Socket already connected",8:"Bad file number",9:"Trying to read unreadable message",10:"Mount device busy",11:"Operation canceled",12:"No children",13:"Connection aborted",14:"Connection refused",15:"Connection reset by peer",16:"File locking deadlock error",17:"Destination address required",18:"Math arg out of domain of func",19:"Quota exceeded",20:"File exists",21:"Bad address",22:"File too large",23:"Host is unreachable",24:"Identifier removed",25:"Illegal byte sequence",26:"Connection already in progress",27:"Interrupted system call",28:"Invalid argument",29:"I/O error",30:"Socket is already connected",31:"Is a directory",32:"Too many symbolic links",33:"Too many open files",34:"Too many links",35:"Message too long",36:"Multihop attempted",37:"File or path name too long",38:"Network interface is not configured",39:"Connection reset by network",40:"Network is unreachable",41:"Too many open files in system",42:"No buffer space available",43:"No such device",44:"No such file or directory",45:"Exec format error",46:"No record locks available",47:"The link has been severed",48:"Not enough core",49:"No message of desired type",50:"Protocol not available",51:"No space left on device",52:"Function not implemented",53:"Socket is not connected",54:"Not a directory",55:"Directory not empty",56:"State not recoverable",57:"Socket operation on non-socket",59:"Not a typewriter",60:"No such device or address",61:"Value too large for defined data type",62:"Previous owner died",63:"Not super-user",64:"Broken pipe",65:"Protocol error",66:"Unknown protocol",67:"Protocol wrong type for socket",68:"Math result not representable",69:"Read only file system",70:"Illegal seek",71:"No such process",72:"Stale file handle",73:"Connection timed out",74:"Text file busy",75:"Cross-device link",100:"Device 
not a stream",101:"Bad font file fmt",102:"Invalid slot",103:"Invalid request code",104:"No anode",105:"Block device required",106:"Channel number out of range",107:"Level 3 halted",108:"Level 3 reset",109:"Link number out of range",110:"Protocol driver not attached",111:"No CSI structure available",112:"Level 2 halted",113:"Invalid exchange",114:"Invalid request descriptor",115:"Exchange full",116:"No data (for no delay io)",117:"Timer expired",118:"Out of streams resources",119:"Machine is not on the network",120:"Package not installed",121:"The object is remote",122:"Advertise error",123:"Srmount error",124:"Communication error on send",125:"Cross mount point (not really error)",126:"Given log. name not unique",127:"f.d. invalid for this operation",128:"Remote address changed",129:"Can access a needed shared lib",130:"Accessing a corrupted shared lib",131:".lib section in a.out corrupted",132:"Attempting to link in too many libs",133:"Attempting to exec a shared library",135:"Streams pipe error",136:"Too many users",137:"Socket type not supported",138:"Not supported",139:"Protocol family not supported",140:"Can't send after socket shutdown",141:"Too many references",142:"Host is down",148:"No medium (in tape drive)",156:"Level 2 not synchronized"};Module["ERRNO_MESSAGES"]=ERRNO_MESSAGES;function _gethostbyname_r(name,ret,buf,buflen,out,err){var data=_gethostbyname(name);_memcpy(ret,data,20);_free(data);HEAP32[err>>2]=0;HEAP32[out>>2]=ret;return 0}Module["_gethostbyname_r"]=_gethostbyname_r;_gethostbyname_r.sig="iiiiiii";function _endprotoent(){}Module["_endprotoent"]=_endprotoent;function _getprotoent(number){if(_setprotoent.index===Protocols.list.length){return 0}else{var result=Protocols.list[_setprotoent.index++];return result}}Module["_getprotoent"]=_getprotoent;function _getprotobynumber(number){_setprotoent(true);var result=Protocols.map[number];return result}Module["_getprotobynumber"]=_getprotobynumber;function _getpwnam(){throw"getpwnam: TODO"}Module["_getpwnam"]=_getpwnam;function _getpwnam_r(){throw"getpwnam_r: TODO"}Module["_getpwnam_r"]=_getpwnam_r;function _getpwuid(){throw"getpwuid: TODO"}Module["_getpwuid"]=_getpwuid;function _getpwuid_r(){throw"getpwuid_r: TODO"}Module["_getpwuid_r"]=_getpwuid_r;function _setpwent(){throw"setpwent: TODO"}Module["_setpwent"]=_setpwent;function _getpwent(){throw"getpwent: TODO"}Module["_getpwent"]=_getpwent;function _endpwent(){throw"endpwent: TODO"}Module["_endpwent"]=_endpwent;function _getgrgid(){throw"getgrgid: TODO"}Module["_getgrgid"]=_getgrgid;function _getgrgid_r(){throw"getgrgid_r: TODO"}Module["_getgrgid_r"]=_getgrgid_r;function _getgrnam(){throw"getgrnam: TODO"}Module["_getgrnam"]=_getgrnam;function _getgrnam_r(){throw"getgrnam_r: TODO"}Module["_getgrnam_r"]=_getgrnam_r;function _getgrent(){throw"getgrent: TODO"}Module["_getgrent"]=_getgrent;function _endgrent(){throw"endgrent: TODO"}Module["_endgrent"]=_endgrent;function _setgrent(){throw"setgrent: TODO"}Module["_setgrent"]=_setgrent;function _emscripten_run_script(ptr){eval(UTF8ToString(ptr))}Module["_emscripten_run_script"]=_emscripten_run_script;_emscripten_run_script.sig="vi";function _emscripten_run_script_int(ptr){return eval(UTF8ToString(ptr))|0}Module["_emscripten_run_script_int"]=_emscripten_run_script_int;_emscripten_run_script_int.sig="ii";function _emscripten_run_script_string(ptr){var s=eval(UTF8ToString(ptr));if(s==null){return 0}s+="";var me=_emscripten_run_script_string;var 
len=lengthBytesUTF8(s);if(!me.bufferSize||me.bufferSize=4){symbolName=parts[1];file=parts[2];lineno=parts[3];column=parts[4]|0}else{callstack+=line+"\n";continue}}var haveSourceMap=false;if(flags&8){var orig=emscripten_source_map.originalPositionFor({line:lineno,column:column});haveSourceMap=orig&&orig.source;if(haveSourceMap){if(flags&64){orig.source=orig.source.substring(orig.source.replace(/\\/g,"/").lastIndexOf("/")+1)}callstack+=" at "+symbolName+" ("+orig.source+":"+orig.line+":"+orig.column+")\n"}}if(flags&16||!haveSourceMap){if(flags&64){file=file.substring(file.replace(/\\/g,"/").lastIndexOf("/")+1)}callstack+=(haveSourceMap?" = "+symbolName:" at "+symbolName)+" ("+file+":"+lineno+":"+column+")\n"}if(flags&128&&stack_args[0]){if(stack_args[1]==symbolName&&stack_args[2].length>0){callstack=callstack.replace(/\s+$/,"");callstack+=" with values: "+stack_args[1]+stack_args[2]+"\n"}stack_args=traverseStack(stack_args[0])}}callstack=callstack.replace(/\s+$/,"");return callstack}Module["_emscripten_get_callstack_js"]=_emscripten_get_callstack_js;function _emscripten_get_callstack(flags,str,maxbytes){var callstack=_emscripten_get_callstack_js(flags);if(!str||maxbytes<=0){return lengthBytesUTF8(callstack)+1}var bytesWrittenExcludingNull=stringToUTF8(callstack,str,maxbytes);return bytesWrittenExcludingNull+1}Module["_emscripten_get_callstack"]=_emscripten_get_callstack;function _emscripten_log_js(flags,str){if(flags&24){str=str.replace(/\s+$/,"");str+=(str.length>0?"\n":"")+_emscripten_get_callstack_js(flags)}if(flags&1){if(flags&4){err(str)}else if(flags&2){console.warn(str)}else if(flags&512){console.info(str)}else if(flags&256){console.debug(str)}else{out(str)}}else if(flags&6){err(str)}else{out(str)}}Module["_emscripten_log_js"]=_emscripten_log_js;function reallyNegative(x){return x<0||x===0&&1/x===-Infinity}Module["reallyNegative"]=reallyNegative;function convertI32PairToI53(lo,hi){return(lo>>>0)+hi*4294967296}Module["convertI32PairToI53"]=convertI32PairToI53;function convertU32PairToI53(lo,hi){return(lo>>>0)+(hi>>>0)*4294967296}Module["convertU32PairToI53"]=convertU32PairToI53;function reSign(value,bits){if(value<=0){return value}var half=bits<=32?Math.abs(1<=half&&(bits<=32||value>half)){value=-2*half+value}return value}Module["reSign"]=reSign;function unSign(value,bits){if(value>=0){return value}return bits<=32?2*Math.abs(1<>3];argIndex+=8}else if(type=="i64"){ret=[HEAP32[argIndex>>2],HEAP32[argIndex+4>>2]];argIndex+=8}else{type="i32";ret=HEAP32[argIndex>>2];argIndex+=4}return ret}var ret=[];var curr,next,currArg;while(1){var startTextIndex=textIndex;curr=HEAP8[textIndex>>0];if(curr===0)break;next=HEAP8[textIndex+1>>0];if(curr==37){var flagAlwaysSigned=false;var flagLeftAlign=false;var flagAlternative=false;var flagZeroPad=false;var flagPadSign=false;flagsLoop:while(1){switch(next){case 43:flagAlwaysSigned=true;break;case 45:flagLeftAlign=true;break;case 35:flagAlternative=true;break;case 48:if(flagZeroPad){break flagsLoop}else{flagZeroPad=true;break}case 32:flagPadSign=true;break;default:break flagsLoop}textIndex++;next=HEAP8[textIndex+1>>0]}var width=0;if(next==42){width=getNextArg("i32");textIndex++;next=HEAP8[textIndex+1>>0]}else{while(next>=48&&next<=57){width=width*10+(next-48);textIndex++;next=HEAP8[textIndex+1>>0]}}var precisionSet=false,precision=-1;if(next==46){precision=0;precisionSet=true;textIndex++;next=HEAP8[textIndex+1>>0];if(next==42){precision=getNextArg("i32");textIndex++}else{while(1){var 
precisionChr=HEAP8[textIndex+1>>0];if(precisionChr<48||precisionChr>57)break;precision=precision*10+(precisionChr-48);textIndex++}}next=HEAP8[textIndex+1>>0]}if(precision<0){precision=6;precisionSet=false}var argSize;switch(String.fromCharCode(next)){case"h":var nextNext=HEAP8[textIndex+2>>0];if(nextNext==104){textIndex++;argSize=1}else{argSize=2}break;case"l":var nextNext=HEAP8[textIndex+2>>0];if(nextNext==108){textIndex++;argSize=8}else{argSize=4}break;case"L":case"q":case"j":argSize=8;break;case"z":case"t":case"I":argSize=4;break;default:argSize=null}if(argSize)textIndex++;next=HEAP8[textIndex+1>>0];switch(String.fromCharCode(next)){case"d":case"i":case"u":case"o":case"x":case"X":case"p":{var signed=next==100||next==105;argSize=argSize||4;currArg=getNextArg("i"+argSize*8);var argText;if(argSize==8){currArg=next==117?convertU32PairToI53(currArg[0],currArg[1]):convertI32PairToI53(currArg[0],currArg[1])}if(argSize<=4){var limit=Math.pow(256,argSize)-1;currArg=(signed?reSign:unSign)(currArg&limit,argSize*8)}var currAbsArg=Math.abs(currArg);var prefix="";if(next==100||next==105){argText=reSign(currArg,8*argSize,1).toString(10)}else if(next==117){argText=unSign(currArg,8*argSize,1).toString(10);currArg=Math.abs(currArg)}else if(next==111){argText=(flagAlternative?"0":"")+currAbsArg.toString(8)}else if(next==120||next==88){prefix=flagAlternative&&currArg!=0?"0x":"";if(currArg<0){currArg=-currArg;argText=(currAbsArg-1).toString(16);var buffer=[];for(var i=0;i=0){if(flagAlwaysSigned){prefix="+"+prefix}else if(flagPadSign){prefix=" "+prefix}}if(argText.charAt(0)=="-"){prefix="-"+prefix;argText=argText.substr(1)}while(prefix.length+argText.lengthexponent&&exponent>=-4){next=(next==103?"f":"F").charCodeAt(0);precision-=exponent+1}else{next=(next==103?"e":"E").charCodeAt(0);precision--}effectivePrecision=Math.min(precision,20)}if(next==101||next==69){argText=currArg.toExponential(effectivePrecision);if(/[eE][-+]\d$/.test(argText)){argText=argText.slice(0,-1)+"0"+argText.slice(-1)}}else if(next==102||next==70){argText=currArg.toFixed(effectivePrecision);if(currArg===0&&reallyNegative(currArg)){argText="-"+argText}}var parts=argText.split("e");if(isGeneral&&!flagAlternative){while(parts[0].length>1&&parts[0].includes(".")&&(parts[0].slice(-1)=="0"||parts[0].slice(-1)==".")){parts[0]=parts[0].slice(0,-1)}}else{if(flagAlternative&&argText.indexOf(".")==-1)parts[0]+=".";while(precision>effectivePrecision++)parts[0]+="0"}argText=parts[0]+(parts.length>1?"e"+parts[1]:"");if(next==69)argText=argText.toUpperCase();if(currArg>=0){if(flagAlwaysSigned){argText="+"+argText}else if(flagPadSign){argText=" "+argText}}}while(argText.length>0])}}else{ret=ret.concat(intArrayFromString("(null)".substr(0,argLength),true))}if(flagLeftAlign){while(argLength0){ret.push(32)}if(!flagLeftAlign)ret.push(getNextArg("i8"));break}case"n":{var ptr=getNextArg("i32*");HEAP32[ptr>>2]=ret.length;break}case"%":{ret.push(curr);break}default:{for(var i=startTextIndex;i>0])}}}textIndex+=2}else{ret.push(curr);textIndex+=1}}return ret}Module["formatString"]=formatString;function _emscripten_log(flags,format,varargs){var result=formatString(format,varargs);var str=UTF8ArrayToString(result,0);_emscripten_log_js(flags,str)}Module["_emscripten_log"]=_emscripten_log;function _emscripten_get_compiler_setting(name){throw"You must build with -s RETAIN_COMPILER_SETTINGS=1 for getCompilerSetting or emscripten_get_compiler_setting to work"}Module["_emscripten_get_compiler_setting"]=_emscripten_get_compiler_setting;function 
_emscripten_has_asyncify(){return 0}Module["_emscripten_has_asyncify"]=_emscripten_has_asyncify;function _emscripten_debugger(){debugger}Module["_emscripten_debugger"]=_emscripten_debugger;function _emscripten_print_double(x,to,max){var str=x+"";if(to)return stringToUTF8(str,to,max);else return lengthBytesUTF8(str)}Module["_emscripten_print_double"]=_emscripten_print_double;function _emscripten_generate_pc(frame){abort("Cannot use emscripten_generate_pc (needed by __builtin_return_address) without -s USE_OFFSET_CONVERTER")}Module["_emscripten_generate_pc"]=_emscripten_generate_pc;function _emscripten_return_address(level){var callstack=(new Error).stack.split("\n");if(callstack[0]=="Error"){callstack.shift()}return _emscripten_generate_pc(callstack[level+2])}Module["_emscripten_return_address"]=_emscripten_return_address;var UNWIND_CACHE={};Module["UNWIND_CACHE"]=UNWIND_CACHE;function __emscripten_save_in_unwind_cache(callstack){callstack.forEach(function(frame){var pc=_emscripten_generate_pc(frame);if(pc){UNWIND_CACHE[pc]=frame}})}Module["__emscripten_save_in_unwind_cache"]=__emscripten_save_in_unwind_cache;function _emscripten_stack_snapshot(){var callstack=(new Error).stack.split("\n");if(callstack[0]=="Error"){callstack.shift()}__emscripten_save_in_unwind_cache(callstack);UNWIND_CACHE.last_addr=_emscripten_generate_pc(callstack[2]);UNWIND_CACHE.last_stack=callstack;return UNWIND_CACHE.last_addr}Module["_emscripten_stack_snapshot"]=_emscripten_stack_snapshot;function _emscripten_stack_unwind_buffer(addr,buffer,count){var stack;if(UNWIND_CACHE.last_addr==addr){stack=UNWIND_CACHE.last_stack}else{stack=(new Error).stack.split("\n");if(stack[0]=="Error"){stack.shift()}__emscripten_save_in_unwind_cache(stack)}var offset=2;while(stack[offset]&&_emscripten_generate_pc(stack[offset])!=addr){++offset}for(var i=0;i<count&&stack[i+offset];++i){HEAP32[buffer+i*4>>2]=_emscripten_generate_pc(stack[i+offset])}return i}Module["_emscripten_stack_unwind_buffer"]=_emscripten_stack_unwind_buffer;function _emscripten_pc_get_function(pc){abort("Cannot use emscripten_pc_get_function without -s USE_OFFSET_CONVERTER")}Module["_emscripten_pc_get_function"]=_emscripten_pc_get_function;function _emscripten_pc_get_source_js(pc){if(UNWIND_CACHE.last_get_source_pc==pc)return UNWIND_CACHE.last_source;var match;var source;if(!source){var frame=UNWIND_CACHE[pc];if(!frame)return null;if(match=/\((.*):(\d+):(\d+)\)$/.exec(frame)){source={file:match[1],line:match[2],column:match[3]}}else if(match=/@(.*):(\d+):(\d+)/.exec(frame)){source={file:match[1],line:match[2],column:match[3]}}}UNWIND_CACHE.last_get_source_pc=pc;UNWIND_CACHE.last_source=source;return source}Module["_emscripten_pc_get_source_js"]=_emscripten_pc_get_source_js;function withBuiltinMalloc(func){var prev_malloc=typeof _malloc!=="undefined"?_malloc:undefined;var prev_memalign=typeof _memalign!=="undefined"?_memalign:undefined;var prev_free=typeof _free!=="undefined"?_free:undefined;_malloc=_emscripten_builtin_malloc;_memalign=_emscripten_builtin_memalign;_free=_emscripten_builtin_free;try{return func()}finally{_malloc=prev_malloc;_memalign=prev_memalign;_free=prev_free}}Module["withBuiltinMalloc"]=withBuiltinMalloc;function _emscripten_pc_get_file(pc){var result=_emscripten_pc_get_source_js(pc);if(!result)return 0;withBuiltinMalloc(function(){if(_emscripten_pc_get_file.ret)_free(_emscripten_pc_get_file.ret);_emscripten_pc_get_file.ret=allocateUTF8(result.file)});return _emscripten_pc_get_file.ret}Module["_emscripten_pc_get_file"]=_emscripten_pc_get_file;function _emscripten_pc_get_line(pc){var 
result=_emscripten_pc_get_source_js(pc);return result?result.line:0}Module["_emscripten_pc_get_line"]=_emscripten_pc_get_line;function _emscripten_pc_get_column(pc){var result=_emscripten_pc_get_source_js(pc);return result?result.column||0:0}Module["_emscripten_pc_get_column"]=_emscripten_pc_get_column;function _emscripten_get_module_name(buf,length){return stringToUTF8(wasmBinaryFile,buf,length)}Module["_emscripten_get_module_name"]=_emscripten_get_module_name;function _emscripten_builtin_mmap2(addr,len,prot,flags,fd,off){return withBuiltinMalloc(function(){return syscallMmap2(addr,len,prot,flags,fd,off)})}Module["_emscripten_builtin_mmap2"]=_emscripten_builtin_mmap2;function _emscripten_builtin_munmap(addr,len){return withBuiltinMalloc(function(){return syscallMunmap(addr,len)})}Module["_emscripten_builtin_munmap"]=_emscripten_builtin_munmap;function _emscripten_asm_const_double(a0,a1,a2){return _emscripten_asm_const_int(a0,a1,a2)}Module["_emscripten_asm_const_double"]=_emscripten_asm_const_double;_emscripten_asm_const_double.sig="iiii";function mainThreadEM_ASM(code,sigPtr,argbuf,sync){code-=1024;var args=readAsmConstArgs(sigPtr,argbuf);return ASM_CONSTS[code].apply(null,args)}Module["mainThreadEM_ASM"]=mainThreadEM_ASM;function _emscripten_asm_const_int_sync_on_main_thread(code,sigPtr,argbuf){return mainThreadEM_ASM(code,sigPtr,argbuf,1)}Module["_emscripten_asm_const_int_sync_on_main_thread"]=_emscripten_asm_const_int_sync_on_main_thread;_emscripten_asm_const_int_sync_on_main_thread.sig="iiii";function _emscripten_asm_const_double_sync_on_main_thread(a0,a1,a2){return _emscripten_asm_const_int_sync_on_main_thread(a0,a1,a2)}Module["_emscripten_asm_const_double_sync_on_main_thread"]=_emscripten_asm_const_double_sync_on_main_thread;_emscripten_asm_const_double_sync_on_main_thread.sig="iiii";function _emscripten_asm_const_async_on_main_thread(code,sigPtr,argbuf){return mainThreadEM_ASM(code,sigPtr,argbuf,0)}Module["_emscripten_asm_const_async_on_main_thread"]=_emscripten_asm_const_async_on_main_thread;function jstoi_s(str){return Number(str)}Module["jstoi_s"]=jstoi_s;function __Unwind_Backtrace(func,arg){var trace=_emscripten_get_callstack_js();var parts=trace.split("\n");for(var i=0;i>3)+i]);return Math.hypot.apply(null,args)}Module["_emscripten_math_hypot"]=_emscripten_math_hypot;function _emscripten_math_sin(x){return Math.sin(x)}Module["_emscripten_math_sin"]=_emscripten_math_sin;function _emscripten_math_sinh(x){return Math.sinh(x)}Module["_emscripten_math_sinh"]=_emscripten_math_sinh;function _emscripten_math_tan(x){return Math.tan(x)}Module["_emscripten_math_tan"]=_emscripten_math_tan;function _emscripten_math_tanh(x){return Math.tanh(x)}Module["_emscripten_math_tanh"]=_emscripten_math_tanh;function _sigaction(a0,a1,a2){return ___sigaction(a0,a1,a2)}Module["_sigaction"]=_sigaction;_sigaction.sig="viii";function ___sys_getpgrp(){return 42}Module["___sys_getpgrp"]=___sys_getpgrp;function ___sys_rt_sigqueueinfo(tgid,pid,uinfo){try{return 0}catch(e){if(typeof FS==="undefined"||!(e instanceof FS.ErrnoError))abort(e);return-e.errno}}Module["___sys_rt_sigqueueinfo"]=___sys_rt_sigqueueinfo;function ___sys_setregid32(ruid,euid){if(ruid!==0)return-63;return 0}Module["___sys_setregid32"]=___sys_setregid32;___sys_setregid32.sig="iii";function ___sys_setreuid32(a0,a1){return ___sys_setregid32(a0,a1)}Module["___sys_setreuid32"]=___sys_setreuid32;___sys_setreuid32.sig="iii";function ___sys_setgid32(uid){if(uid!==0)return-63;return 
0}Module["___sys_setgid32"]=___sys_setgid32;___sys_setgid32.sig="ii";function ___sys_setuid32(a0){return ___sys_setgid32(a0)}Module["___sys_setuid32"]=___sys_setuid32;___sys_setuid32.sig="ii";function ___sys_setresgid32(ruid,euid,suid){if(euid!==0)return-63;return 0}Module["___sys_setresgid32"]=___sys_setresgid32;___sys_setresgid32.sig="iiii";function ___sys_setresuid32(a0,a1,a2){return ___sys_setresgid32(a0,a1,a2)}Module["___sys_setresuid32"]=___sys_setresuid32;___sys_setresuid32.sig="iiii";function ___sys_faccessat(dirfd,path,amode,flags){try{path=SYSCALLS.getStr(path);path=SYSCALLS.calculateAt(dirfd,path);return SYSCALLS.doAccess(path,amode)}catch(e){if(typeof FS==="undefined"||!(e instanceof FS.ErrnoError))abort(e);return-e.errno}}Module["___sys_faccessat"]=___sys_faccessat;var JSEvents={inEventHandler:0,removeAllEventListeners:function(){for(var i=JSEvents.eventHandlers.length-1;i>=0;--i){JSEvents._removeHandler(i)}JSEvents.eventHandlers=[];JSEvents.deferredCalls=[]},registerRemoveEventListeners:function(){if(!JSEvents.removeEventListenersRegistered){__ATEXIT__.push(JSEvents.removeAllEventListeners);JSEvents.removeEventListenersRegistered=true}},deferredCalls:[],deferCall:function(targetFunction,precedence,argsList){function arraysHaveEqualContent(arrA,arrB){if(arrA.length!=arrB.length)return false;for(var i in arrA){if(arrA[i]!=arrB[i])return false}return true}for(var i in JSEvents.deferredCalls){var call=JSEvents.deferredCalls[i];if(call.targetFunction==targetFunction&&arraysHaveEqualContent(call.argsList,argsList)){return}}JSEvents.deferredCalls.push({targetFunction:targetFunction,precedence:precedence,argsList:argsList});JSEvents.deferredCalls.sort(function(x,y){return x.precedence2?UTF8ToString(cString):cString}Module["maybeCStringToJsString"]=maybeCStringToJsString;var specialHTMLTargets=[0,typeof document!=="undefined"?document:0,typeof window!=="undefined"?window:0];Module["specialHTMLTargets"]=specialHTMLTargets;function findEventTarget(target){target=maybeCStringToJsString(target);var domElement=specialHTMLTargets[target]||(typeof document!=="undefined"?document.querySelector(target):undefined);return domElement}Module["findEventTarget"]=findEventTarget;function registerKeyEventCallback(target,userData,useCapture,callbackfunc,eventTypeId,eventTypeString,targetThread){if(!JSEvents.keyEvent)JSEvents.keyEvent=_malloc(176);var keyEventHandlerFunc=function(e){var keyEventData=JSEvents.keyEvent;HEAPF64[keyEventData>>3]=e.timeStamp;var idx=keyEventData>>2;HEAP32[idx+2]=e.location;HEAP32[idx+3]=e.ctrlKey;HEAP32[idx+4]=e.shiftKey;HEAP32[idx+5]=e.altKey;HEAP32[idx+6]=e.metaKey;HEAP32[idx+7]=e.repeat;HEAP32[idx+8]=e.charCode;HEAP32[idx+9]=e.keyCode;HEAP32[idx+10]=e.which;stringToUTF8(e.key||"",keyEventData+44,32);stringToUTF8(e.code||"",keyEventData+76,32);stringToUTF8(e.char||"",keyEventData+108,32);stringToUTF8(e.locale||"",keyEventData+140,32);if(wasmTable.get(callbackfunc)(eventTypeId,keyEventData,userData))e.preventDefault()};var eventHandler={target:findEventTarget(target),allowsDeferredCalls:true,eventTypeString:eventTypeString,callbackfunc:callbackfunc,handlerFunc:keyEventHandlerFunc,useCapture:useCapture};JSEvents.registerOrRemoveHandler(eventHandler)}Module["registerKeyEventCallback"]=registerKeyEventCallback;function findCanvasEventTarget(target){return findEventTarget(target)}Module["findCanvasEventTarget"]=findCanvasEventTarget;function 
_emscripten_set_keypress_callback_on_thread(target,userData,useCapture,callbackfunc,targetThread){registerKeyEventCallback(target,userData,useCapture,callbackfunc,1,"keypress",targetThread);return 0}Module["_emscripten_set_keypress_callback_on_thread"]=_emscripten_set_keypress_callback_on_thread;_emscripten_set_keypress_callback_on_thread.sig="iiiiii";function _emscripten_set_keydown_callback_on_thread(target,userData,useCapture,callbackfunc,targetThread){registerKeyEventCallback(target,userData,useCapture,callbackfunc,2,"keydown",targetThread);return 0}Module["_emscripten_set_keydown_callback_on_thread"]=_emscripten_set_keydown_callback_on_thread;_emscripten_set_keydown_callback_on_thread.sig="iiiiii";function _emscripten_set_keyup_callback_on_thread(target,userData,useCapture,callbackfunc,targetThread){registerKeyEventCallback(target,userData,useCapture,callbackfunc,3,"keyup",targetThread);return 0}Module["_emscripten_set_keyup_callback_on_thread"]=_emscripten_set_keyup_callback_on_thread;_emscripten_set_keyup_callback_on_thread.sig="iiiiii";function getBoundingClientRect(e){return specialHTMLTargets.indexOf(e)<0?e.getBoundingClientRect():{"left":0,"top":0}}Module["getBoundingClientRect"]=getBoundingClientRect;function fillMouseEventData(eventStruct,e,target){HEAPF64[eventStruct>>3]=e.timeStamp;var idx=eventStruct>>2;HEAP32[idx+2]=e.screenX;HEAP32[idx+3]=e.screenY;HEAP32[idx+4]=e.clientX;HEAP32[idx+5]=e.clientY;HEAP32[idx+6]=e.ctrlKey;HEAP32[idx+7]=e.shiftKey;HEAP32[idx+8]=e.altKey;HEAP32[idx+9]=e.metaKey;HEAP16[idx*2+20]=e.button;HEAP16[idx*2+21]=e.buttons;HEAP32[idx+11]=e["movementX"];HEAP32[idx+12]=e["movementY"];var rect=getBoundingClientRect(target);HEAP32[idx+13]=e.clientX-rect.left;HEAP32[idx+14]=e.clientY-rect.top}Module["fillMouseEventData"]=fillMouseEventData;function registerMouseEventCallback(target,userData,useCapture,callbackfunc,eventTypeId,eventTypeString,targetThread){if(!JSEvents.mouseEvent)JSEvents.mouseEvent=_malloc(72);target=findEventTarget(target);var mouseEventHandlerFunc=function(ev){var e=ev||event;fillMouseEventData(JSEvents.mouseEvent,e,target);if(wasmTable.get(callbackfunc)(eventTypeId,JSEvents.mouseEvent,userData))e.preventDefault()};var eventHandler={target:target,allowsDeferredCalls:eventTypeString!="mousemove"&&eventTypeString!="mouseenter"&&eventTypeString!="mouseleave",eventTypeString:eventTypeString,callbackfunc:callbackfunc,handlerFunc:mouseEventHandlerFunc,useCapture:useCapture};JSEvents.registerOrRemoveHandler(eventHandler)}Module["registerMouseEventCallback"]=registerMouseEventCallback;function _emscripten_set_click_callback_on_thread(target,userData,useCapture,callbackfunc,targetThread){registerMouseEventCallback(target,userData,useCapture,callbackfunc,4,"click",targetThread);return 0}Module["_emscripten_set_click_callback_on_thread"]=_emscripten_set_click_callback_on_thread;_emscripten_set_click_callback_on_thread.sig="iiiiii";function _emscripten_set_mousedown_callback_on_thread(target,userData,useCapture,callbackfunc,targetThread){registerMouseEventCallback(target,userData,useCapture,callbackfunc,5,"mousedown",targetThread);return 0}Module["_emscripten_set_mousedown_callback_on_thread"]=_emscripten_set_mousedown_callback_on_thread;_emscripten_set_mousedown_callback_on_thread.sig="iiiiii";function _emscripten_set_mouseup_callback_on_thread(target,userData,useCapture,callbackfunc,targetThread){registerMouseEventCallback(target,userData,useCapture,callbackfunc,6,"mouseup",targetThread);return 
0}Module["_emscripten_set_mouseup_callback_on_thread"]=_emscripten_set_mouseup_callback_on_thread;_emscripten_set_mouseup_callback_on_thread.sig="iiiiii";function _emscripten_set_dblclick_callback_on_thread(target,userData,useCapture,callbackfunc,targetThread){registerMouseEventCallback(target,userData,useCapture,callbackfunc,7,"dblclick",targetThread);return 0}Module["_emscripten_set_dblclick_callback_on_thread"]=_emscripten_set_dblclick_callback_on_thread;_emscripten_set_dblclick_callback_on_thread.sig="iiiiii";function _emscripten_set_mousemove_callback_on_thread(target,userData,useCapture,callbackfunc,targetThread){registerMouseEventCallback(target,userData,useCapture,callbackfunc,8,"mousemove",targetThread);return 0}Module["_emscripten_set_mousemove_callback_on_thread"]=_emscripten_set_mousemove_callback_on_thread;_emscripten_set_mousemove_callback_on_thread.sig="iiiiii";function _emscripten_set_mouseenter_callback_on_thread(target,userData,useCapture,callbackfunc,targetThread){registerMouseEventCallback(target,userData,useCapture,callbackfunc,33,"mouseenter",targetThread);return 0}Module["_emscripten_set_mouseenter_callback_on_thread"]=_emscripten_set_mouseenter_callback_on_thread;_emscripten_set_mouseenter_callback_on_thread.sig="iiiiii";function _emscripten_set_mouseleave_callback_on_thread(target,userData,useCapture,callbackfunc,targetThread){registerMouseEventCallback(target,userData,useCapture,callbackfunc,34,"mouseleave",targetThread);return 0}Module["_emscripten_set_mouseleave_callback_on_thread"]=_emscripten_set_mouseleave_callback_on_thread;_emscripten_set_mouseleave_callback_on_thread.sig="iiiiii";function _emscripten_set_mouseover_callback_on_thread(target,userData,useCapture,callbackfunc,targetThread){registerMouseEventCallback(target,userData,useCapture,callbackfunc,35,"mouseover",targetThread);return 0}Module["_emscripten_set_mouseover_callback_on_thread"]=_emscripten_set_mouseover_callback_on_thread;_emscripten_set_mouseover_callback_on_thread.sig="iiiiii";function _emscripten_set_mouseout_callback_on_thread(target,userData,useCapture,callbackfunc,targetThread){registerMouseEventCallback(target,userData,useCapture,callbackfunc,36,"mouseout",targetThread);return 0}Module["_emscripten_set_mouseout_callback_on_thread"]=_emscripten_set_mouseout_callback_on_thread;_emscripten_set_mouseout_callback_on_thread.sig="iiiiii";function _emscripten_get_mouse_status(mouseState){if(!JSEvents.mouseEvent)return-7;HEAP8.set(HEAP8.subarray(JSEvents.mouseEvent,JSEvents.mouseEvent+72),mouseState);return 0}Module["_emscripten_get_mouse_status"]=_emscripten_get_mouse_status;_emscripten_get_mouse_status.sig="ii";function registerWheelEventCallback(target,userData,useCapture,callbackfunc,eventTypeId,eventTypeString,targetThread){if(!JSEvents.wheelEvent)JSEvents.wheelEvent=_malloc(104);var wheelHandlerFunc=function(ev){var e=ev||event;var wheelEvent=JSEvents.wheelEvent;fillMouseEventData(wheelEvent,e,target);HEAPF64[wheelEvent+72>>3]=e["deltaX"];HEAPF64[wheelEvent+80>>3]=e["deltaY"];HEAPF64[wheelEvent+88>>3]=e["deltaZ"];HEAP32[wheelEvent+96>>2]=e["deltaMode"];if(wasmTable.get(callbackfunc)(eventTypeId,wheelEvent,userData))e.preventDefault()};var eventHandler={target:target,allowsDeferredCalls:true,eventTypeString:eventTypeString,callbackfunc:callbackfunc,handlerFunc:wheelHandlerFunc,useCapture:useCapture};JSEvents.registerOrRemoveHandler(eventHandler)}Module["registerWheelEventCallback"]=registerWheelEventCallback;function 
_emscripten_set_wheel_callback_on_thread(target,userData,useCapture,callbackfunc,targetThread){target=findEventTarget(target);if(typeof target.onwheel!=="undefined"){registerWheelEventCallback(target,userData,useCapture,callbackfunc,9,"wheel",targetThread);return 0}else{return-1}}Module["_emscripten_set_wheel_callback_on_thread"]=_emscripten_set_wheel_callback_on_thread;_emscripten_set_wheel_callback_on_thread.sig="iiiiii";function registerUiEventCallback(target,userData,useCapture,callbackfunc,eventTypeId,eventTypeString,targetThread){if(!JSEvents.uiEvent)JSEvents.uiEvent=_malloc(36);target=findEventTarget(target);var uiEventHandlerFunc=function(ev){var e=ev||event;if(e.target!=target){return}var b=document.body;if(!b){return}var uiEvent=JSEvents.uiEvent;HEAP32[uiEvent>>2]=e.detail;HEAP32[uiEvent+4>>2]=b.clientWidth;HEAP32[uiEvent+8>>2]=b.clientHeight;HEAP32[uiEvent+12>>2]=innerWidth;HEAP32[uiEvent+16>>2]=innerHeight;HEAP32[uiEvent+20>>2]=outerWidth;HEAP32[uiEvent+24>>2]=outerHeight;HEAP32[uiEvent+28>>2]=pageXOffset;HEAP32[uiEvent+32>>2]=pageYOffset;if(wasmTable.get(callbackfunc)(eventTypeId,uiEvent,userData))e.preventDefault()};var eventHandler={target:target,eventTypeString:eventTypeString,callbackfunc:callbackfunc,handlerFunc:uiEventHandlerFunc,useCapture:useCapture};JSEvents.registerOrRemoveHandler(eventHandler)}Module["registerUiEventCallback"]=registerUiEventCallback;function _emscripten_set_resize_callback_on_thread(target,userData,useCapture,callbackfunc,targetThread){registerUiEventCallback(target,userData,useCapture,callbackfunc,10,"resize",targetThread);return 0}Module["_emscripten_set_resize_callback_on_thread"]=_emscripten_set_resize_callback_on_thread;_emscripten_set_resize_callback_on_thread.sig="iiiiii";function _emscripten_set_scroll_callback_on_thread(target,userData,useCapture,callbackfunc,targetThread){registerUiEventCallback(target,userData,useCapture,callbackfunc,11,"scroll",targetThread);return 0}Module["_emscripten_set_scroll_callback_on_thread"]=_emscripten_set_scroll_callback_on_thread;_emscripten_set_scroll_callback_on_thread.sig="iiiiii";function registerFocusEventCallback(target,userData,useCapture,callbackfunc,eventTypeId,eventTypeString,targetThread){if(!JSEvents.focusEvent)JSEvents.focusEvent=_malloc(256);var focusEventHandlerFunc=function(ev){var e=ev||event;var nodeName=JSEvents.getNodeNameForTarget(e.target);var id=e.target.id?e.target.id:"";var focusEvent=JSEvents.focusEvent;stringToUTF8(nodeName,focusEvent+0,128);stringToUTF8(id,focusEvent+128,128);if(wasmTable.get(callbackfunc)(eventTypeId,focusEvent,userData))e.preventDefault()};var eventHandler={target:findEventTarget(target),eventTypeString:eventTypeString,callbackfunc:callbackfunc,handlerFunc:focusEventHandlerFunc,useCapture:useCapture};JSEvents.registerOrRemoveHandler(eventHandler)}Module["registerFocusEventCallback"]=registerFocusEventCallback;function _emscripten_set_blur_callback_on_thread(target,userData,useCapture,callbackfunc,targetThread){registerFocusEventCallback(target,userData,useCapture,callbackfunc,12,"blur",targetThread);return 0}Module["_emscripten_set_blur_callback_on_thread"]=_emscripten_set_blur_callback_on_thread;_emscripten_set_blur_callback_on_thread.sig="iiiiii";function _emscripten_set_focus_callback_on_thread(target,userData,useCapture,callbackfunc,targetThread){registerFocusEventCallback(target,userData,useCapture,callbackfunc,13,"focus",targetThread);return 
0}Module["_emscripten_set_focus_callback_on_thread"]=_emscripten_set_focus_callback_on_thread;_emscripten_set_focus_callback_on_thread.sig="iiiiii";function _emscripten_set_focusin_callback_on_thread(target,userData,useCapture,callbackfunc,targetThread){registerFocusEventCallback(target,userData,useCapture,callbackfunc,14,"focusin",targetThread);return 0}Module["_emscripten_set_focusin_callback_on_thread"]=_emscripten_set_focusin_callback_on_thread;_emscripten_set_focusin_callback_on_thread.sig="iiiiii";function _emscripten_set_focusout_callback_on_thread(target,userData,useCapture,callbackfunc,targetThread){registerFocusEventCallback(target,userData,useCapture,callbackfunc,15,"focusout",targetThread);return 0}Module["_emscripten_set_focusout_callback_on_thread"]=_emscripten_set_focusout_callback_on_thread;_emscripten_set_focusout_callback_on_thread.sig="iiiiii";function fillDeviceOrientationEventData(eventStruct,e,target){HEAPF64[eventStruct>>3]=e.alpha;HEAPF64[eventStruct+8>>3]=e.beta;HEAPF64[eventStruct+16>>3]=e.gamma;HEAP32[eventStruct+24>>2]=e.absolute}Module["fillDeviceOrientationEventData"]=fillDeviceOrientationEventData;function registerDeviceOrientationEventCallback(target,userData,useCapture,callbackfunc,eventTypeId,eventTypeString,targetThread){if(!JSEvents.deviceOrientationEvent)JSEvents.deviceOrientationEvent=_malloc(32);var deviceOrientationEventHandlerFunc=function(ev){var e=ev||event;fillDeviceOrientationEventData(JSEvents.deviceOrientationEvent,e,target);if(wasmTable.get(callbackfunc)(eventTypeId,JSEvents.deviceOrientationEvent,userData))e.preventDefault()};var eventHandler={target:findEventTarget(target),eventTypeString:eventTypeString,callbackfunc:callbackfunc,handlerFunc:deviceOrientationEventHandlerFunc,useCapture:useCapture};JSEvents.registerOrRemoveHandler(eventHandler)}Module["registerDeviceOrientationEventCallback"]=registerDeviceOrientationEventCallback;function _emscripten_set_deviceorientation_callback_on_thread(userData,useCapture,callbackfunc,targetThread){registerDeviceOrientationEventCallback(2,userData,useCapture,callbackfunc,16,"deviceorientation",targetThread);return 0}Module["_emscripten_set_deviceorientation_callback_on_thread"]=_emscripten_set_deviceorientation_callback_on_thread;_emscripten_set_deviceorientation_callback_on_thread.sig="iiiii";function _emscripten_get_deviceorientation_status(orientationState){if(!JSEvents.deviceOrientationEvent)return-7;HEAP32.set(HEAP32.subarray(JSEvents.deviceOrientationEvent,32),orientationState);return 0}Module["_emscripten_get_deviceorientation_status"]=_emscripten_get_deviceorientation_status;_emscripten_get_deviceorientation_status.sig="ii";function fillDeviceMotionEventData(eventStruct,e,target){var supportedFields=0;var a=e["acceleration"];supportedFields|=a&&1;var ag=e["accelerationIncludingGravity"];supportedFields|=ag&&2;var rr=e["rotationRate"];supportedFields|=rr&&4;a=a||{};ag=ag||{};rr=rr||{};HEAPF64[eventStruct>>3]=a["x"];HEAPF64[eventStruct+8>>3]=a["y"];HEAPF64[eventStruct+16>>3]=a["z"];HEAPF64[eventStruct+24>>3]=ag["x"];HEAPF64[eventStruct+32>>3]=ag["y"];HEAPF64[eventStruct+40>>3]=ag["z"];HEAPF64[eventStruct+48>>3]=rr["alpha"];HEAPF64[eventStruct+56>>3]=rr["beta"];HEAPF64[eventStruct+64>>3]=rr["gamma"]}Module["fillDeviceMotionEventData"]=fillDeviceMotionEventData;function registerDeviceMotionEventCallback(target,userData,useCapture,callbackfunc,eventTypeId,eventTypeString,targetThread){if(!JSEvents.deviceMotionEvent)JSEvents.deviceMotionEvent=_malloc(80);var 
deviceMotionEventHandlerFunc=function(ev){var e=ev||event;fillDeviceMotionEventData(JSEvents.deviceMotionEvent,e,target);if(wasmTable.get(callbackfunc)(eventTypeId,JSEvents.deviceMotionEvent,userData))e.preventDefault()};var eventHandler={target:findEventTarget(target),eventTypeString:eventTypeString,callbackfunc:callbackfunc,handlerFunc:deviceMotionEventHandlerFunc,useCapture:useCapture};JSEvents.registerOrRemoveHandler(eventHandler)}Module["registerDeviceMotionEventCallback"]=registerDeviceMotionEventCallback;function _emscripten_set_devicemotion_callback_on_thread(userData,useCapture,callbackfunc,targetThread){registerDeviceMotionEventCallback(2,userData,useCapture,callbackfunc,17,"devicemotion",targetThread);return 0}Module["_emscripten_set_devicemotion_callback_on_thread"]=_emscripten_set_devicemotion_callback_on_thread;_emscripten_set_devicemotion_callback_on_thread.sig="iiiii";function _emscripten_get_devicemotion_status(motionState){if(!JSEvents.deviceMotionEvent)return-7;HEAP32.set(HEAP32.subarray(JSEvents.deviceMotionEvent,80),motionState);return 0}Module["_emscripten_get_devicemotion_status"]=_emscripten_get_devicemotion_status;_emscripten_get_devicemotion_status.sig="ii";function screenOrientation(){if(!screen)return undefined;return screen.orientation||screen.mozOrientation||screen.webkitOrientation||screen.msOrientation}Module["screenOrientation"]=screenOrientation;function fillOrientationChangeEventData(eventStruct){var orientations=["portrait-primary","portrait-secondary","landscape-primary","landscape-secondary"];var orientations2=["portrait","portrait","landscape","landscape"];var orientationString=screenOrientation();var orientation=orientations.indexOf(orientationString);if(orientation==-1){orientation=orientations2.indexOf(orientationString)}HEAP32[eventStruct>>2]=1<<orientation;HEAP32[eventStruct+4>>2]=orientation}Module["fillOrientationChangeEventData"]=fillOrientationChangeEventData;function registerOrientationChangeEventCallback(target,userData,useCapture,callbackfunc,eventTypeId,eventTypeString,targetThread){if(!JSEvents.orientationChangeEvent)JSEvents.orientationChangeEvent=_malloc(8);var orientationChangeEventHandlerFunc=function(ev){var e=ev||event;var orientationChangeEvent=JSEvents.orientationChangeEvent;fillOrientationChangeEventData(orientationChangeEvent);if(wasmTable.get(callbackfunc)(eventTypeId,orientationChangeEvent,userData))e.preventDefault()};if(eventTypeString=="orientationchange"&&screen.mozOrientation!==undefined){eventTypeString="mozorientationchange"}var eventHandler={target:target,eventTypeString:eventTypeString,callbackfunc:callbackfunc,handlerFunc:orientationChangeEventHandlerFunc,useCapture:useCapture};JSEvents.registerOrRemoveHandler(eventHandler)}Module["registerOrientationChangeEventCallback"]=registerOrientationChangeEventCallback;function _emscripten_set_orientationchange_callback_on_thread(userData,useCapture,callbackfunc,targetThread){if(!screen||!screen["addEventListener"])return-1;registerOrientationChangeEventCallback(screen,userData,useCapture,callbackfunc,18,"orientationchange",targetThread);return 0}Module["_emscripten_set_orientationchange_callback_on_thread"]=_emscripten_set_orientationchange_callback_on_thread;_emscripten_set_orientationchange_callback_on_thread.sig="iiiii";function _emscripten_get_orientation_status(orientationChangeEvent){if(!screenOrientation()&&typeof orientation==="undefined")return-1;fillOrientationChangeEventData(orientationChangeEvent);return
0}Module["_emscripten_get_orientation_status"]=_emscripten_get_orientation_status;_emscripten_get_orientation_status.sig="ii";function _emscripten_lock_orientation(allowedOrientations){var orientations=[];if(allowedOrientations&1)orientations.push("portrait-primary");if(allowedOrientations&2)orientations.push("portrait-secondary");if(allowedOrientations&4)orientations.push("landscape-primary");if(allowedOrientations&8)orientations.push("landscape-secondary");var succeeded;if(screen.lockOrientation){succeeded=screen.lockOrientation(orientations)}else if(screen.mozLockOrientation){succeeded=screen.mozLockOrientation(orientations)}else if(screen.webkitLockOrientation){succeeded=screen.webkitLockOrientation(orientations)}else if(screen.msLockOrientation){succeeded=screen.msLockOrientation(orientations)}else{return-1}if(succeeded){return 0}else{return-6}}Module["_emscripten_lock_orientation"]=_emscripten_lock_orientation;_emscripten_lock_orientation.sig="ii";function _emscripten_unlock_orientation(){if(screen.unlockOrientation){screen.unlockOrientation()}else if(screen.mozUnlockOrientation){screen.mozUnlockOrientation()}else if(screen.webkitUnlockOrientation){screen.webkitUnlockOrientation()}else if(screen.msUnlockOrientation){screen.msUnlockOrientation()}else{return-1}return 0}Module["_emscripten_unlock_orientation"]=_emscripten_unlock_orientation;_emscripten_unlock_orientation.sig="i";function fillFullscreenChangeEventData(eventStruct){var fullscreenElement=document.fullscreenElement||document.mozFullScreenElement||document.webkitFullscreenElement||document.msFullscreenElement;var isFullscreen=!!fullscreenElement;HEAP32[eventStruct>>2]=isFullscreen;HEAP32[eventStruct+4>>2]=JSEvents.fullscreenEnabled();var reportedElement=isFullscreen?fullscreenElement:JSEvents.previousFullscreenElement;var nodeName=JSEvents.getNodeNameForTarget(reportedElement);var id=reportedElement&&reportedElement.id?reportedElement.id:"";stringToUTF8(nodeName,eventStruct+8,128);stringToUTF8(id,eventStruct+136,128);HEAP32[eventStruct+264>>2]=reportedElement?reportedElement.clientWidth:0;HEAP32[eventStruct+268>>2]=reportedElement?reportedElement.clientHeight:0;HEAP32[eventStruct+272>>2]=screen.width;HEAP32[eventStruct+276>>2]=screen.height;if(isFullscreen){JSEvents.previousFullscreenElement=fullscreenElement}}Module["fillFullscreenChangeEventData"]=fillFullscreenChangeEventData;function registerFullscreenChangeEventCallback(target,userData,useCapture,callbackfunc,eventTypeId,eventTypeString,targetThread){if(!JSEvents.fullscreenChangeEvent)JSEvents.fullscreenChangeEvent=_malloc(280);var fullscreenChangeEventhandlerFunc=function(ev){var e=ev||event;var fullscreenChangeEvent=JSEvents.fullscreenChangeEvent;fillFullscreenChangeEventData(fullscreenChangeEvent);if(wasmTable.get(callbackfunc)(eventTypeId,fullscreenChangeEvent,userData))e.preventDefault()};var eventHandler={target:target,eventTypeString:eventTypeString,callbackfunc:callbackfunc,handlerFunc:fullscreenChangeEventhandlerFunc,useCapture:useCapture};JSEvents.registerOrRemoveHandler(eventHandler)}Module["registerFullscreenChangeEventCallback"]=registerFullscreenChangeEventCallback;function 
_emscripten_set_fullscreenchange_callback_on_thread(target,userData,useCapture,callbackfunc,targetThread){if(!JSEvents.fullscreenEnabled())return-1;target=findEventTarget(target);if(!target)return-4;registerFullscreenChangeEventCallback(target,userData,useCapture,callbackfunc,19,"fullscreenchange",targetThread);registerFullscreenChangeEventCallback(target,userData,useCapture,callbackfunc,19,"webkitfullscreenchange",targetThread);return 0}Module["_emscripten_set_fullscreenchange_callback_on_thread"]=_emscripten_set_fullscreenchange_callback_on_thread;_emscripten_set_fullscreenchange_callback_on_thread.sig="iiiiii";function _emscripten_get_fullscreen_status(fullscreenStatus){if(!JSEvents.fullscreenEnabled())return-1;fillFullscreenChangeEventData(fullscreenStatus);return 0}Module["_emscripten_get_fullscreen_status"]=_emscripten_get_fullscreen_status;_emscripten_get_fullscreen_status.sig="ii";function _emscripten_get_canvas_element_size(target,width,height){var canvas=findCanvasEventTarget(target);if(!canvas)return-4;HEAP32[width>>2]=canvas.width;HEAP32[height>>2]=canvas.height}Module["_emscripten_get_canvas_element_size"]=_emscripten_get_canvas_element_size;function getCanvasElementSize(target){var stackTop=stackSave();var w=stackAlloc(8);var h=w+4;var targetInt=stackAlloc(target.id.length+1);stringToUTF8(target.id,targetInt,target.id.length+1);var ret=_emscripten_get_canvas_element_size(targetInt,w,h);var size=[HEAP32[w>>2],HEAP32[h>>2]];stackRestore(stackTop);return size}Module["getCanvasElementSize"]=getCanvasElementSize;function _emscripten_set_canvas_element_size(target,width,height){var canvas=findCanvasEventTarget(target);if(!canvas)return-4;canvas.width=width;canvas.height=height;return 0}Module["_emscripten_set_canvas_element_size"]=_emscripten_set_canvas_element_size;_emscripten_set_canvas_element_size.sig="iiii";function setCanvasElementSize(target,width,height){if(!target.controlTransferredOffscreen){target.width=width;target.height=height}else{var stackTop=stackSave();var targetInt=stackAlloc(target.id.length+1);stringToUTF8(target.id,targetInt,target.id.length+1);_emscripten_set_canvas_element_size(targetInt,width,height);stackRestore(stackTop)}}Module["setCanvasElementSize"]=setCanvasElementSize;function registerRestoreOldStyle(canvas){var canvasSize=getCanvasElementSize(canvas);var oldWidth=canvasSize[0];var oldHeight=canvasSize[1];var oldCssWidth=canvas.style.width;var oldCssHeight=canvas.style.height;var oldBackgroundColor=canvas.style.backgroundColor;var oldDocumentBackgroundColor=document.body.style.backgroundColor;var oldPaddingLeft=canvas.style.paddingLeft;var oldPaddingRight=canvas.style.paddingRight;var oldPaddingTop=canvas.style.paddingTop;var oldPaddingBottom=canvas.style.paddingBottom;var oldMarginLeft=canvas.style.marginLeft;var oldMarginRight=canvas.style.marginRight;var oldMarginTop=canvas.style.marginTop;var oldMarginBottom=canvas.style.marginBottom;var oldDocumentBodyMargin=document.body.style.margin;var oldDocumentOverflow=document.documentElement.style.overflow;var oldDocumentScroll=document.body.scroll;var oldImageRendering=canvas.style.imageRendering;function restoreOldStyle(){var 
fullscreenElement=document.fullscreenElement||document.webkitFullscreenElement||document.msFullscreenElement;if(!fullscreenElement){document.removeEventListener("fullscreenchange",restoreOldStyle);document.removeEventListener("webkitfullscreenchange",restoreOldStyle);setCanvasElementSize(canvas,oldWidth,oldHeight);canvas.style.width=oldCssWidth;canvas.style.height=oldCssHeight;canvas.style.backgroundColor=oldBackgroundColor;if(!oldDocumentBackgroundColor)document.body.style.backgroundColor="white";document.body.style.backgroundColor=oldDocumentBackgroundColor;canvas.style.paddingLeft=oldPaddingLeft;canvas.style.paddingRight=oldPaddingRight;canvas.style.paddingTop=oldPaddingTop;canvas.style.paddingBottom=oldPaddingBottom;canvas.style.marginLeft=oldMarginLeft;canvas.style.marginRight=oldMarginRight;canvas.style.marginTop=oldMarginTop;canvas.style.marginBottom=oldMarginBottom;document.body.style.margin=oldDocumentBodyMargin;document.documentElement.style.overflow=oldDocumentOverflow;document.body.scroll=oldDocumentScroll;canvas.style.imageRendering=oldImageRendering;if(canvas.GLctxObject)canvas.GLctxObject.GLctx.viewport(0,0,oldWidth,oldHeight);if(currentFullscreenStrategy.canvasResizedCallback){wasmTable.get(currentFullscreenStrategy.canvasResizedCallback)(37,0,currentFullscreenStrategy.canvasResizedCallbackUserData)}}}document.addEventListener("fullscreenchange",restoreOldStyle);document.addEventListener("webkitfullscreenchange",restoreOldStyle);return restoreOldStyle}Module["registerRestoreOldStyle"]=registerRestoreOldStyle;function setLetterbox(element,topBottom,leftRight){element.style.paddingLeft=element.style.paddingRight=leftRight+"px";element.style.paddingTop=element.style.paddingBottom=topBottom+"px"}Module["setLetterbox"]=setLetterbox;function _JSEvents_resizeCanvasForFullscreen(target,strategy){var restoreOldStyle=registerRestoreOldStyle(target);var cssWidth=strategy.softFullscreen?innerWidth:screen.width;var cssHeight=strategy.softFullscreen?innerHeight:screen.height;var rect=getBoundingClientRect(target);var windowedCssWidth=rect.width;var windowedCssHeight=rect.height;var canvasSize=getCanvasElementSize(target);var windowedRttWidth=canvasSize[0];var windowedRttHeight=canvasSize[1];if(strategy.scaleMode==3){setLetterbox(target,(cssHeight-windowedCssHeight)/2,(cssWidth-windowedCssWidth)/2);cssWidth=windowedCssWidth;cssHeight=windowedCssHeight}else if(strategy.scaleMode==2){if(cssWidth*windowedRttHeightx*h)w=h*x/y|0;topMargin=(screenHeight-h)/2|0}if(inPixelPerfectFullscreenMode){setCanvasElementSize(canvas,w,h);if(canvas.GLctxObject)canvas.GLctxObject.GLctx.viewport(0,0,w,h)}if(inHiDPIFullscreenMode){topMargin/=dpr;w/=dpr;h/=dpr;w=Math.round(w*1e4)/1e4;h=Math.round(h*1e4)/1e4;topMargin=Math.round(topMargin*1e4)/1e4}if(inCenteredWithoutScalingFullscreenMode){var t=(innerHeight-jstoi_q(canvas.style.height))/2;var b=(innerWidth-jstoi_q(canvas.style.width))/2;setLetterbox(canvas,t,b)}else{canvas.style.width=w+"px";canvas.style.height=h+"px";var b=(innerWidth-w)/2;setLetterbox(canvas,topMargin,b)}if(!inCenteredWithoutScalingFullscreenMode&¤tFullscreenStrategy.canvasResizedCallback){wasmTable.get(currentFullscreenStrategy.canvasResizedCallback)(37,0,currentFullscreenStrategy.canvasResizedCallbackUserData)}}Module["softFullscreenResizeWebGLRenderTarget"]=softFullscreenResizeWebGLRenderTarget;function 
doRequestFullscreen(target,strategy){if(!JSEvents.fullscreenEnabled())return-1;target=findEventTarget(target);if(!target)return-4;if(!target.requestFullscreen&&!target.webkitRequestFullscreen){return-3}var canPerformRequests=JSEvents.canPerformEventHandlerRequests();if(!canPerformRequests){if(strategy.deferUntilInEventHandler){JSEvents.deferCall(_JSEvents_requestFullscreen,1,[target,strategy]);return 1}else{return-2}}return _JSEvents_requestFullscreen(target,strategy)}Module["doRequestFullscreen"]=doRequestFullscreen;function _emscripten_request_fullscreen(target,deferUntilInEventHandler){var strategy={scaleMode:0,canvasResolutionScaleMode:0,filteringMode:0,deferUntilInEventHandler:deferUntilInEventHandler,canvasResizedCallbackTargetThread:2};return doRequestFullscreen(target,strategy)}Module["_emscripten_request_fullscreen"]=_emscripten_request_fullscreen;_emscripten_request_fullscreen.sig="iii";function _emscripten_request_fullscreen_strategy(target,deferUntilInEventHandler,fullscreenStrategy){var strategy={scaleMode:HEAP32[fullscreenStrategy>>2],canvasResolutionScaleMode:HEAP32[fullscreenStrategy+4>>2],filteringMode:HEAP32[fullscreenStrategy+8>>2],deferUntilInEventHandler:deferUntilInEventHandler,canvasResizedCallback:HEAP32[fullscreenStrategy+12>>2],canvasResizedCallbackUserData:HEAP32[fullscreenStrategy+16>>2]};return doRequestFullscreen(target,strategy)}Module["_emscripten_request_fullscreen_strategy"]=_emscripten_request_fullscreen_strategy;_emscripten_request_fullscreen_strategy.sig="iiii";function _emscripten_enter_soft_fullscreen(target,fullscreenStrategy){target=findEventTarget(target);if(!target)return-4;var strategy={scaleMode:HEAP32[fullscreenStrategy>>2],canvasResolutionScaleMode:HEAP32[fullscreenStrategy+4>>2],filteringMode:HEAP32[fullscreenStrategy+8>>2],canvasResizedCallback:HEAP32[fullscreenStrategy+12>>2],canvasResizedCallbackUserData:HEAP32[fullscreenStrategy+16>>2],target:target,softFullscreen:true};var restoreOldStyle=_JSEvents_resizeCanvasForFullscreen(target,strategy);document.documentElement.style.overflow="hidden";document.body.scroll="no";document.body.style.margin="0px";var hiddenElements=hideEverythingExceptGivenElement(target);function restoreWindowedState(){restoreOldStyle();restoreHiddenElements(hiddenElements);removeEventListener("resize",softFullscreenResizeWebGLRenderTarget);if(strategy.canvasResizedCallback){wasmTable.get(strategy.canvasResizedCallback)(37,0,strategy.canvasResizedCallbackUserData)}currentFullscreenStrategy=0}restoreOldWindowedStyle=restoreWindowedState;currentFullscreenStrategy=strategy;addEventListener("resize",softFullscreenResizeWebGLRenderTarget);if(strategy.canvasResizedCallback){wasmTable.get(strategy.canvasResizedCallback)(37,0,strategy.canvasResizedCallbackUserData)}return 0}Module["_emscripten_enter_soft_fullscreen"]=_emscripten_enter_soft_fullscreen;_emscripten_enter_soft_fullscreen.sig="iii";function _emscripten_exit_soft_fullscreen(){if(restoreOldWindowedStyle)restoreOldWindowedStyle();restoreOldWindowedStyle=null;return 0}Module["_emscripten_exit_soft_fullscreen"]=_emscripten_exit_soft_fullscreen;_emscripten_exit_soft_fullscreen.sig="i";function _emscripten_exit_fullscreen(){if(!JSEvents.fullscreenEnabled())return-1;JSEvents.removeDeferredCalls(_JSEvents_requestFullscreen);var d=specialHTMLTargets[1];if(d.exitFullscreen){d.fullscreenElement&&d.exitFullscreen()}else if(d.webkitExitFullscreen){d.webkitFullscreenElement&&d.webkitExitFullscreen()}else{return-1}return 
0}Module["_emscripten_exit_fullscreen"]=_emscripten_exit_fullscreen;_emscripten_exit_fullscreen.sig="i";function fillPointerlockChangeEventData(eventStruct){var pointerLockElement=document.pointerLockElement||document.mozPointerLockElement||document.webkitPointerLockElement||document.msPointerLockElement;var isPointerlocked=!!pointerLockElement;HEAP32[eventStruct>>2]=isPointerlocked;var nodeName=JSEvents.getNodeNameForTarget(pointerLockElement);var id=pointerLockElement&&pointerLockElement.id?pointerLockElement.id:"";stringToUTF8(nodeName,eventStruct+4,128);stringToUTF8(id,eventStruct+132,128)}Module["fillPointerlockChangeEventData"]=fillPointerlockChangeEventData;function registerPointerlockChangeEventCallback(target,userData,useCapture,callbackfunc,eventTypeId,eventTypeString,targetThread){if(!JSEvents.pointerlockChangeEvent)JSEvents.pointerlockChangeEvent=_malloc(260);var pointerlockChangeEventHandlerFunc=function(ev){var e=ev||event;var pointerlockChangeEvent=JSEvents.pointerlockChangeEvent;fillPointerlockChangeEventData(pointerlockChangeEvent);if(wasmTable.get(callbackfunc)(eventTypeId,pointerlockChangeEvent,userData))e.preventDefault()};var eventHandler={target:target,eventTypeString:eventTypeString,callbackfunc:callbackfunc,handlerFunc:pointerlockChangeEventHandlerFunc,useCapture:useCapture};JSEvents.registerOrRemoveHandler(eventHandler)}Module["registerPointerlockChangeEventCallback"]=registerPointerlockChangeEventCallback;function _emscripten_set_pointerlockchange_callback_on_thread(target,userData,useCapture,callbackfunc,targetThread){if(!document||!document.body||!document.body.requestPointerLock&&!document.body.mozRequestPointerLock&&!document.body.webkitRequestPointerLock&&!document.body.msRequestPointerLock){return-1}target=findEventTarget(target);if(!target)return-4;registerPointerlockChangeEventCallback(target,userData,useCapture,callbackfunc,20,"pointerlockchange",targetThread);registerPointerlockChangeEventCallback(target,userData,useCapture,callbackfunc,20,"mozpointerlockchange",targetThread);registerPointerlockChangeEventCallback(target,userData,useCapture,callbackfunc,20,"webkitpointerlockchange",targetThread);registerPointerlockChangeEventCallback(target,userData,useCapture,callbackfunc,20,"mspointerlockchange",targetThread);return 0}Module["_emscripten_set_pointerlockchange_callback_on_thread"]=_emscripten_set_pointerlockchange_callback_on_thread;_emscripten_set_pointerlockchange_callback_on_thread.sig="iiiiii";function registerPointerlockErrorEventCallback(target,userData,useCapture,callbackfunc,eventTypeId,eventTypeString,targetThread){var pointerlockErrorEventHandlerFunc=function(ev){var e=ev||event;if(wasmTable.get(callbackfunc)(eventTypeId,0,userData))e.preventDefault()};var eventHandler={target:target,eventTypeString:eventTypeString,callbackfunc:callbackfunc,handlerFunc:pointerlockErrorEventHandlerFunc,useCapture:useCapture};JSEvents.registerOrRemoveHandler(eventHandler)}Module["registerPointerlockErrorEventCallback"]=registerPointerlockErrorEventCallback;function 
_emscripten_set_pointerlockerror_callback_on_thread(target,userData,useCapture,callbackfunc,targetThread){if(!document||!document.body.requestPointerLock&&!document.body.mozRequestPointerLock&&!document.body.webkitRequestPointerLock&&!document.body.msRequestPointerLock){return-1}target=findEventTarget(target);if(!target)return-4;registerPointerlockErrorEventCallback(target,userData,useCapture,callbackfunc,38,"pointerlockerror",targetThread);registerPointerlockErrorEventCallback(target,userData,useCapture,callbackfunc,38,"mozpointerlockerror",targetThread);registerPointerlockErrorEventCallback(target,userData,useCapture,callbackfunc,38,"webkitpointerlockerror",targetThread);registerPointerlockErrorEventCallback(target,userData,useCapture,callbackfunc,38,"mspointerlockerror",targetThread);return 0}Module["_emscripten_set_pointerlockerror_callback_on_thread"]=_emscripten_set_pointerlockerror_callback_on_thread;_emscripten_set_pointerlockerror_callback_on_thread.sig="iiiiii";function _emscripten_get_pointerlock_status(pointerlockStatus){if(pointerlockStatus)fillPointerlockChangeEventData(pointerlockStatus);if(!document.body||!document.body.requestPointerLock&&!document.body.mozRequestPointerLock&&!document.body.webkitRequestPointerLock&&!document.body.msRequestPointerLock){return-1}return 0}Module["_emscripten_get_pointerlock_status"]=_emscripten_get_pointerlock_status;_emscripten_get_pointerlock_status.sig="ii";function requestPointerLock(target){if(target.requestPointerLock){target.requestPointerLock()}else if(target.msRequestPointerLock){target.msRequestPointerLock()}else{if(document.body.requestPointerLock||document.body.msRequestPointerLock){return-3}else{return-1}}return 0}Module["requestPointerLock"]=requestPointerLock;function _emscripten_request_pointerlock(target,deferUntilInEventHandler){target=findEventTarget(target);if(!target)return-4;if(!target.requestPointerLock&&!target.msRequestPointerLock){return-1}var canPerformRequests=JSEvents.canPerformEventHandlerRequests();if(!canPerformRequests){if(deferUntilInEventHandler){JSEvents.deferCall(requestPointerLock,2,[target]);return 1}else{return-2}}return requestPointerLock(target)}Module["_emscripten_request_pointerlock"]=_emscripten_request_pointerlock;_emscripten_request_pointerlock.sig="iii";function _emscripten_exit_pointerlock(){JSEvents.removeDeferredCalls(requestPointerLock);if(document.exitPointerLock){document.exitPointerLock()}else if(document.msExitPointerLock){document.msExitPointerLock()}else{return-1}return 0}Module["_emscripten_exit_pointerlock"]=_emscripten_exit_pointerlock;_emscripten_exit_pointerlock.sig="i";function _emscripten_vibrate(msecs){if(!navigator.vibrate)return-1;navigator.vibrate(msecs);return 0}Module["_emscripten_vibrate"]=_emscripten_vibrate;_emscripten_vibrate.sig="ii";function _emscripten_vibrate_pattern(msecsArray,numEntries){if(!navigator.vibrate)return-1;var vibrateList=[];for(var i=0;i<numEntries;++i){var msecs=HEAP32[msecsArray+i*4>>2];vibrateList.push(msecs)}navigator.vibrate(vibrateList);return 0}Module["_emscripten_vibrate_pattern"]=_emscripten_vibrate_pattern;_emscripten_vibrate_pattern.sig="iii";function fillVisibilityChangeEventData(eventStruct){var visibilityStates=["hidden","visible","prerender","unloaded"];var visibilityState=visibilityStates.indexOf(document.visibilityState);HEAP32[eventStruct>>2]=document.hidden;HEAP32[eventStruct+4>>2]=visibilityState}Module["fillVisibilityChangeEventData"]=fillVisibilityChangeEventData;function
registerVisibilityChangeEventCallback(target,userData,useCapture,callbackfunc,eventTypeId,eventTypeString,targetThread){if(!JSEvents.visibilityChangeEvent)JSEvents.visibilityChangeEvent=_malloc(8);var visibilityChangeEventHandlerFunc=function(ev){var e=ev||event;var visibilityChangeEvent=JSEvents.visibilityChangeEvent;fillVisibilityChangeEventData(visibilityChangeEvent);if(wasmTable.get(callbackfunc)(eventTypeId,visibilityChangeEvent,userData))e.preventDefault()};var eventHandler={target:target,eventTypeString:eventTypeString,callbackfunc:callbackfunc,handlerFunc:visibilityChangeEventHandlerFunc,useCapture:useCapture};JSEvents.registerOrRemoveHandler(eventHandler)}Module["registerVisibilityChangeEventCallback"]=registerVisibilityChangeEventCallback;function _emscripten_set_visibilitychange_callback_on_thread(userData,useCapture,callbackfunc,targetThread){if(!specialHTMLTargets[1]){return-4}registerVisibilityChangeEventCallback(specialHTMLTargets[1],userData,useCapture,callbackfunc,21,"visibilitychange",targetThread);return 0}Module["_emscripten_set_visibilitychange_callback_on_thread"]=_emscripten_set_visibilitychange_callback_on_thread;_emscripten_set_visibilitychange_callback_on_thread.sig="iiiii";function _emscripten_get_visibility_status(visibilityStatus){if(typeof document.visibilityState==="undefined"&&typeof document.hidden==="undefined"){return-1}fillVisibilityChangeEventData(visibilityStatus);return 0}Module["_emscripten_get_visibility_status"]=_emscripten_get_visibility_status;_emscripten_get_visibility_status.sig="ii";function registerTouchEventCallback(target,userData,useCapture,callbackfunc,eventTypeId,eventTypeString,targetThread){if(!JSEvents.touchEvent)JSEvents.touchEvent=_malloc(1696);target=findEventTarget(target);var touchEventHandlerFunc=function(e){var touches={};var et=e.touches;for(var i=0;i>3]=e.timeStamp;var idx=touchEvent>>2;HEAP32[idx+3]=e.ctrlKey;HEAP32[idx+4]=e.shiftKey;HEAP32[idx+5]=e.altKey;HEAP32[idx+6]=e.metaKey;idx+=7;var targetRect=getBoundingClientRect(target);var numTouches=0;for(var i in touches){var t=touches[i];HEAP32[idx+0]=t.identifier;HEAP32[idx+1]=t.screenX;HEAP32[idx+2]=t.screenY;HEAP32[idx+3]=t.clientX;HEAP32[idx+4]=t.clientY;HEAP32[idx+5]=t.pageX;HEAP32[idx+6]=t.pageY;HEAP32[idx+7]=t.isChanged;HEAP32[idx+8]=t.onTarget;HEAP32[idx+9]=t.clientX-targetRect.left;HEAP32[idx+10]=t.clientY-targetRect.top;idx+=13;if(++numTouches>31){break}}HEAP32[touchEvent+8>>2]=numTouches;if(wasmTable.get(callbackfunc)(eventTypeId,touchEvent,userData))e.preventDefault()};var eventHandler={target:target,allowsDeferredCalls:eventTypeString=="touchstart"||eventTypeString=="touchend",eventTypeString:eventTypeString,callbackfunc:callbackfunc,handlerFunc:touchEventHandlerFunc,useCapture:useCapture};JSEvents.registerOrRemoveHandler(eventHandler)}Module["registerTouchEventCallback"]=registerTouchEventCallback;function _emscripten_set_touchstart_callback_on_thread(target,userData,useCapture,callbackfunc,targetThread){registerTouchEventCallback(target,userData,useCapture,callbackfunc,22,"touchstart",targetThread);return 0}Module["_emscripten_set_touchstart_callback_on_thread"]=_emscripten_set_touchstart_callback_on_thread;_emscripten_set_touchstart_callback_on_thread.sig="iiiiii";function _emscripten_set_touchend_callback_on_thread(target,userData,useCapture,callbackfunc,targetThread){registerTouchEventCallback(target,userData,useCapture,callbackfunc,23,"touchend",targetThread);return 
0}Module["_emscripten_set_touchend_callback_on_thread"]=_emscripten_set_touchend_callback_on_thread;_emscripten_set_touchend_callback_on_thread.sig="iiiiii";function _emscripten_set_touchmove_callback_on_thread(target,userData,useCapture,callbackfunc,targetThread){registerTouchEventCallback(target,userData,useCapture,callbackfunc,24,"touchmove",targetThread);return 0}Module["_emscripten_set_touchmove_callback_on_thread"]=_emscripten_set_touchmove_callback_on_thread;_emscripten_set_touchmove_callback_on_thread.sig="iiiiii";function _emscripten_set_touchcancel_callback_on_thread(target,userData,useCapture,callbackfunc,targetThread){registerTouchEventCallback(target,userData,useCapture,callbackfunc,25,"touchcancel",targetThread);return 0}Module["_emscripten_set_touchcancel_callback_on_thread"]=_emscripten_set_touchcancel_callback_on_thread;_emscripten_set_touchcancel_callback_on_thread.sig="iiiiii";function fillGamepadEventData(eventStruct,e){HEAPF64[eventStruct>>3]=e.timestamp;for(var i=0;i>3]=e.axes[i]}for(var i=0;i>3]=e.buttons[i].value}else{HEAPF64[eventStruct+i*8+528>>3]=e.buttons[i]}}for(var i=0;i>2]=e.buttons[i].pressed}else{HEAP32[eventStruct+i*4+1040>>2]=e.buttons[i]==1}}HEAP32[eventStruct+1296>>2]=e.connected;HEAP32[eventStruct+1300>>2]=e.index;HEAP32[eventStruct+8>>2]=e.axes.length;HEAP32[eventStruct+12>>2]=e.buttons.length;stringToUTF8(e.id,eventStruct+1304,64);stringToUTF8(e.mapping,eventStruct+1368,64)}Module["fillGamepadEventData"]=fillGamepadEventData;function registerGamepadEventCallback(target,userData,useCapture,callbackfunc,eventTypeId,eventTypeString,targetThread){if(!JSEvents.gamepadEvent)JSEvents.gamepadEvent=_malloc(1432);var gamepadEventHandlerFunc=function(ev){var e=ev||event;var gamepadEvent=JSEvents.gamepadEvent;fillGamepadEventData(gamepadEvent,e["gamepad"]);if(wasmTable.get(callbackfunc)(eventTypeId,gamepadEvent,userData))e.preventDefault()};var eventHandler={target:findEventTarget(target),allowsDeferredCalls:true,eventTypeString:eventTypeString,callbackfunc:callbackfunc,handlerFunc:gamepadEventHandlerFunc,useCapture:useCapture};JSEvents.registerOrRemoveHandler(eventHandler)}Module["registerGamepadEventCallback"]=registerGamepadEventCallback;function _emscripten_set_gamepadconnected_callback_on_thread(userData,useCapture,callbackfunc,targetThread){if(!navigator.getGamepads&&!navigator.webkitGetGamepads)return-1;registerGamepadEventCallback(2,userData,useCapture,callbackfunc,26,"gamepadconnected",targetThread);return 0}Module["_emscripten_set_gamepadconnected_callback_on_thread"]=_emscripten_set_gamepadconnected_callback_on_thread;_emscripten_set_gamepadconnected_callback_on_thread.sig="iiiii";function _emscripten_set_gamepaddisconnected_callback_on_thread(userData,useCapture,callbackfunc,targetThread){if(!navigator.getGamepads&&!navigator.webkitGetGamepads)return-1;registerGamepadEventCallback(2,userData,useCapture,callbackfunc,27,"gamepaddisconnected",targetThread);return 0}Module["_emscripten_set_gamepaddisconnected_callback_on_thread"]=_emscripten_set_gamepaddisconnected_callback_on_thread;_emscripten_set_gamepaddisconnected_callback_on_thread.sig="iiiii";function _emscripten_sample_gamepad_data(){return(JSEvents.lastGamepadState=navigator.getGamepads?navigator.getGamepads():navigator.webkitGetGamepads?navigator.webkitGetGamepads():null)?0:-1}Module["_emscripten_sample_gamepad_data"]=_emscripten_sample_gamepad_data;_emscripten_sample_gamepad_data.sig="i";function _emscripten_get_num_gamepads(){return 
JSEvents.lastGamepadState.length}Module["_emscripten_get_num_gamepads"]=_emscripten_get_num_gamepads;_emscripten_get_num_gamepads.sig="i";function _emscripten_get_gamepad_status(index,gamepadState){if(index<0||index>=JSEvents.lastGamepadState.length)return-5;if(!JSEvents.lastGamepadState[index])return-7;fillGamepadEventData(gamepadState,JSEvents.lastGamepadState[index]);return 0}Module["_emscripten_get_gamepad_status"]=_emscripten_get_gamepad_status;_emscripten_get_gamepad_status.sig="iii";function registerBeforeUnloadEventCallback(target,userData,useCapture,callbackfunc,eventTypeId,eventTypeString){var beforeUnloadEventHandlerFunc=function(ev){var e=ev||event;var confirmationMessage=wasmTable.get(callbackfunc)(eventTypeId,0,userData);if(confirmationMessage){confirmationMessage=UTF8ToString(confirmationMessage)}if(confirmationMessage){e.preventDefault();e.returnValue=confirmationMessage;return confirmationMessage}};var eventHandler={target:findEventTarget(target),eventTypeString:eventTypeString,callbackfunc:callbackfunc,handlerFunc:beforeUnloadEventHandlerFunc,useCapture:useCapture};JSEvents.registerOrRemoveHandler(eventHandler)}Module["registerBeforeUnloadEventCallback"]=registerBeforeUnloadEventCallback;function _emscripten_set_beforeunload_callback_on_thread(userData,callbackfunc,targetThread){if(typeof onbeforeunload==="undefined")return-1;if(targetThread!==1)return-5;registerBeforeUnloadEventCallback(2,userData,true,callbackfunc,28,"beforeunload");return 0}Module["_emscripten_set_beforeunload_callback_on_thread"]=_emscripten_set_beforeunload_callback_on_thread;_emscripten_set_beforeunload_callback_on_thread.sig="iii";function fillBatteryEventData(eventStruct,e){HEAPF64[eventStruct>>3]=e.chargingTime;HEAPF64[eventStruct+8>>3]=e.dischargingTime;HEAPF64[eventStruct+16>>3]=e.level;HEAP32[eventStruct+24>>2]=e.charging}Module["fillBatteryEventData"]=fillBatteryEventData;function battery(){return navigator.battery||navigator.mozBattery||navigator.webkitBattery}Module["battery"]=battery;function registerBatteryEventCallback(target,userData,useCapture,callbackfunc,eventTypeId,eventTypeString,targetThread){if(!JSEvents.batteryEvent)JSEvents.batteryEvent=_malloc(32);var batteryEventHandlerFunc=function(ev){var e=ev||event;var batteryEvent=JSEvents.batteryEvent;fillBatteryEventData(batteryEvent,battery());if(wasmTable.get(callbackfunc)(eventTypeId,batteryEvent,userData))e.preventDefault()};var eventHandler={target:findEventTarget(target),eventTypeString:eventTypeString,callbackfunc:callbackfunc,handlerFunc:batteryEventHandlerFunc,useCapture:useCapture};JSEvents.registerOrRemoveHandler(eventHandler)}Module["registerBatteryEventCallback"]=registerBatteryEventCallback;function _emscripten_set_batterychargingchange_callback_on_thread(userData,callbackfunc,targetThread){if(!battery())return-1;registerBatteryEventCallback(battery(),userData,true,callbackfunc,29,"chargingchange",targetThread);return 0}Module["_emscripten_set_batterychargingchange_callback_on_thread"]=_emscripten_set_batterychargingchange_callback_on_thread;_emscripten_set_batterychargingchange_callback_on_thread.sig="iii";function _emscripten_set_batterylevelchange_callback_on_thread(userData,callbackfunc,targetThread){if(!battery())return-1;registerBatteryEventCallback(battery(),userData,true,callbackfunc,30,"levelchange",targetThread);return 0}Module["_emscripten_set_batterylevelchange_callback_on_thread"]=_emscripten_set_batterylevelchange_callback_on_thread;_emscripten_set_batterylevelchange_callback_on_thread.sig="iii";function 
_emscripten_get_battery_status(batteryState){if(!battery())return-1;fillBatteryEventData(batteryState,battery());return 0}Module["_emscripten_get_battery_status"]=_emscripten_get_battery_status;_emscripten_get_battery_status.sig="ii";function _emscripten_set_element_css_size(target,width,height){target=findEventTarget(target);if(!target)return-4;target.style.width=width+"px";target.style.height=height+"px";return 0}Module["_emscripten_set_element_css_size"]=_emscripten_set_element_css_size;_emscripten_set_element_css_size.sig="iiii";function _emscripten_get_element_css_size(target,width,height){target=findEventTarget(target);if(!target)return-4;var rect=getBoundingClientRect(target);HEAPF64[width>>3]=rect.width;HEAPF64[height>>3]=rect.height;return 0}Module["_emscripten_get_element_css_size"]=_emscripten_get_element_css_size;_emscripten_get_element_css_size.sig="iiii";function _emscripten_html5_remove_all_event_listeners(){JSEvents.removeAllEventListeners()}Module["_emscripten_html5_remove_all_event_listeners"]=_emscripten_html5_remove_all_event_listeners;_emscripten_html5_remove_all_event_listeners.sig="v";function _emscripten_request_animation_frame(cb,userData){return requestAnimationFrame(function(timeStamp){wasmTable.get(cb)(timeStamp,userData)})}Module["_emscripten_request_animation_frame"]=_emscripten_request_animation_frame;function _emscripten_cancel_animation_frame(id){cancelAnimationFrame(id)}Module["_emscripten_cancel_animation_frame"]=_emscripten_cancel_animation_frame;function _emscripten_request_animation_frame_loop(cb,userData){function tick(timeStamp){if(wasmTable.get(cb)(timeStamp,userData)){requestAnimationFrame(tick)}}return requestAnimationFrame(tick)}Module["_emscripten_request_animation_frame_loop"]=_emscripten_request_animation_frame_loop;function polyfillSetImmediate(){}Module["polyfillSetImmediate"]=polyfillSetImmediate;function _emscripten_set_immediate(cb,userData){polyfillSetImmediate();return emSetImmediate(function(){callUserCallback(function(){wasmTable.get(cb)(userData)})})}Module["_emscripten_set_immediate"]=_emscripten_set_immediate;function _emscripten_clear_immediate(id){emClearImmediate(id)}Module["_emscripten_clear_immediate"]=_emscripten_clear_immediate;function _emscripten_set_immediate_loop(cb,userData){polyfillSetImmediate();function tick(){callUserCallback(function(){if(wasmTable.get(cb)(userData)){emSetImmediate(tick)}})}return emSetImmediate(tick)}Module["_emscripten_set_immediate_loop"]=_emscripten_set_immediate_loop;function _emscripten_set_timeout(cb,msecs,userData){return setTimeout(function(){callUserCallback(function(){wasmTable.get(cb)(userData)})},msecs)}Module["_emscripten_set_timeout"]=_emscripten_set_timeout;function _emscripten_clear_timeout(id){clearTimeout(id)}Module["_emscripten_clear_timeout"]=_emscripten_clear_timeout;function _emscripten_set_timeout_loop(cb,msecs,userData){function tick(){var t=performance.now();var n=t+msecs;callUserCallback(function(){if(wasmTable.get(cb)(t,userData)){setTimeout(tick,n-performance.now())}})}return setTimeout(tick,0)}Module["_emscripten_set_timeout_loop"]=_emscripten_set_timeout_loop;function _emscripten_set_interval(cb,msecs,userData){return setInterval(function(){callUserCallback(function(){wasmTable.get(cb)(userData)})},msecs)}Module["_emscripten_set_interval"]=_emscripten_set_interval;function _emscripten_clear_interval(id){clearInterval(id)}Module["_emscripten_clear_interval"]=_emscripten_clear_interval;function _emscripten_date_now(){return 
Date.now()}Module["_emscripten_date_now"]=_emscripten_date_now;function _emscripten_performance_now(){return performance.now()}Module["_emscripten_performance_now"]=_emscripten_performance_now;function _emscripten_console_log(str){out(UTF8ToString(str))}Module["_emscripten_console_log"]=_emscripten_console_log;function _emscripten_console_warn(str){console.warn(UTF8ToString(str))}Module["_emscripten_console_warn"]=_emscripten_console_warn;function _emscripten_console_error(str){err(UTF8ToString(str))}Module["_emscripten_console_error"]=_emscripten_console_error;function _emscripten_throw_number(number){throw number}Module["_emscripten_throw_number"]=_emscripten_throw_number;function _emscripten_throw_string(str){throw UTF8ToString(str)}Module["_emscripten_throw_string"]=_emscripten_throw_string;function _emscripten_unwind_to_js_event_loop(){throw"unwind"}Module["_emscripten_unwind_to_js_event_loop"]=_emscripten_unwind_to_js_event_loop;function _emscripten_get_device_pixel_ratio(){return typeof devicePixelRatio==="number"&&devicePixelRatio||1}Module["_emscripten_get_device_pixel_ratio"]=_emscripten_get_device_pixel_ratio;_emscripten_get_device_pixel_ratio.sig="d";function checkWasiClock(clock_id){return clock_id==0||clock_id==1||clock_id==2||clock_id==3}Module["checkWasiClock"]=checkWasiClock;function _clock_time_get(clk_id,precision_low,precision_high,ptime){if(!checkWasiClock(clk_id)){return 28}var now;if(clk_id===0){now=Date.now()}else if(_emscripten_get_now_is_monotonic){now=_emscripten_get_now()}else{return 52}var nsec=Math.round(now*1e3*1e3);HEAP32[ptime>>2]=nsec>>>0;HEAP32[ptime+4>>2]=nsec/Math.pow(2,32)>>>0;return 0}Module["_clock_time_get"]=_clock_time_get;_clock_time_get.sig="iiiii";function _clock_res_get(clk_id,pres){if(!checkWasiClock(clk_id)){return 28}var nsec;if(clk_id===0){nsec=1e3*1e3}else if(_emscripten_get_now_is_monotonic){nsec=_emscripten_get_now_res()}else{return 52}HEAP32[pres>>2]=nsec>>>0;HEAP32[pres+4>>2]=nsec/Math.pow(2,32)>>>0;return 0}Module["_clock_res_get"]=_clock_res_get;_clock_res_get.sig="iii";function writeI53ToI64Clamped(ptr,num){if(num>0x8000000000000000){HEAPU32[ptr>>2]=4294967295;HEAPU32[ptr+4>>2]=2147483647}else if(num<-0x8000000000000000){HEAPU32[ptr>>2]=0;HEAPU32[ptr+4>>2]=2147483648}else{HEAPU32[ptr>>2]=num;HEAPU32[ptr+4>>2]=(num-HEAPU32[ptr>>2])/4294967296}}Module["writeI53ToI64Clamped"]=writeI53ToI64Clamped;function writeI53ToI64Signaling(ptr,num){if(num>0x8000000000000000||num<-0x8000000000000000){throw"RangeError:"+num}HEAPU32[ptr>>2]=num;HEAPU32[ptr+4>>2]=(num-HEAPU32[ptr>>2])/4294967296}Module["writeI53ToI64Signaling"]=writeI53ToI64Signaling;function writeI53ToU64Clamped(ptr,num){if(num>0x10000000000000000)HEAPU32[ptr>>2]=HEAPU32[ptr+4>>2]=4294967295;else if(num<0)HEAPU32[ptr>>2]=HEAPU32[ptr+4>>2]=0;else{HEAPU32[ptr>>2]=num;HEAPU32[ptr+4>>2]=(num-HEAPU32[ptr>>2])/4294967296}}Module["writeI53ToU64Clamped"]=writeI53ToU64Clamped;function writeI53ToU64Signaling(ptr,num){if(num<0||num>0x10000000000000000){throw"RangeError:"+num}HEAPU32[ptr>>2]=num;HEAPU32[ptr+4>>2]=(num-HEAPU32[ptr>>2])/4294967296}Module["writeI53ToU64Signaling"]=writeI53ToU64Signaling;function readI53FromI64(ptr){return HEAPU32[ptr>>2]+HEAP32[ptr+4>>2]*4294967296}Module["readI53FromI64"]=readI53FromI64;function readI53FromU64(ptr){return HEAPU32[ptr>>2]+HEAPU32[ptr+4>>2]*4294967296}Module["readI53FromU64"]=readI53FromU64;function _emscripten_dlopen(filename,flags,user_data,onsuccess,onerror){function errorCallback(e){DLFCN.errorMsg="Could not load dynamic lib: 
"+UTF8ToString(filename)+"\n"+e;callUserCallback(function(){wasmTable.get(onerror)(user_data)})}function successCallback(handle){callUserCallback(function(){wasmTable.get(onsuccess)(user_data,handle)})}var promise=dlopenInternal(filename,flags,{loadAsync:true});if(promise){promise.then(successCallback,errorCallback)}else{errorCallback()}}Module["_emscripten_dlopen"]=_emscripten_dlopen;_emscripten_dlopen.sig="iii";function _dladdr(addr,info){var fname=stringToNewUTF8(getExecutableName());HEAP32[info>>2]=fname;HEAP32[info+4>>2]=0;HEAP32[info+8>>2]=0;HEAP32[info+12>>2]=0;return 1}Module["_dladdr"]=_dladdr;_dladdr.sig="iii";function _llvm_eh_typeid_for(type){return type}Module["_llvm_eh_typeid_for"]=_llvm_eh_typeid_for;function ___cxa_get_exception_ptr(ptr){return new CatchInfo(ptr).get_exception_ptr()}Module["___cxa_get_exception_ptr"]=___cxa_get_exception_ptr;function ___cxa_call_unexpected(exception){err("Unexpected exception thrown, this is not properly supported - aborting");ABORT=true;throw exception}Module["___cxa_call_unexpected"]=___cxa_call_unexpected;function ___cxa_find_matching_catch(){var thrown=exceptionLast;if(!thrown){setTempRet0(0);return 0|0}var info=new ExceptionInfo(thrown);var thrownType=info.get_type();var catchInfo=new CatchInfo;catchInfo.set_base_ptr(thrown);catchInfo.set_adjusted_ptr(thrown);if(!thrownType){setTempRet0(0);return catchInfo.ptr|0}var typeArray=Array.prototype.slice.call(arguments);for(var i=0;i0){dependenciesFulfilled=onload}else{onload()}}};script.onerror=function(){if(onerror)onerror()};script.src=UTF8ToString(url);document.body.appendChild(script)}Module["_emscripten_async_load_script"]=_emscripten_async_load_script;function _emscripten_get_main_loop_timing(mode,value){if(mode)HEAP32[mode>>2]=Browser.mainLoop.timingMode;if(value)HEAP32[value>>2]=Browser.mainLoop.timingValue}Module["_emscripten_get_main_loop_timing"]=_emscripten_get_main_loop_timing;_emscripten_get_main_loop_timing.sig="vii";function _emscripten_set_main_loop(func,fps,simulateInfiniteLoop){var browserIterationFunc=wasmTable.get(func);setMainLoop(browserIterationFunc,fps,simulateInfiniteLoop)}Module["_emscripten_set_main_loop"]=_emscripten_set_main_loop;function _emscripten_set_main_loop_arg(func,arg,fps,simulateInfiniteLoop){var browserIterationFunc=function(){wasmTable.get(func)(arg)};setMainLoop(browserIterationFunc,fps,simulateInfiniteLoop,arg)}Module["_emscripten_set_main_loop_arg"]=_emscripten_set_main_loop_arg;_emscripten_set_main_loop_arg.sig="viiii";function _emscripten_cancel_main_loop(){Browser.mainLoop.pause();Browser.mainLoop.func=null}Module["_emscripten_cancel_main_loop"]=_emscripten_cancel_main_loop;_emscripten_cancel_main_loop.sig="v";function _emscripten_pause_main_loop(){Browser.mainLoop.pause()}Module["_emscripten_pause_main_loop"]=_emscripten_pause_main_loop;_emscripten_pause_main_loop.sig="v";function _emscripten_resume_main_loop(){Browser.mainLoop.resume()}Module["_emscripten_resume_main_loop"]=_emscripten_resume_main_loop;_emscripten_resume_main_loop.sig="v";function __emscripten_push_main_loop_blocker(func,arg,name){Browser.mainLoop.queue.push({func:function(){wasmTable.get(func)(arg)},name:UTF8ToString(name),counted:true});Browser.mainLoop.updateStatus()}Module["__emscripten_push_main_loop_blocker"]=__emscripten_push_main_loop_blocker;function 
__emscripten_push_uncounted_main_loop_blocker(func,arg,name){Browser.mainLoop.queue.push({func:function(){wasmTable.get(func)(arg)},name:UTF8ToString(name),counted:false});Browser.mainLoop.updateStatus()}Module["__emscripten_push_uncounted_main_loop_blocker"]=__emscripten_push_uncounted_main_loop_blocker;function _emscripten_set_main_loop_expected_blockers(num){Browser.mainLoop.expectedBlockers=num;Browser.mainLoop.remainingBlockers=num;Browser.mainLoop.updateStatus()}Module["_emscripten_set_main_loop_expected_blockers"]=_emscripten_set_main_loop_expected_blockers;_emscripten_set_main_loop_expected_blockers.sig="vi";function _emscripten_async_call(func,arg,millis){function wrapper(){wasmTable.get(func)(arg)}if(millis>=0||ENVIRONMENT_IS_NODE){safeSetTimeout(wrapper,millis)}else{Browser.safeRequestAnimationFrame(wrapper)}}Module["_emscripten_async_call"]=_emscripten_async_call;_emscripten_async_call.sig="viii";function _emscripten_get_window_title(){var buflen=256;if(!_emscripten_get_window_title.buffer){_emscripten_get_window_title.buffer=_malloc(buflen)}writeAsciiToMemory(document.title.slice(0,buflen-1),_emscripten_get_window_title.buffer);return _emscripten_get_window_title.buffer}Module["_emscripten_get_window_title"]=_emscripten_get_window_title;_emscripten_get_window_title.sig="iv";function _emscripten_set_window_title(title){setWindowTitle(AsciiToString(title))}Module["_emscripten_set_window_title"]=_emscripten_set_window_title;_emscripten_set_window_title.sig="vi";function _emscripten_get_screen_size(width,height){HEAP32[width>>2]=screen.width;HEAP32[height>>2]=screen.height}Module["_emscripten_get_screen_size"]=_emscripten_get_screen_size;_emscripten_get_screen_size.sig="vii";function _emscripten_hide_mouse(){var styleSheet=document.styleSheets[0];var rules=styleSheet.cssRules;for(var i=0;i>2]=canvas.width;HEAP32[height>>2]=canvas.height;HEAP32[isFullscreen>>2]=Browser.isFullscreen?1:0}Module["_emscripten_get_canvas_size"]=_emscripten_get_canvas_size;_emscripten_get_canvas_size.sig="viii";function _emscripten_create_worker(url){url=UTF8ToString(url);var id=Browser.workers.length;var info={worker:new Worker(url),callbacks:[],awaited:0,buffer:0,bufferSize:0};info.worker.onmessage=function info_worker_onmessage(msg){if(ABORT)return;var info=Browser.workers[id];if(!info)return;var callbackId=msg.data["callbackId"];var callbackInfo=info.callbacks[callbackId];if(!callbackInfo)return;if(msg.data["finalResponse"]){info.awaited--;info.callbacks[callbackId]=null}var data=msg.data["data"];if(data){if(!data.byteLength)data=new Uint8Array(data);if(!info.buffer||info.bufferSize>2]=canvas.width;HEAP32[h>>2]=canvas.height;return buf}return 0}Module["_emscripten_get_preloaded_image_data"]=_emscripten_get_preloaded_image_data;_emscripten_get_preloaded_image_data.sig="iiii";function _emscripten_get_preloaded_image_data_from_FILE(file,w,h){var fd=Module["_fileno"](file);var stream=FS.getStream(fd);if(stream){return _emscripten_get_preloaded_image_data(stream.path,w,h)}return 0}Module["_emscripten_get_preloaded_image_data_from_FILE"]=_emscripten_get_preloaded_image_data_from_FILE;_emscripten_get_preloaded_image_data_from_FILE.sig="iiii";var wget={wgetRequests:{},nextWgetRequestHandle:0,getNextWgetRequestHandle:function(){var handle=wget.nextWgetRequestHandle;wget.nextWgetRequestHandle++;return handle}};Module["wget"]=wget;function _emscripten_async_wget(url,file,onload,onerror){var _url=UTF8ToString(url);var _file=UTF8ToString(file);_file=PATH_FS.resolve(_file);function 
doCallback(callback){if(callback){callUserCallback(function(){var stack=stackSave();wasmTable.get(callback)(allocate(intArrayFromString(_file),ALLOC_STACK));stackRestore(stack)})}}var destinationDirectory=PATH.dirname(_file);FS.createPreloadedFile(destinationDirectory,PATH.basename(_file),_url,true,true,function(){doCallback(onload)},function(){doCallback(onerror)},false,false,function(){try{FS.unlink(_file)}catch(e){}FS.mkdirTree(destinationDirectory)})}Module["_emscripten_async_wget"]=_emscripten_async_wget;_emscripten_async_wget.sig="viiii";function _emscripten_async_wget_data(url,arg,onload,onerror){asyncLoad(UTF8ToString(url),function(byteArray){callUserCallback(function(){var buffer=_malloc(byteArray.length);HEAPU8.set(byteArray,buffer);wasmTable.get(onload)(arg,buffer,byteArray.length);_free(buffer)})},function(){if(onerror){callUserCallback(function(){wasmTable.get(onerror)(arg)})}},true)}Module["_emscripten_async_wget_data"]=_emscripten_async_wget_data;_emscripten_async_wget_data.sig="viiii";function _emscripten_async_wget2(url,file,request,param,arg,onload,onerror,onprogress){var _url=UTF8ToString(url);var _file=UTF8ToString(file);_file=PATH_FS.resolve(_file);var _request=UTF8ToString(request);var _param=UTF8ToString(param);var index=_file.lastIndexOf("/");var http=new XMLHttpRequest;http.open(_request,_url,true);http.responseType="arraybuffer";var handle=wget.getNextWgetRequestHandle();var destinationDirectory=PATH.dirname(_file);http.onload=function http_onload(e){if(http.status>=200&&http.status<300){try{FS.unlink(_file)}catch(e){}FS.mkdirTree(destinationDirectory);FS.createDataFile(_file.substr(0,index),_file.substr(index+1),new Uint8Array(http.response),true,true,false);if(onload){var stack=stackSave();wasmTable.get(onload)(handle,arg,allocate(intArrayFromString(_file),ALLOC_STACK));stackRestore(stack)}}else{if(onerror)wasmTable.get(onerror)(handle,arg,http.status)}delete wget.wgetRequests[handle]};http.onerror=function http_onerror(e){if(onerror)wasmTable.get(onerror)(handle,arg,http.status);delete wget.wgetRequests[handle]};http.onprogress=function http_onprogress(e){if(e.lengthComputable||e.lengthComputable===undefined&&e.total!=0){var percentComplete=e.loaded/e.total*100;if(onprogress)wasmTable.get(onprogress)(handle,arg,percentComplete)}};http.onabort=function http_onabort(e){delete wget.wgetRequests[handle]};if(_request=="POST"){http.setRequestHeader("Content-type","application/x-www-form-urlencoded");http.send(_param)}else{http.send(null)}wget.wgetRequests[handle]=http;return handle}Module["_emscripten_async_wget2"]=_emscripten_async_wget2;_emscripten_async_wget2.sig="iiiiiiiii";function _emscripten_async_wget2_data(url,request,param,arg,free,onload,onerror,onprogress){var _url=UTF8ToString(url);var _request=UTF8ToString(request);var _param=UTF8ToString(param);var http=new XMLHttpRequest;http.open(_request,_url,true);http.responseType="arraybuffer";var handle=wget.getNextWgetRequestHandle();function onerrorjs(){if(onerror){var statusText=0;if(http.statusText){var len=lengthBytesUTF8(http.statusText)+1;statusText=stackAlloc(len);stringToUTF8(http.statusText,statusText,len)}wasmTable.get(onerror)(handle,arg,http.status,statusText)}}http.onload=function http_onload(e){if(http.status>=200&&http.status<300||http.status===0&&_url.substr(0,4).toLowerCase()!="http"){var byteArray=new Uint8Array(http.response);var 
buffer=_malloc(byteArray.length);HEAPU8.set(byteArray,buffer);if(onload)wasmTable.get(onload)(handle,arg,buffer,byteArray.length);if(free)_free(buffer)}else{onerrorjs()}delete wget.wgetRequests[handle]};http.onerror=function http_onerror(e){onerrorjs();delete wget.wgetRequests[handle]};http.onprogress=function http_onprogress(e){if(onprogress)wasmTable.get(onprogress)(handle,arg,e.loaded,e.lengthComputable||e.lengthComputable===undefined?e.total:0)};http.onabort=function http_onabort(e){delete wget.wgetRequests[handle]};if(_request=="POST"){http.setRequestHeader("Content-type","application/x-www-form-urlencoded");http.send(_param)}else{http.send(null)}wget.wgetRequests[handle]=http;return handle}Module["_emscripten_async_wget2_data"]=_emscripten_async_wget2_data;_emscripten_async_wget2_data.sig="iiiiiiiii";function _emscripten_async_wget2_abort(handle){var http=wget.wgetRequests[handle];if(http){http.abort()}}Module["_emscripten_async_wget2_abort"]=_emscripten_async_wget2_abort;_emscripten_async_wget2_abort.sig="vi";function _setNetworkCallback(event,userData,callback){function _callback(data){try{if(event==="error"){var sp=stackSave();var msg=allocate(intArrayFromString(data[2]),ALLOC_STACK);wasmTable.get(callback)(data[0],data[1],msg,userData);stackRestore(sp)}else{wasmTable.get(callback)(data,userData)}}catch(e){if(e instanceof ExitStatus){return}else{if(e&&typeof e==="object"&&e.stack)err("exception thrown: "+[e,e.stack]);throw e}}}Module["websocket"]["on"](event,callback?_callback:null)}Module["_setNetworkCallback"]=_setNetworkCallback;function _emscripten_set_socket_error_callback(userData,callback){_setNetworkCallback("error",userData,callback)}Module["_emscripten_set_socket_error_callback"]=_emscripten_set_socket_error_callback;function _emscripten_set_socket_open_callback(userData,callback){_setNetworkCallback("open",userData,callback)}Module["_emscripten_set_socket_open_callback"]=_emscripten_set_socket_open_callback;function _emscripten_set_socket_listen_callback(userData,callback){_setNetworkCallback("listen",userData,callback)}Module["_emscripten_set_socket_listen_callback"]=_emscripten_set_socket_listen_callback;function _emscripten_set_socket_connection_callback(userData,callback){_setNetworkCallback("connection",userData,callback)}Module["_emscripten_set_socket_connection_callback"]=_emscripten_set_socket_connection_callback;function _emscripten_set_socket_message_callback(userData,callback){_setNetworkCallback("message",userData,callback)}Module["_emscripten_set_socket_message_callback"]=_emscripten_set_socket_message_callback;function _emscripten_set_socket_close_callback(userData,callback){_setNetworkCallback("close",userData,callback)}Module["_emscripten_set_socket_close_callback"]=_emscripten_set_socket_close_callback;function _emscripten_webgl_enable_ANGLE_instanced_arrays(ctx){return __webgl_enable_ANGLE_instanced_arrays(GL.contexts[ctx].GLctx)}Module["_emscripten_webgl_enable_ANGLE_instanced_arrays"]=_emscripten_webgl_enable_ANGLE_instanced_arrays;function _emscripten_webgl_enable_OES_vertex_array_object(ctx){return __webgl_enable_OES_vertex_array_object(GL.contexts[ctx].GLctx)}Module["_emscripten_webgl_enable_OES_vertex_array_object"]=_emscripten_webgl_enable_OES_vertex_array_object;function _emscripten_webgl_enable_WEBGL_draw_buffers(ctx){return __webgl_enable_WEBGL_draw_buffers(GL.contexts[ctx].GLctx)}Module["_emscripten_webgl_enable_WEBGL_draw_buffers"]=_emscripten_webgl_enable_WEBGL_draw_buffers;function _emscripten_webgl_enable_WEBGL_multi_draw(ctx){return 
__webgl_enable_WEBGL_multi_draw(GL.contexts[ctx].GLctx)}Module["_emscripten_webgl_enable_WEBGL_multi_draw"]=_emscripten_webgl_enable_WEBGL_multi_draw;function _glPixelStorei(pname,param){if(pname==3317){GL.unpackAlignment=param}GLctx.pixelStorei(pname,param)}Module["_glPixelStorei"]=_glPixelStorei;_glPixelStorei.sig="vii";function _glGetString(name_){var ret=GL.stringCache[name_];if(!ret){switch(name_){case 7939:var exts=GLctx.getSupportedExtensions()||[];exts=exts.concat(exts.map(function(e){return"GL_"+e}));ret=stringToNewUTF8(exts.join(" "));break;case 7936:case 7937:case 37445:case 37446:var s=GLctx.getParameter(name_);if(!s){GL.recordError(1280)}ret=s&&stringToNewUTF8(s);break;case 7938:var glVersion=GLctx.getParameter(7938);{glVersion="OpenGL ES 2.0 ("+glVersion+")"}ret=stringToNewUTF8(glVersion);break;case 35724:var glslVersion=GLctx.getParameter(35724);var ver_re=/^WebGL GLSL ES ([0-9]\.[0-9][0-9]?)(?:$| .*)/;var ver_num=glslVersion.match(ver_re);if(ver_num!==null){if(ver_num[1].length==3)ver_num[1]=ver_num[1]+"0";glslVersion="OpenGL ES GLSL ES "+ver_num[1]+" ("+glslVersion+")"}ret=stringToNewUTF8(glslVersion);break;default:GL.recordError(1280)}GL.stringCache[name_]=ret}return ret}Module["_glGetString"]=_glGetString;_glGetString.sig="ii";function _glGetIntegerv(name_,p){emscriptenWebGLGet(name_,p,0)}Module["_glGetIntegerv"]=_glGetIntegerv;_glGetIntegerv.sig="vii";function _glGetFloatv(name_,p){emscriptenWebGLGet(name_,p,2)}Module["_glGetFloatv"]=_glGetFloatv;_glGetFloatv.sig="vii";function _glGetBooleanv(name_,p){emscriptenWebGLGet(name_,p,4)}Module["_glGetBooleanv"]=_glGetBooleanv;_glGetBooleanv.sig="vii";function _glDeleteTextures(n,textures){for(var i=0;i>2];var texture=GL.textures[id];if(!texture)continue;GLctx.deleteTexture(texture);texture.name=0;GL.textures[id]=null}}Module["_glDeleteTextures"]=_glDeleteTextures;_glDeleteTextures.sig="vii";function _glCompressedTexImage2D(target,level,internalFormat,width,height,border,imageSize,data){GLctx["compressedTexImage2D"](target,level,internalFormat,width,height,border,data?HEAPU8.subarray(data,data+imageSize):null)}Module["_glCompressedTexImage2D"]=_glCompressedTexImage2D;_glCompressedTexImage2D.sig="viiiiiiii";function _glCompressedTexSubImage2D(target,level,xoffset,yoffset,width,height,format,imageSize,data){GLctx["compressedTexSubImage2D"](target,level,xoffset,yoffset,width,height,format,data?HEAPU8.subarray(data,data+imageSize):null)}Module["_glCompressedTexSubImage2D"]=_glCompressedTexSubImage2D;_glCompressedTexSubImage2D.sig="viiiiiiiii";function _glTexImage2D(target,level,internalFormat,width,height,border,format,type,pixels){GLctx.texImage2D(target,level,internalFormat,width,height,border,format,type,pixels?emscriptenWebGLGetTexPixelData(type,format,width,height,pixels,internalFormat):null)}Module["_glTexImage2D"]=_glTexImage2D;_glTexImage2D.sig="viiiiiiiii";function _glTexSubImage2D(target,level,xoffset,yoffset,width,height,format,type,pixels){var pixelData=null;if(pixels)pixelData=emscriptenWebGLGetTexPixelData(type,format,width,height,pixels,0);GLctx.texSubImage2D(target,level,xoffset,yoffset,width,height,format,type,pixelData)}Module["_glTexSubImage2D"]=_glTexSubImage2D;_glTexSubImage2D.sig="viiiiiiiii";function _glReadPixels(x,y,width,height,format,type,pixels){var pixelData=emscriptenWebGLGetTexPixelData(type,format,width,height,pixels,format);if(!pixelData){GL.recordError(1280);return}GLctx.readPixels(x,y,width,height,format,type,pixelData)}Module["_glReadPixels"]=_glReadPixels;_glReadPixels.sig="viiiiiii";function 
_glBindTexture(target,texture){GLctx.bindTexture(target,GL.textures[texture])}Module["_glBindTexture"]=_glBindTexture;_glBindTexture.sig="vii";function _glGetTexParameterfv(target,pname,params){if(!params){GL.recordError(1281);return}HEAPF32[params>>2]=GLctx.getTexParameter(target,pname)}Module["_glGetTexParameterfv"]=_glGetTexParameterfv;_glGetTexParameterfv.sig="viii";function _glGetTexParameteriv(target,pname,params){if(!params){GL.recordError(1281);return}HEAP32[params>>2]=GLctx.getTexParameter(target,pname)}Module["_glGetTexParameteriv"]=_glGetTexParameteriv;_glGetTexParameteriv.sig="viii";function _glTexParameterfv(target,pname,params){var param=HEAPF32[params>>2];GLctx.texParameterf(target,pname,param)}Module["_glTexParameterfv"]=_glTexParameterfv;_glTexParameterfv.sig="viii";function _glTexParameteriv(target,pname,params){var param=HEAP32[params>>2];GLctx.texParameteri(target,pname,param)}Module["_glTexParameteriv"]=_glTexParameteriv;_glTexParameteriv.sig="viii";function _glIsTexture(id){var texture=GL.textures[id];if(!texture)return 0;return GLctx.isTexture(texture)}Module["_glIsTexture"]=_glIsTexture;_glIsTexture.sig="ii";function _glGenBuffers(n,buffers){__glGenObject(n,buffers,"createBuffer",GL.buffers)}Module["_glGenBuffers"]=_glGenBuffers;_glGenBuffers.sig="vii";function _glGenTextures(n,textures){__glGenObject(n,textures,"createTexture",GL.textures)}Module["_glGenTextures"]=_glGenTextures;_glGenTextures.sig="vii";function _glDeleteBuffers(n,buffers){for(var i=0;i>2];var buffer=GL.buffers[id];if(!buffer)continue;GLctx.deleteBuffer(buffer);buffer.name=0;GL.buffers[id]=null}}Module["_glDeleteBuffers"]=_glDeleteBuffers;_glDeleteBuffers.sig="vii";function _glGetBufferParameteriv(target,value,data){if(!data){GL.recordError(1281);return}HEAP32[data>>2]=GLctx.getBufferParameter(target,value)}Module["_glGetBufferParameteriv"]=_glGetBufferParameteriv;_glGetBufferParameteriv.sig="viii";function _glBufferData(target,size,data,usage){GLctx.bufferData(target,data?HEAPU8.subarray(data,data+size):size,usage)}Module["_glBufferData"]=_glBufferData;_glBufferData.sig="viiii";function _glBufferSubData(target,offset,size,data){GLctx.bufferSubData(target,offset,HEAPU8.subarray(data,data+size))}Module["_glBufferSubData"]=_glBufferSubData;_glBufferSubData.sig="viiii";function _glGenQueriesEXT(n,ids){for(var i=0;i>2]=0;return}var id=GL.getNewId(GL.queries);query.name=id;GL.queries[id]=query;HEAP32[ids+i*4>>2]=id}}Module["_glGenQueriesEXT"]=_glGenQueriesEXT;_glGenQueriesEXT.sig="vii";function _glDeleteQueriesEXT(n,ids){for(var i=0;i>2];var query=GL.queries[id];if(!query)continue;GLctx.disjointTimerQueryExt["deleteQueryEXT"](query);GL.queries[id]=null}}Module["_glDeleteQueriesEXT"]=_glDeleteQueriesEXT;_glDeleteQueriesEXT.sig="vii";function _glIsQueryEXT(id){var query=GL.queries[id];if(!query)return 0;return GLctx.disjointTimerQueryExt["isQueryEXT"](query)}Module["_glIsQueryEXT"]=_glIsQueryEXT;_glIsQueryEXT.sig="ii";function _glBeginQueryEXT(target,id){GLctx.disjointTimerQueryExt["beginQueryEXT"](target,GL.queries[id])}Module["_glBeginQueryEXT"]=_glBeginQueryEXT;_glBeginQueryEXT.sig="vii";function _glEndQueryEXT(target){GLctx.disjointTimerQueryExt["endQueryEXT"](target)}Module["_glEndQueryEXT"]=_glEndQueryEXT;_glEndQueryEXT.sig="vi";function _glQueryCounterEXT(id,target){GLctx.disjointTimerQueryExt["queryCounterEXT"](GL.queries[id],target)}Module["_glQueryCounterEXT"]=_glQueryCounterEXT;_glQueryCounterEXT.sig="vii";function 
_glGetQueryivEXT(target,pname,params){if(!params){GL.recordError(1281);return}HEAP32[params>>2]=GLctx.disjointTimerQueryExt["getQueryEXT"](target,pname)}Module["_glGetQueryivEXT"]=_glGetQueryivEXT;_glGetQueryivEXT.sig="viii";function _glGetQueryObjectivEXT(id,pname,params){if(!params){GL.recordError(1281);return}var query=GL.queries[id];var param=GLctx.disjointTimerQueryExt["getQueryObjectEXT"](query,pname);var ret;if(typeof param=="boolean"){ret=param?1:0}else{ret=param}HEAP32[params>>2]=ret}Module["_glGetQueryObjectivEXT"]=_glGetQueryObjectivEXT;_glGetQueryObjectivEXT.sig="viii";function _glGetQueryObjectuivEXT(id,pname,params){if(!params){GL.recordError(1281);return}var query=GL.queries[id];var param=GLctx.disjointTimerQueryExt["getQueryObjectEXT"](query,pname);var ret;if(typeof param=="boolean"){ret=param?1:0}else{ret=param}HEAP32[params>>2]=ret}Module["_glGetQueryObjectuivEXT"]=_glGetQueryObjectuivEXT;_glGetQueryObjectuivEXT.sig="viii";function _glGetQueryObjecti64vEXT(id,pname,params){if(!params){GL.recordError(1281);return}var query=GL.queries[id];var param;{param=GLctx.disjointTimerQueryExt["getQueryObjectEXT"](query,pname)}var ret;if(typeof param=="boolean"){ret=param?1:0}else{ret=param}writeI53ToI64(params,ret)}Module["_glGetQueryObjecti64vEXT"]=_glGetQueryObjecti64vEXT;_glGetQueryObjecti64vEXT.sig="viii";function _glGetQueryObjectui64vEXT(id,pname,params){if(!params){GL.recordError(1281);return}var query=GL.queries[id];var param;{param=GLctx.disjointTimerQueryExt["getQueryObjectEXT"](query,pname)}var ret;if(typeof param=="boolean"){ret=param?1:0}else{ret=param}writeI53ToI64(params,ret)}Module["_glGetQueryObjectui64vEXT"]=_glGetQueryObjectui64vEXT;_glGetQueryObjectui64vEXT.sig="viii";function _glIsBuffer(buffer){var b=GL.buffers[buffer];if(!b)return 0;return GLctx.isBuffer(b)}Module["_glIsBuffer"]=_glIsBuffer;_glIsBuffer.sig="ii";function _glGenRenderbuffers(n,renderbuffers){__glGenObject(n,renderbuffers,"createRenderbuffer",GL.renderbuffers)}Module["_glGenRenderbuffers"]=_glGenRenderbuffers;_glGenRenderbuffers.sig="vii";function _glDeleteRenderbuffers(n,renderbuffers){for(var i=0;i>2];var renderbuffer=GL.renderbuffers[id];if(!renderbuffer)continue;GLctx.deleteRenderbuffer(renderbuffer);renderbuffer.name=0;GL.renderbuffers[id]=null}}Module["_glDeleteRenderbuffers"]=_glDeleteRenderbuffers;_glDeleteRenderbuffers.sig="vii";function _glBindRenderbuffer(target,renderbuffer){GLctx.bindRenderbuffer(target,GL.renderbuffers[renderbuffer])}Module["_glBindRenderbuffer"]=_glBindRenderbuffer;_glBindRenderbuffer.sig="vii";function _glGetRenderbufferParameteriv(target,pname,params){if(!params){GL.recordError(1281);return}HEAP32[params>>2]=GLctx.getRenderbufferParameter(target,pname)}Module["_glGetRenderbufferParameteriv"]=_glGetRenderbufferParameteriv;_glGetRenderbufferParameteriv.sig="viii";function _glIsRenderbuffer(renderbuffer){var rb=GL.renderbuffers[renderbuffer];if(!rb)return 0;return GLctx.isRenderbuffer(rb)}Module["_glIsRenderbuffer"]=_glIsRenderbuffer;_glIsRenderbuffer.sig="ii";function _glGetUniformfv(program,location,params){emscriptenWebGLGetUniform(program,location,params,2)}Module["_glGetUniformfv"]=_glGetUniformfv;_glGetUniformfv.sig="viii";function _glGetUniformiv(program,location,params){emscriptenWebGLGetUniform(program,location,params,0)}Module["_glGetUniformiv"]=_glGetUniformiv;_glGetUniformiv.sig="viii";function _glGetUniformLocation(program,name){name=UTF8ToString(name);if(program=GL.programs[program]){webglPrepareUniformLocationsBeforeFirstUse(program);var 
uniformLocsById=program.uniformLocsById;var arrayIndex=0;var uniformBaseName=name;var leftBrace=webglGetLeftBracePos(name);if(leftBrace>0){arrayIndex=jstoi_q(name.slice(leftBrace+1))>>>0;uniformBaseName=name.slice(0,leftBrace)}var sizeAndId=program.uniformSizeAndIdsByName[uniformBaseName];if(sizeAndId&&arrayIndex>2]=GLctx.getVertexAttribOffset(index,pname)}Module["_glGetVertexAttribPointerv"]=_glGetVertexAttribPointerv;_glGetVertexAttribPointerv.sig="viii";function _glUniform1f(location,v0){GLctx.uniform1f(webglGetUniformLocation(location),v0)}Module["_glUniform1f"]=_glUniform1f;_glUniform1f.sig="vif";function _glUniform2f(location,v0,v1){GLctx.uniform2f(webglGetUniformLocation(location),v0,v1)}Module["_glUniform2f"]=_glUniform2f;_glUniform2f.sig="viff";function _glUniform3f(location,v0,v1,v2){GLctx.uniform3f(webglGetUniformLocation(location),v0,v1,v2)}Module["_glUniform3f"]=_glUniform3f;_glUniform3f.sig="vifff";function _glUniform4f(location,v0,v1,v2,v3){GLctx.uniform4f(webglGetUniformLocation(location),v0,v1,v2,v3)}Module["_glUniform4f"]=_glUniform4f;_glUniform4f.sig="viffff";function _glUniform1i(location,v0){GLctx.uniform1i(webglGetUniformLocation(location),v0)}Module["_glUniform1i"]=_glUniform1i;_glUniform1i.sig="vii";function _glUniform2i(location,v0,v1){GLctx.uniform2i(webglGetUniformLocation(location),v0,v1)}Module["_glUniform2i"]=_glUniform2i;_glUniform2i.sig="viii";function _glUniform3i(location,v0,v1,v2){GLctx.uniform3i(webglGetUniformLocation(location),v0,v1,v2)}Module["_glUniform3i"]=_glUniform3i;_glUniform3i.sig="viiii";function _glUniform4i(location,v0,v1,v2,v3){GLctx.uniform4i(webglGetUniformLocation(location),v0,v1,v2,v3)}Module["_glUniform4i"]=_glUniform4i;_glUniform4i.sig="viiiii";function _glUniform1iv(location,count,value){if(count<=288){var view=__miniTempWebGLIntBuffers[count-1];for(var i=0;i>2]}}else{var view=HEAP32.subarray(value>>2,value+count*4>>2)}GLctx.uniform1iv(webglGetUniformLocation(location),view)}Module["_glUniform1iv"]=_glUniform1iv;_glUniform1iv.sig="viii";function _glUniform2iv(location,count,value){if(count<=144){var view=__miniTempWebGLIntBuffers[2*count-1];for(var i=0;i<2*count;i+=2){view[i]=HEAP32[value+4*i>>2];view[i+1]=HEAP32[value+(4*i+4)>>2]}}else{var view=HEAP32.subarray(value>>2,value+count*8>>2)}GLctx.uniform2iv(webglGetUniformLocation(location),view)}Module["_glUniform2iv"]=_glUniform2iv;_glUniform2iv.sig="viii";function _glUniform3iv(location,count,value){if(count<=96){var view=__miniTempWebGLIntBuffers[3*count-1];for(var i=0;i<3*count;i+=3){view[i]=HEAP32[value+4*i>>2];view[i+1]=HEAP32[value+(4*i+4)>>2];view[i+2]=HEAP32[value+(4*i+8)>>2]}}else{var view=HEAP32.subarray(value>>2,value+count*12>>2)}GLctx.uniform3iv(webglGetUniformLocation(location),view)}Module["_glUniform3iv"]=_glUniform3iv;_glUniform3iv.sig="viii";function _glUniform4iv(location,count,value){if(count<=72){var view=__miniTempWebGLIntBuffers[4*count-1];for(var i=0;i<4*count;i+=4){view[i]=HEAP32[value+4*i>>2];view[i+1]=HEAP32[value+(4*i+4)>>2];view[i+2]=HEAP32[value+(4*i+8)>>2];view[i+3]=HEAP32[value+(4*i+12)>>2]}}else{var view=HEAP32.subarray(value>>2,value+count*16>>2)}GLctx.uniform4iv(webglGetUniformLocation(location),view)}Module["_glUniform4iv"]=_glUniform4iv;_glUniform4iv.sig="viii";function _glUniform1fv(location,count,value){if(count<=288){var view=miniTempWebGLFloatBuffers[count-1];for(var i=0;i>2]}}else{var 
view=HEAPF32.subarray(value>>2,value+count*4>>2)}GLctx.uniform1fv(webglGetUniformLocation(location),view)}Module["_glUniform1fv"]=_glUniform1fv;_glUniform1fv.sig="viii";function _glUniform2fv(location,count,value){if(count<=144){var view=miniTempWebGLFloatBuffers[2*count-1];for(var i=0;i<2*count;i+=2){view[i]=HEAPF32[value+4*i>>2];view[i+1]=HEAPF32[value+(4*i+4)>>2]}}else{var view=HEAPF32.subarray(value>>2,value+count*8>>2)}GLctx.uniform2fv(webglGetUniformLocation(location),view)}Module["_glUniform2fv"]=_glUniform2fv;_glUniform2fv.sig="viii";function _glUniform3fv(location,count,value){if(count<=96){var view=miniTempWebGLFloatBuffers[3*count-1];for(var i=0;i<3*count;i+=3){view[i]=HEAPF32[value+4*i>>2];view[i+1]=HEAPF32[value+(4*i+4)>>2];view[i+2]=HEAPF32[value+(4*i+8)>>2]}}else{var view=HEAPF32.subarray(value>>2,value+count*12>>2)}GLctx.uniform3fv(webglGetUniformLocation(location),view)}Module["_glUniform3fv"]=_glUniform3fv;_glUniform3fv.sig="viii";function _glUniform4fv(location,count,value){if(count<=72){var view=miniTempWebGLFloatBuffers[4*count-1];var heap=HEAPF32;value>>=2;for(var i=0;i<4*count;i+=4){var dst=value+i;view[i]=heap[dst];view[i+1]=heap[dst+1];view[i+2]=heap[dst+2];view[i+3]=heap[dst+3]}}else{var view=HEAPF32.subarray(value>>2,value+count*16>>2)}GLctx.uniform4fv(webglGetUniformLocation(location),view)}Module["_glUniform4fv"]=_glUniform4fv;_glUniform4fv.sig="viii";function _glUniformMatrix2fv(location,count,transpose,value){if(count<=72){var view=miniTempWebGLFloatBuffers[4*count-1];for(var i=0;i<4*count;i+=4){view[i]=HEAPF32[value+4*i>>2];view[i+1]=HEAPF32[value+(4*i+4)>>2];view[i+2]=HEAPF32[value+(4*i+8)>>2];view[i+3]=HEAPF32[value+(4*i+12)>>2]}}else{var view=HEAPF32.subarray(value>>2,value+count*16>>2)}GLctx.uniformMatrix2fv(webglGetUniformLocation(location),!!transpose,view)}Module["_glUniformMatrix2fv"]=_glUniformMatrix2fv;_glUniformMatrix2fv.sig="viiii";function _glUniformMatrix3fv(location,count,transpose,value){if(count<=32){var view=miniTempWebGLFloatBuffers[9*count-1];for(var i=0;i<9*count;i+=9){view[i]=HEAPF32[value+4*i>>2];view[i+1]=HEAPF32[value+(4*i+4)>>2];view[i+2]=HEAPF32[value+(4*i+8)>>2];view[i+3]=HEAPF32[value+(4*i+12)>>2];view[i+4]=HEAPF32[value+(4*i+16)>>2];view[i+5]=HEAPF32[value+(4*i+20)>>2];view[i+6]=HEAPF32[value+(4*i+24)>>2];view[i+7]=HEAPF32[value+(4*i+28)>>2];view[i+8]=HEAPF32[value+(4*i+32)>>2]}}else{var view=HEAPF32.subarray(value>>2,value+count*36>>2)}GLctx.uniformMatrix3fv(webglGetUniformLocation(location),!!transpose,view)}Module["_glUniformMatrix3fv"]=_glUniformMatrix3fv;_glUniformMatrix3fv.sig="viiii";function _glUniformMatrix4fv(location,count,transpose,value){if(count<=18){var view=miniTempWebGLFloatBuffers[16*count-1];var heap=HEAPF32;value>>=2;for(var i=0;i<16*count;i+=16){var dst=value+i;view[i]=heap[dst];view[i+1]=heap[dst+1];view[i+2]=heap[dst+2];view[i+3]=heap[dst+3];view[i+4]=heap[dst+4];view[i+5]=heap[dst+5];view[i+6]=heap[dst+6];view[i+7]=heap[dst+7];view[i+8]=heap[dst+8];view[i+9]=heap[dst+9];view[i+10]=heap[dst+10];view[i+11]=heap[dst+11];view[i+12]=heap[dst+12];view[i+13]=heap[dst+13];view[i+14]=heap[dst+14];view[i+15]=heap[dst+15]}}else{var view=HEAPF32.subarray(value>>2,value+count*64>>2)}GLctx.uniformMatrix4fv(webglGetUniformLocation(location),!!transpose,view)}Module["_glUniformMatrix4fv"]=_glUniformMatrix4fv;_glUniformMatrix4fv.sig="viiii";function _glBindBuffer(target,buffer){GLctx.bindBuffer(target,GL.buffers[buffer])}Module["_glBindBuffer"]=_glBindBuffer;_glBindBuffer.sig="vii";function 
_glVertexAttrib1fv(index,v){GLctx.vertexAttrib1f(index,HEAPF32[v>>2])}Module["_glVertexAttrib1fv"]=_glVertexAttrib1fv;_glVertexAttrib1fv.sig="vii";function _glVertexAttrib2fv(index,v){GLctx.vertexAttrib2f(index,HEAPF32[v>>2],HEAPF32[v+4>>2])}Module["_glVertexAttrib2fv"]=_glVertexAttrib2fv;_glVertexAttrib2fv.sig="vii";function _glVertexAttrib3fv(index,v){GLctx.vertexAttrib3f(index,HEAPF32[v>>2],HEAPF32[v+4>>2],HEAPF32[v+8>>2])}Module["_glVertexAttrib3fv"]=_glVertexAttrib3fv;_glVertexAttrib3fv.sig="vii";function _glVertexAttrib4fv(index,v){GLctx.vertexAttrib4f(index,HEAPF32[v>>2],HEAPF32[v+4>>2],HEAPF32[v+8>>2],HEAPF32[v+12>>2])}Module["_glVertexAttrib4fv"]=_glVertexAttrib4fv;_glVertexAttrib4fv.sig="vii";function _glGetAttribLocation(program,name){return GLctx.getAttribLocation(GL.programs[program],UTF8ToString(name))}Module["_glGetAttribLocation"]=_glGetAttribLocation;_glGetAttribLocation.sig="iii";function _glGetActiveAttrib(program,index,bufSize,length,size,type,name){__glGetActiveAttribOrUniform("getActiveAttrib",program,index,bufSize,length,size,type,name)}Module["_glGetActiveAttrib"]=_glGetActiveAttrib;_glGetActiveAttrib.sig="viiiiiii";function _glGetActiveUniform(program,index,bufSize,length,size,type,name){__glGetActiveAttribOrUniform("getActiveUniform",program,index,bufSize,length,size,type,name)}Module["_glGetActiveUniform"]=_glGetActiveUniform;_glGetActiveUniform.sig="viiiiiii";function _glCreateShader(shaderType){var id=GL.getNewId(GL.shaders);GL.shaders[id]=GLctx.createShader(shaderType);return id}Module["_glCreateShader"]=_glCreateShader;_glCreateShader.sig="ii";function _glDeleteShader(id){if(!id)return;var shader=GL.shaders[id];if(!shader){GL.recordError(1281);return}GLctx.deleteShader(shader);GL.shaders[id]=null}Module["_glDeleteShader"]=_glDeleteShader;_glDeleteShader.sig="vi";function _glGetAttachedShaders(program,maxCount,count,shaders){var result=GLctx.getAttachedShaders(GL.programs[program]);var len=result.length;if(len>maxCount){len=maxCount}HEAP32[count>>2]=len;for(var i=0;i>2]=id}}Module["_glGetAttachedShaders"]=_glGetAttachedShaders;_glGetAttachedShaders.sig="viiii";function _glShaderSource(shader,count,string,length){var source=GL.getSource(shader,count,string,length);GLctx.shaderSource(GL.shaders[shader],source)}Module["_glShaderSource"]=_glShaderSource;_glShaderSource.sig="viiii";function _glGetShaderSource(shader,bufSize,length,source){var result=GLctx.getShaderSource(GL.shaders[shader]);if(!result)return;var numBytesWrittenExclNull=bufSize>0&&source?stringToUTF8(result,source,bufSize):0;if(length)HEAP32[length>>2]=numBytesWrittenExclNull}Module["_glGetShaderSource"]=_glGetShaderSource;_glGetShaderSource.sig="viiii";function _glCompileShader(shader){GLctx.compileShader(GL.shaders[shader])}Module["_glCompileShader"]=_glCompileShader;_glCompileShader.sig="vi";function _glGetShaderInfoLog(shader,maxLength,length,infoLog){var log=GLctx.getShaderInfoLog(GL.shaders[shader]);if(log===null)log="(unknown error)";var numBytesWrittenExclNull=maxLength>0&&infoLog?stringToUTF8(log,infoLog,maxLength):0;if(length)HEAP32[length>>2]=numBytesWrittenExclNull}Module["_glGetShaderInfoLog"]=_glGetShaderInfoLog;_glGetShaderInfoLog.sig="viiii";function _glGetShaderiv(shader,pname,p){if(!p){GL.recordError(1281);return}if(pname==35716){var log=GLctx.getShaderInfoLog(GL.shaders[shader]);if(log===null)log="(unknown error)";var logLength=log?log.length+1:0;HEAP32[p>>2]=logLength}else if(pname==35720){var source=GLctx.getShaderSource(GL.shaders[shader]);var 
sourceLength=source?source.length+1:0;HEAP32[p>>2]=sourceLength}else{HEAP32[p>>2]=GLctx.getShaderParameter(GL.shaders[shader],pname)}}Module["_glGetShaderiv"]=_glGetShaderiv;_glGetShaderiv.sig="viii";function _glGetProgramiv(program,pname,p){if(!p){GL.recordError(1281);return}if(program>=GL.counter){GL.recordError(1281);return}program=GL.programs[program];if(pname==35716){var log=GLctx.getProgramInfoLog(program);if(log===null)log="(unknown error)";HEAP32[p>>2]=log.length+1}else if(pname==35719){if(!program.maxUniformLength){for(var i=0;i>2]=program.maxUniformLength}else if(pname==35722){if(!program.maxAttributeLength){for(var i=0;i>2]=program.maxAttributeLength}else if(pname==35381){if(!program.maxUniformBlockNameLength){for(var i=0;i>2]=program.maxUniformBlockNameLength}else{HEAP32[p>>2]=GLctx.getProgramParameter(program,pname)}}Module["_glGetProgramiv"]=_glGetProgramiv;_glGetProgramiv.sig="viii";function _glIsShader(shader){var s=GL.shaders[shader];if(!s)return 0;return GLctx.isShader(s)}Module["_glIsShader"]=_glIsShader;_glIsShader.sig="ii";function _glCreateProgram(){var id=GL.getNewId(GL.programs);var program=GLctx.createProgram();program.name=id;program.maxUniformLength=program.maxAttributeLength=program.maxUniformBlockNameLength=0;program.uniformIdCounter=1;GL.programs[id]=program;return id}Module["_glCreateProgram"]=_glCreateProgram;_glCreateProgram.sig="i";function _glDeleteProgram(id){if(!id)return;var program=GL.programs[id];if(!program){GL.recordError(1281);return}GLctx.deleteProgram(program);program.name=0;GL.programs[id]=null}Module["_glDeleteProgram"]=_glDeleteProgram;_glDeleteProgram.sig="vi";function _glAttachShader(program,shader){GLctx.attachShader(GL.programs[program],GL.shaders[shader])}Module["_glAttachShader"]=_glAttachShader;_glAttachShader.sig="vii";function _glDetachShader(program,shader){GLctx.detachShader(GL.programs[program],GL.shaders[shader])}Module["_glDetachShader"]=_glDetachShader;_glDetachShader.sig="vii";function _glGetShaderPrecisionFormat(shaderType,precisionType,range,precision){var result=GLctx.getShaderPrecisionFormat(shaderType,precisionType);HEAP32[range>>2]=result.rangeMin;HEAP32[range+4>>2]=result.rangeMax;HEAP32[precision>>2]=result.precision}Module["_glGetShaderPrecisionFormat"]=_glGetShaderPrecisionFormat;_glGetShaderPrecisionFormat.sig="viiii";function _glLinkProgram(program){program=GL.programs[program];GLctx.linkProgram(program);program.uniformLocsById=0;program.uniformSizeAndIdsByName={}}Module["_glLinkProgram"]=_glLinkProgram;_glLinkProgram.sig="vi";function _glGetProgramInfoLog(program,maxLength,length,infoLog){var log=GLctx.getProgramInfoLog(GL.programs[program]);if(log===null)log="(unknown error)";var numBytesWrittenExclNull=maxLength>0&&infoLog?stringToUTF8(log,infoLog,maxLength):0;if(length)HEAP32[length>>2]=numBytesWrittenExclNull}Module["_glGetProgramInfoLog"]=_glGetProgramInfoLog;_glGetProgramInfoLog.sig="viiii";function _glUseProgram(program){program=GL.programs[program];GLctx.useProgram(program);GLctx.currentProgram=program}Module["_glUseProgram"]=_glUseProgram;_glUseProgram.sig="vi";function _glValidateProgram(program){GLctx.validateProgram(GL.programs[program])}Module["_glValidateProgram"]=_glValidateProgram;_glValidateProgram.sig="vi";function _glIsProgram(program){program=GL.programs[program];if(!program)return 0;return GLctx.isProgram(program)}Module["_glIsProgram"]=_glIsProgram;_glIsProgram.sig="ii";function 
_glBindAttribLocation(program,index,name){GLctx.bindAttribLocation(GL.programs[program],index,UTF8ToString(name))}Module["_glBindAttribLocation"]=_glBindAttribLocation;_glBindAttribLocation.sig="viii";function _glBindFramebuffer(target,framebuffer){GLctx.bindFramebuffer(target,GL.framebuffers[framebuffer])}Module["_glBindFramebuffer"]=_glBindFramebuffer;_glBindFramebuffer.sig="vii";function _glGenFramebuffers(n,ids){__glGenObject(n,ids,"createFramebuffer",GL.framebuffers)}Module["_glGenFramebuffers"]=_glGenFramebuffers;_glGenFramebuffers.sig="vii";function _glDeleteFramebuffers(n,framebuffers){for(var i=0;i>2];var framebuffer=GL.framebuffers[id];if(!framebuffer)continue;GLctx.deleteFramebuffer(framebuffer);framebuffer.name=0;GL.framebuffers[id]=null}}Module["_glDeleteFramebuffers"]=_glDeleteFramebuffers;_glDeleteFramebuffers.sig="vii";function _glFramebufferRenderbuffer(target,attachment,renderbuffertarget,renderbuffer){GLctx.framebufferRenderbuffer(target,attachment,renderbuffertarget,GL.renderbuffers[renderbuffer])}Module["_glFramebufferRenderbuffer"]=_glFramebufferRenderbuffer;_glFramebufferRenderbuffer.sig="viiii";function _glFramebufferTexture2D(target,attachment,textarget,texture,level){GLctx.framebufferTexture2D(target,attachment,textarget,GL.textures[texture],level)}Module["_glFramebufferTexture2D"]=_glFramebufferTexture2D;_glFramebufferTexture2D.sig="viiiii";function _glGetFramebufferAttachmentParameteriv(target,attachment,pname,params){var result=GLctx.getFramebufferAttachmentParameter(target,attachment,pname);if(result instanceof WebGLRenderbuffer||result instanceof WebGLTexture){result=result.name|0}HEAP32[params>>2]=result}Module["_glGetFramebufferAttachmentParameteriv"]=_glGetFramebufferAttachmentParameteriv;_glGetFramebufferAttachmentParameteriv.sig="viiii";function _glIsFramebuffer(framebuffer){var fb=GL.framebuffers[framebuffer];if(!fb)return 0;return GLctx.isFramebuffer(fb)}Module["_glIsFramebuffer"]=_glIsFramebuffer;_glIsFramebuffer.sig="ii";function _glGenVertexArrays(n,arrays){__glGenObject(n,arrays,"createVertexArray",GL.vaos)}Module["_glGenVertexArrays"]=_glGenVertexArrays;_glGenVertexArrays.sig="vii";function _glDeleteVertexArrays(n,vaos){for(var i=0;i>2];GLctx["deleteVertexArray"](GL.vaos[id]);GL.vaos[id]=null}}Module["_glDeleteVertexArrays"]=_glDeleteVertexArrays;_glDeleteVertexArrays.sig="vii";function _glBindVertexArray(vao){GLctx["bindVertexArray"](GL.vaos[vao])}Module["_glBindVertexArray"]=_glBindVertexArray;_glBindVertexArray.sig="vi";function _glIsVertexArray(array){var vao=GL.vaos[array];if(!vao)return 0;return GLctx["isVertexArray"](vao)}Module["_glIsVertexArray"]=_glIsVertexArray;_glIsVertexArray.sig="ii";function _glVertexPointer(){throw"Legacy GL function (glVertexPointer) called. If you want legacy GL emulation, you need to compile with -s LEGACY_GL_EMULATION=1 to enable legacy GL emulation."}Module["_glVertexPointer"]=_glVertexPointer;function _glMatrixMode(){throw"Legacy GL function (glMatrixMode) called. If you want legacy GL emulation, you need to compile with -s LEGACY_GL_EMULATION=1 to enable legacy GL emulation."}Module["_glMatrixMode"]=_glMatrixMode;function _glBegin(){throw"Legacy GL function (glBegin) called. If you want legacy GL emulation, you need to compile with -s LEGACY_GL_EMULATION=1 to enable legacy GL emulation."}Module["_glBegin"]=_glBegin;function _glLoadIdentity(){throw"Legacy GL function (glLoadIdentity) called. 
If you want legacy GL emulation, you need to compile with -s LEGACY_GL_EMULATION=1 to enable legacy GL emulation."}Module["_glLoadIdentity"]=_glLoadIdentity;function _glGenVertexArraysOES(n,arrays){__glGenObject(n,arrays,"createVertexArray",GL.vaos)}Module["_glGenVertexArraysOES"]=_glGenVertexArraysOES;_glGenVertexArraysOES.sig="vii";function _glDeleteVertexArraysOES(n,vaos){for(var i=0;i>2];GLctx["deleteVertexArray"](GL.vaos[id]);GL.vaos[id]=null}}Module["_glDeleteVertexArraysOES"]=_glDeleteVertexArraysOES;_glDeleteVertexArraysOES.sig="vii";function _glBindVertexArrayOES(vao){GLctx["bindVertexArray"](GL.vaos[vao])}Module["_glBindVertexArrayOES"]=_glBindVertexArrayOES;_glBindVertexArrayOES.sig="vi";function _glIsVertexArrayOES(array){var vao=GL.vaos[array];if(!vao)return 0;return GLctx["isVertexArray"](vao)}Module["_glIsVertexArrayOES"]=_glIsVertexArrayOES;_glIsVertexArrayOES.sig="ii";function _glVertexAttribPointer(index,size,type,normalized,stride,ptr){GLctx.vertexAttribPointer(index,size,type,!!normalized,stride,ptr)}Module["_glVertexAttribPointer"]=_glVertexAttribPointer;_glVertexAttribPointer.sig="viiiiii";function _glEnableVertexAttribArray(index){GLctx.enableVertexAttribArray(index)}Module["_glEnableVertexAttribArray"]=_glEnableVertexAttribArray;_glEnableVertexAttribArray.sig="vi";function _glDisableVertexAttribArray(index){GLctx.disableVertexAttribArray(index)}Module["_glDisableVertexAttribArray"]=_glDisableVertexAttribArray;_glDisableVertexAttribArray.sig="vi";function _glDrawArrays(mode,first,count){GLctx.drawArrays(mode,first,count)}Module["_glDrawArrays"]=_glDrawArrays;_glDrawArrays.sig="viii";function _glDrawElements(mode,count,type,indices){GLctx.drawElements(mode,count,type,indices)}Module["_glDrawElements"]=_glDrawElements;_glDrawElements.sig="viiii";function _glShaderBinary(){GL.recordError(1280)}Module["_glShaderBinary"]=_glShaderBinary;_glShaderBinary.sig="v";function _glReleaseShaderCompiler(){}Module["_glReleaseShaderCompiler"]=_glReleaseShaderCompiler;_glReleaseShaderCompiler.sig="v";function _glGetError(){var error=GLctx.getError()||GL.lastError;GL.lastError=0;return error}Module["_glGetError"]=_glGetError;_glGetError.sig="i";function _glVertexAttribDivisor(index,divisor){GLctx["vertexAttribDivisor"](index,divisor)}Module["_glVertexAttribDivisor"]=_glVertexAttribDivisor;_glVertexAttribDivisor.sig="vii";function _glDrawArraysInstanced(mode,first,count,primcount){GLctx["drawArraysInstanced"](mode,first,count,primcount)}Module["_glDrawArraysInstanced"]=_glDrawArraysInstanced;_glDrawArraysInstanced.sig="viiii";function _glDrawElementsInstanced(mode,count,type,indices,primcount){GLctx["drawElementsInstanced"](mode,count,type,indices,primcount)}Module["_glDrawElementsInstanced"]=_glDrawElementsInstanced;_glDrawElementsInstanced.sig="viiiii";function _glVertexAttribDivisorNV(index,divisor){GLctx["vertexAttribDivisor"](index,divisor)}Module["_glVertexAttribDivisorNV"]=_glVertexAttribDivisorNV;_glVertexAttribDivisorNV.sig="vii";function _glDrawArraysInstancedNV(mode,first,count,primcount){GLctx["drawArraysInstanced"](mode,first,count,primcount)}Module["_glDrawArraysInstancedNV"]=_glDrawArraysInstancedNV;_glDrawArraysInstancedNV.sig="viiii";function _glDrawElementsInstancedNV(mode,count,type,indices,primcount){GLctx["drawElementsInstanced"](mode,count,type,indices,primcount)}Module["_glDrawElementsInstancedNV"]=_glDrawElementsInstancedNV;_glDrawElementsInstancedNV.sig="viiiii";function 
_glVertexAttribDivisorEXT(index,divisor){GLctx["vertexAttribDivisor"](index,divisor)}Module["_glVertexAttribDivisorEXT"]=_glVertexAttribDivisorEXT;_glVertexAttribDivisorEXT.sig="vii";function _glDrawArraysInstancedEXT(mode,first,count,primcount){GLctx["drawArraysInstanced"](mode,first,count,primcount)}Module["_glDrawArraysInstancedEXT"]=_glDrawArraysInstancedEXT;_glDrawArraysInstancedEXT.sig="viiii";function _glDrawElementsInstancedEXT(mode,count,type,indices,primcount){GLctx["drawElementsInstanced"](mode,count,type,indices,primcount)}Module["_glDrawElementsInstancedEXT"]=_glDrawElementsInstancedEXT;_glDrawElementsInstancedEXT.sig="viiiii";function _glVertexAttribDivisorARB(index,divisor){GLctx["vertexAttribDivisor"](index,divisor)}Module["_glVertexAttribDivisorARB"]=_glVertexAttribDivisorARB;_glVertexAttribDivisorARB.sig="vii";function _glDrawArraysInstancedARB(mode,first,count,primcount){GLctx["drawArraysInstanced"](mode,first,count,primcount)}Module["_glDrawArraysInstancedARB"]=_glDrawArraysInstancedARB;_glDrawArraysInstancedARB.sig="viiii";function _glDrawElementsInstancedARB(mode,count,type,indices,primcount){GLctx["drawElementsInstanced"](mode,count,type,indices,primcount)}Module["_glDrawElementsInstancedARB"]=_glDrawElementsInstancedARB;_glDrawElementsInstancedARB.sig="viiiii";function _glVertexAttribDivisorANGLE(index,divisor){GLctx["vertexAttribDivisor"](index,divisor)}Module["_glVertexAttribDivisorANGLE"]=_glVertexAttribDivisorANGLE;_glVertexAttribDivisorANGLE.sig="vii";function _glDrawArraysInstancedANGLE(mode,first,count,primcount){GLctx["drawArraysInstanced"](mode,first,count,primcount)}Module["_glDrawArraysInstancedANGLE"]=_glDrawArraysInstancedANGLE;_glDrawArraysInstancedANGLE.sig="viiii";function _glDrawElementsInstancedANGLE(mode,count,type,indices,primcount){GLctx["drawElementsInstanced"](mode,count,type,indices,primcount)}Module["_glDrawElementsInstancedANGLE"]=_glDrawElementsInstancedANGLE;_glDrawElementsInstancedANGLE.sig="viiiii";function _glDrawBuffers(n,bufs){var bufArray=tempFixedLengthArray[n];for(var i=0;i>2]}GLctx["drawBuffers"](bufArray)}Module["_glDrawBuffers"]=_glDrawBuffers;_glDrawBuffers.sig="vii";function _glDrawBuffersEXT(n,bufs){var bufArray=tempFixedLengthArray[n];for(var i=0;i>2]}GLctx["drawBuffers"](bufArray)}Module["_glDrawBuffersEXT"]=_glDrawBuffersEXT;_glDrawBuffersEXT.sig="vii";function _glDrawBuffersWEBGL(n,bufs){var bufArray=tempFixedLengthArray[n];for(var i=0;i>2]}GLctx["drawBuffers"](bufArray)}Module["_glDrawBuffersWEBGL"]=_glDrawBuffersWEBGL;_glDrawBuffersWEBGL.sig="vii";function _glColorMask(red,green,blue,alpha){GLctx.colorMask(!!red,!!green,!!blue,!!alpha)}Module["_glColorMask"]=_glColorMask;_glColorMask.sig="viiii";function _glDepthMask(flag){GLctx.depthMask(!!flag)}Module["_glDepthMask"]=_glDepthMask;_glDepthMask.sig="vi";function _glSampleCoverage(value,invert){GLctx.sampleCoverage(value,!!invert)}Module["_glSampleCoverage"]=_glSampleCoverage;_glSampleCoverage.sig="vii";function _glMultiDrawArrays(mode,firsts,counts,drawcount){GLctx.multiDrawWebgl["multiDrawArraysWEBGL"](mode,HEAP32,firsts>>2,HEAP32,counts>>2,drawcount)}Module["_glMultiDrawArrays"]=_glMultiDrawArrays;_glMultiDrawArrays.sig="viiii";function _glMultiDrawArraysANGLE(mode,firsts,counts,drawcount){GLctx.multiDrawWebgl["multiDrawArraysWEBGL"](mode,HEAP32,firsts>>2,HEAP32,counts>>2,drawcount)}Module["_glMultiDrawArraysANGLE"]=_glMultiDrawArraysANGLE;_glMultiDrawArraysANGLE.sig="viiii";function 
_glMultiDrawArraysWEBGL(mode,firsts,counts,drawcount){GLctx.multiDrawWebgl["multiDrawArraysWEBGL"](mode,HEAP32,firsts>>2,HEAP32,counts>>2,drawcount)}Module["_glMultiDrawArraysWEBGL"]=_glMultiDrawArraysWEBGL;_glMultiDrawArraysWEBGL.sig="viiii";function _glMultiDrawArraysInstancedANGLE(mode,firsts,counts,instanceCounts,drawcount){GLctx.multiDrawWebgl["multiDrawArraysInstancedWEBGL"](mode,HEAP32,firsts>>2,HEAP32,counts>>2,HEAP32,instanceCounts>>2,drawcount)}Module["_glMultiDrawArraysInstancedANGLE"]=_glMultiDrawArraysInstancedANGLE;_glMultiDrawArraysInstancedANGLE.sig="viiiii";function _glMultiDrawArraysInstancedWEBGL(mode,firsts,counts,instanceCounts,drawcount){GLctx.multiDrawWebgl["multiDrawArraysInstancedWEBGL"](mode,HEAP32,firsts>>2,HEAP32,counts>>2,HEAP32,instanceCounts>>2,drawcount)}Module["_glMultiDrawArraysInstancedWEBGL"]=_glMultiDrawArraysInstancedWEBGL;_glMultiDrawArraysInstancedWEBGL.sig="viiiii";function _glMultiDrawElements(mode,counts,type,offsets,drawcount){GLctx.multiDrawWebgl["multiDrawElementsWEBGL"](mode,HEAP32,counts>>2,type,HEAP32,offsets>>2,drawcount)}Module["_glMultiDrawElements"]=_glMultiDrawElements;_glMultiDrawElements.sig="viiiii";function _glMultiDrawElementsANGLE(mode,counts,type,offsets,drawcount){GLctx.multiDrawWebgl["multiDrawElementsWEBGL"](mode,HEAP32,counts>>2,type,HEAP32,offsets>>2,drawcount)}Module["_glMultiDrawElementsANGLE"]=_glMultiDrawElementsANGLE;_glMultiDrawElementsANGLE.sig="viiiii";function _glMultiDrawElementsWEBGL(mode,counts,type,offsets,drawcount){GLctx.multiDrawWebgl["multiDrawElementsWEBGL"](mode,HEAP32,counts>>2,type,HEAP32,offsets>>2,drawcount)}Module["_glMultiDrawElementsWEBGL"]=_glMultiDrawElementsWEBGL;_glMultiDrawElementsWEBGL.sig="viiiii";function _glMultiDrawElementsInstancedANGLE(mode,counts,type,offsets,instanceCounts,drawcount){GLctx.multiDrawWebgl["multiDrawElementsInstancedWEBGL"](mode,HEAP32,counts>>2,type,HEAP32,offsets>>2,HEAP32,instanceCounts>>2,drawcount)}Module["_glMultiDrawElementsInstancedANGLE"]=_glMultiDrawElementsInstancedANGLE;_glMultiDrawElementsInstancedANGLE.sig="viiiiii";function _glMultiDrawElementsInstancedWEBGL(mode,counts,type,offsets,instanceCounts,drawcount){GLctx.multiDrawWebgl["multiDrawElementsInstancedWEBGL"](mode,HEAP32,counts>>2,type,HEAP32,offsets>>2,HEAP32,instanceCounts>>2,drawcount)}Module["_glMultiDrawElementsInstancedWEBGL"]=_glMultiDrawElementsInstancedWEBGL;_glMultiDrawElementsInstancedWEBGL.sig="viiiiii";function _glFinish(){GLctx["finish"]()}Module["_glFinish"]=_glFinish;_glFinish.sig="v";function _glFlush(){GLctx["flush"]()}Module["_glFlush"]=_glFlush;_glFlush.sig="v";function _glClearDepth(x0){GLctx["clearDepth"](x0)}Module["_glClearDepth"]=_glClearDepth;_glClearDepth.sig="vi";function _glClearDepthf(x0){GLctx["clearDepth"](x0)}Module["_glClearDepthf"]=_glClearDepthf;_glClearDepthf.sig="vi";function _glDepthFunc(x0){GLctx["depthFunc"](x0)}Module["_glDepthFunc"]=_glDepthFunc;_glDepthFunc.sig="vi";function _glEnable(x0){GLctx["enable"](x0)}Module["_glEnable"]=_glEnable;_glEnable.sig="vi";function _glDisable(x0){GLctx["disable"](x0)}Module["_glDisable"]=_glDisable;_glDisable.sig="vi";function _glFrontFace(x0){GLctx["frontFace"](x0)}Module["_glFrontFace"]=_glFrontFace;_glFrontFace.sig="vi";function _glCullFace(x0){GLctx["cullFace"](x0)}Module["_glCullFace"]=_glCullFace;_glCullFace.sig="vi";function _glClear(x0){GLctx["clear"](x0)}Module["_glClear"]=_glClear;_glClear.sig="vi";function _glLineWidth(x0){GLctx["lineWidth"](x0)}Module["_glLineWidth"]=_glLineWidth;_glLineWidth.sig="vi";function 
_glClearStencil(x0){GLctx["clearStencil"](x0)}Module["_glClearStencil"]=_glClearStencil;_glClearStencil.sig="vi";function _glStencilMask(x0){GLctx["stencilMask"](x0)}Module["_glStencilMask"]=_glStencilMask;_glStencilMask.sig="vi";function _glCheckFramebufferStatus(x0){return GLctx["checkFramebufferStatus"](x0)}Module["_glCheckFramebufferStatus"]=_glCheckFramebufferStatus;_glCheckFramebufferStatus.sig="ii";function _glGenerateMipmap(x0){GLctx["generateMipmap"](x0)}Module["_glGenerateMipmap"]=_glGenerateMipmap;_glGenerateMipmap.sig="vi";function _glActiveTexture(x0){GLctx["activeTexture"](x0)}Module["_glActiveTexture"]=_glActiveTexture;_glActiveTexture.sig="vi";function _glBlendEquation(x0){GLctx["blendEquation"](x0)}Module["_glBlendEquation"]=_glBlendEquation;_glBlendEquation.sig="vi";function _glIsEnabled(x0){return GLctx["isEnabled"](x0)}Module["_glIsEnabled"]=_glIsEnabled;_glIsEnabled.sig="ii";function _glBlendFunc(x0,x1){GLctx["blendFunc"](x0,x1)}Module["_glBlendFunc"]=_glBlendFunc;_glBlendFunc.sig="vii";function _glBlendEquationSeparate(x0,x1){GLctx["blendEquationSeparate"](x0,x1)}Module["_glBlendEquationSeparate"]=_glBlendEquationSeparate;_glBlendEquationSeparate.sig="vii";function _glDepthRange(x0,x1){GLctx["depthRange"](x0,x1)}Module["_glDepthRange"]=_glDepthRange;_glDepthRange.sig="vii";function _glDepthRangef(x0,x1){GLctx["depthRange"](x0,x1)}Module["_glDepthRangef"]=_glDepthRangef;_glDepthRangef.sig="vii";function _glStencilMaskSeparate(x0,x1){GLctx["stencilMaskSeparate"](x0,x1)}Module["_glStencilMaskSeparate"]=_glStencilMaskSeparate;_glStencilMaskSeparate.sig="vii";function _glHint(x0,x1){GLctx["hint"](x0,x1)}Module["_glHint"]=_glHint;_glHint.sig="vii";function _glPolygonOffset(x0,x1){GLctx["polygonOffset"](x0,x1)}Module["_glPolygonOffset"]=_glPolygonOffset;_glPolygonOffset.sig="vii";function _glVertexAttrib1f(x0,x1){GLctx["vertexAttrib1f"](x0,x1)}Module["_glVertexAttrib1f"]=_glVertexAttrib1f;_glVertexAttrib1f.sig="vii";function _glTexParameteri(x0,x1,x2){GLctx["texParameteri"](x0,x1,x2)}Module["_glTexParameteri"]=_glTexParameteri;_glTexParameteri.sig="viii";function _glTexParameterf(x0,x1,x2){GLctx["texParameterf"](x0,x1,x2)}Module["_glTexParameterf"]=_glTexParameterf;_glTexParameterf.sig="viii";function _glVertexAttrib2f(x0,x1,x2){GLctx["vertexAttrib2f"](x0,x1,x2)}Module["_glVertexAttrib2f"]=_glVertexAttrib2f;_glVertexAttrib2f.sig="viii";function _glStencilFunc(x0,x1,x2){GLctx["stencilFunc"](x0,x1,x2)}Module["_glStencilFunc"]=_glStencilFunc;_glStencilFunc.sig="viii";function _glStencilOp(x0,x1,x2){GLctx["stencilOp"](x0,x1,x2)}Module["_glStencilOp"]=_glStencilOp;_glStencilOp.sig="viii";function _glViewport(x0,x1,x2,x3){GLctx["viewport"](x0,x1,x2,x3)}Module["_glViewport"]=_glViewport;_glViewport.sig="viiii";function _glClearColor(x0,x1,x2,x3){GLctx["clearColor"](x0,x1,x2,x3)}Module["_glClearColor"]=_glClearColor;_glClearColor.sig="viiii";function _glScissor(x0,x1,x2,x3){GLctx["scissor"](x0,x1,x2,x3)}Module["_glScissor"]=_glScissor;_glScissor.sig="viiii";function _glVertexAttrib3f(x0,x1,x2,x3){GLctx["vertexAttrib3f"](x0,x1,x2,x3)}Module["_glVertexAttrib3f"]=_glVertexAttrib3f;_glVertexAttrib3f.sig="viiii";function _glRenderbufferStorage(x0,x1,x2,x3){GLctx["renderbufferStorage"](x0,x1,x2,x3)}Module["_glRenderbufferStorage"]=_glRenderbufferStorage;_glRenderbufferStorage.sig="viiii";function _glBlendFuncSeparate(x0,x1,x2,x3){GLctx["blendFuncSeparate"](x0,x1,x2,x3)}Module["_glBlendFuncSeparate"]=_glBlendFuncSeparate;_glBlendFuncSeparate.sig="viiii";function 
_glBlendColor(x0,x1,x2,x3){GLctx["blendColor"](x0,x1,x2,x3)}Module["_glBlendColor"]=_glBlendColor;_glBlendColor.sig="vffff";function _glStencilFuncSeparate(x0,x1,x2,x3){GLctx["stencilFuncSeparate"](x0,x1,x2,x3)}Module["_glStencilFuncSeparate"]=_glStencilFuncSeparate;_glStencilFuncSeparate.sig="viiii";function _glStencilOpSeparate(x0,x1,x2,x3){GLctx["stencilOpSeparate"](x0,x1,x2,x3)}Module["_glStencilOpSeparate"]=_glStencilOpSeparate;_glStencilOpSeparate.sig="viiii";function _glVertexAttrib4f(x0,x1,x2,x3,x4){GLctx["vertexAttrib4f"](x0,x1,x2,x3,x4)}Module["_glVertexAttrib4f"]=_glVertexAttrib4f;_glVertexAttrib4f.sig="viiiii";function _glCopyTexImage2D(x0,x1,x2,x3,x4,x5,x6,x7){GLctx["copyTexImage2D"](x0,x1,x2,x3,x4,x5,x6,x7)}Module["_glCopyTexImage2D"]=_glCopyTexImage2D;_glCopyTexImage2D.sig="viiiiiiii";function _glCopyTexSubImage2D(x0,x1,x2,x3,x4,x5,x6,x7){GLctx["copyTexSubImage2D"](x0,x1,x2,x3,x4,x5,x6,x7)}Module["_glCopyTexSubImage2D"]=_glCopyTexSubImage2D;_glCopyTexSubImage2D.sig="viiiiiiii";function _emscripten_glGenVertexArrays(n,arrays){__glGenObject(n,arrays,"createVertexArray",GL.vaos)}Module["_emscripten_glGenVertexArrays"]=_emscripten_glGenVertexArrays;_emscripten_glGenVertexArrays.sig="vii";function _emscripten_glDeleteVertexArrays(n,vaos){for(var i=0;i>2];GLctx["deleteVertexArray"](GL.vaos[id]);GL.vaos[id]=null}}Module["_emscripten_glDeleteVertexArrays"]=_emscripten_glDeleteVertexArrays;_emscripten_glDeleteVertexArrays.sig="vii";function _emscripten_glBindVertexArray(vao){GLctx["bindVertexArray"](GL.vaos[vao])}Module["_emscripten_glBindVertexArray"]=_emscripten_glBindVertexArray;_emscripten_glBindVertexArray.sig="vi";function _emscripten_glIsVertexArray(array){var vao=GL.vaos[array];if(!vao)return 0;return GLctx["isVertexArray"](vao)}Module["_emscripten_glIsVertexArray"]=_emscripten_glIsVertexArray;_emscripten_glIsVertexArray.sig="ii";function _emscripten_glVertexPointer(){throw"Legacy GL function (glVertexPointer) called. If you want legacy GL emulation, you need to compile with -s LEGACY_GL_EMULATION=1 to enable legacy GL emulation."}Module["_emscripten_glVertexPointer"]=_emscripten_glVertexPointer;function _emscripten_glMatrixMode(){throw"Legacy GL function (glMatrixMode) called. If you want legacy GL emulation, you need to compile with -s LEGACY_GL_EMULATION=1 to enable legacy GL emulation."}Module["_emscripten_glMatrixMode"]=_emscripten_glMatrixMode;function _emscripten_glBegin(){throw"Legacy GL function (glBegin) called. If you want legacy GL emulation, you need to compile with -s LEGACY_GL_EMULATION=1 to enable legacy GL emulation."}Module["_emscripten_glBegin"]=_emscripten_glBegin;function _emscripten_glLoadIdentity(){throw"Legacy GL function (glLoadIdentity) called. 
If you want legacy GL emulation, you need to compile with -s LEGACY_GL_EMULATION=1 to enable legacy GL emulation."}Module["_emscripten_glLoadIdentity"]=_emscripten_glLoadIdentity;function _emscripten_glVertexAttribDivisor(index,divisor){GLctx["vertexAttribDivisor"](index,divisor)}Module["_emscripten_glVertexAttribDivisor"]=_emscripten_glVertexAttribDivisor;_emscripten_glVertexAttribDivisor.sig="vii";function _emscripten_glDrawArraysInstanced(mode,first,count,primcount){GLctx["drawArraysInstanced"](mode,first,count,primcount)}Module["_emscripten_glDrawArraysInstanced"]=_emscripten_glDrawArraysInstanced;_emscripten_glDrawArraysInstanced.sig="viiii";function _emscripten_glDrawElementsInstanced(mode,count,type,indices,primcount){GLctx["drawElementsInstanced"](mode,count,type,indices,primcount)}Module["_emscripten_glDrawElementsInstanced"]=_emscripten_glDrawElementsInstanced;_emscripten_glDrawElementsInstanced.sig="viiiii";function _emscripten_glVertexAttribDivisorNV(index,divisor){GLctx["vertexAttribDivisor"](index,divisor)}Module["_emscripten_glVertexAttribDivisorNV"]=_emscripten_glVertexAttribDivisorNV;_emscripten_glVertexAttribDivisorNV.sig="vii";function _emscripten_glDrawArraysInstancedNV(mode,first,count,primcount){GLctx["drawArraysInstanced"](mode,first,count,primcount)}Module["_emscripten_glDrawArraysInstancedNV"]=_emscripten_glDrawArraysInstancedNV;_emscripten_glDrawArraysInstancedNV.sig="viiii";function _emscripten_glDrawElementsInstancedNV(mode,count,type,indices,primcount){GLctx["drawElementsInstanced"](mode,count,type,indices,primcount)}Module["_emscripten_glDrawElementsInstancedNV"]=_emscripten_glDrawElementsInstancedNV;_emscripten_glDrawElementsInstancedNV.sig="viiiii";function _emscripten_glVertexAttribDivisorEXT(index,divisor){GLctx["vertexAttribDivisor"](index,divisor)}Module["_emscripten_glVertexAttribDivisorEXT"]=_emscripten_glVertexAttribDivisorEXT;_emscripten_glVertexAttribDivisorEXT.sig="vii";function _emscripten_glDrawArraysInstancedEXT(mode,first,count,primcount){GLctx["drawArraysInstanced"](mode,first,count,primcount)}Module["_emscripten_glDrawArraysInstancedEXT"]=_emscripten_glDrawArraysInstancedEXT;_emscripten_glDrawArraysInstancedEXT.sig="viiii";function _emscripten_glDrawElementsInstancedEXT(mode,count,type,indices,primcount){GLctx["drawElementsInstanced"](mode,count,type,indices,primcount)}Module["_emscripten_glDrawElementsInstancedEXT"]=_emscripten_glDrawElementsInstancedEXT;_emscripten_glDrawElementsInstancedEXT.sig="viiiii";function _emscripten_glVertexAttribDivisorARB(index,divisor){GLctx["vertexAttribDivisor"](index,divisor)}Module["_emscripten_glVertexAttribDivisorARB"]=_emscripten_glVertexAttribDivisorARB;_emscripten_glVertexAttribDivisorARB.sig="vii";function _emscripten_glDrawArraysInstancedARB(mode,first,count,primcount){GLctx["drawArraysInstanced"](mode,first,count,primcount)}Module["_emscripten_glDrawArraysInstancedARB"]=_emscripten_glDrawArraysInstancedARB;_emscripten_glDrawArraysInstancedARB.sig="viiii";function _emscripten_glDrawElementsInstancedARB(mode,count,type,indices,primcount){GLctx["drawElementsInstanced"](mode,count,type,indices,primcount)}Module["_emscripten_glDrawElementsInstancedARB"]=_emscripten_glDrawElementsInstancedARB;_emscripten_glDrawElementsInstancedARB.sig="viiiii";function _emscripten_glDrawBuffers(n,bufs){var bufArray=tempFixedLengthArray[n];for(var i=0;i>2]}GLctx["drawBuffers"](bufArray)}Module["_emscripten_glDrawBuffers"]=_emscripten_glDrawBuffers;_emscripten_glDrawBuffers.sig="vii";function 
_emscripten_glDrawBuffersEXT(n,bufs){var bufArray=tempFixedLengthArray[n];for(var i=0;i>2]}GLctx["drawBuffers"](bufArray)}Module["_emscripten_glDrawBuffersEXT"]=_emscripten_glDrawBuffersEXT;_emscripten_glDrawBuffersEXT.sig="vii";function _emscripten_glMultiDrawArrays(mode,firsts,counts,drawcount){GLctx.multiDrawWebgl["multiDrawArraysWEBGL"](mode,HEAP32,firsts>>2,HEAP32,counts>>2,drawcount)}Module["_emscripten_glMultiDrawArrays"]=_emscripten_glMultiDrawArrays;_emscripten_glMultiDrawArrays.sig="viiii";function _emscripten_glMultiDrawArraysANGLE(mode,firsts,counts,drawcount){GLctx.multiDrawWebgl["multiDrawArraysWEBGL"](mode,HEAP32,firsts>>2,HEAP32,counts>>2,drawcount)}Module["_emscripten_glMultiDrawArraysANGLE"]=_emscripten_glMultiDrawArraysANGLE;_emscripten_glMultiDrawArraysANGLE.sig="viiii";function _emscripten_glMultiDrawArraysWEBGL(mode,firsts,counts,drawcount){GLctx.multiDrawWebgl["multiDrawArraysWEBGL"](mode,HEAP32,firsts>>2,HEAP32,counts>>2,drawcount)}Module["_emscripten_glMultiDrawArraysWEBGL"]=_emscripten_glMultiDrawArraysWEBGL;_emscripten_glMultiDrawArraysWEBGL.sig="viiii";function _emscripten_glMultiDrawArraysInstancedANGLE(mode,firsts,counts,instanceCounts,drawcount){GLctx.multiDrawWebgl["multiDrawArraysInstancedWEBGL"](mode,HEAP32,firsts>>2,HEAP32,counts>>2,HEAP32,instanceCounts>>2,drawcount)}Module["_emscripten_glMultiDrawArraysInstancedANGLE"]=_emscripten_glMultiDrawArraysInstancedANGLE;_emscripten_glMultiDrawArraysInstancedANGLE.sig="viiiii";function _emscripten_glMultiDrawArraysInstancedWEBGL(mode,firsts,counts,instanceCounts,drawcount){GLctx.multiDrawWebgl["multiDrawArraysInstancedWEBGL"](mode,HEAP32,firsts>>2,HEAP32,counts>>2,HEAP32,instanceCounts>>2,drawcount)}Module["_emscripten_glMultiDrawArraysInstancedWEBGL"]=_emscripten_glMultiDrawArraysInstancedWEBGL;_emscripten_glMultiDrawArraysInstancedWEBGL.sig="viiiii";function _emscripten_glMultiDrawElements(mode,counts,type,offsets,drawcount){GLctx.multiDrawWebgl["multiDrawElementsWEBGL"](mode,HEAP32,counts>>2,type,HEAP32,offsets>>2,drawcount)}Module["_emscripten_glMultiDrawElements"]=_emscripten_glMultiDrawElements;_emscripten_glMultiDrawElements.sig="viiiii";function _emscripten_glMultiDrawElementsANGLE(mode,counts,type,offsets,drawcount){GLctx.multiDrawWebgl["multiDrawElementsWEBGL"](mode,HEAP32,counts>>2,type,HEAP32,offsets>>2,drawcount)}Module["_emscripten_glMultiDrawElementsANGLE"]=_emscripten_glMultiDrawElementsANGLE;_emscripten_glMultiDrawElementsANGLE.sig="viiiii";function _emscripten_glMultiDrawElementsWEBGL(mode,counts,type,offsets,drawcount){GLctx.multiDrawWebgl["multiDrawElementsWEBGL"](mode,HEAP32,counts>>2,type,HEAP32,offsets>>2,drawcount)}Module["_emscripten_glMultiDrawElementsWEBGL"]=_emscripten_glMultiDrawElementsWEBGL;_emscripten_glMultiDrawElementsWEBGL.sig="viiiii";function _emscripten_glMultiDrawElementsInstancedANGLE(mode,counts,type,offsets,instanceCounts,drawcount){GLctx.multiDrawWebgl["multiDrawElementsInstancedWEBGL"](mode,HEAP32,counts>>2,type,HEAP32,offsets>>2,HEAP32,instanceCounts>>2,drawcount)}Module["_emscripten_glMultiDrawElementsInstancedANGLE"]=_emscripten_glMultiDrawElementsInstancedANGLE;_emscripten_glMultiDrawElementsInstancedANGLE.sig="viiiiii";function 
_emscripten_glMultiDrawElementsInstancedWEBGL(mode,counts,type,offsets,instanceCounts,drawcount){GLctx.multiDrawWebgl["multiDrawElementsInstancedWEBGL"](mode,HEAP32,counts>>2,type,HEAP32,offsets>>2,HEAP32,instanceCounts>>2,drawcount)}Module["_emscripten_glMultiDrawElementsInstancedWEBGL"]=_emscripten_glMultiDrawElementsInstancedWEBGL;_emscripten_glMultiDrawElementsInstancedWEBGL.sig="viiiiii";function _emscripten_glClearDepth(x0){GLctx["clearDepth"](x0)}Module["_emscripten_glClearDepth"]=_emscripten_glClearDepth;_emscripten_glClearDepth.sig="vi";function _emscripten_glDepthRange(x0,x1){GLctx["depthRange"](x0,x1)}Module["_emscripten_glDepthRange"]=_emscripten_glDepthRange;_emscripten_glDepthRange.sig="vii";function writeGLArray(arr,dst,dstLength,heapType){var len=arr.length;var writeLength=dstLength>2)+i]=arr[i]}return len}Module["writeGLArray"]=writeGLArray;function _emscripten_webgl_init_context_attributes(attributes){var a=attributes>>2;for(var i=0;i<56>>2;++i){HEAP32[a+i]=0}HEAP32[a+(0>>2)]=HEAP32[a+(4>>2)]=HEAP32[a+(12>>2)]=HEAP32[a+(16>>2)]=HEAP32[a+(32>>2)]=HEAP32[a+(40>>2)]=1}Module["_emscripten_webgl_init_context_attributes"]=_emscripten_webgl_init_context_attributes;var __emscripten_webgl_power_preferences=["default","low-power","high-performance"];Module["__emscripten_webgl_power_preferences"]=__emscripten_webgl_power_preferences;function _emscripten_webgl_do_create_context(target,attributes){var a=attributes>>2;var powerPreference=HEAP32[a+(24>>2)];var contextAttributes={"alpha":!!HEAP32[a+(0>>2)],"depth":!!HEAP32[a+(4>>2)],"stencil":!!HEAP32[a+(8>>2)],"antialias":!!HEAP32[a+(12>>2)],"premultipliedAlpha":!!HEAP32[a+(16>>2)],"preserveDrawingBuffer":!!HEAP32[a+(20>>2)],"powerPreference":__emscripten_webgl_power_preferences[powerPreference],"failIfMajorPerformanceCaveat":!!HEAP32[a+(28>>2)],majorVersion:HEAP32[a+(32>>2)],minorVersion:HEAP32[a+(36>>2)],enableExtensionsByDefault:HEAP32[a+(40>>2)],explicitSwapControl:HEAP32[a+(44>>2)],proxyContextToMainThread:HEAP32[a+(48>>2)],renderViaOffscreenBackBuffer:HEAP32[a+(52>>2)]};var canvas=findCanvasEventTarget(target);if(!canvas){return 0}if(contextAttributes.explicitSwapControl){return 0}var contextHandle=GL.createContext(canvas,contextAttributes);return contextHandle}Module["_emscripten_webgl_do_create_context"]=_emscripten_webgl_do_create_context;_emscripten_webgl_do_create_context.sig="iii";function _emscripten_webgl_create_context(a0,a1){return _emscripten_webgl_do_create_context(a0,a1)}Module["_emscripten_webgl_create_context"]=_emscripten_webgl_create_context;_emscripten_webgl_create_context.sig="iii";function _emscripten_webgl_do_get_current_context(){return GL.currentContext?GL.currentContext.handle:0}Module["_emscripten_webgl_do_get_current_context"]=_emscripten_webgl_do_get_current_context;_emscripten_webgl_do_get_current_context.sig="i";function _emscripten_webgl_get_current_context(){return _emscripten_webgl_do_get_current_context()}Module["_emscripten_webgl_get_current_context"]=_emscripten_webgl_get_current_context;_emscripten_webgl_get_current_context.sig="i";function _emscripten_webgl_do_commit_frame(){if(!GL.currentContext||!GL.currentContext.GLctx){return-3}if(!GL.currentContext.attributes.explicitSwapControl){return-3}return 0}Module["_emscripten_webgl_do_commit_frame"]=_emscripten_webgl_do_commit_frame;_emscripten_webgl_do_commit_frame.sig="i";function _emscripten_webgl_commit_frame(){return 
_emscripten_webgl_do_commit_frame()}Module["_emscripten_webgl_commit_frame"]=_emscripten_webgl_commit_frame;_emscripten_webgl_commit_frame.sig="i";function _emscripten_webgl_make_context_current(contextHandle){var success=GL.makeContextCurrent(contextHandle);return success?0:-5}Module["_emscripten_webgl_make_context_current"]=_emscripten_webgl_make_context_current;function _emscripten_webgl_get_drawing_buffer_size(contextHandle,width,height){var GLContext=GL.getContext(contextHandle);if(!GLContext||!GLContext.GLctx||!width||!height){return-5}HEAP32[width>>2]=GLContext.GLctx.drawingBufferWidth;HEAP32[height>>2]=GLContext.GLctx.drawingBufferHeight;return 0}Module["_emscripten_webgl_get_drawing_buffer_size"]=_emscripten_webgl_get_drawing_buffer_size;_emscripten_webgl_get_drawing_buffer_size.sig="iiii";function _emscripten_webgl_get_context_attributes(c,a){if(!a)return-5;c=GL.contexts[c];if(!c)return-3;var t=c.GLctx;if(!t)return-3;t=t.getContextAttributes();HEAP32[a>>2]=t.alpha;HEAP32[a+4>>2]=t.depth;HEAP32[a+8>>2]=t.stencil;HEAP32[a+12>>2]=t.antialias;HEAP32[a+16>>2]=t.premultipliedAlpha;HEAP32[a+20>>2]=t.preserveDrawingBuffer;var power=t["powerPreference"]&&__emscripten_webgl_power_preferences.indexOf(t["powerPreference"]);HEAP32[a+24>>2]=power;HEAP32[a+28>>2]=t.failIfMajorPerformanceCaveat;HEAP32[a+32>>2]=c.version;HEAP32[a+36>>2]=0;HEAP32[a+40>>2]=c.attributes.enableExtensionsByDefault;return 0}Module["_emscripten_webgl_get_context_attributes"]=_emscripten_webgl_get_context_attributes;_emscripten_webgl_get_context_attributes.sig="iii";function _emscripten_webgl_destroy_context(contextHandle){if(GL.currentContext==contextHandle)GL.currentContext=0;GL.deleteContext(contextHandle)}Module["_emscripten_webgl_destroy_context"]=_emscripten_webgl_destroy_context;_emscripten_webgl_destroy_context.sig="vi";function _emscripten_webgl_destroy_context_before_on_calling_thread(contextHandle){if(_emscripten_webgl_get_current_context()==contextHandle)_emscripten_webgl_make_context_current(0)}Module["_emscripten_webgl_destroy_context_before_on_calling_thread"]=_emscripten_webgl_destroy_context_before_on_calling_thread;function _emscripten_webgl_enable_extension(contextHandle,extension){var context=GL.getContext(contextHandle);var extString=UTF8ToString(extension);if(extString.startsWith("GL_"))extString=extString.substr(3);if(extString=="ANGLE_instanced_arrays")__webgl_enable_ANGLE_instanced_arrays(GLctx);if(extString=="OES_vertex_array_object")__webgl_enable_OES_vertex_array_object(GLctx);if(extString=="WEBGL_draw_buffers")__webgl_enable_WEBGL_draw_buffers(GLctx);if(extString=="WEBGL_multi_draw")__webgl_enable_WEBGL_multi_draw(GLctx);var ext=context.GLctx.getExtension(extString);return!!ext}Module["_emscripten_webgl_enable_extension"]=_emscripten_webgl_enable_extension;_emscripten_webgl_enable_extension.sig="iii";function _emscripten_supports_offscreencanvas(){return 0}Module["_emscripten_supports_offscreencanvas"]=_emscripten_supports_offscreencanvas;function __registerWebGlEventCallback(target,userData,useCapture,callbackfunc,eventTypeId,eventTypeString,targetThread){var webGlEventHandlerFunc=function(ev){var e=ev||event;if(wasmTable.get(callbackfunc)(eventTypeId,0,userData))e.preventDefault()};var eventHandler={target:findEventTarget(target),eventTypeString:eventTypeString,callbackfunc:callbackfunc,handlerFunc:webGlEventHandlerFunc,useCapture:useCapture};JSEvents.registerOrRemoveHandler(eventHandler)}Module["__registerWebGlEventCallback"]=__registerWebGlEventCallback;function 
_emscripten_set_webglcontextlost_callback_on_thread(target,userData,useCapture,callbackfunc,targetThread){__registerWebGlEventCallback(target,userData,useCapture,callbackfunc,31,"webglcontextlost",targetThread);return 0}Module["_emscripten_set_webglcontextlost_callback_on_thread"]=_emscripten_set_webglcontextlost_callback_on_thread;_emscripten_set_webglcontextlost_callback_on_thread.sig="iiiiii";function _emscripten_set_webglcontextrestored_callback_on_thread(target,userData,useCapture,callbackfunc,targetThread){__registerWebGlEventCallback(target,userData,useCapture,callbackfunc,32,"webglcontextrestored",targetThread);return 0}Module["_emscripten_set_webglcontextrestored_callback_on_thread"]=_emscripten_set_webglcontextrestored_callback_on_thread;_emscripten_set_webglcontextrestored_callback_on_thread.sig="iiiiii";function _emscripten_is_webgl_context_lost(contextHandle){return!GL.contexts[contextHandle]||GL.contexts[contextHandle].GLctx.isContextLost()}Module["_emscripten_is_webgl_context_lost"]=_emscripten_is_webgl_context_lost;_emscripten_is_webgl_context_lost.sig="ii";function _emscripten_webgl_get_supported_extensions(){return stringToNewUTF8(GLctx.getSupportedExtensions().join(" "))}Module["_emscripten_webgl_get_supported_extensions"]=_emscripten_webgl_get_supported_extensions;_emscripten_webgl_get_supported_extensions.sig="i";function _emscripten_webgl_get_program_parameter_d(program,param){return GLctx.getProgramParameter(GL.programs[program],param)}Module["_emscripten_webgl_get_program_parameter_d"]=_emscripten_webgl_get_program_parameter_d;_emscripten_webgl_get_program_parameter_d.sig="fii";function _emscripten_webgl_get_program_info_log_utf8(program){return stringToNewUTF8(GLctx.getProgramInfoLog(GL.programs[program]))}Module["_emscripten_webgl_get_program_info_log_utf8"]=_emscripten_webgl_get_program_info_log_utf8;_emscripten_webgl_get_program_info_log_utf8.sig="ii";function _emscripten_webgl_get_shader_parameter_d(shader,param){return GLctx.getShaderParameter(GL.shaders[shader],param)}Module["_emscripten_webgl_get_shader_parameter_d"]=_emscripten_webgl_get_shader_parameter_d;_emscripten_webgl_get_shader_parameter_d.sig="fii";function _emscripten_webgl_get_shader_info_log_utf8(shader){return stringToNewUTF8(GLctx.getShaderInfoLog(GL.shaders[shader]))}Module["_emscripten_webgl_get_shader_info_log_utf8"]=_emscripten_webgl_get_shader_info_log_utf8;_emscripten_webgl_get_shader_info_log_utf8.sig="ii";function _emscripten_webgl_get_shader_source_utf8(shader){return stringToNewUTF8(GLctx.getShaderSource(GL.shaders[shader]))}Module["_emscripten_webgl_get_shader_source_utf8"]=_emscripten_webgl_get_shader_source_utf8;_emscripten_webgl_get_shader_source_utf8.sig="ii";function _emscripten_webgl_get_vertex_attrib_d(index,param){return GLctx.getVertexAttrib(index,param)}Module["_emscripten_webgl_get_vertex_attrib_d"]=_emscripten_webgl_get_vertex_attrib_d;_emscripten_webgl_get_vertex_attrib_d.sig="iii";function _emscripten_webgl_get_vertex_attrib_o(index,param){var obj=GLctx.getVertexAttrib(index,param);return obj&&obj.name}Module["_emscripten_webgl_get_vertex_attrib_o"]=_emscripten_webgl_get_vertex_attrib_o;_emscripten_webgl_get_vertex_attrib_o.sig="iii";function _emscripten_webgl_get_vertex_attrib_v(index,param,dst,dstLength,dstType){return writeGLArray(GLctx.getVertexAttrib(index,param),dst,dstLength,dstType)}Module["_emscripten_webgl_get_vertex_attrib_v"]=_emscripten_webgl_get_vertex_attrib_v;_emscripten_webgl_get_vertex_attrib_v.sig="iiiiii";function 
_emscripten_webgl_get_uniform_d(program,location){return GLctx.getUniform(GL.programs[program],webglGetUniformLocation(location))}Module["_emscripten_webgl_get_uniform_d"]=_emscripten_webgl_get_uniform_d;_emscripten_webgl_get_uniform_d.sig="fii";function _emscripten_webgl_get_uniform_v(program,location,dst,dstLength,dstType){return writeGLArray(GLctx.getUniform(GL.programs[program],webglGetUniformLocation(location)),dst,dstLength,dstType)}Module["_emscripten_webgl_get_uniform_v"]=_emscripten_webgl_get_uniform_v;_emscripten_webgl_get_uniform_v.sig="iiiiii";function _emscripten_webgl_get_parameter_v(param,dst,dstLength,dstType){return writeGLArray(GLctx.getParameter(param),dst,dstLength,dstType)}Module["_emscripten_webgl_get_parameter_v"]=_emscripten_webgl_get_parameter_v;_emscripten_webgl_get_parameter_v.sig="iiiii";function _emscripten_webgl_get_parameter_d(param){return GLctx.getParameter(param)}Module["_emscripten_webgl_get_parameter_d"]=_emscripten_webgl_get_parameter_d;_emscripten_webgl_get_parameter_d.sig="fi";function _emscripten_webgl_get_parameter_o(param){var obj=GLctx.getParameter(param);return obj&&obj.name}Module["_emscripten_webgl_get_parameter_o"]=_emscripten_webgl_get_parameter_o;_emscripten_webgl_get_parameter_o.sig="ii";function _emscripten_webgl_get_parameter_utf8(param){return stringToNewUTF8(GLctx.getParameter(param))}Module["_emscripten_webgl_get_parameter_utf8"]=_emscripten_webgl_get_parameter_utf8;_emscripten_webgl_get_parameter_utf8.sig="ii";function _emscripten_webgl_get_parameter_i64v(param,dst){writeI53ToI64(dst,GLctx.getParameter(param))}Module["_emscripten_webgl_get_parameter_i64v"]=_emscripten_webgl_get_parameter_i64v;_emscripten_webgl_get_parameter_i64v.sig="vii";function _SDL_GetTicks(){return Date.now()-SDL.startTime|0}Module["_SDL_GetTicks"]=_SDL_GetTicks;_SDL_GetTicks.sig="i";function _SDL_LockSurface(surf){var surfData=SDL.surfaces[surf];surfData.locked++;if(surfData.locked>1)return 0;if(!surfData.buffer){surfData.buffer=_malloc(surfData.width*surfData.height*4);HEAP32[surf+20>>2]=surfData.buffer}HEAP32[surf+20>>2]=surfData.buffer;if(surf==SDL.screen&&Module.screenIsReadOnly&&surfData.image)return 0;if(SDL.defaults.discardOnLock){if(!surfData.image){surfData.image=surfData.ctx.createImageData(surfData.width,surfData.height)}if(!SDL.defaults.opaqueFrontBuffer)return}else{surfData.image=surfData.ctx.getImageData(0,0,surfData.width,surfData.height)}if(surf==SDL.screen&&SDL.defaults.opaqueFrontBuffer){var data=surfData.image.data;var num=data.length;for(var i=0;i>2],y:HEAP32[rect+4>>2],w:HEAP32[rect+8>>2],h:HEAP32[rect+12>>2]}},updateRect:function(rect,r){HEAP32[rect>>2]=r.x;HEAP32[rect+4>>2]=r.y;HEAP32[rect+8>>2]=r.w;HEAP32[rect+12>>2]=r.h},intersectionOfRects:function(first,second){var leftX=Math.max(first.x,second.x);var leftY=Math.max(first.y,second.y);var rightX=Math.min(first.x+first.w,second.x+second.w);var rightY=Math.min(first.y+first.h,second.y+second.h);return{x:leftX,y:leftY,w:Math.max(leftX,rightX)-leftX,h:Math.max(leftY,rightY)-leftY}},checkPixelFormat:function(fmt){},loadColorToCSSRGB:function(color){var rgba=HEAP32[color>>2];return"rgb("+(rgba&255)+","+(rgba>>8&255)+","+(rgba>>16&255)+")"},loadColorToCSSRGBA:function(color){var 
rgba=HEAP32[color>>2];return"rgba("+(rgba&255)+","+(rgba>>8&255)+","+(rgba>>16&255)+","+(rgba>>24&255)/255+")"},translateColorToCSSRGBA:function(rgba){return"rgba("+(rgba&255)+","+(rgba>>8&255)+","+(rgba>>16&255)+","+(rgba>>>24)/255+")"},translateRGBAToCSSRGBA:function(r,g,b,a){return"rgba("+(r&255)+","+(g&255)+","+(b&255)+","+(a&255)/255+")"},translateRGBAToColor:function(r,g,b,a){return r|g<<8|b<<16|a<<24},makeSurface:function(width,height,flags,usePageCanvas,source,rmask,gmask,bmask,amask){flags=flags||0;var is_SDL_HWSURFACE=flags&1;var is_SDL_HWPALETTE=flags&2097152;var is_SDL_OPENGL=flags&67108864;var surf=_malloc(60);var pixelFormat=_malloc(44);var bpp=is_SDL_HWPALETTE?1:4;var buffer=0;if(!is_SDL_HWSURFACE&&!is_SDL_OPENGL){buffer=_malloc(width*height*4)}HEAP32[surf>>2]=flags;HEAP32[surf+4>>2]=pixelFormat;HEAP32[surf+8>>2]=width;HEAP32[surf+12>>2]=height;HEAP32[surf+16>>2]=width*bpp;HEAP32[surf+20>>2]=buffer;HEAP32[surf+36>>2]=0;HEAP32[surf+40>>2]=0;HEAP32[surf+44>>2]=Module["canvas"].width;HEAP32[surf+48>>2]=Module["canvas"].height;HEAP32[surf+56>>2]=1;HEAP32[pixelFormat>>2]=-2042224636;HEAP32[pixelFormat+4>>2]=0;HEAP8[pixelFormat+8>>0]=bpp*8;HEAP8[pixelFormat+9>>0]=bpp;HEAP32[pixelFormat+12>>2]=rmask||255;HEAP32[pixelFormat+16>>2]=gmask||65280;HEAP32[pixelFormat+20>>2]=bmask||16711680;HEAP32[pixelFormat+24>>2]=amask||4278190080;SDL.GL=SDL.GL||is_SDL_OPENGL;var canvas;if(!usePageCanvas){if(SDL.canvasPool.length>0){canvas=SDL.canvasPool.pop()}else{canvas=document.createElement("canvas")}canvas.width=width;canvas.height=height}else{canvas=Module["canvas"]}var webGLContextAttributes={antialias:SDL.glAttributes[13]!=0&&SDL.glAttributes[14]>1,depth:SDL.glAttributes[6]>0,stencil:SDL.glAttributes[7]>0,alpha:SDL.glAttributes[3]>0};var ctx=Browser.createContext(canvas,is_SDL_OPENGL,usePageCanvas,webGLContextAttributes);SDL.surfaces[surf]={width:width,height:height,canvas:canvas,ctx:ctx,surf:surf,buffer:buffer,pixelFormat:pixelFormat,alpha:255,flags:flags,locked:0,usePageCanvas:usePageCanvas,source:source,isFlagSet:function(flag){return flags&flag}};return surf},copyIndexedColorData:function(surfData,rX,rY,rW,rH){if(!surfData.colors){return}var fullWidth=Module["canvas"].width;var fullHeight=Module["canvas"].height;var startX=rX||0;var startY=rY||0;var endX=(rW||fullWidth-startX)+startX;var endY=(rH||fullHeight-startY)+startY;var buffer=surfData.buffer;if(!surfData.image.data32){surfData.image.data32=new Uint32Array(surfData.image.data.buffer)}var data32=surfData.image.data32;var colors32=surfData.colors32;for(var y=startY;y>0]]}}},freeSurface:function(surf){var refcountPointer=surf+56;var refcount=HEAP32[refcountPointer>>2];if(refcount>1){HEAP32[refcountPointer>>2]=refcount-1;return}var info=SDL.surfaces[surf];if(!info.usePageCanvas&&info.canvas)SDL.canvasPool.push(info.canvas);if(info.buffer)_free(info.buffer);_free(info.pixelFormat);_free(surf);SDL.surfaces[surf]=null;if(surf===SDL.screen){SDL.screen=null}},blitSurface:function(src,srcrect,dst,dstrect,scale){var srcData=SDL.surfaces[src];var dstData=SDL.surfaces[dst];var sr,dr;if(srcrect){sr=SDL.loadRect(srcrect)}else{sr={x:0,y:0,w:srcData.width,h:srcData.height}}if(dstrect){dr=SDL.loadRect(dstrect)}else{dr={x:0,y:0,w:srcData.width,h:srcData.height}}if(dstData.clipRect){var widthScale=!scale||sr.w===0?1:sr.w/dr.w;var heightScale=!scale||sr.h===0?1:sr.h/dr.h;dr=SDL.intersectionOfRects(dstData.clipRect,dr);sr.w=dr.w*widthScale;sr.h=dr.h*heightScale;if(dstrect){SDL.updateRect(dstrect,dr)}}var 
blitw,blith;if(scale){blitw=dr.w;blith=dr.h}else{blitw=sr.w;blith=sr.h}if(sr.w===0||sr.h===0||blitw===0||blith===0){return 0}var oldAlpha=dstData.ctx.globalAlpha;dstData.ctx.globalAlpha=srcData.alpha/255;dstData.ctx.drawImage(srcData.canvas,sr.x,sr.y,sr.w,sr.h,dr.x,dr.y,blitw,blith);dstData.ctx.globalAlpha=oldAlpha;if(dst!=SDL.screen){warnOnce("WARNING: copying canvas data to memory for compatibility");_SDL_LockSurface(dst);dstData.locked--}return 0},downFingers:{},savedKeydown:null,receiveEvent:function(event){function unpressAllPressedKeys(){for(var code in SDL.keyboardMap){SDL.events.push({type:"keyup",keyCode:SDL.keyboardMap[code]})}}switch(event.type){case"touchstart":case"touchmove":{event.preventDefault();var touches=[];if(event.type==="touchstart"){for(var i=0;i0?Math.max(delta,1):Math.min(delta,-1);var button=delta>0?3:4;SDL.events.push({type:"mousedown",button:button,pageX:event.pageX,pageY:event.pageY});SDL.events.push({type:"mouseup",button:button,pageX:event.pageX,pageY:event.pageY});SDL.events.push({type:"wheel",deltaX:0,deltaY:delta});event.preventDefault();break;case"mousemove":if(SDL.DOMButtons[0]===1){SDL.events.push({type:"touchmove",touch:{identifier:0,deviceID:-1,pageX:event.pageX,pageY:event.pageY}})}if(Browser.pointerLock){if("mozMovementX"in event){event["movementX"]=event["mozMovementX"];event["movementY"]=event["mozMovementY"]}if(event["movementX"]==0&&event["movementY"]==0){event.preventDefault();return}}case"keydown":case"keyup":case"keypress":case"mousedown":case"mouseup":if(event.type!=="keydown"||!SDL_unicode()&&!SDL.textInput||(event.keyCode===8||event.keyCode===9)){event.preventDefault()}if(event.type=="mousedown"){SDL.DOMButtons[event.button]=1;SDL.events.push({type:"touchstart",touch:{identifier:0,deviceID:-1,pageX:event.pageX,pageY:event.pageY}})}else if(event.type=="mouseup"){if(!SDL.DOMButtons[event.button]){return}SDL.events.push({type:"touchend",touch:{identifier:0,deviceID:-1,pageX:event.pageX,pageY:event.pageY}});SDL.DOMButtons[event.button]=0}if(event.type==="keydown"||event.type==="mousedown"){SDL.canRequestFullscreen=true}else if(event.type==="keyup"||event.type==="mouseup"){if(SDL.isRequestingFullscreen){Module["requestFullscreen"](true,true);SDL.isRequestingFullscreen=false}SDL.canRequestFullscreen=false}if(event.type==="keypress"&&SDL.savedKeydown){SDL.savedKeydown.keypressCharCode=event.charCode;SDL.savedKeydown=null}else if(event.type==="keydown"){SDL.savedKeydown=event}if(event.type!=="keypress"||SDL.textInput){SDL.events.push(event)}break;case"mouseout":for(var i=0;i<3;i++){if(SDL.DOMButtons[i]){SDL.events.push({type:"mouseup",button:i,pageX:event.pageX,pageY:event.pageY});SDL.DOMButtons[i]=0}}event.preventDefault();break;case"focus":SDL.events.push(event);event.preventDefault();break;case"blur":SDL.events.push(event);unpressAllPressedKeys();event.preventDefault();break;case"visibilitychange":SDL.events.push({type:"visibilitychange",visible:!document.hidden});unpressAllPressedKeys();event.preventDefault();break;case"unload":if(Browser.mainLoop.runner){SDL.events.push(event);Browser.mainLoop.runner()}return;case"resize":SDL.events.push(event);if(event.preventDefault){event.preventDefault()}break}if(SDL.events.length>=1e4){err("SDL event queue full, dropping events");SDL.events=SDL.events.slice(0,1e4)}SDL.flushEventsToHandler();return},lookupKeyCodeForEvent:function(event){var 
code=event.keyCode;if(code>=65&&code<=90){code+=32}else{code=SDL.keyCodes[event.keyCode]||event.keyCode;if(event.location===2&&code>=(224|1<<10)&&code<=(227|1<<10)){code+=4}}return code},handleEvent:function(event){if(event.handled)return;event.handled=true;switch(event.type){case"touchstart":case"touchend":case"touchmove":{Browser.calculateMouseEvent(event);break}case"keydown":case"keyup":{var down=event.type==="keydown";var code=SDL.lookupKeyCodeForEvent(event);HEAP8[SDL.keyboardState+code>>0]=down;SDL.modState=(HEAP8[SDL.keyboardState+1248>>0]?64:0)|(HEAP8[SDL.keyboardState+1249>>0]?1:0)|(HEAP8[SDL.keyboardState+1250>>0]?256:0)|(HEAP8[SDL.keyboardState+1252>>0]?128:0)|(HEAP8[SDL.keyboardState+1253>>0]?2:0)|(HEAP8[SDL.keyboardState+1254>>0]?512:0);if(down){SDL.keyboardMap[code]=event.keyCode}else{delete SDL.keyboardMap[code]}break}case"mousedown":case"mouseup":if(event.type=="mousedown"){SDL.buttonState|=1<0){if(SDL.makeCEvent(SDL.events.shift(),ptr)!==false)return 1}return 0}else{return SDL.events.length>0}},makeCEvent:function(event,ptr){if(typeof event==="number"){_memcpy(ptr,event,28);_free(event);return}SDL.handleEvent(event);switch(event.type){case"keydown":case"keyup":{var down=event.type==="keydown";var key=SDL.lookupKeyCodeForEvent(event);var scan;if(key>=1024){scan=key-1024}else{scan=SDL.scanCodes[key]||key}HEAP32[ptr>>2]=SDL.DOMEventToSDLEvent[event.type];HEAP8[ptr+8>>0]=down?1:0;HEAP8[ptr+9>>0]=0;HEAP32[ptr+12>>2]=scan;HEAP32[ptr+16>>2]=key;HEAP16[ptr+20>>1]=SDL.modState;HEAP32[ptr+24>>2]=event.keypressCharCode||key;break}case"keypress":{HEAP32[ptr>>2]=SDL.DOMEventToSDLEvent[event.type];var cStr=intArrayFromString(String.fromCharCode(event.charCode));for(var i=0;i>0]=cStr[i]}break}case"mousedown":case"mouseup":case"mousemove":{if(event.type!="mousemove"){var down=event.type==="mousedown";HEAP32[ptr>>2]=SDL.DOMEventToSDLEvent[event.type];HEAP32[ptr+4>>2]=0;HEAP32[ptr+8>>2]=0;HEAP32[ptr+12>>2]=0;HEAP8[ptr+16>>0]=event.button+1;HEAP8[ptr+17>>0]=down?1:0;HEAP32[ptr+20>>2]=Browser.mouseX;HEAP32[ptr+24>>2]=Browser.mouseY}else{HEAP32[ptr>>2]=SDL.DOMEventToSDLEvent[event.type];HEAP32[ptr+4>>2]=0;HEAP32[ptr+8>>2]=0;HEAP32[ptr+12>>2]=0;HEAP32[ptr+16>>2]=SDL.buttonState;HEAP32[ptr+20>>2]=Browser.mouseX;HEAP32[ptr+24>>2]=Browser.mouseY;HEAP32[ptr+28>>2]=Browser.mouseMovementX;HEAP32[ptr+32>>2]=Browser.mouseMovementY}break}case"wheel":{HEAP32[ptr>>2]=SDL.DOMEventToSDLEvent[event.type];HEAP32[ptr+16>>2]=event.deltaX;HEAP32[ptr+20>>2]=event.deltaY;break}case"touchstart":case"touchend":case"touchmove":{var touch=event.touch;if(!Browser.touches[touch.identifier])break;var w=Module["canvas"].width;var h=Module["canvas"].height;var x=Browser.touches[touch.identifier].x/w;var y=Browser.touches[touch.identifier].y/h;var lx=Browser.lastTouches[touch.identifier].x/w;var ly=Browser.lastTouches[touch.identifier].y/h;var dx=x-lx;var dy=y-ly;if(touch["deviceID"]===undefined)touch.deviceID=SDL.TOUCH_DEFAULT_ID;if(dx===0&&dy===0&&event.type==="touchmove")return 
false;HEAP32[ptr>>2]=SDL.DOMEventToSDLEvent[event.type];HEAP32[ptr+4>>2]=_SDL_GetTicks();tempI64=[touch.deviceID>>>0,(tempDouble=touch.deviceID,+Math.abs(tempDouble)>=1?tempDouble>0?(Math.min(+Math.floor(tempDouble/4294967296),4294967295)|0)>>>0:~~+Math.ceil((tempDouble-+(~~tempDouble>>>0))/4294967296)>>>0:0)],HEAP32[ptr+8>>2]=tempI64[0],HEAP32[ptr+12>>2]=tempI64[1];tempI64=[touch.identifier>>>0,(tempDouble=touch.identifier,+Math.abs(tempDouble)>=1?tempDouble>0?(Math.min(+Math.floor(tempDouble/4294967296),4294967295)|0)>>>0:~~+Math.ceil((tempDouble-+(~~tempDouble>>>0))/4294967296)>>>0:0)],HEAP32[ptr+16>>2]=tempI64[0],HEAP32[ptr+20>>2]=tempI64[1];HEAPF32[ptr+24>>2]=x;HEAPF32[ptr+28>>2]=y;HEAPF32[ptr+32>>2]=dx;HEAPF32[ptr+36>>2]=dy;if(touch.force!==undefined){HEAPF32[ptr+40>>2]=touch.force}else{HEAPF32[ptr+40>>2]=event.type=="touchend"?0:1}break}case"unload":{HEAP32[ptr>>2]=SDL.DOMEventToSDLEvent[event.type];break}case"resize":{HEAP32[ptr>>2]=SDL.DOMEventToSDLEvent[event.type];HEAP32[ptr+4>>2]=event.w;HEAP32[ptr+8>>2]=event.h;break}case"joystick_button_up":case"joystick_button_down":{var state=event.type==="joystick_button_up"?0:1;HEAP32[ptr>>2]=SDL.DOMEventToSDLEvent[event.type];HEAP8[ptr+4>>0]=event.index;HEAP8[ptr+5>>0]=event.button;HEAP8[ptr+6>>0]=state;break}case"joystick_axis_motion":{HEAP32[ptr>>2]=SDL.DOMEventToSDLEvent[event.type];HEAP8[ptr+4>>0]=event.index;HEAP8[ptr+5>>0]=event.axis;HEAP32[ptr+8>>2]=SDL.joystickAxisValueConversion(event.value);break}case"focus":{var SDL_WINDOWEVENT_FOCUS_GAINED=12;HEAP32[ptr>>2]=SDL.DOMEventToSDLEvent[event.type];HEAP32[ptr+4>>2]=0;HEAP8[ptr+8>>0]=SDL_WINDOWEVENT_FOCUS_GAINED;break}case"blur":{var SDL_WINDOWEVENT_FOCUS_LOST=13;HEAP32[ptr>>2]=SDL.DOMEventToSDLEvent[event.type];HEAP32[ptr+4>>2]=0;HEAP8[ptr+8>>0]=SDL_WINDOWEVENT_FOCUS_LOST;break}case"visibilitychange":{var SDL_WINDOWEVENT_SHOWN=1;var SDL_WINDOWEVENT_HIDDEN=2;var visibilityEventID=event.visible?SDL_WINDOWEVENT_SHOWN:SDL_WINDOWEVENT_HIDDEN;HEAP32[ptr>>2]=SDL.DOMEventToSDLEvent[event.type];HEAP32[ptr+4>>2]=0;HEAP8[ptr+8>>0]=visibilityEventID;break}default:throw"Unhandled SDL event: "+event.type}},makeFontString:function(height,fontName){if(fontName.charAt(0)!="'"&&fontName.charAt(0)!='"'){fontName='"'+fontName+'"'}return height+"px "+fontName+", serif"},estimateTextWidth:function(fontData,text){var h=fontData.size;var fontString=SDL.makeFontString(h,fontData.name);var tempCtx=SDL_ttfContext();tempCtx.font=fontString;var ret=tempCtx.measureText(text).width|0;return ret},allocateChannels:function(num){if(SDL.numChannels&&SDL.numChannels>=num&&num!=0)return;SDL.numChannels=num;SDL.channels=[];for(var i=0;i>1]/32768}}else if(audio.format==8){for(var j=0;j>0];channelData[j]=(v>=0?v-128:v+128)/128}}else if(audio.format==33056){for(var j=0;j>2]}}else{throw"Invalid SDL audio format "+audio.format+"!"}}},debugSurface:function(surfData){out("dumping surface "+[surfData.surf,surfData.source,surfData.width,surfData.height]);var image=surfData.ctx.getImageData(0,0,surfData.width,surfData.height);var data=image.data;var num=Math.min(surfData.width,surfData.height);for(var i=0;i0}},queryJoysticks:function(){for(var joystick in SDL.lastJoystickState){var state=SDL.getGamepad(joystick-1);var prevState=SDL.lastJoystickState[joystick];if(typeof state==="undefined")return;if(state===null)return;if(typeof state.timestamp!=="number"||state.timestamp!==prevState.timestamp||!state.timestamp){var i;for(i=0;ideviceIndex&&deviceIndex>=0){return gamepads[deviceIndex]}return null}};Module["SDL"]=SDL;function 
SDL_unicode(){return SDL.unicode}Module["SDL_unicode"]=SDL_unicode;function _SDL_Linked_Version(){if(SDL.version===null){SDL.version=_malloc(3);HEAP8[SDL.version+0>>0]=1;HEAP8[SDL.version+1>>0]=3;HEAP8[SDL.version+2>>0]=0}return SDL.version}Module["_SDL_Linked_Version"]=_SDL_Linked_Version;_SDL_Linked_Version.sig="i";function _SDL_Init(initFlags){SDL.startTime=Date.now();SDL.initFlags=initFlags;if(!Module["doNotCaptureKeyboard"]){var keyboardListeningElement=Module["keyboardListeningElement"]||document;keyboardListeningElement.addEventListener("keydown",SDL.receiveEvent);keyboardListeningElement.addEventListener("keyup",SDL.receiveEvent);keyboardListeningElement.addEventListener("keypress",SDL.receiveEvent);window.addEventListener("focus",SDL.receiveEvent);window.addEventListener("blur",SDL.receiveEvent);document.addEventListener("visibilitychange",SDL.receiveEvent)}window.addEventListener("unload",SDL.receiveEvent);SDL.keyboardState=_malloc(65536);zeroMemory(SDL.keyboardState,65536);SDL.DOMEventToSDLEvent["keydown"]=768;SDL.DOMEventToSDLEvent["keyup"]=769;SDL.DOMEventToSDLEvent["keypress"]=771;SDL.DOMEventToSDLEvent["mousedown"]=1025;SDL.DOMEventToSDLEvent["mouseup"]=1026;SDL.DOMEventToSDLEvent["mousemove"]=1024;SDL.DOMEventToSDLEvent["wheel"]=1027;SDL.DOMEventToSDLEvent["touchstart"]=1792;SDL.DOMEventToSDLEvent["touchend"]=1793;SDL.DOMEventToSDLEvent["touchmove"]=1794;SDL.DOMEventToSDLEvent["unload"]=256;SDL.DOMEventToSDLEvent["resize"]=28673;SDL.DOMEventToSDLEvent["visibilitychange"]=512;SDL.DOMEventToSDLEvent["focus"]=512;SDL.DOMEventToSDLEvent["blur"]=512;SDL.DOMEventToSDLEvent["joystick_axis_motion"]=1536;SDL.DOMEventToSDLEvent["joystick_button_down"]=1539;SDL.DOMEventToSDLEvent["joystick_button_up"]=1540;return 0}Module["_SDL_Init"]=_SDL_Init;_SDL_Init.sig="ii";function _SDL_WasInit(){if(SDL.startTime===null){_SDL_Init()}return 1}Module["_SDL_WasInit"]=_SDL_WasInit;_SDL_WasInit.sig="i";function _SDL_GetVideoInfo(){var ret=_malloc(5*4);HEAP32[ret+0>>2]=0;HEAP32[ret+4>>2]=0;HEAP32[ret+8>>2]=0;HEAP32[ret+12>>2]=Module["canvas"].width;HEAP32[ret+16>>2]=Module["canvas"].height;return ret}Module["_SDL_GetVideoInfo"]=_SDL_GetVideoInfo;_SDL_GetVideoInfo.sig="i";function _SDL_ListModes(format,flags){return-1}Module["_SDL_ListModes"]=_SDL_ListModes;function _SDL_VideoModeOK(width,height,depth,flags){return depth}Module["_SDL_VideoModeOK"]=_SDL_VideoModeOK;function _SDL_VideoDriverName(buf,max_size){if(SDL.startTime===null){return 0}var driverName=[101,109,115,99,114,105,112,116,101,110,95,115,100,108,95,100,114,105,118,101,114];var index=0;var size=driverName.length;if(max_size<=size){size=max_size-1}while(index>0]=value;index++}HEAP8[buf+index>>0]=0;return buf}Module["_SDL_VideoDriverName"]=_SDL_VideoDriverName;_SDL_VideoDriverName.sig="iii";function _SDL_AudioDriverName(buf,max_size){return _SDL_VideoDriverName(buf,max_size)}Module["_SDL_AudioDriverName"]=_SDL_AudioDriverName;function _SDL_SetVideoMode(width,height,depth,flags){["touchstart","touchend","touchmove","mousedown","mouseup","mousemove","DOMMouseScroll","mousewheel","wheel","mouseout"].forEach(function(event){Module["canvas"].addEventListener(event,SDL.receiveEvent,true)});var 
canvas=Module["canvas"];if(width==0&&height==0){width=canvas.width;height=canvas.height}if(!SDL.addedResizeListener){SDL.addedResizeListener=true;Browser.resizeListeners.push(function(w,h){if(!SDL.settingVideoMode){SDL.receiveEvent({type:"resize",w:w,h:h})}})}SDL.settingVideoMode=true;Browser.setCanvasSize(width,height);SDL.settingVideoMode=false;if(SDL.screen){SDL.freeSurface(SDL.screen);assert(!SDL.screen)}if(SDL.GL)flags=flags|67108864;SDL.screen=SDL.makeSurface(width,height,flags,true,"screen");return SDL.screen}Module["_SDL_SetVideoMode"]=_SDL_SetVideoMode;_SDL_SetVideoMode.sig="iiiii";function _SDL_GetVideoSurface(){return SDL.screen}Module["_SDL_GetVideoSurface"]=_SDL_GetVideoSurface;_SDL_GetVideoSurface.sig="i";function _SDL_AudioQuit(){for(var i=0;i0){return}if(surfData.isFlagSet(2097152)){SDL.copyIndexedColorData(surfData)}else if(!surfData.colors){var data=surfData.image.data;var buffer=surfData.buffer;assert(buffer%4==0,"Invalid buffer offset: "+buffer);var src=buffer>>2;var dst=0;var isScreen=surf==SDL.screen;var num;if(typeof CanvasPixelArray!=="undefined"&&data instanceof CanvasPixelArray){num=data.length;while(dst>8&255;data[dst+2]=val>>16&255;data[dst+3]=isScreen?255:val>>24&255;src++;dst+=4}}else{var data32=new Uint32Array(data.buffer);if(isScreen&&SDL.defaults.opaqueFrontBuffer){num=data32.length;data32.set(HEAP32.subarray(src,src+num));var data8=new Uint8Array(data.buffer);var i=3;var j=i+4*num;if(num%8==0){while(i>0]*4;var start=base+x*4;data[start]=colors[val];data[start+1]=colors[val+1];data[start+2]=colors[val+2]}s+=width*3}}surfData.ctx.putImageData(surfData.image,0,0)}Module["_SDL_UnlockSurface"]=_SDL_UnlockSurface;_SDL_UnlockSurface.sig="vi";function _SDL_Flip(surf){}Module["_SDL_Flip"]=_SDL_Flip;function _SDL_UpdateRect(surf,x,y,w,h){}Module["_SDL_UpdateRect"]=_SDL_UpdateRect;function _SDL_UpdateRects(surf,numrects,rects){}Module["_SDL_UpdateRects"]=_SDL_UpdateRects;function _SDL_Delay(delay){if(!ENVIRONMENT_IS_WORKER)abort("SDL_Delay called on the main thread! Potential infinite loop, quitting. 
(consider building with async support like ASYNCIFY)");var now=Date.now();while(Date.now()-now>2]=65536}return SDL.keyboardState}Module["_SDL_GetKeyboardState"]=_SDL_GetKeyboardState;_SDL_GetKeyboardState.sig="ii";function _SDL_GetKeyState(){return _SDL_GetKeyboardState()}Module["_SDL_GetKeyState"]=_SDL_GetKeyState;function _SDL_GetKeyName(key){if(!SDL.keyName){SDL.keyName=allocate(intArrayFromString("unknown key"),ALLOC_NORMAL)}return SDL.keyName}Module["_SDL_GetKeyName"]=_SDL_GetKeyName;_SDL_GetKeyName.sig="ii";function _SDL_GetModState(){return SDL.modState}Module["_SDL_GetModState"]=_SDL_GetModState;_SDL_GetModState.sig="i";function _SDL_GetMouseState(x,y){if(x)HEAP32[x>>2]=Browser.mouseX;if(y)HEAP32[y>>2]=Browser.mouseY;return SDL.buttonState}Module["_SDL_GetMouseState"]=_SDL_GetMouseState;_SDL_GetMouseState.sig="iii";function _SDL_WarpMouse(x,y){return}Module["_SDL_WarpMouse"]=_SDL_WarpMouse;_SDL_WarpMouse.sig="vii";function _SDL_ShowCursor(toggle){switch(toggle){case 0:if(Browser.isFullscreen){Module["canvas"].requestPointerLock();return 0}else{return 1}break;case 1:Module["canvas"].exitPointerLock();return 1;break;case-1:return!Browser.pointerLock;break;default:out("SDL_ShowCursor called with unknown toggle parameter value: "+toggle+".");break}}Module["_SDL_ShowCursor"]=_SDL_ShowCursor;_SDL_ShowCursor.sig="ii";function _SDL_GetError(){if(!SDL.errorMessage){SDL.errorMessage=allocate(intArrayFromString("unknown SDL-emscripten error"),ALLOC_NORMAL)}return SDL.errorMessage}Module["_SDL_GetError"]=_SDL_GetError;_SDL_GetError.sig="i";function _SDL_SetError(){}Module["_SDL_SetError"]=_SDL_SetError;function _SDL_malloc(size){return _malloc(size)}Module["_SDL_malloc"]=_SDL_malloc;_SDL_malloc.sig="ii";function _SDL_free(ptr){_free(ptr)}Module["_SDL_free"]=_SDL_free;_SDL_free.sig="vi";function _SDL_CreateRGBSurface(flags,width,height,depth,rmask,gmask,bmask,amask){return SDL.makeSurface(width,height,flags,false,"CreateRGBSurface",rmask,gmask,bmask,amask)}Module["_SDL_CreateRGBSurface"]=_SDL_CreateRGBSurface;_SDL_CreateRGBSurface.sig="iiiiiiiii";function _SDL_CreateRGBSurfaceFrom(pixels,width,height,depth,pitch,rmask,gmask,bmask,amask){var surf=SDL.makeSurface(width,height,0,false,"CreateRGBSurfaceFrom",rmask,gmask,bmask,amask);if(depth!==32){out("TODO: Partially unimplemented SDL_CreateRGBSurfaceFrom called!");return surf}var data=SDL.surfaces[surf];var image=data.ctx.createImageData(width,height);var pitchOfDst=width*4;for(var row=0;row>0]}}data.ctx.putImageData(image,0,0);return surf}Module["_SDL_CreateRGBSurfaceFrom"]=_SDL_CreateRGBSurfaceFrom;_SDL_CreateRGBSurfaceFrom.sig="iiiiiiiiii";function _SDL_ConvertSurface(surf,format,flags){if(format){SDL.checkPixelFormat(format)}var oldData=SDL.surfaces[surf];var ret=SDL.makeSurface(oldData.width,oldData.height,oldData.flags,false,"copy:"+oldData.source);var newData=SDL.surfaces[ret];newData.ctx.globalCompositeOperation="copy";newData.ctx.drawImage(oldData.canvas,0,0);newData.ctx.globalCompositeOperation=oldData.ctx.globalCompositeOperation;return ret}Module["_SDL_ConvertSurface"]=_SDL_ConvertSurface;_SDL_ConvertSurface.sig="iiii";function _SDL_DisplayFormatAlpha(surf){return _SDL_ConvertSurface(surf)}Module["_SDL_DisplayFormatAlpha"]=_SDL_DisplayFormatAlpha;function _SDL_FreeSurface(surf){if(surf)SDL.freeSurface(surf)}Module["_SDL_FreeSurface"]=_SDL_FreeSurface;_SDL_FreeSurface.sig="vi";function _SDL_UpperBlit(src,srcrect,dst,dstrect){return 
SDL.blitSurface(src,srcrect,dst,dstrect,false)}Module["_SDL_UpperBlit"]=_SDL_UpperBlit;_SDL_UpperBlit.sig="iiiii";function _SDL_UpperBlitScaled(src,srcrect,dst,dstrect){return SDL.blitSurface(src,srcrect,dst,dstrect,true)}Module["_SDL_UpperBlitScaled"]=_SDL_UpperBlitScaled;_SDL_UpperBlitScaled.sig="iiiii";function _SDL_LowerBlit(a0,a1,a2,a3){return _SDL_UpperBlit(a0,a1,a2,a3)}Module["_SDL_LowerBlit"]=_SDL_LowerBlit;_SDL_LowerBlit.sig="iiiii";function _SDL_LowerBlitScaled(a0,a1,a2,a3){return _SDL_UpperBlitScaled(a0,a1,a2,a3)}Module["_SDL_LowerBlitScaled"]=_SDL_LowerBlitScaled;_SDL_LowerBlitScaled.sig="iiiii";function _SDL_GetClipRect(surf,rect){assert(rect);var surfData=SDL.surfaces[surf];var r=surfData.clipRect||{x:0,y:0,w:surfData.width,h:surfData.height};SDL.updateRect(rect,r)}Module["_SDL_GetClipRect"]=_SDL_GetClipRect;_SDL_GetClipRect.sig="vii";function _SDL_SetClipRect(surf,rect){var surfData=SDL.surfaces[surf];if(rect){surfData.clipRect=SDL.intersectionOfRects({x:0,y:0,w:surfData.width,h:surfData.height},SDL.loadRect(rect))}else{delete surfData.clipRect}}Module["_SDL_SetClipRect"]=_SDL_SetClipRect;_SDL_SetClipRect.sig="vii";function _SDL_FillRect(surf,rect,color){var surfData=SDL.surfaces[surf];assert(!surfData.locked);if(surfData.isFlagSet(2097152)){color=surfData.colors32[color]}var r=rect?SDL.loadRect(rect):{x:0,y:0,w:surfData.width,h:surfData.height};if(surfData.clipRect){r=SDL.intersectionOfRects(surfData.clipRect,r);if(rect){SDL.updateRect(rect,r)}}surfData.ctx.save();surfData.ctx.fillStyle=SDL.translateColorToCSSRGBA(color);surfData.ctx.fillRect(r.x,r.y,r.w,r.h);surfData.ctx.restore();return 0}Module["_SDL_FillRect"]=_SDL_FillRect;_SDL_FillRect.sig="iiii";function _SDL_BlitSurface(src,srcrect,dst,dstrect){return SDL.blitSurface(src,srcrect,dst,dstrect,false)}Module["_SDL_BlitSurface"]=_SDL_BlitSurface;_SDL_BlitSurface.sig="iiiii";function _SDL_BlitScaled(src,srcrect,dst,dstrect){return SDL.blitSurface(src,srcrect,dst,dstrect,true)}Module["_SDL_BlitScaled"]=_SDL_BlitScaled;_SDL_BlitScaled.sig="iiiii";function _zoomSurface(src,x,y,smooth){var srcData=SDL.surfaces[src];var w=srcData.width*x;var h=srcData.height*y;var ret=SDL.makeSurface(Math.abs(w),Math.abs(h),srcData.flags,false,"zoomSurface");var dstData=SDL.surfaces[ret];if(x>=0&&y>=0)dstData.ctx.drawImage(srcData.canvas,0,0,w,h);else{dstData.ctx.save();dstData.ctx.scale(x<0?-1:1,y<0?-1:1);dstData.ctx.drawImage(srcData.canvas,w<0?w:0,h<0?h:0,Math.abs(w),Math.abs(h));dstData.ctx.restore()}return ret}Module["_zoomSurface"]=_zoomSurface;function _rotozoomSurface(src,angle,zoom,smooth){if(angle%360===0){return _zoomSurface(src,zoom,zoom,smooth)}var srcData=SDL.surfaces[src];var w=srcData.width*zoom;var h=srcData.height*zoom;var diagonal=Math.ceil(Math.sqrt(Math.pow(w,2)+Math.pow(h,2)));var ret=SDL.makeSurface(diagonal,diagonal,srcData.flags,false,"rotozoomSurface");var dstData=SDL.surfaces[ret];dstData.ctx.translate(diagonal/2,diagonal/2);dstData.ctx.rotate(-angle*Math.PI/180);dstData.ctx.drawImage(srcData.canvas,-w/2,-h/2,w,h);return ret}Module["_rotozoomSurface"]=_rotozoomSurface;function _SDL_SetAlpha(surf,flag,alpha){var surfData=SDL.surfaces[surf];surfData.alpha=alpha;if(!(flag&65536)){surfData.alpha=255}}Module["_SDL_SetAlpha"]=_SDL_SetAlpha;_SDL_SetAlpha.sig="iiii";function _SDL_SetColorKey(surf,flag,key){warnOnce("SDL_SetColorKey is a no-op for performance reasons");return 0}Module["_SDL_SetColorKey"]=_SDL_SetColorKey;function _SDL_PollEvent(ptr){return 
SDL.pollEvent(ptr)}Module["_SDL_PollEvent"]=_SDL_PollEvent;_SDL_PollEvent.sig="ii";function _SDL_PushEvent(ptr){var copy=_malloc(28);_memcpy(copy,ptr,28);SDL.events.push(copy);return 0}Module["_SDL_PushEvent"]=_SDL_PushEvent;_SDL_PushEvent.sig="ii";function _SDL_PeepEvents(events,requestedEventCount,action,from,to){switch(action){case 2:{assert(requestedEventCount==1);var index=0;var retrievedEventCount=0;while(index>0];surfData.colors[index+1]=HEAPU8[colors+(i*4+1)>>0];surfData.colors[index+2]=HEAPU8[colors+(i*4+2)>>0];surfData.colors[index+3]=255}return 1}Module["_SDL_SetColors"]=_SDL_SetColors;_SDL_SetColors.sig="iiiii";function _SDL_SetPalette(surf,flags,colors,firstColor,nColors){return _SDL_SetColors(surf,colors,firstColor,nColors)}Module["_SDL_SetPalette"]=_SDL_SetPalette;function _SDL_MapRGB(fmt,r,g,b){SDL.checkPixelFormat(fmt);return r&255|(g&255)<<8|(b&255)<<16|4278190080}Module["_SDL_MapRGB"]=_SDL_MapRGB;_SDL_MapRGB.sig="iiiii";function _SDL_MapRGBA(fmt,r,g,b,a){SDL.checkPixelFormat(fmt);return r&255|(g&255)<<8|(b&255)<<16|(a&255)<<24}Module["_SDL_MapRGBA"]=_SDL_MapRGBA;_SDL_MapRGBA.sig="iiiiii";function _SDL_GetRGB(pixel,fmt,r,g,b){SDL.checkPixelFormat(fmt);if(r){HEAP8[r>>0]=pixel&255}if(g){HEAP8[g>>0]=pixel>>8&255}if(b){HEAP8[b>>0]=pixel>>16&255}}Module["_SDL_GetRGB"]=_SDL_GetRGB;_SDL_GetRGB.sig="viiiii";function _SDL_GetRGBA(pixel,fmt,r,g,b,a){SDL.checkPixelFormat(fmt);if(r){HEAP8[r>>0]=pixel&255}if(g){HEAP8[g>>0]=pixel>>8&255}if(b){HEAP8[b>>0]=pixel>>16&255}if(a){HEAP8[a>>0]=pixel>>24&255}}Module["_SDL_GetRGBA"]=_SDL_GetRGBA;_SDL_GetRGBA.sig="viiiiii";function _SDL_GetAppState(){var state=0;if(Browser.pointerLock){state|=1}if(document.hasFocus()){state|=2}state|=4;return state}Module["_SDL_GetAppState"]=_SDL_GetAppState;_SDL_GetAppState.sig="i";function _SDL_WM_GrabInput(){}Module["_SDL_WM_GrabInput"]=_SDL_WM_GrabInput;function _SDL_WM_ToggleFullScreen(surf){if(Browser.exitFullscreen()){return 1}else{if(!SDL.canRequestFullscreen){return 0}SDL.isRequestingFullscreen=true;return 1}}Module["_SDL_WM_ToggleFullScreen"]=_SDL_WM_ToggleFullScreen;_SDL_WM_ToggleFullScreen.sig="ii";function _IMG_Init(flags){return flags}Module["_IMG_Init"]=_IMG_Init;function _SDL_FreeRW(rwopsID){SDL.rwops[rwopsID]=null;while(SDL.rwops.length>0&&SDL.rwops[SDL.rwops.length-1]===null){SDL.rwops.pop()}}Module["_SDL_FreeRW"]=_SDL_FreeRW;_SDL_FreeRW.sig="vi";function _IMG_Load_RW(rwopsID,freeSrc){try{var cleanup=function(){if(rwops&&freeSrc)_SDL_FreeRW(rwopsID)};var addCleanup=function(func){var old=cleanup;cleanup=function added_cleanup(){old();func()}};var callStbImage=function(func,params){var x=Module["_malloc"](4);var y=Module["_malloc"](4);var comp=Module["_malloc"](4);addCleanup(function(){Module["_free"](x);Module["_free"](y);Module["_free"](comp);if(data)Module["_stbi_image_free"](data)});var data=Module["_"+func].apply(null,params.concat([x,y,comp,0]));if(!data)return null;return{rawData:true,data:data,width:HEAP32[x>>2],height:HEAP32[y>>2],size:HEAP32[x>>2]*HEAP32[y>>2]*HEAP32[comp>>2],bpp:HEAP32[comp>>2]}};var rwops=SDL.rwops[rwopsID];if(rwops===undefined){return 0}var raw;var filename=rwops.filename;if(filename===undefined){warnOnce("Only file names that have been preloaded are supported for IMG_Load_RW. 
Consider using STB_IMAGE=1 if you want synchronous image decoding (see settings.js), or package files with --use-preload-plugins");return 0}if(!raw){filename=PATH_FS.resolve(filename);raw=Module["preloadedImages"][filename];if(!raw){if(raw===null)err("Trying to reuse preloaded image, but freePreloadedMediaOnUse is set!");warnOnce("Cannot find preloaded image "+filename);warnOnce("Cannot find preloaded image "+filename+". Consider using STB_IMAGE=1 if you want synchronous image decoding (see settings.js), or package files with --use-preload-plugins");return 0}else if(Module["freePreloadedMediaOnUse"]){Module["preloadedImages"][filename]=null}}var surf=SDL.makeSurface(raw.width,raw.height,0,false,"load:"+filename);var surfData=SDL.surfaces[surf];surfData.ctx.globalCompositeOperation="copy";if(!raw.rawData){surfData.ctx.drawImage(raw,0,0,raw.width,raw.height,0,0,raw.width,raw.height)}else{var imageData=surfData.ctx.getImageData(0,0,surfData.width,surfData.height);if(raw.bpp==4){imageData.data.set(HEAPU8.subarray(raw.data,raw.data+raw.size))}else if(raw.bpp==3){var pixels=raw.size/3;var data=imageData.data;var sourcePtr=raw.data;var destPtr=0;for(var i=0;i>0];data[destPtr++]=HEAPU8[sourcePtr++>>0];data[destPtr++]=HEAPU8[sourcePtr++>>0];data[destPtr++]=255}}else if(raw.bpp==2){var pixels=raw.size;var data=imageData.data;var sourcePtr=raw.data;var destPtr=0;for(var i=0;i>0];var alpha=HEAPU8[sourcePtr++>>0];data[destPtr++]=gray;data[destPtr++]=gray;data[destPtr++]=gray;data[destPtr++]=alpha}}else if(raw.bpp==1){var pixels=raw.size;var data=imageData.data;var sourcePtr=raw.data;var destPtr=0;for(var i=0;i>0];data[destPtr++]=value;data[destPtr++]=value;data[destPtr++]=value;data[destPtr++]=255}}else{err("cannot handle bpp "+raw.bpp);return 0}surfData.ctx.putImageData(imageData,0,0)}surfData.ctx.globalCompositeOperation="source-over";_SDL_LockSurface(surf);surfData.locked--;if(SDL.GL){surfData.canvas=surfData.ctx=null}return surf}finally{cleanup()}}Module["_IMG_Load_RW"]=_IMG_Load_RW;_IMG_Load_RW.sig="iii";function _SDL_RWFromFile(_name,mode){var id=SDL.rwops.length;var name=UTF8ToString(_name);SDL.rwops.push({filename:name,mimetype:Browser.getMimetype(name)});return id}Module["_SDL_RWFromFile"]=_SDL_RWFromFile;_SDL_RWFromFile.sig="iii";function _IMG_Load(filename){var rwops=_SDL_RWFromFile(filename);var result=_IMG_Load_RW(rwops,1);return result}Module["_IMG_Load"]=_IMG_Load;_IMG_Load.sig="ii";function _SDL_LoadBMP(a0){return _IMG_Load(a0)}Module["_SDL_LoadBMP"]=_SDL_LoadBMP;_SDL_LoadBMP.sig="ii";function _SDL_LoadBMP_RW(a0,a1){return _IMG_Load_RW(a0,a1)}Module["_SDL_LoadBMP_RW"]=_SDL_LoadBMP_RW;_SDL_LoadBMP_RW.sig="iii";function _IMG_Quit(){out("IMG_Quit called (and ignored)")}Module["_IMG_Quit"]=_IMG_Quit;function _SDL_OpenAudio(desired,obtained){try{SDL.audio={freq:HEAPU32[desired>>2],format:HEAPU16[desired+4>>1],channels:HEAPU8[desired+6>>0],samples:HEAPU16[desired+8>>1],callback:HEAPU32[desired+16>>2],userdata:HEAPU32[desired+20>>2],paused:true,timer:null};if(SDL.audio.format==8){SDL.audio.silence=128}else if(SDL.audio.format==32784){SDL.audio.silence=0}else if(SDL.audio.format==33056){SDL.audio.silence=0}else{throw"Invalid SDL audio format "+SDL.audio.format+"!"}if(SDL.audio.freq<=0){throw"Unsupported sound frequency "+SDL.audio.freq+"!"}else if(SDL.audio.freq<=22050){SDL.audio.freq=22050}else if(SDL.audio.freq<=32e3){SDL.audio.freq=32e3}else if(SDL.audio.freq<=44100){SDL.audio.freq=44100}else if(SDL.audio.freq<=48e3){SDL.audio.freq=48e3}else 
if(SDL.audio.freq<=96e3){SDL.audio.freq=96e3}else{throw"Unsupported sound frequency "+SDL.audio.freq+"!"}if(SDL.audio.channels==0){SDL.audio.channels=1}else if(SDL.audio.channels<0||SDL.audio.channels>32){throw"Unsupported number of audio channels for SDL audio: "+SDL.audio.channels+"!"}else if(SDL.audio.channels!=1&&SDL.audio.channels!=2){out("Warning: Using untested number of audio channels "+SDL.audio.channels)}if(SDL.audio.samples<128||SDL.audio.samples>524288){throw"Unsupported audio callback buffer size "+SDL.audio.samples+"!"}else if((SDL.audio.samples&SDL.audio.samples-1)!=0){throw"Audio callback buffer size "+SDL.audio.samples+" must be a power-of-two!"}var totalSamples=SDL.audio.samples*SDL.audio.channels;if(SDL.audio.format==8){SDL.audio.bytesPerSample=1}else if(SDL.audio.format==32784){SDL.audio.bytesPerSample=2}else if(SDL.audio.format==33056){SDL.audio.bytesPerSample=4}else{throw"Invalid SDL audio format "+SDL.audio.format+"!"}SDL.audio.bufferSize=totalSamples*SDL.audio.bytesPerSample;SDL.audio.bufferDurationSecs=SDL.audio.bufferSize/SDL.audio.bytesPerSample/SDL.audio.channels/SDL.audio.freq;SDL.audio.bufferingDelay=50/1e3;SDL.audio.buffer=_malloc(SDL.audio.bufferSize);SDL.audio.numSimultaneouslyQueuedBuffers=Module["SDL_numSimultaneouslyQueuedBuffers"]||5;SDL.audio.queueNewAudioData=function SDL_queueNewAudioData(){if(!SDL.audio)return;for(var i=0;i=SDL.audio.bufferingDelay+SDL.audio.bufferDurationSecs*SDL.audio.numSimultaneouslyQueuedBuffers)return;wasmTable.get(SDL.audio.callback)(SDL.audio.userdata,SDL.audio.buffer,SDL.audio.bufferSize);SDL.audio.pushAudio(SDL.audio.buffer,SDL.audio.bufferSize)}};SDL.audio.caller=function SDL_audioCaller(){if(!SDL.audio)return;--SDL.audio.numAudioTimersPending;SDL.audio.queueNewAudioData();var secsUntilNextPlayStart=SDL.audio.nextPlayTime-SDL.audioContext["currentTime"];var preemptBufferFeedSecs=SDL.audio.bufferDurationSecs/2;if(SDL.audio.numAudioTimersPending>2]=SDL.audio.freq;HEAP16[obtained+4>>1]=SDL.audio.format;HEAP8[obtained+6>>0]=SDL.audio.channels;HEAP8[obtained+7>>0]=SDL.audio.silence;HEAP16[obtained+8>>1]=SDL.audio.samples;HEAP32[obtained+16>>2]=SDL.audio.callback;HEAP32[obtained+20>>2]=SDL.audio.userdata}SDL.allocateChannels(32)}catch(e){out('Initializing SDL audio threw an exception: "'+e.toString()+'"! 
Continuing without audio.');SDL.audio=null;SDL.allocateChannels(0);if(obtained){HEAP32[obtained>>2]=0;HEAP16[obtained+4>>1]=0;HEAP8[obtained+6>>0]=0;HEAP8[obtained+7>>0]=0;HEAP16[obtained+8>>1]=0;HEAP32[obtained+16>>2]=0;HEAP32[obtained+20>>2]=0}}if(!SDL.audio){return-1}return 0}Module["_SDL_OpenAudio"]=_SDL_OpenAudio;_SDL_OpenAudio.sig="iii";function _SDL_PauseAudio(pauseOn){if(!SDL.audio){return}if(pauseOn){if(SDL.audio.timer!==undefined){clearTimeout(SDL.audio.timer);SDL.audio.numAudioTimersPending=0;SDL.audio.timer=undefined}}else if(!SDL.audio.timer){SDL.audio.numAudioTimersPending=1;SDL.audio.timer=safeSetTimeout(SDL.audio.caller,1)}SDL.audio.paused=pauseOn}Module["_SDL_PauseAudio"]=_SDL_PauseAudio;_SDL_PauseAudio.sig="vi";function _SDL_CloseAudio(){if(SDL.audio){if(SDL.audio.callbackRemover){SDL.audio.callbackRemover();SDL.audio.callbackRemover=null}_SDL_PauseAudio(1);_free(SDL.audio.buffer);SDL.audio=null;SDL.allocateChannels(0)}}Module["_SDL_CloseAudio"]=_SDL_CloseAudio;_SDL_CloseAudio.sig="v";function _SDL_LockAudio(){}Module["_SDL_LockAudio"]=_SDL_LockAudio;function _SDL_UnlockAudio(){}Module["_SDL_UnlockAudio"]=_SDL_UnlockAudio;function _SDL_CreateMutex(){return 0}Module["_SDL_CreateMutex"]=_SDL_CreateMutex;function _SDL_LockMutex(){}Module["_SDL_LockMutex"]=_SDL_LockMutex;function _SDL_UnlockMutex(){}Module["_SDL_UnlockMutex"]=_SDL_UnlockMutex;function _SDL_mutexP(){return 0}Module["_SDL_mutexP"]=_SDL_mutexP;function _SDL_mutexV(){return 0}Module["_SDL_mutexV"]=_SDL_mutexV;function _SDL_DestroyMutex(){}Module["_SDL_DestroyMutex"]=_SDL_DestroyMutex;function _SDL_CreateCond(){return 0}Module["_SDL_CreateCond"]=_SDL_CreateCond;function _SDL_CondSignal(){}Module["_SDL_CondSignal"]=_SDL_CondSignal;function _SDL_CondWait(){}Module["_SDL_CondWait"]=_SDL_CondWait;function _SDL_DestroyCond(){}Module["_SDL_DestroyCond"]=_SDL_DestroyCond;function _SDL_StartTextInput(){SDL.textInput=true}Module["_SDL_StartTextInput"]=_SDL_StartTextInput;_SDL_StartTextInput.sig="v";function _SDL_StopTextInput(){SDL.textInput=false}Module["_SDL_StopTextInput"]=_SDL_StopTextInput;_SDL_StopTextInput.sig="v";function _Mix_Init(flags){if(!flags)return 0;return 8}Module["_Mix_Init"]=_Mix_Init;function _Mix_Quit(){}Module["_Mix_Quit"]=_Mix_Quit;function _Mix_OpenAudio(frequency,format,channels,chunksize){SDL.openAudioContext();autoResumeAudioContext(SDL.audioContext);SDL.allocateChannels(32);SDL.mixerFrequency=frequency;SDL.mixerFormat=format;SDL.mixerNumChannels=channels;SDL.mixerChunkSize=chunksize;return 0}Module["_Mix_OpenAudio"]=_Mix_OpenAudio;_Mix_OpenAudio.sig="iiiii";function _Mix_CloseAudio(){_SDL_CloseAudio()}Module["_Mix_CloseAudio"]=_Mix_CloseAudio;_Mix_CloseAudio.sig="v";function _Mix_AllocateChannels(num){SDL.allocateChannels(num);return num}Module["_Mix_AllocateChannels"]=_Mix_AllocateChannels;_Mix_AllocateChannels.sig="ii";function _Mix_ChannelFinished(func){SDL.channelFinished=func}Module["_Mix_ChannelFinished"]=_Mix_ChannelFinished;_Mix_ChannelFinished.sig="vi";function _Mix_Volume(channel,volume){if(channel==-1){for(var i=0;i>1;var buffer=new Float32Array(numSamples);for(var i=0;i>1]/32768}if(SDL.webAudioAvailable()){webAudio={};webAudio.decodedBuffer=buffer}else{audio=new Audio;audio.mozAudioChannelType="content";audio.numChannels=SDL.mixerNumChannels;audio.frequency=SDL.mixerFrequency}var id=SDL.audios.length;SDL.audios.push({source:"",audio:audio,webAudio:webAudio,buffer:buffer});return id}Module["_Mix_QuickLoad_RAW"]=_Mix_QuickLoad_RAW;_Mix_QuickLoad_RAW.sig="iii";function 
_Mix_FreeChunk(id){SDL.audios[id]=null}Module["_Mix_FreeChunk"]=_Mix_FreeChunk;_Mix_FreeChunk.sig="vi";function _Mix_ReserveChannels(num){SDL.channelMinimumNumber=num}Module["_Mix_ReserveChannels"]=_Mix_ReserveChannels;_Mix_ReserveChannels.sig="ii";function _Mix_PlayChannel(channel,id,loops){var info=SDL.audios[id];if(!info)return-1;if(!info.audio&&!info.webAudio)return-1;if(channel==-1){for(var i=SDL.channelMinimumNumber;i>2]=SDL.estimateTextWidth(fontData,UTF8ToString(text))}if(h){HEAP32[h>>2]=fontData.size}return 0}Module["_TTF_SizeText"]=_TTF_SizeText;_TTF_SizeText.sig="iiiii";function _TTF_SizeUTF8(a0,a1,a2,a3){return _TTF_SizeText(a0,a1,a2,a3)}Module["_TTF_SizeUTF8"]=_TTF_SizeUTF8;_TTF_SizeUTF8.sig="iiiii";function _TTF_GlyphMetrics(font,ch,minx,maxx,miny,maxy,advance){var fontData=SDL.fonts[font];var width=SDL.estimateTextWidth(fontData,String.fromCharCode(ch));if(advance){HEAP32[advance>>2]=width}if(minx){HEAP32[minx>>2]=0}if(maxx){HEAP32[maxx>>2]=width}if(miny){HEAP32[miny>>2]=0}if(maxy){HEAP32[maxy>>2]=fontData.size}}Module["_TTF_GlyphMetrics"]=_TTF_GlyphMetrics;_TTF_GlyphMetrics.sig="iiiiiiii";function _TTF_FontAscent(font){var fontData=SDL.fonts[font];return fontData.size*.98|0}Module["_TTF_FontAscent"]=_TTF_FontAscent;_TTF_FontAscent.sig="ii";function _TTF_FontDescent(font){var fontData=SDL.fonts[font];return fontData.size*.02|0}Module["_TTF_FontDescent"]=_TTF_FontDescent;_TTF_FontDescent.sig="ii";function _TTF_FontHeight(font){var fontData=SDL.fonts[font];return fontData.size}Module["_TTF_FontHeight"]=_TTF_FontHeight;_TTF_FontHeight.sig="ii";function _TTF_FontLineSkip(a0){return _TTF_FontHeight(a0)}Module["_TTF_FontLineSkip"]=_TTF_FontLineSkip;_TTF_FontLineSkip.sig="ii";function _TTF_Quit(){out("TTF_Quit called (and ignored)")}Module["_TTF_Quit"]=_TTF_Quit;var SDL_gfx={drawRectangle:function(surf,x1,y1,x2,y2,action,cssColor){x1=x1<<16>>16;y1=y1<<16>>16;x2=x2<<16>>16;y2=y2<<16>>16;var surfData=SDL.surfaces[surf];assert(!surfData.locked);var x=x1>16;y1=y1<<16>>16;x2=x2<<16>>16;y2=y2<<16>>16;var surfData=SDL.surfaces[surf];assert(!surfData.locked);surfData.ctx.save();surfData.ctx.strokeStyle=cssColor;surfData.ctx.beginPath();surfData.ctx.moveTo(x1,y1);surfData.ctx.lineTo(x2,y2);surfData.ctx.stroke();surfData.ctx.restore()},drawEllipse:function(surf,x,y,rx,ry,action,cssColor){x=x<<16>>16;y=y<<16>>16;rx=rx<<16>>16;ry=ry<<16>>16;var surfData=SDL.surfaces[surf];assert(!surfData.locked);surfData.ctx.save();surfData.ctx.beginPath();surfData.ctx.translate(x,y);surfData.ctx.scale(rx,ry);surfData.ctx.arc(0,0,1,0,2*Math.PI);surfData.ctx.restore();surfData.ctx.save();surfData.ctx[action+"Style"]=cssColor;surfData.ctx[action]();surfData.ctx.restore()},translateColorToCSSRGBA:function(rgba){return"rgba("+(rgba>>>24)+","+(rgba>>16&255)+","+(rgba>>8&255)+","+(rgba&255)+")"}};Module["SDL_gfx"]=SDL_gfx;function _boxColor(surf,x1,y1,x2,y2,color){return SDL_gfx.drawRectangle(surf,x1,y1,x2,y2,"fill",SDL_gfx.translateColorToCSSRGBA(color))}Module["_boxColor"]=_boxColor;function _boxRGBA(surf,x1,y1,x2,y2,r,g,b,a){return SDL_gfx.drawRectangle(surf,x1,y1,x2,y2,"fill",SDL.translateRGBAToCSSRGBA(r,g,b,a))}Module["_boxRGBA"]=_boxRGBA;function _rectangleColor(surf,x1,y1,x2,y2,color){return SDL_gfx.drawRectangle(surf,x1,y1,x2,y2,"stroke",SDL_gfx.translateColorToCSSRGBA(color))}Module["_rectangleColor"]=_rectangleColor;function _rectangleRGBA(surf,x1,y1,x2,y2,r,g,b,a){return SDL_gfx.drawRectangle(surf,x1,y1,x2,y2,"stroke",SDL.translateRGBAToCSSRGBA(r,g,b,a))}Module["_rectangleRGBA"]=_rectangleRGBA;function 
_ellipseColor(surf,x,y,rx,ry,color){return SDL_gfx.drawEllipse(surf,x,y,rx,ry,"stroke",SDL_gfx.translateColorToCSSRGBA(color))}Module["_ellipseColor"]=_ellipseColor;function _ellipseRGBA(surf,x,y,rx,ry,r,g,b,a){return SDL_gfx.drawEllipse(surf,x,y,rx,ry,"stroke",SDL.translateRGBAToCSSRGBA(r,g,b,a))}Module["_ellipseRGBA"]=_ellipseRGBA;function _filledEllipseColor(surf,x,y,rx,ry,color){return SDL_gfx.drawEllipse(surf,x,y,rx,ry,"fill",SDL_gfx.translateColorToCSSRGBA(color))}Module["_filledEllipseColor"]=_filledEllipseColor;function _filledEllipseRGBA(surf,x,y,rx,ry,r,g,b,a){return SDL_gfx.drawEllipse(surf,x,y,rx,ry,"fill",SDL.translateRGBAToCSSRGBA(r,g,b,a))}Module["_filledEllipseRGBA"]=_filledEllipseRGBA;function _lineColor(surf,x1,y1,x2,y2,color){return SDL_gfx.drawLine(surf,x1,y1,x2,y2,SDL_gfx.translateColorToCSSRGBA(color))}Module["_lineColor"]=_lineColor;function _lineRGBA(surf,x1,y1,x2,y2,r,g,b,a){return SDL_gfx.drawLine(surf,x1,y1,x2,y2,SDL.translateRGBAToCSSRGBA(r,g,b,a))}Module["_lineRGBA"]=_lineRGBA;function _pixelRGBA(surf,x1,y1,r,g,b,a){_boxRGBA(surf,x1,y1,x1,y1,r,g,b,a)}Module["_pixelRGBA"]=_pixelRGBA;function _SDL_GL_SetAttribute(attr,value){if(!(attr in SDL.glAttributes)){abort("Unknown SDL GL attribute ("+attr+"). Please check if your SDL version is supported.")}SDL.glAttributes[attr]=value}Module["_SDL_GL_SetAttribute"]=_SDL_GL_SetAttribute;_SDL_GL_SetAttribute.sig="iii";function _SDL_GL_GetAttribute(attr,value){if(!(attr in SDL.glAttributes)){abort("Unknown SDL GL attribute ("+attr+"). Please check if your SDL version is supported.")}if(value)HEAP32[value>>2]=SDL.glAttributes[attr];return 0}Module["_SDL_GL_GetAttribute"]=_SDL_GL_GetAttribute;_SDL_GL_GetAttribute.sig="iii";function _SDL_GL_SwapBuffers(){if(Browser.doSwapBuffers)Browser.doSwapBuffers()}Module["_SDL_GL_SwapBuffers"]=_SDL_GL_SwapBuffers;_SDL_GL_SwapBuffers.sig="v";function _SDL_GL_ExtensionSupported(extension){return Module.ctx.getExtension(extension)|0}Module["_SDL_GL_ExtensionSupported"]=_SDL_GL_ExtensionSupported;_SDL_GL_ExtensionSupported.sig="ii";function _SDL_DestroyWindow(window){}Module["_SDL_DestroyWindow"]=_SDL_DestroyWindow;function _SDL_DestroyRenderer(renderer){}Module["_SDL_DestroyRenderer"]=_SDL_DestroyRenderer;function _SDL_GetWindowFlags(){}Module["_SDL_GetWindowFlags"]=_SDL_GetWindowFlags;_SDL_GetWindowFlags.sig="iii";function _SDL_GL_SwapWindow(window){}Module["_SDL_GL_SwapWindow"]=_SDL_GL_SwapWindow;function _SDL_GL_MakeCurrent(window,context){}Module["_SDL_GL_MakeCurrent"]=_SDL_GL_MakeCurrent;function _SDL_GL_DeleteContext(context){}Module["_SDL_GL_DeleteContext"]=_SDL_GL_DeleteContext;function _SDL_GL_GetSwapInterval(state){if(Browser.mainLoop.timingMode==1)return Browser.mainLoop.timingValue;else return 0}Module["_SDL_GL_GetSwapInterval"]=_SDL_GL_GetSwapInterval;_SDL_GL_GetSwapInterval.sig="ii";function _SDL_GL_SetSwapInterval(state){_emscripten_set_main_loop_timing(1,state)}Module["_SDL_GL_SetSwapInterval"]=_SDL_GL_SetSwapInterval;function _SDL_SetWindowTitle(window,title){if(title)document.title=UTF8ToString(title)}Module["_SDL_SetWindowTitle"]=_SDL_SetWindowTitle;_SDL_SetWindowTitle.sig="vii";function _SDL_GetWindowSize(window,width,height){var w=Module["canvas"].width;var h=Module["canvas"].height;if(width)HEAP32[width>>2]=w;if(height)HEAP32[height>>2]=h}Module["_SDL_GetWindowSize"]=_SDL_GetWindowSize;_SDL_GetWindowSize.sig="viii";function _SDL_LogSetOutputFunction(callback,userdata){}Module["_SDL_LogSetOutputFunction"]=_SDL_LogSetOutputFunction;function 
_SDL_SetWindowFullscreen(window,fullscreen){if(Browser.isFullscreen){Module["canvas"].exitFullscreen();return 1}else{return 0}}Module["_SDL_SetWindowFullscreen"]=_SDL_SetWindowFullscreen;_SDL_SetWindowFullscreen.sig="iii";function _SDL_ClearError(){}Module["_SDL_ClearError"]=_SDL_ClearError;function _SDL_SetGamma(r,g,b){return-1}Module["_SDL_SetGamma"]=_SDL_SetGamma;function _SDL_SetGammaRamp(redTable,greenTable,blueTable){return-1}Module["_SDL_SetGammaRamp"]=_SDL_SetGammaRamp;function _SDL_NumJoysticks(){var count=0;var gamepads=SDL.getGamepads();for(var i=0;iaxis){return SDL.joystickAxisValueConversion(gamepad.axes[axis])}return 0}Module["_SDL_JoystickGetAxis"]=_SDL_JoystickGetAxis;_SDL_JoystickGetAxis.sig="iii";function _SDL_JoystickGetHat(joystick,hat){return 0}Module["_SDL_JoystickGetHat"]=_SDL_JoystickGetHat;function _SDL_JoystickGetBall(joystick,ball,dxptr,dyptr){return-1}Module["_SDL_JoystickGetBall"]=_SDL_JoystickGetBall;function _SDL_JoystickGetButton(joystick,button){var gamepad=SDL.getGamepad(joystick-1);if(gamepad&&gamepad.buttons.length>button){return SDL.getJoystickButtonState(gamepad.buttons[button])?1:0}return 0}Module["_SDL_JoystickGetButton"]=_SDL_JoystickGetButton;_SDL_JoystickGetButton.sig="iii";function _SDL_JoystickClose(joystick){delete SDL.lastJoystickState[joystick]}Module["_SDL_JoystickClose"]=_SDL_JoystickClose;_SDL_JoystickClose.sig="vi";function _SDL_InitSubSystem(flags){return 0}Module["_SDL_InitSubSystem"]=_SDL_InitSubSystem;function _SDL_RWFromConstMem(mem,size){var id=SDL.rwops.length;SDL.rwops.push({bytes:mem,count:size});return id}Module["_SDL_RWFromConstMem"]=_SDL_RWFromConstMem;_SDL_RWFromConstMem.sig="iii";function _SDL_RWFromMem(a0,a1){return _SDL_RWFromConstMem(a0,a1)}Module["_SDL_RWFromMem"]=_SDL_RWFromMem;_SDL_RWFromMem.sig="iii";function _SDL_GetNumAudioDrivers(){return 1}Module["_SDL_GetNumAudioDrivers"]=_SDL_GetNumAudioDrivers;function _SDL_GetCurrentAudioDriver(){return allocate(intArrayFromString("Emscripten Audio"),ALLOC_NORMAL)}Module["_SDL_GetCurrentAudioDriver"]=_SDL_GetCurrentAudioDriver;function _SDL_GetAudioDriver(index){return _SDL_GetCurrentAudioDriver()}Module["_SDL_GetAudioDriver"]=_SDL_GetAudioDriver;function _SDL_EnableUNICODE(on){var ret=SDL.unicode||0;SDL.unicode=on;return ret}Module["_SDL_EnableUNICODE"]=_SDL_EnableUNICODE;_SDL_EnableUNICODE.sig="ii";function _SDL_AddTimer(interval,callback,param){return window.setTimeout(function(){wasmTable.get(callback)(interval,param)},interval)}Module["_SDL_AddTimer"]=_SDL_AddTimer;_SDL_AddTimer.sig="iiii";function _SDL_RemoveTimer(id){window.clearTimeout(id);return true}Module["_SDL_RemoveTimer"]=_SDL_RemoveTimer;_SDL_RemoveTimer.sig="ii";function _SDL_CreateThread(){throw"SDL threads cannot be supported in the web platform because they assume shared state. See emscripten_create_worker etc. 
for a message-passing concurrency model that does let you run code in another thread."}Module["_SDL_CreateThread"]=_SDL_CreateThread;function _SDL_WaitThread(){throw"SDL_WaitThread"}Module["_SDL_WaitThread"]=_SDL_WaitThread;function _SDL_GetThreadID(){throw"SDL_GetThreadID"}Module["_SDL_GetThreadID"]=_SDL_GetThreadID;function _SDL_ThreadID(){return 0}Module["_SDL_ThreadID"]=_SDL_ThreadID;function _SDL_AllocRW(){throw"SDL_AllocRW: TODO"}Module["_SDL_AllocRW"]=_SDL_AllocRW;function _SDL_CondBroadcast(){throw"SDL_CondBroadcast: TODO"}Module["_SDL_CondBroadcast"]=_SDL_CondBroadcast;function _SDL_CondWaitTimeout(){throw"SDL_CondWaitTimeout: TODO"}Module["_SDL_CondWaitTimeout"]=_SDL_CondWaitTimeout;function _SDL_WM_IconifyWindow(){throw"SDL_WM_IconifyWindow TODO"}Module["_SDL_WM_IconifyWindow"]=_SDL_WM_IconifyWindow;function _Mix_SetPostMix(){warnOnce("Mix_SetPostMix: TODO")}Module["_Mix_SetPostMix"]=_Mix_SetPostMix;function _Mix_VolumeChunk(chunk,volume){throw"Mix_VolumeChunk: TODO"}Module["_Mix_VolumeChunk"]=_Mix_VolumeChunk;function _Mix_SetPosition(channel,angle,distance){throw"Mix_SetPosition: TODO"}Module["_Mix_SetPosition"]=_Mix_SetPosition;function _Mix_QuerySpec(){throw"Mix_QuerySpec: TODO"}Module["_Mix_QuerySpec"]=_Mix_QuerySpec;function _Mix_FadeInChannelTimed(){throw"Mix_FadeInChannelTimed"}Module["_Mix_FadeInChannelTimed"]=_Mix_FadeInChannelTimed;function _Mix_FadeOutChannel(){throw"Mix_FadeOutChannel"}Module["_Mix_FadeOutChannel"]=_Mix_FadeOutChannel;function _Mix_Linked_Version(){throw"Mix_Linked_Version: TODO"}Module["_Mix_Linked_Version"]=_Mix_Linked_Version;function _SDL_SaveBMP_RW(){throw"SDL_SaveBMP_RW: TODO"}Module["_SDL_SaveBMP_RW"]=_SDL_SaveBMP_RW;function _SDL_WM_SetIcon(){}Module["_SDL_WM_SetIcon"]=_SDL_WM_SetIcon;function _SDL_HasRDTSC(){return 0}Module["_SDL_HasRDTSC"]=_SDL_HasRDTSC;function _SDL_HasMMX(){return 0}Module["_SDL_HasMMX"]=_SDL_HasMMX;function _SDL_HasMMXExt(){return 0}Module["_SDL_HasMMXExt"]=_SDL_HasMMXExt;function _SDL_Has3DNow(){return 0}Module["_SDL_Has3DNow"]=_SDL_Has3DNow;function _SDL_Has3DNowExt(){return 0}Module["_SDL_Has3DNowExt"]=_SDL_Has3DNowExt;function _SDL_HasSSE(){return 0}Module["_SDL_HasSSE"]=_SDL_HasSSE;function _SDL_HasSSE2(){return 0}Module["_SDL_HasSSE2"]=_SDL_HasSSE2;function _SDL_HasAltiVec(){return 0}Module["_SDL_HasAltiVec"]=_SDL_HasAltiVec;function _glutPostRedisplay(){if(GLUT.displayFunc&&!GLUT.requestedAnimationFrame){GLUT.requestedAnimationFrame=true;Browser.requestAnimationFrame(function(){GLUT.requestedAnimationFrame=false;Browser.mainLoop.runIter(function(){wasmTable.get(GLUT.displayFunc)()})})}}Module["_glutPostRedisplay"]=_glutPostRedisplay;_glutPostRedisplay.sig="v";var GLUT={initTime:null,idleFunc:null,displayFunc:null,keyboardFunc:null,keyboardUpFunc:null,specialFunc:null,specialUpFunc:null,reshapeFunc:null,motionFunc:null,passiveMotionFunc:null,mouseFunc:null,buttons:0,modifiers:0,initWindowWidth:256,initWindowHeight:256,initDisplayMode:18,windowX:0,windowY:0,windowWidth:0,windowHeight:0,requestedAnimationFrame:false,saveModifiers:function(event){GLUT.modifiers=0;if(event["shiftKey"])GLUT.modifiers+=1;if(event["ctrlKey"])GLUT.modifiers+=2;if(event["altKey"])GLUT.modifiers+=4},onMousemove:function(event){var lastX=Browser.mouseX;var lastY=Browser.mouseY;Browser.calculateMouseEvent(event);var newX=Browser.mouseX;var 
newY=Browser.mouseY;if(newX==lastX&&newY==lastY)return;if(GLUT.buttons==0&&event.target==Module["canvas"]&&GLUT.passiveMotionFunc){event.preventDefault();GLUT.saveModifiers(event);wasmTable.get(GLUT.passiveMotionFunc)(lastX,lastY)}else if(GLUT.buttons!=0&&GLUT.motionFunc){event.preventDefault();GLUT.saveModifiers(event);wasmTable.get(GLUT.motionFunc)(lastX,lastY)}},getSpecialKey:function(keycode){var key=null;switch(keycode){case 8:key=120;break;case 46:key=111;break;case 112:key=1;break;case 113:key=2;break;case 114:key=3;break;case 115:key=4;break;case 116:key=5;break;case 117:key=6;break;case 118:key=7;break;case 119:key=8;break;case 120:key=9;break;case 121:key=10;break;case 122:key=11;break;case 123:key=12;break;case 37:key=100;break;case 38:key=101;break;case 39:key=102;break;case 40:key=103;break;case 33:key=104;break;case 34:key=105;break;case 36:key=106;break;case 35:key=107;break;case 45:key=108;break;case 16:case 5:key=112;break;case 6:key=113;break;case 17:case 3:key=114;break;case 4:key=115;break;case 18:case 2:key=116;break;case 1:key=117;break}return key},getASCIIKey:function(event){if(event["ctrlKey"]||event["altKey"]||event["metaKey"])return null;var keycode=event["keyCode"];if(48<=keycode&&keycode<=57)return keycode;if(65<=keycode&&keycode<=90)return event["shiftKey"]?keycode:keycode+32;if(96<=keycode&&keycode<=105)return keycode-48;if(106<=keycode&&keycode<=111)return keycode-106+42;switch(keycode){case 9:case 13:case 27:case 32:case 61:return keycode}var s=event["shiftKey"];switch(keycode){case 186:return s?58:59;case 187:return s?43:61;case 188:return s?60:44;case 189:return s?95:45;case 190:return s?62:46;case 191:return s?63:47;case 219:return s?123:91;case 220:return s?124:47;case 221:return s?125:93;case 222:return s?34:39}return null},onKeydown:function(event){if(GLUT.specialFunc||GLUT.keyboardFunc){var key=GLUT.getSpecialKey(event["keyCode"]);if(key!==null){if(GLUT.specialFunc){event.preventDefault();GLUT.saveModifiers(event);wasmTable.get(GLUT.specialFunc)(key,Browser.mouseX,Browser.mouseY)}}else{key=GLUT.getASCIIKey(event);if(key!==null&&GLUT.keyboardFunc){event.preventDefault();GLUT.saveModifiers(event);wasmTable.get(GLUT.keyboardFunc)(key,Browser.mouseX,Browser.mouseY)}}}},onKeyup:function(event){if(GLUT.specialUpFunc||GLUT.keyboardUpFunc){var key=GLUT.getSpecialKey(event["keyCode"]);if(key!==null){if(GLUT.specialUpFunc){event.preventDefault();GLUT.saveModifiers(event);wasmTable.get(GLUT.specialUpFunc)(key,Browser.mouseX,Browser.mouseY)}}else{key=GLUT.getASCIIKey(event);if(key!==null&&GLUT.keyboardUpFunc){event.preventDefault();GLUT.saveModifiers(event);wasmTable.get(GLUT.keyboardUpFunc)(key,Browser.mouseX,Browser.mouseY)}}}},touchHandler:function(event){if(event.target!=Module["canvas"]){return}var touches=event.changedTouches,main=touches[0],type="";switch(event.type){case"touchstart":type="mousedown";break;case"touchmove":type="mousemove";break;case"touchend":type="mouseup";break;default:return}var simulatedEvent=document.createEvent("MouseEvent");simulatedEvent.initMouseEvent(type,true,true,window,1,main.screenX,main.screenY,main.clientX,main.clientY,false,false,false,false,0,null);main.target.dispatchEvent(simulatedEvent);event.preventDefault()},onMouseButtonDown:function(event){Browser.calculateMouseEvent(event);GLUT.buttons|=1<0?Math.max(delta,1):Math.min(delta,-1);var 
button=3;if(delta<0){button=4}if(GLUT.mouseFunc){event.preventDefault();GLUT.saveModifiers(event);wasmTable.get(GLUT.mouseFunc)(button,0,Browser.mouseX,Browser.mouseY)}},onFullscreenEventChange:function(event){var width;var height;if(document["fullscreen"]||document["fullScreen"]||document["mozFullScreen"]||document["webkitIsFullScreen"]){width=screen["width"];height=screen["height"]}else{width=GLUT.windowWidth;height=GLUT.windowHeight;document.removeEventListener("fullscreenchange",GLUT.onFullscreenEventChange,true);document.removeEventListener("mozfullscreenchange",GLUT.onFullscreenEventChange,true);document.removeEventListener("webkitfullscreenchange",GLUT.onFullscreenEventChange,true)}Browser.setCanvasSize(width,height,true);if(GLUT.reshapeFunc){wasmTable.get(GLUT.reshapeFunc)(width,height)}_glutPostRedisplay()}};Module["GLUT"]=GLUT;function _glutGetModifiers(){return GLUT.modifiers}Module["_glutGetModifiers"]=_glutGetModifiers;_glutGetModifiers.sig="i";function _glutInit(argcp,argv){GLUT.initTime=Date.now();var isTouchDevice="ontouchstart"in document.documentElement;if(isTouchDevice){window.addEventListener("touchmove",GLUT.touchHandler,true);window.addEventListener("touchstart",GLUT.touchHandler,true);window.addEventListener("touchend",GLUT.touchHandler,true)}window.addEventListener("keydown",GLUT.onKeydown,true);window.addEventListener("keyup",GLUT.onKeyup,true);window.addEventListener("mousemove",GLUT.onMousemove,true);window.addEventListener("mousedown",GLUT.onMouseButtonDown,true);window.addEventListener("mouseup",GLUT.onMouseButtonUp,true);window.addEventListener("mousewheel",GLUT.onMouseWheel,true);window.addEventListener("DOMMouseScroll",GLUT.onMouseWheel,true);Browser.resizeListeners.push(function(width,height){if(GLUT.reshapeFunc){wasmTable.get(GLUT.reshapeFunc)(width,height)}});__ATEXIT__.push(function(){if(isTouchDevice){window.removeEventListener("touchmove",GLUT.touchHandler,true);window.removeEventListener("touchstart",GLUT.touchHandler,true);window.removeEventListener("touchend",GLUT.touchHandler,true)}window.removeEventListener("keydown",GLUT.onKeydown,true);window.removeEventListener("keyup",GLUT.onKeyup,true);window.removeEventListener("mousemove",GLUT.onMousemove,true);window.removeEventListener("mousedown",GLUT.onMouseButtonDown,true);window.removeEventListener("mouseup",GLUT.onMouseButtonUp,true);window.removeEventListener("mousewheel",GLUT.onMouseWheel,true);window.removeEventListener("DOMMouseScroll",GLUT.onMouseWheel,true);Module["canvas"].width=Module["canvas"].height=1})}Module["_glutInit"]=_glutInit;_glutInit.sig="vii";function _glutInitWindowSize(width,height){Browser.setCanvasSize(GLUT.initWindowWidth=width,GLUT.initWindowHeight=height)}Module["_glutInitWindowSize"]=_glutInitWindowSize;_glutInitWindowSize.sig="vii";function _glutInitWindowPosition(x,y){}Module["_glutInitWindowPosition"]=_glutInitWindowPosition;_glutInitWindowPosition.sig="vii";function _glutGet(type){switch(type){case 100:return 0;case 101:return 0;case 102:return Module["canvas"].width;case 103:return Module["canvas"].height;case 200:return Module["canvas"].width;case 201:return Module["canvas"].height;case 500:return 0;case 501:return 0;case 502:return GLUT.initWindowWidth;case 503:return GLUT.initWindowHeight;case 700:var now=Date.now();return now-GLUT.initTime;case 105:return Module.ctx.getContextAttributes().stencil?8:0;case 106:return Module.ctx.getContextAttributes().depth?8:0;case 110:return Module.ctx.getContextAttributes().alpha?8:0;case 120:return 
Module.ctx.getContextAttributes().antialias?1:0;default:throw"glutGet("+type+") not implemented yet"}}Module["_glutGet"]=_glutGet;function _glutIdleFunc(func){function callback(){if(GLUT.idleFunc){wasmTable.get(GLUT.idleFunc)();safeSetTimeout(callback,4)}}if(!GLUT.idleFunc){safeSetTimeout(callback,0)}GLUT.idleFunc=func}Module["_glutIdleFunc"]=_glutIdleFunc;_glutIdleFunc.sig="vi";function _glutTimerFunc(msec,func,value){safeSetTimeout(function(){wasmTable.get(func)(value)},msec)}Module["_glutTimerFunc"]=_glutTimerFunc;_glutTimerFunc.sig="viii";function _glutDisplayFunc(func){GLUT.displayFunc=func}Module["_glutDisplayFunc"]=_glutDisplayFunc;_glutDisplayFunc.sig="vi";function _glutKeyboardFunc(func){GLUT.keyboardFunc=func}Module["_glutKeyboardFunc"]=_glutKeyboardFunc;_glutKeyboardFunc.sig="vi";function _glutKeyboardUpFunc(func){GLUT.keyboardUpFunc=func}Module["_glutKeyboardUpFunc"]=_glutKeyboardUpFunc;_glutKeyboardUpFunc.sig="vi";function _glutSpecialFunc(func){GLUT.specialFunc=func}Module["_glutSpecialFunc"]=_glutSpecialFunc;_glutSpecialFunc.sig="vi";function _glutSpecialUpFunc(func){GLUT.specialUpFunc=func}Module["_glutSpecialUpFunc"]=_glutSpecialUpFunc;_glutSpecialUpFunc.sig="vi";function _glutReshapeFunc(func){GLUT.reshapeFunc=func}Module["_glutReshapeFunc"]=_glutReshapeFunc;_glutReshapeFunc.sig="vi";function _glutMotionFunc(func){GLUT.motionFunc=func}Module["_glutMotionFunc"]=_glutMotionFunc;_glutMotionFunc.sig="vi";function _glutPassiveMotionFunc(func){GLUT.passiveMotionFunc=func}Module["_glutPassiveMotionFunc"]=_glutPassiveMotionFunc;_glutPassiveMotionFunc.sig="vi";function _glutMouseFunc(func){GLUT.mouseFunc=func}Module["_glutMouseFunc"]=_glutMouseFunc;_glutMouseFunc.sig="vi";function _glutSetCursor(cursor){var cursorStyle="auto";switch(cursor){case 0:break;case 1:break;case 2:cursorStyle="pointer";break;case 3:break;case 4:cursorStyle="help";break;case 5:break;case 6:break;case 7:cursorStyle="wait";break;case 8:cursorStyle="text";break;case 9:case 102:cursorStyle="crosshair";break;case 10:cursorStyle="ns-resize";break;case 11:cursorStyle="ew-resize";break;case 12:cursorStyle="n-resize";break;case 13:cursorStyle="s-resize";break;case 14:cursorStyle="w-resize";break;case 15:cursorStyle="e-resize";break;case 16:cursorStyle="nw-resize";break;case 17:cursorStyle="ne-resize";break;case 18:cursorStyle="se-resize";break;case 19:cursorStyle="sw-resize";break;case 100:break;case 101:cursorStyle="none";break;default:throw"glutSetCursor: Unknown cursor type: "+cursor}Module["canvas"].style.cursor=cursorStyle}Module["_glutSetCursor"]=_glutSetCursor;_glutSetCursor.sig="vi";function _glutCreateWindow(name){var contextAttributes={antialias:(GLUT.initDisplayMode&128)!=0,depth:(GLUT.initDisplayMode&16)!=0,stencil:(GLUT.initDisplayMode&32)!=0,alpha:(GLUT.initDisplayMode&8)!=0};Module.ctx=Browser.createContext(Module["canvas"],true,true,contextAttributes);return Module.ctx?1:0}Module["_glutCreateWindow"]=_glutCreateWindow;_glutCreateWindow.sig="ii";function _glutDestroyWindow(name){Module.ctx=Browser.destroyContext(Module["canvas"],true,true);return 1}Module["_glutDestroyWindow"]=_glutDestroyWindow;_glutDestroyWindow.sig="ii";function _glutReshapeWindow(width,height){Browser.exitFullscreen();Browser.setCanvasSize(width,height,true);if(GLUT.reshapeFunc){wasmTable.get(GLUT.reshapeFunc)(width,height)}_glutPostRedisplay()}Module["_glutReshapeWindow"]=_glutReshapeWindow;_glutReshapeWindow.sig="vi";function 
_glutPositionWindow(x,y){Browser.exitFullscreen();_glutPostRedisplay()}Module["_glutPositionWindow"]=_glutPositionWindow;_glutPositionWindow.sig="vii";function _glutFullScreen(){GLUT.windowX=0;GLUT.windowY=0;GLUT.windowWidth=Module["canvas"].width;GLUT.windowHeight=Module["canvas"].height;document.addEventListener("fullscreenchange",GLUT.onFullscreenEventChange,true);document.addEventListener("mozfullscreenchange",GLUT.onFullscreenEventChange,true);document.addEventListener("webkitfullscreenchange",GLUT.onFullscreenEventChange,true);Browser.requestFullscreen(false,false)}Module["_glutFullScreen"]=_glutFullScreen;_glutFullScreen.sig="v";function _glutInitDisplayMode(mode){GLUT.initDisplayMode=mode}Module["_glutInitDisplayMode"]=_glutInitDisplayMode;_glutInitDisplayMode.sig="vi";function _glutSwapBuffers(){}Module["_glutSwapBuffers"]=_glutSwapBuffers;_glutSwapBuffers.sig="v";function _glutMainLoop(){_glutReshapeWindow(Module["canvas"].width,Module["canvas"].height);_glutPostRedisplay();throw"unwind"}Module["_glutMainLoop"]=_glutMainLoop;_glutMainLoop.sig="v";function _XOpenDisplay(){return 1}Module["_XOpenDisplay"]=_XOpenDisplay;function _XCreateWindow(display,parent,x,y,width,height,border_width,depth,class_,visual,valuemask,attributes){Browser.setCanvasSize(width,height);return 2}Module["_XCreateWindow"]=_XCreateWindow;function _XChangeWindowAttributes(){}Module["_XChangeWindowAttributes"]=_XChangeWindowAttributes;function _XSetWMHints(){}Module["_XSetWMHints"]=_XSetWMHints;function _XMapWindow(){}Module["_XMapWindow"]=_XMapWindow;function _XStoreName(){}Module["_XStoreName"]=_XStoreName;function _XInternAtom(display,name_,hmm){return 0}Module["_XInternAtom"]=_XInternAtom;function _XSendEvent(){}Module["_XSendEvent"]=_XSendEvent;function _XPending(display){return 0}Module["_XPending"]=_XPending;var EGL={errorCode:12288,defaultDisplayInitialized:false,currentContext:0,currentReadSurface:0,currentDrawSurface:0,contextAttributes:{alpha:false,depth:false,stencil:false,antialias:false},stringCache:{},setErrorCode:function(code){EGL.errorCode=code},chooseConfig:function(display,attribList,config,config_size,numConfigs){if(display!=62e3){EGL.setErrorCode(12296);return 0}if(attribList){for(;;){var param=HEAP32[attribList>>2];if(param==12321){var alphaSize=HEAP32[attribList+4>>2];EGL.contextAttributes.alpha=alphaSize>0}else if(param==12325){var depthSize=HEAP32[attribList+4>>2];EGL.contextAttributes.depth=depthSize>0}else if(param==12326){var stencilSize=HEAP32[attribList+4>>2];EGL.contextAttributes.stencil=stencilSize>0}else if(param==12337){var samples=HEAP32[attribList+4>>2];EGL.contextAttributes.antialias=samples>0}else if(param==12338){var samples=HEAP32[attribList+4>>2];EGL.contextAttributes.antialias=samples==1}else if(param==12544){var requestedPriority=HEAP32[attribList+4>>2];EGL.contextAttributes.lowLatency=requestedPriority!=12547}else if(param==12344){break}attribList+=8}}if((!config||!config_size)&&!numConfigs){EGL.setErrorCode(12300);return 0}if(numConfigs){HEAP32[numConfigs>>2]=1}if(config&&config_size>0){HEAP32[config>>2]=62002}EGL.setErrorCode(12288);return 1}};Module["EGL"]=EGL;function _eglGetDisplay(nativeDisplayType){EGL.setErrorCode(12288);return 62e3}Module["_eglGetDisplay"]=_eglGetDisplay;_eglGetDisplay.sig="ii";function _eglInitialize(display,majorVersion,minorVersion){if(display==62e3){if(majorVersion){HEAP32[majorVersion>>2]=1}if(minorVersion){HEAP32[minorVersion>>2]=4}EGL.defaultDisplayInitialized=true;EGL.setErrorCode(12288);return 1}else{EGL.setErrorCode(12296);return 
0}}Module["_eglInitialize"]=_eglInitialize;_eglInitialize.sig="iiii";function _eglTerminate(display){if(display!=62e3){EGL.setErrorCode(12296);return 0}EGL.currentContext=0;EGL.currentReadSurface=0;EGL.currentDrawSurface=0;EGL.defaultDisplayInitialized=false;EGL.setErrorCode(12288);return 1}Module["_eglTerminate"]=_eglTerminate;_eglTerminate.sig="ii";function _eglGetConfigs(display,configs,config_size,numConfigs){return EGL.chooseConfig(display,0,configs,config_size,numConfigs)}Module["_eglGetConfigs"]=_eglGetConfigs;_eglGetConfigs.sig="iiiii";function _eglChooseConfig(display,attrib_list,configs,config_size,numConfigs){return EGL.chooseConfig(display,attrib_list,configs,config_size,numConfigs)}Module["_eglChooseConfig"]=_eglChooseConfig;_eglChooseConfig.sig="iiiiii";function _eglGetConfigAttrib(display,config,attribute,value){if(display!=62e3){EGL.setErrorCode(12296);return 0}if(config!=62002){EGL.setErrorCode(12293);return 0}if(!value){EGL.setErrorCode(12300);return 0}EGL.setErrorCode(12288);switch(attribute){case 12320:HEAP32[value>>2]=EGL.contextAttributes.alpha?32:24;return 1;case 12321:HEAP32[value>>2]=EGL.contextAttributes.alpha?8:0;return 1;case 12322:HEAP32[value>>2]=8;return 1;case 12323:HEAP32[value>>2]=8;return 1;case 12324:HEAP32[value>>2]=8;return 1;case 12325:HEAP32[value>>2]=EGL.contextAttributes.depth?24:0;return 1;case 12326:HEAP32[value>>2]=EGL.contextAttributes.stencil?8:0;return 1;case 12327:HEAP32[value>>2]=12344;return 1;case 12328:HEAP32[value>>2]=62002;return 1;case 12329:HEAP32[value>>2]=0;return 1;case 12330:HEAP32[value>>2]=4096;return 1;case 12331:HEAP32[value>>2]=16777216;return 1;case 12332:HEAP32[value>>2]=4096;return 1;case 12333:HEAP32[value>>2]=0;return 1;case 12334:HEAP32[value>>2]=0;return 1;case 12335:HEAP32[value>>2]=12344;return 1;case 12337:HEAP32[value>>2]=EGL.contextAttributes.antialias?4:0;return 1;case 12338:HEAP32[value>>2]=EGL.contextAttributes.antialias?1:0;return 1;case 12339:HEAP32[value>>2]=4;return 1;case 12340:HEAP32[value>>2]=12344;return 1;case 12341:case 12342:case 12343:HEAP32[value>>2]=-1;return 1;case 12345:case 12346:HEAP32[value>>2]=0;return 1;case 12347:HEAP32[value>>2]=0;return 1;case 12348:HEAP32[value>>2]=1;return 1;case 12349:case 12350:HEAP32[value>>2]=0;return 1;case 12351:HEAP32[value>>2]=12430;return 1;case 12352:HEAP32[value>>2]=4;return 1;case 12354:HEAP32[value>>2]=0;return 1;default:EGL.setErrorCode(12292);return 0}}Module["_eglGetConfigAttrib"]=_eglGetConfigAttrib;_eglGetConfigAttrib.sig="iiiii";function _eglCreateWindowSurface(display,config,win,attrib_list){if(display!=62e3){EGL.setErrorCode(12296);return 0}if(config!=62002){EGL.setErrorCode(12293);return 0}EGL.setErrorCode(12288);return 62006}Module["_eglCreateWindowSurface"]=_eglCreateWindowSurface;_eglCreateWindowSurface.sig="iiiii";function _eglDestroySurface(display,surface){if(display!=62e3){EGL.setErrorCode(12296);return 0}if(surface!=62006){EGL.setErrorCode(12301);return 1}if(EGL.currentReadSurface==surface){EGL.currentReadSurface=0}if(EGL.currentDrawSurface==surface){EGL.currentDrawSurface=0}EGL.setErrorCode(12288);return 1}Module["_eglDestroySurface"]=_eglDestroySurface;_eglDestroySurface.sig="iii";function _eglCreateContext(display,config,hmm,contextAttribs){if(display!=62e3){EGL.setErrorCode(12296);return 0}var glesContextVersion=1;for(;;){var param=HEAP32[contextAttribs>>2];if(param==12440){glesContextVersion=HEAP32[contextAttribs+4>>2]}else if(param==12344){break}else{EGL.setErrorCode(12292);return 
0}contextAttribs+=8}if(glesContextVersion!=2){EGL.setErrorCode(12293);return 0}EGL.contextAttributes.majorVersion=glesContextVersion-1;EGL.contextAttributes.minorVersion=0;EGL.context=GL.createContext(Module["canvas"],EGL.contextAttributes);if(EGL.context!=0){EGL.setErrorCode(12288);GL.makeContextCurrent(EGL.context);Module.useWebGL=true;Browser.moduleContextCreatedCallbacks.forEach(function(callback){callback()});GL.makeContextCurrent(null);return 62004}else{EGL.setErrorCode(12297);return 0}}Module["_eglCreateContext"]=_eglCreateContext;_eglCreateContext.sig="iiiii";function _eglDestroyContext(display,context){if(display!=62e3){EGL.setErrorCode(12296);return 0}if(context!=62004){EGL.setErrorCode(12294);return 0}GL.deleteContext(EGL.context);EGL.setErrorCode(12288);if(EGL.currentContext==context){EGL.currentContext=0}return 1}Module["_eglDestroyContext"]=_eglDestroyContext;_eglDestroyContext.sig="iii";function _eglQuerySurface(display,surface,attribute,value){if(display!=62e3){EGL.setErrorCode(12296);return 0}if(surface!=62006){EGL.setErrorCode(12301);return 0}if(!value){EGL.setErrorCode(12300);return 0}EGL.setErrorCode(12288);switch(attribute){case 12328:HEAP32[value>>2]=62002;return 1;case 12376:return 1;case 12375:HEAP32[value>>2]=Module["canvas"].width;return 1;case 12374:HEAP32[value>>2]=Module["canvas"].height;return 1;case 12432:HEAP32[value>>2]=-1;return 1;case 12433:HEAP32[value>>2]=-1;return 1;case 12434:HEAP32[value>>2]=-1;return 1;case 12422:HEAP32[value>>2]=12420;return 1;case 12441:HEAP32[value>>2]=12442;return 1;case 12435:HEAP32[value>>2]=12437;return 1;case 12416:case 12417:case 12418:case 12419:return 1;default:EGL.setErrorCode(12292);return 0}}Module["_eglQuerySurface"]=_eglQuerySurface;_eglQuerySurface.sig="iiiii";function _eglQueryContext(display,context,attribute,value){if(display!=62e3){EGL.setErrorCode(12296);return 0}if(context!=62004){EGL.setErrorCode(12294);return 0}if(!value){EGL.setErrorCode(12300);return 0}EGL.setErrorCode(12288);switch(attribute){case 12328:HEAP32[value>>2]=62002;return 1;case 12439:HEAP32[value>>2]=12448;return 1;case 12440:HEAP32[value>>2]=EGL.contextAttributes.majorVersion+1;return 1;case 12422:HEAP32[value>>2]=12420;return 1;default:EGL.setErrorCode(12292);return 0}}Module["_eglQueryContext"]=_eglQueryContext;_eglQueryContext.sig="iiiii";function _eglGetError(){return EGL.errorCode}Module["_eglGetError"]=_eglGetError;_eglGetError.sig="i";function _eglQueryString(display,name){if(display!=62e3){EGL.setErrorCode(12296);return 0}EGL.setErrorCode(12288);if(EGL.stringCache[name])return EGL.stringCache[name];var ret;switch(name){case 12371:ret=allocateUTF8("Emscripten");break;case 12372:ret=allocateUTF8("1.4 Emscripten EGL");break;case 12373:ret=allocateUTF8("");break;case 12429:ret=allocateUTF8("OpenGL_ES");break;default:EGL.setErrorCode(12300);return 0}EGL.stringCache[name]=ret;return ret}Module["_eglQueryString"]=_eglQueryString;_eglQueryString.sig="iii";function _eglBindAPI(api){if(api==12448){EGL.setErrorCode(12288);return 1}else{EGL.setErrorCode(12300);return 0}}Module["_eglBindAPI"]=_eglBindAPI;_eglBindAPI.sig="ii";function _eglQueryAPI(){EGL.setErrorCode(12288);return 12448}Module["_eglQueryAPI"]=_eglQueryAPI;_eglQueryAPI.sig="i";function _eglWaitClient(){EGL.setErrorCode(12288);return 1}Module["_eglWaitClient"]=_eglWaitClient;_eglWaitClient.sig="i";function _eglWaitNative(nativeEngineId){EGL.setErrorCode(12288);return 1}Module["_eglWaitNative"]=_eglWaitNative;_eglWaitNative.sig="ii";function _eglWaitGL(){return 
_eglWaitClient()}Module["_eglWaitGL"]=_eglWaitGL;_eglWaitGL.sig="i";function _eglSwapInterval(display,interval){if(display!=62e3){EGL.setErrorCode(12296);return 0}if(interval==0)_emscripten_set_main_loop_timing(0,0);else _emscripten_set_main_loop_timing(1,interval);EGL.setErrorCode(12288);return 1}Module["_eglSwapInterval"]=_eglSwapInterval;_eglSwapInterval.sig="iii";function _eglMakeCurrent(display,draw,read,context){if(display!=62e3){EGL.setErrorCode(12296);return 0}if(context!=0&&context!=62004){EGL.setErrorCode(12294);return 0}if(read!=0&&read!=62006||draw!=0&&draw!=62006){EGL.setErrorCode(12301);return 0}GL.makeContextCurrent(context?EGL.context:null);EGL.currentContext=context;EGL.currentDrawSurface=draw;EGL.currentReadSurface=read;EGL.setErrorCode(12288);return 1}Module["_eglMakeCurrent"]=_eglMakeCurrent;_eglMakeCurrent.sig="iiiii";function _eglGetCurrentContext(){return EGL.currentContext}Module["_eglGetCurrentContext"]=_eglGetCurrentContext;_eglGetCurrentContext.sig="i";function _eglGetCurrentSurface(readdraw){if(readdraw==12378){return EGL.currentReadSurface}else if(readdraw==12377){return EGL.currentDrawSurface}else{EGL.setErrorCode(12300);return 0}}Module["_eglGetCurrentSurface"]=_eglGetCurrentSurface;_eglGetCurrentSurface.sig="ii";function _eglGetCurrentDisplay(){return EGL.currentContext?62e3:0}Module["_eglGetCurrentDisplay"]=_eglGetCurrentDisplay;_eglGetCurrentDisplay.sig="i";function _eglSwapBuffers(){if(!EGL.defaultDisplayInitialized){EGL.setErrorCode(12289)}else if(!Module.ctx){EGL.setErrorCode(12290)}else if(Module.ctx.isContextLost()){EGL.setErrorCode(12302)}else{EGL.setErrorCode(12288);return 1}return 0}Module["_eglSwapBuffers"]=_eglSwapBuffers;_eglSwapBuffers.sig="iii";function _eglReleaseThread(){EGL.currentContext=0;EGL.currentReadSurface=0;EGL.currentDrawSurface=0;EGL.setErrorCode(12288);return 1}Module["_eglReleaseThread"]=_eglReleaseThread;_eglReleaseThread.sig="i";var GLFW={WindowFromId:function(id){if(id<=0||!GLFW.windows)return null;return GLFW.windows[id-1]},joystickFunc:null,errorFunc:null,monitorFunc:null,active:null,windows:null,monitors:null,monitorString:null,versionString:null,initialTime:null,extensions:null,hints:null,defaultHints:{131073:0,131074:0,131075:1,131076:1,131077:1,135169:8,135170:8,135171:8,135172:8,135173:24,135174:8,135175:0,135176:0,135177:0,135178:0,135179:0,135180:0,135181:0,135182:0,135183:0,139265:196609,139266:1,139267:0,139268:0,139269:0,139270:0,139271:0,139272:0},DOMToGLFWKeyCode:function(keycode){switch(keycode){case 32:return 32;case 222:return 39;case 188:return 44;case 173:return 45;case 189:return 45;case 190:return 46;case 191:return 47;case 48:return 48;case 49:return 49;case 50:return 50;case 51:return 51;case 52:return 52;case 53:return 53;case 54:return 54;case 55:return 55;case 56:return 56;case 57:return 57;case 59:return 59;case 61:return 61;case 187:return 61;case 65:return 65;case 66:return 66;case 67:return 67;case 68:return 68;case 69:return 69;case 70:return 70;case 71:return 71;case 72:return 72;case 73:return 73;case 74:return 74;case 75:return 75;case 76:return 76;case 77:return 77;case 78:return 78;case 79:return 79;case 80:return 80;case 81:return 81;case 82:return 82;case 83:return 83;case 84:return 84;case 85:return 85;case 86:return 86;case 87:return 87;case 88:return 88;case 89:return 89;case 90:return 90;case 219:return 91;case 220:return 92;case 221:return 93;case 192:return 94;case 27:return 256+1;case 112:return 256+2;case 113:return 256+3;case 114:return 256+4;case 115:return 256+5;case 116:return 
256+6;case 117:return 256+7;case 118:return 256+8;case 119:return 256+9;case 120:return 256+10;case 121:return 256+11;case 122:return 256+12;case 123:return 256+13;case 124:return 256+14;case 125:return 256+15;case 126:return 256+16;case 127:return 256+17;case 128:return 256+18;case 129:return 256+19;case 130:return 256+20;case 131:return 256+21;case 132:return 256+22;case 133:return 256+23;case 134:return 256+24;case 135:return 256+25;case 136:return 256+26;case 39:return 256+30;case 37:return 256+29;case 40:return 256+28;case 38:return 256+27;case 16:return 256+31;case 17:return 256+33;case 18:return 256+35;case 9:return 256+37;case 13:return 256+38;case 8:return 256+39;case 45:return 256+40;case 46:return 256+41;case 33:return 256+42;case 34:return 256+43;case 36:return 256+44;case 35:return 256+45;case 96:return 256+46;case 97:return 256+47;case 98:return 256+48;case 99:return 256+49;case 100:return 256+50;case 101:return 256+51;case 102:return 256+52;case 103:return 256+53;case 104:return 256+54;case 105:return 256+55;case 111:return 256+56;case 106:return 256+57;case 109:return 256+58;case 107:return 256+59;case 110:return 256+60;case 144:return 256+63;case 20:return 256+64;case 145:return 256+65;case 19:return 256+66;case 91:return 256+67;case 93:return 256+69;default:return-1}},getModBits:function(win){var mod=0;if(win.keys[340])mod|=1;if(win.keys[341])mod|=2;if(win.keys[342])mod|=4;if(win.keys[343])mod|=8;return mod},onKeyPress:function(event){if(!GLFW.active||!GLFW.active.charFunc)return;if(event.ctrlKey||event.metaKey)return;var charCode=event.charCode;if(charCode==0||charCode>=0&&charCode<=31)return;wasmTable.get(GLFW.active.charFunc)(charCode,1)},onKeyChanged:function(keyCode,status){if(!GLFW.active)return;var key=GLFW.DOMToGLFWKeyCode(keyCode);if(key==-1)return;GLFW.active.keys[key]=status;GLFW.active.domKeys[keyCode]=status;if(!GLFW.active.keyFunc)return;wasmTable.get(GLFW.active.keyFunc)(key,status)},onGamepadConnected:function(event){GLFW.refreshJoysticks()},onGamepadDisconnected:function(event){GLFW.refreshJoysticks()},onKeydown:function(event){GLFW.onKeyChanged(event.keyCode,1);if(event.keyCode===8||event.keyCode===9){event.preventDefault()}},onKeyup:function(event){GLFW.onKeyChanged(event.keyCode,0)},onBlur:function(event){if(!GLFW.active)return;for(var i=0;i0){if(eventButton==1){eventButton=2}else{eventButton=1}}return eventButton},onMouseenter:function(event){if(!GLFW.active)return;if(event.target!=Module["canvas"]||!GLFW.active.cursorEnterFunc)return},onMouseleave:function(event){if(!GLFW.active)return;if(event.target!=Module["canvas"]||!GLFW.active.cursorEnterFunc)return},onMouseButtonChanged:function(event,status){if(!GLFW.active)return;Browser.calculateMouseEvent(event);if(event.target!=Module["canvas"])return;var eventButton=GLFW.DOMToGLFWMouseButton(event);if(status==1){GLFW.active.buttons|=1<0?Math.max(delta,1):Math.min(delta,-1);GLFW.wheelPos+=delta;if(!GLFW.active||!GLFW.active.scrollFunc||event.target!=Module["canvas"])return;wasmTable.get(GLFW.active.scrollFunc)(GLFW.wheelPos);event.preventDefault()},onCanvasResize:function(width,height){if(!GLFW.active)return;var 
resizeNeeded=true;if(document["fullscreen"]||document["fullScreen"]||document["mozFullScreen"]||document["webkitIsFullScreen"]){GLFW.active.storedX=GLFW.active.x;GLFW.active.storedY=GLFW.active.y;GLFW.active.storedWidth=GLFW.active.width;GLFW.active.storedHeight=GLFW.active.height;GLFW.active.x=GLFW.active.y=0;GLFW.active.width=screen.width;GLFW.active.height=screen.height;GLFW.active.fullscreen=true}else if(GLFW.active.fullscreen==true){GLFW.active.x=GLFW.active.storedX;GLFW.active.y=GLFW.active.storedY;GLFW.active.width=GLFW.active.storedWidth;GLFW.active.height=GLFW.active.storedHeight;GLFW.active.fullscreen=false}else if(GLFW.active.width!=width||GLFW.active.height!=height){GLFW.active.width=width;GLFW.active.height=height}else{resizeNeeded=false}if(resizeNeeded){Browser.setCanvasSize(GLFW.active.width,GLFW.active.height,true);GLFW.onWindowSizeChanged();GLFW.onFramebufferSizeChanged()}},onWindowSizeChanged:function(){if(!GLFW.active)return;if(!GLFW.active.windowSizeFunc)return;wasmTable.get(GLFW.active.windowSizeFunc)(GLFW.active.width,GLFW.active.height)},onFramebufferSizeChanged:function(){if(!GLFW.active)return;if(!GLFW.active.framebufferSizeFunc)return},getTime:function(){return _emscripten_get_now()/1e3},setWindowTitle:function(winid,title){var win=GLFW.WindowFromId(winid);if(!win)return;win.title=UTF8ToString(title);if(GLFW.active.id==win.id){document.title=win.title}},setJoystickCallback:function(cbfun){GLFW.joystickFunc=cbfun;GLFW.refreshJoysticks()},joys:{},lastGamepadState:null,lastGamepadStateFrame:null,refreshJoysticks:function(){if(Browser.mainLoop.currentFrameNumber!==GLFW.lastGamepadStateFrame||!Browser.mainLoop.currentFrameNumber){GLFW.lastGamepadState=navigator.getGamepads?navigator.getGamepads():navigator.webkitGetGamepads?navigator.webkitGetGamepads:null;GLFW.lastGamepadStateFrame=Browser.mainLoop.currentFrameNumber;for(var joy=0;joy0},getCursorPos:function(winid,x,y){setValue(x,Browser.mouseX,"double");setValue(y,Browser.mouseY,"double")},getMousePos:function(winid,x,y){setValue(x,Browser.mouseX,"i32");setValue(y,Browser.mouseY,"i32")},setCursorPos:function(winid,x,y){},getWindowPos:function(winid,x,y){var wx=0;var wy=0;var win=GLFW.WindowFromId(winid);if(win){wx=win.x;wy=win.y}if(x){setValue(x,wx,"i32")}if(y){setValue(y,wy,"i32")}},setWindowPos:function(winid,x,y){var win=GLFW.WindowFromId(winid);if(!win)return;win.x=x;win.y=y},getWindowSize:function(winid,width,height){var ww=0;var wh=0;var win=GLFW.WindowFromId(winid);if(win){ww=win.width;wh=win.height}if(width){setValue(width,ww,"i32")}if(height){setValue(height,wh,"i32")}},setWindowSize:function(winid,width,height){var win=GLFW.WindowFromId(winid);if(!win)return;if(GLFW.active.id==win.id){if(width==screen.width&&height==screen.height){Browser.requestFullscreen()}else{Browser.exitFullscreen();Browser.setCanvasSize(width,height);win.width=width;win.height=height}}if(!win.windowSizeFunc)return;wasmTable.get(win.windowSizeFunc)(width,height)},createWindow:function(width,height,title,monitor,share){var i,id;for(i=0;i0)throw"glfwCreateWindow only supports one window at time currently";id=i+1;if(width<=0||height<=0)return 0;if(monitor){Browser.requestFullscreen()}else{Browser.setCanvasSize(width,height)}for(i=0;i0;if(i==GLFW.windows.length){if(useWebGL){var contextAttributes={antialias:GLFW.hints[135181]>1,depth:GLFW.hints[135173]>0,stencil:GLFW.hints[135174]>0,alpha:GLFW.hints[135172]>0};Module.ctx=Browser.createContext(Module["canvas"],true,true,contextAttributes)}else{Browser.init()}}if(!Module.ctx&&useWebGL)return 
0;var win=new GLFW_Window(id,width,height,title,monitor,share);if(id-1==GLFW.windows.length){GLFW.windows.push(win)}else{GLFW.windows[id-1]=win}GLFW.active=win;return win.id},destroyWindow:function(winid){var win=GLFW.WindowFromId(winid);if(!win)return;GLFW.windows[win.id-1]=null;if(GLFW.active.id==win.id)GLFW.active=null;for(var i=0;i>2];if(val){return 0}}return 1}Module["_uuid_is_null"]=_uuid_is_null;function _uuid_parse(inp,uu){inp=UTF8ToString(inp);if(inp.length===36){var i=0;var uuid=new Array(16);inp.toLowerCase().replace(/[0-9a-f]{2}/g,function(byte){if(i<16){uuid[i++]=parseInt(byte,16)}});if(i<16){return-1}else{writeArrayToMemory(uuid,uu);return 0}}else{return-1}}Module["_uuid_parse"]=_uuid_parse;function _uuid_unparse(uu,out,upper){var i=0;var uuid="xxxx-xx-xx-xx-xxxxxx".replace(/[x]/g,function(c){var r=upper?HEAPU8[uu+i>>0].toString(16).toUpperCase():HEAPU8[uu+i>>0].toString(16);r=r.length===1?"0"+r:r;i++;return r});stringToUTF8(uuid,out,37)}Module["_uuid_unparse"]=_uuid_unparse;function _uuid_unparse_lower(uu,out){_uuid_unparse(uu,out)}Module["_uuid_unparse_lower"]=_uuid_unparse_lower;function _uuid_unparse_upper(uu,out){_uuid_unparse(uu,out,true)}Module["_uuid_unparse_upper"]=_uuid_unparse_upper;function _uuid_type(uu){return 4}Module["_uuid_type"]=_uuid_type;function _uuid_variant(uu){return 1}Module["_uuid_variant"]=_uuid_variant;var GLEW={isLinaroFork:1,extensions:null,error:{0:null,1:null,2:null,3:null,4:null,5:null,6:null,7:null,8:null},version:{1:null,2:null,3:null,4:null},errorStringConstantFromCode:function(error){if(GLEW.isLinaroFork){switch(error){case 4:return"OpenGL ES lib expected, found OpenGL lib";case 5:return"OpenGL lib expected, found OpenGL ES lib";case 6:return"Missing EGL version";case 7:return"EGL 1.1 and up are supported";default:break}}switch(error){case 0:return"No error";case 1:return"Missing GL version";case 2:return"GL 1.1 and up are supported";case 3:return"GLX 1.2 and up are supported";default:return null}},errorString:function(error){if(!GLEW.error[error]){var string=GLEW.errorStringConstantFromCode(error);if(!string){string="Unknown error";error=8}GLEW.error[error]=allocate(intArrayFromString(string),ALLOC_NORMAL)}return GLEW.error[error]},versionStringConstantFromCode:function(name){switch(name){case 1:return"1.10.0";case 2:return"1";case 3:return"10";case 4:return"0";default:return null}},versionString:function(name){if(!GLEW.version[name]){var string=GLEW.versionStringConstantFromCode(name);if(!string)return 0;GLEW.version[name]=allocate(intArrayFromString(string),ALLOC_NORMAL)}return GLEW.version[name]},extensionIsSupported:function(name){if(!GLEW.extensions){GLEW.extensions=UTF8ToString(_glGetString(7939)).split(" ")}if(GLEW.extensions.includes(name))return 1;return GLEW.extensions.includes("GL_"+name)}};Module["GLEW"]=GLEW;function _glewInit(){return 0}Module["_glewInit"]=_glewInit;function _glewIsSupported(name){var exts=UTF8ToString(name).split(" ");for(var i=0;i0)};req.onerror=function(error){callback(error)}})}};Module["IDBStore"]=IDBStore;function _emscripten_idb_async_load(db,id,arg,onload,onerror){IDBStore.getFile(UTF8ToString(db),UTF8ToString(id),function(error,byteArray){if(error){if(onerror)wasmTable.get(onerror)(arg);return}var buffer=_malloc(byteArray.length);HEAPU8.set(byteArray,buffer);wasmTable.get(onload)(arg,buffer,byteArray.length);_free(buffer)})}Module["_emscripten_idb_async_load"]=_emscripten_idb_async_load;function 
_emscripten_idb_async_store(db,id,ptr,num,arg,onstore,onerror){IDBStore.setFile(UTF8ToString(db),UTF8ToString(id),new Uint8Array(HEAPU8.subarray(ptr,ptr+num)),function(error){if(error){if(onerror)wasmTable.get(onerror)(arg);return}if(onstore)wasmTable.get(onstore)(arg)})}Module["_emscripten_idb_async_store"]=_emscripten_idb_async_store;function _emscripten_idb_async_delete(db,id,arg,ondelete,onerror){IDBStore.deleteFile(UTF8ToString(db),UTF8ToString(id),function(error){if(error){if(onerror)wasmTable.get(onerror)(arg);return}if(ondelete)wasmTable.get(ondelete)(arg)})}Module["_emscripten_idb_async_delete"]=_emscripten_idb_async_delete;function _emscripten_idb_async_exists(db,id,arg,oncheck,onerror){IDBStore.existsFile(UTF8ToString(db),UTF8ToString(id),function(error,exists){if(error){if(onerror)wasmTable.get(onerror)(arg);return}if(oncheck)wasmTable.get(oncheck)(arg,exists)})}Module["_emscripten_idb_async_exists"]=_emscripten_idb_async_exists;function _emscripten_idb_load(){throw"Please compile your program with async support in order to use synchronous operations like emscripten_idb_load, etc."}Module["_emscripten_idb_load"]=_emscripten_idb_load;function _emscripten_idb_store(){throw"Please compile your program with async support in order to use synchronous operations like emscripten_idb_store, etc."}Module["_emscripten_idb_store"]=_emscripten_idb_store;function _emscripten_idb_delete(){throw"Please compile your program with async support in order to use synchronous operations like emscripten_idb_delete, etc."}Module["_emscripten_idb_delete"]=_emscripten_idb_delete;function _emscripten_idb_exists(){throw"Please compile your program with async support in order to use synchronous operations like emscripten_idb_exists, etc."}Module["_emscripten_idb_exists"]=_emscripten_idb_exists;function runAndAbortIfError(func){try{return func()}catch(e){abort(e)}}Module["runAndAbortIfError"]=runAndAbortIfError;function _emscripten_sleep(){throw"Please compile your program with async support in order to use asynchronous operations like emscripten_sleep"}Module["_emscripten_sleep"]=_emscripten_sleep;function _emscripten_wget(){throw"Please compile your program with async support in order to use asynchronous operations like emscripten_wget"}Module["_emscripten_wget"]=_emscripten_wget;function _emscripten_wget_data(){throw"Please compile your program with async support in order to use asynchronous operations like emscripten_wget_data"}Module["_emscripten_wget_data"]=_emscripten_wget_data;function _emscripten_scan_registers(){throw"Please compile your program with async support in order to use asynchronous operations like emscripten_scan_registers"}Module["_emscripten_scan_registers"]=_emscripten_scan_registers;function _emscripten_fiber_init(){throw"Please compile your program with async support in order to use asynchronous operations like emscripten_fiber_init"}Module["_emscripten_fiber_init"]=_emscripten_fiber_init;function _emscripten_fiber_init_from_current_context(){throw"Please compile your program with async support in order to use asynchronous operations like emscripten_fiber_init_from_current_context"}Module["_emscripten_fiber_init_from_current_context"]=_emscripten_fiber_init_from_current_context;function _emscripten_fiber_swap(){throw"Please compile your program with async support in order to use asynchronous operations like emscripten_fiber_swap"}Module["_emscripten_fiber_swap"]=_emscripten_fiber_swap;function 
_emscripten_is_main_browser_thread(){return!ENVIRONMENT_IS_WORKER}Module["_emscripten_is_main_browser_thread"]=_emscripten_is_main_browser_thread;function ___cxa_thread_atexit(a0,a1){return _atexit(a0,a1)}Module["___cxa_thread_atexit"]=___cxa_thread_atexit;___cxa_thread_atexit.sig="iii";function ___cxa_thread_atexit_impl(a0,a1){return _atexit(a0,a1)}Module["___cxa_thread_atexit_impl"]=___cxa_thread_atexit_impl;___cxa_thread_atexit_impl.sig="iii";Module["requestFullscreen"]=function Module_requestFullscreen(lockPointer,resizeCanvas){Browser.requestFullscreen(lockPointer,resizeCanvas)};Module["requestAnimationFrame"]=function Module_requestAnimationFrame(func){Browser.requestAnimationFrame(func)};Module["setCanvasSize"]=function Module_setCanvasSize(width,height,noUpdates){Browser.setCanvasSize(width,height,noUpdates)};Module["pauseMainLoop"]=function Module_pauseMainLoop(){Browser.mainLoop.pause()};Module["resumeMainLoop"]=function Module_resumeMainLoop(){Browser.mainLoop.resume()};Module["getUserMedia"]=function Module_getUserMedia(){Browser.getUserMedia()};Module["createContext"]=function Module_createContext(canvas,useWebGL,setInModule,webGLContextAttributes){return Browser.createContext(canvas,useWebGL,setInModule,webGLContextAttributes)};var FSNode=function(parent,name,mode,rdev){if(!parent){parent=this}this.parent=parent;this.mount=parent.mount;this.mounted=null;this.id=FS.nextInode++;this.name=name;this.mode=mode;this.node_ops={};this.stream_ops={};this.rdev=rdev};var readMode=292|73;var writeMode=146;Object.defineProperties(FSNode.prototype,{read:{get:function(){return(this.mode&readMode)===readMode},set:function(val){val?this.mode|=readMode:this.mode&=~readMode}},write:{get:function(){return(this.mode&writeMode)===writeMode},set:function(val){val?this.mode|=writeMode:this.mode&=~writeMode}},isFolder:{get:function(){return FS.isDir(this.mode)}},isDevice:{get:function(){return FS.isChrdev(this.mode)}}});FS.FSNode=FSNode;FS.staticInit();Module["FS_createPath"]=FS.createPath;Module["FS_createDataFile"]=FS.createDataFile;Module["FS_createPreloadedFile"]=FS.createPreloadedFile;Module["FS_createLazyFile"]=FS.createLazyFile;Module["FS_createDevice"]=FS.createDevice;Module["FS_unlink"]=FS.unlink;if(ENVIRONMENT_IS_NODE){var fs=require("fs");var 
NODEJS_PATH=require("path");NODEFS.staticInit()}ERRNO_CODES={"EPERM":63,"ENOENT":44,"ESRCH":71,"EINTR":27,"EIO":29,"ENXIO":60,"E2BIG":1,"ENOEXEC":45,"EBADF":8,"ECHILD":12,"EAGAIN":6,"EWOULDBLOCK":6,"ENOMEM":48,"EACCES":2,"EFAULT":21,"ENOTBLK":105,"EBUSY":10,"EEXIST":20,"EXDEV":75,"ENODEV":43,"ENOTDIR":54,"EISDIR":31,"EINVAL":28,"ENFILE":41,"EMFILE":33,"ENOTTY":59,"ETXTBSY":74,"EFBIG":22,"ENOSPC":51,"ESPIPE":70,"EROFS":69,"EMLINK":34,"EPIPE":64,"EDOM":18,"ERANGE":68,"ENOMSG":49,"EIDRM":24,"ECHRNG":106,"EL2NSYNC":156,"EL3HLT":107,"EL3RST":108,"ELNRNG":109,"EUNATCH":110,"ENOCSI":111,"EL2HLT":112,"EDEADLK":16,"ENOLCK":46,"EBADE":113,"EBADR":114,"EXFULL":115,"ENOANO":104,"EBADRQC":103,"EBADSLT":102,"EDEADLOCK":16,"EBFONT":101,"ENOSTR":100,"ENODATA":116,"ETIME":117,"ENOSR":118,"ENONET":119,"ENOPKG":120,"EREMOTE":121,"ENOLINK":47,"EADV":122,"ESRMNT":123,"ECOMM":124,"EPROTO":65,"EMULTIHOP":36,"EDOTDOT":125,"EBADMSG":9,"ENOTUNIQ":126,"EBADFD":127,"EREMCHG":128,"ELIBACC":129,"ELIBBAD":130,"ELIBSCN":131,"ELIBMAX":132,"ELIBEXEC":133,"ENOSYS":52,"ENOTEMPTY":55,"ENAMETOOLONG":37,"ELOOP":32,"EOPNOTSUPP":138,"EPFNOSUPPORT":139,"ECONNRESET":15,"ENOBUFS":42,"EAFNOSUPPORT":5,"EPROTOTYPE":67,"ENOTSOCK":57,"ENOPROTOOPT":50,"ESHUTDOWN":140,"ECONNREFUSED":14,"EADDRINUSE":3,"ECONNABORTED":13,"ENETUNREACH":40,"ENETDOWN":38,"ETIMEDOUT":73,"EHOSTDOWN":142,"EHOSTUNREACH":23,"EINPROGRESS":26,"EALREADY":7,"EDESTADDRREQ":17,"EMSGSIZE":35,"EPROTONOSUPPORT":66,"ESOCKTNOSUPPORT":137,"EADDRNOTAVAIL":4,"ENETRESET":39,"EISCONN":30,"ENOTCONN":53,"ETOOMANYREFS":141,"EUSERS":136,"EDQUOT":19,"ESTALE":72,"ENOTSUP":138,"ENOMEDIUM":148,"EILSEQ":25,"EOVERFLOW":61,"ECANCELED":11,"ENOTRECOVERABLE":56,"EOWNERDEAD":62,"ESTRPIPE":135};var GLctx;for(var i=0;i<32;++i)tempFixedLengthArray.push(new Array(i));var miniTempWebGLFloatBuffersStorage=new Float32Array(288);for(var i=0;i<288;++i){miniTempWebGLFloatBuffers[i]=miniTempWebGLFloatBuffersStorage.subarray(0,i+1)}var __miniTempWebGLIntBuffersStorage=new Int32Array(288);for(var i=0;i<288;++i){__miniTempWebGLIntBuffers[i]=__miniTempWebGLIntBuffersStorage.subarray(0,i+1)}var emSetImmediate;var emClearImmediate;if(typeof setImmediate!=="undefined"){emSetImmediate=setImmediate;emClearImmediate=clearImmediate}else if(typeof addEventListener==="function"){var __setImmediate_id_counter=0;var __setImmediate_queue=[];var __setImmediate_message_id="_si";function __setImmediate_cb(e){if(e.data===__setImmediate_message_id){e.stopPropagation();__setImmediate_queue.shift()();++__setImmediate_id_counter}}addEventListener("message",__setImmediate_cb,true);emSetImmediate=function(func){postMessage(__setImmediate_message_id,"*");return __setImmediate_id_counter+__setImmediate_queue.push(func)-1};emClearImmediate=function(id){var index=id-__setImmediate_id_counter;if(index>=0&&index<__setImmediate_queue.length)__setImmediate_queue[index]=function(){}}}var ASSERTIONS=false;function intArrayFromString(stringy,dontAddNull,length){var len=length>0?length:lengthBytesUTF8(stringy)+1;var u8array=new Array(len);var numBytesWritten=stringToUTF8Array(stringy,u8array,0,u8array.length);if(dontAddNull)u8array.length=numBytesWritten;return u8array}function intArrayToString(array){var ret=[];for(var i=0;i255){if(ASSERTIONS){assert(false,"Character code "+chr+" ("+String.fromCharCode(chr)+") at offset "+i+" not in 0x00-0xFF.")}chr&=255}ret.push(String.fromCharCode(chr))}return ret.join("")}var decodeBase64=typeof atob==="function"?atob:function(input){var 
keyStr="ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/=";var output="";var chr1,chr2,chr3;var enc1,enc2,enc3,enc4;var i=0;input=input.replace(/[^A-Za-z0-9\+\/\=]/g,"");do{enc1=keyStr.indexOf(input.charAt(i++));enc2=keyStr.indexOf(input.charAt(i++));enc3=keyStr.indexOf(input.charAt(i++));enc4=keyStr.indexOf(input.charAt(i++));chr1=enc1<<2|enc2>>4;chr2=(enc2&15)<<4|enc3>>2;chr3=(enc3&3)<<6|enc4;output=output+String.fromCharCode(chr1);if(enc3!==64){output=output+String.fromCharCode(chr2)}if(enc4!==64){output=output+String.fromCharCode(chr3)}}while(i>2]=allocateUTF8OnStack(thisProgram);for(var i=1;i>2)+i]=allocateUTF8OnStack(args[i-1])}HEAP32[(argv>>2)+argc]=0;try{var ret=entryFunction(argc,argv);exit(ret,true)}catch(e){if(e instanceof ExitStatus||e=="unwind"){return}var toLog=e;if(e&&typeof e==="object"&&e.stack){toLog=[e,e.stack]}err("exception thrown: "+toLog);quit_(1,e)}finally{calledMain=true}}var dylibsLoaded=false;function run(args){args=args||arguments_;if(runDependencies>0){return}if(!dylibsLoaded){preloadDylibs();dylibsLoaded=true;if(runDependencies>0){return}}preRun();if(runDependencies>0){return}function doRun(){if(calledRun)return;calledRun=true;Module["calledRun"]=true;if(ABORT)return;initRuntime();preMain();readyPromiseResolve(Module);if(Module["onRuntimeInitialized"])Module["onRuntimeInitialized"]();if(shouldRunNow)callMain(args);postRun()}if(Module["setStatus"]){Module["setStatus"]("Running...");setTimeout(function(){setTimeout(function(){Module["setStatus"]("")},1);doRun()},1)}else{doRun()}}Module["run"]=run;function exit(status,implicit){EXITSTATUS=status;if(keepRuntimeAlive()){}else{exitRuntime()}procExit(status)}function procExit(code){EXITSTATUS=code;if(!keepRuntimeAlive()){if(Module["onExit"])Module["onExit"](code);ABORT=true}quit_(code,new ExitStatus(code))}if(Module["preInit"]){if(typeof Module["preInit"]=="function")Module["preInit"]=[Module["preInit"]];while(Module["preInit"].length>0){Module["preInit"].pop()()}}var shouldRunNow=true;if(Module["noInitialRun"])shouldRunNow=false;run(); - - - return _createPyodideModule.ready -} -); -})(); -globalThis._createPyodideModule = _createPyodideModule; diff --git a/spaces/haakohu/deep_privacy2/configs/defaults.py b/spaces/haakohu/deep_privacy2/configs/defaults.py deleted file mode 100644 index 4f831200940c0aa9658bab3a82d6ad5714048d3c..0000000000000000000000000000000000000000 --- a/spaces/haakohu/deep_privacy2/configs/defaults.py +++ /dev/null @@ -1,53 +0,0 @@ -import pathlib -import os -import torch -from tops.config import LazyCall as L - -if "PRETRAINED_CHECKPOINTS_PATH" in os.environ: - PRETRAINED_CHECKPOINTS_PATH = pathlib.Path(os.environ["PRETRAINED_CHECKPOINTS_PATH"]) -else: - PRETRAINED_CHECKPOINTS_PATH = pathlib.Path("pretrained_checkpoints") -if "BASE_OUTPUT_DIR" in os.environ: - BASE_OUTPUT_DIR = pathlib.Path(os.environ["BASE_OUTPUT_DIR"]) -else: - BASE_OUTPUT_DIR = pathlib.Path("outputs") - - - -common = dict( - logger_backend=["wandb", "stdout", "json", "image_dumper"], - wandb_project="deep_privacy2", - output_dir=BASE_OUTPUT_DIR, - experiment_name=None, # Optional experiment name to show on wandb -) - -train = dict( - batch_size=32, - seed=0, - ims_per_log=1024, - ims_per_val=int(200e3), - max_images_to_train=int(12e6), - amp=dict( - enabled=True, - scaler_D=L(torch.cuda.amp.GradScaler)(init_scale=2**16, growth_factor=4, growth_interval=100, enabled="${..enabled}"), - scaler_G=L(torch.cuda.amp.GradScaler)(init_scale=2**16, growth_factor=4, growth_interval=100, enabled="${..enabled}"), - ), - 
fp16_ddp_accumulate=False, # All gather gradients in fp16? - broadcast_buffers=False, - bias_act_plugin_enabled=True, - grid_sample_gradfix_enabled=True, - conv2d_gradfix_enabled=False, - channels_last=False, - compile_G=dict( - enabled=False, - mode="default" # default, reduce-overhead or max-autotune - ), - compile_D=dict( - enabled=False, - mode="default" # default, reduce-overhead or max-autotune - ) -) - -# exponential moving average -EMA = dict(rampup=0.05) - diff --git a/spaces/hamelcubsfan/AutoGPT/autogpt/config/ai_config.py b/spaces/hamelcubsfan/AutoGPT/autogpt/config/ai_config.py deleted file mode 100644 index d50c30beee9dc8009f63415378ae1c6a399f0037..0000000000000000000000000000000000000000 --- a/spaces/hamelcubsfan/AutoGPT/autogpt/config/ai_config.py +++ /dev/null @@ -1,121 +0,0 @@ -# sourcery skip: do-not-use-staticmethod -""" -A module that contains the AIConfig class object that contains the configuration -""" -from __future__ import annotations - -import os -from typing import Type - -import yaml - - -class AIConfig: - """ - A class object that contains the configuration information for the AI - - Attributes: - ai_name (str): The name of the AI. - ai_role (str): The description of the AI's role. - ai_goals (list): The list of objectives the AI is supposed to complete. - """ - - def __init__( - self, ai_name: str = "", ai_role: str = "", ai_goals: list | None = None - ) -> None: - """ - Initialize a class instance - - Parameters: - ai_name (str): The name of the AI. - ai_role (str): The description of the AI's role. - ai_goals (list): The list of objectives the AI is supposed to complete. - Returns: - None - """ - if ai_goals is None: - ai_goals = [] - self.ai_name = ai_name - self.ai_role = ai_role - self.ai_goals = ai_goals - - # Soon this will go in a folder where it remembers more stuff about the run(s) - SAVE_FILE = os.path.join(os.path.dirname(__file__), "..", "ai_settings.yaml") - - @staticmethod - def load(config_file: str = SAVE_FILE) -> "AIConfig": - """ - Returns class object with parameters (ai_name, ai_role, ai_goals) loaded from - yaml file if yaml file exists, - else returns class with no parameters. - - Parameters: - config_file (int): The path to the config yaml file. - DEFAULT: "../ai_settings.yaml" - - Returns: - cls (object): An instance of given cls object - """ - - try: - with open(config_file, encoding="utf-8") as file: - config_params = yaml.load(file, Loader=yaml.FullLoader) - except FileNotFoundError: - config_params = {} - - ai_name = config_params.get("ai_name", "") - ai_role = config_params.get("ai_role", "") - ai_goals = config_params.get("ai_goals", []) - # type: Type[AIConfig] - return AIConfig(ai_name, ai_role, ai_goals) - - def save(self, config_file: str = SAVE_FILE) -> None: - """ - Saves the class parameters to the specified file yaml file path as a yaml file. - - Parameters: - config_file(str): The path to the config yaml file. - DEFAULT: "../ai_settings.yaml" - - Returns: - None - """ - - config = { - "ai_name": self.ai_name, - "ai_role": self.ai_role, - "ai_goals": self.ai_goals, - } - with open(config_file, "w", encoding="utf-8") as file: - yaml.dump(config, file, allow_unicode=True) - - def construct_full_prompt(self) -> str: - """ - Returns a prompt to the user with the class information in an organized fashion. - - Parameters: - None - - Returns: - full_prompt (str): A string containing the initial prompt for the user - including the ai_name, ai_role and ai_goals. 
- """ - - prompt_start = ( - "Your decisions must always be made independently without" - " seeking user assistance. Play to your strengths as an LLM and pursue" - " simple strategies with no legal complications." - "" - ) - - from autogpt.prompt import get_prompt - - # Construct full prompt - full_prompt = ( - f"You are {self.ai_name}, {self.ai_role}\n{prompt_start}\n\nGOALS:\n\n" - ) - for i, goal in enumerate(self.ai_goals): - full_prompt += f"{i+1}. {goal}\n" - - full_prompt += f"\n\n{get_prompt()}" - return full_prompt diff --git a/spaces/hamelcubsfan/AutoGPT/ui/app.py b/spaces/hamelcubsfan/AutoGPT/ui/app.py deleted file mode 100644 index d7dbd31e901969d090292215935bdbc3d9d75e37..0000000000000000000000000000000000000000 --- a/spaces/hamelcubsfan/AutoGPT/ui/app.py +++ /dev/null @@ -1,145 +0,0 @@ -import gradio as gr -import utils -from api import AutoAPI, get_openai_api_key -import os, shutil -import json - -FILE_DIR = os.path.dirname(os.path.abspath(__file__)) -OUTPUT_DIR = os.path.join(os.path.dirname(FILE_DIR), "auto_gpt_workspace") -if not os.path.exists(OUTPUT_DIR): - os.mkdir(OUTPUT_DIR) - -CSS = """ -#chatbot {font-family: monospace;} -#files .generating {display: none;} -#files .min {min-height: 0px;} -""" - -with gr.Blocks(css=CSS) as app: - with gr.Column() as setup_pane: - gr.Markdown(f"""# Auto-GPT - 1. Duplicate this Space: Duplicate Space This will **NOT** work without duplication! - 2. Enter your OpenAI API Key below. - """) - with gr.Row(): - open_ai_key = gr.Textbox( - value=get_openai_api_key(), - label="OpenAI API Key", - type="password", - ) - gr.Markdown( - "3. Fill the values below, then click 'Start'. There are example values you can load at the bottom of this page." - ) - with gr.Row(): - ai_name = gr.Textbox(label="AI Name", placeholder="e.g. Entrepreneur-GPT") - ai_role = gr.Textbox( - label="AI Role", - placeholder="e.g. an AI designed to autonomously develop and run businesses with the sole goal of increasing your net worth.", - ) - top_5_goals = gr.Dataframe( - row_count=(5, "fixed"), - col_count=(1, "fixed"), - headers=["AI Goals - Enter up to 5"], - type="array" - ) - start_btn = gr.Button("Start", variant="primary") - with open(os.path.join(FILE_DIR, "examples.json"), "r") as f: - example_values = json.load(f) - gr.Examples( - example_values, - [ai_name, ai_role, top_5_goals], - ) - with gr.Column(visible=False) as main_pane: - with gr.Row(): - with gr.Column(scale=2): - chatbot = gr.Chatbot(elem_id="chatbot") - with gr.Row(): - yes_btn = gr.Button("Yes", variant="primary", interactive=False) - consecutive_yes = gr.Slider( - 1, 10, 1, step=1, label="Consecutive Yes", interactive=False - ) - custom_response = gr.Textbox( - label="Custom Response", - placeholder="Press 'Enter' to Submit.", - interactive=False, - ) - with gr.Column(scale=1): - gr.HTML( - lambda: f""" - Generated Files -
        <pre>{utils.format_directory(OUTPUT_DIR)}</pre>
        - """, every=3, elem_id="files" - ) - download_btn = gr.Button("Download All Files") - - chat_history = gr.State([[None, None]]) - api = gr.State(None) - - def start(open_ai_key, ai_name, ai_role, top_5_goals): - auto_api = AutoAPI(open_ai_key, ai_name, ai_role, top_5_goals) - return gr.Column.update(visible=False), gr.Column.update(visible=True), auto_api - - def bot_response(chat, api): - messages = [] - for message in api.get_chatbot_response(): - messages.append(message) - chat[-1][1] = "\n".join(messages) + "..." - yield chat - chat[-1][1] = "\n".join(messages) - yield chat - - def send_message(count, chat, api, message="Y"): - if message != "Y": - count = 1 - for i in range(count): - chat.append([message, None]) - yield chat, count - i - api.send_message(message) - for updated_chat in bot_response(chat, api): - yield updated_chat, count - i - - def activate_inputs(): - return { - yes_btn: gr.Button.update(interactive=True), - consecutive_yes: gr.Slider.update(interactive=True), - custom_response: gr.Textbox.update(interactive=True), - } - - def deactivate_inputs(): - return { - yes_btn: gr.Button.update(interactive=False), - consecutive_yes: gr.Slider.update(interactive=False), - custom_response: gr.Textbox.update(interactive=False), - } - - start_btn.click( - start, - [open_ai_key, ai_name, ai_role, top_5_goals], - [setup_pane, main_pane, api], - ).then(bot_response, [chat_history, api], chatbot).then( - activate_inputs, None, [yes_btn, consecutive_yes, custom_response] - ) - - yes_btn.click( - deactivate_inputs, None, [yes_btn, consecutive_yes, custom_response] - ).then( - send_message, [consecutive_yes, chat_history, api], [chatbot, consecutive_yes] - ).then( - activate_inputs, None, [yes_btn, consecutive_yes, custom_response] - ) - custom_response.submit( - deactivate_inputs, None, [yes_btn, consecutive_yes, custom_response] - ).then( - send_message, - [consecutive_yes, chat_history, api, custom_response], - [chatbot, consecutive_yes], - ).then( - activate_inputs, None, [yes_btn, consecutive_yes, custom_response] - ) - - def download_all_files(): - shutil.make_archive("outputs", "zip", OUTPUT_DIR) - - download_btn.click(download_all_files).then(None, _js=utils.DOWNLOAD_OUTPUTS_JS) - -app.queue(concurrency_count=20).launch(file_directories=[OUTPUT_DIR]) diff --git a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/projects/DensePose/densepose/densepose_head.py b/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/projects/DensePose/densepose/densepose_head.py deleted file mode 100644 index 363970681db36a41d5bc5b1960960a2a8bf23855..0000000000000000000000000000000000000000 --- a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/projects/DensePose/densepose/densepose_head.py +++ /dev/null @@ -1,1216 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
All Rights Reserved -import math -from dataclasses import dataclass -from enum import Enum -import fvcore.nn.weight_init as weight_init -import torch -from torch import nn -from torch.nn import functional as F - -from detectron2.config import CfgNode -from detectron2.layers import Conv2d, ConvTranspose2d, interpolate -from detectron2.structures.boxes import matched_boxlist_iou -from detectron2.utils.registry import Registry - -from .data.structures import DensePoseOutput - -ROI_DENSEPOSE_HEAD_REGISTRY = Registry("ROI_DENSEPOSE_HEAD") - - -class DensePoseUVConfidenceType(Enum): - """ - Statistical model type for confidence learning, possible values: - - "iid_iso": statistically independent identically distributed residuals - with anisotropic covariance - - "indep_aniso": statistically independent residuals with anisotropic - covariances - For details, see: - N. Neverova, D. Novotny, A. Vedaldi "Correlated Uncertainty for Learning - Dense Correspondences from Noisy Labels", p. 918--926, in Proc. NIPS 2019 - """ - - # fmt: off - IID_ISO = "iid_iso" - INDEP_ANISO = "indep_aniso" - # fmt: on - - -@dataclass -class DensePoseUVConfidenceConfig: - """ - Configuration options for confidence on UV data - """ - - enabled: bool = False - # lower bound on UV confidences - epsilon: float = 0.01 - type: DensePoseUVConfidenceType = DensePoseUVConfidenceType.IID_ISO - - -@dataclass -class DensePoseConfidenceModelConfig: - """ - Configuration options for confidence models - """ - - # confidence for U and V values - uv_confidence: DensePoseUVConfidenceConfig - - @staticmethod - def from_cfg(cfg: CfgNode) -> "DensePoseConfidenceModelConfig": - return DensePoseConfidenceModelConfig( - uv_confidence=DensePoseUVConfidenceConfig( - enabled=cfg.MODEL.ROI_DENSEPOSE_HEAD.UV_CONFIDENCE.ENABLED, - epsilon=cfg.MODEL.ROI_DENSEPOSE_HEAD.UV_CONFIDENCE.EPSILON, - type=DensePoseUVConfidenceType(cfg.MODEL.ROI_DENSEPOSE_HEAD.UV_CONFIDENCE.TYPE), - ) - ) - - -def initialize_module_params(module): - for name, param in module.named_parameters(): - if "bias" in name: - nn.init.constant_(param, 0) - elif "weight" in name: - nn.init.kaiming_normal_(param, mode="fan_out", nonlinearity="relu") - - -@ROI_DENSEPOSE_HEAD_REGISTRY.register() -class DensePoseDeepLabHead(nn.Module): - def __init__(self, cfg, input_channels): - super(DensePoseDeepLabHead, self).__init__() - # fmt: off - hidden_dim = cfg.MODEL.ROI_DENSEPOSE_HEAD.CONV_HEAD_DIM - kernel_size = cfg.MODEL.ROI_DENSEPOSE_HEAD.CONV_HEAD_KERNEL - norm = cfg.MODEL.ROI_DENSEPOSE_HEAD.DEEPLAB.NORM - self.n_stacked_convs = cfg.MODEL.ROI_DENSEPOSE_HEAD.NUM_STACKED_CONVS - self.use_nonlocal = cfg.MODEL.ROI_DENSEPOSE_HEAD.DEEPLAB.NONLOCAL_ON - # fmt: on - pad_size = kernel_size // 2 - n_channels = input_channels - - self.ASPP = ASPP(input_channels, [6, 12, 56], n_channels) # 6, 12, 56 - self.add_module("ASPP", self.ASPP) - - if self.use_nonlocal: - self.NLBlock = NONLocalBlock2D(input_channels, bn_layer=True) - self.add_module("NLBlock", self.NLBlock) - # weight_init.c2_msra_fill(self.ASPP) - - for i in range(self.n_stacked_convs): - norm_module = nn.GroupNorm(32, hidden_dim) if norm == "GN" else None - layer = Conv2d( - n_channels, - hidden_dim, - kernel_size, - stride=1, - padding=pad_size, - bias=not norm, - norm=norm_module, - ) - weight_init.c2_msra_fill(layer) - n_channels = hidden_dim - layer_name = self._get_layer_name(i) - self.add_module(layer_name, layer) - self.n_out_channels = hidden_dim - # initialize_module_params(self) - - def forward(self, features): - x0 = features - x = 
self.ASPP(x0) - if self.use_nonlocal: - x = self.NLBlock(x) - output = x - for i in range(self.n_stacked_convs): - layer_name = self._get_layer_name(i) - x = getattr(self, layer_name)(x) - x = F.relu(x) - output = x - return output - - def _get_layer_name(self, i): - layer_name = "body_conv_fcn{}".format(i + 1) - return layer_name - - -# Copied from -# https://github.com/pytorch/vision/blob/master/torchvision/models/segmentation/deeplabv3.py -# See https://arxiv.org/pdf/1706.05587.pdf for details -class ASPPConv(nn.Sequential): - def __init__(self, in_channels, out_channels, dilation): - modules = [ - nn.Conv2d( - in_channels, out_channels, 3, padding=dilation, dilation=dilation, bias=False - ), - nn.GroupNorm(32, out_channels), - nn.ReLU(), - ] - super(ASPPConv, self).__init__(*modules) - - -class ASPPPooling(nn.Sequential): - def __init__(self, in_channels, out_channels): - super(ASPPPooling, self).__init__( - nn.AdaptiveAvgPool2d(1), - nn.Conv2d(in_channels, out_channels, 1, bias=False), - nn.GroupNorm(32, out_channels), - nn.ReLU(), - ) - - def forward(self, x): - size = x.shape[-2:] - x = super(ASPPPooling, self).forward(x) - return F.interpolate(x, size=size, mode="bilinear", align_corners=False) - - -class ASPP(nn.Module): - def __init__(self, in_channels, atrous_rates, out_channels): - super(ASPP, self).__init__() - modules = [] - modules.append( - nn.Sequential( - nn.Conv2d(in_channels, out_channels, 1, bias=False), - nn.GroupNorm(32, out_channels), - nn.ReLU(), - ) - ) - - rate1, rate2, rate3 = tuple(atrous_rates) - modules.append(ASPPConv(in_channels, out_channels, rate1)) - modules.append(ASPPConv(in_channels, out_channels, rate2)) - modules.append(ASPPConv(in_channels, out_channels, rate3)) - modules.append(ASPPPooling(in_channels, out_channels)) - - self.convs = nn.ModuleList(modules) - - self.project = nn.Sequential( - nn.Conv2d(5 * out_channels, out_channels, 1, bias=False), - # nn.BatchNorm2d(out_channels), - nn.ReLU() - # nn.Dropout(0.5) - ) - - def forward(self, x): - res = [] - for conv in self.convs: - res.append(conv(x)) - res = torch.cat(res, dim=1) - return self.project(res) - - -# copied from -# https://github.com/AlexHex7/Non-local_pytorch/blob/master/lib/non_local_embedded_gaussian.py -# See https://arxiv.org/abs/1711.07971 for details -class _NonLocalBlockND(nn.Module): - def __init__( - self, in_channels, inter_channels=None, dimension=3, sub_sample=True, bn_layer=True - ): - super(_NonLocalBlockND, self).__init__() - - assert dimension in [1, 2, 3] - - self.dimension = dimension - self.sub_sample = sub_sample - - self.in_channels = in_channels - self.inter_channels = inter_channels - - if self.inter_channels is None: - self.inter_channels = in_channels // 2 - if self.inter_channels == 0: - self.inter_channels = 1 - - if dimension == 3: - conv_nd = nn.Conv3d - max_pool_layer = nn.MaxPool3d(kernel_size=(1, 2, 2)) - bn = nn.GroupNorm # (32, hidden_dim) #nn.BatchNorm3d - elif dimension == 2: - conv_nd = nn.Conv2d - max_pool_layer = nn.MaxPool2d(kernel_size=(2, 2)) - bn = nn.GroupNorm # (32, hidden_dim)nn.BatchNorm2d - else: - conv_nd = nn.Conv1d - max_pool_layer = nn.MaxPool1d(kernel_size=2) - bn = nn.GroupNorm # (32, hidden_dim)nn.BatchNorm1d - - self.g = conv_nd( - in_channels=self.in_channels, - out_channels=self.inter_channels, - kernel_size=1, - stride=1, - padding=0, - ) - - if bn_layer: - self.W = nn.Sequential( - conv_nd( - in_channels=self.inter_channels, - out_channels=self.in_channels, - kernel_size=1, - stride=1, - padding=0, - ), - bn(32, 
self.in_channels), - ) - nn.init.constant_(self.W[1].weight, 0) - nn.init.constant_(self.W[1].bias, 0) - else: - self.W = conv_nd( - in_channels=self.inter_channels, - out_channels=self.in_channels, - kernel_size=1, - stride=1, - padding=0, - ) - nn.init.constant_(self.W.weight, 0) - nn.init.constant_(self.W.bias, 0) - - self.theta = conv_nd( - in_channels=self.in_channels, - out_channels=self.inter_channels, - kernel_size=1, - stride=1, - padding=0, - ) - self.phi = conv_nd( - in_channels=self.in_channels, - out_channels=self.inter_channels, - kernel_size=1, - stride=1, - padding=0, - ) - - if sub_sample: - self.g = nn.Sequential(self.g, max_pool_layer) - self.phi = nn.Sequential(self.phi, max_pool_layer) - - def forward(self, x): - """ - :param x: (b, c, t, h, w) - :return: - """ - - batch_size = x.size(0) - - g_x = self.g(x).view(batch_size, self.inter_channels, -1) - g_x = g_x.permute(0, 2, 1) - - theta_x = self.theta(x).view(batch_size, self.inter_channels, -1) - theta_x = theta_x.permute(0, 2, 1) - phi_x = self.phi(x).view(batch_size, self.inter_channels, -1) - f = torch.matmul(theta_x, phi_x) - f_div_C = F.softmax(f, dim=-1) - - y = torch.matmul(f_div_C, g_x) - y = y.permute(0, 2, 1).contiguous() - y = y.view(batch_size, self.inter_channels, *x.size()[2:]) - W_y = self.W(y) - z = W_y + x - - return z - - -class NONLocalBlock2D(_NonLocalBlockND): - def __init__(self, in_channels, inter_channels=None, sub_sample=True, bn_layer=True): - super(NONLocalBlock2D, self).__init__( - in_channels, - inter_channels=inter_channels, - dimension=2, - sub_sample=sub_sample, - bn_layer=bn_layer, - ) - - -@ROI_DENSEPOSE_HEAD_REGISTRY.register() -class DensePoseV1ConvXHead(nn.Module): - def __init__(self, cfg, input_channels): - super(DensePoseV1ConvXHead, self).__init__() - # fmt: off - hidden_dim = cfg.MODEL.ROI_DENSEPOSE_HEAD.CONV_HEAD_DIM - kernel_size = cfg.MODEL.ROI_DENSEPOSE_HEAD.CONV_HEAD_KERNEL - self.n_stacked_convs = cfg.MODEL.ROI_DENSEPOSE_HEAD.NUM_STACKED_CONVS - # fmt: on - pad_size = kernel_size // 2 - n_channels = input_channels - for i in range(self.n_stacked_convs): - layer = Conv2d(n_channels, hidden_dim, kernel_size, stride=1, padding=pad_size) - layer_name = self._get_layer_name(i) - self.add_module(layer_name, layer) - n_channels = hidden_dim - self.n_out_channels = n_channels - initialize_module_params(self) - - def forward(self, features): - x = features - output = x - for i in range(self.n_stacked_convs): - layer_name = self._get_layer_name(i) - x = getattr(self, layer_name)(x) - x = F.relu(x) - output = x - return output - - def _get_layer_name(self, i): - layer_name = "body_conv_fcn{}".format(i + 1) - return layer_name - - -class DensePosePredictor(nn.Module): - def __init__(self, cfg, input_channels): - - super(DensePosePredictor, self).__init__() - dim_in = input_channels - n_segm_chan = cfg.MODEL.ROI_DENSEPOSE_HEAD.NUM_COARSE_SEGM_CHANNELS - dim_out_patches = cfg.MODEL.ROI_DENSEPOSE_HEAD.NUM_PATCHES + 1 - kernel_size = cfg.MODEL.ROI_DENSEPOSE_HEAD.DECONV_KERNEL - self.ann_index_lowres = ConvTranspose2d( - dim_in, n_segm_chan, kernel_size, stride=2, padding=int(kernel_size / 2 - 1) - ) - self.index_uv_lowres = ConvTranspose2d( - dim_in, dim_out_patches, kernel_size, stride=2, padding=int(kernel_size / 2 - 1) - ) - self.u_lowres = ConvTranspose2d( - dim_in, dim_out_patches, kernel_size, stride=2, padding=int(kernel_size / 2 - 1) - ) - self.v_lowres = ConvTranspose2d( - dim_in, dim_out_patches, kernel_size, stride=2, padding=int(kernel_size / 2 - 1) - ) - self.scale_factor = 
cfg.MODEL.ROI_DENSEPOSE_HEAD.UP_SCALE - self.confidence_model_cfg = DensePoseConfidenceModelConfig.from_cfg(cfg) - self._initialize_confidence_estimation_layers(cfg, self.confidence_model_cfg, dim_in) - initialize_module_params(self) - - def forward(self, head_outputs): - ann_index_lowres = self.ann_index_lowres(head_outputs) - index_uv_lowres = self.index_uv_lowres(head_outputs) - u_lowres = self.u_lowres(head_outputs) - v_lowres = self.v_lowres(head_outputs) - - def interp2d(input): - return interpolate( - input, scale_factor=self.scale_factor, mode="bilinear", align_corners=False - ) - - ann_index = interp2d(ann_index_lowres) - index_uv = interp2d(index_uv_lowres) - u = interp2d(u_lowres) - v = interp2d(v_lowres) - ( - (sigma_1, sigma_2, kappa_u, kappa_v), - (sigma_1_lowres, sigma_2_lowres, kappa_u_lowres, kappa_v_lowres), - (ann_index, index_uv), - ) = self._forward_confidence_estimation_layers( - self.confidence_model_cfg, head_outputs, interp2d, ann_index, index_uv - ) - return ( - (ann_index, index_uv, u, v), - (ann_index_lowres, index_uv_lowres, u_lowres, v_lowres), - (sigma_1, sigma_2, kappa_u, kappa_v), - (sigma_1_lowres, sigma_2_lowres, kappa_u_lowres, kappa_v_lowres), - ) - - def _initialize_confidence_estimation_layers( - self, cfg: CfgNode, confidence_model_cfg: DensePoseConfidenceModelConfig, dim_in: int - ): - dim_out_patches = cfg.MODEL.ROI_DENSEPOSE_HEAD.NUM_PATCHES + 1 - kernel_size = cfg.MODEL.ROI_DENSEPOSE_HEAD.DECONV_KERNEL - if confidence_model_cfg.uv_confidence.enabled: - if confidence_model_cfg.uv_confidence.type == DensePoseUVConfidenceType.IID_ISO: - self.sigma_2_lowres = ConvTranspose2d( - dim_in, dim_out_patches, kernel_size, stride=2, padding=int(kernel_size / 2 - 1) - ) - elif confidence_model_cfg.uv_confidence.type == DensePoseUVConfidenceType.INDEP_ANISO: - self.sigma_2_lowres = ConvTranspose2d( - dim_in, dim_out_patches, kernel_size, stride=2, padding=int(kernel_size / 2 - 1) - ) - self.kappa_u_lowres = ConvTranspose2d( - dim_in, dim_out_patches, kernel_size, stride=2, padding=int(kernel_size / 2 - 1) - ) - self.kappa_v_lowres = ConvTranspose2d( - dim_in, dim_out_patches, kernel_size, stride=2, padding=int(kernel_size / 2 - 1) - ) - else: - raise ValueError( - f"Unknown confidence model type: {confidence_model_cfg.confidence_model_type}" - ) - - def _forward_confidence_estimation_layers( - self, confidence_model_cfg, head_outputs, interp2d, ann_index, index_uv - ): - sigma_1, sigma_2, kappa_u, kappa_v = None, None, None, None - sigma_1_lowres, sigma_2_lowres, kappa_u_lowres, kappa_v_lowres = None, None, None, None - if confidence_model_cfg.uv_confidence.enabled: - if confidence_model_cfg.uv_confidence.type == DensePoseUVConfidenceType.IID_ISO: - sigma_2_lowres = self.sigma_2_lowres(head_outputs) - sigma_2 = interp2d(sigma_2_lowres) - elif confidence_model_cfg.uv_confidence.type == DensePoseUVConfidenceType.INDEP_ANISO: - sigma_2_lowres = self.sigma_2_lowres(head_outputs) - kappa_u_lowres = self.kappa_u_lowres(head_outputs) - kappa_v_lowres = self.kappa_v_lowres(head_outputs) - sigma_2 = interp2d(sigma_2_lowres) - kappa_u = interp2d(kappa_u_lowres) - kappa_v = interp2d(kappa_v_lowres) - else: - raise ValueError( - f"Unknown confidence model type: {confidence_model_cfg.confidence_model_type}" - ) - return ( - (sigma_1, sigma_2, kappa_u, kappa_v), - (sigma_1_lowres, sigma_2_lowres, kappa_u_lowres, kappa_v_lowres), - (ann_index, index_uv), - ) - - -class DensePoseDataFilter(object): - def __init__(self, cfg): - self.iou_threshold = 
cfg.MODEL.ROI_DENSEPOSE_HEAD.FG_IOU_THRESHOLD - - @torch.no_grad() - def __call__(self, proposals_with_targets): - """ - Filters proposals with targets to keep only the ones relevant for - DensePose training - proposals: list(Instances), each element of the list corresponds to - various instances (proposals, GT for boxes and densepose) for one - image - """ - proposals_filtered = [] - for proposals_per_image in proposals_with_targets: - if not hasattr(proposals_per_image, "gt_densepose"): - continue - assert hasattr(proposals_per_image, "gt_boxes") - assert hasattr(proposals_per_image, "proposal_boxes") - gt_boxes = proposals_per_image.gt_boxes - est_boxes = proposals_per_image.proposal_boxes - # apply match threshold for densepose head - iou = matched_boxlist_iou(gt_boxes, est_boxes) - iou_select = iou > self.iou_threshold - proposals_per_image = proposals_per_image[iou_select] - assert len(proposals_per_image.gt_boxes) == len(proposals_per_image.proposal_boxes) - # filter out any target without densepose annotation - gt_densepose = proposals_per_image.gt_densepose - assert len(proposals_per_image.gt_boxes) == len(proposals_per_image.gt_densepose) - selected_indices = [ - i for i, dp_target in enumerate(gt_densepose) if dp_target is not None - ] - if len(selected_indices) != len(gt_densepose): - proposals_per_image = proposals_per_image[selected_indices] - assert len(proposals_per_image.gt_boxes) == len(proposals_per_image.proposal_boxes) - assert len(proposals_per_image.gt_boxes) == len(proposals_per_image.gt_densepose) - proposals_filtered.append(proposals_per_image) - return proposals_filtered - - -def build_densepose_head(cfg, input_channels): - head_name = cfg.MODEL.ROI_DENSEPOSE_HEAD.NAME - return ROI_DENSEPOSE_HEAD_REGISTRY.get(head_name)(cfg, input_channels) - - -def build_densepose_predictor(cfg, input_channels): - predictor = DensePosePredictor(cfg, input_channels) - return predictor - - -def build_densepose_data_filter(cfg): - dp_filter = DensePoseDataFilter(cfg) - return dp_filter - - -def densepose_inference(densepose_outputs, densepose_confidences, detections): - """ - Infer dense pose estimate based on outputs from the DensePose head - and detections. The estimate for each detection instance is stored in its - "pred_densepose" attribute. - - Args: - densepose_outputs (tuple(`torch.Tensor`)): iterable containing 4 elements: - - s (:obj: `torch.Tensor`): coarse segmentation tensor of size (N, A, H, W), - - i (:obj: `torch.Tensor`): fine segmentation tensor of size (N, C, H, W), - - u (:obj: `torch.Tensor`): U coordinates for each class of size (N, C, H, W), - - v (:obj: `torch.Tensor`): V coordinates for each class of size (N, C, H, W), - where N is the total number of detections in a batch, - A is the number of coarse segmentations labels - (e.g. 15 for coarse body parts + background), - C is the number of fine segmentation labels - (e.g. 
25 for fine body parts + background), - W is the resolution along the X axis - H is the resolution along the Y axis - densepose_confidences (tuple(`torch.Tensor`)): iterable containing 4 elements: - - sigma_1 (:obj: `torch.Tensor`): global confidences for UV coordinates - of size (N, C, H, W) - - sigma_2 (:obj: `torch.Tensor`): individual confidences for UV coordinates - of size (N, C, H, W) - - kappa_u (:obj: `torch.Tensor`): first component of confidence direction - vector of size (N, C, H, W) - - kappa_v (:obj: `torch.Tensor`): second component of confidence direction - vector of size (N, C, H, W) - detections (list[Instances]): A list of N Instances, where N is the number of images - in the batch. Instances are modified by this method: "pred_densepose" attribute - is added to each instance, the attribute contains the corresponding - DensePoseOutput object. - """ - # DensePose outputs: segmentation, body part indices, U, V - s, index_uv, u, v = densepose_outputs - sigma_1, sigma_2, kappa_u, kappa_v = densepose_confidences - k = 0 - for detection in detections: - n_i = len(detection) - s_i = s[k : k + n_i] - index_uv_i = index_uv[k : k + n_i] - u_i = u[k : k + n_i] - v_i = v[k : k + n_i] - _local_vars = locals() - confidences = { - name: _local_vars[name] - for name in ("sigma_1", "sigma_2", "kappa_u", "kappa_v") - if _local_vars.get(name) is not None - } - densepose_output_i = DensePoseOutput(s_i, index_uv_i, u_i, v_i, confidences) - detection.pred_densepose = densepose_output_i - k += n_i - - -def _linear_interpolation_utilities(v_norm, v0_src, size_src, v0_dst, size_dst, size_z): - """ - Computes utility values for linear interpolation at points v. - The points are given as normalized offsets in the source interval - (v0_src, v0_src + size_src), more precisely: - v = v0_src + v_norm * size_src / 256.0 - The computed utilities include lower points v_lo, upper points v_hi, - interpolation weights v_w and flags j_valid indicating whether the - points falls into the destination interval (v0_dst, v0_dst + size_dst). 
- - Args: - v_norm (:obj: `torch.Tensor`): tensor of size N containing - normalized point offsets - v0_src (:obj: `torch.Tensor`): tensor of size N containing - left bounds of source intervals for normalized points - size_src (:obj: `torch.Tensor`): tensor of size N containing - source interval sizes for normalized points - v0_dst (:obj: `torch.Tensor`): tensor of size N containing - left bounds of destination intervals - size_dst (:obj: `torch.Tensor`): tensor of size N containing - destination interval sizes - size_z (int): interval size for data to be interpolated - - Returns: - v_lo (:obj: `torch.Tensor`): int tensor of size N containing - indices of lower values used for interpolation, all values are - integers from [0, size_z - 1] - v_hi (:obj: `torch.Tensor`): int tensor of size N containing - indices of upper values used for interpolation, all values are - integers from [0, size_z - 1] - v_w (:obj: `torch.Tensor`): float tensor of size N containing - interpolation weights - j_valid (:obj: `torch.Tensor`): uint8 tensor of size N containing - 0 for points outside the estimation interval - (v0_est, v0_est + size_est) and 1 otherwise - """ - v = v0_src + v_norm * size_src / 256.0 - j_valid = (v - v0_dst >= 0) * (v - v0_dst < size_dst) - v_grid = (v - v0_dst) * size_z / size_dst - v_lo = v_grid.floor().long().clamp(min=0, max=size_z - 1) - v_hi = (v_lo + 1).clamp(max=size_z - 1) - v_grid = torch.min(v_hi.float(), v_grid) - v_w = v_grid - v_lo.float() - return v_lo, v_hi, v_w, j_valid - - -def _grid_sampling_utilities( - zh, zw, bbox_xywh_est, bbox_xywh_gt, index_gt, x_norm, y_norm, index_bbox -): - """ - Prepare tensors used in grid sampling. - - Args: - z_est (:obj: `torch.Tensor`): tensor of size (N,C,H,W) with estimated - values of Z to be extracted for the points X, Y and channel - indices I - bbox_xywh_est (:obj: `torch.Tensor`): tensor of size (N, 4) containing - estimated bounding boxes in format XYWH - bbox_xywh_gt (:obj: `torch.Tensor`): tensor of size (N, 4) containing - matched ground truth bounding boxes in format XYWH - index_gt (:obj: `torch.Tensor`): tensor of size K with point labels for - ground truth points - x_norm (:obj: `torch.Tensor`): tensor of size K with X normalized - coordinates of ground truth points. Image X coordinates can be - obtained as X = Xbbox + x_norm * Wbbox / 255 - y_norm (:obj: `torch.Tensor`): tensor of size K with Y normalized - coordinates of ground truth points. Image Y coordinates can be - obtained as Y = Ybbox + y_norm * Hbbox / 255 - index_bbox (:obj: `torch.Tensor`): tensor of size K with bounding box - indices for each ground truth point. 
The values are thus in - [0, N-1] - - Returns: - j_valid (:obj: `torch.Tensor`): uint8 tensor of size M containing - 0 for points to be discarded and 1 for points to be selected - y_lo (:obj: `torch.Tensor`): int tensor of indices of upper values - in z_est for each point - y_hi (:obj: `torch.Tensor`): int tensor of indices of lower values - in z_est for each point - x_lo (:obj: `torch.Tensor`): int tensor of indices of left values - in z_est for each point - x_hi (:obj: `torch.Tensor`): int tensor of indices of right values - in z_est for each point - w_ylo_xlo (:obj: `torch.Tensor`): float tensor of size M; - contains upper-left value weight for each point - w_ylo_xhi (:obj: `torch.Tensor`): float tensor of size M; - contains upper-right value weight for each point - w_yhi_xlo (:obj: `torch.Tensor`): float tensor of size M; - contains lower-left value weight for each point - w_yhi_xhi (:obj: `torch.Tensor`): float tensor of size M; - contains lower-right value weight for each point - """ - - x0_gt, y0_gt, w_gt, h_gt = bbox_xywh_gt[index_bbox].unbind(dim=1) - x0_est, y0_est, w_est, h_est = bbox_xywh_est[index_bbox].unbind(dim=1) - x_lo, x_hi, x_w, jx_valid = _linear_interpolation_utilities( - x_norm, x0_gt, w_gt, x0_est, w_est, zw - ) - y_lo, y_hi, y_w, jy_valid = _linear_interpolation_utilities( - y_norm, y0_gt, h_gt, y0_est, h_est, zh - ) - j_valid = jx_valid * jy_valid - - w_ylo_xlo = (1.0 - x_w) * (1.0 - y_w) - w_ylo_xhi = x_w * (1.0 - y_w) - w_yhi_xlo = (1.0 - x_w) * y_w - w_yhi_xhi = x_w * y_w - - return j_valid, y_lo, y_hi, x_lo, x_hi, w_ylo_xlo, w_ylo_xhi, w_yhi_xlo, w_yhi_xhi - - -def _extract_at_points_packed( - z_est, - index_bbox_valid, - slice_index_uv, - y_lo, - y_hi, - x_lo, - x_hi, - w_ylo_xlo, - w_ylo_xhi, - w_yhi_xlo, - w_yhi_xhi, -): - """ - Extract ground truth values z_gt for valid point indices and estimated - values z_est using bilinear interpolation over top-left (y_lo, x_lo), - top-right (y_lo, x_hi), bottom-left (y_hi, x_lo) and bottom-right - (y_hi, x_hi) values in z_est with corresponding weights: - w_ylo_xlo, w_ylo_xhi, w_yhi_xlo and w_yhi_xhi. 
- Use slice_index_uv to slice dim=1 in z_est - """ - z_est_sampled = ( - z_est[index_bbox_valid, slice_index_uv, y_lo, x_lo] * w_ylo_xlo - + z_est[index_bbox_valid, slice_index_uv, y_lo, x_hi] * w_ylo_xhi - + z_est[index_bbox_valid, slice_index_uv, y_hi, x_lo] * w_yhi_xlo - + z_est[index_bbox_valid, slice_index_uv, y_hi, x_hi] * w_yhi_xhi - ) - return z_est_sampled - - -def _resample_data( - z, bbox_xywh_src, bbox_xywh_dst, wout, hout, mode="nearest", padding_mode="zeros" -): - """ - Args: - z (:obj: `torch.Tensor`): tensor of size (N,C,H,W) with data to be - resampled - bbox_xywh_src (:obj: `torch.Tensor`): tensor of size (N,4) containing - source bounding boxes in format XYWH - bbox_xywh_dst (:obj: `torch.Tensor`): tensor of size (N,4) containing - destination bounding boxes in format XYWH - Return: - zresampled (:obj: `torch.Tensor`): tensor of size (N, C, Hout, Wout) - with resampled values of z, where D is the discretization size - """ - n = bbox_xywh_src.size(0) - assert n == bbox_xywh_dst.size(0), ( - "The number of " - "source ROIs for resampling ({}) should be equal to the number " - "of destination ROIs ({})".format(bbox_xywh_src.size(0), bbox_xywh_dst.size(0)) - ) - x0src, y0src, wsrc, hsrc = bbox_xywh_src.unbind(dim=1) - x0dst, y0dst, wdst, hdst = bbox_xywh_dst.unbind(dim=1) - x0dst_norm = 2 * (x0dst - x0src) / wsrc - 1 - y0dst_norm = 2 * (y0dst - y0src) / hsrc - 1 - x1dst_norm = 2 * (x0dst + wdst - x0src) / wsrc - 1 - y1dst_norm = 2 * (y0dst + hdst - y0src) / hsrc - 1 - grid_w = torch.arange(wout, device=z.device, dtype=torch.float) / wout - grid_h = torch.arange(hout, device=z.device, dtype=torch.float) / hout - grid_w_expanded = grid_w[None, None, :].expand(n, hout, wout) - grid_h_expanded = grid_h[None, :, None].expand(n, hout, wout) - dx_expanded = (x1dst_norm - x0dst_norm)[:, None, None].expand(n, hout, wout) - dy_expanded = (y1dst_norm - y0dst_norm)[:, None, None].expand(n, hout, wout) - x0_expanded = x0dst_norm[:, None, None].expand(n, hout, wout) - y0_expanded = y0dst_norm[:, None, None].expand(n, hout, wout) - grid_x = grid_w_expanded * dx_expanded + x0_expanded - grid_y = grid_h_expanded * dy_expanded + y0_expanded - grid = torch.stack((grid_x, grid_y), dim=3) - # resample Z from (N, C, H, W) into (N, C, Hout, Wout) - zresampled = F.grid_sample(z, grid, mode=mode, padding_mode=padding_mode, align_corners=True) - return zresampled - - -def _extract_single_tensors_from_matches_one_image( - proposals_targets, bbox_with_dp_offset, bbox_global_offset -): - i_gt_all = [] - x_norm_all = [] - y_norm_all = [] - u_gt_all = [] - v_gt_all = [] - s_gt_all = [] - bbox_xywh_gt_all = [] - bbox_xywh_est_all = [] - # Ibbox_all == k should be true for all data that corresponds - # to bbox_xywh_gt[k] and bbox_xywh_est[k] - # index k here is global wrt images - i_bbox_all = [] - # at offset k (k is global) contains index of bounding box data - # within densepose output tensor - i_with_dp = [] - - boxes_xywh_est = proposals_targets.proposal_boxes.clone() - boxes_xywh_gt = proposals_targets.gt_boxes.clone() - n_i = len(boxes_xywh_est) - assert n_i == len(boxes_xywh_gt) - - if n_i: - boxes_xywh_est.tensor[:, 2] -= boxes_xywh_est.tensor[:, 0] - boxes_xywh_est.tensor[:, 3] -= boxes_xywh_est.tensor[:, 1] - boxes_xywh_gt.tensor[:, 2] -= boxes_xywh_gt.tensor[:, 0] - boxes_xywh_gt.tensor[:, 3] -= boxes_xywh_gt.tensor[:, 1] - if hasattr(proposals_targets, "gt_densepose"): - densepose_gt = proposals_targets.gt_densepose - for k, box_xywh_est, box_xywh_gt, dp_gt in zip( - range(n_i), 
boxes_xywh_est.tensor, boxes_xywh_gt.tensor, densepose_gt - ): - if (dp_gt is not None) and (len(dp_gt.x) > 0): - i_gt_all.append(dp_gt.i) - x_norm_all.append(dp_gt.x) - y_norm_all.append(dp_gt.y) - u_gt_all.append(dp_gt.u) - v_gt_all.append(dp_gt.v) - s_gt_all.append(dp_gt.segm.unsqueeze(0)) - bbox_xywh_gt_all.append(box_xywh_gt.view(-1, 4)) - bbox_xywh_est_all.append(box_xywh_est.view(-1, 4)) - i_bbox_k = torch.full_like(dp_gt.i, bbox_with_dp_offset + len(i_with_dp)) - i_bbox_all.append(i_bbox_k) - i_with_dp.append(bbox_global_offset + k) - return ( - i_gt_all, - x_norm_all, - y_norm_all, - u_gt_all, - v_gt_all, - s_gt_all, - bbox_xywh_gt_all, - bbox_xywh_est_all, - i_bbox_all, - i_with_dp, - ) - - -def _extract_single_tensors_from_matches(proposals_with_targets): - i_img = [] - i_gt_all = [] - x_norm_all = [] - y_norm_all = [] - u_gt_all = [] - v_gt_all = [] - s_gt_all = [] - bbox_xywh_gt_all = [] - bbox_xywh_est_all = [] - i_bbox_all = [] - i_with_dp_all = [] - n = 0 - for i, proposals_targets_per_image in enumerate(proposals_with_targets): - n_i = proposals_targets_per_image.proposal_boxes.tensor.size(0) - if not n_i: - continue - ( - i_gt_img, - x_norm_img, - y_norm_img, - u_gt_img, - v_gt_img, - s_gt_img, - bbox_xywh_gt_img, - bbox_xywh_est_img, - i_bbox_img, - i_with_dp_img, - ) = _extract_single_tensors_from_matches_one_image( # noqa - proposals_targets_per_image, len(i_with_dp_all), n - ) - i_gt_all.extend(i_gt_img) - x_norm_all.extend(x_norm_img) - y_norm_all.extend(y_norm_img) - u_gt_all.extend(u_gt_img) - v_gt_all.extend(v_gt_img) - s_gt_all.extend(s_gt_img) - bbox_xywh_gt_all.extend(bbox_xywh_gt_img) - bbox_xywh_est_all.extend(bbox_xywh_est_img) - i_bbox_all.extend(i_bbox_img) - i_with_dp_all.extend(i_with_dp_img) - i_img.extend([i] * len(i_with_dp_img)) - n += n_i - # concatenate all data into a single tensor - if (n > 0) and (len(i_with_dp_all) > 0): - i_gt = torch.cat(i_gt_all, 0).long() - x_norm = torch.cat(x_norm_all, 0) - y_norm = torch.cat(y_norm_all, 0) - u_gt = torch.cat(u_gt_all, 0) - v_gt = torch.cat(v_gt_all, 0) - s_gt = torch.cat(s_gt_all, 0) - bbox_xywh_gt = torch.cat(bbox_xywh_gt_all, 0) - bbox_xywh_est = torch.cat(bbox_xywh_est_all, 0) - i_bbox = torch.cat(i_bbox_all, 0).long() - else: - i_gt = None - x_norm = None - y_norm = None - u_gt = None - v_gt = None - s_gt = None - bbox_xywh_gt = None - bbox_xywh_est = None - i_bbox = None - return ( - i_img, - i_with_dp_all, - bbox_xywh_est, - bbox_xywh_gt, - i_gt, - x_norm, - y_norm, - u_gt, - v_gt, - s_gt, - i_bbox, - ) - - -class IIDIsotropicGaussianUVLoss(nn.Module): - """ - Loss for the case of iid residuals with isotropic covariance: - $Sigma_i = sigma_i^2 I$ - The loss (negative log likelihood) is then: - $1/2 sum_{i=1}^n (log(2 pi) + 2 log sigma_i^2 + ||delta_i||^2 / sigma_i^2)$, - where $delta_i=(u - u', v - v')$ is a 2D vector containing UV coordinates - difference between estimated and ground truth UV values - For details, see: - N. Neverova, D. Novotny, A. Vedaldi "Correlated Uncertainty for Learning - Dense Correspondences from Noisy Labels", p. 918--926, in Proc. 
NIPS 2019 - """ - - def __init__(self, sigma_lower_bound: float): - super(IIDIsotropicGaussianUVLoss, self).__init__() - self.sigma_lower_bound = sigma_lower_bound - self.log2pi = math.log(2 * math.pi) - - def forward( - self, - u: torch.Tensor, - v: torch.Tensor, - sigma_u: torch.Tensor, - target_u: torch.Tensor, - target_v: torch.Tensor, - ): - # compute $\sigma_i^2$ - # use sigma_lower_bound to avoid degenerate solution for variance - # (sigma -> 0) - sigma2 = F.softplus(sigma_u) + self.sigma_lower_bound - # compute \|delta_i\|^2 - delta_t_delta = (u - target_u) ** 2 + (v - target_v) ** 2 - # the total loss from the formula above: - loss = 0.5 * (self.log2pi + 2 * torch.log(sigma2) + delta_t_delta / sigma2) - return loss.sum() - - -class IndepAnisotropicGaussianUVLoss(nn.Module): - """ - Loss for the case of independent residuals with anisotropic covariances: - $Sigma_i = sigma_i^2 I + r_i r_i^T$ - The loss (negative log likelihood) is then: - $1/2 sum_{i=1}^n (log(2 pi) - + log sigma_i^2 (sigma_i^2 + ||r_i||^2) - + ||delta_i||^2 / sigma_i^2 - - ^2 / (sigma_i^2 * (sigma_i^2 + ||r_i||^2)))$, - where $delta_i=(u - u', v - v')$ is a 2D vector containing UV coordinates - difference between estimated and ground truth UV values - For details, see: - N. Neverova, D. Novotny, A. Vedaldi "Correlated Uncertainty for Learning - Dense Correspondences from Noisy Labels", p. 918--926, in Proc. NIPS 2019 - """ - - def __init__(self, sigma_lower_bound: float): - super(IndepAnisotropicGaussianUVLoss, self).__init__() - self.sigma_lower_bound = sigma_lower_bound - self.log2pi = math.log(2 * math.pi) - - def forward( - self, - u: torch.Tensor, - v: torch.Tensor, - sigma_u: torch.Tensor, - kappa_u_est: torch.Tensor, - kappa_v_est: torch.Tensor, - target_u: torch.Tensor, - target_v: torch.Tensor, - ): - # compute $\sigma_i^2$ - sigma2 = F.softplus(sigma_u) + self.sigma_lower_bound - # compute \|r_i\|^2 - r_sqnorm2 = kappa_u_est ** 2 + kappa_v_est ** 2 - delta_u = u - target_u - delta_v = v - target_v - # compute \|delta_i\|^2 - delta_sqnorm = delta_u ** 2 + delta_v ** 2 - delta_u_r_u = delta_u * kappa_u_est - delta_v_r_v = delta_v * kappa_v_est - # compute the scalar product - delta_r = delta_u_r_u + delta_v_r_v - # compute squared scalar product ^2 - delta_r_sqnorm = delta_r ** 2 - denom2 = sigma2 * (sigma2 + r_sqnorm2) - loss = 0.5 * ( - self.log2pi + torch.log(denom2) + delta_sqnorm / sigma2 - delta_r_sqnorm / denom2 - ) - return loss.sum() - - -class DensePoseLosses(object): - def __init__(self, cfg): - # fmt: off - self.heatmap_size = cfg.MODEL.ROI_DENSEPOSE_HEAD.HEATMAP_SIZE - self.w_points = cfg.MODEL.ROI_DENSEPOSE_HEAD.POINT_REGRESSION_WEIGHTS - self.w_part = cfg.MODEL.ROI_DENSEPOSE_HEAD.PART_WEIGHTS - self.w_segm = cfg.MODEL.ROI_DENSEPOSE_HEAD.INDEX_WEIGHTS - self.n_segm_chan = cfg.MODEL.ROI_DENSEPOSE_HEAD.NUM_COARSE_SEGM_CHANNELS - # fmt: on - self.confidence_model_cfg = DensePoseConfidenceModelConfig.from_cfg(cfg) - if self.confidence_model_cfg.uv_confidence.type == DensePoseUVConfidenceType.IID_ISO: - self.uv_loss_with_confidences = IIDIsotropicGaussianUVLoss( - self.confidence_model_cfg.uv_confidence.epsilon - ) - elif self.confidence_model_cfg.uv_confidence.type == DensePoseUVConfidenceType.INDEP_ANISO: - self.uv_loss_with_confidences = IndepAnisotropicGaussianUVLoss( - self.confidence_model_cfg.uv_confidence.epsilon - ) - - def __call__(self, proposals_with_gt, densepose_outputs, densepose_confidences): - losses = {} - # densepose outputs are computed for all images and all bounding boxes; - 
# i.e. if a batch has 4 images with (3, 1, 2, 1) proposals respectively, - # the outputs will have size(0) == 3+1+2+1 == 7 - s, index_uv, u, v = densepose_outputs - sigma_1, sigma_2, kappa_u, kappa_v = densepose_confidences - conf_type = self.confidence_model_cfg.uv_confidence.type - assert u.size(2) == v.size(2) - assert u.size(3) == v.size(3) - assert u.size(2) == index_uv.size(2) - assert u.size(3) == index_uv.size(3) - - with torch.no_grad(): - ( - index_uv_img, - i_with_dp, - bbox_xywh_est, - bbox_xywh_gt, - index_gt_all, - x_norm, - y_norm, - u_gt_all, - v_gt_all, - s_gt, - index_bbox, - ) = _extract_single_tensors_from_matches( # noqa - proposals_with_gt - ) - n_batch = len(i_with_dp) - - # NOTE: we need to keep the same computation graph on all the GPUs to - # perform reduction properly. Hence even if we have no data on one - # of the GPUs, we still need to generate the computation graph. - # Add fake (zero) loss in the form Tensor.sum() * 0 - if not n_batch: - losses["loss_densepose_I"] = index_uv.sum() * 0 - losses["loss_densepose_S"] = s.sum() * 0 - if self.confidence_model_cfg.uv_confidence.enabled: - losses["loss_densepose_UV"] = (u.sum() + v.sum()) * 0 - if conf_type == DensePoseUVConfidenceType.IID_ISO: - losses["loss_densepose_UV"] += sigma_2.sum() * 0 - elif conf_type == DensePoseUVConfidenceType.INDEP_ANISO: - losses["loss_densepose_UV"] += ( - sigma_2.sum() + kappa_u.sum() + kappa_v.sum() - ) * 0 - else: - losses["loss_densepose_U"] = u.sum() * 0 - losses["loss_densepose_V"] = v.sum() * 0 - return losses - - zh = u.size(2) - zw = u.size(3) - - ( - j_valid, - y_lo, - y_hi, - x_lo, - x_hi, - w_ylo_xlo, - w_ylo_xhi, - w_yhi_xlo, - w_yhi_xhi, - ) = _grid_sampling_utilities( # noqa - zh, zw, bbox_xywh_est, bbox_xywh_gt, index_gt_all, x_norm, y_norm, index_bbox - ) - - j_valid_fg = j_valid * (index_gt_all > 0) - - u_gt = u_gt_all[j_valid_fg] - u_est_all = _extract_at_points_packed( - u[i_with_dp], - index_bbox, - index_gt_all, - y_lo, - y_hi, - x_lo, - x_hi, - w_ylo_xlo, - w_ylo_xhi, - w_yhi_xlo, - w_yhi_xhi, - ) - u_est = u_est_all[j_valid_fg] - - v_gt = v_gt_all[j_valid_fg] - v_est_all = _extract_at_points_packed( - v[i_with_dp], - index_bbox, - index_gt_all, - y_lo, - y_hi, - x_lo, - x_hi, - w_ylo_xlo, - w_ylo_xhi, - w_yhi_xlo, - w_yhi_xhi, - ) - v_est = v_est_all[j_valid_fg] - - index_uv_gt = index_gt_all[j_valid] - index_uv_est_all = _extract_at_points_packed( - index_uv[i_with_dp], - index_bbox, - slice(None), - y_lo, - y_hi, - x_lo, - x_hi, - w_ylo_xlo[:, None], - w_ylo_xhi[:, None], - w_yhi_xlo[:, None], - w_yhi_xhi[:, None], - ) - index_uv_est = index_uv_est_all[j_valid, :] - - if self.confidence_model_cfg.uv_confidence.enabled: - sigma_2_est_all = _extract_at_points_packed( - sigma_2[i_with_dp], - index_bbox, - index_gt_all, - y_lo, - y_hi, - x_lo, - x_hi, - w_ylo_xlo, - w_ylo_xhi, - w_yhi_xlo, - w_yhi_xhi, - ) - sigma_2_est = sigma_2_est_all[j_valid_fg] - if conf_type in [DensePoseUVConfidenceType.INDEP_ANISO]: - kappa_u_est_all = _extract_at_points_packed( - kappa_u[i_with_dp], - index_bbox, - index_gt_all, - y_lo, - y_hi, - x_lo, - x_hi, - w_ylo_xlo, - w_ylo_xhi, - w_yhi_xlo, - w_yhi_xhi, - ) - kappa_u_est = kappa_u_est_all[j_valid_fg] - kappa_v_est_all = _extract_at_points_packed( - kappa_v[i_with_dp], - index_bbox, - index_gt_all, - y_lo, - y_hi, - x_lo, - x_hi, - w_ylo_xlo, - w_ylo_xhi, - w_yhi_xlo, - w_yhi_xhi, - ) - kappa_v_est = kappa_v_est_all[j_valid_fg] - - # Resample everything to the estimated data size, no need to resample - # S_est then: - s_est = 
s[i_with_dp] - with torch.no_grad(): - s_gt = _resample_data( - s_gt.unsqueeze(1), - bbox_xywh_gt, - bbox_xywh_est, - self.heatmap_size, - self.heatmap_size, - mode="nearest", - padding_mode="zeros", - ).squeeze(1) - - # add point-based losses: - if self.confidence_model_cfg.uv_confidence.enabled: - if conf_type == DensePoseUVConfidenceType.IID_ISO: - uv_loss = ( - self.uv_loss_with_confidences(u_est, v_est, sigma_2_est, u_gt, v_gt) - * self.w_points - ) - losses["loss_densepose_UV"] = uv_loss - elif conf_type == DensePoseUVConfidenceType.INDEP_ANISO: - uv_loss = ( - self.uv_loss_with_confidences( - u_est, v_est, sigma_2_est, kappa_u_est, kappa_v_est, u_gt, v_gt - ) - * self.w_points - ) - losses["loss_densepose_UV"] = uv_loss - else: - raise ValueError(f"Unknown confidence model type: {conf_type}") - else: - u_loss = F.smooth_l1_loss(u_est, u_gt, reduction="sum") * self.w_points - losses["loss_densepose_U"] = u_loss - v_loss = F.smooth_l1_loss(v_est, v_gt, reduction="sum") * self.w_points - losses["loss_densepose_V"] = v_loss - index_uv_loss = F.cross_entropy(index_uv_est, index_uv_gt.long()) * self.w_part - losses["loss_densepose_I"] = index_uv_loss - - if self.n_segm_chan == 2: - s_gt = s_gt > 0 - s_loss = F.cross_entropy(s_est, s_gt.long()) * self.w_segm - losses["loss_densepose_S"] = s_loss - return losses - - -def build_densepose_losses(cfg): - losses = DensePoseLosses(cfg) - return losses diff --git a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/tests/structures/test_imagelist.py b/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/tests/structures/test_imagelist.py deleted file mode 100644 index abeb35569ddc34a618735f4989dfbfae23d47bc1..0000000000000000000000000000000000000000 --- a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/tests/structures/test_imagelist.py +++ /dev/null @@ -1,38 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
All Rights Reserved - -import unittest -from typing import Sequence -import torch - -from detectron2.structures import ImageList - - -class TestImageList(unittest.TestCase): - def test_imagelist_padding_shape(self): - class TensorToImageList(torch.nn.Module): - def forward(self, tensors: Sequence[torch.Tensor]): - return ImageList.from_tensors(tensors, 4).tensor - - func = torch.jit.trace( - TensorToImageList(), ([torch.ones((3, 10, 10), dtype=torch.float32)],) - ) - ret = func([torch.ones((3, 15, 20), dtype=torch.float32)]) - self.assertEqual(list(ret.shape), [1, 3, 16, 20], str(ret.shape)) - - func = torch.jit.trace( - TensorToImageList(), - ( - [ - torch.ones((3, 16, 10), dtype=torch.float32), - torch.ones((3, 13, 11), dtype=torch.float32), - ], - ), - ) - ret = func( - [ - torch.ones((3, 25, 20), dtype=torch.float32), - torch.ones((3, 10, 10), dtype=torch.float32), - ] - ) - # does not support calling with different #images - self.assertEqual(list(ret.shape), [2, 3, 28, 20], str(ret.shape)) diff --git a/spaces/hdhzk/bingo/src/lib/bots/bing/index.ts b/spaces/hdhzk/bingo/src/lib/bots/bing/index.ts deleted file mode 100644 index 2c4afae01a345b8415935228566cb30d695e768d..0000000000000000000000000000000000000000 --- a/spaces/hdhzk/bingo/src/lib/bots/bing/index.ts +++ /dev/null @@ -1,421 +0,0 @@ -import { fetch, WebSocket, debug } from '@/lib/isomorphic' -import WebSocketAsPromised from 'websocket-as-promised' -import { - SendMessageParams, - BingConversationStyle, - ConversationResponse, - ChatResponseMessage, - ConversationInfo, - InvocationEventType, - ChatError, - ErrorCode, - ChatUpdateCompleteResponse, - ImageInfo, - KBlobResponse -} from './types' - -import { convertMessageToMarkdown, websocketUtils, streamAsyncIterable } from './utils' -import { WatchDog, createChunkDecoder } from '@/lib/utils' - -type Params = SendMessageParams<{ bingConversationStyle: BingConversationStyle }> - -const OPTIONS_SETS = [ - 'nlu_direct_response_filter', - 'deepleo', - 'disable_emoji_spoken_text', - 'responsible_ai_policy_235', - 'enablemm', - 'iycapbing', - 'iyxapbing', - 'objopinion', - 'rweasgv2', - 'dagslnv1', - 'dv3sugg', - 'autosave', - 'iyoloxap', - 'iyoloneutral', - 'clgalileo', - 'gencontentv3', -] - -export class BingWebBot { - protected conversationContext?: ConversationInfo - protected cookie: string - protected ua: string - protected endpoint = '' - private lastText = '' - private asyncTasks: Array> = [] - - constructor(opts: { - cookie: string - ua: string - bingConversationStyle?: BingConversationStyle - conversationContext?: ConversationInfo - }) { - const { cookie, ua, conversationContext } = opts - this.cookie = cookie?.includes(';') ? 
cookie : `_EDGE_V=1; _U=${cookie}` - this.ua = ua - this.conversationContext = conversationContext - } - - static buildChatRequest(conversation: ConversationInfo) { - const optionsSets = OPTIONS_SETS - if (conversation.conversationStyle === BingConversationStyle.Precise) { - optionsSets.push('h3precise') - } else if (conversation.conversationStyle === BingConversationStyle.Creative) { - optionsSets.push('h3imaginative') - } - return { - arguments: [ - { - source: 'cib', - optionsSets, - allowedMessageTypes: [ - 'Chat', - 'InternalSearchQuery', - 'Disengaged', - 'InternalLoaderMessage', - 'SemanticSerp', - 'GenerateContentQuery', - 'SearchQuery', - ], - sliceIds: [ - 'winmuid1tf', - 'anssupfor_c', - 'imgchatgptv2', - 'tts2cf', - 'contansperf', - 'mlchatpc8500w', - 'mlchatpc2', - 'ctrlworkpay', - 'winshortmsgtf', - 'cibctrl', - 'sydtransctrl', - 'sydconfigoptc', - '0705trt4', - '517opinion', - '628ajcopus0', - '330uaugs0', - '529rwea', - '0626snptrcs0', - '424dagslnv1', - ], - isStartOfSession: conversation.invocationId === 0, - message: { - author: 'user', - inputMethod: 'Keyboard', - text: conversation.prompt, - imageUrl: conversation.imageUrl, - messageType: 'Chat', - }, - conversationId: conversation.conversationId, - conversationSignature: conversation.conversationSignature, - participant: { id: conversation.clientId }, - }, - ], - invocationId: conversation.invocationId.toString(), - target: 'chat', - type: InvocationEventType.StreamInvocation, - } - } - - async createConversation(): Promise { - const headers = { - 'Accept-Encoding': 'gzip, deflate, br, zsdch', - 'User-Agent': this.ua, - 'x-ms-useragent': 'azsdk-js-api-client-factory/1.0.0-beta.1 core-rest-pipeline/1.10.0 OS/Win32', - cookie: this.cookie, - } - - let resp: ConversationResponse | undefined - try { - const response = await fetch(this.endpoint + '/api/create', { method: 'POST', headers, redirect: 'error', mode: 'cors', credentials: 'include' }) - if (response.status === 404) { - throw new ChatError('Not Found', ErrorCode.NOTFOUND_ERROR) - } - resp = await response.json() as ConversationResponse - } catch (err) { - console.error('create conversation error', err) - } - - if (!resp?.result) { - throw new ChatError('Invalid response', ErrorCode.UNKOWN_ERROR) - } - - const { value, message } = resp.result || {} - if (value !== 'Success') { - const errorMsg = `${value}: ${message}` - if (value === 'UnauthorizedRequest') { - throw new ChatError(errorMsg, ErrorCode.BING_UNAUTHORIZED) - } - if (value === 'Forbidden') { - throw new ChatError(errorMsg, ErrorCode.BING_FORBIDDEN) - } - throw new ChatError(errorMsg, ErrorCode.UNKOWN_ERROR) - } - return resp - } - - private async createContext(conversationStyle: BingConversationStyle) { - if (!this.conversationContext) { - const conversation = await this.createConversation() - this.conversationContext = { - conversationId: conversation.conversationId, - conversationSignature: conversation.conversationSignature, - clientId: conversation.clientId, - invocationId: 0, - conversationStyle, - prompt: '', - } - } - return this.conversationContext - } - - async sendMessage(params: Params) { - try { - await this.createContext(params.options.bingConversationStyle) - Object.assign(this.conversationContext!, { prompt: params.prompt, imageUrl: params.imageUrl }) - return this.sydneyProxy(params) - } catch (error) { - params.onEvent({ - type: 'ERROR', - error: error instanceof ChatError ? 
error : new ChatError('Catch Error', ErrorCode.UNKOWN_ERROR), - }) - } - } - - private async sydneyProxy(params: Params) { - const abortController = new AbortController() - const response = await fetch(this.endpoint + '/api/sydney', { - method: 'POST', - headers: { - 'Content-Type': 'application/json', - }, - signal: abortController.signal, - body: JSON.stringify(this.conversationContext!) - }) - if (response.status !== 200) { - params.onEvent({ - type: 'ERROR', - error: new ChatError( - 'Unknown error', - ErrorCode.UNKOWN_ERROR, - ), - }) - } - params.signal?.addEventListener('abort', () => { - abortController.abort() - }) - - const textDecoder = createChunkDecoder() - for await (const chunk of streamAsyncIterable(response.body!)) { - this.parseEvents(params, websocketUtils.unpackMessage(textDecoder(chunk))) - } - } - - async sendWs() { - const wsConfig: ConstructorParameters[1] = { - packMessage: websocketUtils.packMessage, - unpackMessage: websocketUtils.unpackMessage, - createWebSocket: (url) => new WebSocket(url, { - headers: { - 'accept-language': 'zh-CN,zh;q=0.9', - 'cache-control': 'no-cache', - 'User-Agent': this.ua, - pragma: 'no-cache', - cookie: this.cookie, - } - }) - } - const wsp = new WebSocketAsPromised('wss://sydney.bing.com/sydney/ChatHub', wsConfig) - - wsp.open().then(() => { - wsp.sendPacked({ protocol: 'json', version: 1 }) - wsp.sendPacked({ type: 6 }) - wsp.sendPacked(BingWebBot.buildChatRequest(this.conversationContext!)) - }) - - return wsp - } - - private async useWs(params: Params) { - const wsp = await this.sendWs() - const watchDog = new WatchDog() - wsp.onUnpackedMessage.addListener((events) => { - watchDog.watch(() => { - wsp.sendPacked({ type: 6 }) - }) - this.parseEvents(params, events) - }) - - wsp.onClose.addListener(() => { - watchDog.reset() - params.onEvent({ type: 'DONE' }) - wsp.removeAllListeners() - }) - - params.signal?.addEventListener('abort', () => { - wsp.removeAllListeners() - wsp.close() - }) - } - - private async createImage(prompt: string, id: string) { - try { - const headers = { - 'Accept-Encoding': 'gzip, deflate, br, zsdch', - 'User-Agent': this.ua, - 'x-ms-useragent': 'azsdk-js-api-client-factory/1.0.0-beta.1 core-rest-pipeline/1.10.0 OS/Win32', - cookie: this.cookie, - } - const query = new URLSearchParams({ - prompt, - id - }) - const response = await fetch(this.endpoint + '/api/image?' 
+ query.toString(), - { - method: 'POST', - headers, - mode: 'cors', - credentials: 'include' - }) - .then(res => res.text()) - if (response) { - this.lastText += '\n' + response - } - } catch (err) { - console.error('Create Image Error', err) - } - } - - private buildKnowledgeApiPayload(imageUrl: string, conversationStyle: BingConversationStyle) { - const imageInfo: ImageInfo = {} - let imageBase64: string | undefined = undefined - const knowledgeRequest = { - imageInfo, - knowledgeRequest: { - invokedSkills: [ - 'ImageById' - ], - subscriptionId: 'Bing.Chat.Multimodal', - invokedSkillsRequestData: { - enableFaceBlur: true - }, - convoData: { - convoid: this.conversationContext?.conversationId, - convotone: conversationStyle, - } - }, - } - - if (imageUrl.startsWith('data:image/')) { - imageBase64 = imageUrl.replace('data:image/', ''); - const partIndex = imageBase64.indexOf(',') - if (partIndex) { - imageBase64 = imageBase64.substring(partIndex + 1) - } - } else { - imageInfo.url = imageUrl - } - return { knowledgeRequest, imageBase64 } - } - - async uploadImage(imageUrl: string, conversationStyle: BingConversationStyle = BingConversationStyle.Creative): Promise { - if (!imageUrl) { - return - } - await this.createContext(conversationStyle) - const payload = this.buildKnowledgeApiPayload(imageUrl, conversationStyle) - - const response = await fetch(this.endpoint + '/api/kblob', - { - headers: { - 'Content-Type': 'application/json', - }, - method: 'POST', - mode: 'cors', - credentials: 'include', - body: JSON.stringify(payload), - }) - .then(res => res.json()) - .catch(e => { - console.log('Error', e) - }) - return response - } - - private async generateContent(message: ChatResponseMessage) { - if (message.contentType === 'IMAGE') { - this.asyncTasks.push(this.createImage(message.text, message.messageId)) - } - } - - private async parseEvents(params: Params, events: any) { - const conversation = this.conversationContext! - - events?.forEach(async (event: ChatUpdateCompleteResponse) => { - debug('bing event', event) - if (event.type === 3) { - await Promise.all(this.asyncTasks) - this.asyncTasks = [] - params.onEvent({ type: 'UPDATE_ANSWER', data: { text: this.lastText } }) - params.onEvent({ type: 'DONE' }) - conversation.invocationId = parseInt(event.invocationId, 10) + 1 - } else if (event.type === 1) { - const messages = event.arguments[0].messages - if (messages) { - const text = convertMessageToMarkdown(messages[0]) - this.lastText = text - params.onEvent({ type: 'UPDATE_ANSWER', data: { text, spokenText: messages[0].text, throttling: event.arguments[0].throttling } }) - } - } else if (event.type === 2) { - const messages = event.item.messages as ChatResponseMessage[] | undefined - if (!messages) { - params.onEvent({ - type: 'ERROR', - error: new ChatError( - event.item.result.error || 'Unknown error', - event.item.result.value === 'Throttled' ? ErrorCode.THROTTLE_LIMIT - : event.item.result.value === 'CaptchaChallenge' ? (this.conversationContext?.conversationId?.includes('BingProdUnAuthenticatedUsers') ? 
ErrorCode.BING_UNAUTHORIZED : ErrorCode.BING_CAPTCHA) - : ErrorCode.UNKOWN_ERROR - ), - }) - return - } - const limited = messages.some((message) => - message.contentOrigin === 'TurnLimiter' - || message.messageType === 'Disengaged' - ) - if (limited) { - params.onEvent({ - type: 'ERROR', - error: new ChatError( - 'Sorry, you have reached chat limit in this conversation.', - ErrorCode.CONVERSATION_LIMIT, - ), - }) - return - } - - const lastMessage = event.item.messages.at(-1) as ChatResponseMessage - const specialMessage = event.item.messages.find(message => message.author === 'bot' && message.contentType === 'IMAGE') - if (specialMessage) { - this.generateContent(specialMessage) - } - - if (lastMessage) { - const text = convertMessageToMarkdown(lastMessage) - this.lastText = text - params.onEvent({ - type: 'UPDATE_ANSWER', - data: { text, throttling: event.item.throttling, suggestedResponses: lastMessage.suggestedResponses, sourceAttributions: lastMessage.sourceAttributions }, - }) - } - } - }) - } - - resetConversation() { - this.conversationContext = undefined - } -} diff --git a/spaces/heiyubili/bingo/src/components/chat-list.tsx b/spaces/heiyubili/bingo/src/components/chat-list.tsx deleted file mode 100644 index 624a78ef0d7be0f1192cf02a81e2e9cf214cb193..0000000000000000000000000000000000000000 --- a/spaces/heiyubili/bingo/src/components/chat-list.tsx +++ /dev/null @@ -1,28 +0,0 @@ -import React from 'react' - -import { Separator } from '@/components/ui/separator' -import { ChatMessage } from '@/components/chat-message' -import { ChatMessageModel } from '@/lib/bots/bing/types' - -export interface ChatList { - messages: ChatMessageModel[] -} - -export function ChatList({ messages }: ChatList) { - if (!messages.length) { - return null - } - - return ( -
        - {messages.map((message, index) => ( - - - {index < messages.length - 1 && ( - - )} - - ))} -
        - ) -} diff --git a/spaces/hemanth-thaluru/sdm-image-colorization-prj/colorizers/util.py b/spaces/hemanth-thaluru/sdm-image-colorization-prj/colorizers/util.py deleted file mode 100644 index dc9d38f4a8dc9df1d0c196fb3241aaa3263ea0ca..0000000000000000000000000000000000000000 --- a/spaces/hemanth-thaluru/sdm-image-colorization-prj/colorizers/util.py +++ /dev/null @@ -1,36 +0,0 @@ - -from PIL import Image -import numpy as np -from skimage import color -import torch -import torch.nn.functional as F -from IPython import embed - -def load_img(img_path): - out_np = np.asarray(Image.open(img_path)) - if(out_np.ndim==2): - out_np = np.tile(out_np[:,:,None],3) - return out_np - -def resize_img(img, HW=(256,256), resample=3): - return np.asarray(Image.fromarray(img).resize((HW[1],HW[0]), resample=resample)) - -def preprocess_img(img_rgb_orig, HW=(256,256), resample=3): - img_rgb_rs = resize_img(img_rgb_orig, HW=HW, resample=resample) - img_lab_orig = color.rgb2lab(img_rgb_orig) - img_lab_rs = color.rgb2lab(img_rgb_rs) - img_l_orig = img_lab_orig[:,:,0] - img_l_rs = img_lab_rs[:,:,0] - tens_orig_l = torch.Tensor(img_l_orig)[None,None,:,:] - tens_rs_l = torch.Tensor(img_l_rs)[None,None,:,:] - return (tens_orig_l, tens_rs_l) - -def postprocess_tens(tens_orig_l, out_ab, mode='bilinear'): - HW_orig = tens_orig_l.shape[2:] - HW = out_ab.shape[2:] - if(HW_orig[0]!=HW[0] or HW_orig[1]!=HW[1]): - out_ab_orig = F.interpolate(out_ab, size=HW_orig, mode='bilinear') - else: - out_ab_orig = out_ab - out_lab_orig = torch.cat((tens_orig_l, out_ab_orig), dim=1) - return color.lab2rgb(out_lab_orig.data.cpu().numpy()[0,...].transpose((1,2,0))) diff --git a/spaces/hilmyblaze/WebUI-Counterfeit-V2.5/Sofia Pro Font Family Zip [2021].md b/spaces/hilmyblaze/WebUI-Counterfeit-V2.5/Sofia Pro Font Family Zip [2021].md deleted file mode 100644 index 63bc6cf0cd58e9b171659cd0d83f239e09980160..0000000000000000000000000000000000000000 --- a/spaces/hilmyblaze/WebUI-Counterfeit-V2.5/Sofia Pro Font Family Zip [2021].md +++ /dev/null @@ -1,97 +0,0 @@ -## Sofia Pro Font Family Zip - - - - ![Sofia Pro Font Family Zip \[2021\]](https://legionfonts.com/img-fonts/sofia-pro-semibold/sofia-pro-semibold-font-abc.jpg) - - - -**LINK ---> [https://ditzcosupo.blogspot.com/?d=2twsic](https://ditzcosupo.blogspot.com/?d=2twsic)** - - - - Here is a possible title and article with SEO optimization and HTML formatting for the keyword "Sofia Pro Font Family Zip": - -# How to Download and Install Sofia Pro Font Family Zip for Free - - - -Sofia Pro is a geometric sans serif font family that combines modernism and harmony of curves. It was created in 2009 and redesigned in 2012 by Mostardesign, a French type foundry. Sofia Pro has 16 fonts, including regular, italic, light, ultralight, medium, semibold, bold, and black styles. It also has a condensed version that is ideal for text, branding, signage, print, and web design creation. - - - -If you are looking for a free download of Sofia Pro Font Family Zip, you are in luck. In this article, we will show you how to get this elegant and versatile font for free and how to install it on your computer. - - - -## Step 1: Download Sofia Pro Font Family Zip - - - -There are many websites that offer free fonts for personal use, but not all of them are reliable or safe. Some may contain viruses or malware that can harm your computer or steal your personal information. 
To avoid these risks, we recommend you to download Sofia Pro Font Family Zip from one of these trusted sources: - - - -- [Cufon Fonts](https://www.cufonfonts.com/font/sofia-pro): This website has a large collection of free fonts for desktop and web use. You can download Sofia Pro Font Family Zip by clicking on the "Download @font-face Kit" button on the top right corner of the page[^1^]. - -- [Befonts](https://befonts.com/sofia-pro-font-family.html): This website offers free fonts for personal use only. You can download Sofia Pro Font Family Zip by clicking on the "Download" button at the bottom of the page[^2^]. However, please note that this is the free version with only one weight (sofiapro-light). If you want to use the full version with all the weights and styles, you need to purchase a license from the designer. - -- [Dfonts](https://www.dfonts.org/fonts/sofia-pro-font-family/): This website provides free fonts for both personal and commercial use. You can download Sofia Pro Font Family Zip by clicking on the "Download" button at the bottom of the page[^3^]. This is the full version with all the weights and styles. - - - -## Step 2: Install Sofia Pro Font Family Zip - - - -Once you have downloaded Sofia Pro Font Family Zip, you need to install it on your computer. The installation process may vary depending on your operating system, but here are some general steps: - - - -- Extract the zip file to a folder on your computer. - -- Open the folder and find the font files with the extension .ttf or .otf. - -- Double-click on each font file and click on the "Install" button on the top left corner of the window. - -- Alternatively, you can right-click on each font file and select "Install" from the menu. - -- Wait for the installation to complete. - -- Restart your computer or any applications that use fonts to apply the changes. - - - -## Step 3: Enjoy Sofia Pro Font Family Zip - - - -Congratulations! You have successfully downloaded and installed Sofia Pro Font Family Zip for free. Now you can use this beautiful font for your personal or commercial projects. Here are some examples of how Sofia Pro Font Family Zip looks like: - - ```html - - - -### This is Sofia Pro Regular - - - -Lorem ipsum dolor sit amet, consectetur adipiscing elit. Sed quis lorem eu nisi consequat dignissim. Quisque vitae nisl id leo tincidunt mattis. - - - -### This is Sofia Pro Italic - - - -Lorem ipsum dolor sit amet, consectetur adipiscing elit. Sed quis lorem eu nisi consequat dignissim. Quisque vitae nisl id leo tincidunt mattis. - - - -### This is Sofia Pro Light - - - -Lorem ipsum dolor sit amet, consectetur adipiscing elit. Sed quis lorem eu nisi consequat dignissim. 
Quisque vitae nisl id - - dfd1c89656 \ No newline at end of file diff --git a/spaces/hilsq/bingotest/Dockerfile b/spaces/hilsq/bingotest/Dockerfile deleted file mode 100644 index c677b05b75f7e4b2beee8c97fb47957a0861a83e..0000000000000000000000000000000000000000 --- a/spaces/hilsq/bingotest/Dockerfile +++ /dev/null @@ -1,7 +0,0 @@ -FROM weaigc/bingo:latest - -ARG DEBIAN_FRONTEND=noninteractive - -ENV BING_HEADER "" - -CMD npm start diff --git a/spaces/hsinyuuuuuuu/cat/README.md b/spaces/hsinyuuuuuuu/cat/README.md deleted file mode 100644 index d4d077cbd5c69bc77a47ada528a3fc7135989a2d..0000000000000000000000000000000000000000 --- a/spaces/hsinyuuuuuuu/cat/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Cat -emoji: 🦀 -colorFrom: red -colorTo: green -sdk: streamlit -sdk_version: 1.17.0 -app_file: app.py -pinned: false -license: other ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/huedaya/hf-openai-whisper-dev/README.md b/spaces/huedaya/hf-openai-whisper-dev/README.md deleted file mode 100644 index 567339d7eca8fb6a54b7d6c5f795871a117dfc12..0000000000000000000000000000000000000000 --- a/spaces/huedaya/hf-openai-whisper-dev/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: V-to-text (dev) -emoji: 🗣 -colorFrom: pink -colorTo: yellow -sdk: gradio -sdk_version: 3.3.1 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/huggingpaul/logo-wizard-logo-diffusion-checkpoint/app.py b/spaces/huggingpaul/logo-wizard-logo-diffusion-checkpoint/app.py deleted file mode 100644 index 12144c2defb756a5c4d2e213f45a6528f72eeac0..0000000000000000000000000000000000000000 --- a/spaces/huggingpaul/logo-wizard-logo-diffusion-checkpoint/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/logo-wizard/logo-diffusion-checkpoint").launch() \ No newline at end of file diff --git a/spaces/hysts/Shap-E/model.py b/spaces/hysts/Shap-E/model.py deleted file mode 100644 index 1ce8b89f1767ec49694c758fb055c855d5b86a61..0000000000000000000000000000000000000000 --- a/spaces/hysts/Shap-E/model.py +++ /dev/null @@ -1,56 +0,0 @@ -import tempfile - -import numpy as np -import PIL.Image -import torch -import trimesh -from diffusers import ShapEImg2ImgPipeline, ShapEPipeline -from diffusers.utils import export_to_ply - - -class Model: - def __init__(self): - self.device = torch.device("cuda" if torch.cuda.is_available() else "cpu") - self.pipe = ShapEPipeline.from_pretrained("openai/shap-e", torch_dtype=torch.float16) - self.pipe.to(self.device) - - self.pipe_img = ShapEImg2ImgPipeline.from_pretrained("openai/shap-e-img2img", torch_dtype=torch.float16) - self.pipe_img.to(self.device) - - def to_glb(self, ply_path: str) -> str: - mesh = trimesh.load(ply_path) - rot = trimesh.transformations.rotation_matrix(-np.pi / 2, [1, 0, 0]) - mesh = mesh.apply_transform(rot) - rot = trimesh.transformations.rotation_matrix(np.pi, [0, 1, 0]) - mesh = mesh.apply_transform(rot) - mesh_path = tempfile.NamedTemporaryFile(suffix=".glb", delete=False) - mesh.export(mesh_path.name, file_type="glb") - return mesh_path.name - - def run_text(self, prompt: str, seed: int = 0, guidance_scale: float = 15.0, num_steps: int = 64) -> str: - generator = torch.Generator(device=self.device).manual_seed(seed) - images = self.pipe( - prompt, - generator=generator, - guidance_scale=guidance_scale, - 
num_inference_steps=num_steps, - output_type="mesh", - ).images - ply_path = tempfile.NamedTemporaryFile(suffix=".ply", delete=False, mode="w+b") - export_to_ply(images[0], ply_path.name) - return self.to_glb(ply_path.name) - - def run_image( - self, image: PIL.Image.Image, seed: int = 0, guidance_scale: float = 3.0, num_steps: int = 64 - ) -> str: - generator = torch.Generator(device=self.device).manual_seed(seed) - images = self.pipe_img( - image, - generator=generator, - guidance_scale=guidance_scale, - num_inference_steps=num_steps, - output_type="mesh", - ).images - ply_path = tempfile.NamedTemporaryFile(suffix=".ply", delete=False, mode="w+b") - export_to_ply(images[0], ply_path.name) - return self.to_glb(ply_path.name) diff --git a/spaces/iamironman4279/SadTalker/src/audio2pose_models/audio_encoder.py b/spaces/iamironman4279/SadTalker/src/audio2pose_models/audio_encoder.py deleted file mode 100644 index 6279d2014a2e786a6c549f084339e18d00e50331..0000000000000000000000000000000000000000 --- a/spaces/iamironman4279/SadTalker/src/audio2pose_models/audio_encoder.py +++ /dev/null @@ -1,64 +0,0 @@ -import torch -from torch import nn -from torch.nn import functional as F - -class Conv2d(nn.Module): - def __init__(self, cin, cout, kernel_size, stride, padding, residual=False, *args, **kwargs): - super().__init__(*args, **kwargs) - self.conv_block = nn.Sequential( - nn.Conv2d(cin, cout, kernel_size, stride, padding), - nn.BatchNorm2d(cout) - ) - self.act = nn.ReLU() - self.residual = residual - - def forward(self, x): - out = self.conv_block(x) - if self.residual: - out += x - return self.act(out) - -class AudioEncoder(nn.Module): - def __init__(self, wav2lip_checkpoint, device): - super(AudioEncoder, self).__init__() - - self.audio_encoder = nn.Sequential( - Conv2d(1, 32, kernel_size=3, stride=1, padding=1), - Conv2d(32, 32, kernel_size=3, stride=1, padding=1, residual=True), - Conv2d(32, 32, kernel_size=3, stride=1, padding=1, residual=True), - - Conv2d(32, 64, kernel_size=3, stride=(3, 1), padding=1), - Conv2d(64, 64, kernel_size=3, stride=1, padding=1, residual=True), - Conv2d(64, 64, kernel_size=3, stride=1, padding=1, residual=True), - - Conv2d(64, 128, kernel_size=3, stride=3, padding=1), - Conv2d(128, 128, kernel_size=3, stride=1, padding=1, residual=True), - Conv2d(128, 128, kernel_size=3, stride=1, padding=1, residual=True), - - Conv2d(128, 256, kernel_size=3, stride=(3, 2), padding=1), - Conv2d(256, 256, kernel_size=3, stride=1, padding=1, residual=True), - - Conv2d(256, 512, kernel_size=3, stride=1, padding=0), - Conv2d(512, 512, kernel_size=1, stride=1, padding=0),) - - #### load the pre-trained audio_encoder, we do not need to load wav2lip model here. 
- # wav2lip_state_dict = torch.load(wav2lip_checkpoint, map_location=torch.device(device))['state_dict'] - # state_dict = self.audio_encoder.state_dict() - - # for k,v in wav2lip_state_dict.items(): - # if 'audio_encoder' in k: - # state_dict[k.replace('module.audio_encoder.', '')] = v - # self.audio_encoder.load_state_dict(state_dict) - - - def forward(self, audio_sequences): - # audio_sequences = (B, T, 1, 80, 16) - B = audio_sequences.size(0) - - audio_sequences = torch.cat([audio_sequences[:, i] for i in range(audio_sequences.size(1))], dim=0) - - audio_embedding = self.audio_encoder(audio_sequences) # B, 512, 1, 1 - dim = audio_embedding.shape[1] - audio_embedding = audio_embedding.reshape((B, -1, dim, 1, 1)) - - return audio_embedding.squeeze(-1).squeeze(-1) #B seq_len+1 512 diff --git a/spaces/ik/twi-ewe-mss-tss/app.py b/spaces/ik/twi-ewe-mss-tss/app.py deleted file mode 100644 index a5885787b0d215fc534604f38dbb1bc3ebc165c2..0000000000000000000000000000000000000000 --- a/spaces/ik/twi-ewe-mss-tss/app.py +++ /dev/null @@ -1,85 +0,0 @@ -import gradio as gr -import speech_recognition as sr -from ttsmms import TTS -from deep_translator import GoogleTranslator - -# Initialize the TTS model for Ewe and Twi languages -ewe = TTS("data/ewe") -twi = TTS("data/aka") - -# Create a list of supported languages and their corresponding TTS models -langs = [{"lang": 'ewe', "tts": ewe}, {"lang": 'twi', "tts": twi}] - - -# Function to convert speech to text using Google's speech recognition API -def speech_to_text(audio_file): - r = sr.Recognizer() - with sr.AudioFile(audio_file) as source: - audio = r.record(source) - try: - text = r.recognize_google(audio) - return text - except sr.UnknownValueError: - return None - except sr.RequestError: - print("Speech recognition service unavailable.") - return None - - -# Function to convert text to speech -def text_to_speech(text, lang): - # Find the selected language in the list of supported languages - selected_lang = next((lang_item for lang_item in langs if lang_item["lang"] == lang), None) - if selected_lang is None: - raise ValueError(f"Language '{lang}' is not supported.") - selected_tts = selected_lang["tts"] - # Translate the text to the selected language using Google Translator - translated = GoogleTranslator(source='auto', target=lang).translate(text) - wav_path = "output.wav" - # Generate speech synthesis and save it as a WAV file - selected_tts.synthesis(translated, wav_path=wav_path) - return wav_path, translated - - -# Function to handle the speech to text app -def speech_to_text_app(audio_file): - text = speech_to_text(audio_file) - return text if text else "Unable to transcribe audio." 
- - -# Function to handle the text to speech output -def text_to_speech_output(text, lang): - wav_path, translated = text_to_speech(text, lang) - return wav_path,translated - - -# Function to handle the speech to text and text to speech app -def speech_to_text_and_tts_app(lang_input, audio_file, text_input): - if audio_file: - print("Converting audio to text:", audio_file) - text = speech_to_text(audio_file) - wav_path, translates = text_to_speech_output(text, lang_input) - return translates, wav_path - else: - wav_path, translates = text_to_speech_output(text_input, lang_input) - return translates, wav_path - - -# Define the Gradio interface inputs and outputs -audio_input = gr.inputs.Audio(source="microphone", type="filepath", label="Record Audio") -text_input = gr.inputs.Textbox(label="Enter your text here") -lang_input = gr.inputs.Dropdown(choices=[lang["lang"] for lang in langs], label="Language") -output_text = gr.outputs.Textbox(label="Transcription") -output_audio = gr.outputs.Audio(label="Text-to-Speech Audio", type='filepath') - -# Create the Gradio interface -interface = gr.Interface( - fn=speech_to_text_and_tts_app, - inputs=[lang_input, audio_input, text_input], - outputs=[output_text, output_audio], - title="English to Twi - Ewe Speech Generator(MMS TTS)", - description="Translate English to Twi and Ewe Language(from Ghana)" -) - -# Launch the interface -interface.launch() diff --git a/spaces/ilumine-AI/AI-Creepypastas/README.md b/spaces/ilumine-AI/AI-Creepypastas/README.md deleted file mode 100644 index 42977ab0f59ef12f6ab9b9de48f1a667b22570f9..0000000000000000000000000000000000000000 --- a/spaces/ilumine-AI/AI-Creepypastas/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: AI-Creepypastas -emoji: 🌖 -colorFrom: purple -colorTo: gray -sdk: static -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/imseldrith/ChatGPT-Detection/templates/index.html b/spaces/imseldrith/ChatGPT-Detection/templates/index.html deleted file mode 100644 index a65c3fde1bb15ddefbbd097b356e61d58c2a6aed..0000000000000000000000000000000000000000 --- a/spaces/imseldrith/ChatGPT-Detection/templates/index.html +++ /dev/null @@ -1,15 +0,0 @@ - - - - - AI Content Detection - - -

        AI Content Detection

        -
        - -
        - -
        - - diff --git a/spaces/imthanhlv/dual-encoder/README.md b/spaces/imthanhlv/dual-encoder/README.md deleted file mode 100644 index f221be6c9907b88741badad000ecf4f1f11fa9b2..0000000000000000000000000000000000000000 --- a/spaces/imthanhlv/dual-encoder/README.md +++ /dev/null @@ -1,37 +0,0 @@ ---- -title: Dual Encoder -emoji: 🚀 -colorFrom: gray -colorTo: purple -sdk: gradio -app_file: app.py -pinned: false ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio` or `streamlit` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code). -Path is relative to the root of the repository. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. diff --git a/spaces/inamXcontru/PoeticTTS/Daossoft Product Key Finder Keygen.md b/spaces/inamXcontru/PoeticTTS/Daossoft Product Key Finder Keygen.md deleted file mode 100644 index d6084df9b4b0f335edc7b89d5919641c410a8945..0000000000000000000000000000000000000000 --- a/spaces/inamXcontru/PoeticTTS/Daossoft Product Key Finder Keygen.md +++ /dev/null @@ -1,49 +0,0 @@ -
        -

        How to Use Daossoft Product Key Finder Keygen to Recover Lost Product Keys

        -

        If you have ever lost or forgotten your product keys for Windows, Office, Adobe, or other software, you know how frustrating it can be. You may not be able to activate or reinstall your software without the product keys. Fortunately, there is a solution: Daossoft Product Key Finder Keygen.

        -

        Daossoft Product Key Finder Keygen is a powerful and easy-to-use tool that can scan your computer and find all the product keys for the software installed on your system. It can also generate new product keys for some software that you don't have the original keys for. With Daossoft Product Key Finder Keygen, you can recover your lost product keys in minutes and save them in a text file for backup.

        -

        Daossoft Product Key Finder Keygen


        Download 🌟 https://gohhs.com/2uz43I



        -

        How to Use Daossoft Product Key Finder Keygen

        -

        Using Daossoft Product Key Finder Keygen is very simple. Just follow these steps:

        -
          -
        1. Download and install Daossoft Product Key Finder Keygen from the official website.
        2. -
        3. Run the program and click on the "Start Recovery" button.
        4. -
        5. The program will scan your computer and display all the product keys for the software installed on your system.
        6. -
        7. Select the product keys that you want to recover and click on the "Save to File" button.
        8. -
        9. Choose a location to save the text file that contains your product keys.
        10. -
        11. If you need to generate new product keys for some software that you don't have the original keys for, click on the "Keygen" button and select the software from the list.
        12. -
        13. The program will generate a new product key for the selected software and display it on the screen.
        14. -
        15. Copy and paste the new product key into the activation window of the software.
        16. -
        -

        Congratulations! You have successfully used Daossoft Product Key Finder Keygen to recover your lost product keys.

        -

        Why Choose Daossoft Product Key Finder Keygen

        -

        There are many reasons why you should choose Daossoft Product Key Finder Keygen over other similar tools. Here are some of them:

        -
          -
        • Daossoft Product Key Finder Keygen supports more than 5000 software products, including Windows, Office, Adobe, VMware, SQL Server, Norton, and more.
        • -
        • Daossoft Product Key Finder Keygen can recover product keys from both local and remote computers.
        • -
        • Daossoft Product Key Finder Keygen can generate new product keys for some software that you don't have the original keys for.
        • -
        • Daossoft Product Key Finder Keygen is fast, reliable, and easy to use.
        • -
        • Daossoft Product Key Finder Keygen is 100% safe and virus-free.
        • -
        -

        If you are looking for a professional and effective tool to recover your lost product keys, look no further than Daossoft Product Key Finder Keygen. Download it today and enjoy your software without any hassle.

        - -

        How to Download and Install Daossoft Product Key Finder Keygen

        -

        Downloading and installing Daossoft Product Key Finder Keygen is very easy. Just follow these steps:

        -
          -
        1. Go to the official website of Daossoft Product Key Finder Keygen and click on the "Download" button.
        2. -
        3. Choose a version that suits your system requirements and click on the "Download Now" button.
        4. -
        5. Save the setup file to your computer and run it.
        6. -
        7. Follow the instructions on the screen to complete the installation process.
        8. -
        9. Launch Daossoft Product Key Finder Keygen and enjoy its features.
        10. -
        -

        How to Contact Daossoft Product Key Finder Keygen Support Team

        -

        If you have any questions or problems with Daossoft Product Key Finder Keygen, you can contact the support team for help. Here are some ways to contact them:

        -
          -
        • Email: You can send an email to support@daossoft.com and describe your issue in detail. The support team will reply to you as soon as possible.
        • -
        • Phone: You can call the toll-free number 1-800-DAOS-SOFT and speak to a customer service representative. The phone support is available 24/7.
        • -
        • Live Chat: You can visit the official website of Daossoft Product Key Finder Keygen and click on the "Live Chat" button. You can chat with a support agent online and get instant solutions.
        • -
        • FAQ: You can also check the FAQ section on the website and see if your question has been answered before. The FAQ covers common issues and solutions for Daossoft Product Key Finder Keygen.
        • -
        -

        The support team of Daossoft Product Key Finder Keygen is friendly, professional, and efficient. They will do their best to assist you and resolve your issues.

        d5da3c52bf
        -
        -
        \ No newline at end of file diff --git a/spaces/inreVtussa/clothingai/Examples/Adobe Illustrator Cc 17 1 Amtlib Dll Crack VERIFIED.md b/spaces/inreVtussa/clothingai/Examples/Adobe Illustrator Cc 17 1 Amtlib Dll Crack VERIFIED.md deleted file mode 100644 index 49477549252bd8590ca9140e95fee19cd71660a2..0000000000000000000000000000000000000000 --- a/spaces/inreVtussa/clothingai/Examples/Adobe Illustrator Cc 17 1 Amtlib Dll Crack VERIFIED.md +++ /dev/null @@ -1,6 +0,0 @@ -

        adobe illustrator cc 17 1 amtlib dll crack


        Download Zip 🗹 https://tiurll.com/2uClSH



        - -1. risk of contracting a virus. Remember that when you install pirated programs like Adobe Illustrator 2017 Crack, you don't know what you are actually installing. This can lead to a serious risk of infection with the virus. I am not kidding. Here is what I found when I wanted to install Adobe Illustrator 2017 Crack on my laptop. I have been using it for several months. One day, I decided to update the version of Adobe Illustrator 2017 Crack. I started the download, and after installation I found that all my documents had been moved to the Google Drive folder. After uninstalling the program, my documents returned to their places. I do not recommend you take risks. 2. License issues. 8a78ff9644
        -
        -
        -

        diff --git a/spaces/inreVtussa/clothingai/Examples/Boyce Avenue Album Download ((BETTER)) Zip.md b/spaces/inreVtussa/clothingai/Examples/Boyce Avenue Album Download ((BETTER)) Zip.md deleted file mode 100644 index 07b59c50116b39c53c0e3f00a35fbb9ff06e38f2..0000000000000000000000000000000000000000 --- a/spaces/inreVtussa/clothingai/Examples/Boyce Avenue Album Download ((BETTER)) Zip.md +++ /dev/null @@ -1,74 +0,0 @@ -

        Boyce Avenue Album Download Zip


        DOWNLOAD ★★★★★ https://tiurll.com/2uCkYM



        - - . . - -Track listing - -Personnel - -Musicians - -Boyce Avenue - -Eddie Bayers - drums, percussion - -Tom Bukovac - acoustic guitar, electric guitar - -J. T. Corenflos - electric guitar - -Dan Dugmore - steel guitar - -Shannon Forrest - drums, percussion - -Paul Franklin - steel guitar - -B. James Lowry - acoustic guitar - -Steve Nathan - piano - -Dann Huff - electric guitar - -Kevin McKendree - electric guitar - -Steve Patrick - steel guitar - -Jeff Roe - acoustic guitar, electric guitar - -Chris Scruggs - banjo, acoustic guitar, mandolin - -Andy Third - bass guitar - -Production - -Connie Howard - engineer, mixing, producer - -Bradley Oakley - engineer - -Gary Smith - mastering - -References - -Category:2014 albums - -Category:Boyce Avenue albums - -Category:Albums produced by Mark Bright (record producer) - -Category:Big Machine Records albumsDonald Trump holds a $2 billion loan from Beijing and the Chinese government gave him and his family $130,000 in low-interest loans, according to a US watchdog. - -The Presidential Perks Commission was set up in May to investigate Trump’s foreign business entanglements. It was established in response to Trump’s refusal to release his tax returns, which would show whether any foreign entanglements have violated the constitution. - -It’s hard to overstate how big Trump’s financial entanglements are with China and other countries – the findings of the report, which is released on Monday, will expose to light just how intertwined those entanglements are. - -The report examined Trump’s overseas business dealings, the US government’s relationship with those companies, and any conflicts of interest those entanglements pose. - -The report found that Trump has received at least $2 billion in loans from a Chinese state-owned bank and at least $130,000 in cash from the Chinese government. That money went into loans that are worth over $300 million dollars. - -There are also ongoing legal negotiations with the Chinese government for an up to $100 billion investment in the Trump Organisation. - -In the US, the Trump Organisation is Trump’s main business, but he has also made money through the Trump International Hotel in Washington, DC, and through his rental apartment building in Trump Tower in New York. - -He’s also invested in at 4fefd39f24
        -
        -
        -

        diff --git a/spaces/iqovocn/ChuanhuChatGPT/modules/presets.py b/spaces/iqovocn/ChuanhuChatGPT/modules/presets.py deleted file mode 100644 index 23a1ec3bd13fe761653a89920bd087f425e12fcb..0000000000000000000000000000000000000000 --- a/spaces/iqovocn/ChuanhuChatGPT/modules/presets.py +++ /dev/null @@ -1,240 +0,0 @@ -# -*- coding:utf-8 -*- -import os -from pathlib import Path -import gradio as gr -from .webui_locale import I18nAuto - -i18n = I18nAuto() # internationalization - -CHATGLM_MODEL = None -CHATGLM_TOKENIZER = None -LLAMA_MODEL = None -LLAMA_INFERENCER = None - -# ChatGPT 设置 -INITIAL_SYSTEM_PROMPT = "You are a helpful assistant." -API_HOST = "api.openai.com" -COMPLETION_URL = "https://api.openai.com/v1/chat/completions" -BALANCE_API_URL="https://api.openai.com/dashboard/billing/credit_grants" -USAGE_API_URL="https://api.openai.com/dashboard/billing/usage" -HISTORY_DIR = Path("history") -HISTORY_DIR = "history" -TEMPLATES_DIR = "templates" - -# 错误信息 -STANDARD_ERROR_MSG = i18n("☹️发生了错误:") # 错误信息的标准前缀 -GENERAL_ERROR_MSG = i18n("获取对话时发生错误,请查看后台日志") -ERROR_RETRIEVE_MSG = i18n("请检查网络连接,或者API-Key是否有效。") -CONNECTION_TIMEOUT_MSG = i18n("连接超时,无法获取对话。") # 连接超时 -READ_TIMEOUT_MSG = i18n("读取超时,无法获取对话。") # 读取超时 -PROXY_ERROR_MSG = i18n("代理错误,无法获取对话。") # 代理错误 -SSL_ERROR_PROMPT = i18n("SSL错误,无法获取对话。") # SSL 错误 -NO_APIKEY_MSG = i18n("API key为空,请检查是否输入正确。") # API key 长度不足 51 位 -NO_INPUT_MSG = i18n("请输入对话内容。") # 未输入对话内容 -BILLING_NOT_APPLICABLE_MSG = i18n("账单信息不适用") # 本地运行的模型返回的账单信息 - -TIMEOUT_STREAMING = 60 # 流式对话时的超时时间 -TIMEOUT_ALL = 200 # 非流式对话时的超时时间 -ENABLE_STREAMING_OPTION = True # 是否启用选择选择是否实时显示回答的勾选框 -HIDE_MY_KEY = False # 如果你想在UI中隐藏你的 API 密钥,将此值设置为 True -CONCURRENT_COUNT = 100 # 允许同时使用的用户数量 - -SIM_K = 5 -INDEX_QUERY_TEMPRATURE = 1.0 - -CHUANHU_TITLE = i18n("川虎Chat 🚀") - -CHUANHU_DESCRIPTION = i18n("由Bilibili [土川虎虎虎](https://space.bilibili.com/29125536)、[明昭MZhao](https://space.bilibili.com/24807452) 和 [Keldos](https://github.com/Keldos-Li) 开发
        访问川虎Chat的 [GitHub项目](https://github.com/GaiZhenbiao/ChuanhuChatGPT) 下载最新版脚本") - - -ONLINE_MODELS = [ - "gpt-3.5-turbo", - "gpt-3.5-turbo-16k", - "gpt-3.5-turbo-0301", - "gpt-3.5-turbo-0613", - "gpt-4", - "gpt-4-0314", - "gpt-4-0613", - "gpt-4-32k", - "gpt-4-32k-0314", - "gpt-4-32k-0613", - "川虎助理", - "川虎助理 Pro", - "xmchat", - "yuanai-1.0-base_10B", - "yuanai-1.0-translate", - "yuanai-1.0-dialog", - "yuanai-1.0-rhythm_poems", - "minimax-abab4-chat", - "minimax-abab5-chat", -] - -LOCAL_MODELS = [ - "chatglm-6b", - "chatglm-6b-int4", - "chatglm-6b-int4-qe", - "StableLM", - "MOSS", - "llama-7b-hf", - "llama-13b-hf", - "llama-30b-hf", - "llama-65b-hf", -] - -if os.environ.get('HIDE_LOCAL_MODELS', 'false') == 'true': - MODELS = ONLINE_MODELS -else: - MODELS = ONLINE_MODELS + LOCAL_MODELS - -DEFAULT_MODEL = 0 - -os.makedirs("models", exist_ok=True) -os.makedirs("lora", exist_ok=True) -os.makedirs("history", exist_ok=True) -for dir_name in os.listdir("models"): - if os.path.isdir(os.path.join("models", dir_name)): - if dir_name not in MODELS: - MODELS.append(dir_name) - -MODEL_TOKEN_LIMIT = { - "gpt-3.5-turbo": 4096, - "gpt-3.5-turbo-16k": 16384, - "gpt-3.5-turbo-0301": 4096, - "gpt-3.5-turbo-0613": 4096, - "gpt-4": 8192, - "gpt-4-0314": 8192, - "gpt-4-0613": 8192, - "gpt-4-32k": 32768, - "gpt-4-32k-0314": 32768, - "gpt-4-32k-0613": 32768 -} - -TOKEN_OFFSET = 1000 # 模型的token上限减去这个值,得到软上限。到达软上限之后,自动尝试减少token占用。 -DEFAULT_TOKEN_LIMIT = 3000 # 默认的token上限 -REDUCE_TOKEN_FACTOR = 0.5 # 与模型token上限想乘,得到目标token数。减少token占用时,将token占用减少到目标token数以下。 - -REPLY_LANGUAGES = [ - "简体中文", - "繁體中文", - "English", - "日本語", - "Español", - "Français", - "Deutsch", - "跟随问题语言(不稳定)" -] - - -WEBSEARCH_PTOMPT_TEMPLATE = """\ -Web search results: - -{web_results} -Current date: {current_date} - -Instructions: Using the provided web search results, write a comprehensive reply to the given query. Make sure to cite results using [[number](URL)] notation after the reference. If the provided search results refer to multiple subjects with the same name, write separate answers for each subject. -Query: {query} -Reply in {reply_language} -""" - -PROMPT_TEMPLATE = """\ -Context information is below. ---------------------- -{context_str} ---------------------- -Current date: {current_date}. -Using the provided context information, write a comprehensive reply to the given query. -Make sure to cite results using [number] notation after the reference. -If the provided context information refer to multiple subjects with the same name, write separate answers for each subject. -Use prior knowledge only if the given context didn't provide enough information. -Answer the question: {query_str} -Reply in {reply_language} -""" - -REFINE_TEMPLATE = """\ -The original question is as follows: {query_str} -We have provided an existing answer: {existing_answer} -We have the opportunity to refine the existing answer -(only if needed) with some more context below. ------------- -{context_msg} ------------- -Given the new context, refine the original answer to better -Reply in {reply_language} -If the context isn't useful, return the original answer. 
-""" - -SUMMARIZE_PROMPT = """Write a concise summary of the following: - -{text} - -CONCISE SUMMARY IN 中文:""" - -ALREADY_CONVERTED_MARK = "" - -small_and_beautiful_theme = gr.themes.Soft( - primary_hue=gr.themes.Color( - c50="#EBFAF2", - c100="#CFF3E1", - c200="#A8EAC8", - c300="#77DEA9", - c400="#3FD086", - c500="#02C160", - c600="#06AE56", - c700="#05974E", - c800="#057F45", - c900="#04673D", - c950="#2E5541", - name="small_and_beautiful", - ), - secondary_hue=gr.themes.Color( - c50="#576b95", - c100="#576b95", - c200="#576b95", - c300="#576b95", - c400="#576b95", - c500="#576b95", - c600="#576b95", - c700="#576b95", - c800="#576b95", - c900="#576b95", - c950="#576b95", - ), - neutral_hue=gr.themes.Color( - name="gray", - c50="#f6f7f8", - # c100="#f3f4f6", - c100="#F2F2F2", - c200="#e5e7eb", - c300="#d1d5db", - c400="#B2B2B2", - c500="#808080", - c600="#636363", - c700="#515151", - c800="#393939", - # c900="#272727", - c900="#2B2B2B", - c950="#171717", - ), - radius_size=gr.themes.sizes.radius_sm, - ).set( - # button_primary_background_fill="*primary_500", - button_primary_background_fill_dark="*primary_600", - # button_primary_background_fill_hover="*primary_400", - # button_primary_border_color="*primary_500", - button_primary_border_color_dark="*primary_600", - button_primary_text_color="wihte", - button_primary_text_color_dark="white", - button_secondary_background_fill="*neutral_100", - button_secondary_background_fill_hover="*neutral_50", - button_secondary_background_fill_dark="*neutral_900", - button_secondary_text_color="*neutral_800", - button_secondary_text_color_dark="white", - # background_fill_primary="#F7F7F7", - # background_fill_primary_dark="#1F1F1F", - # block_title_text_color="*primary_500", - block_title_background_fill_dark="*primary_900", - block_label_background_fill_dark="*primary_900", - input_background_fill="#F6F6F6", - chatbot_code_background_color="*neutral_950", - chatbot_code_background_color_dark="*neutral_950", - ) diff --git a/spaces/ismot/1702t1/evaluation/__init__.py b/spaces/ismot/1702t1/evaluation/__init__.py deleted file mode 100644 index 1bf9d8dfba501e83ea5738ff98228c5756949a47..0000000000000000000000000000000000000000 --- a/spaces/ismot/1702t1/evaluation/__init__.py +++ /dev/null @@ -1,4 +0,0 @@ -""" -@date: 2021/6/29 -@description: -""" diff --git a/spaces/ispast/Genshin_MB_VITS_TTS/models.py b/spaces/ispast/Genshin_MB_VITS_TTS/models.py deleted file mode 100644 index d29e9010388acda30059431d8d6cbfa3c670e4f2..0000000000000000000000000000000000000000 --- a/spaces/ispast/Genshin_MB_VITS_TTS/models.py +++ /dev/null @@ -1,730 +0,0 @@ -import copy -import math -import torch -from torch import nn -from torch.nn import functional as F - -import commons -import modules -import attentions -import monotonic_align - -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm -from commons import init_weights, get_padding -from pqmf import PQMF -from stft import TorchSTFT -import math - - -class StochasticDurationPredictor(nn.Module): - def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, n_flows=4, gin_channels=0): - super().__init__() - filter_channels = in_channels # it needs to be removed from future version. 
- self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.log_flow = modules.Log() - self.flows = nn.ModuleList() - self.flows.append(modules.ElementwiseAffine(2)) - for i in range(n_flows): - self.flows.append(modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3)) - self.flows.append(modules.Flip()) - - self.post_pre = nn.Conv1d(1, filter_channels, 1) - self.post_proj = nn.Conv1d(filter_channels, filter_channels, 1) - self.post_convs = modules.DDSConv(filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout) - self.post_flows = nn.ModuleList() - self.post_flows.append(modules.ElementwiseAffine(2)) - for i in range(4): - self.post_flows.append(modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3)) - self.post_flows.append(modules.Flip()) - - self.pre = nn.Conv1d(in_channels, filter_channels, 1) - self.proj = nn.Conv1d(filter_channels, filter_channels, 1) - self.convs = modules.DDSConv(filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout) - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, filter_channels, 1) - - def forward(self, x, x_mask, w=None, g=None, reverse=False, noise_scale=1.0): - x = torch.detach(x) - x = self.pre(x) - if g is not None: - g = torch.detach(g) - x = x + self.cond(g) - x = self.convs(x, x_mask) - x = self.proj(x) * x_mask - - if not reverse: - flows = self.flows - assert w is not None - - logdet_tot_q = 0 - h_w = self.post_pre(w) - h_w = self.post_convs(h_w, x_mask) - h_w = self.post_proj(h_w) * x_mask - e_q = torch.randn(w.size(0), 2, w.size(2)).to(device=x.device, dtype=x.dtype) * x_mask - z_q = e_q - for flow in self.post_flows: - z_q, logdet_q = flow(z_q, x_mask, g=(x + h_w)) - logdet_tot_q += logdet_q - z_u, z1 = torch.split(z_q, [1, 1], 1) - u = torch.sigmoid(z_u) * x_mask - z0 = (w - u) * x_mask - logdet_tot_q += torch.sum((F.logsigmoid(z_u) + F.logsigmoid(-z_u)) * x_mask, [1,2]) - logq = torch.sum(-0.5 * (math.log(2*math.pi) + (e_q**2)) * x_mask, [1,2]) - logdet_tot_q - - logdet_tot = 0 - z0, logdet = self.log_flow(z0, x_mask) - logdet_tot += logdet - z = torch.cat([z0, z1], 1) - for flow in flows: - z, logdet = flow(z, x_mask, g=x, reverse=reverse) - logdet_tot = logdet_tot + logdet - nll = torch.sum(0.5 * (math.log(2*math.pi) + (z**2)) * x_mask, [1,2]) - logdet_tot - return nll + logq # [b] - else: - flows = list(reversed(self.flows)) - flows = flows[:-2] + [flows[-1]] # remove a useless vflow - z = torch.randn(x.size(0), 2, x.size(2)).to(device=x.device, dtype=x.dtype) * noise_scale - for flow in flows: - z = flow(z, x_mask, g=x, reverse=reverse) - z0, z1 = torch.split(z, [1, 1], 1) - logw = z0 - return logw - - -class DurationPredictor(nn.Module): - def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, gin_channels=0): - super().__init__() - - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.gin_channels = gin_channels - - self.drop = nn.Dropout(p_dropout) - self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size, padding=kernel_size//2) - self.norm_1 = modules.LayerNorm(filter_channels) - self.conv_2 = nn.Conv1d(filter_channels, filter_channels, kernel_size, padding=kernel_size//2) - self.norm_2 = modules.LayerNorm(filter_channels) - self.proj = nn.Conv1d(filter_channels, 1, 1) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, 
in_channels, 1) - - def forward(self, x, x_mask, g=None): - x = torch.detach(x) - if g is not None: - g = torch.detach(g) - x = x + self.cond(g) - x = self.conv_1(x * x_mask) - x = torch.relu(x) - x = self.norm_1(x) - x = self.drop(x) - x = self.conv_2(x * x_mask) - x = torch.relu(x) - x = self.norm_2(x) - x = self.drop(x) - x = self.proj(x * x_mask) - return x * x_mask - - -class TextEncoder(nn.Module): - def __init__(self, - n_vocab, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout): - super().__init__() - self.n_vocab = n_vocab - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - - self.emb = nn.Embedding(n_vocab, hidden_channels) - nn.init.normal_(self.emb.weight, 0.0, hidden_channels**-0.5) - - self.encoder = attentions.Encoder( - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout) - self.proj= nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths): - x = self.emb(x) * math.sqrt(self.hidden_channels) # [b, t, h] - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype) - - x = self.encoder(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - - m, logs = torch.split(stats, self.out_channels, dim=1) - return x, m, logs, x_mask - - -class ResidualCouplingBlock(nn.Module): - def __init__(self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - n_flows=4, - gin_channels=0): - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.flows = nn.ModuleList() - for i in range(n_flows): - self.flows.append(modules.ResidualCouplingLayer(channels, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels, mean_only=True)) - self.flows.append(modules.Flip()) - - def forward(self, x, x_mask, g=None, reverse=False): - if not reverse: - for flow in self.flows: - x, _ = flow(x, x_mask, g=g, reverse=reverse) - else: - for flow in reversed(self.flows): - x = flow(x, x_mask, g=g, reverse=reverse) - return x - - -class PosteriorEncoder(nn.Module): - def __init__(self, - in_channels, - out_channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - - self.pre = nn.Conv1d(in_channels, hidden_channels, 1) - self.enc = modules.WN(hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths, g=None): - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype) - x = self.pre(x) * x_mask - x = self.enc(x, x_mask, g=g) - stats = self.proj(x) * x_mask - m, logs = torch.split(stats, self.out_channels, dim=1) - z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask - return z, m, logs, x_mask - -class iSTFT_Generator(torch.nn.Module): - def __init__(self, initial_channel, 
resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates, upsample_initial_channel, upsample_kernel_sizes, gen_istft_n_fft, gen_istft_hop_size, gin_channels=0): - super(iSTFT_Generator, self).__init__() - # self.h = h - self.gen_istft_n_fft = gen_istft_n_fft - self.gen_istft_hop_size = gen_istft_hop_size - - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - self.conv_pre = weight_norm(Conv1d(initial_channel, upsample_initial_channel, 7, 1, padding=3)) - resblock = modules.ResBlock1 if resblock == '1' else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - self.ups.append(weight_norm( - ConvTranspose1d(upsample_initial_channel//(2**i), upsample_initial_channel//(2**(i+1)), - k, u, padding=(k-u)//2))) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel//(2**(i+1)) - for j, (k, d) in enumerate(zip(resblock_kernel_sizes, resblock_dilation_sizes)): - self.resblocks.append(resblock(ch, k, d)) - - self.post_n_fft = self.gen_istft_n_fft - self.conv_post = weight_norm(Conv1d(ch, self.post_n_fft + 2, 7, 1, padding=3)) - self.ups.apply(init_weights) - self.conv_post.apply(init_weights) - self.reflection_pad = torch.nn.ReflectionPad1d((1, 0)) - self.stft = TorchSTFT(filter_length=self.gen_istft_n_fft, hop_length=self.gen_istft_hop_size, win_length=self.gen_istft_n_fft) - def forward(self, x, g=None): - - x = self.conv_pre(x) - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i*self.num_kernels+j](x) - else: - xs += self.resblocks[i*self.num_kernels+j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.reflection_pad(x) - x = self.conv_post(x) - spec = torch.exp(x[:,:self.post_n_fft // 2 + 1, :]) - phase = math.pi*torch.sin(x[:, self.post_n_fft // 2 + 1:, :]) - out = self.stft.inverse(spec, phase).to(x.device) - return out, None - - def remove_weight_norm(self): - print('Removing weight norm...') - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - remove_weight_norm(self.conv_pre) - remove_weight_norm(self.conv_post) - - -class Multiband_iSTFT_Generator(torch.nn.Module): - def __init__(self, initial_channel, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates, upsample_initial_channel, upsample_kernel_sizes, gen_istft_n_fft, gen_istft_hop_size, subbands, gin_channels=0): - super(Multiband_iSTFT_Generator, self).__init__() - # self.h = h - self.subbands = subbands - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - self.conv_pre = weight_norm(Conv1d(initial_channel, upsample_initial_channel, 7, 1, padding=3)) - resblock = modules.ResBlock1 if resblock == '1' else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - self.ups.append(weight_norm( - ConvTranspose1d(upsample_initial_channel//(2**i), upsample_initial_channel//(2**(i+1)), - k, u, padding=(k-u)//2))) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel//(2**(i+1)) - for j, (k, d) in enumerate(zip(resblock_kernel_sizes, resblock_dilation_sizes)): - self.resblocks.append(resblock(ch, k, d)) - - self.post_n_fft = gen_istft_n_fft - self.ups.apply(init_weights) - self.reflection_pad = 
torch.nn.ReflectionPad1d((1, 0)) - self.reshape_pixelshuffle = [] - - self.subband_conv_post = weight_norm(Conv1d(ch, self.subbands*(self.post_n_fft + 2), 7, 1, padding=3)) - - self.subband_conv_post.apply(init_weights) - - self.gen_istft_n_fft = gen_istft_n_fft - self.gen_istft_hop_size = gen_istft_hop_size - - - def forward(self, x, g=None): - stft = TorchSTFT(filter_length=self.gen_istft_n_fft, hop_length=self.gen_istft_hop_size, win_length=self.gen_istft_n_fft).to(x.device) - pqmf = PQMF(x.device) - - x = self.conv_pre(x)#[B, ch, length] - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - - - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i*self.num_kernels+j](x) - else: - xs += self.resblocks[i*self.num_kernels+j](x) - x = xs / self.num_kernels - - x = F.leaky_relu(x) - x = self.reflection_pad(x) - x = self.subband_conv_post(x) - x = torch.reshape(x, (x.shape[0], self.subbands, x.shape[1]//self.subbands, x.shape[-1])) - - spec = torch.exp(x[:,:,:self.post_n_fft // 2 + 1, :]) - phase = math.pi*torch.sin(x[:,:, self.post_n_fft // 2 + 1:, :]) - - y_mb_hat = stft.inverse(torch.reshape(spec, (spec.shape[0]*self.subbands, self.gen_istft_n_fft // 2 + 1, spec.shape[-1])), torch.reshape(phase, (phase.shape[0]*self.subbands, self.gen_istft_n_fft // 2 + 1, phase.shape[-1]))) - y_mb_hat = torch.reshape(y_mb_hat, (x.shape[0], self.subbands, 1, y_mb_hat.shape[-1])) - y_mb_hat = y_mb_hat.squeeze(-2) - - y_g_hat = pqmf.synthesis(y_mb_hat) - - return y_g_hat, y_mb_hat - - def remove_weight_norm(self): - print('Removing weight norm...') - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -class Multistream_iSTFT_Generator(torch.nn.Module): - def __init__(self, initial_channel, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates, upsample_initial_channel, upsample_kernel_sizes, gen_istft_n_fft, gen_istft_hop_size, subbands, gin_channels=0): - super(Multistream_iSTFT_Generator, self).__init__() - # self.h = h - self.subbands = subbands - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - self.conv_pre = weight_norm(Conv1d(initial_channel, upsample_initial_channel, 7, 1, padding=3)) - resblock = modules.ResBlock1 if resblock == '1' else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - self.ups.append(weight_norm( - ConvTranspose1d(upsample_initial_channel//(2**i), upsample_initial_channel//(2**(i+1)), - k, u, padding=(k-u)//2))) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel//(2**(i+1)) - for j, (k, d) in enumerate(zip(resblock_kernel_sizes, resblock_dilation_sizes)): - self.resblocks.append(resblock(ch, k, d)) - - self.post_n_fft = gen_istft_n_fft - self.ups.apply(init_weights) - self.reflection_pad = torch.nn.ReflectionPad1d((1, 0)) - self.reshape_pixelshuffle = [] - - self.subband_conv_post = weight_norm(Conv1d(ch, self.subbands*(self.post_n_fft + 2), 7, 1, padding=3)) - - self.subband_conv_post.apply(init_weights) - - self.gen_istft_n_fft = gen_istft_n_fft - self.gen_istft_hop_size = gen_istft_hop_size - - updown_filter = torch.zeros((self.subbands, self.subbands, self.subbands)).float() - for k in range(self.subbands): - updown_filter[k, k, 0] = 1.0 - self.register_buffer("updown_filter", updown_filter) - self.multistream_conv_post = weight_norm(Conv1d(4, 1, kernel_size=63, 
bias=False, padding=get_padding(63, 1))) - self.multistream_conv_post.apply(init_weights) - - - - def forward(self, x, g=None): - stft = TorchSTFT(filter_length=self.gen_istft_n_fft, hop_length=self.gen_istft_hop_size, win_length=self.gen_istft_n_fft).to(x.device) - # pqmf = PQMF(x.device) - - x = self.conv_pre(x)#[B, ch, length] - - for i in range(self.num_upsamples): - - - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - - - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i*self.num_kernels+j](x) - else: - xs += self.resblocks[i*self.num_kernels+j](x) - x = xs / self.num_kernels - - x = F.leaky_relu(x) - x = self.reflection_pad(x) - x = self.subband_conv_post(x) - x = torch.reshape(x, (x.shape[0], self.subbands, x.shape[1]//self.subbands, x.shape[-1])) - - spec = torch.exp(x[:,:,:self.post_n_fft // 2 + 1, :]) - phase = math.pi*torch.sin(x[:,:, self.post_n_fft // 2 + 1:, :]) - - y_mb_hat = stft.inverse(torch.reshape(spec, (spec.shape[0]*self.subbands, self.gen_istft_n_fft // 2 + 1, spec.shape[-1])), torch.reshape(phase, (phase.shape[0]*self.subbands, self.gen_istft_n_fft // 2 + 1, phase.shape[-1]))) - y_mb_hat = torch.reshape(y_mb_hat, (x.shape[0], self.subbands, 1, y_mb_hat.shape[-1])) - y_mb_hat = y_mb_hat.squeeze(-2) - - y_mb_hat = F.conv_transpose1d(y_mb_hat, self.updown_filter.to(x.device) * self.subbands, stride=self.subbands) - - y_g_hat = self.multistream_conv_post(y_mb_hat) - - return y_g_hat, y_mb_hat - - def remove_weight_norm(self): - print('Removing weight norm...') - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -class DiscriminatorP(torch.nn.Module): - def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False): - super(DiscriminatorP, self).__init__() - self.period = period - self.use_spectral_norm = use_spectral_norm - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv2d(1, 32, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(32, 128, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(128, 512, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(512, 1024, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(1024, 1024, (kernel_size, 1), 1, padding=(get_padding(kernel_size, 1), 0))), - ]) - self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0))) - - def forward(self, x): - fmap = [] - - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), "reflect") - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class DiscriminatorS(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(DiscriminatorS, self).__init__() - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv1d(1, 16, 15, 1, padding=7)), - norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)), - norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)), - norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)), - norm_f(Conv1d(1024, 1024, 5, 1, 
padding=2)), - ]) - self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1)) - - def forward(self, x): - fmap = [] - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class MultiPeriodDiscriminator(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminator, self).__init__() - periods = [2,3,5,7,11] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - - -class SynthesizerTrn(nn.Module): - """ - Synthesizer for Training - """ - - def __init__(self, - n_vocab, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gen_istft_n_fft, - gen_istft_hop_size, - n_speakers=0, - gin_channels=0, - use_sdp=False, - ms_istft_vits=False, - mb_istft_vits = False, - subbands = False, - istft_vits=False, - **kwargs): - - super().__init__() - self.n_vocab = n_vocab - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.n_speakers = n_speakers - self.gin_channels = gin_channels - self.ms_istft_vits = ms_istft_vits - self.mb_istft_vits = mb_istft_vits - self.istft_vits = istft_vits - - self.use_sdp = use_sdp - - self.enc_p = TextEncoder(n_vocab, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout) - if mb_istft_vits == True: - print('Mutli-band iSTFT VITS') - self.dec = Multiband_iSTFT_Generator(inter_channels, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates, upsample_initial_channel, upsample_kernel_sizes, gen_istft_n_fft, gen_istft_hop_size, subbands, gin_channels=gin_channels) - elif ms_istft_vits == True: - print('Mutli-stream iSTFT VITS') - self.dec = Multistream_iSTFT_Generator(inter_channels, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates, upsample_initial_channel, upsample_kernel_sizes, gen_istft_n_fft, gen_istft_hop_size, subbands, gin_channels=gin_channels) - elif istft_vits == True: - print('iSTFT-VITS') - self.dec = iSTFT_Generator(inter_channels, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates, upsample_initial_channel, upsample_kernel_sizes, gen_istft_n_fft, gen_istft_hop_size, gin_channels=gin_channels) - else: - print('Decoder Error in json file') - - self.enc_q = 
PosteriorEncoder(spec_channels, inter_channels, hidden_channels, 5, 1, 16, gin_channels=gin_channels) - self.flow = ResidualCouplingBlock(inter_channels, hidden_channels, 5, 1, 4, gin_channels=gin_channels) - - if use_sdp: - self.dp = StochasticDurationPredictor(hidden_channels, 192, 3, 0.5, 4, gin_channels=gin_channels) - else: - self.dp = DurationPredictor(hidden_channels, 256, 3, 0.5, gin_channels=gin_channels) - - if n_speakers > 1: - self.emb_g = nn.Embedding(n_speakers, gin_channels) - - def forward(self, x, x_lengths, y, y_lengths, sid=None): - - x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths) - if self.n_speakers > 0: - g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1] - else: - g = None - - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, g=g) - - with torch.no_grad(): - # negative cross-entropy - s_p_sq_r = torch.exp(-2 * logs_p) # [b, d, t] - neg_cent1 = torch.sum(-0.5 * math.log(2 * math.pi) - logs_p, [1], keepdim=True) # [b, 1, t_s] - neg_cent2 = torch.matmul(-0.5 * (z_p ** 2).transpose(1, 2), s_p_sq_r) # [b, t_t, d] x [b, d, t_s] = [b, t_t, t_s] - neg_cent3 = torch.matmul(z_p.transpose(1, 2), (m_p * s_p_sq_r)) # [b, t_t, d] x [b, d, t_s] = [b, t_t, t_s] - neg_cent4 = torch.sum(-0.5 * (m_p ** 2) * s_p_sq_r, [1], keepdim=True) # [b, 1, t_s] - neg_cent = neg_cent1 + neg_cent2 + neg_cent3 + neg_cent4 - - attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1) - attn = monotonic_align.maximum_path(neg_cent, attn_mask.squeeze(1)).unsqueeze(1).detach() - - w = attn.sum(2) - if self.use_sdp: - l_length = self.dp(x, x_mask, w, g=g) - l_length = l_length / torch.sum(x_mask) - else: - logw_ = torch.log(w + 1e-6) * x_mask - logw = self.dp(x, x_mask, g=g) - l_length = torch.sum((logw - logw_)**2, [1,2]) / torch.sum(x_mask) # for averaging - - # expand prior - m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(1, 2) - logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1, 2) - - z_slice, ids_slice = commons.rand_slice_segments(z, y_lengths, self.segment_size) - o, o_mb = self.dec(z_slice, g=g) - return o, o_mb, l_length, attn, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q) - - def infer(self, x, x_lengths, sid=None, noise_scale=1, length_scale=1, noise_scale_w=1., max_len=None): - x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths) - if self.n_speakers > 0: - g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1] - else: - g = None - - if self.use_sdp: - logw = self.dp(x, x_mask, g=g, reverse=True, noise_scale=noise_scale_w) - else: - logw = self.dp(x, x_mask, g=g) - w = torch.exp(logw) * x_mask * length_scale - w_ceil = torch.ceil(w) - y_lengths = torch.clamp_min(torch.sum(w_ceil, [1, 2]), 1).long() - y_mask = torch.unsqueeze(commons.sequence_mask(y_lengths, None), 1).to(x_mask.dtype) - attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1) - attn = commons.generate_path(w_ceil, attn_mask) - - m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(1, 2) # [b, t', t], [b, t, d] -> [b, d, t'] - logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1, 2) # [b, t', t], [b, t, d] -> [b, d, t'] - - z_p = m_p + torch.randn_like(m_p) * torch.exp(logs_p) * noise_scale - z = self.flow(z_p, y_mask, g=g, reverse=True) - o, o_mb = self.dec((z * y_mask)[:,:,:max_len], g=g) - return o, o_mb, attn, y_mask, (z, z_p, m_p, logs_p) - - def voice_conversion(self, y, y_lengths, sid_src, sid_tgt): - assert self.n_speakers > 0, "n_speakers have to be larger than 0." 
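- # Voice conversion: encode the source utterance with the source speaker
- # embedding, push it through the flow into the prior space, then invert the
- # flow with the target speaker embedding and decode with that embedding.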
- g_src = self.emb_g(sid_src).unsqueeze(-1) - g_tgt = self.emb_g(sid_tgt).unsqueeze(-1) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g_src) - z_p = self.flow(z, y_mask, g=g_src) - z_hat = self.flow(z_p, y_mask, g=g_tgt, reverse=True) - o_hat, o_hat_mb = self.dec(z_hat * y_mask, g=g_tgt) - return o_hat, o_hat_mb, y_mask, (z, z_p, z_hat) - diff --git a/spaces/isyslab/NeuroPred-PLM/NeuroPredPLM/model.py b/spaces/isyslab/NeuroPred-PLM/NeuroPredPLM/model.py deleted file mode 100644 index b55bd7d34b8fdca57f4584b76508b0a5b3e7d777..0000000000000000000000000000000000000000 --- a/spaces/isyslab/NeuroPred-PLM/NeuroPredPLM/model.py +++ /dev/null @@ -1,54 +0,0 @@ -""" -main model -""" -import torch -from torch import nn -import numpy as np -import torch.nn.functional as F -from einops import rearrange -import os - -from .utils import length_to_mask, load_model_and_alphabet_core - - -class EsmModel(nn.Module): - def __init__(self, hidden_size=64, num_labels=2, projection_size=24, head=12): - super().__init__() - - basedir = os.path.abspath(os.path.dirname(__file__)) - self.esm, self.alphabet = load_model_and_alphabet_core(os.path.join(basedir, 'args.pt')) - self.num_labels = num_labels - self.head = head - self.hidden_size = hidden_size - self.projection = nn.Linear(hidden_size, projection_size) - self.cov_1 = nn.Conv1d(projection_size, projection_size, kernel_size=3, padding='same') - self.cov_2 = nn.Conv1d(projection_size, int(projection_size/2), kernel_size=1, padding='same') - # self.gating = nn.Linear(projection_size, projection_size) - self.W = nn.Parameter(torch.randn((head, int(projection_size/2)))) - # self.mu = nn.Parameter(torch.randn((1, 768))) - self.fcn = nn.Sequential(nn.Linear(int(projection_size/2)*head, int(projection_size/2)), - nn.ReLU(), nn.Linear(int(projection_size/2), num_labels)) - - - def forward(self, peptide_list, device='cpu'): - peptide_length = [len(i[1]) for i in peptide_list] - batch_converter = self.alphabet.get_batch_converter() - _, _, batch_tokens = batch_converter(peptide_list) - batch_tokens = batch_tokens.to(device) - protein_dict = self.esm(batch_tokens, repr_layers=[12], return_contacts=False) - protein_embeddings = protein_dict["representations"][12][:, 1:, :] - protein_embed = rearrange(protein_embeddings, 'b l (h d)-> (b h) l d', h=self.head) - representations = self.projection(protein_embed) - representations = rearrange(representations, 'b l d -> b d l') - representation_cov = F.relu(self.cov_1(representations)) - representation_cov = F.relu(self.cov_2(representation_cov)) - representations = rearrange(representation_cov, '(b h) d l -> b h l d', h=self.head) - att = torch.einsum('bhld,hd->bhl', representations, self.W) - mask = length_to_mask(torch.tensor(peptide_length)).to(device).int() - att = att.masked_fill(mask.unsqueeze(1)==0, -np.inf) - att= F.softmax(att, dim=-1) - representations = rearrange(representations * att.unsqueeze(-1), 'b h l d -> b l (h d)') - representations = torch.sum(representations, dim=1) - return self.fcn(representations), att - - diff --git a/spaces/jackli888/stable-diffusion-webui/extensions/deforum/scripts/deforum_helpers/src/clipseg/evaluation_utils.py b/spaces/jackli888/stable-diffusion-webui/extensions/deforum/scripts/deforum_helpers/src/clipseg/evaluation_utils.py deleted file mode 100644 index 8f913a98ad910db386838463908141fb9dcef442..0000000000000000000000000000000000000000 --- a/spaces/jackli888/stable-diffusion-webui/extensions/deforum/scripts/deforum_helpers/src/clipseg/evaluation_utils.py +++ /dev/null @@ 
-1,292 +0,0 @@ -from torch.functional import Tensor -from general_utils import load_model -from torch.utils.data import DataLoader -import torch -import numpy as np - -def denorm(img): - - np_input = False - if isinstance(img, np.ndarray): - img = torch.from_numpy(img) - np_input = True - - mean = torch.Tensor([0.485, 0.456, 0.406]) - std = torch.Tensor([0.229, 0.224, 0.225]) - - img_denorm = (img*std[:,None,None]) + mean[:,None,None] - - if np_input: - img_denorm = np.clip(img_denorm.numpy(), 0, 1) - else: - img_denorm = torch.clamp(img_denorm, 0, 1) - - return img_denorm - - -def norm(img): - mean = torch.Tensor([0.485, 0.456, 0.406]) - std = torch.Tensor([0.229, 0.224, 0.225]) - return (img - mean[:,None,None]) / std[:,None,None] - - -def fast_iou_curve(p, g): - - g = g[p.sort().indices] - p = torch.sigmoid(p.sort().values) - - scores = [] - vals = np.linspace(0, 1, 50) - - for q in vals: - - n = int(len(g) * q) - - valid = torch.where(p > q)[0] - if len(valid) > 0: - n = int(valid[0]) - else: - n = len(g) - - fn = g[:n].sum() - tn = n - fn - tp = g[n:].sum() - fp = len(g) - n - tp - - iou = tp / (tp + fn + fp) - - precision = tp / (tp + fp) - recall = tp / (tp + fn) - - scores += [iou] - - return vals, scores - - -def fast_rp_curve(p, g): - - g = g[p.sort().indices] - p = torch.sigmoid(p.sort().values) - - precisions, recalls = [], [] - vals = np.linspace(p.min(), p.max(), 250) - - for q in p[::100000]: - - n = int(len(g) * q) - - valid = torch.where(p > q)[0] - if len(valid) > 0: - n = int(valid[0]) - else: - n = len(g) - - fn = g[:n].sum() - tn = n - fn - tp = g[n:].sum() - fp = len(g) - n - tp - - iou = tp / (tp + fn + fp) - - precision = tp / (tp + fp) - recall = tp / (tp + fn) - - precisions += [precision] - recalls += [recall] - - return recalls, precisions - - -# Image processing - -def img_preprocess(batch, blur=0, grayscale=False, center_context=None, rect=False, rect_color=(255,0,0), rect_width=2, - brightness=1.0, bg_fac=1, colorize=False, outline=False, image_size=224): - import cv2 - - rw = rect_width - - out = [] - for img, mask in zip(batch[1], batch[2]): - - img = img.cpu() if isinstance(img, torch.Tensor) else torch.from_numpy(img) - mask = mask.cpu() if isinstance(mask, torch.Tensor) else torch.from_numpy(mask) - - img *= brightness - img_bl = img - if blur > 0: # best 5 - img_bl = torch.from_numpy(cv2.GaussianBlur(img.permute(1,2,0).numpy(), (15, 15), blur)).permute(2,0,1) - - if grayscale: - img_bl = img_bl[1][None] - - #img_inp = img_ratio*img*mask + (1-img_ratio)*img_bl - # img_inp = img_ratio*img*mask + (1-img_ratio)*img_bl * (1-mask) - img_inp = img*mask + (bg_fac) * img_bl * (1-mask) - - if rect: - _, bbox = crop_mask(img, mask, context=0.1) - img_inp[:, bbox[2]: bbox[3], max(0, bbox[0]-rw):bbox[0]+rw] = torch.tensor(rect_color)[:,None,None] - img_inp[:, bbox[2]: bbox[3], max(0, bbox[1]-rw):bbox[1]+rw] = torch.tensor(rect_color)[:,None,None] - img_inp[:, max(0, bbox[2]-1): bbox[2]+rw, bbox[0]:bbox[1]] = torch.tensor(rect_color)[:,None,None] - img_inp[:, max(0, bbox[3]-1): bbox[3]+rw, bbox[0]:bbox[1]] = torch.tensor(rect_color)[:,None,None] - - - if center_context is not None: - img_inp = object_crop(img_inp, mask, context=center_context, image_size=image_size) - - if colorize: - img_gray = denorm(img) - img_gray = cv2.cvtColor(img_gray.permute(1,2,0).numpy(), cv2.COLOR_RGB2GRAY) - img_gray = torch.stack([torch.from_numpy(img_gray)]*3) - img_inp = torch.tensor([1,0.2,0.2])[:,None,None] * img_gray * mask + bg_fac * img_gray * (1-mask) - img_inp = norm(img_inp) - 
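- # Optionally trace the mask outline: extract the external contours of the
- # binary mask and overlay them in red on top of the denormalized image.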
- if outline: - cont = cv2.findContours(mask.byte().numpy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE) - outline_img = np.zeros(mask.shape, dtype=np.uint8) - cv2.drawContours(outline_img, cont[0], -1, thickness=5, color=(255, 255, 255)) - outline_img = torch.stack([torch.from_numpy(outline_img)]*3).float() / 255. - img_inp = torch.tensor([1,0,0])[:,None,None] * outline_img + denorm(img_inp) * (1- outline_img) - img_inp = norm(img_inp) - - out += [img_inp] - - return torch.stack(out) - - -def object_crop(img, mask, context=0.0, square=False, image_size=224): - img_crop, bbox = crop_mask(img, mask, context=context, square=square) - img_crop = pad_to_square(img_crop, channel_dim=0) - img_crop = torch.nn.functional.interpolate(img_crop.unsqueeze(0), (image_size, image_size)).squeeze(0) - return img_crop - - -def crop_mask(img, mask, context=0.0, square=False): - - assert img.shape[1:] == mask.shape - - bbox = [mask.max(0).values.argmax(), mask.size(0) - mask.max(0).values.flip(0).argmax()] - bbox += [mask.max(1).values.argmax(), mask.size(1) - mask.max(1).values.flip(0).argmax()] - bbox = [int(x) for x in bbox] - - width, height = (bbox[3] - bbox[2]), (bbox[1] - bbox[0]) - - # square mask - if square: - bbox[0] = int(max(0, bbox[0] - context * height)) - bbox[1] = int(min(mask.size(0), bbox[1] + context * height)) - bbox[2] = int(max(0, bbox[2] - context * width)) - bbox[3] = int(min(mask.size(1), bbox[3] + context * width)) - - width, height = (bbox[3] - bbox[2]), (bbox[1] - bbox[0]) - if height > width: - bbox[2] = int(max(0, (bbox[2] - 0.5*height))) - bbox[3] = bbox[2] + height - else: - bbox[0] = int(max(0, (bbox[0] - 0.5*width))) - bbox[1] = bbox[0] + width - else: - bbox[0] = int(max(0, bbox[0] - context * height)) - bbox[1] = int(min(mask.size(0), bbox[1] + context * height)) - bbox[2] = int(max(0, bbox[2] - context * width)) - bbox[3] = int(min(mask.size(1), bbox[3] + context * width)) - - width, height = (bbox[3] - bbox[2]), (bbox[1] - bbox[0]) - img_crop = img[:, bbox[2]: bbox[3], bbox[0]: bbox[1]] - return img_crop, bbox - - -def pad_to_square(img, channel_dim=2, fill=0): - """ - - - add padding such that a squared image is returned """ - - from torchvision.transforms.functional import pad - - if channel_dim == 2: - img = img.permute(2, 0, 1) - elif channel_dim == 0: - pass - else: - raise ValueError('invalid channel_dim') - - h, w = img.shape[1:] - pady1 = pady2 = padx1 = padx2 = 0 - - if h > w: - padx1 = (h - w) // 2 - padx2 = h - w - padx1 - elif w > h: - pady1 = (w - h) // 2 - pady2 = w - h - pady1 - - img_padded = pad(img, padding=(padx1, pady1, padx2, pady2), padding_mode='constant') - - if channel_dim == 2: - img_padded = img_padded.permute(1, 2, 0) - - return img_padded - - -# qualitative - -def split_sentence(inp, limit=9): - t_new, current_len = [], 0 - for k, t in enumerate(inp.split(' ')): - current_len += len(t) + 1 - t_new += [t+' '] - # not last - if current_len > limit and k != len(inp.split(' ')) - 1: - current_len = 0 - t_new += ['\n'] - - t_new = ''.join(t_new) - return t_new - - -from matplotlib import pyplot as plt - - -def plot(imgs, *preds, labels=None, scale=1, cmap=plt.cm.magma, aps=None, gt_labels=None, vmax=None): - - row_off = 0 if labels is None else 1 - _, ax = plt.subplots(len(imgs) + row_off, 1 + len(preds), figsize=(scale * float(1 + 2*len(preds)), scale * float(len(imgs)*2))) - [a.axis('off') for a in ax.flatten()] - - if labels is not None: - for j in range(len(labels)): - t_new = split_sentence(labels[j], limit=6) - ax[0, 1+ j].text(0.5, 0.1, 
t_new, ha='center', fontsize=3+ 10*scale) - - - for i in range(len(imgs)): - ax[i + row_off,0].imshow(imgs[i]) - for j in range(len(preds)): - img = preds[j][i][0].detach().cpu().numpy() - - if gt_labels is not None and labels[j] == gt_labels[i]: - print(j, labels[j], gt_labels[i]) - edgecolor = 'red' - if aps is not None: - ax[i + row_off, 1 + j].text(30, 70, f'AP: {aps[i]:.3f}', color='red', fontsize=8) - else: - edgecolor = 'k' - - rect = plt.Rectangle([0,0], img.shape[0], img.shape[1], facecolor="none", - edgecolor=edgecolor, linewidth=3) - ax[i + row_off,1 + j].add_patch(rect) - - if vmax is None: - this_vmax = 1 - elif vmax == 'per_prompt': - this_vmax = max([preds[j][_i][0].max() for _i in range(len(imgs))]) - elif vmax == 'per_image': - this_vmax = max([preds[_j][i][0].max() for _j in range(len(preds))]) - - ax[i + row_off,1 + j].imshow(img, vmin=0, vmax=this_vmax, cmap=cmap) - - - # ax[i,1 + j].imshow(preds[j][i][0].detach().cpu().numpy(), vmin=preds[j].min(), vmax=preds[j].max()) - plt.tight_layout() - plt.subplots_adjust(wspace=0.05, hspace=0.05) \ No newline at end of file diff --git a/spaces/jbilcke-hf/MusicGen/setup.py b/spaces/jbilcke-hf/MusicGen/setup.py deleted file mode 100644 index 78a172b7c90003b689bde40b49cc8fe1fb8107d4..0000000000000000000000000000000000000000 --- a/spaces/jbilcke-hf/MusicGen/setup.py +++ /dev/null @@ -1,65 +0,0 @@ -""" - Copyright (c) Meta Platforms, Inc. and affiliates. - All rights reserved. - - This source code is licensed under the license found in the - LICENSE file in the root directory of this source tree. - -""" - -from pathlib import Path - -from setuptools import setup, find_packages - - -NAME = 'audiocraft' -DESCRIPTION = 'Audio research library for PyTorch' - -URL = 'https://github.com/fairinternal/audiocraft' -AUTHOR = 'FAIR Speech & Audio' -EMAIL = 'defossez@meta.com' -REQUIRES_PYTHON = '>=3.8.0' - -for line in open('audiocraft/__init__.py'): - line = line.strip() - if '__version__' in line: - context = {} - exec(line, context) - VERSION = context['__version__'] - -HERE = Path(__file__).parent - -try: - with open(HERE / "README.md", encoding='utf-8') as f: - long_description = '\n' + f.read() -except FileNotFoundError: - long_description = DESCRIPTION - -REQUIRED = [i.strip() for i in open(HERE / 'requirements.txt') if not i.startswith('#')] - -setup( - name=NAME, - version=VERSION, - description=DESCRIPTION, - author_email=EMAIL, - long_description=long_description, - long_description_content_type='text/markdown', - author=AUTHOR, - url=URL, - python_requires=REQUIRES_PYTHON, - install_requires=REQUIRED, - extras_require={ - 'dev': ['coverage', 'flake8', 'mypy', 'pdoc3', 'pytest'], - }, - packages=find_packages(), - package_data={'audiocraft': ['py.typed']}, - include_package_data=True, - license='MIT License', - classifiers=[ - # Trove classifiers - # Full list: https://pypi.python.org/pypi?%3Aaction=list_classifiers - 'License :: OSI Approved :: MIT License', - 'Topic :: Multimedia :: Sound/Audio', - 'Topic :: Scientific/Engineering :: Artificial Intelligence', - ], -) diff --git a/spaces/jbilcke-hf/ai-clip-factory/src/components/ui/alert.tsx b/spaces/jbilcke-hf/ai-clip-factory/src/components/ui/alert.tsx deleted file mode 100644 index f589783193a6cfe14032a77b89055cb3e920fe8c..0000000000000000000000000000000000000000 --- a/spaces/jbilcke-hf/ai-clip-factory/src/components/ui/alert.tsx +++ /dev/null @@ -1,59 +0,0 @@ -import * as React from "react" -import { cva, type VariantProps } from "class-variance-authority" - -import { cn } from 
"@/lib/utils" - -const alertVariants = cva( - "relative w-full rounded-lg border border-stone-200 p-4 [&:has(svg)]:pl-11 [&>svg+div]:translate-y-[-3px] [&>svg]:absolute [&>svg]:left-4 [&>svg]:top-4 [&>svg]:text-stone-950 dark:border-stone-800 dark:[&>svg]:text-stone-50", - { - variants: { - variant: { - default: "bg-white text-stone-950 dark:bg-stone-950 dark:text-stone-50", - destructive: - "border-red-500/50 text-red-500 dark:border-red-500 [&>svg]:text-red-500 dark:border-red-900/50 dark:text-red-900 dark:dark:border-red-900 dark:[&>svg]:text-red-900", - }, - }, - defaultVariants: { - variant: "default", - }, - } -) - -const Alert = React.forwardRef< - HTMLDivElement, - React.HTMLAttributes & VariantProps ->(({ className, variant, ...props }, ref) => ( -
-))
-Alert.displayName = "Alert"
-
-const AlertTitle = React.forwardRef<
-  HTMLParagraphElement,
-  React.HTMLAttributes<HTMLHeadingElement>
->(({ className, ...props }, ref) => (
-  <h5
-    ref={ref}
-    className={cn("mb-1 font-medium leading-none tracking-tight", className)}
-    {...props}
-  />
-))
-AlertTitle.displayName = "AlertTitle"
-
-const AlertDescription = React.forwardRef<
-  HTMLParagraphElement,
-  React.HTMLAttributes<HTMLParagraphElement>
->(({ className, ...props }, ref) => (
-  <div
-    ref={ref}
-    className={cn("text-sm [&_p]:leading-relaxed", className)}
-    {...props}
-  />
        -)) -AlertDescription.displayName = "AlertDescription" - -export { Alert, AlertTitle, AlertDescription } diff --git a/spaces/jeang/ernie_demo_toy/ernie/models.py b/spaces/jeang/ernie_demo_toy/ernie/models.py deleted file mode 100644 index 81596b8aee3051d7d758ecefb1b4b6d00ea821e2..0000000000000000000000000000000000000000 --- a/spaces/jeang/ernie_demo_toy/ernie/models.py +++ /dev/null @@ -1,51 +0,0 @@ -#!/usr/bin/env python -# -*- coding: utf-8 -*- - - -class Models: - BertBaseUncased = 'bert-base-uncased' - BertBaseCased = 'bert-base-cased' - BertLargeUncased = 'bert-large-uncased' - BertLargeCased = 'bert-large-cased' - - RobertaBaseCased = 'roberta-base' - RobertaLargeCased = 'roberta-large' - - XLNetBaseCased = 'xlnet-base-cased' - XLNetLargeCased = 'xlnet-large-cased' - - DistilBertBaseUncased = 'distilbert-base-uncased' - DistilBertBaseMultilingualCased = 'distilbert-base-multilingual-cased' - - AlbertBaseCased = 'albert-base-v1' - AlbertLargeCased = 'albert-large-v1' - AlbertXLargeCased = 'albert-xlarge-v1' - AlbertXXLargeCased = 'albert-xxlarge-v1' - - AlbertBaseCased2 = 'albert-base-v2' - AlbertLargeCased2 = 'albert-large-v2' - AlbertXLargeCased2 = 'albert-xlarge-v2' - AlbertXXLargeCased2 = 'albert-xxlarge-v2' - - -class ModelsByFamily: - Bert = set([Models.BertBaseUncased, Models.BertBaseCased, - Models.BertLargeUncased, Models.BertLargeCased]) - Roberta = set([Models.RobertaBaseCased, Models.RobertaLargeCased]) - XLNet = set([Models.XLNetBaseCased, Models.XLNetLargeCased]) - DistilBert = set([Models.DistilBertBaseUncased, - Models.DistilBertBaseMultilingualCased]) - Albert = set([ - Models.AlbertBaseCased, - Models.AlbertLargeCased, - Models.AlbertXLargeCased, - Models.AlbertXXLargeCased, - Models.AlbertBaseCased2, - Models.AlbertLargeCased2, - Models.AlbertXLargeCased2, - Models.AlbertXXLargeCased2 - ]) - Supported = set([ - getattr(Models, model_type) for model_type - in filter(lambda x: x[:2] != '__', Models.__dict__.keys()) - ]) diff --git a/spaces/jgurzoni/image_background_swapper/saicinpainting/training/modules/fake_fakes.py b/spaces/jgurzoni/image_background_swapper/saicinpainting/training/modules/fake_fakes.py deleted file mode 100644 index 45c4ad559cef2730b771a709197e00ae1c87683c..0000000000000000000000000000000000000000 --- a/spaces/jgurzoni/image_background_swapper/saicinpainting/training/modules/fake_fakes.py +++ /dev/null @@ -1,47 +0,0 @@ -import torch -from kornia import SamplePadding -from kornia.augmentation import RandomAffine, CenterCrop - - -class FakeFakesGenerator: - def __init__(self, aug_proba=0.5, img_aug_degree=30, img_aug_translate=0.2): - self.grad_aug = RandomAffine(degrees=360, - translate=0.2, - padding_mode=SamplePadding.REFLECTION, - keepdim=False, - p=1) - self.img_aug = RandomAffine(degrees=img_aug_degree, - translate=img_aug_translate, - padding_mode=SamplePadding.REFLECTION, - keepdim=True, - p=1) - self.aug_proba = aug_proba - - def __call__(self, input_images, masks): - blend_masks = self._fill_masks_with_gradient(masks) - blend_target = self._make_blend_target(input_images) - result = input_images * (1 - blend_masks) + blend_target * blend_masks - return result, blend_masks - - def _make_blend_target(self, input_images): - batch_size = input_images.shape[0] - permuted = input_images[torch.randperm(batch_size)] - augmented = self.img_aug(input_images) - is_aug = (torch.rand(batch_size, device=input_images.device)[:, None, None, None] < self.aug_proba).float() - result = augmented * is_aug + permuted * (1 - is_aug) - return result - - 
def _fill_masks_with_gradient(self, masks): - batch_size, _, height, width = masks.shape - grad = torch.linspace(0, 1, steps=width * 2, device=masks.device, dtype=masks.dtype) \ - .view(1, 1, 1, -1).expand(batch_size, 1, height * 2, width * 2) - grad = self.grad_aug(grad) - grad = CenterCrop((height, width))(grad) - grad *= masks - - grad_for_min = grad + (1 - masks) * 10 - grad -= grad_for_min.view(batch_size, -1).min(-1).values[:, None, None, None] - grad /= grad.view(batch_size, -1).max(-1).values[:, None, None, None] + 1e-6 - grad.clamp_(min=0, max=1) - - return grad diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/bson/json_util.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/bson/json_util.py deleted file mode 100644 index 82604f382f9fbc5ba1df4e05a43dc3918bfc7671..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/bson/json_util.py +++ /dev/null @@ -1,918 +0,0 @@ -# Copyright 2009-present MongoDB, Inc. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -"""Tools for using Python's :mod:`json` module with BSON documents. - -This module provides two helper methods `dumps` and `loads` that wrap the -native :mod:`json` methods and provide explicit BSON conversion to and from -JSON. :class:`~bson.json_util.JSONOptions` provides a way to control how JSON -is emitted and parsed, with the default being the Relaxed Extended JSON format. -:mod:`~bson.json_util` can also generate Canonical or legacy `Extended JSON`_ -when :const:`CANONICAL_JSON_OPTIONS` or :const:`LEGACY_JSON_OPTIONS` is -provided, respectively. - -.. _Extended JSON: https://github.com/mongodb/specifications/blob/master/source/extended-json.rst - -Example usage (deserialization): - -.. doctest:: - - >>> from bson.json_util import loads - >>> loads( - ... '[{"foo": [1, 2]}, {"bar": {"hello": "world"}}, {"code": {"$scope": {}, "$code": "function x() { return 1; }"}}, {"bin": {"$type": "80", "$binary": "AQIDBA=="}}]' - ... ) - [{'foo': [1, 2]}, {'bar': {'hello': 'world'}}, {'code': Code('function x() { return 1; }', {})}, {'bin': Binary(b'...', 128)}] - -Example usage with :const:`RELAXED_JSON_OPTIONS` (the default): - -.. doctest:: - - >>> from bson import Binary, Code - >>> from bson.json_util import dumps - >>> dumps( - ... [ - ... {"foo": [1, 2]}, - ... {"bar": {"hello": "world"}}, - ... {"code": Code("function x() { return 1; }")}, - ... {"bin": Binary(b"\x01\x02\x03\x04")}, - ... ] - ... ) - '[{"foo": [1, 2]}, {"bar": {"hello": "world"}}, {"code": {"$code": "function x() { return 1; }"}}, {"bin": {"$binary": {"base64": "AQIDBA==", "subType": "00"}}}]' - -Example usage (with :const:`CANONICAL_JSON_OPTIONS`): - -.. doctest:: - - >>> from bson import Binary, Code - >>> from bson.json_util import dumps, CANONICAL_JSON_OPTIONS - >>> dumps( - ... [ - ... {"foo": [1, 2]}, - ... {"bar": {"hello": "world"}}, - ... {"code": Code("function x() { return 1; }")}, - ... {"bin": Binary(b"\x01\x02\x03\x04")}, - ... ], - ... 
json_options=CANONICAL_JSON_OPTIONS, - ... ) - '[{"foo": [{"$numberInt": "1"}, {"$numberInt": "2"}]}, {"bar": {"hello": "world"}}, {"code": {"$code": "function x() { return 1; }"}}, {"bin": {"$binary": {"base64": "AQIDBA==", "subType": "00"}}}]' - -Example usage (with :const:`LEGACY_JSON_OPTIONS`): - -.. doctest:: - - >>> from bson import Binary, Code - >>> from bson.json_util import dumps, LEGACY_JSON_OPTIONS - >>> dumps( - ... [ - ... {"foo": [1, 2]}, - ... {"bar": {"hello": "world"}}, - ... {"code": Code("function x() { return 1; }", {})}, - ... {"bin": Binary(b"\x01\x02\x03\x04")}, - ... ], - ... json_options=LEGACY_JSON_OPTIONS, - ... ) - '[{"foo": [1, 2]}, {"bar": {"hello": "world"}}, {"code": {"$code": "function x() { return 1; }", "$scope": {}}}, {"bin": {"$binary": "AQIDBA==", "$type": "00"}}]' - -Alternatively, you can manually pass the `default` to :func:`json.dumps`. -It won't handle :class:`~bson.binary.Binary` and :class:`~bson.code.Code` -instances (as they are extended strings you can't provide custom defaults), -but it will be faster as there is less recursion. - -.. note:: - If your application does not need the flexibility offered by - :class:`JSONOptions` and spends a large amount of time in the `json_util` - module, look to - `python-bsonjs `_ for a nice - performance improvement. `python-bsonjs` is a fast BSON to MongoDB - Extended JSON converter for Python built on top of - `libbson `_. `python-bsonjs` works best - with PyMongo when using :class:`~bson.raw_bson.RawBSONDocument`. -""" - -import base64 -import datetime -import json -import math -import re -import uuid -from typing import Any, Dict, Mapping, Optional, Sequence, Tuple, Type, Union, cast - -from bson.binary import ALL_UUID_SUBTYPES, UUID_SUBTYPE, Binary, UuidRepresentation -from bson.code import Code -from bson.codec_options import CodecOptions, DatetimeConversion -from bson.datetime_ms import ( - EPOCH_AWARE, - DatetimeMS, - _datetime_to_millis, - _max_datetime_ms, - _millis_to_datetime, -) -from bson.dbref import DBRef -from bson.decimal128 import Decimal128 -from bson.int64 import Int64 -from bson.max_key import MaxKey -from bson.min_key import MinKey -from bson.objectid import ObjectId -from bson.regex import Regex -from bson.son import RE_TYPE, SON -from bson.timestamp import Timestamp -from bson.tz_util import utc - -_RE_OPT_TABLE = { - "i": re.I, - "l": re.L, - "m": re.M, - "s": re.S, - "u": re.U, - "x": re.X, -} - - -class DatetimeRepresentation: - LEGACY = 0 - """Legacy MongoDB Extended JSON datetime representation. - - :class:`datetime.datetime` instances will be encoded to JSON in the - format `{"$date": }`, where `dateAsMilliseconds` is - a 64-bit signed integer giving the number of milliseconds since the Unix - epoch UTC. This was the default encoding before PyMongo version 3.4. - - .. versionadded:: 3.4 - """ - - NUMBERLONG = 1 - """NumberLong datetime representation. - - :class:`datetime.datetime` instances will be encoded to JSON in the - format `{"$date": {"$numberLong": ""}}`, - where `dateAsMilliseconds` is the string representation of a 64-bit signed - integer giving the number of milliseconds since the Unix epoch UTC. - - .. versionadded:: 3.4 - """ - - ISO8601 = 2 - """ISO-8601 datetime representation. - - :class:`datetime.datetime` instances greater than or equal to the Unix - epoch UTC will be encoded to JSON in the format `{"$date": ""}`. 
- :class:`datetime.datetime` instances before the Unix epoch UTC will be - encoded as if the datetime representation is - :const:`~DatetimeRepresentation.NUMBERLONG`. - - .. versionadded:: 3.4 - """ - - -class JSONMode: - LEGACY = 0 - """Legacy Extended JSON representation. - - In this mode, :func:`~bson.json_util.dumps` produces PyMongo's legacy - non-standard JSON output. Consider using - :const:`~bson.json_util.JSONMode.RELAXED` or - :const:`~bson.json_util.JSONMode.CANONICAL` instead. - - .. versionadded:: 3.5 - """ - - RELAXED = 1 - """Relaxed Extended JSON representation. - - In this mode, :func:`~bson.json_util.dumps` produces Relaxed Extended JSON, - a mostly JSON-like format. Consider using this for things like a web API, - where one is sending a document (or a projection of a document) that only - uses ordinary JSON type primitives. In particular, the ``int``, - :class:`~bson.int64.Int64`, and ``float`` numeric types are represented in - the native JSON number format. This output is also the most human readable - and is useful for debugging and documentation. - - .. seealso:: The specification for Relaxed `Extended JSON`_. - - .. versionadded:: 3.5 - """ - - CANONICAL = 2 - """Canonical Extended JSON representation. - - In this mode, :func:`~bson.json_util.dumps` produces Canonical Extended - JSON, a type preserving format. Consider using this for things like - testing, where one has to precisely specify expected types in JSON. In - particular, the ``int``, :class:`~bson.int64.Int64`, and ``float`` numeric - types are encoded with type wrappers. - - .. seealso:: The specification for Canonical `Extended JSON`_. - - .. versionadded:: 3.5 - """ - - -class JSONOptions(CodecOptions): - json_mode: int - strict_number_long: bool - datetime_representation: int - strict_uuid: bool - - def __init__(self, *args: Any, **kwargs: Any): - """Encapsulates JSON options for :func:`dumps` and :func:`loads`. - - :Parameters: - - `strict_number_long`: If ``True``, :class:`~bson.int64.Int64` objects - are encoded to MongoDB Extended JSON's *Strict mode* type - `NumberLong`, ie ``'{"$numberLong": "" }'``. Otherwise they - will be encoded as an `int`. Defaults to ``False``. - - `datetime_representation`: The representation to use when encoding - instances of :class:`datetime.datetime`. Defaults to - :const:`~DatetimeRepresentation.LEGACY`. - - `strict_uuid`: If ``True``, :class:`uuid.UUID` object are encoded to - MongoDB Extended JSON's *Strict mode* type `Binary`. Otherwise it - will be encoded as ``'{"$uuid": "" }'``. Defaults to ``False``. - - `json_mode`: The :class:`JSONMode` to use when encoding BSON types to - Extended JSON. Defaults to :const:`~JSONMode.LEGACY`. - - `document_class`: BSON documents returned by :func:`loads` will be - decoded to an instance of this class. Must be a subclass of - :class:`collections.MutableMapping`. Defaults to :class:`dict`. - - `uuid_representation`: The :class:`~bson.binary.UuidRepresentation` - to use when encoding and decoding instances of :class:`uuid.UUID`. - Defaults to :const:`~bson.binary.UuidRepresentation.UNSPECIFIED`. - - `tz_aware`: If ``True``, MongoDB Extended JSON's *Strict mode* type - `Date` will be decoded to timezone aware instances of - :class:`datetime.datetime`. Otherwise they will be naive. Defaults - to ``False``. - - `tzinfo`: A :class:`datetime.tzinfo` subclass that specifies the - timezone from which :class:`~datetime.datetime` objects should be - decoded. Defaults to :const:`~bson.tz_util.utc`. 
- - `datetime_conversion`: Specifies how UTC datetimes should be decoded - within BSON. Valid options include 'datetime_ms' to return as a - DatetimeMS, 'datetime' to return as a datetime.datetime and - raising a ValueError for out-of-range values, 'datetime_auto' to - return DatetimeMS objects when the underlying datetime is - out-of-range and 'datetime_clamp' to clamp to the minimum and - maximum possible datetimes. Defaults to 'datetime'. See - :ref:`handling-out-of-range-datetimes` for details. - - `args`: arguments to :class:`~bson.codec_options.CodecOptions` - - `kwargs`: arguments to :class:`~bson.codec_options.CodecOptions` - - .. seealso:: The specification for Relaxed and Canonical `Extended JSON`_. - - .. versionchanged:: 4.0 - The default for `json_mode` was changed from :const:`JSONMode.LEGACY` - to :const:`JSONMode.RELAXED`. - The default for `uuid_representation` was changed from - :const:`~bson.binary.UuidRepresentation.PYTHON_LEGACY` to - :const:`~bson.binary.UuidRepresentation.UNSPECIFIED`. - - .. versionchanged:: 3.5 - Accepts the optional parameter `json_mode`. - - .. versionchanged:: 4.0 - Changed default value of `tz_aware` to False. - """ - super().__init__() - - def __new__( - cls: Type["JSONOptions"], - strict_number_long: Optional[bool] = None, - datetime_representation: Optional[int] = None, - strict_uuid: Optional[bool] = None, - json_mode: int = JSONMode.RELAXED, - *args: Any, - **kwargs: Any, - ) -> "JSONOptions": - kwargs["tz_aware"] = kwargs.get("tz_aware", False) - if kwargs["tz_aware"]: - kwargs["tzinfo"] = kwargs.get("tzinfo", utc) - if datetime_representation not in ( - DatetimeRepresentation.LEGACY, - DatetimeRepresentation.NUMBERLONG, - DatetimeRepresentation.ISO8601, - None, - ): - raise ValueError( - "JSONOptions.datetime_representation must be one of LEGACY, " - "NUMBERLONG, or ISO8601 from DatetimeRepresentation." - ) - self = cast(JSONOptions, super().__new__(cls, *args, **kwargs)) - if json_mode not in (JSONMode.LEGACY, JSONMode.RELAXED, JSONMode.CANONICAL): - raise ValueError( - "JSONOptions.json_mode must be one of LEGACY, RELAXED, " - "or CANONICAL from JSONMode." - ) - self.json_mode = json_mode - if self.json_mode == JSONMode.RELAXED: - if strict_number_long: - raise ValueError("Cannot specify strict_number_long=True with JSONMode.RELAXED") - if datetime_representation not in (None, DatetimeRepresentation.ISO8601): - raise ValueError( - "datetime_representation must be DatetimeRepresentation." - "ISO8601 or omitted with JSONMode.RELAXED" - ) - if strict_uuid not in (None, True): - raise ValueError("Cannot specify strict_uuid=False with JSONMode.RELAXED") - self.strict_number_long = False - self.datetime_representation = DatetimeRepresentation.ISO8601 - self.strict_uuid = True - elif self.json_mode == JSONMode.CANONICAL: - if strict_number_long not in (None, True): - raise ValueError("Cannot specify strict_number_long=False with JSONMode.RELAXED") - if datetime_representation not in (None, DatetimeRepresentation.NUMBERLONG): - raise ValueError( - "datetime_representation must be DatetimeRepresentation." 
- "NUMBERLONG or omitted with JSONMode.RELAXED" - ) - if strict_uuid not in (None, True): - raise ValueError("Cannot specify strict_uuid=False with JSONMode.RELAXED") - self.strict_number_long = True - self.datetime_representation = DatetimeRepresentation.NUMBERLONG - self.strict_uuid = True - else: # JSONMode.LEGACY - self.strict_number_long = False - self.datetime_representation = DatetimeRepresentation.LEGACY - self.strict_uuid = False - if strict_number_long is not None: - self.strict_number_long = strict_number_long - if datetime_representation is not None: - self.datetime_representation = datetime_representation - if strict_uuid is not None: - self.strict_uuid = strict_uuid - return self - - def _arguments_repr(self) -> str: - return ( - "strict_number_long={!r}, " - "datetime_representation={!r}, " - "strict_uuid={!r}, json_mode={!r}, {}".format( - self.strict_number_long, - self.datetime_representation, - self.strict_uuid, - self.json_mode, - super()._arguments_repr(), - ) - ) - - def _options_dict(self) -> Dict[Any, Any]: - # TODO: PYTHON-2442 use _asdict() instead - options_dict = super()._options_dict() - options_dict.update( - { - "strict_number_long": self.strict_number_long, - "datetime_representation": self.datetime_representation, - "strict_uuid": self.strict_uuid, - "json_mode": self.json_mode, - } - ) - return options_dict - - def with_options(self, **kwargs: Any) -> "JSONOptions": - """ - Make a copy of this JSONOptions, overriding some options:: - - >>> from bson.json_util import CANONICAL_JSON_OPTIONS - >>> CANONICAL_JSON_OPTIONS.tz_aware - True - >>> json_options = CANONICAL_JSON_OPTIONS.with_options(tz_aware=False, tzinfo=None) - >>> json_options.tz_aware - False - - .. versionadded:: 3.12 - """ - opts = self._options_dict() - for opt in ("strict_number_long", "datetime_representation", "strict_uuid", "json_mode"): - opts[opt] = kwargs.get(opt, getattr(self, opt)) - opts.update(kwargs) - return JSONOptions(**opts) - - -LEGACY_JSON_OPTIONS: JSONOptions = JSONOptions(json_mode=JSONMode.LEGACY) -""":class:`JSONOptions` for encoding to PyMongo's legacy JSON format. - -.. seealso:: The documentation for :const:`bson.json_util.JSONMode.LEGACY`. - -.. versionadded:: 3.5 -""" - -CANONICAL_JSON_OPTIONS: JSONOptions = JSONOptions(json_mode=JSONMode.CANONICAL) -""":class:`JSONOptions` for Canonical Extended JSON. - -.. seealso:: The documentation for :const:`bson.json_util.JSONMode.CANONICAL`. - -.. versionadded:: 3.5 -""" - -RELAXED_JSON_OPTIONS: JSONOptions = JSONOptions(json_mode=JSONMode.RELAXED) -""":class:`JSONOptions` for Relaxed Extended JSON. - -.. seealso:: The documentation for :const:`bson.json_util.JSONMode.RELAXED`. - -.. versionadded:: 3.5 -""" - -DEFAULT_JSON_OPTIONS: JSONOptions = RELAXED_JSON_OPTIONS -"""The default :class:`JSONOptions` for JSON encoding/decoding. - -The same as :const:`RELAXED_JSON_OPTIONS`. - -.. versionchanged:: 4.0 - Changed from :const:`LEGACY_JSON_OPTIONS` to - :const:`RELAXED_JSON_OPTIONS`. - -.. versionadded:: 3.4 -""" - - -def dumps(obj: Any, *args: Any, **kwargs: Any) -> str: - """Helper function that wraps :func:`json.dumps`. - - Recursive function that handles all BSON types including - :class:`~bson.binary.Binary` and :class:`~bson.code.Code`. - - :Parameters: - - `json_options`: A :class:`JSONOptions` instance used to modify the - encoding of MongoDB Extended JSON types. Defaults to - :const:`DEFAULT_JSON_OPTIONS`. - - .. 
versionchanged:: 4.0 - Now outputs MongoDB Relaxed Extended JSON by default (using - :const:`DEFAULT_JSON_OPTIONS`). - - .. versionchanged:: 3.4 - Accepts optional parameter `json_options`. See :class:`JSONOptions`. - """ - json_options = kwargs.pop("json_options", DEFAULT_JSON_OPTIONS) - return json.dumps(_json_convert(obj, json_options), *args, **kwargs) - - -def loads(s: Union[str, bytes, bytearray], *args: Any, **kwargs: Any) -> Any: - """Helper function that wraps :func:`json.loads`. - - Automatically passes the object_hook for BSON type conversion. - - Raises ``TypeError``, ``ValueError``, ``KeyError``, or - :exc:`~bson.errors.InvalidId` on invalid MongoDB Extended JSON. - - :Parameters: - - `json_options`: A :class:`JSONOptions` instance used to modify the - decoding of MongoDB Extended JSON types. Defaults to - :const:`DEFAULT_JSON_OPTIONS`. - - .. versionchanged:: 4.0 - Now loads :class:`datetime.datetime` instances as naive by default. To - load timezone aware instances utilize the `json_options` parameter. - See :ref:`tz_aware_default_change` for an example. - - .. versionchanged:: 3.5 - Parses Relaxed and Canonical Extended JSON as well as PyMongo's legacy - format. Now raises ``TypeError`` or ``ValueError`` when parsing JSON - type wrappers with values of the wrong type or any extra keys. - - .. versionchanged:: 3.4 - Accepts optional parameter `json_options`. See :class:`JSONOptions`. - """ - json_options = kwargs.pop("json_options", DEFAULT_JSON_OPTIONS) - kwargs["object_pairs_hook"] = lambda pairs: object_pairs_hook(pairs, json_options) - return json.loads(s, *args, **kwargs) - - -def _json_convert(obj: Any, json_options: JSONOptions = DEFAULT_JSON_OPTIONS) -> Any: - """Recursive helper method that converts BSON types so they can be - converted into json. 
- """ - if hasattr(obj, "items"): - return SON(((k, _json_convert(v, json_options)) for k, v in obj.items())) - elif hasattr(obj, "__iter__") and not isinstance(obj, (str, bytes)): - return [_json_convert(v, json_options) for v in obj] - try: - return default(obj, json_options) - except TypeError: - return obj - - -def object_pairs_hook( - pairs: Sequence[Tuple[str, Any]], json_options: JSONOptions = DEFAULT_JSON_OPTIONS -) -> Any: - return object_hook(json_options.document_class(pairs), json_options) - - -def object_hook(dct: Mapping[str, Any], json_options: JSONOptions = DEFAULT_JSON_OPTIONS) -> Any: - if "$oid" in dct: - return _parse_canonical_oid(dct) - if ( - isinstance(dct.get("$ref"), str) - and "$id" in dct - and isinstance(dct.get("$db"), (str, type(None))) - ): - return _parse_canonical_dbref(dct) - if "$date" in dct: - return _parse_canonical_datetime(dct, json_options) - if "$regex" in dct: - return _parse_legacy_regex(dct) - if "$minKey" in dct: - return _parse_canonical_minkey(dct) - if "$maxKey" in dct: - return _parse_canonical_maxkey(dct) - if "$binary" in dct: - if "$type" in dct: - return _parse_legacy_binary(dct, json_options) - else: - return _parse_canonical_binary(dct, json_options) - if "$code" in dct: - return _parse_canonical_code(dct) - if "$uuid" in dct: - return _parse_legacy_uuid(dct, json_options) - if "$undefined" in dct: - return None - if "$numberLong" in dct: - return _parse_canonical_int64(dct) - if "$timestamp" in dct: - tsp = dct["$timestamp"] - return Timestamp(tsp["t"], tsp["i"]) - if "$numberDecimal" in dct: - return _parse_canonical_decimal128(dct) - if "$dbPointer" in dct: - return _parse_canonical_dbpointer(dct) - if "$regularExpression" in dct: - return _parse_canonical_regex(dct) - if "$symbol" in dct: - return _parse_canonical_symbol(dct) - if "$numberInt" in dct: - return _parse_canonical_int32(dct) - if "$numberDouble" in dct: - return _parse_canonical_double(dct) - return dct - - -def _parse_legacy_regex(doc: Any) -> Any: - pattern = doc["$regex"] - # Check if this is the $regex query operator. - if not isinstance(pattern, (str, bytes)): - return doc - flags = 0 - # PyMongo always adds $options but some other tools may not. - for opt in doc.get("$options", ""): - flags |= _RE_OPT_TABLE.get(opt, 0) - return Regex(pattern, flags) - - -def _parse_legacy_uuid(doc: Any, json_options: JSONOptions) -> Union[Binary, uuid.UUID]: - """Decode a JSON legacy $uuid to Python UUID.""" - if len(doc) != 1: - raise TypeError(f"Bad $uuid, extra field(s): {doc}") - if not isinstance(doc["$uuid"], str): - raise TypeError(f"$uuid must be a string: {doc}") - if json_options.uuid_representation == UuidRepresentation.UNSPECIFIED: - return Binary.from_uuid(uuid.UUID(doc["$uuid"])) - else: - return uuid.UUID(doc["$uuid"]) - - -def _binary_or_uuid(data: Any, subtype: int, json_options: JSONOptions) -> Union[Binary, uuid.UUID]: - # special handling for UUID - if subtype in ALL_UUID_SUBTYPES: - uuid_representation = json_options.uuid_representation - binary_value = Binary(data, subtype) - if uuid_representation == UuidRepresentation.UNSPECIFIED: - return binary_value - if subtype == UUID_SUBTYPE: - # Legacy behavior: use STANDARD with binary subtype 4. - uuid_representation = UuidRepresentation.STANDARD - elif uuid_representation == UuidRepresentation.STANDARD: - # subtype == OLD_UUID_SUBTYPE - # Legacy behavior: STANDARD is the same as PYTHON_LEGACY. 
- uuid_representation = UuidRepresentation.PYTHON_LEGACY - return binary_value.as_uuid(uuid_representation) - - if subtype == 0: - return cast(uuid.UUID, data) - return Binary(data, subtype) - - -def _parse_legacy_binary(doc: Any, json_options: JSONOptions) -> Union[Binary, uuid.UUID]: - if isinstance(doc["$type"], int): - doc["$type"] = "%02x" % doc["$type"] - subtype = int(doc["$type"], 16) - if subtype >= 0xFFFFFF80: # Handle mongoexport values - subtype = int(doc["$type"][6:], 16) - data = base64.b64decode(doc["$binary"].encode()) - return _binary_or_uuid(data, subtype, json_options) - - -def _parse_canonical_binary(doc: Any, json_options: JSONOptions) -> Union[Binary, uuid.UUID]: - binary = doc["$binary"] - b64 = binary["base64"] - subtype = binary["subType"] - if not isinstance(b64, str): - raise TypeError(f"$binary base64 must be a string: {doc}") - if not isinstance(subtype, str) or len(subtype) > 2: - raise TypeError(f"$binary subType must be a string at most 2 characters: {doc}") - if len(binary) != 2: - raise TypeError(f'$binary must include only "base64" and "subType" components: {doc}') - - data = base64.b64decode(b64.encode()) - return _binary_or_uuid(data, int(subtype, 16), json_options) - - -def _parse_canonical_datetime( - doc: Any, json_options: JSONOptions -) -> Union[datetime.datetime, DatetimeMS]: - """Decode a JSON datetime to python datetime.datetime.""" - dtm = doc["$date"] - if len(doc) != 1: - raise TypeError(f"Bad $date, extra field(s): {doc}") - # mongoexport 2.6 and newer - if isinstance(dtm, str): - # Parse offset - if dtm[-1] == "Z": - dt = dtm[:-1] - offset = "Z" - elif dtm[-6] in ("+", "-") and dtm[-3] == ":": - # (+|-)HH:MM - dt = dtm[:-6] - offset = dtm[-6:] - elif dtm[-5] in ("+", "-"): - # (+|-)HHMM - dt = dtm[:-5] - offset = dtm[-5:] - elif dtm[-3] in ("+", "-"): - # (+|-)HH - dt = dtm[:-3] - offset = dtm[-3:] - else: - dt = dtm - offset = "" - - # Parse the optional factional seconds portion. 
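- # e.g. for a $date string ending in "...T00:00:00.500" the ".500" suffix
- # becomes microsecond = 500000 before the offset is applied.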
- dot_index = dt.rfind(".") - microsecond = 0 - if dot_index != -1: - microsecond = int(float(dt[dot_index:]) * 1000000) - dt = dt[:dot_index] - - aware = datetime.datetime.strptime(dt, "%Y-%m-%dT%H:%M:%S").replace( - microsecond=microsecond, tzinfo=utc - ) - - if offset and offset != "Z": - if len(offset) == 6: - hours, minutes = offset[1:].split(":") - secs = int(hours) * 3600 + int(minutes) * 60 - elif len(offset) == 5: - secs = int(offset[1:3]) * 3600 + int(offset[3:]) * 60 - elif len(offset) == 3: - secs = int(offset[1:3]) * 3600 - if offset[0] == "-": - secs *= -1 - aware = aware - datetime.timedelta(seconds=secs) - - if json_options.tz_aware: - if json_options.tzinfo: - aware = aware.astimezone(json_options.tzinfo) - if json_options.datetime_conversion == DatetimeConversion.DATETIME_MS: - return DatetimeMS(aware) - return aware - else: - aware_tzinfo_none = aware.replace(tzinfo=None) - if json_options.datetime_conversion == DatetimeConversion.DATETIME_MS: - return DatetimeMS(aware_tzinfo_none) - return aware_tzinfo_none - return _millis_to_datetime(int(dtm), json_options) - - -def _parse_canonical_oid(doc: Any) -> ObjectId: - """Decode a JSON ObjectId to bson.objectid.ObjectId.""" - if len(doc) != 1: - raise TypeError(f"Bad $oid, extra field(s): {doc}") - return ObjectId(doc["$oid"]) - - -def _parse_canonical_symbol(doc: Any) -> str: - """Decode a JSON symbol to Python string.""" - symbol = doc["$symbol"] - if len(doc) != 1: - raise TypeError(f"Bad $symbol, extra field(s): {doc}") - return str(symbol) - - -def _parse_canonical_code(doc: Any) -> Code: - """Decode a JSON code to bson.code.Code.""" - for key in doc: - if key not in ("$code", "$scope"): - raise TypeError(f"Bad $code, extra field(s): {doc}") - return Code(doc["$code"], scope=doc.get("$scope")) - - -def _parse_canonical_regex(doc: Any) -> Regex: - """Decode a JSON regex to bson.regex.Regex.""" - regex = doc["$regularExpression"] - if len(doc) != 1: - raise TypeError(f"Bad $regularExpression, extra field(s): {doc}") - if len(regex) != 2: - raise TypeError( - 'Bad $regularExpression must include only "pattern"' - 'and "options" components: {}'.format(doc) - ) - opts = regex["options"] - if not isinstance(opts, str): - raise TypeError( - "Bad $regularExpression options, options must be string, was type %s" % (type(opts)) - ) - return Regex(regex["pattern"], opts) - - -def _parse_canonical_dbref(doc: Any) -> DBRef: - """Decode a JSON DBRef to bson.dbref.DBRef.""" - return DBRef(doc.pop("$ref"), doc.pop("$id"), database=doc.pop("$db", None), **doc) - - -def _parse_canonical_dbpointer(doc: Any) -> Any: - """Decode a JSON (deprecated) DBPointer to bson.dbref.DBRef.""" - dbref = doc["$dbPointer"] - if len(doc) != 1: - raise TypeError(f"Bad $dbPointer, extra field(s): {doc}") - if isinstance(dbref, DBRef): - dbref_doc = dbref.as_doc() - # DBPointer must not contain $db in its value. 
- if dbref.database is not None: - raise TypeError(f"Bad $dbPointer, extra field $db: {dbref_doc}") - if not isinstance(dbref.id, ObjectId): - raise TypeError(f"Bad $dbPointer, $id must be an ObjectId: {dbref_doc}") - if len(dbref_doc) != 2: - raise TypeError(f"Bad $dbPointer, extra field(s) in DBRef: {dbref_doc}") - return dbref - else: - raise TypeError(f"Bad $dbPointer, expected a DBRef: {doc}") - - -def _parse_canonical_int32(doc: Any) -> int: - """Decode a JSON int32 to python int.""" - i_str = doc["$numberInt"] - if len(doc) != 1: - raise TypeError(f"Bad $numberInt, extra field(s): {doc}") - if not isinstance(i_str, str): - raise TypeError(f"$numberInt must be string: {doc}") - return int(i_str) - - -def _parse_canonical_int64(doc: Any) -> Int64: - """Decode a JSON int64 to bson.int64.Int64.""" - l_str = doc["$numberLong"] - if len(doc) != 1: - raise TypeError(f"Bad $numberLong, extra field(s): {doc}") - return Int64(l_str) - - -def _parse_canonical_double(doc: Any) -> float: - """Decode a JSON double to python float.""" - d_str = doc["$numberDouble"] - if len(doc) != 1: - raise TypeError(f"Bad $numberDouble, extra field(s): {doc}") - if not isinstance(d_str, str): - raise TypeError(f"$numberDouble must be string: {doc}") - return float(d_str) - - -def _parse_canonical_decimal128(doc: Any) -> Decimal128: - """Decode a JSON decimal128 to bson.decimal128.Decimal128.""" - d_str = doc["$numberDecimal"] - if len(doc) != 1: - raise TypeError(f"Bad $numberDecimal, extra field(s): {doc}") - if not isinstance(d_str, str): - raise TypeError(f"$numberDecimal must be string: {doc}") - return Decimal128(d_str) - - -def _parse_canonical_minkey(doc: Any) -> MinKey: - """Decode a JSON MinKey to bson.min_key.MinKey.""" - if type(doc["$minKey"]) is not int or doc["$minKey"] != 1: - raise TypeError(f"$minKey value must be 1: {doc}") - if len(doc) != 1: - raise TypeError(f"Bad $minKey, extra field(s): {doc}") - return MinKey() - - -def _parse_canonical_maxkey(doc: Any) -> MaxKey: - """Decode a JSON MaxKey to bson.max_key.MaxKey.""" - if type(doc["$maxKey"]) is not int or doc["$maxKey"] != 1: - raise TypeError("$maxKey value must be 1: %s", (doc,)) - if len(doc) != 1: - raise TypeError(f"Bad $minKey, extra field(s): {doc}") - return MaxKey() - - -def _encode_binary(data: bytes, subtype: int, json_options: JSONOptions) -> Any: - if json_options.json_mode == JSONMode.LEGACY: - return SON([("$binary", base64.b64encode(data).decode()), ("$type", "%02x" % subtype)]) - return { - "$binary": SON([("base64", base64.b64encode(data).decode()), ("subType", "%02x" % subtype)]) - } - - -def default(obj: Any, json_options: JSONOptions = DEFAULT_JSON_OPTIONS) -> Any: - # We preserve key order when rendering SON, DBRef, etc. as JSON by - # returning a SON for those types instead of a dict. 
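- # Dispatch on the BSON/Python type and emit the matching Extended JSON
- # wrapper, e.g. {"$oid": ...}, {"$date": ...} or {"$regularExpression": ...}.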
- if isinstance(obj, ObjectId): - return {"$oid": str(obj)} - if isinstance(obj, DBRef): - return _json_convert(obj.as_doc(), json_options=json_options) - if isinstance(obj, datetime.datetime): - if json_options.datetime_representation == DatetimeRepresentation.ISO8601: - if not obj.tzinfo: - obj = obj.replace(tzinfo=utc) - assert obj.tzinfo is not None - if obj >= EPOCH_AWARE: - off = obj.tzinfo.utcoffset(obj) - if (off.days, off.seconds, off.microseconds) == (0, 0, 0): # type: ignore - tz_string = "Z" - else: - tz_string = obj.strftime("%z") - millis = int(obj.microsecond / 1000) - fracsecs = ".%03d" % (millis,) if millis else "" - return { - "$date": "{}{}{}".format(obj.strftime("%Y-%m-%dT%H:%M:%S"), fracsecs, tz_string) - } - - millis = _datetime_to_millis(obj) - if json_options.datetime_representation == DatetimeRepresentation.LEGACY: - return {"$date": millis} - return {"$date": {"$numberLong": str(millis)}} - if isinstance(obj, DatetimeMS): - if ( - json_options.datetime_representation == DatetimeRepresentation.ISO8601 - and 0 <= int(obj) <= _max_datetime_ms() - ): - return default(obj.as_datetime(), json_options) - elif json_options.datetime_representation == DatetimeRepresentation.LEGACY: - return {"$date": str(int(obj))} - return {"$date": {"$numberLong": str(int(obj))}} - if json_options.strict_number_long and isinstance(obj, Int64): - return {"$numberLong": str(obj)} - if isinstance(obj, (RE_TYPE, Regex)): - flags = "" - if obj.flags & re.IGNORECASE: - flags += "i" - if obj.flags & re.LOCALE: - flags += "l" - if obj.flags & re.MULTILINE: - flags += "m" - if obj.flags & re.DOTALL: - flags += "s" - if obj.flags & re.UNICODE: - flags += "u" - if obj.flags & re.VERBOSE: - flags += "x" - if isinstance(obj.pattern, str): - pattern = obj.pattern - else: - pattern = obj.pattern.decode("utf-8") - if json_options.json_mode == JSONMode.LEGACY: - return SON([("$regex", pattern), ("$options", flags)]) - return {"$regularExpression": SON([("pattern", pattern), ("options", flags)])} - if isinstance(obj, MinKey): - return {"$minKey": 1} - if isinstance(obj, MaxKey): - return {"$maxKey": 1} - if isinstance(obj, Timestamp): - return {"$timestamp": SON([("t", obj.time), ("i", obj.inc)])} - if isinstance(obj, Code): - if obj.scope is None: - return {"$code": str(obj)} - return SON([("$code", str(obj)), ("$scope", _json_convert(obj.scope, json_options))]) - if isinstance(obj, Binary): - return _encode_binary(obj, obj.subtype, json_options) - if isinstance(obj, bytes): - return _encode_binary(obj, 0, json_options) - if isinstance(obj, uuid.UUID): - if json_options.strict_uuid: - binval = Binary.from_uuid(obj, uuid_representation=json_options.uuid_representation) - return _encode_binary(binval, binval.subtype, json_options) - else: - return {"$uuid": obj.hex} - if isinstance(obj, Decimal128): - return {"$numberDecimal": str(obj)} - if isinstance(obj, bool): - return obj - if json_options.json_mode == JSONMode.CANONICAL and isinstance(obj, int): - if -(2**31) <= obj < 2**31: - return {"$numberInt": str(obj)} - return {"$numberLong": str(obj)} - if json_options.json_mode != JSONMode.LEGACY and isinstance(obj, float): - if math.isnan(obj): - return {"$numberDouble": "NaN"} - elif math.isinf(obj): - representation = "Infinity" if obj > 0 else "-Infinity" - return {"$numberDouble": representation} - elif json_options.json_mode == JSONMode.CANONICAL: - # repr() will return the shortest string guaranteed to produce the - # original value, when float() is called on it. 
- return {"$numberDouble": str(repr(obj))} - raise TypeError("%r is not JSON serializable" % obj) diff --git a/spaces/jordonpeter01/MusicGen/audiocraft/modules/transformer.py b/spaces/jordonpeter01/MusicGen/audiocraft/modules/transformer.py deleted file mode 100644 index e69cca829d774d0b8b36c0de9b7924373da81b43..0000000000000000000000000000000000000000 --- a/spaces/jordonpeter01/MusicGen/audiocraft/modules/transformer.py +++ /dev/null @@ -1,747 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -""" -Transformer model, with streaming support, xformer attention support -and easy causal attention with a potentially finite receptive field. - -See `StreamingTransformer` for more information. - -Unlike regular PyTorch Transformer, we make the hard choice that batches are first. -""" - -import typing as tp - -from einops import rearrange -import torch -import torch.nn as nn -from torch.nn import functional as F -from torch.utils.checkpoint import checkpoint as torch_checkpoint -from xformers import ops - -from .rope import RotaryEmbedding -from .streaming import StreamingModule - -_efficient_attention_backend: str = 'torch' - - -def set_efficient_attention_backend(backend: str = 'torch'): - # Using torch by default, it seems a bit faster on older P100 GPUs (~20% faster). - global _efficient_attention_backend - assert _efficient_attention_backend in ['xformers', 'torch'] - _efficient_attention_backend = backend - - -def _get_attention_time_dimension() -> int: - if _efficient_attention_backend == 'torch': - return 2 - else: - return 1 - - -def _is_profiled() -> bool: - # Return true if we are currently running with a xformers profiler activated. - try: - from xformers.profiler import profiler - except ImportError: - return False - return profiler._Profiler._CURRENT_PROFILER is not None - - -def create_norm_fn(norm_type: str, dim: int, **kwargs) -> nn.Module: - """Create normalization module for transformer encoder layer. - - Args: - norm_type (str): Normalization method. - dim (int): Dimension of the normalized layer. - **kwargs (dict): Additional parameters for normalization layer. - Returns: - nn.Module: Normalization module. - """ - if norm_type == 'layer_norm': - return nn.LayerNorm(dim, eps=1e-5, **kwargs) - else: - raise ValueError(f"Unknown norm type: {norm_type}") - - -def create_sin_embedding(positions: torch.Tensor, dim: int, max_period: float = 10000, - dtype: torch.dtype = torch.float32) -> torch.Tensor: - """Create sinusoidal positional embedding, with shape `[B, T, C]`. - - Args: - positions (torch.Tensor): LongTensor of positions. - dim (int): Dimension of the embedding. - max_period (float): Maximum period of the cosine/sine functions. - dtype (torch.dtype or str): dtype to use to generate the embedding. - Returns: - torch.Tensor: Sinusoidal positional embedding. 
- """ - # We aim for BTC format - assert dim % 2 == 0 - half_dim = dim // 2 - positions = positions.to(dtype) - adim = torch.arange(half_dim, device=positions.device, dtype=dtype).view(1, 1, -1) - max_period_tensor = torch.full([], max_period, device=positions.device, dtype=dtype) # avoid sync point - phase = positions / (max_period_tensor ** (adim / (half_dim - 1))) - return torch.cat([torch.cos(phase), torch.sin(phase)], dim=-1) - - -def expand_repeated_kv(x: torch.Tensor, n_rep: int) -> torch.Tensor: - """torch.repeat_interleave(x, dim=2, repeats=n_rep) from xlformers""" - if n_rep == 1: - return x - if _efficient_attention_backend == 'torch': - bs, n_kv_heads, slen, head_dim = x.shape - return ( - x[:, :, None, :, :] - .expand(bs, n_kv_heads, n_rep, slen, head_dim) - .reshape(bs, n_kv_heads * n_rep, slen, head_dim) - ) - else: - bs, slen, n_kv_heads, head_dim = x.shape - return ( - x[:, :, :, None, :] - .expand(bs, slen, n_kv_heads, n_rep, head_dim) - .reshape(bs, slen, n_kv_heads * n_rep, head_dim) - ) - - -class LayerScale(nn.Module): - """Layer scale from [Touvron et al 2021] (https://arxiv.org/pdf/2103.17239.pdf). - This rescales diagonaly the residual outputs close to 0, with a learnt scale. - - Args: - channels (int): Number of channels. - init (float): Initial scale. - channel_last (bool): If True, expect `[*, C]` shaped tensors, otherwise, `[*, C, T]`. - device (torch.device or None): Device on which to initialize the module. - dtype (torch.dtype or None): dtype to use to initialize the module. - """ - def __init__(self, channels: int, init: float = 1e-4, channel_last: bool = True, - device=None, dtype=None): - super().__init__() - self.channel_last = channel_last - self.scale = nn.Parameter( - torch.full((channels,), init, - requires_grad=True, device=device, dtype=dtype)) - - def forward(self, x: torch.Tensor): - if self.channel_last: - return self.scale * x - else: - return self.scale[:, None] * x - - -class StreamingMultiheadAttention(StreamingModule): - """Similar to `nn.MultiheadAttention` but with support for streaming, causal evaluation. - - Args: - embed_dim (int): Dimension to project to. - num_heads (int): Number of heads. - dropout (float): Dropout level. - bias (bool): Use bias in projections. - causal (bool): Causal mask applied automatically. - past_context (int or None): Receptive field for the causal mask, infinite if None. - custom (bool): Use custom MHA implementation, for testing / benchmarking. - memory_efficient (bool): Use xformers based memory efficient attention. - attention_as_float32 (bool): Perform the attention as float32 - (especially important with memory_efficient as autocast won't do this automatically). - rope (`RotaryEmbedding` or None): Rope embedding to use. - cross_attention: Should be true when used as a cross attention. - All keys and values must be available at once, streaming is only for the queries. - Cannot be used with `causal` or `rope` (as it wouldn't make sens to - intepret the time steps in the keys relative to those in the queries). - safe_streaming (bool): Bug fix, will go away with xformers update. - qk_layer_norm (bool): Layer normalization applied to queries and keys before dot product. - kv_repeat (int): If > 1, will repeat keys and queries multiple times (need to divide num_heads). - This will lead to faster decoding time on A100 or other GPUs with tensorcore. - device (torch.device or None): Sevice on which to initialize. - dtype (torch.dtype or None): dtype to use. 
- """ - def __init__(self, embed_dim: int, num_heads: int, dropout: float = 0.0, bias: bool = True, - causal: bool = False, past_context: tp.Optional[int] = None, custom: bool = False, - memory_efficient: bool = False, attention_as_float32: bool = False, - rope: tp.Optional[RotaryEmbedding] = None, cross_attention: bool = False, - safe_streaming: bool = True, qk_layer_norm: bool = False, kv_repeat: int = 1, - device=None, dtype=None): - super().__init__() - factory_kwargs = {'device': device, 'dtype': dtype} - if past_context is not None: - assert causal - - self.embed_dim = embed_dim - self.causal = causal - self.past_context = past_context - self.memory_efficient = memory_efficient - self.attention_as_float32 = attention_as_float32 - self.rope = rope - self.cross_attention = cross_attention - self.safe_streaming = safe_streaming - self.num_heads = num_heads - self.dropout = dropout - self.kv_repeat = kv_repeat - if cross_attention: - assert not causal, "Causal cannot work with cross attention." - assert rope is None, "Rope cannot work with cross attention." - - if memory_efficient: - _verify_xformers_memory_efficient_compat() - - self.custom = _is_custom(custom, memory_efficient) - if self.custom: - out_dim = embed_dim - assert num_heads % kv_repeat == 0 - assert not cross_attention or kv_repeat == 1 - num_kv = num_heads // kv_repeat - kv_dim = (embed_dim // num_heads) * num_kv - out_dim += 2 * kv_dim - in_proj = nn.Linear(embed_dim, out_dim, bias=bias, **factory_kwargs) - # We try to follow the default PyTorch MHA convention, to easily compare results. - self.in_proj_weight = in_proj.weight - self.in_proj_bias = in_proj.bias - if bias: - self.in_proj_bias.data.zero_() # Following Pytorch convention - self.out_proj = nn.Linear(embed_dim, embed_dim, bias=bias, **factory_kwargs) - if bias: - self.out_proj.bias.data.zero_() - else: - assert not qk_layer_norm - assert kv_repeat == 1 - self.mha = nn.MultiheadAttention( - embed_dim, num_heads, dropout=dropout, bias=bias, batch_first=True, - **factory_kwargs) - self.qk_layer_norm = qk_layer_norm - if qk_layer_norm: - assert self.custom - assert kv_repeat == 1 - ln_dim = embed_dim - self.q_layer_norm = nn.LayerNorm(ln_dim) - self.k_layer_norm = nn.LayerNorm(ln_dim) - - def _load_from_state_dict(self, state_dict, prefix, *args, **kwargs): - if not self.custom: - # Support compat with regular MHA - keys = [n for n, _ in self.mha.named_parameters()] - for key in keys: - if prefix + key in state_dict: - state_dict[prefix + "mha." + key] = state_dict.pop(prefix + key) - super()._load_from_state_dict(state_dict, prefix, *args, **kwargs) - - def _get_mask(self, current_steps: int, device: torch.device, dtype: torch.dtype): - # Return a causal mask, accounting for potentially stored past keys/values - # We actually return a bias for the attention score, as this has the same - # convention both in the builtin MHA in Pytorch, and Xformers functions. - time_dim = _get_attention_time_dimension() - if self.memory_efficient: - from xformers.ops import LowerTriangularMask - if current_steps == 1: - # If we only have one step, then we do not need a mask. 
- return None - elif 'past_keys' in self._streaming_state: - raise RuntimeError('Not supported at the moment') - else: - # Then we can safely use a lower triangular mask - return LowerTriangularMask() - if self._streaming_state: - past_keys = self._streaming_state['past_keys'] - past_steps = past_keys.shape[time_dim] - else: - past_steps = 0 - - queries_pos = torch.arange( - past_steps, current_steps + past_steps, device=device).view(-1, 1) - keys_pos = torch.arange(past_steps + current_steps, device=device).view(1, -1) - delta = queries_pos - keys_pos - valid = delta >= 0 - if self.past_context is not None: - valid &= (delta <= self.past_context) - return torch.where( - valid, - torch.zeros([], device=device, dtype=dtype), - torch.full([], float('-inf'), device=device, dtype=dtype)) - - def _complete_kv(self, k, v): - time_dim = _get_attention_time_dimension() - if self.cross_attention: - # With cross attention we assume all keys and values - # are already available, and streaming is with respect - # to the queries only. - return k, v - # Complete the key/value pair using the streaming state. - if self._streaming_state: - pk = self._streaming_state['past_keys'] - nk = torch.cat([pk, k], dim=time_dim) - if v is k: - nv = nk - else: - pv = self._streaming_state['past_values'] - nv = torch.cat([pv, v], dim=time_dim) - else: - nk = k - nv = v - - assert nk.shape[time_dim] == nv.shape[time_dim] - offset = 0 - if self.past_context is not None: - offset = max(0, nk.shape[time_dim] - self.past_context) - if self._is_streaming: - self._streaming_state['past_keys'] = nk[:, offset:] - if v is not k: - self._streaming_state['past_values'] = nv[:, offset:] - if 'offset' in self._streaming_state: - self._streaming_state['offset'] += offset - else: - self._streaming_state['offset'] = torch.tensor(0) - return nk, nv - - def _apply_rope(self, query: torch.Tensor, key: torch.Tensor): - # TODO: fix and verify layout. - assert _efficient_attention_backend == 'xformers', 'Rope not supported with torch attn.' - # Apply rope embeddings to query and key tensors. - assert self.rope is not None - if 'past_keys' in self._streaming_state: - past_keys_offset = self._streaming_state['past_keys'].shape[1] - else: - past_keys_offset = 0 - if 'offset' in self._streaming_state: - past_context_offset = int(self._streaming_state['offset'].item()) - else: - past_context_offset = 0 - streaming_offset = past_context_offset + past_keys_offset - return self.rope.rotate_qk(query, key, start=streaming_offset) - - def forward(self, query: torch.Tensor, key: torch.Tensor, value: torch.Tensor, - key_padding_mask=None, need_weights=False, attn_mask=None, - average_attn_weights=True, is_causal=False): - assert attn_mask is None - assert not is_causal, ("new param added in torch 2.0.1 not supported, " - "use the causal args in the constructor.") - - time_dim = _get_attention_time_dimension() - if time_dim == 2: - layout = "b h t d" - else: - layout = "b t h d" - dtype = query.dtype - if self._is_streaming: - assert self.causal or self.cross_attention, \ - "Streaming only available for causal or cross attention" - - if self.causal: - # At the moment we specialize only for the self-attention case. 
- assert query.shape[1] == key.shape[1], "Causal only for same length query / key / value"
- assert value.shape[1] == key.shape[1], "Causal only for same length query / key / value"
- attn_mask = self._get_mask(query.shape[1], query.device, query.dtype)
-
- if self.custom:
- # custom implementation
- assert need_weights is False
- assert key_padding_mask is None
- if self.cross_attention:
- # Different queries, keys, values, we have to split the weights manually
- # before applying the linear.
- dim = self.in_proj_weight.shape[0] // 3
- if self.in_proj_bias is None:
- bias_q, bias_k, bias_v = None, None, None
- else:
- bias_q = self.in_proj_bias[:dim]
- bias_k = self.in_proj_bias[dim: 2 * dim]
- bias_v = self.in_proj_bias[2 * dim:]
- q = nn.functional.linear(query, self.in_proj_weight[:dim], bias_q)
- # todo: when streaming, we could actually save k, v and check the shape actually match.
- k = nn.functional.linear(key, self.in_proj_weight[dim: 2 * dim], bias_k)
- v = nn.functional.linear(value, self.in_proj_weight[2 * dim:], bias_v)
- if self.qk_layer_norm is True:
- q = self.q_layer_norm(q)
- k = self.k_layer_norm(k)
- q, k, v = [rearrange(x, f"b t (h d) -> {layout}", h=self.num_heads) for x in [q, k, v]]
- else:
- if not _is_profiled():
- # profiling breaks that property somehow.
- assert query is key, "specialized implementation"
- assert value is key, "specialized implementation"
- projected = nn.functional.linear(query, self.in_proj_weight, self.in_proj_bias)
- if self.kv_repeat == 1:
- if time_dim == 2:
- bound_layout = "b h p t d"
- else:
- bound_layout = "b t p h d"
- packed = rearrange(projected, f"b t (p h d) -> {bound_layout}", p=3, h=self.num_heads)
- q, k, v = ops.unbind(packed, dim=2)
- else:
- embed_dim = self.embed_dim
- per_head_dim = (embed_dim // self.num_heads)
- kv_heads = self.num_heads // self.kv_repeat
- q = projected[:, :, :embed_dim]
- start = embed_dim
- end = start + per_head_dim * kv_heads
- k = projected[:, :, start: end]
- v = projected[:, :, end:]
- q = rearrange(q, f"b t (h d) -> {layout}", h=self.num_heads)
- k = rearrange(k, f"b t (h d) -> {layout}", h=kv_heads)
- v = rearrange(v, f"b t (h d) -> {layout}", h=kv_heads)
-
- if self.qk_layer_norm is True:
- assert self.kv_repeat == 1
- q, k = [rearrange(x, f"{layout} -> b t (h d)") for x in [q, k]]
- q = self.q_layer_norm(q)
- k = self.k_layer_norm(k)
- q, k = [rearrange(x, f"b t (h d) -> {layout}", h=self.num_heads) for x in [q, k]]
- if self.rope:
- q, k = self._apply_rope(q, k)
- k, v = self._complete_kv(k, v)
- if self.kv_repeat > 1:
- k = expand_repeated_kv(k, self.kv_repeat)
- v = expand_repeated_kv(v, self.kv_repeat)
- if self.attention_as_float32:
- q, k, v = [x.float() for x in [q, k, v]]
- if self.memory_efficient:
- p = self.dropout if self.training else 0
- if _efficient_attention_backend == 'torch':
- x = torch.nn.functional.scaled_dot_product_attention(
- q, k, v, is_causal=attn_mask is not None, dropout_p=p)
- else:
- x = ops.memory_efficient_attention(q, k, v, attn_mask, p=p)
- else:
- # We include the dot product as float32, for consistency
- # with the other implementations that include that step
- # as part of the attention. Note that when using `autocast`,
- # the einsums would be done as bfloat16, but the softmax
- # would be done as bfloat16, so `attention_as_float32` will
- # extend a bit the range of operations done in float32,
- # although this should make no difference.
- q = q / q.shape[-1] ** 0.5 - key_layout = layout.replace('t', 'k') - query_layout = layout - if self._is_streaming and self.safe_streaming and q.device.type == 'cuda': - with torch.autocast(device_type=q.device.type, dtype=torch.float32): - pre_w = torch.einsum(f"{query_layout},{key_layout}-> b h t k", q, k) - else: - pre_w = torch.einsum(f"{query_layout},{key_layout}-> b h t k", q, k) - if attn_mask is not None: - pre_w = pre_w + attn_mask - w = torch.softmax(pre_w, dim=-1) - w = F.dropout(w, self.dropout, training=self.training).to(v) - # Key and value have the same format. - x = torch.einsum(f"b h t k, {key_layout} -> {layout}", w, v) - x = x.to(dtype) - x = rearrange(x, f"{layout} -> b t (h d)", h=self.num_heads) - x = self.out_proj(x) - else: - key, value = self._complete_kv(key, value) - if self.attention_as_float32: - query, key, value = [x.float() for x in [query, key, value]] - x, _ = self.mha( - query, key, value, key_padding_mask, - need_weights, attn_mask, average_attn_weights) - x = x.to(dtype) - - return x, None - - -class StreamingTransformerLayer(nn.TransformerEncoderLayer): - """TransformerLayer with Streaming / Causal support. - This also integrates cross_attention, when passing `cross_attention=True`, - rather than having two separate classes like in PyTorch. - - Args: - d_model (int): Dimension of the data. - num_heads (int): Number of heads. - dim_feedforward (int): Intermediate dimension of FF module. - dropout (float): Dropout both for MHA and FF. - bias_ff (bool): Use bias for FF. - bias_attn (bool): Use bias for MHA. - causal (bool): Causal mask applied automatically. - past_context (int or None): Receptive field for the causal mask, infinite if None. - custom (bool): Use custom MHA implementation, for testing / benchmarking. - memory_efficient (bool): Use xformers based memory efficient attention. - attention_as_float32 (bool): Perform the attention as float32 - (especially important with memory_efficient as autocast won't do this automatically). - qk_layer_norm (bool): Layer normalization applied to queries and keys before dot product in attention. - qk_layer_norm_cross (bool): Same for the cross attention. - cross_attention (bool): If True, expect to get secondary input for cross-attention. - Cross attention will use the default MHA, as it typically won't require - special treatment. - layer_scale (float or None): If not None, LayerScale will be used with - the given value as initial scale. - rope (`RotaryEmbedding` or None): Rope embedding to use. - attention_dropout (float or None): If not None, separate the value of the dimension dropout - in FFN and of the attention dropout. - kv_repeat (int): If > 1, will repeat keys and queries multiple times (need to divide num_heads). - This will lead to faster decoding time on A100 or other GPUs with tensorcore. - device (torch.device or None): Device on which to initialize. - dtype (torch.dtype or None): dtype to use. - **kwargs: See `nn.TransformerEncoderLayer`. 
- """ - def __init__(self, d_model: int, num_heads: int, dim_feedforward: int = 2048, dropout: float = 0.1, - bias_ff: bool = True, bias_attn: bool = True, causal: bool = False, - past_context: tp.Optional[int] = None, custom: bool = False, - memory_efficient: bool = False, attention_as_float32: bool = False, - qk_layer_norm: bool = False, qk_layer_norm_cross: bool = False, - cross_attention: bool = False, layer_scale: tp.Optional[float] = None, - rope: tp.Optional[RotaryEmbedding] = None, attention_dropout: tp.Optional[float] = None, - kv_repeat: int = 1, norm: str = 'layer_norm', device=None, dtype=None, **kwargs): - super().__init__(d_model, num_heads, dim_feedforward, dropout, - device=device, dtype=dtype, batch_first=True, **kwargs) - factory_kwargs = {'device': device, 'dtype': dtype} - # Redefine self_attn to our streaming multi-head attention - attn_kwargs: tp.Dict[str, tp.Any] = { - 'embed_dim': d_model, - 'num_heads': num_heads, - 'dropout': dropout if attention_dropout is None else attention_dropout, - 'bias': bias_attn, - 'custom': custom, - 'memory_efficient': memory_efficient, - 'attention_as_float32': attention_as_float32, - } - self.self_attn: StreamingMultiheadAttention = StreamingMultiheadAttention( - causal=causal, past_context=past_context, rope=rope, qk_layer_norm=qk_layer_norm, - kv_repeat=kv_repeat, **attn_kwargs, **factory_kwargs) # type: ignore - # Redefine feedforward layers to expose bias parameter - self.linear1 = nn.Linear(d_model, dim_feedforward, bias=bias_ff, **factory_kwargs) - self.linear2 = nn.Linear(dim_feedforward, d_model, bias=bias_ff, **factory_kwargs) - - self.layer_scale_1: nn.Module - self.layer_scale_2: nn.Module - if layer_scale is None: - self.layer_scale_1 = nn.Identity() - self.layer_scale_2 = nn.Identity() - else: - self.layer_scale_1 = LayerScale(d_model, layer_scale, **factory_kwargs) - self.layer_scale_2 = LayerScale(d_model, layer_scale, **factory_kwargs) - - self.cross_attention: tp.Optional[nn.Module] = None - if cross_attention: - self.cross_attention = StreamingMultiheadAttention( - cross_attention=True, qk_layer_norm=qk_layer_norm_cross, - **attn_kwargs, **factory_kwargs) - # Norm and dropout - self.dropout_cross = nn.Dropout(dropout) - # eps value matching that used in PyTorch reference implementation. - self.norm_cross = nn.LayerNorm(d_model, eps=1e-5, **factory_kwargs) - self.layer_scale_cross: nn.Module - if layer_scale is None: - self.layer_scale_cross = nn.Identity() - else: - self.layer_scale_cross = LayerScale(d_model, layer_scale, **factory_kwargs) - self.norm1 = create_norm_fn(norm, d_model, **factory_kwargs) # type: ignore - self.norm2 = create_norm_fn(norm, d_model, **factory_kwargs) # type: ignore - - def _cross_attention_block(self, src: torch.Tensor, - cross_attention_src: torch.Tensor) -> torch.Tensor: - assert self.cross_attention is not None - # queries are from src, keys and values from cross_attention_src. 
- x = self.cross_attention( - src, cross_attention_src, cross_attention_src, need_weights=False)[0] - return self.dropout_cross(x) # type: ignore - - def forward(self, src: torch.Tensor, src_mask: tp.Optional[torch.Tensor] = None, # type: ignore - src_key_padding_mask: tp.Optional[torch.Tensor] = None, - cross_attention_src: tp.Optional[torch.Tensor] = None): - if self.cross_attention is None: - assert cross_attention_src is None - else: - assert cross_attention_src is not None - x = src - if self.norm_first: - x = x + self.layer_scale_1( - self._sa_block(self.norm1(x), src_mask, src_key_padding_mask)) - if cross_attention_src is not None: - x = x + self.layer_scale_cross( - self._cross_attention_block( - self.norm_cross(x), cross_attention_src)) - x = x + self.layer_scale_2(self._ff_block(self.norm2(x))) - else: - x = self.norm1(x + self.layer_scale_1( - self._sa_block(x, src_mask, src_key_padding_mask))) - if cross_attention_src is not None: - x = self.norm_cross( - x + self.layer_scale_cross( - self._cross_attention_block(src, cross_attention_src))) - x = self.norm2(x + self.layer_scale_2(self._ff_block(x))) - return x - - -class StreamingTransformer(StreamingModule): - """Transformer with Streaming / Causal support. - - Args: - d_model (int): Dimension of the data. - num_heads (int): Number of heads. - dim_feedforward (int): Intermediate dimension of FF module. - dropout (float): Dropout both for MHA and FF. - bias_ff (bool): Use bias for FF. - bias_attn (bool): Use bias for MHA. - causal (bool): Causal mask applied automatically. - past_context (int or None): Receptive field for the causal mask, infinite if None. - custom (bool): Use custom MHA implementation, for testing / benchmarking. - memory_efficient (bool): Use xformers based memory efficient attention. - attention_as_float32 (bool): Perform the attention as float32 - (especially important with memory_efficient as autocast won't do this automatically). - cross_attention (bool): If True, expect to get secondary input for cross-attention. - layer_scale (float or None): If not None, LayerScale will be used - with the given value as initial scale. - positional_embedding (str): Positional embedding strategy (sin, rope, or sin_rope). - max_period (float): Maximum period of the time embedding. - positional_scale (float): Scale of positional embedding, set to 0 to deactivate. - xpos (bool): Apply xpos exponential decay to positional embedding (rope only). - lr (float or None): learning rate override through the `make_optim_group` API. - weight_decay (float or None): Weight_decay override through the `make_optim_group` API. - layer_class: (subclass of `StreamingTransformerLayer): class to use - to initialize the layers, allowing further customization outside of Audiocraft. - checkpointing (str): Checkpointing strategy to reduce memory usage. - No checkpointing if set to 'none'. Per layer checkpointing using PyTorch - if set to 'torch' (entire layer checkpointed, i.e. linears are evaluated twice, - minimal memory usage, but maximal runtime). Finally, `xformers_default` provide - a policy for opting-out some operations of the checkpointing like - linear layers and attention, providing a middle ground between speed and memory. - device (torch.device or None): Device on which to initialize. - dtype (torch.dtype or None): dtype to use. - **kwargs: See `nn.TransformerEncoderLayer`. 
- """ - def __init__(self, d_model: int, num_heads: int, num_layers: int, dim_feedforward: int = 2048, - dropout: float = 0.1, bias_ff: bool = True, bias_attn: bool = True, - causal: bool = False, past_context: tp.Optional[int] = None, - custom: bool = False, memory_efficient: bool = False, attention_as_float32: bool = False, - cross_attention: bool = False, layer_scale: tp.Optional[float] = None, - positional_embedding: str = 'sin', max_period: float = 10_000, positional_scale: float = 1., - xpos: bool = False, lr: tp.Optional[float] = None, weight_decay: tp.Optional[float] = None, - layer_class: tp.Type[StreamingTransformerLayer] = StreamingTransformerLayer, - checkpointing: str = 'none', device=None, dtype=None, **kwargs): - super().__init__() - assert d_model % num_heads == 0 - - self.positional_embedding = positional_embedding - self.max_period = max_period - self.positional_scale = positional_scale - self.weight_decay = weight_decay - self.lr = lr - - assert positional_embedding in ['sin', 'rope', 'sin_rope'] - self.rope: tp.Optional[RotaryEmbedding] = None - if self.positional_embedding in ['rope', 'sin_rope']: - assert _is_custom(custom, memory_efficient) - self.rope = RotaryEmbedding(d_model // num_heads, max_period=max_period, - xpos=xpos, scale=positional_scale, device=device) - - self.checkpointing = checkpointing - - assert checkpointing in ['none', 'torch', 'xformers_default', 'xformers_mm'] - if self.checkpointing.startswith('xformers'): - _verify_xformers_internal_compat() - - self.layers = nn.ModuleList() - for idx in range(num_layers): - self.layers.append( - layer_class( - d_model=d_model, num_heads=num_heads, dim_feedforward=dim_feedforward, - dropout=dropout, bias_ff=bias_ff, bias_attn=bias_attn, - causal=causal, past_context=past_context, custom=custom, - memory_efficient=memory_efficient, attention_as_float32=attention_as_float32, - cross_attention=cross_attention, layer_scale=layer_scale, rope=self.rope, - device=device, dtype=dtype, **kwargs)) - - if self.checkpointing != 'none': - for layer in self.layers: - # see audiocraft/optim/fsdp.py, magic signal to indicate this requires fixing the - # backward hook inside of FSDP... - layer._magma_checkpointed = True # type: ignore - assert layer.layer_drop == 0., "Need further checking" # type: ignore - - def _apply_layer(self, layer, *args, **kwargs): - method = self.checkpointing - if method == 'none': - return layer(*args, **kwargs) - elif method == 'torch': - return torch_checkpoint(layer, *args, use_reentrant=False, **kwargs) - elif method.startswith('xformers'): - from xformers.checkpoint_fairinternal import checkpoint, _get_default_policy - if method == 'xformers_default': - # those operations will be saved, and not recomputed. - # According to Francisco we can get smarter policies but this is a good start. - allow_list = [ - "xformers.efficient_attention_forward_cutlass.default", - "xformers_flash.flash_fwd.default", - "aten.addmm.default", - "aten.mm.default", - ] - elif method == 'xformers_mm': - # those operations will be saved, and not recomputed. - # According to Francisco we can get smarter policies but this is a good start. 
- allow_list = [ - "aten.addmm.default", - "aten.mm.default", - ] - else: - raise ValueError(f"xformers checkpointing xformers policy {method} is not known.") - policy_fn = _get_default_policy(allow_list) - return checkpoint(layer, *args, policy_fn=policy_fn, **kwargs) - else: - raise ValueError(f"Checkpointing method {method} is unknown.") - - def forward(self, x: torch.Tensor, *args, **kwargs): - B, T, C = x.shape - - if 'offsets' in self._streaming_state: - offsets = self._streaming_state['offsets'] - else: - offsets = torch.zeros(B, dtype=torch.long, device=x.device) - - if self.positional_embedding in ['sin', 'sin_rope']: - positions = torch.arange(T, device=x.device).view(1, -1, 1) - positions = positions + offsets.view(-1, 1, 1) - pos_emb = create_sin_embedding(positions, C, max_period=self.max_period, dtype=x.dtype) - x = x + self.positional_scale * pos_emb - - for layer in self.layers: - x = self._apply_layer(layer, x, *args, **kwargs) - - if self._is_streaming: - self._streaming_state['offsets'] = offsets + T - - return x - - def make_optim_group(self): - group = {"params": list(self.parameters())} - if self.lr is not None: - group["lr"] = self.lr - if self.weight_decay is not None: - group["weight_decay"] = self.weight_decay - return group - - -# special attention attention related function - -def _verify_xformers_memory_efficient_compat(): - try: - from xformers.ops import memory_efficient_attention, LowerTriangularMask # noqa - except ImportError: - raise ImportError( - "xformers is not installed. Please install it and try again.\n" - "To install on AWS and Azure, run \n" - "FORCE_CUDA=1 TORCH_CUDA_ARCH_LIST='8.0'\\\n" - "pip install -U git+https://git@github.com/fairinternal/xformers.git#egg=xformers\n" - "To install on FAIR Cluster, run \n" - "FORCE_CUDA=1 TORCH_CUDA_ARCH_LIST='6.0;7.0'\\\n" - "pip install -U git+https://git@github.com/fairinternal/xformers.git#egg=xformers\n") - - -def _verify_xformers_internal_compat(): - try: - from xformers.checkpoint_fairinternal import checkpoint, _get_default_policy # noqa - except ImportError: - raise ImportError( - "Francisco's fairinternal xformers is not installed. 
Please install it and try again.\n" - "To install on AWS and Azure, run \n" - "FORCE_CUDA=1 TORCH_CUDA_ARCH_LIST='8.0'\\\n" - "pip install -U git+https://git@github.com/fairinternal/xformers.git#egg=xformers\n" - "To install on FAIR Cluster, run \n" - "FORCE_CUDA=1 TORCH_CUDA_ARCH_LIST='6.0;7.0'\\\n" - "pip install -U git+https://git@github.com/fairinternal/xformers.git#egg=xformers\n") - - -def _is_custom(custom: bool, memory_efficient: bool): - return custom or memory_efficient diff --git a/spaces/jordonpeter01/ai-comic-factory/src/app/engine/censorship.ts b/spaces/jordonpeter01/ai-comic-factory/src/app/engine/censorship.ts deleted file mode 100644 index a4bb51e4ddf6e5d07792aa1975889d91c6984d1e..0000000000000000000000000000000000000000 --- a/spaces/jordonpeter01/ai-comic-factory/src/app/engine/censorship.ts +++ /dev/null @@ -1,39 +0,0 @@ - - -// unfortunately due to abuse by some users, I have to add this NSFW filter -const secretSalt = `${process.env.SECRET_CENSORSHIP_KEY || ""}` - -// TODO the censorship is not implement yet actually - -// I don't want to be banned by Replicate because bad actors are asking -// for some naked anime stuff or whatever -// I also want to avoid a PR scandal due to some bad user generated content - -const forbiddenWords = [ - // those keywords have been generated by looking at the logs of the AI Comic Factory - // those are real requests some users tried to attempt.. :| - "nazi", - "hitler", - "boob", - "boobs", - "boobies", - "nipple", - "nipples", - "nude", - "nudes", - "naked", - "pee", - "peeing", - "erotic", - "sexy" -] - -// temporary utility to make sure Replicate doesn't ban my account -// because of what users do in their prompt -export const filterOutBadWords = (sentence: string) => { - const words = sentence.split(" ") - return words.filter(word => { - const lowerCase = word.toLocaleLowerCase() - return !forbiddenWords.includes(lowerCase) - }).join(" ") -} \ No newline at end of file diff --git a/spaces/juancopi81/whisper-youtube-2-hf_dataset/errors.py b/spaces/juancopi81/whisper-youtube-2-hf_dataset/errors.py deleted file mode 100644 index dbf1a0353c1cb6fe9aab24c268d57bb652f444e1..0000000000000000000000000000000000000000 --- a/spaces/juancopi81/whisper-youtube-2-hf_dataset/errors.py +++ /dev/null @@ -1,4 +0,0 @@ -class DifferentNumberOfArgumentsError(Exception): - - def __init__(self, message: str) -> None: - self.message = message \ No newline at end of file diff --git a/spaces/justYu2001/furniture-detection/models/common.py b/spaces/justYu2001/furniture-detection/models/common.py deleted file mode 100644 index 111af708dea55cb11c8da3bb22d69e659ee78925..0000000000000000000000000000000000000000 --- a/spaces/justYu2001/furniture-detection/models/common.py +++ /dev/null @@ -1,2019 +0,0 @@ -import math -from copy import copy -from pathlib import Path - -import numpy as np -import pandas as pd -import requests -import torch -import torch.nn as nn -import torch.nn.functional as F -from torchvision.ops import DeformConv2d -from PIL import Image -from torch.cuda import amp - -from utils.datasets import letterbox -from utils.general import non_max_suppression, make_divisible, scale_coords, increment_path, xyxy2xywh -from utils.plots import color_list, plot_one_box -from utils.torch_utils import time_synchronized - - -##### basic #### - -def autopad(k, p=None): # kernel, padding - # Pad to 'same' - if p is None: - p = k // 2 if isinstance(k, int) else [x // 2 for x in k] # auto-pad - return p - - -class MP(nn.Module): - def __init__(self, k=2): - 
super(MP, self).__init__() - self.m = nn.MaxPool2d(kernel_size=k, stride=k) - - def forward(self, x): - return self.m(x) - - -class SP(nn.Module): - def __init__(self, k=3, s=1): - super(SP, self).__init__() - self.m = nn.MaxPool2d(kernel_size=k, stride=s, padding=k // 2) - - def forward(self, x): - return self.m(x) - - -class ReOrg(nn.Module): - def __init__(self): - super(ReOrg, self).__init__() - - def forward(self, x): # x(b,c,w,h) -> y(b,4c,w/2,h/2) - return torch.cat([x[..., ::2, ::2], x[..., 1::2, ::2], x[..., ::2, 1::2], x[..., 1::2, 1::2]], 1) - - -class Concat(nn.Module): - def __init__(self, dimension=1): - super(Concat, self).__init__() - self.d = dimension - - def forward(self, x): - return torch.cat(x, self.d) - - -class Chuncat(nn.Module): - def __init__(self, dimension=1): - super(Chuncat, self).__init__() - self.d = dimension - - def forward(self, x): - x1 = [] - x2 = [] - for xi in x: - xi1, xi2 = xi.chunk(2, self.d) - x1.append(xi1) - x2.append(xi2) - return torch.cat(x1+x2, self.d) - - -class Shortcut(nn.Module): - def __init__(self, dimension=0): - super(Shortcut, self).__init__() - self.d = dimension - - def forward(self, x): - return x[0]+x[1] - - -class Foldcut(nn.Module): - def __init__(self, dimension=0): - super(Foldcut, self).__init__() - self.d = dimension - - def forward(self, x): - x1, x2 = x.chunk(2, self.d) - return x1+x2 - - -class Conv(nn.Module): - # Standard convolution - def __init__(self, c1, c2, k=1, s=1, p=None, g=1, act=True): # ch_in, ch_out, kernel, stride, padding, groups - super(Conv, self).__init__() - self.conv = nn.Conv2d(c1, c2, k, s, autopad(k, p), groups=g, bias=False) - self.bn = nn.BatchNorm2d(c2) - self.act = nn.SiLU() if act is True else (act if isinstance(act, nn.Module) else nn.Identity()) - - def forward(self, x): - return self.act(self.bn(self.conv(x))) - - def fuseforward(self, x): - return self.act(self.conv(x)) - - -class RobustConv(nn.Module): - # Robust convolution (use high kernel size 7-11 for: downsampling and other layers). Train for 300 - 450 epochs. - def __init__(self, c1, c2, k=7, s=1, p=None, g=1, act=True, layer_scale_init_value=1e-6): # ch_in, ch_out, kernel, stride, padding, groups - super(RobustConv, self).__init__() - self.conv_dw = Conv(c1, c1, k=k, s=s, p=p, g=c1, act=act) - self.conv1x1 = nn.Conv2d(c1, c2, 1, 1, 0, groups=1, bias=True) - self.gamma = nn.Parameter(layer_scale_init_value * torch.ones(c2)) if layer_scale_init_value > 0 else None - - def forward(self, x): - x = x.to(memory_format=torch.channels_last) - x = self.conv1x1(self.conv_dw(x)) - if self.gamma is not None: - x = x.mul(self.gamma.reshape(1, -1, 1, 1)) - return x - - -class RobustConv2(nn.Module): - # Robust convolution 2 (use [32, 5, 2] or [32, 7, 4] or [32, 11, 8] for one of the paths in CSP). 
- def __init__(self, c1, c2, k=7, s=4, p=None, g=1, act=True, layer_scale_init_value=1e-6): # ch_in, ch_out, kernel, stride, padding, groups - super(RobustConv2, self).__init__() - self.conv_strided = Conv(c1, c1, k=k, s=s, p=p, g=c1, act=act) - self.conv_deconv = nn.ConvTranspose2d(in_channels=c1, out_channels=c2, kernel_size=s, stride=s, - padding=0, bias=True, dilation=1, groups=1 - ) - self.gamma = nn.Parameter(layer_scale_init_value * torch.ones(c2)) if layer_scale_init_value > 0 else None - - def forward(self, x): - x = self.conv_deconv(self.conv_strided(x)) - if self.gamma is not None: - x = x.mul(self.gamma.reshape(1, -1, 1, 1)) - return x - - -def DWConv(c1, c2, k=1, s=1, act=True): - # Depthwise convolution - return Conv(c1, c2, k, s, g=math.gcd(c1, c2), act=act) - - -class GhostConv(nn.Module): - # Ghost Convolution https://github.com/huawei-noah/ghostnet - def __init__(self, c1, c2, k=1, s=1, g=1, act=True): # ch_in, ch_out, kernel, stride, groups - super(GhostConv, self).__init__() - c_ = c2 // 2 # hidden channels - self.cv1 = Conv(c1, c_, k, s, None, g, act) - self.cv2 = Conv(c_, c_, 5, 1, None, c_, act) - - def forward(self, x): - y = self.cv1(x) - return torch.cat([y, self.cv2(y)], 1) - - -class Stem(nn.Module): - # Stem - def __init__(self, c1, c2, k=1, s=1, p=None, g=1, act=True): # ch_in, ch_out, kernel, stride, padding, groups - super(Stem, self).__init__() - c_ = int(c2/2) # hidden channels - self.cv1 = Conv(c1, c_, 3, 2) - self.cv2 = Conv(c_, c_, 1, 1) - self.cv3 = Conv(c_, c_, 3, 2) - self.pool = torch.nn.MaxPool2d(2, stride=2) - self.cv4 = Conv(2 * c_, c2, 1, 1) - - def forward(self, x): - x = self.cv1(x) - return self.cv4(torch.cat((self.cv3(self.cv2(x)), self.pool(x)), dim=1)) - - -class DownC(nn.Module): - # Spatial pyramid pooling layer used in YOLOv3-SPP - def __init__(self, c1, c2, n=1, k=2): - super(DownC, self).__init__() - c_ = int(c1) # hidden channels - self.cv1 = Conv(c1, c_, 1, 1) - self.cv2 = Conv(c_, c2//2, 3, k) - self.cv3 = Conv(c1, c2//2, 1, 1) - self.mp = nn.MaxPool2d(kernel_size=k, stride=k) - - def forward(self, x): - return torch.cat((self.cv2(self.cv1(x)), self.cv3(self.mp(x))), dim=1) - - -class SPP(nn.Module): - # Spatial pyramid pooling layer used in YOLOv3-SPP - def __init__(self, c1, c2, k=(5, 9, 13)): - super(SPP, self).__init__() - c_ = c1 // 2 # hidden channels - self.cv1 = Conv(c1, c_, 1, 1) - self.cv2 = Conv(c_ * (len(k) + 1), c2, 1, 1) - self.m = nn.ModuleList([nn.MaxPool2d(kernel_size=x, stride=1, padding=x // 2) for x in k]) - - def forward(self, x): - x = self.cv1(x) - return self.cv2(torch.cat([x] + [m(x) for m in self.m], 1)) - - -class Bottleneck(nn.Module): - # Darknet bottleneck - def __init__(self, c1, c2, shortcut=True, g=1, e=0.5): # ch_in, ch_out, shortcut, groups, expansion - super(Bottleneck, self).__init__() - c_ = int(c2 * e) # hidden channels - self.cv1 = Conv(c1, c_, 1, 1) - self.cv2 = Conv(c_, c2, 3, 1, g=g) - self.add = shortcut and c1 == c2 - - def forward(self, x): - return x + self.cv2(self.cv1(x)) if self.add else self.cv2(self.cv1(x)) - - -class Res(nn.Module): - # ResNet bottleneck - def __init__(self, c1, c2, shortcut=True, g=1, e=0.5): # ch_in, ch_out, shortcut, groups, expansion - super(Res, self).__init__() - c_ = int(c2 * e) # hidden channels - self.cv1 = Conv(c1, c_, 1, 1) - self.cv2 = Conv(c_, c_, 3, 1, g=g) - self.cv3 = Conv(c_, c2, 1, 1) - self.add = shortcut and c1 == c2 - - def forward(self, x): - return x + self.cv3(self.cv2(self.cv1(x))) if self.add else self.cv3(self.cv2(self.cv1(x))) - - 
-class ResX(Res): - # ResNet bottleneck - def __init__(self, c1, c2, shortcut=True, g=32, e=0.5): # ch_in, ch_out, shortcut, groups, expansion - super().__init__(c1, c2, shortcut, g, e) - c_ = int(c2 * e) # hidden channels - - -class Ghost(nn.Module): - # Ghost Bottleneck https://github.com/huawei-noah/ghostnet - def __init__(self, c1, c2, k=3, s=1): # ch_in, ch_out, kernel, stride - super(Ghost, self).__init__() - c_ = c2 // 2 - self.conv = nn.Sequential(GhostConv(c1, c_, 1, 1), # pw - DWConv(c_, c_, k, s, act=False) if s == 2 else nn.Identity(), # dw - GhostConv(c_, c2, 1, 1, act=False)) # pw-linear - self.shortcut = nn.Sequential(DWConv(c1, c1, k, s, act=False), - Conv(c1, c2, 1, 1, act=False)) if s == 2 else nn.Identity() - - def forward(self, x): - return self.conv(x) + self.shortcut(x) - -##### end of basic ##### - - -##### cspnet ##### - -class SPPCSPC(nn.Module): - # CSP https://github.com/WongKinYiu/CrossStagePartialNetworks - def __init__(self, c1, c2, n=1, shortcut=False, g=1, e=0.5, k=(5, 9, 13)): - super(SPPCSPC, self).__init__() - c_ = int(2 * c2 * e) # hidden channels - self.cv1 = Conv(c1, c_, 1, 1) - self.cv2 = Conv(c1, c_, 1, 1) - self.cv3 = Conv(c_, c_, 3, 1) - self.cv4 = Conv(c_, c_, 1, 1) - self.m = nn.ModuleList([nn.MaxPool2d(kernel_size=x, stride=1, padding=x // 2) for x in k]) - self.cv5 = Conv(4 * c_, c_, 1, 1) - self.cv6 = Conv(c_, c_, 3, 1) - self.cv7 = Conv(2 * c_, c2, 1, 1) - - def forward(self, x): - x1 = self.cv4(self.cv3(self.cv1(x))) - y1 = self.cv6(self.cv5(torch.cat([x1] + [m(x1) for m in self.m], 1))) - y2 = self.cv2(x) - return self.cv7(torch.cat((y1, y2), dim=1)) - -class GhostSPPCSPC(SPPCSPC): - # CSP https://github.com/WongKinYiu/CrossStagePartialNetworks - def __init__(self, c1, c2, n=1, shortcut=False, g=1, e=0.5, k=(5, 9, 13)): - super().__init__(c1, c2, n, shortcut, g, e, k) - c_ = int(2 * c2 * e) # hidden channels - self.cv1 = GhostConv(c1, c_, 1, 1) - self.cv2 = GhostConv(c1, c_, 1, 1) - self.cv3 = GhostConv(c_, c_, 3, 1) - self.cv4 = GhostConv(c_, c_, 1, 1) - self.cv5 = GhostConv(4 * c_, c_, 1, 1) - self.cv6 = GhostConv(c_, c_, 3, 1) - self.cv7 = GhostConv(2 * c_, c2, 1, 1) - - -class GhostStem(Stem): - # Stem - def __init__(self, c1, c2, k=1, s=1, p=None, g=1, act=True): # ch_in, ch_out, kernel, stride, padding, groups - super().__init__(c1, c2, k, s, p, g, act) - c_ = int(c2/2) # hidden channels - self.cv1 = GhostConv(c1, c_, 3, 2) - self.cv2 = GhostConv(c_, c_, 1, 1) - self.cv3 = GhostConv(c_, c_, 3, 2) - self.cv4 = GhostConv(2 * c_, c2, 1, 1) - - -class BottleneckCSPA(nn.Module): - # CSP https://github.com/WongKinYiu/CrossStagePartialNetworks - def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5): # ch_in, ch_out, number, shortcut, groups, expansion - super(BottleneckCSPA, self).__init__() - c_ = int(c2 * e) # hidden channels - self.cv1 = Conv(c1, c_, 1, 1) - self.cv2 = Conv(c1, c_, 1, 1) - self.cv3 = Conv(2 * c_, c2, 1, 1) - self.m = nn.Sequential(*[Bottleneck(c_, c_, shortcut, g, e=1.0) for _ in range(n)]) - - def forward(self, x): - y1 = self.m(self.cv1(x)) - y2 = self.cv2(x) - return self.cv3(torch.cat((y1, y2), dim=1)) - - -class BottleneckCSPB(nn.Module): - # CSP https://github.com/WongKinYiu/CrossStagePartialNetworks - def __init__(self, c1, c2, n=1, shortcut=False, g=1, e=0.5): # ch_in, ch_out, number, shortcut, groups, expansion - super(BottleneckCSPB, self).__init__() - c_ = int(c2) # hidden channels - self.cv1 = Conv(c1, c_, 1, 1) - self.cv2 = Conv(c_, c_, 1, 1) - self.cv3 = Conv(2 * c_, c2, 1, 1) - self.m = 
nn.Sequential(*[Bottleneck(c_, c_, shortcut, g, e=1.0) for _ in range(n)]) - - def forward(self, x): - x1 = self.cv1(x) - y1 = self.m(x1) - y2 = self.cv2(x1) - return self.cv3(torch.cat((y1, y2), dim=1)) - - -class BottleneckCSPC(nn.Module): - # CSP https://github.com/WongKinYiu/CrossStagePartialNetworks - def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5): # ch_in, ch_out, number, shortcut, groups, expansion - super(BottleneckCSPC, self).__init__() - c_ = int(c2 * e) # hidden channels - self.cv1 = Conv(c1, c_, 1, 1) - self.cv2 = Conv(c1, c_, 1, 1) - self.cv3 = Conv(c_, c_, 1, 1) - self.cv4 = Conv(2 * c_, c2, 1, 1) - self.m = nn.Sequential(*[Bottleneck(c_, c_, shortcut, g, e=1.0) for _ in range(n)]) - - def forward(self, x): - y1 = self.cv3(self.m(self.cv1(x))) - y2 = self.cv2(x) - return self.cv4(torch.cat((y1, y2), dim=1)) - - -class ResCSPA(BottleneckCSPA): - # CSP https://github.com/WongKinYiu/CrossStagePartialNetworks - def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5): # ch_in, ch_out, number, shortcut, groups, expansion - super().__init__(c1, c2, n, shortcut, g, e) - c_ = int(c2 * e) # hidden channels - self.m = nn.Sequential(*[Res(c_, c_, shortcut, g, e=0.5) for _ in range(n)]) - - -class ResCSPB(BottleneckCSPB): - # CSP https://github.com/WongKinYiu/CrossStagePartialNetworks - def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5): # ch_in, ch_out, number, shortcut, groups, expansion - super().__init__(c1, c2, n, shortcut, g, e) - c_ = int(c2) # hidden channels - self.m = nn.Sequential(*[Res(c_, c_, shortcut, g, e=0.5) for _ in range(n)]) - - -class ResCSPC(BottleneckCSPC): - # CSP https://github.com/WongKinYiu/CrossStagePartialNetworks - def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5): # ch_in, ch_out, number, shortcut, groups, expansion - super().__init__(c1, c2, n, shortcut, g, e) - c_ = int(c2 * e) # hidden channels - self.m = nn.Sequential(*[Res(c_, c_, shortcut, g, e=0.5) for _ in range(n)]) - - -class ResXCSPA(ResCSPA): - # CSP https://github.com/WongKinYiu/CrossStagePartialNetworks - def __init__(self, c1, c2, n=1, shortcut=True, g=32, e=0.5): # ch_in, ch_out, number, shortcut, groups, expansion - super().__init__(c1, c2, n, shortcut, g, e) - c_ = int(c2 * e) # hidden channels - self.m = nn.Sequential(*[Res(c_, c_, shortcut, g, e=1.0) for _ in range(n)]) - - -class ResXCSPB(ResCSPB): - # CSP https://github.com/WongKinYiu/CrossStagePartialNetworks - def __init__(self, c1, c2, n=1, shortcut=True, g=32, e=0.5): # ch_in, ch_out, number, shortcut, groups, expansion - super().__init__(c1, c2, n, shortcut, g, e) - c_ = int(c2) # hidden channels - self.m = nn.Sequential(*[Res(c_, c_, shortcut, g, e=1.0) for _ in range(n)]) - - -class ResXCSPC(ResCSPC): - # CSP https://github.com/WongKinYiu/CrossStagePartialNetworks - def __init__(self, c1, c2, n=1, shortcut=True, g=32, e=0.5): # ch_in, ch_out, number, shortcut, groups, expansion - super().__init__(c1, c2, n, shortcut, g, e) - c_ = int(c2 * e) # hidden channels - self.m = nn.Sequential(*[Res(c_, c_, shortcut, g, e=1.0) for _ in range(n)]) - - -class GhostCSPA(BottleneckCSPA): - # CSP https://github.com/WongKinYiu/CrossStagePartialNetworks - def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5): # ch_in, ch_out, number, shortcut, groups, expansion - super().__init__(c1, c2, n, shortcut, g, e) - c_ = int(c2 * e) # hidden channels - self.m = nn.Sequential(*[Ghost(c_, c_) for _ in range(n)]) - - -class GhostCSPB(BottleneckCSPB): - # CSP https://github.com/WongKinYiu/CrossStagePartialNetworks - def 
__init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5): # ch_in, ch_out, number, shortcut, groups, expansion - super().__init__(c1, c2, n, shortcut, g, e) - c_ = int(c2) # hidden channels - self.m = nn.Sequential(*[Ghost(c_, c_) for _ in range(n)]) - - -class GhostCSPC(BottleneckCSPC): - # CSP https://github.com/WongKinYiu/CrossStagePartialNetworks - def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5): # ch_in, ch_out, number, shortcut, groups, expansion - super().__init__(c1, c2, n, shortcut, g, e) - c_ = int(c2 * e) # hidden channels - self.m = nn.Sequential(*[Ghost(c_, c_) for _ in range(n)]) - -##### end of cspnet ##### - - -##### yolor ##### - -class ImplicitA(nn.Module): - def __init__(self, channel, mean=0., std=.02): - super(ImplicitA, self).__init__() - self.channel = channel - self.mean = mean - self.std = std - self.implicit = nn.Parameter(torch.zeros(1, channel, 1, 1)) - nn.init.normal_(self.implicit, mean=self.mean, std=self.std) - - def forward(self, x): - return self.implicit + x - - -class ImplicitM(nn.Module): - def __init__(self, channel, mean=0., std=.02): - super(ImplicitM, self).__init__() - self.channel = channel - self.mean = mean - self.std = std - self.implicit = nn.Parameter(torch.ones(1, channel, 1, 1)) - nn.init.normal_(self.implicit, mean=self.mean, std=self.std) - - def forward(self, x): - return self.implicit * x - -##### end of yolor ##### - - -##### repvgg ##### - -class RepConv(nn.Module): - # Represented convolution - # https://arxiv.org/abs/2101.03697 - - def __init__(self, c1, c2, k=3, s=1, p=None, g=1, act=True, deploy=False): - super(RepConv, self).__init__() - - self.deploy = deploy - self.groups = g - self.in_channels = c1 - self.out_channels = c2 - - assert k == 3 - assert autopad(k, p) == 1 - - padding_11 = autopad(k, p) - k // 2 - - self.act = nn.SiLU() if act is True else (act if isinstance(act, nn.Module) else nn.Identity()) - - if deploy: - self.rbr_reparam = nn.Conv2d(c1, c2, k, s, autopad(k, p), groups=g, bias=True) - - else: - self.rbr_identity = (nn.BatchNorm2d(num_features=c1) if c2 == c1 and s == 1 else None) - - self.rbr_dense = nn.Sequential( - nn.Conv2d(c1, c2, k, s, autopad(k, p), groups=g, bias=False), - nn.BatchNorm2d(num_features=c2), - ) - - self.rbr_1x1 = nn.Sequential( - nn.Conv2d( c1, c2, 1, s, padding_11, groups=g, bias=False), - nn.BatchNorm2d(num_features=c2), - ) - - def forward(self, inputs): - if hasattr(self, "rbr_reparam"): - return self.act(self.rbr_reparam(inputs)) - - if self.rbr_identity is None: - id_out = 0 - else: - id_out = self.rbr_identity(inputs) - - return self.act(self.rbr_dense(inputs) + self.rbr_1x1(inputs) + id_out) - - def get_equivalent_kernel_bias(self): - kernel3x3, bias3x3 = self._fuse_bn_tensor(self.rbr_dense) - kernel1x1, bias1x1 = self._fuse_bn_tensor(self.rbr_1x1) - kernelid, biasid = self._fuse_bn_tensor(self.rbr_identity) - return ( - kernel3x3 + self._pad_1x1_to_3x3_tensor(kernel1x1) + kernelid, - bias3x3 + bias1x1 + biasid, - ) - - def _pad_1x1_to_3x3_tensor(self, kernel1x1): - if kernel1x1 is None: - return 0 - else: - return nn.functional.pad(kernel1x1, [1, 1, 1, 1]) - - def _fuse_bn_tensor(self, branch): - if branch is None: - return 0, 0 - if isinstance(branch, nn.Sequential): - kernel = branch[0].weight - running_mean = branch[1].running_mean - running_var = branch[1].running_var - gamma = branch[1].weight - beta = branch[1].bias - eps = branch[1].eps - else: - assert isinstance(branch, nn.BatchNorm2d) - if not hasattr(self, "id_tensor"): - input_dim = self.in_channels // 
self.groups - kernel_value = np.zeros( - (self.in_channels, input_dim, 3, 3), dtype=np.float32 - ) - for i in range(self.in_channels): - kernel_value[i, i % input_dim, 1, 1] = 1 - self.id_tensor = torch.from_numpy(kernel_value).to(branch.weight.device) - kernel = self.id_tensor - running_mean = branch.running_mean - running_var = branch.running_var - gamma = branch.weight - beta = branch.bias - eps = branch.eps - std = (running_var + eps).sqrt() - t = (gamma / std).reshape(-1, 1, 1, 1) - return kernel * t, beta - running_mean * gamma / std - - def repvgg_convert(self): - kernel, bias = self.get_equivalent_kernel_bias() - return ( - kernel.detach().cpu().numpy(), - bias.detach().cpu().numpy(), - ) - - def fuse_conv_bn(self, conv, bn): - - std = (bn.running_var + bn.eps).sqrt() - bias = bn.bias - bn.running_mean * bn.weight / std - - t = (bn.weight / std).reshape(-1, 1, 1, 1) - weights = conv.weight * t - - bn = nn.Identity() - conv = nn.Conv2d(in_channels = conv.in_channels, - out_channels = conv.out_channels, - kernel_size = conv.kernel_size, - stride=conv.stride, - padding = conv.padding, - dilation = conv.dilation, - groups = conv.groups, - bias = True, - padding_mode = conv.padding_mode) - - conv.weight = torch.nn.Parameter(weights) - conv.bias = torch.nn.Parameter(bias) - return conv - - def fuse_repvgg_block(self): - if self.deploy: - return - print(f"RepConv.fuse_repvgg_block") - - self.rbr_dense = self.fuse_conv_bn(self.rbr_dense[0], self.rbr_dense[1]) - - self.rbr_1x1 = self.fuse_conv_bn(self.rbr_1x1[0], self.rbr_1x1[1]) - rbr_1x1_bias = self.rbr_1x1.bias - weight_1x1_expanded = torch.nn.functional.pad(self.rbr_1x1.weight, [1, 1, 1, 1]) - - # Fuse self.rbr_identity - if (isinstance(self.rbr_identity, nn.BatchNorm2d) or isinstance(self.rbr_identity, nn.modules.batchnorm.SyncBatchNorm)): - # print(f"fuse: rbr_identity == BatchNorm2d or SyncBatchNorm") - identity_conv_1x1 = nn.Conv2d( - in_channels=self.in_channels, - out_channels=self.out_channels, - kernel_size=1, - stride=1, - padding=0, - groups=self.groups, - bias=False) - identity_conv_1x1.weight.data = identity_conv_1x1.weight.data.to(self.rbr_1x1.weight.data.device) - identity_conv_1x1.weight.data = identity_conv_1x1.weight.data.squeeze().squeeze() - # print(f" identity_conv_1x1.weight = {identity_conv_1x1.weight.shape}") - identity_conv_1x1.weight.data.fill_(0.0) - identity_conv_1x1.weight.data.fill_diagonal_(1.0) - identity_conv_1x1.weight.data = identity_conv_1x1.weight.data.unsqueeze(2).unsqueeze(3) - # print(f" identity_conv_1x1.weight = {identity_conv_1x1.weight.shape}") - - identity_conv_1x1 = self.fuse_conv_bn(identity_conv_1x1, self.rbr_identity) - bias_identity_expanded = identity_conv_1x1.bias - weight_identity_expanded = torch.nn.functional.pad(identity_conv_1x1.weight, [1, 1, 1, 1]) - else: - # print(f"fuse: rbr_identity != BatchNorm2d, rbr_identity = {self.rbr_identity}") - bias_identity_expanded = torch.nn.Parameter( torch.zeros_like(rbr_1x1_bias) ) - weight_identity_expanded = torch.nn.Parameter( torch.zeros_like(weight_1x1_expanded) ) - - - #print(f"self.rbr_1x1.weight = {self.rbr_1x1.weight.shape}, ") - #print(f"weight_1x1_expanded = {weight_1x1_expanded.shape}, ") - #print(f"self.rbr_dense.weight = {self.rbr_dense.weight.shape}, ") - - self.rbr_dense.weight = torch.nn.Parameter(self.rbr_dense.weight + weight_1x1_expanded + weight_identity_expanded) - self.rbr_dense.bias = torch.nn.Parameter(self.rbr_dense.bias + rbr_1x1_bias + bias_identity_expanded) - - self.rbr_reparam = self.rbr_dense - self.deploy = True 
- - if self.rbr_identity is not None: - del self.rbr_identity - self.rbr_identity = None - - if self.rbr_1x1 is not None: - del self.rbr_1x1 - self.rbr_1x1 = None - - if self.rbr_dense is not None: - del self.rbr_dense - self.rbr_dense = None - - -class RepBottleneck(Bottleneck): - # Standard bottleneck - def __init__(self, c1, c2, shortcut=True, g=1, e=0.5): # ch_in, ch_out, shortcut, groups, expansion - super().__init__(c1, c2, shortcut=True, g=1, e=0.5) - c_ = int(c2 * e) # hidden channels - self.cv2 = RepConv(c_, c2, 3, 1, g=g) - - -class RepBottleneckCSPA(BottleneckCSPA): - # CSP Bottleneck https://github.com/WongKinYiu/CrossStagePartialNetworks - def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5): # ch_in, ch_out, number, shortcut, groups, expansion - super().__init__(c1, c2, n, shortcut, g, e) - c_ = int(c2 * e) # hidden channels - self.m = nn.Sequential(*[RepBottleneck(c_, c_, shortcut, g, e=1.0) for _ in range(n)]) - - -class RepBottleneckCSPB(BottleneckCSPB): - # CSP Bottleneck https://github.com/WongKinYiu/CrossStagePartialNetworks - def __init__(self, c1, c2, n=1, shortcut=False, g=1, e=0.5): # ch_in, ch_out, number, shortcut, groups, expansion - super().__init__(c1, c2, n, shortcut, g, e) - c_ = int(c2) # hidden channels - self.m = nn.Sequential(*[RepBottleneck(c_, c_, shortcut, g, e=1.0) for _ in range(n)]) - - -class RepBottleneckCSPC(BottleneckCSPC): - # CSP Bottleneck https://github.com/WongKinYiu/CrossStagePartialNetworks - def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5): # ch_in, ch_out, number, shortcut, groups, expansion - super().__init__(c1, c2, n, shortcut, g, e) - c_ = int(c2 * e) # hidden channels - self.m = nn.Sequential(*[RepBottleneck(c_, c_, shortcut, g, e=1.0) for _ in range(n)]) - - -class RepRes(Res): - # Standard bottleneck - def __init__(self, c1, c2, shortcut=True, g=1, e=0.5): # ch_in, ch_out, shortcut, groups, expansion - super().__init__(c1, c2, shortcut, g, e) - c_ = int(c2 * e) # hidden channels - self.cv2 = RepConv(c_, c_, 3, 1, g=g) - - -class RepResCSPA(ResCSPA): - # CSP Bottleneck https://github.com/WongKinYiu/CrossStagePartialNetworks - def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5): # ch_in, ch_out, number, shortcut, groups, expansion - super().__init__(c1, c2, n, shortcut, g, e) - c_ = int(c2 * e) # hidden channels - self.m = nn.Sequential(*[RepRes(c_, c_, shortcut, g, e=0.5) for _ in range(n)]) - - -class RepResCSPB(ResCSPB): - # CSP Bottleneck https://github.com/WongKinYiu/CrossStagePartialNetworks - def __init__(self, c1, c2, n=1, shortcut=False, g=1, e=0.5): # ch_in, ch_out, number, shortcut, groups, expansion - super().__init__(c1, c2, n, shortcut, g, e) - c_ = int(c2) # hidden channels - self.m = nn.Sequential(*[RepRes(c_, c_, shortcut, g, e=0.5) for _ in range(n)]) - - -class RepResCSPC(ResCSPC): - # CSP Bottleneck https://github.com/WongKinYiu/CrossStagePartialNetworks - def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5): # ch_in, ch_out, number, shortcut, groups, expansion - super().__init__(c1, c2, n, shortcut, g, e) - c_ = int(c2 * e) # hidden channels - self.m = nn.Sequential(*[RepRes(c_, c_, shortcut, g, e=0.5) for _ in range(n)]) - - -class RepResX(ResX): - # Standard bottleneck - def __init__(self, c1, c2, shortcut=True, g=32, e=0.5): # ch_in, ch_out, shortcut, groups, expansion - super().__init__(c1, c2, shortcut, g, e) - c_ = int(c2 * e) # hidden channels - self.cv2 = RepConv(c_, c_, 3, 1, g=g) - - -class RepResXCSPA(ResXCSPA): - # CSP Bottleneck 
https://github.com/WongKinYiu/CrossStagePartialNetworks - def __init__(self, c1, c2, n=1, shortcut=True, g=32, e=0.5): # ch_in, ch_out, number, shortcut, groups, expansion - super().__init__(c1, c2, n, shortcut, g, e) - c_ = int(c2 * e) # hidden channels - self.m = nn.Sequential(*[RepResX(c_, c_, shortcut, g, e=0.5) for _ in range(n)]) - - -class RepResXCSPB(ResXCSPB): - # CSP Bottleneck https://github.com/WongKinYiu/CrossStagePartialNetworks - def __init__(self, c1, c2, n=1, shortcut=False, g=32, e=0.5): # ch_in, ch_out, number, shortcut, groups, expansion - super().__init__(c1, c2, n, shortcut, g, e) - c_ = int(c2) # hidden channels - self.m = nn.Sequential(*[RepResX(c_, c_, shortcut, g, e=0.5) for _ in range(n)]) - - -class RepResXCSPC(ResXCSPC): - # CSP Bottleneck https://github.com/WongKinYiu/CrossStagePartialNetworks - def __init__(self, c1, c2, n=1, shortcut=True, g=32, e=0.5): # ch_in, ch_out, number, shortcut, groups, expansion - super().__init__(c1, c2, n, shortcut, g, e) - c_ = int(c2 * e) # hidden channels - self.m = nn.Sequential(*[RepResX(c_, c_, shortcut, g, e=0.5) for _ in range(n)]) - -##### end of repvgg ##### - - -##### transformer ##### - -class TransformerLayer(nn.Module): - # Transformer layer https://arxiv.org/abs/2010.11929 (LayerNorm layers removed for better performance) - def __init__(self, c, num_heads): - super().__init__() - self.q = nn.Linear(c, c, bias=False) - self.k = nn.Linear(c, c, bias=False) - self.v = nn.Linear(c, c, bias=False) - self.ma = nn.MultiheadAttention(embed_dim=c, num_heads=num_heads) - self.fc1 = nn.Linear(c, c, bias=False) - self.fc2 = nn.Linear(c, c, bias=False) - - def forward(self, x): - x = self.ma(self.q(x), self.k(x), self.v(x))[0] + x - x = self.fc2(self.fc1(x)) + x - return x - - -class TransformerBlock(nn.Module): - # Vision Transformer https://arxiv.org/abs/2010.11929 - def __init__(self, c1, c2, num_heads, num_layers): - super().__init__() - self.conv = None - if c1 != c2: - self.conv = Conv(c1, c2) - self.linear = nn.Linear(c2, c2) # learnable position embedding - self.tr = nn.Sequential(*[TransformerLayer(c2, num_heads) for _ in range(num_layers)]) - self.c2 = c2 - - def forward(self, x): - if self.conv is not None: - x = self.conv(x) - b, _, w, h = x.shape - p = x.flatten(2) - p = p.unsqueeze(0) - p = p.transpose(0, 3) - p = p.squeeze(3) - e = self.linear(p) - x = p + e - - x = self.tr(x) - x = x.unsqueeze(3) - x = x.transpose(0, 3) - x = x.reshape(b, self.c2, w, h) - return x - -##### end of transformer ##### - - -##### yolov5 ##### - -class Focus(nn.Module): - # Focus wh information into c-space - def __init__(self, c1, c2, k=1, s=1, p=None, g=1, act=True): # ch_in, ch_out, kernel, stride, padding, groups - super(Focus, self).__init__() - self.conv = Conv(c1 * 4, c2, k, s, p, g, act) - # self.contract = Contract(gain=2) - - def forward(self, x): # x(b,c,w,h) -> y(b,4c,w/2,h/2) - return self.conv(torch.cat([x[..., ::2, ::2], x[..., 1::2, ::2], x[..., ::2, 1::2], x[..., 1::2, 1::2]], 1)) - # return self.conv(self.contract(x)) - - -class SPPF(nn.Module): - # Spatial Pyramid Pooling - Fast (SPPF) layer for YOLOv5 by Glenn Jocher - def __init__(self, c1, c2, k=5): # equivalent to SPP(k=(5, 9, 13)) - super().__init__() - c_ = c1 // 2 # hidden channels - self.cv1 = Conv(c1, c_, 1, 1) - self.cv2 = Conv(c_ * 4, c2, 1, 1) - self.m = nn.MaxPool2d(kernel_size=k, stride=1, padding=k // 2) - - def forward(self, x): - x = self.cv1(x) - y1 = self.m(x) - y2 = self.m(y1) - return self.cv2(torch.cat([x, y1, y2, self.m(y2)], 1)) - - -class 
Contract(nn.Module): - # Contract width-height into channels, i.e. x(1,64,80,80) to x(1,256,40,40) - def __init__(self, gain=2): - super().__init__() - self.gain = gain - - def forward(self, x): - N, C, H, W = x.size() # assert (H / s == 0) and (W / s == 0), 'Indivisible gain' - s = self.gain - x = x.view(N, C, H // s, s, W // s, s) # x(1,64,40,2,40,2) - x = x.permute(0, 3, 5, 1, 2, 4).contiguous() # x(1,2,2,64,40,40) - return x.view(N, C * s * s, H // s, W // s) # x(1,256,40,40) - - -class Expand(nn.Module): - # Expand channels into width-height, i.e. x(1,64,80,80) to x(1,16,160,160) - def __init__(self, gain=2): - super().__init__() - self.gain = gain - - def forward(self, x): - N, C, H, W = x.size() # assert C / s ** 2 == 0, 'Indivisible gain' - s = self.gain - x = x.view(N, s, s, C // s ** 2, H, W) # x(1,2,2,16,80,80) - x = x.permute(0, 3, 4, 1, 5, 2).contiguous() # x(1,16,80,2,80,2) - return x.view(N, C // s ** 2, H * s, W * s) # x(1,16,160,160) - - -class NMS(nn.Module): - # Non-Maximum Suppression (NMS) module - conf = 0.25 # confidence threshold - iou = 0.45 # IoU threshold - classes = None # (optional list) filter by class - - def __init__(self): - super(NMS, self).__init__() - - def forward(self, x): - return non_max_suppression(x[0], conf_thres=self.conf, iou_thres=self.iou, classes=self.classes) - - -class autoShape(nn.Module): - # input-robust model wrapper for passing cv2/np/PIL/torch inputs. Includes preprocessing, inference and NMS - conf = 0.25 # NMS confidence threshold - iou = 0.45 # NMS IoU threshold - classes = None # (optional list) filter by class - - def __init__(self, model): - super(autoShape, self).__init__() - self.model = model.eval() - - def autoshape(self): - print('autoShape already enabled, skipping... ') # model already converted to model.autoshape() - return self - - @torch.no_grad() - def forward(self, imgs, size=640, augment=False, profile=False): - # Inference from various sources. For height=640, width=1280, RGB images example inputs are: - # filename: imgs = 'data/samples/zidane.jpg' - # URI: = 'https://github.com/ultralytics/yolov5/releases/download/v1.0/zidane.jpg' - # OpenCV: = cv2.imread('image.jpg')[:,:,::-1] # HWC BGR to RGB x(640,1280,3) - # PIL: = Image.open('image.jpg') # HWC x(640,1280,3) - # numpy: = np.zeros((640,1280,3)) # HWC - # torch: = torch.zeros(16,3,320,640) # BCHW (scaled to size=640, 0-1 values) - # multiple: = [Image.open('image1.jpg'), Image.open('image2.jpg'), ...] 
# list of images - - t = [time_synchronized()] - p = next(self.model.parameters()) # for device and type - if isinstance(imgs, torch.Tensor): # torch - with amp.autocast(enabled=p.device.type != 'cpu'): - return self.model(imgs.to(p.device).type_as(p), augment, profile) # inference - - # Pre-process - n, imgs = (len(imgs), imgs) if isinstance(imgs, list) else (1, [imgs]) # number of images, list of images - shape0, shape1, files = [], [], [] # image and inference shapes, filenames - for i, im in enumerate(imgs): - f = f'image{i}' # filename - if isinstance(im, str): # filename or uri - im, f = np.asarray(Image.open(requests.get(im, stream=True).raw if im.startswith('http') else im)), im - elif isinstance(im, Image.Image): # PIL Image - im, f = np.asarray(im), getattr(im, 'filename', f) or f - files.append(Path(f).with_suffix('.jpg').name) - if im.shape[0] < 5: # image in CHW - im = im.transpose((1, 2, 0)) # reverse dataloader .transpose(2, 0, 1) - im = im[:, :, :3] if im.ndim == 3 else np.tile(im[:, :, None], 3) # enforce 3ch input - s = im.shape[:2] # HWC - shape0.append(s) # image shape - g = (size / max(s)) # gain - shape1.append([y * g for y in s]) - imgs[i] = im # update - shape1 = [make_divisible(x, int(self.stride.max())) for x in np.stack(shape1, 0).max(0)] # inference shape - x = [letterbox(im, new_shape=shape1, auto=False)[0] for im in imgs] # pad - x = np.stack(x, 0) if n > 1 else x[0][None] # stack - x = np.ascontiguousarray(x.transpose((0, 3, 1, 2))) # BHWC to BCHW - x = torch.from_numpy(x).to(p.device).type_as(p) / 255. # uint8 to fp16/32 - t.append(time_synchronized()) - - with amp.autocast(enabled=p.device.type != 'cpu'): - # Inference - y = self.model(x, augment, profile)[0] # forward - t.append(time_synchronized()) - - # Post-process - y = non_max_suppression(y, conf_thres=self.conf, iou_thres=self.iou, classes=self.classes) # NMS - for i in range(n): - scale_coords(shape1, y[i][:, :4], shape0[i]) - - t.append(time_synchronized()) - return Detections(imgs, y, files, t, self.names, x.shape) - - -class Detections: - # detections class for YOLOv5 inference results - def __init__(self, imgs, pred, files, times=None, names=None, shape=None): - super(Detections, self).__init__() - d = pred[0].device # device - gn = [torch.tensor([*[im.shape[i] for i in [1, 0, 1, 0]], 1., 1.], device=d) for im in imgs] # normalizations - self.imgs = imgs # list of images as numpy arrays - self.pred = pred # list of tensors pred[0] = (xyxy, conf, cls) - self.names = names # class names - self.files = files # image filenames - self.xyxy = pred # xyxy pixels - self.xywh = [xyxy2xywh(x) for x in pred] # xywh pixels - self.xyxyn = [x / g for x, g in zip(self.xyxy, gn)] # xyxy normalized - self.xywhn = [x / g for x, g in zip(self.xywh, gn)] # xywh normalized - self.n = len(self.pred) # number of images (batch size) - self.t = tuple((times[i + 1] - times[i]) * 1000 / self.n for i in range(3)) # timestamps (ms) - self.s = shape # inference BCHW shape - - def display(self, pprint=False, show=False, save=False, render=False, save_dir=''): - colors = color_list() - for i, (img, pred) in enumerate(zip(self.imgs, self.pred)): - str = f'image {i + 1}/{len(self.pred)}: {img.shape[0]}x{img.shape[1]} ' - if pred is not None: - for c in pred[:, -1].unique(): - n = (pred[:, -1] == c).sum() # detections per class - str += f"{n} {self.names[int(c)]}{'s' * (n > 1)}, " # add to string - if show or save or render: - for *box, conf, cls in pred: # xyxy, confidence, class - label = f'{self.names[int(cls)]} {conf:.2f}' - 
plot_one_box(box, img, label=label, color=colors[int(cls) % 10]) - img = Image.fromarray(img.astype(np.uint8)) if isinstance(img, np.ndarray) else img # from np - if pprint: - print(str.rstrip(', ')) - if show: - img.show(self.files[i]) # show - if save: - f = self.files[i] - img.save(Path(save_dir) / f) # save - print(f"{'Saved' * (i == 0)} {f}", end=',' if i < self.n - 1 else f' to {save_dir}\n') - if render: - self.imgs[i] = np.asarray(img) - - def print(self): - self.display(pprint=True) # print results - print(f'Speed: %.1fms pre-process, %.1fms inference, %.1fms NMS per image at shape {tuple(self.s)}' % self.t) - - def show(self): - self.display(show=True) # show results - - def save(self, save_dir='runs/hub/exp'): - save_dir = increment_path(save_dir, exist_ok=save_dir != 'runs/hub/exp') # increment save_dir - Path(save_dir).mkdir(parents=True, exist_ok=True) - self.display(save=True, save_dir=save_dir) # save results - - def render(self): - self.display(render=True) # render results - return self.imgs - - def pandas(self): - # return detections as pandas DataFrames, i.e. print(results.pandas().xyxy[0]) - new = copy(self) # return copy - ca = 'xmin', 'ymin', 'xmax', 'ymax', 'confidence', 'class', 'name' # xyxy columns - cb = 'xcenter', 'ycenter', 'width', 'height', 'confidence', 'class', 'name' # xywh columns - for k, c in zip(['xyxy', 'xyxyn', 'xywh', 'xywhn'], [ca, ca, cb, cb]): - a = [[x[:5] + [int(x[5]), self.names[int(x[5])]] for x in x.tolist()] for x in getattr(self, k)] # update - setattr(new, k, [pd.DataFrame(x, columns=c) for x in a]) - return new - - def tolist(self): - # return a list of Detections objects, i.e. 'for result in results.tolist():' - x = [Detections([self.imgs[i]], [self.pred[i]], self.names, self.s) for i in range(self.n)] - for d in x: - for k in ['imgs', 'pred', 'xyxy', 'xyxyn', 'xywh', 'xywhn']: - setattr(d, k, getattr(d, k)[0]) # pop out of list - return x - - def __len__(self): - return self.n - - -class Classify(nn.Module): - # Classification head, i.e. 
x(b,c1,20,20) to x(b,c2) - def __init__(self, c1, c2, k=1, s=1, p=None, g=1): # ch_in, ch_out, kernel, stride, padding, groups - super(Classify, self).__init__() - self.aap = nn.AdaptiveAvgPool2d(1) # to x(b,c1,1,1) - self.conv = nn.Conv2d(c1, c2, k, s, autopad(k, p), groups=g) # to x(b,c2,1,1) - self.flat = nn.Flatten() - - def forward(self, x): - z = torch.cat([self.aap(y) for y in (x if isinstance(x, list) else [x])], 1) # cat if list - return self.flat(self.conv(z)) # flatten to x(b,c2) - -##### end of yolov5 ###### - - -##### orepa ##### - -def transI_fusebn(kernel, bn): - gamma = bn.weight - std = (bn.running_var + bn.eps).sqrt() - return kernel * ((gamma / std).reshape(-1, 1, 1, 1)), bn.bias - bn.running_mean * gamma / std - - -class ConvBN(nn.Module): - def __init__(self, in_channels, out_channels, kernel_size, - stride=1, padding=0, dilation=1, groups=1, deploy=False, nonlinear=None): - super().__init__() - if nonlinear is None: - self.nonlinear = nn.Identity() - else: - self.nonlinear = nonlinear - if deploy: - self.conv = nn.Conv2d(in_channels=in_channels, out_channels=out_channels, kernel_size=kernel_size, - stride=stride, padding=padding, dilation=dilation, groups=groups, bias=True) - else: - self.conv = nn.Conv2d(in_channels=in_channels, out_channels=out_channels, kernel_size=kernel_size, - stride=stride, padding=padding, dilation=dilation, groups=groups, bias=False) - self.bn = nn.BatchNorm2d(num_features=out_channels) - - def forward(self, x): - if hasattr(self, 'bn'): - return self.nonlinear(self.bn(self.conv(x))) - else: - return self.nonlinear(self.conv(x)) - - def switch_to_deploy(self): - kernel, bias = transI_fusebn(self.conv.weight, self.bn) - conv = nn.Conv2d(in_channels=self.conv.in_channels, out_channels=self.conv.out_channels, kernel_size=self.conv.kernel_size, - stride=self.conv.stride, padding=self.conv.padding, dilation=self.conv.dilation, groups=self.conv.groups, bias=True) - conv.weight.data = kernel - conv.bias.data = bias - for para in self.parameters(): - para.detach_() - self.__delattr__('conv') - self.__delattr__('bn') - self.conv = conv - -class OREPA_3x3_RepConv(nn.Module): - - def __init__(self, in_channels, out_channels, kernel_size, - stride=1, padding=0, dilation=1, groups=1, - internal_channels_1x1_3x3=None, - deploy=False, nonlinear=None, single_init=False): - super(OREPA_3x3_RepConv, self).__init__() - self.deploy = deploy - - if nonlinear is None: - self.nonlinear = nn.Identity() - else: - self.nonlinear = nonlinear - - self.kernel_size = kernel_size - self.in_channels = in_channels - self.out_channels = out_channels - self.groups = groups - assert padding == kernel_size // 2 - - self.stride = stride - self.padding = padding - self.dilation = dilation - - self.branch_counter = 0 - - self.weight_rbr_origin = nn.Parameter(torch.Tensor(out_channels, int(in_channels/self.groups), kernel_size, kernel_size)) - nn.init.kaiming_uniform_(self.weight_rbr_origin, a=math.sqrt(1.0)) - self.branch_counter += 1 - - - if groups < out_channels: - self.weight_rbr_avg_conv = nn.Parameter(torch.Tensor(out_channels, int(in_channels/self.groups), 1, 1)) - self.weight_rbr_pfir_conv = nn.Parameter(torch.Tensor(out_channels, int(in_channels/self.groups), 1, 1)) - nn.init.kaiming_uniform_(self.weight_rbr_avg_conv, a=1.0) - nn.init.kaiming_uniform_(self.weight_rbr_pfir_conv, a=1.0) - self.weight_rbr_avg_conv.data - self.weight_rbr_pfir_conv.data - self.register_buffer('weight_rbr_avg_avg', torch.ones(kernel_size, kernel_size).mul(1.0/kernel_size/kernel_size)) - 
self.branch_counter += 1 - - else: - raise NotImplementedError - self.branch_counter += 1 - - if internal_channels_1x1_3x3 is None: - internal_channels_1x1_3x3 = in_channels if groups < out_channels else 2 * in_channels # For mobilenet, it is better to have 2X internal channels - - if internal_channels_1x1_3x3 == in_channels: - self.weight_rbr_1x1_kxk_idconv1 = nn.Parameter(torch.zeros(in_channels, int(in_channels/self.groups), 1, 1)) - id_value = np.zeros((in_channels, int(in_channels/self.groups), 1, 1)) - for i in range(in_channels): - id_value[i, i % int(in_channels/self.groups), 0, 0] = 1 - id_tensor = torch.from_numpy(id_value).type_as(self.weight_rbr_1x1_kxk_idconv1) - self.register_buffer('id_tensor', id_tensor) - - else: - self.weight_rbr_1x1_kxk_conv1 = nn.Parameter(torch.Tensor(internal_channels_1x1_3x3, int(in_channels/self.groups), 1, 1)) - nn.init.kaiming_uniform_(self.weight_rbr_1x1_kxk_conv1, a=math.sqrt(1.0)) - self.weight_rbr_1x1_kxk_conv2 = nn.Parameter(torch.Tensor(out_channels, int(internal_channels_1x1_3x3/self.groups), kernel_size, kernel_size)) - nn.init.kaiming_uniform_(self.weight_rbr_1x1_kxk_conv2, a=math.sqrt(1.0)) - self.branch_counter += 1 - - expand_ratio = 8 - self.weight_rbr_gconv_dw = nn.Parameter(torch.Tensor(in_channels*expand_ratio, 1, kernel_size, kernel_size)) - self.weight_rbr_gconv_pw = nn.Parameter(torch.Tensor(out_channels, in_channels*expand_ratio, 1, 1)) - nn.init.kaiming_uniform_(self.weight_rbr_gconv_dw, a=math.sqrt(1.0)) - nn.init.kaiming_uniform_(self.weight_rbr_gconv_pw, a=math.sqrt(1.0)) - self.branch_counter += 1 - - if out_channels == in_channels and stride == 1: - self.branch_counter += 1 - - self.vector = nn.Parameter(torch.Tensor(self.branch_counter, self.out_channels)) - self.bn = nn.BatchNorm2d(out_channels) - - self.fre_init() - - nn.init.constant_(self.vector[0, :], 0.25) #origin - nn.init.constant_(self.vector[1, :], 0.25) #avg - nn.init.constant_(self.vector[2, :], 0.0) #prior - nn.init.constant_(self.vector[3, :], 0.5) #1x1_kxk - nn.init.constant_(self.vector[4, :], 0.5) #dws_conv - - - def fre_init(self): - prior_tensor = torch.Tensor(self.out_channels, self.kernel_size, self.kernel_size) - half_fg = self.out_channels/2 - for i in range(self.out_channels): - for h in range(3): - for w in range(3): - if i < half_fg: - prior_tensor[i, h, w] = math.cos(math.pi*(h+0.5)*(i+1)/3) - else: - prior_tensor[i, h, w] = math.cos(math.pi*(w+0.5)*(i+1-half_fg)/3) - - self.register_buffer('weight_rbr_prior', prior_tensor) - - def weight_gen(self): - - weight_rbr_origin = torch.einsum('oihw,o->oihw', self.weight_rbr_origin, self.vector[0, :]) - - weight_rbr_avg = torch.einsum('oihw,o->oihw', torch.einsum('oihw,hw->oihw', self.weight_rbr_avg_conv, self.weight_rbr_avg_avg), self.vector[1, :]) - - weight_rbr_pfir = torch.einsum('oihw,o->oihw', torch.einsum('oihw,ohw->oihw', self.weight_rbr_pfir_conv, self.weight_rbr_prior), self.vector[2, :]) - - weight_rbr_1x1_kxk_conv1 = None - if hasattr(self, 'weight_rbr_1x1_kxk_idconv1'): - weight_rbr_1x1_kxk_conv1 = (self.weight_rbr_1x1_kxk_idconv1 + self.id_tensor).squeeze() - elif hasattr(self, 'weight_rbr_1x1_kxk_conv1'): - weight_rbr_1x1_kxk_conv1 = self.weight_rbr_1x1_kxk_conv1.squeeze() - else: - raise NotImplementedError - weight_rbr_1x1_kxk_conv2 = self.weight_rbr_1x1_kxk_conv2 - - if self.groups > 1: - g = self.groups - t, ig = weight_rbr_1x1_kxk_conv1.size() - o, tg, h, w = weight_rbr_1x1_kxk_conv2.size() - weight_rbr_1x1_kxk_conv1 = weight_rbr_1x1_kxk_conv1.view(g, int(t/g), ig) - 
weight_rbr_1x1_kxk_conv2 = weight_rbr_1x1_kxk_conv2.view(g, int(o/g), tg, h, w) - weight_rbr_1x1_kxk = torch.einsum('gti,gothw->goihw', weight_rbr_1x1_kxk_conv1, weight_rbr_1x1_kxk_conv2).view(o, ig, h, w) - else: - weight_rbr_1x1_kxk = torch.einsum('ti,othw->oihw', weight_rbr_1x1_kxk_conv1, weight_rbr_1x1_kxk_conv2) - - weight_rbr_1x1_kxk = torch.einsum('oihw,o->oihw', weight_rbr_1x1_kxk, self.vector[3, :]) - - weight_rbr_gconv = self.dwsc2full(self.weight_rbr_gconv_dw, self.weight_rbr_gconv_pw, self.in_channels) - weight_rbr_gconv = torch.einsum('oihw,o->oihw', weight_rbr_gconv, self.vector[4, :]) - - weight = weight_rbr_origin + weight_rbr_avg + weight_rbr_1x1_kxk + weight_rbr_pfir + weight_rbr_gconv - - return weight - - def dwsc2full(self, weight_dw, weight_pw, groups): - - t, ig, h, w = weight_dw.size() - o, _, _, _ = weight_pw.size() - tg = int(t/groups) - i = int(ig*groups) - weight_dw = weight_dw.view(groups, tg, ig, h, w) - weight_pw = weight_pw.squeeze().view(o, groups, tg) - - weight_dsc = torch.einsum('gtihw,ogt->ogihw', weight_dw, weight_pw) - return weight_dsc.view(o, i, h, w) - - def forward(self, inputs): - weight = self.weight_gen() - out = F.conv2d(inputs, weight, bias=None, stride=self.stride, padding=self.padding, dilation=self.dilation, groups=self.groups) - - return self.nonlinear(self.bn(out)) - -class RepConv_OREPA(nn.Module): - - def __init__(self, c1, c2, k=3, s=1, padding=1, dilation=1, groups=1, padding_mode='zeros', deploy=False, use_se=False, nonlinear=nn.SiLU()): - super(RepConv_OREPA, self).__init__() - self.deploy = deploy - self.groups = groups - self.in_channels = c1 - self.out_channels = c2 - - self.padding = padding - self.dilation = dilation - self.groups = groups - - assert k == 3 - assert padding == 1 - - padding_11 = padding - k // 2 - - if nonlinear is None: - self.nonlinearity = nn.Identity() - else: - self.nonlinearity = nonlinear - - if use_se: - self.se = SEBlock(self.out_channels, internal_neurons=self.out_channels // 16) - else: - self.se = nn.Identity() - - if deploy: - self.rbr_reparam = nn.Conv2d(in_channels=self.in_channels, out_channels=self.out_channels, kernel_size=k, stride=s, - padding=padding, dilation=dilation, groups=groups, bias=True, padding_mode=padding_mode) - - else: - self.rbr_identity = nn.BatchNorm2d(num_features=self.in_channels) if self.out_channels == self.in_channels and s == 1 else None - self.rbr_dense = OREPA_3x3_RepConv(in_channels=self.in_channels, out_channels=self.out_channels, kernel_size=k, stride=s, padding=padding, groups=groups, dilation=1) - self.rbr_1x1 = ConvBN(in_channels=self.in_channels, out_channels=self.out_channels, kernel_size=1, stride=s, padding=padding_11, groups=groups, dilation=1) - print('RepVGG Block, identity = ', self.rbr_identity) - - - def forward(self, inputs): - if hasattr(self, 'rbr_reparam'): - return self.nonlinearity(self.se(self.rbr_reparam(inputs))) - - if self.rbr_identity is None: - id_out = 0 - else: - id_out = self.rbr_identity(inputs) - - out1 = self.rbr_dense(inputs) - out2 = self.rbr_1x1(inputs) - out3 = id_out - out = out1 + out2 + out3 - - return self.nonlinearity(self.se(out)) - - - # Optional. This improves the accuracy and facilitates quantization. - # 1. Cancel the original weight decay on rbr_dense.conv.weight and rbr_1x1.conv.weight. - # 2. Use like this. - # loss = criterion(....) 
- # for every RepVGGBlock blk: - # loss += weight_decay_coefficient * 0.5 * blk.get_cust_L2() - # optimizer.zero_grad() - # loss.backward() - - # Not used for OREPA - def get_custom_L2(self): - K3 = self.rbr_dense.weight_gen() - K1 = self.rbr_1x1.conv.weight - t3 = (self.rbr_dense.bn.weight / ((self.rbr_dense.bn.running_var + self.rbr_dense.bn.eps).sqrt())).reshape(-1, 1, 1, 1).detach() - t1 = (self.rbr_1x1.bn.weight / ((self.rbr_1x1.bn.running_var + self.rbr_1x1.bn.eps).sqrt())).reshape(-1, 1, 1, 1).detach() - - l2_loss_circle = (K3 ** 2).sum() - (K3[:, :, 1:2, 1:2] ** 2).sum() # The L2 loss of the "circle" of weights in 3x3 kernel. Use regular L2 on them. - eq_kernel = K3[:, :, 1:2, 1:2] * t3 + K1 * t1 # The equivalent resultant central point of 3x3 kernel. - l2_loss_eq_kernel = (eq_kernel ** 2 / (t3 ** 2 + t1 ** 2)).sum() # Normalize for an L2 coefficient comparable to regular L2. - return l2_loss_eq_kernel + l2_loss_circle - - def get_equivalent_kernel_bias(self): - kernel3x3, bias3x3 = self._fuse_bn_tensor(self.rbr_dense) - kernel1x1, bias1x1 = self._fuse_bn_tensor(self.rbr_1x1) - kernelid, biasid = self._fuse_bn_tensor(self.rbr_identity) - return kernel3x3 + self._pad_1x1_to_3x3_tensor(kernel1x1) + kernelid, bias3x3 + bias1x1 + biasid - - def _pad_1x1_to_3x3_tensor(self, kernel1x1): - if kernel1x1 is None: - return 0 - else: - return torch.nn.functional.pad(kernel1x1, [1,1,1,1]) - - def _fuse_bn_tensor(self, branch): - if branch is None: - return 0, 0 - if not isinstance(branch, nn.BatchNorm2d): - if isinstance(branch, OREPA_3x3_RepConv): - kernel = branch.weight_gen() - elif isinstance(branch, ConvBN): - kernel = branch.conv.weight - else: - raise NotImplementedError - running_mean = branch.bn.running_mean - running_var = branch.bn.running_var - gamma = branch.bn.weight - beta = branch.bn.bias - eps = branch.bn.eps - else: - if not hasattr(self, 'id_tensor'): - input_dim = self.in_channels // self.groups - kernel_value = np.zeros((self.in_channels, input_dim, 3, 3), dtype=np.float32) - for i in range(self.in_channels): - kernel_value[i, i % input_dim, 1, 1] = 1 - self.id_tensor = torch.from_numpy(kernel_value).to(branch.weight.device) - kernel = self.id_tensor - running_mean = branch.running_mean - running_var = branch.running_var - gamma = branch.weight - beta = branch.bias - eps = branch.eps - std = (running_var + eps).sqrt() - t = (gamma / std).reshape(-1, 1, 1, 1) - return kernel * t, beta - running_mean * gamma / std - - def switch_to_deploy(self): - if hasattr(self, 'rbr_reparam'): - return - print(f"RepConv_OREPA.switch_to_deploy") - kernel, bias = self.get_equivalent_kernel_bias() - self.rbr_reparam = nn.Conv2d(in_channels=self.rbr_dense.in_channels, out_channels=self.rbr_dense.out_channels, - kernel_size=self.rbr_dense.kernel_size, stride=self.rbr_dense.stride, - padding=self.rbr_dense.padding, dilation=self.rbr_dense.dilation, groups=self.rbr_dense.groups, bias=True) - self.rbr_reparam.weight.data = kernel - self.rbr_reparam.bias.data = bias - for para in self.parameters(): - para.detach_() - self.__delattr__('rbr_dense') - self.__delattr__('rbr_1x1') - if hasattr(self, 'rbr_identity'): - self.__delattr__('rbr_identity') - -##### end of orepa ##### - - -##### swin transformer ##### - -class WindowAttention(nn.Module): - - def __init__(self, dim, window_size, num_heads, qkv_bias=True, qk_scale=None, attn_drop=0., proj_drop=0.): - - super().__init__() - self.dim = dim - self.window_size = window_size # Wh, Ww - self.num_heads = num_heads - head_dim = dim // num_heads - 
self.scale = qk_scale or head_dim ** -0.5 - - # define a parameter table of relative position bias - self.relative_position_bias_table = nn.Parameter( - torch.zeros((2 * window_size[0] - 1) * (2 * window_size[1] - 1), num_heads)) # 2*Wh-1 * 2*Ww-1, nH - - # get pair-wise relative position index for each token inside the window - coords_h = torch.arange(self.window_size[0]) - coords_w = torch.arange(self.window_size[1]) - coords = torch.stack(torch.meshgrid([coords_h, coords_w])) # 2, Wh, Ww - coords_flatten = torch.flatten(coords, 1) # 2, Wh*Ww - relative_coords = coords_flatten[:, :, None] - coords_flatten[:, None, :] # 2, Wh*Ww, Wh*Ww - relative_coords = relative_coords.permute(1, 2, 0).contiguous() # Wh*Ww, Wh*Ww, 2 - relative_coords[:, :, 0] += self.window_size[0] - 1 # shift to start from 0 - relative_coords[:, :, 1] += self.window_size[1] - 1 - relative_coords[:, :, 0] *= 2 * self.window_size[1] - 1 - relative_position_index = relative_coords.sum(-1) # Wh*Ww, Wh*Ww - self.register_buffer("relative_position_index", relative_position_index) - - self.qkv = nn.Linear(dim, dim * 3, bias=qkv_bias) - self.attn_drop = nn.Dropout(attn_drop) - self.proj = nn.Linear(dim, dim) - self.proj_drop = nn.Dropout(proj_drop) - - nn.init.normal_(self.relative_position_bias_table, std=.02) - self.softmax = nn.Softmax(dim=-1) - - def forward(self, x, mask=None): - - B_, N, C = x.shape - qkv = self.qkv(x).reshape(B_, N, 3, self.num_heads, C // self.num_heads).permute(2, 0, 3, 1, 4) - q, k, v = qkv[0], qkv[1], qkv[2] # make torchscript happy (cannot use tensor as tuple) - - q = q * self.scale - attn = (q @ k.transpose(-2, -1)) - - relative_position_bias = self.relative_position_bias_table[self.relative_position_index.view(-1)].view( - self.window_size[0] * self.window_size[1], self.window_size[0] * self.window_size[1], -1) # Wh*Ww,Wh*Ww,nH - relative_position_bias = relative_position_bias.permute(2, 0, 1).contiguous() # nH, Wh*Ww, Wh*Ww - attn = attn + relative_position_bias.unsqueeze(0) - - if mask is not None: - nW = mask.shape[0] - attn = attn.view(B_ // nW, nW, self.num_heads, N, N) + mask.unsqueeze(1).unsqueeze(0) - attn = attn.view(-1, self.num_heads, N, N) - attn = self.softmax(attn) - else: - attn = self.softmax(attn) - - attn = self.attn_drop(attn) - - # print(attn.dtype, v.dtype) - try: - x = (attn @ v).transpose(1, 2).reshape(B_, N, C) - except: - #print(attn.dtype, v.dtype) - x = (attn.half() @ v).transpose(1, 2).reshape(B_, N, C) - x = self.proj(x) - x = self.proj_drop(x) - return x - -class Mlp(nn.Module): - - def __init__(self, in_features, hidden_features=None, out_features=None, act_layer=nn.SiLU, drop=0.): - super().__init__() - out_features = out_features or in_features - hidden_features = hidden_features or in_features - self.fc1 = nn.Linear(in_features, hidden_features) - self.act = act_layer() - self.fc2 = nn.Linear(hidden_features, out_features) - self.drop = nn.Dropout(drop) - - def forward(self, x): - x = self.fc1(x) - x = self.act(x) - x = self.drop(x) - x = self.fc2(x) - x = self.drop(x) - return x - -def window_partition(x, window_size): - - B, H, W, C = x.shape - assert H % window_size == 0, 'feature map h and w can not divide by window size' - x = x.view(B, H // window_size, window_size, W // window_size, window_size, C) - windows = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(-1, window_size, window_size, C) - return windows - -def window_reverse(windows, window_size, H, W): - - B = int(windows.shape[0] / (H * W / window_size / window_size)) - x = windows.view(B, H // 
window_size, W // window_size, window_size, window_size, -1) - x = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(B, H, W, -1) - return x - - -class SwinTransformerLayer(nn.Module): - - def __init__(self, dim, num_heads, window_size=8, shift_size=0, - mlp_ratio=4., qkv_bias=True, qk_scale=None, drop=0., attn_drop=0., drop_path=0., - act_layer=nn.SiLU, norm_layer=nn.LayerNorm): - super().__init__() - self.dim = dim - self.num_heads = num_heads - self.window_size = window_size - self.shift_size = shift_size - self.mlp_ratio = mlp_ratio - # if min(self.input_resolution) <= self.window_size: - # # if window size is larger than input resolution, we don't partition windows - # self.shift_size = 0 - # self.window_size = min(self.input_resolution) - assert 0 <= self.shift_size < self.window_size, "shift_size must in 0-window_size" - - self.norm1 = norm_layer(dim) - self.attn = WindowAttention( - dim, window_size=(self.window_size, self.window_size), num_heads=num_heads, - qkv_bias=qkv_bias, qk_scale=qk_scale, attn_drop=attn_drop, proj_drop=drop) - - self.drop_path = DropPath(drop_path) if drop_path > 0. else nn.Identity() - self.norm2 = norm_layer(dim) - mlp_hidden_dim = int(dim * mlp_ratio) - self.mlp = Mlp(in_features=dim, hidden_features=mlp_hidden_dim, act_layer=act_layer, drop=drop) - - def create_mask(self, H, W): - # calculate attention mask for SW-MSA - img_mask = torch.zeros((1, H, W, 1)) # 1 H W 1 - h_slices = (slice(0, -self.window_size), - slice(-self.window_size, -self.shift_size), - slice(-self.shift_size, None)) - w_slices = (slice(0, -self.window_size), - slice(-self.window_size, -self.shift_size), - slice(-self.shift_size, None)) - cnt = 0 - for h in h_slices: - for w in w_slices: - img_mask[:, h, w, :] = cnt - cnt += 1 - - mask_windows = window_partition(img_mask, self.window_size) # nW, window_size, window_size, 1 - mask_windows = mask_windows.view(-1, self.window_size * self.window_size) - attn_mask = mask_windows.unsqueeze(1) - mask_windows.unsqueeze(2) - attn_mask = attn_mask.masked_fill(attn_mask != 0, float(-100.0)).masked_fill(attn_mask == 0, float(0.0)) - - return attn_mask - - def forward(self, x): - # reshape x[b c h w] to x[b l c] - _, _, H_, W_ = x.shape - - Padding = False - if min(H_, W_) < self.window_size or H_ % self.window_size!=0 or W_ % self.window_size!=0: - Padding = True - # print(f'img_size {min(H_, W_)} is less than (or not divided by) window_size {self.window_size}, Padding.') - pad_r = (self.window_size - W_ % self.window_size) % self.window_size - pad_b = (self.window_size - H_ % self.window_size) % self.window_size - x = F.pad(x, (0, pad_r, 0, pad_b)) - - # print('2', x.shape) - B, C, H, W = x.shape - L = H * W - x = x.permute(0, 2, 3, 1).contiguous().view(B, L, C) # b, L, c - - # create mask from init to forward - if self.shift_size > 0: - attn_mask = self.create_mask(H, W).to(x.device) - else: - attn_mask = None - - shortcut = x - x = self.norm1(x) - x = x.view(B, H, W, C) - - # cyclic shift - if self.shift_size > 0: - shifted_x = torch.roll(x, shifts=(-self.shift_size, -self.shift_size), dims=(1, 2)) - else: - shifted_x = x - - # partition windows - x_windows = window_partition(shifted_x, self.window_size) # nW*B, window_size, window_size, C - x_windows = x_windows.view(-1, self.window_size * self.window_size, C) # nW*B, window_size*window_size, C - - # W-MSA/SW-MSA - attn_windows = self.attn(x_windows, mask=attn_mask) # nW*B, window_size*window_size, C - - # merge windows - attn_windows = attn_windows.view(-1, self.window_size, self.window_size, C) 
- shifted_x = window_reverse(attn_windows, self.window_size, H, W) # B H' W' C - - # reverse cyclic shift - if self.shift_size > 0: - x = torch.roll(shifted_x, shifts=(self.shift_size, self.shift_size), dims=(1, 2)) - else: - x = shifted_x - x = x.view(B, H * W, C) - - # FFN - x = shortcut + self.drop_path(x) - x = x + self.drop_path(self.mlp(self.norm2(x))) - - x = x.permute(0, 2, 1).contiguous().view(-1, C, H, W) # b c h w - - if Padding: - x = x[:, :, :H_, :W_] # reverse padding - - return x - - -class SwinTransformerBlock(nn.Module): - def __init__(self, c1, c2, num_heads, num_layers, window_size=8): - super().__init__() - self.conv = None - if c1 != c2: - self.conv = Conv(c1, c2) - - # remove input_resolution - self.blocks = nn.Sequential(*[SwinTransformerLayer(dim=c2, num_heads=num_heads, window_size=window_size, - shift_size=0 if (i % 2 == 0) else window_size // 2) for i in range(num_layers)]) - - def forward(self, x): - if self.conv is not None: - x = self.conv(x) - x = self.blocks(x) - return x - - -class STCSPA(nn.Module): - # CSP Bottleneck https://github.com/WongKinYiu/CrossStagePartialNetworks - def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5): # ch_in, ch_out, number, shortcut, groups, expansion - super(STCSPA, self).__init__() - c_ = int(c2 * e) # hidden channels - self.cv1 = Conv(c1, c_, 1, 1) - self.cv2 = Conv(c1, c_, 1, 1) - self.cv3 = Conv(2 * c_, c2, 1, 1) - num_heads = c_ // 32 - self.m = SwinTransformerBlock(c_, c_, num_heads, n) - #self.m = nn.Sequential(*[Bottleneck(c_, c_, shortcut, g, e=1.0) for _ in range(n)]) - - def forward(self, x): - y1 = self.m(self.cv1(x)) - y2 = self.cv2(x) - return self.cv3(torch.cat((y1, y2), dim=1)) - - -class STCSPB(nn.Module): - # CSP Bottleneck https://github.com/WongKinYiu/CrossStagePartialNetworks - def __init__(self, c1, c2, n=1, shortcut=False, g=1, e=0.5): # ch_in, ch_out, number, shortcut, groups, expansion - super(STCSPB, self).__init__() - c_ = int(c2) # hidden channels - self.cv1 = Conv(c1, c_, 1, 1) - self.cv2 = Conv(c_, c_, 1, 1) - self.cv3 = Conv(2 * c_, c2, 1, 1) - num_heads = c_ // 32 - self.m = SwinTransformerBlock(c_, c_, num_heads, n) - #self.m = nn.Sequential(*[Bottleneck(c_, c_, shortcut, g, e=1.0) for _ in range(n)]) - - def forward(self, x): - x1 = self.cv1(x) - y1 = self.m(x1) - y2 = self.cv2(x1) - return self.cv3(torch.cat((y1, y2), dim=1)) - - -class STCSPC(nn.Module): - # CSP Bottleneck https://github.com/WongKinYiu/CrossStagePartialNetworks - def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5): # ch_in, ch_out, number, shortcut, groups, expansion - super(STCSPC, self).__init__() - c_ = int(c2 * e) # hidden channels - self.cv1 = Conv(c1, c_, 1, 1) - self.cv2 = Conv(c1, c_, 1, 1) - self.cv3 = Conv(c_, c_, 1, 1) - self.cv4 = Conv(2 * c_, c2, 1, 1) - num_heads = c_ // 32 - self.m = SwinTransformerBlock(c_, c_, num_heads, n) - #self.m = nn.Sequential(*[Bottleneck(c_, c_, shortcut, g, e=1.0) for _ in range(n)]) - - def forward(self, x): - y1 = self.cv3(self.m(self.cv1(x))) - y2 = self.cv2(x) - return self.cv4(torch.cat((y1, y2), dim=1)) - -##### end of swin transformer ##### - - -##### swin transformer v2 ##### - -class WindowAttention_v2(nn.Module): - - def __init__(self, dim, window_size, num_heads, qkv_bias=True, attn_drop=0., proj_drop=0., - pretrained_window_size=[0, 0]): - - super().__init__() - self.dim = dim - self.window_size = window_size # Wh, Ww - self.pretrained_window_size = pretrained_window_size - self.num_heads = num_heads - - self.logit_scale = nn.Parameter(torch.log(10 * 
torch.ones((num_heads, 1, 1))), requires_grad=True) - - # mlp to generate continuous relative position bias - self.cpb_mlp = nn.Sequential(nn.Linear(2, 512, bias=True), - nn.ReLU(inplace=True), - nn.Linear(512, num_heads, bias=False)) - - # get relative_coords_table - relative_coords_h = torch.arange(-(self.window_size[0] - 1), self.window_size[0], dtype=torch.float32) - relative_coords_w = torch.arange(-(self.window_size[1] - 1), self.window_size[1], dtype=torch.float32) - relative_coords_table = torch.stack( - torch.meshgrid([relative_coords_h, - relative_coords_w])).permute(1, 2, 0).contiguous().unsqueeze(0) # 1, 2*Wh-1, 2*Ww-1, 2 - if pretrained_window_size[0] > 0: - relative_coords_table[:, :, :, 0] /= (pretrained_window_size[0] - 1) - relative_coords_table[:, :, :, 1] /= (pretrained_window_size[1] - 1) - else: - relative_coords_table[:, :, :, 0] /= (self.window_size[0] - 1) - relative_coords_table[:, :, :, 1] /= (self.window_size[1] - 1) - relative_coords_table *= 8 # normalize to -8, 8 - relative_coords_table = torch.sign(relative_coords_table) * torch.log2( - torch.abs(relative_coords_table) + 1.0) / np.log2(8) - - self.register_buffer("relative_coords_table", relative_coords_table) - - # get pair-wise relative position index for each token inside the window - coords_h = torch.arange(self.window_size[0]) - coords_w = torch.arange(self.window_size[1]) - coords = torch.stack(torch.meshgrid([coords_h, coords_w])) # 2, Wh, Ww - coords_flatten = torch.flatten(coords, 1) # 2, Wh*Ww - relative_coords = coords_flatten[:, :, None] - coords_flatten[:, None, :] # 2, Wh*Ww, Wh*Ww - relative_coords = relative_coords.permute(1, 2, 0).contiguous() # Wh*Ww, Wh*Ww, 2 - relative_coords[:, :, 0] += self.window_size[0] - 1 # shift to start from 0 - relative_coords[:, :, 1] += self.window_size[1] - 1 - relative_coords[:, :, 0] *= 2 * self.window_size[1] - 1 - relative_position_index = relative_coords.sum(-1) # Wh*Ww, Wh*Ww - self.register_buffer("relative_position_index", relative_position_index) - - self.qkv = nn.Linear(dim, dim * 3, bias=False) - if qkv_bias: - self.q_bias = nn.Parameter(torch.zeros(dim)) - self.v_bias = nn.Parameter(torch.zeros(dim)) - else: - self.q_bias = None - self.v_bias = None - self.attn_drop = nn.Dropout(attn_drop) - self.proj = nn.Linear(dim, dim) - self.proj_drop = nn.Dropout(proj_drop) - self.softmax = nn.Softmax(dim=-1) - - def forward(self, x, mask=None): - - B_, N, C = x.shape - qkv_bias = None - if self.q_bias is not None: - qkv_bias = torch.cat((self.q_bias, torch.zeros_like(self.v_bias, requires_grad=False), self.v_bias)) - qkv = F.linear(input=x, weight=self.qkv.weight, bias=qkv_bias) - qkv = qkv.reshape(B_, N, 3, self.num_heads, -1).permute(2, 0, 3, 1, 4) - q, k, v = qkv[0], qkv[1], qkv[2] # make torchscript happy (cannot use tensor as tuple) - - # cosine attention - attn = (F.normalize(q, dim=-1) @ F.normalize(k, dim=-1).transpose(-2, -1)) - logit_scale = torch.clamp(self.logit_scale, max=torch.log(torch.tensor(1. 
/ 0.01))).exp() - attn = attn * logit_scale - - relative_position_bias_table = self.cpb_mlp(self.relative_coords_table).view(-1, self.num_heads) - relative_position_bias = relative_position_bias_table[self.relative_position_index.view(-1)].view( - self.window_size[0] * self.window_size[1], self.window_size[0] * self.window_size[1], -1) # Wh*Ww,Wh*Ww,nH - relative_position_bias = relative_position_bias.permute(2, 0, 1).contiguous() # nH, Wh*Ww, Wh*Ww - relative_position_bias = 16 * torch.sigmoid(relative_position_bias) - attn = attn + relative_position_bias.unsqueeze(0) - - if mask is not None: - nW = mask.shape[0] - attn = attn.view(B_ // nW, nW, self.num_heads, N, N) + mask.unsqueeze(1).unsqueeze(0) - attn = attn.view(-1, self.num_heads, N, N) - attn = self.softmax(attn) - else: - attn = self.softmax(attn) - - attn = self.attn_drop(attn) - - try: - x = (attn @ v).transpose(1, 2).reshape(B_, N, C) - except: - x = (attn.half() @ v).transpose(1, 2).reshape(B_, N, C) - - x = self.proj(x) - x = self.proj_drop(x) - return x - - def extra_repr(self) -> str: - return f'dim={self.dim}, window_size={self.window_size}, ' \ - f'pretrained_window_size={self.pretrained_window_size}, num_heads={self.num_heads}' - - def flops(self, N): - # calculate flops for 1 window with token length of N - flops = 0 - # qkv = self.qkv(x) - flops += N * self.dim * 3 * self.dim - # attn = (q @ k.transpose(-2, -1)) - flops += self.num_heads * N * (self.dim // self.num_heads) * N - # x = (attn @ v) - flops += self.num_heads * N * N * (self.dim // self.num_heads) - # x = self.proj(x) - flops += N * self.dim * self.dim - return flops - -class Mlp_v2(nn.Module): - def __init__(self, in_features, hidden_features=None, out_features=None, act_layer=nn.SiLU, drop=0.): - super().__init__() - out_features = out_features or in_features - hidden_features = hidden_features or in_features - self.fc1 = nn.Linear(in_features, hidden_features) - self.act = act_layer() - self.fc2 = nn.Linear(hidden_features, out_features) - self.drop = nn.Dropout(drop) - - def forward(self, x): - x = self.fc1(x) - x = self.act(x) - x = self.drop(x) - x = self.fc2(x) - x = self.drop(x) - return x - - -def window_partition_v2(x, window_size): - - B, H, W, C = x.shape - x = x.view(B, H // window_size, window_size, W // window_size, window_size, C) - windows = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(-1, window_size, window_size, C) - return windows - - -def window_reverse_v2(windows, window_size, H, W): - - B = int(windows.shape[0] / (H * W / window_size / window_size)) - x = windows.view(B, H // window_size, W // window_size, window_size, window_size, -1) - x = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(B, H, W, -1) - return x - - -class SwinTransformerLayer_v2(nn.Module): - - def __init__(self, dim, num_heads, window_size=7, shift_size=0, - mlp_ratio=4., qkv_bias=True, drop=0., attn_drop=0., drop_path=0., - act_layer=nn.SiLU, norm_layer=nn.LayerNorm, pretrained_window_size=0): - super().__init__() - self.dim = dim - #self.input_resolution = input_resolution - self.num_heads = num_heads - self.window_size = window_size - self.shift_size = shift_size - self.mlp_ratio = mlp_ratio - #if min(self.input_resolution) <= self.window_size: - # # if window size is larger than input resolution, we don't partition windows - # self.shift_size = 0 - # self.window_size = min(self.input_resolution) - assert 0 <= self.shift_size < self.window_size, "shift_size must in 0-window_size" - - self.norm1 = norm_layer(dim) - self.attn = WindowAttention_v2( - dim, 
window_size=(self.window_size, self.window_size), num_heads=num_heads, - qkv_bias=qkv_bias, attn_drop=attn_drop, proj_drop=drop, - pretrained_window_size=(pretrained_window_size, pretrained_window_size)) - - self.drop_path = DropPath(drop_path) if drop_path > 0. else nn.Identity() - self.norm2 = norm_layer(dim) - mlp_hidden_dim = int(dim * mlp_ratio) - self.mlp = Mlp_v2(in_features=dim, hidden_features=mlp_hidden_dim, act_layer=act_layer, drop=drop) - - def create_mask(self, H, W): - # calculate attention mask for SW-MSA - img_mask = torch.zeros((1, H, W, 1)) # 1 H W 1 - h_slices = (slice(0, -self.window_size), - slice(-self.window_size, -self.shift_size), - slice(-self.shift_size, None)) - w_slices = (slice(0, -self.window_size), - slice(-self.window_size, -self.shift_size), - slice(-self.shift_size, None)) - cnt = 0 - for h in h_slices: - for w in w_slices: - img_mask[:, h, w, :] = cnt - cnt += 1 - - mask_windows = window_partition(img_mask, self.window_size) # nW, window_size, window_size, 1 - mask_windows = mask_windows.view(-1, self.window_size * self.window_size) - attn_mask = mask_windows.unsqueeze(1) - mask_windows.unsqueeze(2) - attn_mask = attn_mask.masked_fill(attn_mask != 0, float(-100.0)).masked_fill(attn_mask == 0, float(0.0)) - - return attn_mask - - def forward(self, x): - # reshape x[b c h w] to x[b l c] - _, _, H_, W_ = x.shape - - Padding = False - if min(H_, W_) < self.window_size or H_ % self.window_size!=0 or W_ % self.window_size!=0: - Padding = True - # print(f'img_size {min(H_, W_)} is less than (or not divided by) window_size {self.window_size}, Padding.') - pad_r = (self.window_size - W_ % self.window_size) % self.window_size - pad_b = (self.window_size - H_ % self.window_size) % self.window_size - x = F.pad(x, (0, pad_r, 0, pad_b)) - - # print('2', x.shape) - B, C, H, W = x.shape - L = H * W - x = x.permute(0, 2, 3, 1).contiguous().view(B, L, C) # b, L, c - - # create mask from init to forward - if self.shift_size > 0: - attn_mask = self.create_mask(H, W).to(x.device) - else: - attn_mask = None - - shortcut = x - x = x.view(B, H, W, C) - - # cyclic shift - if self.shift_size > 0: - shifted_x = torch.roll(x, shifts=(-self.shift_size, -self.shift_size), dims=(1, 2)) - else: - shifted_x = x - - # partition windows - x_windows = window_partition_v2(shifted_x, self.window_size) # nW*B, window_size, window_size, C - x_windows = x_windows.view(-1, self.window_size * self.window_size, C) # nW*B, window_size*window_size, C - - # W-MSA/SW-MSA - attn_windows = self.attn(x_windows, mask=attn_mask) # nW*B, window_size*window_size, C - - # merge windows - attn_windows = attn_windows.view(-1, self.window_size, self.window_size, C) - shifted_x = window_reverse_v2(attn_windows, self.window_size, H, W) # B H' W' C - - # reverse cyclic shift - if self.shift_size > 0: - x = torch.roll(shifted_x, shifts=(self.shift_size, self.shift_size), dims=(1, 2)) - else: - x = shifted_x - x = x.view(B, H * W, C) - x = shortcut + self.drop_path(self.norm1(x)) - - # FFN - x = x + self.drop_path(self.norm2(self.mlp(x))) - x = x.permute(0, 2, 1).contiguous().view(-1, C, H, W) # b c h w - - if Padding: - x = x[:, :, :H_, :W_] # reverse padding - - return x - - def extra_repr(self) -> str: - return f"dim={self.dim}, input_resolution={self.input_resolution}, num_heads={self.num_heads}, " \ - f"window_size={self.window_size}, shift_size={self.shift_size}, mlp_ratio={self.mlp_ratio}" - - def flops(self): - flops = 0 - H, W = self.input_resolution - # norm1 - flops += self.dim * H * W - # W-MSA/SW-MSA - 
nW = H * W / self.window_size / self.window_size - flops += nW * self.attn.flops(self.window_size * self.window_size) - # mlp - flops += 2 * H * W * self.dim * self.dim * self.mlp_ratio - # norm2 - flops += self.dim * H * W - return flops - - -class SwinTransformer2Block(nn.Module): - def __init__(self, c1, c2, num_heads, num_layers, window_size=7): - super().__init__() - self.conv = None - if c1 != c2: - self.conv = Conv(c1, c2) - - # remove input_resolution - self.blocks = nn.Sequential(*[SwinTransformerLayer_v2(dim=c2, num_heads=num_heads, window_size=window_size, - shift_size=0 if (i % 2 == 0) else window_size // 2) for i in range(num_layers)]) - - def forward(self, x): - if self.conv is not None: - x = self.conv(x) - x = self.blocks(x) - return x - - -class ST2CSPA(nn.Module): - # CSP Bottleneck https://github.com/WongKinYiu/CrossStagePartialNetworks - def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5): # ch_in, ch_out, number, shortcut, groups, expansion - super(ST2CSPA, self).__init__() - c_ = int(c2 * e) # hidden channels - self.cv1 = Conv(c1, c_, 1, 1) - self.cv2 = Conv(c1, c_, 1, 1) - self.cv3 = Conv(2 * c_, c2, 1, 1) - num_heads = c_ // 32 - self.m = SwinTransformer2Block(c_, c_, num_heads, n) - #self.m = nn.Sequential(*[Bottleneck(c_, c_, shortcut, g, e=1.0) for _ in range(n)]) - - def forward(self, x): - y1 = self.m(self.cv1(x)) - y2 = self.cv2(x) - return self.cv3(torch.cat((y1, y2), dim=1)) - - -class ST2CSPB(nn.Module): - # CSP Bottleneck https://github.com/WongKinYiu/CrossStagePartialNetworks - def __init__(self, c1, c2, n=1, shortcut=False, g=1, e=0.5): # ch_in, ch_out, number, shortcut, groups, expansion - super(ST2CSPB, self).__init__() - c_ = int(c2) # hidden channels - self.cv1 = Conv(c1, c_, 1, 1) - self.cv2 = Conv(c_, c_, 1, 1) - self.cv3 = Conv(2 * c_, c2, 1, 1) - num_heads = c_ // 32 - self.m = SwinTransformer2Block(c_, c_, num_heads, n) - #self.m = nn.Sequential(*[Bottleneck(c_, c_, shortcut, g, e=1.0) for _ in range(n)]) - - def forward(self, x): - x1 = self.cv1(x) - y1 = self.m(x1) - y2 = self.cv2(x1) - return self.cv3(torch.cat((y1, y2), dim=1)) - - -class ST2CSPC(nn.Module): - # CSP Bottleneck https://github.com/WongKinYiu/CrossStagePartialNetworks - def __init__(self, c1, c2, n=1, shortcut=True, g=1, e=0.5): # ch_in, ch_out, number, shortcut, groups, expansion - super(ST2CSPC, self).__init__() - c_ = int(c2 * e) # hidden channels - self.cv1 = Conv(c1, c_, 1, 1) - self.cv2 = Conv(c1, c_, 1, 1) - self.cv3 = Conv(c_, c_, 1, 1) - self.cv4 = Conv(2 * c_, c2, 1, 1) - num_heads = c_ // 32 - self.m = SwinTransformer2Block(c_, c_, num_heads, n) - #self.m = nn.Sequential(*[Bottleneck(c_, c_, shortcut, g, e=1.0) for _ in range(n)]) - - def forward(self, x): - y1 = self.cv3(self.m(self.cv1(x))) - y2 = self.cv2(x) - return self.cv4(torch.cat((y1, y2), dim=1)) - -##### end of swin transformer v2 ##### diff --git a/spaces/justest/gpt4free/g4f/Provider/Providers/Forefront.py b/spaces/justest/gpt4free/g4f/Provider/Providers/Forefront.py deleted file mode 100644 index e7e89831cc4ec6dc37ea094d9828a7582e981ff1..0000000000000000000000000000000000000000 --- a/spaces/justest/gpt4free/g4f/Provider/Providers/Forefront.py +++ /dev/null @@ -1,30 +0,0 @@ -import os -import json -import requests -from ...typing import sha256, Dict, get_type_hints - -url = 'https://forefront.com' -model = ['gpt-3.5-turbo'] -supports_stream = True -needs_auth = False - -def _create_completion(model: str, messages: list, stream: bool, **kwargs): - json_data = { - 'text': messages[-1]['content'], - 
'action': 'noauth', - 'id': '', - 'parentId': '', - 'workspaceId': '', - 'messagePersona': '607e41fe-95be-497e-8e97-010a59b2e2c0', - 'model': 'gpt-4', - 'messages': messages[:-1] if len(messages) > 1 else [], - 'internetMode': 'auto' - } - response = requests.post( 'https://streaming.tenant-forefront-default.knative.chi.coreweave.com/free-chat', - json=json_data, stream=True) - for token in response.iter_lines(): - if b'delta' in token: - token = json.loads(token.decode().split('data: ')[1])['delta'] - yield (token) -params = f'g4f.Providers.{os.path.basename(__file__)[:-3]} supports: ' + \ - '(%s)' % ', '.join([f"{name}: {get_type_hints(_create_completion)[name].__name__}" for name in _create_completion.__code__.co_varnames[:_create_completion.__code__.co_argcount]]) \ No newline at end of file diff --git a/spaces/justest/gpt4free/testing/interference_test.py b/spaces/justest/gpt4free/testing/interference_test.py deleted file mode 100644 index e7a780d526e0ccbda8f3127d818e81a9b1ba231f..0000000000000000000000000000000000000000 --- a/spaces/justest/gpt4free/testing/interference_test.py +++ /dev/null @@ -1,15 +0,0 @@ -import openai - -openai.api_key = '' -openai.api_base = 'http://localhost:1337' - -chat_completion = openai.ChatCompletion.create(stream=True, - model='gpt-3.5-turbo', messages=[{'role': 'user', 'content': 'write a poem about a tree'}]) - -#print(chat_completion.choices[0].message.content) - -for token in chat_completion: - - content = token['choices'][0]['delta'].get('content') - if content != None: - print(content) \ No newline at end of file diff --git a/spaces/jvcanavarro/traits-prediction/app.py b/spaces/jvcanavarro/traits-prediction/app.py deleted file mode 100644 index a20497ccc43a6fb0b5c4c5b4b0a6da55413b1aee..0000000000000000000000000000000000000000 --- a/spaces/jvcanavarro/traits-prediction/app.py +++ /dev/null @@ -1,55 +0,0 @@ -import gradio as gr - -from src.core import load_model, predict_traits - -TRAIT_NAMES = [ - "Extraversion", - "Agreeableness", - "Conscientiousness", - "Neurotisicm", - "Openness", -] - -DESCRIPTION = [ - "**Extraversion**: outgoing, energetic, talkative, active, assertive, etc.", - "**Neuroticism**: worrying, self-pitying, unstable, tense, anxious, etc.", - "**Agreeableness**: sympathetic, forgiving, generous, kind, appreciative, etc.", - "**Conscientiousness**: responsible, organized, reliable, efficient, planful, etc.", - "**Openness**: artistic, curious, imaginative, insightful, original, wide interests, etc.", -] - - -def get_traits(video): - model = load_model() - trait_values = predict_traits(video, model) - return {k: float(v) for k, v in zip(TRAIT_NAMES, trait_values)} - - -params = dict( - description="Predicts the Big-Five psychology traits of a person based an short introduction video. 
Adapted from [Deep Impression: Audiovisual Deep Residual Networks for Multimodal Apparent Personality Trait Recognition](https://arxiv.org/abs/1609.05119).", - article=" ".join(DESCRIPTION), - thumbnail="https://cdn-icons-png.flaticon.com/512/3392/3392044.png", -) - -primary_interface = gr.Interface( - get_traits, - inputs=gr.Video(label="Video", include_audio=True), - outputs=gr.Label(num_top_classes=5, label="Results"), - examples="egs", - cache_examples=True, - **params, -) - -second_interface = gr.Interface( - get_traits, - inputs=gr.Video(label="Webcam", include_audio=True, source="webcam"), - outputs=gr.Label(num_top_classes=5, label="Results"), - **params, -) - -app = gr.TabbedInterface( - [primary_interface, second_interface], - title="Personality Traits Prediction 📑", - tab_names=["Video Upload", "Webcam"], -) -app.launch() diff --git a/spaces/jyseo/3DFuse/my/utils/seed.py b/spaces/jyseo/3DFuse/my/utils/seed.py deleted file mode 100644 index e3e81fad6c7610d11ec8d847f9a61a4e6675ecc4..0000000000000000000000000000000000000000 --- a/spaces/jyseo/3DFuse/my/utils/seed.py +++ /dev/null @@ -1,21 +0,0 @@ -# from pytorch lightning -import random -import numpy as np -import torch - -max_seed_value = np.iinfo(np.uint32).max -min_seed_value = np.iinfo(np.uint32).min - - -def seed_everything(seed=None): - seed = int(seed) - - if not (min_seed_value <= seed <= max_seed_value): - raise ValueError(f"{seed} is not in bounds, numpy accepts from {min_seed_value} to {max_seed_value}") - - print(f"seed set to {seed}") - random.seed(seed) - np.random.seed(seed) - torch.manual_seed(seed) - torch.cuda.manual_seed_all(seed) - return seed diff --git a/spaces/kastan/ai-teaching-assistant/clip_for_ppts.py b/spaces/kastan/ai-teaching-assistant/clip_for_ppts.py deleted file mode 100644 index ec73ad6d3098440f2e9320d0e56187de2dd6b579..0000000000000000000000000000000000000000 --- a/spaces/kastan/ai-teaching-assistant/clip_for_ppts.py +++ /dev/null @@ -1,158 +0,0 @@ -import os - -import clip -import torch -from PIL import Image - -# import sys -# from pptx import Presentation -# from pptx.enum.shapes import MSO_SHAPE_TYPE -# import time - - -class ClipImage: - - def __init__(self, path_of_ppt_folders, path_to_save_image_features, mode='image', device='cuda'): - """ - :param input_image_path: path of the input image (mode = 'image') or the actual text to be searched (mode='text') - :param path_of_ppt_folders: path of the folder containing all the ppt folders - :param path_to_save_image_features: path to save the image features - :param mode: 'image' or 'text' based on the type of input - :param device: device to run the model on - """ - print("HEADS UPP -- ALWAYS using CPU for this 'spaces' version of the project. Otherwise we get FP32/16 conflicts.") - # device = "cuda" if torch.cuda.is_available() else "cpu" - device = "cpu" - # Path - directory = 'input_features' - path = os.path.join(path_to_save_image_features, directory) - if not os.path.exists(path): - # Create the directory - os.mkdir(path) - print("Directory '% s' created" % directory) - - self.res = [] - if not os.path.isdir(path_of_ppt_folders): - raise TypeError(f"{path_of_ppt_folders} is not a directory. 
Please only enter a directory") - - # if mode == 'image' and not os.path.exists(input_image_path): - # raise FileNotFoundError(f"{input_image_path} does not exist.") - if not os.path.exists(path_to_save_image_features) or not os.path.isdir(path_to_save_image_features): - raise FileNotFoundError(f"{path_to_save_image_features} is not a directory or doesn't exist.") - self.mode = mode - self.path_of_ppt_folders = path_of_ppt_folders - self.path_to_save_image_features = path_to_save_image_features - self.device = device - - # consider ViT-L/14 should be the best one - self.model, self.preprocess = clip.load('ViT-B/32', self.device) - - #print("👉 RUNNING CLIP'S ONE-TIME ENCODING STEP... will be slow the first time, and hopefully only the first time.") - # passing in an image as a cheap hack, to make one funciton work for initial embedding. - #self.calculate_similarity('/home/rsalvi/chatbotai/rohan/ai-teaching-assistant-uiuc/lecture_slides/001/Slide1.jpeg') - #print("🔥 DONE with CLIP's ONE TIME ENCODING") - - def text_to_image_search(self, search_text: str, top_k_to_return: int = 4): - """ Written after the fact by kastan, so that we don't have to call init every time. """ - assert type(search_text) == str, f"Must provide a single string, instead I got type {type(search_text)}" - # self.create_input_features(search_text, mode='text') - self.mode = 'text' - return self.calculate_similarity(search_text, top_k_to_return) - - # TODO: WIP. - def image_to_images_search(self, input_image, top_k_to_return: int = 4): - """ Written after the fact by kastan, so that we don't have to call init every time. """ - self.mode = 'image' - return self.calculate_similarity(input_image, top_k_to_return) - - def create_input_features(self, input_text_or_img): - if self.mode == 'image': - # Load the image - #input_image = Image.open(input_text_or_img) # Not needed as image comes from gradio in PIL format - # Preprocess the image - input_arr = torch.cat([self.preprocess(input_text_or_img).unsqueeze(0)]).to(self.device) - - elif self.mode == 'text': - # Preprocess the text - input_arr = torch.cat([clip.tokenize(f"{input_text_or_img}", truncate=True)]).to(self.device) - - # Encode the image or text - with torch.no_grad(): - if self.mode == 'image': - input_features = self.model.encode_image(input_arr) - elif self.mode == 'text': - input_features = self.model.encode_text(input_arr) - input_features /= input_features.norm(dim=-1, keepdim=True) - return input_features - - def new_most_similar_slide_file(self, top_k: int): - # Sort the results - ans = sorted(self.res, key=lambda x: x[2], reverse=True) - return ans[:top_k] - - def calculate_similarity(self, input_text_or_img, topk_val: int = 4): - ## Similarities across folders - self.res = [] - all_similarities = [] - slide_numbers = [] - # Create the input features - input_features = self.create_input_features(input_text_or_img) - - # Iterate through all the folders - ppts = list(os.listdir(self.path_of_ppt_folders)) - #start_time = time.monotonic() - for i in ppts: - # Get the path of the folder containing the ppt images - imgs = list(os.listdir(os.path.join(self.path_of_ppt_folders, i))) - slide_numbers.append(imgs) - # Iterate through all the images and preprocess them - - # Check if the preprocessed file exists and load it - img_flag = os.path.exists(self.path_to_save_image_features + '/input_features' + "/slides_" + i + "_tensor.pt") - if img_flag: - image_features = torch.load(self.path_to_save_image_features + '/input_features' + "/slides_" + i + "_tensor.pt", - 
map_location=self.device) - else: - # Encode the images and save the encoding - with torch.no_grad(): - image_input = torch.cat([ - self.preprocess(Image.open(os.path.join(self.path_of_ppt_folders, i, image))).unsqueeze(0) for image in imgs - ]).to(self.device) - image_features = self.model.encode_image(image_input) - image_features /= image_features.norm(dim=-1, keepdim=True) - torch.save(image_features, self.path_to_save_image_features + '/input_features' + "/slides_" + i + "_tensor.pt") - print("Saved the image features (for faster future loading) to: ", self.path_to_save_image_features + "/slides_" + i + "_tensor.pt") - - # Calculate the similarity between the input image and the images in the folder - - # TODO: THIS REQUIRES REFACTOR. We're only looking in a SINGLE FOLDER. need to APPEND to similarity. - if self.mode == 'image': - similarity = (100.0 * input_features @ image_features.T).softmax(dim=-1) - all_similarities.append((i, similarity)) - elif self.mode == 'text': - similarity = (100.0 * input_features @ image_features.T).softmax(dim=-1) - all_similarities.append((i, similarity)) - - ## Looking over all the folders - similarity_results = [] - - for j in range(0, len(all_similarities)): - folder_name = all_similarities[j][0] - folder_values = all_similarities[j][1][0] - for i in range(0, len(folder_values)): - self.res.append((folder_name, slide_numbers[j][i], folder_values[i])) - - #print(self.res) - - return self.new_most_similar_slide_file(topk_val) - # Return the sorted results - - -# if __name__ == "__main__": - -# demo = ClipImage('/home/rsalvi/chatbotai/rohan/ai-teaching-assistant-uiuc/lecture_slides','/home/rsalvi/chatbotai/rohan/ai-teaching-assistant-uiuc') -# #op = demo.image_to_images_search('/home/rsalvi/chatbotai/rohan/ai-teaching-assistant-uiuc/lecture_slides/01c/Slide5.jpeg') -# op = demo.text_to_image_search("Unsigned Bit Pattern") -# print(op) -# op = demo.text_to_image_search("Graycode") -# print(op) \ No newline at end of file diff --git a/spaces/kcagle/AutoGPT/tests/milvus_memory_test.py b/spaces/kcagle/AutoGPT/tests/milvus_memory_test.py deleted file mode 100644 index 84fd6e6d5006e781fa5e1065f949b2160537d913..0000000000000000000000000000000000000000 --- a/spaces/kcagle/AutoGPT/tests/milvus_memory_test.py +++ /dev/null @@ -1,72 +0,0 @@ -# sourcery skip: snake-case-functions -"""Tests for the MilvusMemory class.""" -import os -import sys -import unittest - -try: - from autogpt.memory.milvus import MilvusMemory - - def mock_config() -> dict: - """Mock the Config class""" - return type( - "MockConfig", - (object,), - { - "debug_mode": False, - "continuous_mode": False, - "speak_mode": False, - "milvus_collection": "autogpt", - "milvus_addr": "localhost:19530", - }, - ) - - class TestMilvusMemory(unittest.TestCase): - """Tests for the MilvusMemory class.""" - - def setUp(self) -> None: - """Set up the test environment""" - self.cfg = mock_config() - self.memory = MilvusMemory(self.cfg) - - def test_add(self) -> None: - """Test adding a text to the cache""" - text = "Sample text" - self.memory.clear() - self.memory.add(text) - result = self.memory.get(text) - self.assertEqual([text], result) - - def test_clear(self) -> None: - """Test clearing the cache""" - self.memory.clear() - self.assertEqual(self.memory.collection.num_entities, 0) - - def test_get(self) -> None: - """Test getting a text from the cache""" - text = "Sample text" - self.memory.clear() - self.memory.add(text) - result = self.memory.get(text) - self.assertEqual(result, [text]) - - def 
test_get_relevant(self) -> None: - """Test getting relevant texts from the cache""" - text1 = "Sample text 1" - text2 = "Sample text 2" - self.memory.clear() - self.memory.add(text1) - self.memory.add(text2) - result = self.memory.get_relevant(text1, 1) - self.assertEqual(result, [text1]) - - def test_get_stats(self) -> None: - """Test getting the cache stats""" - text = "Sample text" - self.memory.clear() - self.memory.add(text) - stats = self.memory.get_stats() - self.assertEqual(15, len(stats)) - -except: - print("Milvus not installed, skipping tests") diff --git a/spaces/keithhon/Real-Time-Voice-Cloning/README.md b/spaces/keithhon/Real-Time-Voice-Cloning/README.md deleted file mode 100644 index 54a1ff6c185f4a025bb31ff1ab4bc79eac1a1937..0000000000000000000000000000000000000000 --- a/spaces/keithhon/Real-Time-Voice-Cloning/README.md +++ /dev/null @@ -1,37 +0,0 @@ ---- -title: Real Time Voice Cloning -emoji: 📈 -colorFrom: blue -colorTo: red -sdk: gradio -app_file: app.py -pinned: false ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio` or `streamlit` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code). -Path is relative to the root of the repository. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. 
diff --git a/spaces/kevinwang676/ChatGLM2-VC-SadTalker/src/facerender/modules/make_animation.py b/spaces/kevinwang676/ChatGLM2-VC-SadTalker/src/facerender/modules/make_animation.py deleted file mode 100644 index 3360c53501a064f35d7db21a5361f89aa9658b42..0000000000000000000000000000000000000000 --- a/spaces/kevinwang676/ChatGLM2-VC-SadTalker/src/facerender/modules/make_animation.py +++ /dev/null @@ -1,170 +0,0 @@ -from scipy.spatial import ConvexHull -import torch -import torch.nn.functional as F -import numpy as np -from tqdm import tqdm - -def normalize_kp(kp_source, kp_driving, kp_driving_initial, adapt_movement_scale=False, - use_relative_movement=False, use_relative_jacobian=False): - if adapt_movement_scale: - source_area = ConvexHull(kp_source['value'][0].data.cpu().numpy()).volume - driving_area = ConvexHull(kp_driving_initial['value'][0].data.cpu().numpy()).volume - adapt_movement_scale = np.sqrt(source_area) / np.sqrt(driving_area) - else: - adapt_movement_scale = 1 - - kp_new = {k: v for k, v in kp_driving.items()} - - if use_relative_movement: - kp_value_diff = (kp_driving['value'] - kp_driving_initial['value']) - kp_value_diff *= adapt_movement_scale - kp_new['value'] = kp_value_diff + kp_source['value'] - - if use_relative_jacobian: - jacobian_diff = torch.matmul(kp_driving['jacobian'], torch.inverse(kp_driving_initial['jacobian'])) - kp_new['jacobian'] = torch.matmul(jacobian_diff, kp_source['jacobian']) - - return kp_new - -def headpose_pred_to_degree(pred): - device = pred.device - idx_tensor = [idx for idx in range(66)] - idx_tensor = torch.FloatTensor(idx_tensor).type_as(pred).to(device) - pred = F.softmax(pred) - degree = torch.sum(pred*idx_tensor, 1) * 3 - 99 - return degree - -def get_rotation_matrix(yaw, pitch, roll): - yaw = yaw / 180 * 3.14 - pitch = pitch / 180 * 3.14 - roll = roll / 180 * 3.14 - - roll = roll.unsqueeze(1) - pitch = pitch.unsqueeze(1) - yaw = yaw.unsqueeze(1) - - pitch_mat = torch.cat([torch.ones_like(pitch), torch.zeros_like(pitch), torch.zeros_like(pitch), - torch.zeros_like(pitch), torch.cos(pitch), -torch.sin(pitch), - torch.zeros_like(pitch), torch.sin(pitch), torch.cos(pitch)], dim=1) - pitch_mat = pitch_mat.view(pitch_mat.shape[0], 3, 3) - - yaw_mat = torch.cat([torch.cos(yaw), torch.zeros_like(yaw), torch.sin(yaw), - torch.zeros_like(yaw), torch.ones_like(yaw), torch.zeros_like(yaw), - -torch.sin(yaw), torch.zeros_like(yaw), torch.cos(yaw)], dim=1) - yaw_mat = yaw_mat.view(yaw_mat.shape[0], 3, 3) - - roll_mat = torch.cat([torch.cos(roll), -torch.sin(roll), torch.zeros_like(roll), - torch.sin(roll), torch.cos(roll), torch.zeros_like(roll), - torch.zeros_like(roll), torch.zeros_like(roll), torch.ones_like(roll)], dim=1) - roll_mat = roll_mat.view(roll_mat.shape[0], 3, 3) - - rot_mat = torch.einsum('bij,bjk,bkm->bim', pitch_mat, yaw_mat, roll_mat) - - return rot_mat - -def keypoint_transformation(kp_canonical, he, wo_exp=False): - kp = kp_canonical['value'] # (bs, k, 3) - yaw, pitch, roll= he['yaw'], he['pitch'], he['roll'] - yaw = headpose_pred_to_degree(yaw) - pitch = headpose_pred_to_degree(pitch) - roll = headpose_pred_to_degree(roll) - - if 'yaw_in' in he: - yaw = he['yaw_in'] - if 'pitch_in' in he: - pitch = he['pitch_in'] - if 'roll_in' in he: - roll = he['roll_in'] - - rot_mat = get_rotation_matrix(yaw, pitch, roll) # (bs, 3, 3) - - t, exp = he['t'], he['exp'] - if wo_exp: - exp = exp*0 - - # keypoint rotation - kp_rotated = torch.einsum('bmp,bkp->bkm', rot_mat, kp) - - # keypoint translation - t[:, 0] = t[:, 0]*0 - t[:, 2] = t[:, 2]*0 
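-    # note: the x- and z-translation components are zeroed out above, so only the vertical (y) translation predicted for the driving frame is applied to the keypoints (presumably to keep the rendered head centred and at a fixed depth)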
- t = t.unsqueeze(1).repeat(1, kp.shape[1], 1) - kp_t = kp_rotated + t - - # add expression deviation - exp = exp.view(exp.shape[0], -1, 3) - kp_transformed = kp_t + exp - - return {'value': kp_transformed} - - - -def make_animation(source_image, source_semantics, target_semantics, - generator, kp_detector, he_estimator, mapping, - yaw_c_seq=None, pitch_c_seq=None, roll_c_seq=None, - use_exp=True, use_half=False): - with torch.no_grad(): - predictions = [] - - kp_canonical = kp_detector(source_image) - he_source = mapping(source_semantics) - kp_source = keypoint_transformation(kp_canonical, he_source) - - for frame_idx in tqdm(range(target_semantics.shape[1]), 'Face Renderer:'): - # still check the dimension - # print(target_semantics.shape, source_semantics.shape) - target_semantics_frame = target_semantics[:, frame_idx] - he_driving = mapping(target_semantics_frame) - if yaw_c_seq is not None: - he_driving['yaw_in'] = yaw_c_seq[:, frame_idx] - if pitch_c_seq is not None: - he_driving['pitch_in'] = pitch_c_seq[:, frame_idx] - if roll_c_seq is not None: - he_driving['roll_in'] = roll_c_seq[:, frame_idx] - - kp_driving = keypoint_transformation(kp_canonical, he_driving) - - kp_norm = kp_driving - out = generator(source_image, kp_source=kp_source, kp_driving=kp_norm) - ''' - source_image_new = out['prediction'].squeeze(1) - kp_canonical_new = kp_detector(source_image_new) - he_source_new = he_estimator(source_image_new) - kp_source_new = keypoint_transformation(kp_canonical_new, he_source_new, wo_exp=True) - kp_driving_new = keypoint_transformation(kp_canonical_new, he_driving, wo_exp=True) - out = generator(source_image_new, kp_source=kp_source_new, kp_driving=kp_driving_new) - ''' - predictions.append(out['prediction']) - predictions_ts = torch.stack(predictions, dim=1) - return predictions_ts - -class AnimateModel(torch.nn.Module): - """ - Merge all generator related updates into single model for better multi-gpu usage - """ - - def __init__(self, generator, kp_extractor, mapping): - super(AnimateModel, self).__init__() - self.kp_extractor = kp_extractor - self.generator = generator - self.mapping = mapping - - self.kp_extractor.eval() - self.generator.eval() - self.mapping.eval() - - def forward(self, x): - - source_image = x['source_image'] - source_semantics = x['source_semantics'] - target_semantics = x['target_semantics'] - yaw_c_seq = x['yaw_c_seq'] - pitch_c_seq = x['pitch_c_seq'] - roll_c_seq = x['roll_c_seq'] - - predictions_video = make_animation(source_image, source_semantics, target_semantics, - self.generator, self.kp_extractor, - self.mapping, use_exp = True, - yaw_c_seq=yaw_c_seq, pitch_c_seq=pitch_c_seq, roll_c_seq=roll_c_seq) - - return predictions_video \ No newline at end of file diff --git a/spaces/kevinwang676/SadTalker/src/face3d/models/arcface_torch/dataset.py b/spaces/kevinwang676/SadTalker/src/face3d/models/arcface_torch/dataset.py deleted file mode 100644 index 96bbb8bb6da99122f350bc8e1a6390245840e32b..0000000000000000000000000000000000000000 --- a/spaces/kevinwang676/SadTalker/src/face3d/models/arcface_torch/dataset.py +++ /dev/null @@ -1,124 +0,0 @@ -import numbers -import os -import queue as Queue -import threading - -import mxnet as mx -import numpy as np -import torch -from torch.utils.data import DataLoader, Dataset -from torchvision import transforms - - -class BackgroundGenerator(threading.Thread): - def __init__(self, generator, local_rank, max_prefetch=6): - super(BackgroundGenerator, self).__init__() - self.queue = Queue.Queue(max_prefetch) - 
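-        # the background thread started below prefetches items from the wrapped generator into this bounded queue, so at most max_prefetch batches wait in memory at once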
self.generator = generator - self.local_rank = local_rank - self.daemon = True - self.start() - - def run(self): - torch.cuda.set_device(self.local_rank) - for item in self.generator: - self.queue.put(item) - self.queue.put(None) - - def next(self): - next_item = self.queue.get() - if next_item is None: - raise StopIteration - return next_item - - def __next__(self): - return self.next() - - def __iter__(self): - return self - - -class DataLoaderX(DataLoader): - - def __init__(self, local_rank, **kwargs): - super(DataLoaderX, self).__init__(**kwargs) - self.stream = torch.cuda.Stream(local_rank) - self.local_rank = local_rank - - def __iter__(self): - self.iter = super(DataLoaderX, self).__iter__() - self.iter = BackgroundGenerator(self.iter, self.local_rank) - self.preload() - return self - - def preload(self): - self.batch = next(self.iter, None) - if self.batch is None: - return None - with torch.cuda.stream(self.stream): - for k in range(len(self.batch)): - self.batch[k] = self.batch[k].to(device=self.local_rank, non_blocking=True) - - def __next__(self): - torch.cuda.current_stream().wait_stream(self.stream) - batch = self.batch - if batch is None: - raise StopIteration - self.preload() - return batch - - -class MXFaceDataset(Dataset): - def __init__(self, root_dir, local_rank): - super(MXFaceDataset, self).__init__() - self.transform = transforms.Compose( - [transforms.ToPILImage(), - transforms.RandomHorizontalFlip(), - transforms.ToTensor(), - transforms.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]), - ]) - self.root_dir = root_dir - self.local_rank = local_rank - path_imgrec = os.path.join(root_dir, 'train.rec') - path_imgidx = os.path.join(root_dir, 'train.idx') - self.imgrec = mx.recordio.MXIndexedRecordIO(path_imgidx, path_imgrec, 'r') - s = self.imgrec.read_idx(0) - header, _ = mx.recordio.unpack(s) - if header.flag > 0: - self.header0 = (int(header.label[0]), int(header.label[1])) - self.imgidx = np.array(range(1, int(header.label[0]))) - else: - self.imgidx = np.array(list(self.imgrec.keys)) - - def __getitem__(self, index): - idx = self.imgidx[index] - s = self.imgrec.read_idx(idx) - header, img = mx.recordio.unpack(s) - label = header.label - if not isinstance(label, numbers.Number): - label = label[0] - label = torch.tensor(label, dtype=torch.long) - sample = mx.image.imdecode(img).asnumpy() - if self.transform is not None: - sample = self.transform(sample) - return sample, label - - def __len__(self): - return len(self.imgidx) - - -class SyntheticDataset(Dataset): - def __init__(self, local_rank): - super(SyntheticDataset, self).__init__() - img = np.random.randint(0, 255, size=(112, 112, 3), dtype=np.int32) - img = np.transpose(img, (2, 0, 1)) - img = torch.from_numpy(img).squeeze(0).float() - img = ((img / 255) - 0.5) / 0.5 - self.img = img - self.label = 1 - - def __getitem__(self, index): - return self.img, self.label - - def __len__(self): - return 1000000 diff --git a/spaces/kirch/Text2Video-Zero/annotator/uniformer/mmcv/ops/point_sample.py b/spaces/kirch/Text2Video-Zero/annotator/uniformer/mmcv/ops/point_sample.py deleted file mode 100644 index 267f4b3c56630acd85f9bdc630b7be09abab0aba..0000000000000000000000000000000000000000 --- a/spaces/kirch/Text2Video-Zero/annotator/uniformer/mmcv/ops/point_sample.py +++ /dev/null @@ -1,336 +0,0 @@ -# Modified from https://github.com/facebookresearch/detectron2/tree/master/projects/PointRend # noqa - -from os import path as osp - -import torch -import torch.nn as nn -import torch.nn.functional as F -from 
torch.nn.modules.utils import _pair -from torch.onnx.operators import shape_as_tensor - - -def bilinear_grid_sample(im, grid, align_corners=False): - """Given an input and a flow-field grid, computes the output using input - values and pixel locations from grid. Supported only bilinear interpolation - method to sample the input pixels. - - Args: - im (torch.Tensor): Input feature map, shape (N, C, H, W) - grid (torch.Tensor): Point coordinates, shape (N, Hg, Wg, 2) - align_corners {bool}: If set to True, the extrema (-1 and 1) are - considered as referring to the center points of the input’s - corner pixels. If set to False, they are instead considered as - referring to the corner points of the input’s corner pixels, - making the sampling more resolution agnostic. - Returns: - torch.Tensor: A tensor with sampled points, shape (N, C, Hg, Wg) - """ - n, c, h, w = im.shape - gn, gh, gw, _ = grid.shape - assert n == gn - - x = grid[:, :, :, 0] - y = grid[:, :, :, 1] - - if align_corners: - x = ((x + 1) / 2) * (w - 1) - y = ((y + 1) / 2) * (h - 1) - else: - x = ((x + 1) * w - 1) / 2 - y = ((y + 1) * h - 1) / 2 - - x = x.view(n, -1) - y = y.view(n, -1) - - x0 = torch.floor(x).long() - y0 = torch.floor(y).long() - x1 = x0 + 1 - y1 = y0 + 1 - - wa = ((x1 - x) * (y1 - y)).unsqueeze(1) - wb = ((x1 - x) * (y - y0)).unsqueeze(1) - wc = ((x - x0) * (y1 - y)).unsqueeze(1) - wd = ((x - x0) * (y - y0)).unsqueeze(1) - - # Apply default for grid_sample function zero padding - im_padded = F.pad(im, pad=[1, 1, 1, 1], mode='constant', value=0) - padded_h = h + 2 - padded_w = w + 2 - # save points positions after padding - x0, x1, y0, y1 = x0 + 1, x1 + 1, y0 + 1, y1 + 1 - - # Clip coordinates to padded image size - x0 = torch.where(x0 < 0, torch.tensor(0), x0) - x0 = torch.where(x0 > padded_w - 1, torch.tensor(padded_w - 1), x0) - x1 = torch.where(x1 < 0, torch.tensor(0), x1) - x1 = torch.where(x1 > padded_w - 1, torch.tensor(padded_w - 1), x1) - y0 = torch.where(y0 < 0, torch.tensor(0), y0) - y0 = torch.where(y0 > padded_h - 1, torch.tensor(padded_h - 1), y0) - y1 = torch.where(y1 < 0, torch.tensor(0), y1) - y1 = torch.where(y1 > padded_h - 1, torch.tensor(padded_h - 1), y1) - - im_padded = im_padded.view(n, c, -1) - - x0_y0 = (x0 + y0 * padded_w).unsqueeze(1).expand(-1, c, -1) - x0_y1 = (x0 + y1 * padded_w).unsqueeze(1).expand(-1, c, -1) - x1_y0 = (x1 + y0 * padded_w).unsqueeze(1).expand(-1, c, -1) - x1_y1 = (x1 + y1 * padded_w).unsqueeze(1).expand(-1, c, -1) - - Ia = torch.gather(im_padded, 2, x0_y0) - Ib = torch.gather(im_padded, 2, x0_y1) - Ic = torch.gather(im_padded, 2, x1_y0) - Id = torch.gather(im_padded, 2, x1_y1) - - return (Ia * wa + Ib * wb + Ic * wc + Id * wd).reshape(n, c, gh, gw) - - -def is_in_onnx_export_without_custom_ops(): - from annotator.uniformer.mmcv.ops import get_onnxruntime_op_path - ort_custom_op_path = get_onnxruntime_op_path() - return torch.onnx.is_in_onnx_export( - ) and not osp.exists(ort_custom_op_path) - - -def normalize(grid): - """Normalize input grid from [-1, 1] to [0, 1] - Args: - grid (Tensor): The grid to be normalize, range [-1, 1]. - Returns: - Tensor: Normalized grid, range [0, 1]. - """ - - return (grid + 1.0) / 2.0 - - -def denormalize(grid): - """Denormalize input grid from range [0, 1] to [-1, 1] - Args: - grid (Tensor): The grid to be denormalize, range [0, 1]. - Returns: - Tensor: Denormalized grid, range [-1, 1]. 
- """ - - return grid * 2.0 - 1.0 - - -def generate_grid(num_grid, size, device): - """Generate regular square grid of points in [0, 1] x [0, 1] coordinate - space. - - Args: - num_grid (int): The number of grids to sample, one for each region. - size (tuple(int, int)): The side size of the regular grid. - device (torch.device): Desired device of returned tensor. - - Returns: - (torch.Tensor): A tensor of shape (num_grid, size[0]*size[1], 2) that - contains coordinates for the regular grids. - """ - - affine_trans = torch.tensor([[[1., 0., 0.], [0., 1., 0.]]], device=device) - grid = F.affine_grid( - affine_trans, torch.Size((1, 1, *size)), align_corners=False) - grid = normalize(grid) - return grid.view(1, -1, 2).expand(num_grid, -1, -1) - - -def rel_roi_point_to_abs_img_point(rois, rel_roi_points): - """Convert roi based relative point coordinates to image based absolute - point coordinates. - - Args: - rois (Tensor): RoIs or BBoxes, shape (N, 4) or (N, 5) - rel_roi_points (Tensor): Point coordinates inside RoI, relative to - RoI, location, range (0, 1), shape (N, P, 2) - Returns: - Tensor: Image based absolute point coordinates, shape (N, P, 2) - """ - - with torch.no_grad(): - assert rel_roi_points.size(0) == rois.size(0) - assert rois.dim() == 2 - assert rel_roi_points.dim() == 3 - assert rel_roi_points.size(2) == 2 - # remove batch idx - if rois.size(1) == 5: - rois = rois[:, 1:] - abs_img_points = rel_roi_points.clone() - # To avoid an error during exporting to onnx use independent - # variables instead inplace computation - xs = abs_img_points[:, :, 0] * (rois[:, None, 2] - rois[:, None, 0]) - ys = abs_img_points[:, :, 1] * (rois[:, None, 3] - rois[:, None, 1]) - xs += rois[:, None, 0] - ys += rois[:, None, 1] - abs_img_points = torch.stack([xs, ys], dim=2) - return abs_img_points - - -def get_shape_from_feature_map(x): - """Get spatial resolution of input feature map considering exporting to - onnx mode. - - Args: - x (torch.Tensor): Input tensor, shape (N, C, H, W) - Returns: - torch.Tensor: Spatial resolution (width, height), shape (1, 1, 2) - """ - if torch.onnx.is_in_onnx_export(): - img_shape = shape_as_tensor(x)[2:].flip(0).view(1, 1, 2).to( - x.device).float() - else: - img_shape = torch.tensor(x.shape[2:]).flip(0).view(1, 1, 2).to( - x.device).float() - return img_shape - - -def abs_img_point_to_rel_img_point(abs_img_points, img, spatial_scale=1.): - """Convert image based absolute point coordinates to image based relative - coordinates for sampling. - - Args: - abs_img_points (Tensor): Image based absolute point coordinates, - shape (N, P, 2) - img (tuple/Tensor): (height, width) of image or feature map. - spatial_scale (float): Scale points by this factor. Default: 1. - - Returns: - Tensor: Image based relative point coordinates for sampling, - shape (N, P, 2) - """ - - assert (isinstance(img, tuple) and len(img) == 2) or \ - (isinstance(img, torch.Tensor) and len(img.shape) == 4) - - if isinstance(img, tuple): - h, w = img - scale = torch.tensor([w, h], - dtype=torch.float, - device=abs_img_points.device) - scale = scale.view(1, 1, 2) - else: - scale = get_shape_from_feature_map(img) - - return abs_img_points / scale * spatial_scale - - -def rel_roi_point_to_rel_img_point(rois, - rel_roi_points, - img, - spatial_scale=1.): - """Convert roi based relative point coordinates to image based absolute - point coordinates. 
- - Args: - rois (Tensor): RoIs or BBoxes, shape (N, 4) or (N, 5) - rel_roi_points (Tensor): Point coordinates inside RoI, relative to - RoI, location, range (0, 1), shape (N, P, 2) - img (tuple/Tensor): (height, width) of image or feature map. - spatial_scale (float): Scale points by this factor. Default: 1. - - Returns: - Tensor: Image based relative point coordinates for sampling, - shape (N, P, 2) - """ - - abs_img_point = rel_roi_point_to_abs_img_point(rois, rel_roi_points) - rel_img_point = abs_img_point_to_rel_img_point(abs_img_point, img, - spatial_scale) - - return rel_img_point - - -def point_sample(input, points, align_corners=False, **kwargs): - """A wrapper around :func:`grid_sample` to support 3D point_coords tensors - Unlike :func:`torch.nn.functional.grid_sample` it assumes point_coords to - lie inside ``[0, 1] x [0, 1]`` square. - - Args: - input (Tensor): Feature map, shape (N, C, H, W). - points (Tensor): Image based absolute point coordinates (normalized), - range [0, 1] x [0, 1], shape (N, P, 2) or (N, Hgrid, Wgrid, 2). - align_corners (bool): Whether align_corners. Default: False - - Returns: - Tensor: Features of `point` on `input`, shape (N, C, P) or - (N, C, Hgrid, Wgrid). - """ - - add_dim = False - if points.dim() == 3: - add_dim = True - points = points.unsqueeze(2) - if is_in_onnx_export_without_custom_ops(): - # If custom ops for onnx runtime not compiled use python - # implementation of grid_sample function to make onnx graph - # with supported nodes - output = bilinear_grid_sample( - input, denormalize(points), align_corners=align_corners) - else: - output = F.grid_sample( - input, denormalize(points), align_corners=align_corners, **kwargs) - if add_dim: - output = output.squeeze(3) - return output - - -class SimpleRoIAlign(nn.Module): - - def __init__(self, output_size, spatial_scale, aligned=True): - """Simple RoI align in PointRend, faster than standard RoIAlign. - - Args: - output_size (tuple[int]): h, w - spatial_scale (float): scale the input boxes by this number - aligned (bool): if False, use the legacy implementation in - MMDetection, align_corners=True will be used in F.grid_sample. - If True, align the results more perfectly. 
- """ - - super(SimpleRoIAlign, self).__init__() - self.output_size = _pair(output_size) - self.spatial_scale = float(spatial_scale) - # to be consistent with other RoI ops - self.use_torchvision = False - self.aligned = aligned - - def forward(self, features, rois): - num_imgs = features.size(0) - num_rois = rois.size(0) - rel_roi_points = generate_grid( - num_rois, self.output_size, device=rois.device) - - if torch.onnx.is_in_onnx_export(): - rel_img_points = rel_roi_point_to_rel_img_point( - rois, rel_roi_points, features, self.spatial_scale) - rel_img_points = rel_img_points.reshape(num_imgs, -1, - *rel_img_points.shape[1:]) - point_feats = point_sample( - features, rel_img_points, align_corners=not self.aligned) - point_feats = point_feats.transpose(1, 2) - else: - point_feats = [] - for batch_ind in range(num_imgs): - # unravel batch dim - feat = features[batch_ind].unsqueeze(0) - inds = (rois[:, 0].long() == batch_ind) - if inds.any(): - rel_img_points = rel_roi_point_to_rel_img_point( - rois[inds], rel_roi_points[inds], feat, - self.spatial_scale).unsqueeze(0) - point_feat = point_sample( - feat, rel_img_points, align_corners=not self.aligned) - point_feat = point_feat.squeeze(0).transpose(0, 1) - point_feats.append(point_feat) - - point_feats = torch.cat(point_feats, dim=0) - - channels = features.size(1) - roi_feats = point_feats.reshape(num_rois, channels, *self.output_size) - - return roi_feats - - def __repr__(self): - format_str = self.__class__.__name__ - format_str += '(output_size={}, spatial_scale={}'.format( - self.output_size, self.spatial_scale) - return format_str diff --git a/spaces/kirch/Text2Video-Zero/annotator/uniformer/mmseg/models/backbones/resnext.py b/spaces/kirch/Text2Video-Zero/annotator/uniformer/mmseg/models/backbones/resnext.py deleted file mode 100644 index 962249ad6fd9b50960ad6426f7ce3cac6ed8c5bc..0000000000000000000000000000000000000000 --- a/spaces/kirch/Text2Video-Zero/annotator/uniformer/mmseg/models/backbones/resnext.py +++ /dev/null @@ -1,145 +0,0 @@ -import math - -from annotator.uniformer.mmcv.cnn import build_conv_layer, build_norm_layer - -from ..builder import BACKBONES -from ..utils import ResLayer -from .resnet import Bottleneck as _Bottleneck -from .resnet import ResNet - - -class Bottleneck(_Bottleneck): - """Bottleneck block for ResNeXt. - - If style is "pytorch", the stride-two layer is the 3x3 conv layer, if it is - "caffe", the stride-two layer is the first 1x1 conv layer. 
- """ - - def __init__(self, - inplanes, - planes, - groups=1, - base_width=4, - base_channels=64, - **kwargs): - super(Bottleneck, self).__init__(inplanes, planes, **kwargs) - - if groups == 1: - width = self.planes - else: - width = math.floor(self.planes * - (base_width / base_channels)) * groups - - self.norm1_name, norm1 = build_norm_layer( - self.norm_cfg, width, postfix=1) - self.norm2_name, norm2 = build_norm_layer( - self.norm_cfg, width, postfix=2) - self.norm3_name, norm3 = build_norm_layer( - self.norm_cfg, self.planes * self.expansion, postfix=3) - - self.conv1 = build_conv_layer( - self.conv_cfg, - self.inplanes, - width, - kernel_size=1, - stride=self.conv1_stride, - bias=False) - self.add_module(self.norm1_name, norm1) - fallback_on_stride = False - self.with_modulated_dcn = False - if self.with_dcn: - fallback_on_stride = self.dcn.pop('fallback_on_stride', False) - if not self.with_dcn or fallback_on_stride: - self.conv2 = build_conv_layer( - self.conv_cfg, - width, - width, - kernel_size=3, - stride=self.conv2_stride, - padding=self.dilation, - dilation=self.dilation, - groups=groups, - bias=False) - else: - assert self.conv_cfg is None, 'conv_cfg must be None for DCN' - self.conv2 = build_conv_layer( - self.dcn, - width, - width, - kernel_size=3, - stride=self.conv2_stride, - padding=self.dilation, - dilation=self.dilation, - groups=groups, - bias=False) - - self.add_module(self.norm2_name, norm2) - self.conv3 = build_conv_layer( - self.conv_cfg, - width, - self.planes * self.expansion, - kernel_size=1, - bias=False) - self.add_module(self.norm3_name, norm3) - - -@BACKBONES.register_module() -class ResNeXt(ResNet): - """ResNeXt backbone. - - Args: - depth (int): Depth of resnet, from {18, 34, 50, 101, 152}. - in_channels (int): Number of input image channels. Normally 3. - num_stages (int): Resnet stages, normally 4. - groups (int): Group of resnext. - base_width (int): Base width of resnext. - strides (Sequence[int]): Strides of the first block of each stage. - dilations (Sequence[int]): Dilation of each stage. - out_indices (Sequence[int]): Output from which stages. - style (str): `pytorch` or `caffe`. If set to "pytorch", the stride-two - layer is the 3x3 conv layer, otherwise the stride-two layer is - the first 1x1 conv layer. - frozen_stages (int): Stages to be frozen (all param fixed). -1 means - not freezing any parameters. - norm_cfg (dict): dictionary to construct and config norm layer. - norm_eval (bool): Whether to set norm layers to eval mode, namely, - freeze running stats (mean and var). Note: Effect on Batch Norm - and its variants only. - with_cp (bool): Use checkpoint or not. Using checkpoint will save some - memory while slowing down the training speed. - zero_init_residual (bool): whether to use zero init for last norm layer - in resblocks to let them behave as identity. - - Example: - >>> from annotator.uniformer.mmseg.models import ResNeXt - >>> import torch - >>> self = ResNeXt(depth=50) - >>> self.eval() - >>> inputs = torch.rand(1, 3, 32, 32) - >>> level_outputs = self.forward(inputs) - >>> for level_out in level_outputs: - ... 
print(tuple(level_out.shape)) - (1, 256, 8, 8) - (1, 512, 4, 4) - (1, 1024, 2, 2) - (1, 2048, 1, 1) - """ - - arch_settings = { - 50: (Bottleneck, (3, 4, 6, 3)), - 101: (Bottleneck, (3, 4, 23, 3)), - 152: (Bottleneck, (3, 8, 36, 3)) - } - - def __init__(self, groups=1, base_width=4, **kwargs): - self.groups = groups - self.base_width = base_width - super(ResNeXt, self).__init__(**kwargs) - - def make_res_layer(self, **kwargs): - """Pack all blocks in a stage into a ``ResLayer``""" - return ResLayer( - groups=self.groups, - base_width=self.base_width, - base_channels=self.base_channels, - **kwargs) diff --git a/spaces/kquote03/lama-video-watermark-remover/saicinpainting/training/losses/perceptual.py b/spaces/kquote03/lama-video-watermark-remover/saicinpainting/training/losses/perceptual.py deleted file mode 100644 index 8c055c2b327ce7943682af5c5f9394b9fcbec506..0000000000000000000000000000000000000000 --- a/spaces/kquote03/lama-video-watermark-remover/saicinpainting/training/losses/perceptual.py +++ /dev/null @@ -1,113 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as F -import torchvision - -from models.ade20k import ModelBuilder -from saicinpainting.utils import check_and_warn_input_range - - -IMAGENET_MEAN = torch.FloatTensor([0.485, 0.456, 0.406])[None, :, None, None] -IMAGENET_STD = torch.FloatTensor([0.229, 0.224, 0.225])[None, :, None, None] - - -class PerceptualLoss(nn.Module): - def __init__(self, normalize_inputs=True): - super(PerceptualLoss, self).__init__() - - self.normalize_inputs = normalize_inputs - self.mean_ = IMAGENET_MEAN - self.std_ = IMAGENET_STD - - vgg = torchvision.models.vgg19(pretrained=True).features - vgg_avg_pooling = [] - - for weights in vgg.parameters(): - weights.requires_grad = False - - for module in vgg.modules(): - if module.__class__.__name__ == 'Sequential': - continue - elif module.__class__.__name__ == 'MaxPool2d': - vgg_avg_pooling.append(nn.AvgPool2d(kernel_size=2, stride=2, padding=0)) - else: - vgg_avg_pooling.append(module) - - self.vgg = nn.Sequential(*vgg_avg_pooling) - - def do_normalize_inputs(self, x): - return (x - self.mean_.to(x.device)) / self.std_.to(x.device) - - def partial_losses(self, input, target, mask=None): - check_and_warn_input_range(target, 0, 1, 'PerceptualLoss target in partial_losses') - - # we expect input and target to be in [0, 1] range - losses = [] - - if self.normalize_inputs: - features_input = self.do_normalize_inputs(input) - features_target = self.do_normalize_inputs(target) - else: - features_input = input - features_target = target - - for layer in self.vgg[:30]: - - features_input = layer(features_input) - features_target = layer(features_target) - - if layer.__class__.__name__ == 'ReLU': - loss = F.mse_loss(features_input, features_target, reduction='none') - - if mask is not None: - cur_mask = F.interpolate(mask, size=features_input.shape[-2:], - mode='bilinear', align_corners=False) - loss = loss * (1 - cur_mask) - - loss = loss.mean(dim=tuple(range(1, len(loss.shape)))) - losses.append(loss) - - return losses - - def forward(self, input, target, mask=None): - losses = self.partial_losses(input, target, mask=mask) - return torch.stack(losses).sum(dim=0) - - def get_global_features(self, input): - check_and_warn_input_range(input, 0, 1, 'PerceptualLoss input in get_global_features') - - if self.normalize_inputs: - features_input = self.do_normalize_inputs(input) - else: - features_input = input - - features_input = self.vgg(features_input) - return features_input - - -class 
ResNetPL(nn.Module): - def __init__(self, weight=1, - weights_path=None, arch_encoder='resnet50dilated', segmentation=True): - super().__init__() - self.impl = ModelBuilder.get_encoder(weights_path=weights_path, - arch_encoder=arch_encoder, - arch_decoder='ppm_deepsup', - fc_dim=2048, - segmentation=segmentation) - self.impl.eval() - for w in self.impl.parameters(): - w.requires_grad_(False) - - self.weight = weight - - def forward(self, pred, target): - pred = (pred - IMAGENET_MEAN.to(pred)) / IMAGENET_STD.to(pred) - target = (target - IMAGENET_MEAN.to(target)) / IMAGENET_STD.to(target) - - pred_feats = self.impl(pred, return_feature_maps=True) - target_feats = self.impl(target, return_feature_maps=True) - - result = torch.stack([F.mse_loss(cur_pred, cur_target) - for cur_pred, cur_target - in zip(pred_feats, target_feats)]).sum() * self.weight - return result diff --git a/spaces/kxqt/Expedit-SAM/segment_anything/automatic_mask_generator.py b/spaces/kxqt/Expedit-SAM/segment_anything/automatic_mask_generator.py deleted file mode 100644 index 23264971b7ff5aa0b4f499ade7773b68dce984b6..0000000000000000000000000000000000000000 --- a/spaces/kxqt/Expedit-SAM/segment_anything/automatic_mask_generator.py +++ /dev/null @@ -1,372 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. - -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import numpy as np -import torch -from torchvision.ops.boxes import batched_nms, box_area # type: ignore - -from typing import Any, Dict, List, Optional, Tuple - -from .modeling import Sam -from .predictor import SamPredictor -from .utils.amg import ( - MaskData, - area_from_rle, - batch_iterator, - batched_mask_to_box, - box_xyxy_to_xywh, - build_all_layer_point_grids, - calculate_stability_score, - coco_encode_rle, - generate_crop_boxes, - is_box_near_crop_edge, - mask_to_rle_pytorch, - remove_small_regions, - rle_to_mask, - uncrop_boxes_xyxy, - uncrop_masks, - uncrop_points, -) - - -class SamAutomaticMaskGenerator: - def __init__( - self, - model: Sam, - points_per_side: Optional[int] = 32, - points_per_batch: int = 64, - pred_iou_thresh: float = 0.88, - stability_score_thresh: float = 0.95, - stability_score_offset: float = 1.0, - box_nms_thresh: float = 0.7, - crop_n_layers: int = 0, - crop_nms_thresh: float = 0.7, - crop_overlap_ratio: float = 512 / 1500, - crop_n_points_downscale_factor: int = 1, - point_grids: Optional[List[np.ndarray]] = None, - min_mask_region_area: int = 0, - output_mode: str = "binary_mask", - ) -> None: - """ - Using a SAM model, generates masks for the entire image. - Generates a grid of point prompts over the image, then filters - low quality and duplicate masks. The default settings are chosen - for SAM with a ViT-H backbone. - - Arguments: - model (Sam): The SAM model to use for mask prediction. - points_per_side (int or None): The number of points to be sampled - along one side of the image. The total number of points is - points_per_side**2. If None, 'point_grids' must provide explicit - point sampling. - points_per_batch (int): Sets the number of points run simultaneously - by the model. Higher numbers may be faster but use more GPU memory. - pred_iou_thresh (float): A filtering threshold in [0,1], using the - model's predicted mask quality. - stability_score_thresh (float): A filtering threshold in [0,1], using - the stability of the mask under changes to the cutoff used to binarize - the model's mask predictions. 
- stability_score_offset (float): The amount to shift the cutoff when - calculated the stability score. - box_nms_thresh (float): The box IoU cutoff used by non-maximal - suppression to filter duplicate masks. - crops_n_layers (int): If >0, mask prediction will be run again on - crops of the image. Sets the number of layers to run, where each - layer has 2**i_layer number of image crops. - crops_nms_thresh (float): The box IoU cutoff used by non-maximal - suppression to filter duplicate masks between different crops. - crop_overlap_ratio (float): Sets the degree to which crops overlap. - In the first crop layer, crops will overlap by this fraction of - the image length. Later layers with more crops scale down this overlap. - crop_n_points_downscale_factor (int): The number of points-per-side - sampled in layer n is scaled down by crop_n_points_downscale_factor**n. - point_grids (list(np.ndarray) or None): A list over explicit grids - of points used for sampling, normalized to [0,1]. The nth grid in the - list is used in the nth crop layer. Exclusive with points_per_side. - min_mask_region_area (int): If >0, postprocessing will be applied - to remove disconnected regions and holes in masks with area smaller - than min_mask_region_area. Requires opencv. - output_mode (str): The form masks are returned in. Can be 'binary_mask', - 'uncompressed_rle', or 'coco_rle'. 'coco_rle' requires pycocotools. - For large resolutions, 'binary_mask' may consume large amounts of - memory. - """ - - assert (points_per_side is None) != ( - point_grids is None - ), "Exactly one of points_per_side or point_grid must be provided." - if points_per_side is not None: - self.point_grids = build_all_layer_point_grids( - points_per_side, - crop_n_layers, - crop_n_points_downscale_factor, - ) - elif point_grids is not None: - self.point_grids = point_grids - else: - raise ValueError("Can't have both points_per_side and point_grid be None.") - - assert output_mode in [ - "binary_mask", - "uncompressed_rle", - "coco_rle", - ], f"Unknown output_mode {output_mode}." - if output_mode == "coco_rle": - from pycocotools import mask as mask_utils # type: ignore # noqa: F401 - - if min_mask_region_area > 0: - import cv2 # type: ignore # noqa: F401 - - self.predictor = SamPredictor(model) - self.points_per_batch = points_per_batch - self.pred_iou_thresh = pred_iou_thresh - self.stability_score_thresh = stability_score_thresh - self.stability_score_offset = stability_score_offset - self.box_nms_thresh = box_nms_thresh - self.crop_n_layers = crop_n_layers - self.crop_nms_thresh = crop_nms_thresh - self.crop_overlap_ratio = crop_overlap_ratio - self.crop_n_points_downscale_factor = crop_n_points_downscale_factor - self.min_mask_region_area = min_mask_region_area - self.output_mode = output_mode - - @torch.no_grad() - def generate(self, image: np.ndarray) -> List[Dict[str, Any]]: - """ - Generates masks for the given image. - - Arguments: - image (np.ndarray): The image to generate masks for, in HWC uint8 format. - - Returns: - list(dict(str, any)): A list over records for masks. Each record is - a dict containing the following keys: - segmentation (dict(str, any) or np.ndarray): The mask. If - output_mode='binary_mask', is an array of shape HW. Otherwise, - is a dictionary containing the RLE. - bbox (list(float)): The box around the mask, in XYWH format. - area (int): The area in pixels of the mask. - predicted_iou (float): The model's own prediction of the mask's - quality. This is filtered by the pred_iou_thresh parameter. 
- point_coords (list(list(float))): The point coordinates input - to the model to generate this mask. - stability_score (float): A measure of the mask's quality. This - is filtered on using the stability_score_thresh parameter. - crop_box (list(float)): The crop of the image used to generate - the mask, given in XYWH format. - """ - - # Generate masks - mask_data = self._generate_masks(image) - - # Filter small disconnected regions and holes in masks - if self.min_mask_region_area > 0: - mask_data = self.postprocess_small_regions( - mask_data, - self.min_mask_region_area, - max(self.box_nms_thresh, self.crop_nms_thresh), - ) - - # Encode masks - if self.output_mode == "coco_rle": - mask_data["segmentations"] = [coco_encode_rle(rle) for rle in mask_data["rles"]] - elif self.output_mode == "binary_mask": - mask_data["segmentations"] = [rle_to_mask(rle) for rle in mask_data["rles"]] - else: - mask_data["segmentations"] = mask_data["rles"] - - # Write mask records - curr_anns = [] - for idx in range(len(mask_data["segmentations"])): - ann = { - "segmentation": mask_data["segmentations"][idx], - "area": area_from_rle(mask_data["rles"][idx]), - "bbox": box_xyxy_to_xywh(mask_data["boxes"][idx]).tolist(), - "predicted_iou": mask_data["iou_preds"][idx].item(), - "point_coords": [mask_data["points"][idx].tolist()], - "stability_score": mask_data["stability_score"][idx].item(), - "crop_box": box_xyxy_to_xywh(mask_data["crop_boxes"][idx]).tolist(), - } - curr_anns.append(ann) - - return curr_anns - - def _generate_masks(self, image: np.ndarray) -> MaskData: - orig_size = image.shape[:2] - crop_boxes, layer_idxs = generate_crop_boxes( - orig_size, self.crop_n_layers, self.crop_overlap_ratio - ) - - # Iterate over image crops - data = MaskData() - for crop_box, layer_idx in zip(crop_boxes, layer_idxs): - crop_data = self._process_crop(image, crop_box, layer_idx, orig_size) - data.cat(crop_data) - - # Remove duplicate masks between crops - if len(crop_boxes) > 1: - # Prefer masks from smaller crops - scores = 1 / box_area(data["crop_boxes"]) - scores = scores.to(data["boxes"].device) - keep_by_nms = batched_nms( - data["boxes"].float(), - scores, - torch.zeros(len(data["boxes"])), # categories - iou_threshold=self.crop_nms_thresh, - ) - data.filter(keep_by_nms) - - data.to_numpy() - return data - - def _process_crop( - self, - image: np.ndarray, - crop_box: List[int], - crop_layer_idx: int, - orig_size: Tuple[int, ...], - ) -> MaskData: - # Crop the image and calculate embeddings - x0, y0, x1, y1 = crop_box - cropped_im = image[y0:y1, x0:x1, :] - cropped_im_size = cropped_im.shape[:2] - self.predictor.set_image(cropped_im) - - # Get points for this crop - points_scale = np.array(cropped_im_size)[None, ::-1] - points_for_image = self.point_grids[crop_layer_idx] * points_scale - - # Generate masks for this crop in batches - data = MaskData() - for (points,) in batch_iterator(self.points_per_batch, points_for_image): - batch_data = self._process_batch(points, cropped_im_size, crop_box, orig_size) - data.cat(batch_data) - del batch_data - self.predictor.reset_image() - - # Remove duplicates within this crop. 
- keep_by_nms = batched_nms( - data["boxes"].float(), - data["iou_preds"], - torch.zeros(len(data["boxes"])), # categories - iou_threshold=self.box_nms_thresh, - ) - data.filter(keep_by_nms) - - # Return to the original image frame - data["boxes"] = uncrop_boxes_xyxy(data["boxes"], crop_box) - data["points"] = uncrop_points(data["points"], crop_box) - data["crop_boxes"] = torch.tensor([crop_box for _ in range(len(data["rles"]))]) - - return data - - def _process_batch( - self, - points: np.ndarray, - im_size: Tuple[int, ...], - crop_box: List[int], - orig_size: Tuple[int, ...], - ) -> MaskData: - orig_h, orig_w = orig_size - - # Run model on this batch - transformed_points = self.predictor.transform.apply_coords(points, im_size) - in_points = torch.as_tensor(transformed_points, device=self.predictor.device) - in_labels = torch.ones(in_points.shape[0], dtype=torch.int, device=in_points.device) - masks, iou_preds, _ = self.predictor.predict_torch( - in_points[:, None, :], - in_labels[:, None], - multimask_output=True, - return_logits=True, - ) - - # Serialize predictions and store in MaskData - data = MaskData( - masks=masks.flatten(0, 1), - iou_preds=iou_preds.flatten(0, 1), - points=torch.as_tensor(points.repeat(masks.shape[1], axis=0)), - ) - del masks - - # Filter by predicted IoU - if self.pred_iou_thresh > 0.0: - keep_mask = data["iou_preds"] > self.pred_iou_thresh - data.filter(keep_mask) - - # Calculate stability score - data["stability_score"] = calculate_stability_score( - data["masks"], self.predictor.model.mask_threshold, self.stability_score_offset - ) - if self.stability_score_thresh > 0.0: - keep_mask = data["stability_score"] >= self.stability_score_thresh - data.filter(keep_mask) - - # Threshold masks and calculate boxes - data["masks"] = data["masks"] > self.predictor.model.mask_threshold - data["boxes"] = batched_mask_to_box(data["masks"]) - - # Filter boxes that touch crop boundaries - keep_mask = ~is_box_near_crop_edge(data["boxes"], crop_box, [0, 0, orig_w, orig_h]) - if not torch.all(keep_mask): - data.filter(keep_mask) - - # Compress to RLE - data["masks"] = uncrop_masks(data["masks"], crop_box, orig_h, orig_w) - data["rles"] = mask_to_rle_pytorch(data["masks"]) - del data["masks"] - - return data - - @staticmethod - def postprocess_small_regions( - mask_data: MaskData, min_area: int, nms_thresh: float - ) -> MaskData: - """ - Removes small disconnected regions and holes in masks, then reruns - box NMS to remove any new duplicates. - - Edits mask_data in place. - - Requires open-cv as a dependency. 
- """ - if len(mask_data["rles"]) == 0: - return mask_data - - # Filter small disconnected regions and holes - new_masks = [] - scores = [] - for rle in mask_data["rles"]: - mask = rle_to_mask(rle) - - mask, changed = remove_small_regions(mask, min_area, mode="holes") - unchanged = not changed - mask, changed = remove_small_regions(mask, min_area, mode="islands") - unchanged = unchanged and not changed - - new_masks.append(torch.as_tensor(mask).unsqueeze(0)) - # Give score=0 to changed masks and score=1 to unchanged masks - # so NMS will prefer ones that didn't need postprocessing - scores.append(float(unchanged)) - - # Recalculate boxes and remove any new duplicates - masks = torch.cat(new_masks, dim=0) - boxes = batched_mask_to_box(masks) - keep_by_nms = batched_nms( - boxes.float(), - torch.as_tensor(scores), - torch.zeros(len(boxes)), # categories - iou_threshold=nms_thresh, - ) - - # Only recalculate RLEs for masks that have changed - for i_mask in keep_by_nms: - if scores[i_mask] == 0.0: - mask_torch = masks[i_mask].unsqueeze(0) - mask_data["rles"][i_mask] = mask_to_rle_pytorch(mask_torch)[0] - mask_data["boxes"][i_mask] = boxes[i_mask] # update res directly - mask_data.filter(keep_by_nms) - - return mask_data diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/anyio/abc/_resources.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/anyio/abc/_resources.py deleted file mode 100644 index e0a283fc9873b524bbacb73624721353d82c34ab..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/anyio/abc/_resources.py +++ /dev/null @@ -1,31 +0,0 @@ -from __future__ import annotations - -from abc import ABCMeta, abstractmethod -from types import TracebackType -from typing import TypeVar - -T = TypeVar("T") - - -class AsyncResource(metaclass=ABCMeta): - """ - Abstract base class for all closeable asynchronous resources. - - Works as an asynchronous context manager which returns the instance itself on enter, and calls - :meth:`aclose` on exit. - """ - - async def __aenter__(self: T) -> T: - return self - - async def __aexit__( - self, - exc_type: type[BaseException] | None, - exc_val: BaseException | None, - exc_tb: TracebackType | None, - ) -> None: - await self.aclose() - - @abstractmethod - async def aclose(self) -> None: - """Close the resource.""" diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/gradio/themes/app.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/gradio/themes/app.py deleted file mode 100644 index 0c47840595d6379149f85a0a1a46ae43a880c97d..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/gradio/themes/app.py +++ /dev/null @@ -1,145 +0,0 @@ -import time - -import gradio as gr -from gradio.themes.utils.theme_dropdown import create_theme_dropdown - -dropdown, js = create_theme_dropdown() - -with gr.Blocks(theme=gr.themes.Default()) as demo: - with gr.Row().style(equal_height=True): - with gr.Column(scale=10): - gr.Markdown( - """ - # Theme preview: `{THEME}` - To use this theme, set `theme='{AUTHOR}/{SPACE_NAME}'` in `gr.Blocks()` or `gr.Interface()`. - You can append an `@` and a semantic version expression, e.g. @>=1.0.0,<2.0.0 to pin to a given version - of this theme. 
- """ - ) - with gr.Column(scale=3): - with gr.Box(): - dropdown.render() - toggle_dark = gr.Button(value="Toggle Dark").style(full_width=True) - - dropdown.change(None, dropdown, None, _js=js) - toggle_dark.click( - None, - _js=""" - () => { - document.body.classList.toggle('dark'); - } - """, - ) - - name = gr.Textbox( - label="Name", - info="Full name, including middle name. No special characters.", - placeholder="John Doe", - value="John Doe", - interactive=True, - ) - - with gr.Row(): - slider1 = gr.Slider(label="Slider 1") - slider2 = gr.Slider(label="Slider 2") - gr.CheckboxGroup(["A", "B", "C"], label="Checkbox Group") - - with gr.Row(): - with gr.Column(variant="panel", scale=1): - gr.Markdown("## Panel 1") - radio = gr.Radio( - ["A", "B", "C"], - label="Radio", - info="Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat.", - ) - drop = gr.Dropdown(["Option 1", "Option 2", "Option 3"], show_label=False) - drop_2 = gr.Dropdown( - ["Option A", "Option B", "Option C"], - multiselect=True, - value=["Option A"], - label="Dropdown", - interactive=True, - ) - check = gr.Checkbox(label="Go") - with gr.Column(variant="panel", scale=2): - img = gr.Image( - "https://gradio.app/assets/img/header-image.jpg", label="Image" - ).style(height=320) - with gr.Row(): - go_btn = gr.Button("Go", label="Primary Button", variant="primary") - clear_btn = gr.Button( - "Clear", label="Secondary Button", variant="secondary" - ) - - def go(*args): - time.sleep(3) - return "https://gradio.app/assets/img/header-image.jpg" - - go_btn.click(go, [radio, drop, drop_2, check, name], img, api_name="go") - - def clear(): - time.sleep(0.2) - return None - - clear_btn.click(clear, None, img) - - with gr.Row(): - btn1 = gr.Button("Button 1").style(size="sm") - btn2 = gr.UploadButton().style(size="sm") - stop_btn = gr.Button("Stop", label="Stop Button", variant="stop").style( - size="sm" - ) - - with gr.Row(): - gr.Dataframe(value=[[1, 2, 3], [4, 5, 6], [7, 8, 9]], label="Dataframe") - gr.JSON( - value={"a": 1, "b": 2, "c": {"test": "a", "test2": [1, 2, 3]}}, label="JSON" - ) - gr.Label(value={"cat": 0.7, "dog": 0.2, "fish": 0.1}) - gr.File() - with gr.Row(): - gr.ColorPicker() - gr.Video("https://gradio-static-files.s3.us-west-2.amazonaws.com/world.mp4") - gr.Gallery( - [ - ( - "https://gradio-static-files.s3.us-west-2.amazonaws.com/lion.jpg", - "lion", - ), - ( - "https://gradio-static-files.s3.us-west-2.amazonaws.com/logo.png", - "logo", - ), - ( - "https://gradio-static-files.s3.us-west-2.amazonaws.com/tower.jpg", - "tower", - ), - ] - ).style(height="200px", grid=2) - - with gr.Row(): - with gr.Column(scale=2): - chatbot = gr.Chatbot([("Hello", "Hi")], label="Chatbot") - chat_btn = gr.Button("Add messages") - - def chat(history): - time.sleep(2) - yield [["How are you?", "I am good."]] - - chat_btn.click( - lambda history: history - + [["How are you?", "I am good."]] - + (time.sleep(2) or []), - chatbot, - chatbot, - ) - with gr.Column(scale=1): - with gr.Accordion("Advanced Settings"): - gr.Markdown("Hello") - gr.Number(label="Chatbot control 1") - gr.Number(label="Chatbot control 2") - gr.Number(label="Chatbot control 3") - - -if __name__ == "__main__": - demo.queue().launch() diff --git a/spaces/laiyer/llm-guard-playground/Dockerfile b/spaces/laiyer/llm-guard-playground/Dockerfile deleted file mode 100644 index 
47719827fed21cd6dc85c0c42b3d098ac51d3e84..0000000000000000000000000000000000000000 --- a/spaces/laiyer/llm-guard-playground/Dockerfile +++ /dev/null @@ -1,32 +0,0 @@ -FROM python:3.11-slim - -RUN apt-get update && apt-get install -y \ - build-essential \ - curl \ - software-properties-common \ - && rm -rf /var/lib/apt/lists/* - -WORKDIR /app - -COPY ./requirements.txt /app/requirements.txt - -RUN pip install --upgrade pip -RUN pip install -r requirements.txt -RUN python -m spacy download en_core_web_sm - -EXPOSE 7860 - -COPY . /app - -RUN useradd -m -u 1000 user -USER user -ENV HOME=/home/user \ - PATH=/home/user/.local/bin:$PATH - -WORKDIR $HOME/app - -COPY --chown=user . $HOME/app - -HEALTHCHECK CMD curl --fail http://localhost:7860/_stcore/health - -CMD python -m streamlit run app.py --server.port=7860 --server.address=0.0.0.0 diff --git a/spaces/leave7/kazunaAI2.0/vdecoder/hifigan/models.py b/spaces/leave7/kazunaAI2.0/vdecoder/hifigan/models.py deleted file mode 100644 index bdc3fa2c3447f360472d94c2fad9bd74993f6410..0000000000000000000000000000000000000000 --- a/spaces/leave7/kazunaAI2.0/vdecoder/hifigan/models.py +++ /dev/null @@ -1,500 +0,0 @@ -import os -import json -from .env import AttrDict -import numpy as np -import torch -import torch.nn.functional as F -import torch.nn as nn -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm -from .utils import init_weights, get_padding - -LRELU_SLOPE = 0.1 - - -def load_model(model_path, device='cuda'): - config_file = os.path.join(os.path.split(model_path)[0], 'config.json') - with open(config_file) as f: - data = f.read() - - global h - json_config = json.loads(data) - h = AttrDict(json_config) - - generator = Generator(h).to(device) - - cp_dict = torch.load(model_path) - generator.load_state_dict(cp_dict['generator']) - generator.eval() - generator.remove_weight_norm() - del cp_dict - return generator, h - - -class ResBlock1(torch.nn.Module): - def __init__(self, h, channels, kernel_size=3, dilation=(1, 3, 5)): - super(ResBlock1, self).__init__() - self.h = h - self.convs1 = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[2], - padding=get_padding(kernel_size, dilation[2]))) - ]) - self.convs1.apply(init_weights) - - self.convs2 = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))) - ]) - self.convs2.apply(init_weights) - - def forward(self, x): - for c1, c2 in zip(self.convs1, self.convs2): - xt = F.leaky_relu(x, LRELU_SLOPE) - xt = c1(xt) - xt = F.leaky_relu(xt, LRELU_SLOPE) - xt = c2(xt) - x = xt + x - return x - - def remove_weight_norm(self): - for l in self.convs1: - remove_weight_norm(l) - for l in self.convs2: - remove_weight_norm(l) - - -class ResBlock2(torch.nn.Module): - def __init__(self, h, channels, kernel_size=3, dilation=(1, 3)): - super(ResBlock2, self).__init__() - self.h = h - self.convs = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, 
kernel_size, 1, dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]))) - ]) - self.convs.apply(init_weights) - - def forward(self, x): - for c in self.convs: - xt = F.leaky_relu(x, LRELU_SLOPE) - xt = c(xt) - x = xt + x - return x - - def remove_weight_norm(self): - for l in self.convs: - remove_weight_norm(l) - - -class SineGen(torch.nn.Module): - """ Definition of sine generator - SineGen(samp_rate, harmonic_num = 0, - sine_amp = 0.1, noise_std = 0.003, - voiced_threshold = 0, - flag_for_pulse=False) - samp_rate: sampling rate in Hz - harmonic_num: number of harmonic overtones (default 0) - sine_amp: amplitude of sine-wavefrom (default 0.1) - noise_std: std of Gaussian noise (default 0.003) - voiced_thoreshold: F0 threshold for U/V classification (default 0) - flag_for_pulse: this SinGen is used inside PulseGen (default False) - Note: when flag_for_pulse is True, the first time step of a voiced - segment is always sin(np.pi) or cos(0) - """ - - def __init__(self, samp_rate, harmonic_num=0, - sine_amp=0.1, noise_std=0.003, - voiced_threshold=0, - flag_for_pulse=False): - super(SineGen, self).__init__() - self.sine_amp = sine_amp - self.noise_std = noise_std - self.harmonic_num = harmonic_num - self.dim = self.harmonic_num + 1 - self.sampling_rate = samp_rate - self.voiced_threshold = voiced_threshold - self.flag_for_pulse = flag_for_pulse - - def _f02uv(self, f0): - # generate uv signal - uv = (f0 > self.voiced_threshold).type(torch.float32) - return uv - - def _f02sine(self, f0_values): - """ f0_values: (batchsize, length, dim) - where dim indicates fundamental tone and overtones - """ - # convert to F0 in rad. The interger part n can be ignored - # because 2 * np.pi * n doesn't affect phase - rad_values = (f0_values / self.sampling_rate) % 1 - - # initial phase noise (no noise for fundamental component) - rand_ini = torch.rand(f0_values.shape[0], f0_values.shape[2], \ - device=f0_values.device) - rand_ini[:, 0] = 0 - rad_values[:, 0, :] = rad_values[:, 0, :] + rand_ini - - # instantanouse phase sine[t] = sin(2*pi \sum_i=1 ^{t} rad) - if not self.flag_for_pulse: - # for normal case - - # To prevent torch.cumsum numerical overflow, - # it is necessary to add -1 whenever \sum_k=1^n rad_value_k > 1. - # Buffer tmp_over_one_idx indicates the time step to add -1. 
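            # (e.g. with a constant rad_values of 0.4 per step, cumsum is
            #  [0.4, 0.8, 1.2] and tmp_over_one is [0.4, 0.8, 0.2]; the diff
            #  goes negative at the wrap, so a -1 shift is recorded for that
            #  step before the final cumsum below.)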
- # This will not change F0 of sine because (x-1) * 2*pi = x * 2*pi - tmp_over_one = torch.cumsum(rad_values, 1) % 1 - tmp_over_one_idx = (torch.diff(tmp_over_one, dim=1)) < 0 - cumsum_shift = torch.zeros_like(rad_values) - cumsum_shift[:, 1:, :] = tmp_over_one_idx * -1.0 - - sines = torch.sin(torch.cumsum(rad_values + cumsum_shift, dim=1) - * 2 * np.pi) - else: - # If necessary, make sure that the first time step of every - # voiced segments is sin(pi) or cos(0) - # This is used for pulse-train generation - - # identify the last time step in unvoiced segments - uv = self._f02uv(f0_values) - uv_1 = torch.roll(uv, shifts=-1, dims=1) - uv_1[:, -1, :] = 1 - u_loc = (uv < 1) * (uv_1 > 0) - - # get the instantanouse phase - tmp_cumsum = torch.cumsum(rad_values, dim=1) - # different batch needs to be processed differently - for idx in range(f0_values.shape[0]): - temp_sum = tmp_cumsum[idx, u_loc[idx, :, 0], :] - temp_sum[1:, :] = temp_sum[1:, :] - temp_sum[0:-1, :] - # stores the accumulation of i.phase within - # each voiced segments - tmp_cumsum[idx, :, :] = 0 - tmp_cumsum[idx, u_loc[idx, :, 0], :] = temp_sum - - # rad_values - tmp_cumsum: remove the accumulation of i.phase - # within the previous voiced segment. - i_phase = torch.cumsum(rad_values - tmp_cumsum, dim=1) - - # get the sines - sines = torch.cos(i_phase * 2 * np.pi) - return sines - - def forward(self, f0): - """ sine_tensor, uv = forward(f0) - input F0: tensor(batchsize=1, length, dim=1) - f0 for unvoiced steps should be 0 - output sine_tensor: tensor(batchsize=1, length, dim) - output uv: tensor(batchsize=1, length, 1) - """ - with torch.no_grad(): - f0_buf = torch.zeros(f0.shape[0], f0.shape[1], self.dim, - device=f0.device) - # fundamental component - fn = torch.multiply(f0, torch.FloatTensor([[range(1, self.harmonic_num + 2)]]).to(f0.device)) - - # generate sine waveforms - sine_waves = self._f02sine(fn) * self.sine_amp - - # generate uv signal - # uv = torch.ones(f0.shape) - # uv = uv * (f0 > self.voiced_threshold) - uv = self._f02uv(f0) - - # noise: for unvoiced should be similar to sine_amp - # std = self.sine_amp/3 -> max value ~ self.sine_amp - # . 
for voiced regions is self.noise_std - noise_amp = uv * self.noise_std + (1 - uv) * self.sine_amp / 3 - noise = noise_amp * torch.randn_like(sine_waves) - - # first: set the unvoiced part to 0 by uv - # then: additive noise - sine_waves = sine_waves * uv + noise - return sine_waves, uv, noise - - -class SourceModuleHnNSF(torch.nn.Module): - """ SourceModule for hn-nsf - SourceModule(sampling_rate, harmonic_num=0, sine_amp=0.1, - add_noise_std=0.003, voiced_threshod=0) - sampling_rate: sampling_rate in Hz - harmonic_num: number of harmonic above F0 (default: 0) - sine_amp: amplitude of sine source signal (default: 0.1) - add_noise_std: std of additive Gaussian noise (default: 0.003) - note that amplitude of noise in unvoiced is decided - by sine_amp - voiced_threshold: threhold to set U/V given F0 (default: 0) - Sine_source, noise_source = SourceModuleHnNSF(F0_sampled) - F0_sampled (batchsize, length, 1) - Sine_source (batchsize, length, 1) - noise_source (batchsize, length 1) - uv (batchsize, length, 1) - """ - - def __init__(self, sampling_rate, harmonic_num=0, sine_amp=0.1, - add_noise_std=0.003, voiced_threshod=0): - super(SourceModuleHnNSF, self).__init__() - - self.sine_amp = sine_amp - self.noise_std = add_noise_std - - # to produce sine waveforms - self.l_sin_gen = SineGen(sampling_rate, harmonic_num, - sine_amp, add_noise_std, voiced_threshod) - - # to merge source harmonics into a single excitation - self.l_linear = torch.nn.Linear(harmonic_num + 1, 1) - self.l_tanh = torch.nn.Tanh() - - def forward(self, x): - """ - Sine_source, noise_source = SourceModuleHnNSF(F0_sampled) - F0_sampled (batchsize, length, 1) - Sine_source (batchsize, length, 1) - noise_source (batchsize, length 1) - """ - # source for harmonic branch - sine_wavs, uv, _ = self.l_sin_gen(x) - sine_merge = self.l_tanh(self.l_linear(sine_wavs)) - - # source for noise branch, in the same shape as uv - noise = torch.randn_like(uv) * self.sine_amp / 3 - return sine_merge, noise, uv - - -class Generator(torch.nn.Module): - def __init__(self, h): - super(Generator, self).__init__() - self.h = h - - self.num_kernels = len(h["resblock_kernel_sizes"]) - self.num_upsamples = len(h["upsample_rates"]) - self.f0_upsamp = torch.nn.Upsample(scale_factor=np.prod(h["upsample_rates"])) - self.m_source = SourceModuleHnNSF( - sampling_rate=h["sampling_rate"], - harmonic_num=8) - self.noise_convs = nn.ModuleList() - self.conv_pre = weight_norm(Conv1d(h["inter_channels"], h["upsample_initial_channel"], 7, 1, padding=3)) - resblock = ResBlock1 if h["resblock"] == '1' else ResBlock2 - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(h["upsample_rates"], h["upsample_kernel_sizes"])): - c_cur = h["upsample_initial_channel"] // (2 ** (i + 1)) - self.ups.append(weight_norm( - ConvTranspose1d(h["upsample_initial_channel"] // (2 ** i), h["upsample_initial_channel"] // (2 ** (i + 1)), - k, u, padding=(k - u) // 2))) - if i + 1 < len(h["upsample_rates"]): # - stride_f0 = np.prod(h["upsample_rates"][i + 1:]) - self.noise_convs.append(Conv1d( - 1, c_cur, kernel_size=stride_f0 * 2, stride=stride_f0, padding=stride_f0 // 2)) - else: - self.noise_convs.append(Conv1d(1, c_cur, kernel_size=1)) - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = h["upsample_initial_channel"] // (2 ** (i + 1)) - for j, (k, d) in enumerate(zip(h["resblock_kernel_sizes"], h["resblock_dilation_sizes"])): - self.resblocks.append(resblock(h, ch, k, d)) - - self.conv_post = weight_norm(Conv1d(ch, 1, 7, 1, padding=3)) - 
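        # Layout note (inferred from the code above): stage i of self.ups halves the
        # channel count to upsample_initial_channel // 2**(i+1) while upsampling in
        # time, each stage is followed by num_kernels residual blocks, and conv_post
        # projects the final feature map to a single-channel waveform.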
self.ups.apply(init_weights) - self.conv_post.apply(init_weights) - self.cond = nn.Conv1d(h['gin_channels'], h['upsample_initial_channel'], 1) - - def forward(self, x, f0, g=None): - # print(1,x.shape,f0.shape,f0[:, None].shape) - f0 = self.f0_upsamp(f0[:, None]).transpose(1, 2) # bs,n,t - # print(2,f0.shape) - har_source, noi_source, uv = self.m_source(f0) - har_source = har_source.transpose(1, 2) - x = self.conv_pre(x) - x = x + self.cond(g) - # print(124,x.shape,har_source.shape) - for i in range(self.num_upsamples): - x = F.leaky_relu(x, LRELU_SLOPE) - # print(3,x.shape) - x = self.ups[i](x) - x_source = self.noise_convs[i](har_source) - # print(4,x_source.shape,har_source.shape,x.shape) - x = x + x_source - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - - return x - - def remove_weight_norm(self): - print('Removing weight norm...') - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - remove_weight_norm(self.conv_pre) - remove_weight_norm(self.conv_post) - - -class DiscriminatorP(torch.nn.Module): - def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False): - super(DiscriminatorP, self).__init__() - self.period = period - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv2d(1, 32, (kernel_size, 1), (stride, 1), padding=(get_padding(5, 1), 0))), - norm_f(Conv2d(32, 128, (kernel_size, 1), (stride, 1), padding=(get_padding(5, 1), 0))), - norm_f(Conv2d(128, 512, (kernel_size, 1), (stride, 1), padding=(get_padding(5, 1), 0))), - norm_f(Conv2d(512, 1024, (kernel_size, 1), (stride, 1), padding=(get_padding(5, 1), 0))), - norm_f(Conv2d(1024, 1024, (kernel_size, 1), 1, padding=(2, 0))), - ]) - self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0))) - - def forward(self, x): - fmap = [] - - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), "reflect") - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class MultiPeriodDiscriminator(torch.nn.Module): - def __init__(self, periods=None): - super(MultiPeriodDiscriminator, self).__init__() - self.periods = periods if periods is not None else [2, 3, 5, 7, 11] - self.discriminators = nn.ModuleList() - for period in self.periods: - self.discriminators.append(DiscriminatorP(period)) - - def forward(self, y, y_hat): - y_d_rs = [] - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - y_d_rs.append(y_d_r) - fmap_rs.append(fmap_r) - y_d_gs.append(y_d_g) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -class DiscriminatorS(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(DiscriminatorS, self).__init__() - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv1d(1, 128, 15, 1, padding=7)), - norm_f(Conv1d(128, 128, 41, 2, groups=4, padding=20)), - norm_f(Conv1d(128, 256, 41, 2, groups=16, padding=20)), - norm_f(Conv1d(256, 512, 
41, 4, groups=16, padding=20)), - norm_f(Conv1d(512, 1024, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(1024, 1024, 41, 1, groups=16, padding=20)), - norm_f(Conv1d(1024, 1024, 5, 1, padding=2)), - ]) - self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1)) - - def forward(self, x): - fmap = [] - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class MultiScaleDiscriminator(torch.nn.Module): - def __init__(self): - super(MultiScaleDiscriminator, self).__init__() - self.discriminators = nn.ModuleList([ - DiscriminatorS(use_spectral_norm=True), - DiscriminatorS(), - DiscriminatorS(), - ]) - self.meanpools = nn.ModuleList([ - AvgPool1d(4, 2, padding=2), - AvgPool1d(4, 2, padding=2) - ]) - - def forward(self, y, y_hat): - y_d_rs = [] - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - if i != 0: - y = self.meanpools[i - 1](y) - y_hat = self.meanpools[i - 1](y_hat) - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - y_d_rs.append(y_d_r) - fmap_rs.append(fmap_r) - y_d_gs.append(y_d_g) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -def feature_loss(fmap_r, fmap_g): - loss = 0 - for dr, dg in zip(fmap_r, fmap_g): - for rl, gl in zip(dr, dg): - loss += torch.mean(torch.abs(rl - gl)) - - return loss * 2 - - -def discriminator_loss(disc_real_outputs, disc_generated_outputs): - loss = 0 - r_losses = [] - g_losses = [] - for dr, dg in zip(disc_real_outputs, disc_generated_outputs): - r_loss = torch.mean((1 - dr) ** 2) - g_loss = torch.mean(dg ** 2) - loss += (r_loss + g_loss) - r_losses.append(r_loss.item()) - g_losses.append(g_loss.item()) - - return loss, r_losses, g_losses - - -def generator_loss(disc_outputs): - loss = 0 - gen_losses = [] - for dg in disc_outputs: - l = torch.mean((1 - dg) ** 2) - gen_losses.append(l) - loss += l - - return loss, gen_losses diff --git a/spaces/lewiswu1209/MockingBird/ppg2mel/train/option.py b/spaces/lewiswu1209/MockingBird/ppg2mel/train/option.py deleted file mode 100644 index f66c600b84e0404c7937bacf8653776ce9be74c0..0000000000000000000000000000000000000000 --- a/spaces/lewiswu1209/MockingBird/ppg2mel/train/option.py +++ /dev/null @@ -1,10 +0,0 @@ -# Default parameters which will be imported by solver -default_hparas = { - 'GRAD_CLIP': 5.0, # Grad. clip threshold - 'PROGRESS_STEP': 100, # Std. output refresh freq. 
- # Decode steps for objective validation (step = ratio*input_txt_len) - 'DEV_STEP_RATIO': 1.2, - # Number of examples (alignment/text) to show in tensorboard - 'DEV_N_EXAMPLE': 4, - 'TB_FLUSH_FREQ': 180 # Update frequency of tensorboard (secs) -} diff --git a/spaces/librarian-bots/Model-Cards-Nomic-Atlas-Map/style.css b/spaces/librarian-bots/Model-Cards-Nomic-Atlas-Map/style.css deleted file mode 100644 index 114adf441e9032febb46bc056b2a8bb651075f0d..0000000000000000000000000000000000000000 --- a/spaces/librarian-bots/Model-Cards-Nomic-Atlas-Map/style.css +++ /dev/null @@ -1,28 +0,0 @@ -body { - padding: 2rem; - font-family: -apple-system, BlinkMacSystemFont, "Arial", sans-serif; -} - -h1 { - font-size: 16px; - margin-top: 0; -} - -p { - color: rgb(107, 114, 128); - font-size: 15px; - margin-bottom: 10px; - margin-top: 5px; -} - -.card { - max-width: 620px; - margin: 0 auto; - padding: 16px; - border: 1px solid lightgray; - border-radius: 16px; -} - -.card p:last-child { - margin-bottom: 0; -} diff --git a/spaces/librarian-bots/huggingface-datasets-semantic-search/app.py b/spaces/librarian-bots/huggingface-datasets-semantic-search/app.py deleted file mode 100644 index d0f2c62d7fcaf31c1a6004e82263b895c5e3f08b..0000000000000000000000000000000000000000 --- a/spaces/librarian-bots/huggingface-datasets-semantic-search/app.py +++ /dev/null @@ -1,179 +0,0 @@ -import os -from functools import lru_cache -from typing import Optional - -import gradio as gr -from dotenv import load_dotenv -from qdrant_client import QdrantClient, models -from sentence_transformers import SentenceTransformer -from huggingface_hub import list_models - -load_dotenv() - -URL = os.getenv("QDRANT_URL") -QDRANT_API_KEY = os.getenv("QDRANT_API_KEY") -sentence_embedding_model = SentenceTransformer("BAAI/bge-large-en") - -print(URL) -print(QDRANT_API_KEY) -collection_name = "dataset_cards" -client = QdrantClient( - url=URL, - api_key=QDRANT_API_KEY, -) - - -# def convert_bytes_to_human_readable_size(bytes_size): -# if bytes_size < 1024**2: -# return f"{bytes_size / 1024:.2f} MB" -# elif bytes_size < 1024**3: -# return f"{bytes_size / (1024 ** 2):.2f} GB" -# else: -# return f"{bytes_size / (1024 ** 3):.2f} TB" - - -def format_time_nicely(time_str): - return time_str.split("T")[0] - - -def format_results(results, show_associated_models=True): - markdown = ( - "

        ✨ Dataset Search Results ✨" - "

        \n\n" - ) - for result in results: - hub_id = result.payload["id"] - download_number = result.payload["downloads"] - lastModified = result.payload["lastModified"] - url = f"https://huggingface.co/datasets/{hub_id}" - header = f"## [{hub_id}]({url})" - markdown += header + "\n" - - markdown += f"**30 Day Download:** {download_number}" - if lastModified: - markdown += f" | **Last Modified:** {format_time_nicely(lastModified)} \n\n" - else: - markdown += "\n\n" - markdown += f"{result.payload['section_text']} \n" - if show_associated_models: - if linked_models := get_models_for_dataset(hub_id): - linked_models = [ - f"[{model}](https://huggingface.co/{model})" - for model in linked_models - ] - markdown += ( - "
        Models trained on this dataset\n\n" - ) - markdown += "- " + "\n- ".join(linked_models) + "\n\n" - markdown += "
        \n\n" - - return markdown - - -@lru_cache(maxsize=100_000) -def get_models_for_dataset(id): - results = list(iter(list_models(filter=f"dataset:{id}"))) - if results: - results = list({result.id for result in results}) - return results - - -@lru_cache(maxsize=200_000) -def search(query: str, limit: Optional[int] = 10, show_linked_models: bool = False): - query_ = sentence_embedding_model.encode( - f"Represent this sentence for searching relevant passages:{query}" - ) - results = client.search( - collection_name="dataset_cards", - query_vector=query_, - limit=limit, - ) - return format_results(results, show_associated_models=show_linked_models) - - -@lru_cache(maxsize=100_000) -def hub_id_qdrant_id(hub_id): - matches = client.scroll( - collection_name="dataset_cards", - scroll_filter=models.Filter( - must=[ - models.FieldCondition(key="id", match=models.MatchValue(value=hub_id)), - ] - ), - limit=1, - with_payload=True, - with_vectors=False, - ) - try: - return matches[0][0].id - except IndexError as e: - raise gr.Error( - f"Hub id {hub_id} not in the database. This could be because it is very new" - " or because it doesn't have much documentation." - ) from e - - -@lru_cache() -def recommend(hub_id, limit: Optional[int] = 10, show_linked_models=False): - positive_id = hub_id_qdrant_id(hub_id) - results = client.recommend( - collection_name=collection_name, positive=[positive_id], limit=limit - ) - return format_results(results, show_associated_models=show_linked_models) - - -def query( - search_term, - search_type, - limit: Optional[int] = 10, - show_linked_models: bool = False, -): - if search_type == "Recommend similar datasets": - return recommend(search_term, limit, show_linked_models) - else: - return search(search_term, limit, show_linked_models) - - -with gr.Blocks() as demo: - gr.Markdown("## 🤗 Semantic Dataset Search") - with gr.Row(): - gr.Markdown( - "This Gradio app allows you to search for datasets based on their" - " descriptions. You can either search for similar datasets to a given" - " dataset or search for datasets based on a query. This is an early proof of concept. Feedback very welcome!" - ) - with gr.Row(): - search_term = gr.Textbox( - value="movie review sentiment", - label="hub id i.e. IMDB or query i.e. movie review sentiment", - ) - - with gr.Row(): - with gr.Row(): - find_similar_btn = gr.Button("Search") - search_type = gr.Radio( - ["Recommend similar datasets", "Semantic Search"], - label="Search type", - value="Semantic Search", - interactive=True, - ) - with gr.Column(): - max_results = gr.Slider( - minimum=1, - maximum=50, - step=1, - value=10, - label="Maximum number of results", - ) - show_linked_models = gr.Checkbox( - label="Show associated models", - default=False, - ) - - results = gr.Markdown() - find_similar_btn.click( - query, [search_term, search_type, max_results, show_linked_models], results - ) - - -demo.launch() diff --git a/spaces/limcheekin/Mistral-7B-OpenOrca-GGUF/index.html b/spaces/limcheekin/Mistral-7B-OpenOrca-GGUF/index.html deleted file mode 100644 index 06f76127811ab0d4d571024d5357a4d7340ec94b..0000000000000000000000000000000000000000 --- a/spaces/limcheekin/Mistral-7B-OpenOrca-GGUF/index.html +++ /dev/null @@ -1,37 +0,0 @@ - - - - Mistral-7B-OpenOrca-GGUF (Q4_K_M) - - -

        Mistral-7B-OpenOrca-GGUF (Q4_K_M)

        -

        - With the llama-cpp-python package, the GGUF model is hosted in a Hugging Face Docker Space and exposed through an OpenAI-compatible API. The Space includes API documentation to make integration straightforward. -
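        A minimal sketch of calling such an endpoint (assuming the Space exposes the standard llama-cpp-python /v1 routes; the base URL and model name below are placeholders, not taken from this file):

        from openai import OpenAI

        # Point the regular OpenAI client at the Space's OpenAI-compatible server.
        client = OpenAI(base_url="https://<your-space-url>/v1", api_key="not-needed")
        reply = client.chat.completions.create(
            model="mistral-7b-openorca",  # placeholder model name
            messages=[{"role": "user", "content": "Say hello in one sentence."}],
        )
        print(reply.choices[0].message.content)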

        - -

        - If you find this resource valuable, please consider starring the Space: your support helps the application for a community GPU grant, which would further improve the capabilities and accessibility of this Space. -

        - - diff --git a/spaces/limingcv/AlignDet/pretrain/selfsup_retinanet_mstrain-soft-teacher_sampler-2048_temp0.5/retinanet.py b/spaces/limingcv/AlignDet/pretrain/selfsup_retinanet_mstrain-soft-teacher_sampler-2048_temp0.5/retinanet.py deleted file mode 100644 index 73cd03a03cdd82844f5698560fe38f2c55c33df3..0000000000000000000000000000000000000000 --- a/spaces/limingcv/AlignDet/pretrain/selfsup_retinanet_mstrain-soft-teacher_sampler-2048_temp0.5/retinanet.py +++ /dev/null @@ -1,366 +0,0 @@ -model = dict( - type='SelfSupDetector', - backbone=dict( - type='SelfSupRetinaNet', - backbone=dict( - type='ResNet', - depth=50, - num_stages=4, - out_indices=(0, 1, 2, 3), - frozen_stages=4, - norm_cfg=dict(type='BN', requires_grad=False), - norm_eval=True, - style='pytorch', - init_cfg=dict( - type='Pretrained', checkpoint='torchvision://resnet50')), - neck=dict( - type='FPN', - in_channels=[256, 512, 1024, 2048], - out_channels=256, - start_level=1, - add_extra_convs='on_input', - num_outs=5), - bbox_head=dict( - type='SelfSupRetinaHead', - num_classes=256, - in_channels=256, - stacked_convs=4, - feat_channels=256, - init_cfg=dict( - type='Normal', layer='Conv2d', std=0.01, override=None), - loss_cls=dict( - type='ContrastiveLoss', loss_weight=1.0, temperature=0.5), - loss_bbox=dict(type='L1Loss', loss_weight=1.0)), - train_cfg=dict( - assigner=dict( - type='MaxIoUAssigner', - pos_iou_thr=0.5, - neg_iou_thr=0.4, - min_pos_iou=0, - ignore_iof_thr=-1, - gpu_assign_thr=-1), - sampler=dict( - type='RandomSampler', - num=2048, - pos_fraction=1.0, - neg_pos_ub=-1, - add_gt_as_proposals=False), - allowed_border=-1, - pos_weight=1, - debug=False))) -train_dataset_type = 'MultiViewCocoDataset' -test_dataset_type = 'CocoDataset' -data_root = 'data/coco/' -classes = ['selective_search'] -img_norm_cfg = dict( - mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True) -load_pipeline = [ - dict(type='LoadImageFromFile'), - dict(type='LoadAnnotations', with_bbox=True, with_mask=False), - dict(type='SelectTopKProposals', topk=80) -] -train_pipeline1 = [ - dict( - type='Resize', - img_scale=[(1333, 640), (1333, 672), (1333, 704), (1333, 736), - (1333, 768), (1333, 800)], - multiscale_mode='value', - keep_ratio=True), - dict(type='FilterAnnotations', min_gt_bbox_wh=(0.01, 0.01)), - dict(type='Pad', size_divisor=32), - dict(type='RandFlip', flip_ratio=0.5), - dict( - type='OneOf', - transforms=[ - dict(type='Identity'), - dict(type='AutoContrast'), - dict(type='RandEqualize'), - dict(type='RandSolarize'), - dict(type='RandColor'), - dict(type='RandContrast'), - dict(type='RandBrightness'), - dict(type='RandSharpness'), - dict(type='RandPosterize') - ]), - dict( - type='Normalize', - mean=[123.675, 116.28, 103.53], - std=[58.395, 57.12, 57.375], - to_rgb=True), - dict(type='DefaultFormatBundle'), - dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels']) -] -train_pipeline2 = [ - dict( - type='Resize', - img_scale=[(1333, 640), (1333, 672), (1333, 704), (1333, 736), - (1333, 768), (1333, 800)], - multiscale_mode='value', - keep_ratio=True), - dict(type='FilterAnnotations', min_gt_bbox_wh=(0.01, 0.01)), - dict(type='Pad', size_divisor=32), - dict(type='RandFlip', flip_ratio=0.5), - dict( - type='OneOf', - transforms=[ - dict(type='Identity'), - dict(type='AutoContrast'), - dict(type='RandEqualize'), - dict(type='RandSolarize'), - dict(type='RandColor'), - dict(type='RandContrast'), - dict(type='RandBrightness'), - dict(type='RandSharpness'), - dict(type='RandPosterize') - ]), - dict( - 
type='Normalize', - mean=[123.675, 116.28, 103.53], - std=[58.395, 57.12, 57.375], - to_rgb=True), - dict(type='DefaultFormatBundle'), - dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels']) -] -test_pipeline = [ - dict(type='LoadImageFromFile'), - dict( - type='MultiScaleFlipAug', - img_scale=(1333, 800), - flip=False, - transforms=[ - dict(type='Resize', keep_ratio=True), - dict(type='RandomFlip'), - dict( - type='Normalize', - mean=[123.675, 116.28, 103.53], - std=[58.395, 57.12, 57.375], - to_rgb=True), - dict(type='Pad', size_divisor=32), - dict(type='ImageToTensor', keys=['img']), - dict(type='Collect', keys=['img']) - ]) -] -data = dict( - samples_per_gpu=2, - workers_per_gpu=2, - train=dict( - type='MultiViewCocoDataset', - dataset=dict( - type='CocoDataset', - classes=['selective_search'], - ann_file= - 'data/coco/filtered_proposals/train2017_ratio3size0008@0.5.json', - img_prefix='data/coco/train2017/', - pipeline=[ - dict(type='LoadImageFromFile'), - dict(type='LoadAnnotations', with_bbox=True, with_mask=False), - dict(type='SelectTopKProposals', topk=80) - ]), - num_views=2, - pipelines=[[{ - 'type': - 'Resize', - 'img_scale': [(1333, 640), (1333, 672), (1333, 704), (1333, 736), - (1333, 768), (1333, 800)], - 'multiscale_mode': - 'value', - 'keep_ratio': - True - }, { - 'type': 'FilterAnnotations', - 'min_gt_bbox_wh': (0.01, 0.01) - }, { - 'type': 'Pad', - 'size_divisor': 32 - }, { - 'type': 'RandFlip', - 'flip_ratio': 0.5 - }, { - 'type': - 'OneOf', - 'transforms': [{ - 'type': 'Identity' - }, { - 'type': 'AutoContrast' - }, { - 'type': 'RandEqualize' - }, { - 'type': 'RandSolarize' - }, { - 'type': 'RandColor' - }, { - 'type': 'RandContrast' - }, { - 'type': 'RandBrightness' - }, { - 'type': 'RandSharpness' - }, { - 'type': 'RandPosterize' - }] - }, { - 'type': 'Normalize', - 'mean': [123.675, 116.28, 103.53], - 'std': [58.395, 57.12, 57.375], - 'to_rgb': True - }, { - 'type': 'DefaultFormatBundle' - }, { - 'type': 'Collect', - 'keys': ['img', 'gt_bboxes', 'gt_labels'] - }], - [{ - 'type': - 'Resize', - 'img_scale': [(1333, 640), (1333, 672), (1333, 704), - (1333, 736), (1333, 768), (1333, 800)], - 'multiscale_mode': - 'value', - 'keep_ratio': - True - }, { - 'type': 'FilterAnnotations', - 'min_gt_bbox_wh': (0.01, 0.01) - }, { - 'type': 'Pad', - 'size_divisor': 32 - }, { - 'type': 'RandFlip', - 'flip_ratio': 0.5 - }, { - 'type': - 'OneOf', - 'transforms': [{ - 'type': 'Identity' - }, { - 'type': 'AutoContrast' - }, { - 'type': 'RandEqualize' - }, { - 'type': 'RandSolarize' - }, { - 'type': 'RandColor' - }, { - 'type': 'RandContrast' - }, { - 'type': 'RandBrightness' - }, { - 'type': 'RandSharpness' - }, { - 'type': 'RandPosterize' - }] - }, { - 'type': 'Normalize', - 'mean': [123.675, 116.28, 103.53], - 'std': [58.395, 57.12, 57.375], - 'to_rgb': True - }, { - 'type': 'DefaultFormatBundle' - }, { - 'type': 'Collect', - 'keys': ['img', 'gt_bboxes', 'gt_labels'] - }]]), - val=dict( - type='CocoDataset', - classes=['selective_search'], - ann_file='data/coco/annotations/instances_val2017.json', - img_prefix='data/coco/val2017/', - pipeline=[ - dict(type='LoadImageFromFile'), - dict( - type='MultiScaleFlipAug', - img_scale=(1333, 800), - flip=False, - transforms=[ - dict(type='Resize', keep_ratio=True), - dict(type='RandomFlip'), - dict( - type='Normalize', - mean=[123.675, 116.28, 103.53], - std=[58.395, 57.12, 57.375], - to_rgb=True), - dict(type='Pad', size_divisor=32), - dict(type='ImageToTensor', keys=['img']), - dict(type='Collect', keys=['img']) - ]) - ]), - 
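    # Note: the test entry below mirrors val; both evaluate on COCO val2017
    # with the same MultiScaleFlipAug pipeline at (1333, 800) and no flipping.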
test=dict( - type='CocoDataset', - classes=['selective_search'], - ann_file='data/coco/annotations/instances_val2017.json', - img_prefix='data/coco/val2017/', - pipeline=[ - dict(type='LoadImageFromFile'), - dict( - type='MultiScaleFlipAug', - img_scale=(1333, 800), - flip=False, - transforms=[ - dict(type='Resize', keep_ratio=True), - dict(type='RandomFlip'), - dict( - type='Normalize', - mean=[123.675, 116.28, 103.53], - std=[58.395, 57.12, 57.375], - to_rgb=True), - dict(type='Pad', size_divisor=32), - dict(type='ImageToTensor', keys=['img']), - dict(type='Collect', keys=['img']) - ]) - ])) -evaluation = dict(interval=65535) -optimizer = dict(type='SGD', lr=0.01, momentum=0.9, weight_decay=0.0001) -optimizer_config = dict(grad_clip=None) -lr_config = dict( - policy='step', - warmup='linear', - warmup_iters=500, - warmup_ratio=0.001, - step=[8, 11]) -runner = dict(type='EpochBasedRunner', max_epochs=12) -checkpoint_config = dict(interval=1) -log_config = dict(interval=50, hooks=[dict(type='TextLoggerHook')]) -custom_hooks = [ - dict(type='MomentumUpdateHook'), - dict( - type='MMDetWandbHook', - init_kwargs=dict(project='I2B', group='pretrain'), - interval=50, - num_eval_images=0, - log_checkpoint=False) -] -dist_params = dict(backend='nccl') -log_level = 'INFO' -load_from = None -resume_from = None -workflow = [('train', 1)] -opencv_num_threads = 0 -mp_start_method = 'fork' -auto_scale_lr = dict(enable=False, base_batch_size=16) -custom_imports = dict( - imports=[ - 'mmselfsup.datasets.pipelines', - 'selfsup.core.hook.momentum_update_hook', - 'selfsup.datasets.pipelines.selfsup_pipelines', - 'selfsup.datasets.pipelines.rand_aug', - 'selfsup.datasets.single_view_coco', - 'selfsup.datasets.multi_view_coco', - 'selfsup.models.losses.contrastive_loss', - 'selfsup.models.dense_heads.fcos_head', - 'selfsup.models.dense_heads.retina_head', - 'selfsup.models.dense_heads.detr_head', - 'selfsup.models.dense_heads.deformable_detr_head', - 'selfsup.models.roi_heads.bbox_heads.convfc_bbox_head', - 'selfsup.models.roi_heads.standard_roi_head', - 'selfsup.models.detectors.selfsup_detector', - 'selfsup.models.detectors.selfsup_fcos', - 'selfsup.models.detectors.selfsup_detr', - 'selfsup.models.detectors.selfsup_deformable_detr', - 'selfsup.models.detectors.selfsup_retinanet', - 'selfsup.models.detectors.selfsup_mask_rcnn', - 'selfsup.core.bbox.assigners.hungarian_assigner', - 'selfsup.core.bbox.match_costs.match_cost' - ], - allow_failed_imports=False) -work_dir = 'work_dirs/selfsup_retinanet_mstrain-soft-teacher_sampler-2048_temp0.5' -auto_resume = False -gpu_ids = range(0, 8) diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/36 China Town Man 3 Full Movie In Hindi Hd 720p Free Download [HOT].md b/spaces/lincquiQcaudo/Top-20-Diffusion/36 China Town Man 3 Full Movie In Hindi Hd 720p Free Download [HOT].md deleted file mode 100644 index f390422412004bd57ef5f63a360da0aacd9f5352..0000000000000000000000000000000000000000 --- a/spaces/lincquiQcaudo/Top-20-Diffusion/36 China Town Man 3 Full Movie In Hindi Hd 720p Free Download [HOT].md +++ /dev/null @@ -1,6 +0,0 @@ -

        36 China Town Man 3 Full Movie In Hindi Hd 720p Free Download


        Download Ziphttps://bytlly.com/2uGwXx



        -
        -36 China Town Full Movie online with release date, trailer, cast and songs. Find out where to watch or stream this Hindi comedy movie on DIgit Binge. If you are looking for 36 China Town (Full Movie) with Hindi voice acting, you can watch this amazing Chinese drama. The film is about life in China, where we meet locals who have immigrated to the city and their life in Hong Kong, which is very different from their hometown. This Hindi-voiced film is about a man named Su (Zhao Wei) who is from a poor area. Su lives in a neighborhood where wealthy people live in luxurious homes. 8a78ff9644
        -
        -
        -

        diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/Assimil Russian Without Toil (1951) PDF MP3 5 Extra Quality.md b/spaces/lincquiQcaudo/Top-20-Diffusion/Assimil Russian Without Toil (1951) PDF MP3 5 Extra Quality.md deleted file mode 100644 index 63c35d7afca8a17f9f6803457ce75dcf7c4e9890..0000000000000000000000000000000000000000 --- a/spaces/lincquiQcaudo/Top-20-Diffusion/Assimil Russian Without Toil (1951) PDF MP3 5 Extra Quality.md +++ /dev/null @@ -1,11 +0,0 @@ -

        Assimil Russian Without Toil (1951) PDF MP3 5


        Download ✫✫✫ https://bytlly.com/2uGyCI



        - -My name is Andrey Kuzmenko and I will read Russian to you without difficulty Assimil. ... You can download the PDF-Source and the original recording from 1951 (download ... I found it so interesting and exciting that I immediately decided to publish it. It is very important that people understand each other better. And I love to study. -Learn Russian -... read in your own language and understand its meaning, -... and feel it like a native language, -... feel like a full member of it. -... read without straining 8a78ff9644
        -
        -
        -

        diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/Biologiadelasplantasravenpdfespaol.md b/spaces/lincquiQcaudo/Top-20-Diffusion/Biologiadelasplantasravenpdfespaol.md deleted file mode 100644 index c43f2d5bdb2ad9513f9dafca01e58141bd8cf856..0000000000000000000000000000000000000000 --- a/spaces/lincquiQcaudo/Top-20-Diffusion/Biologiadelasplantasravenpdfespaol.md +++ /dev/null @@ -1,6 +0,0 @@ -

        biologiadelasplantasravenpdfespaol


        DOWNLOAD 🌟 https://bytlly.com/2uGydT



        - -biologiadelasplantasravenpdfespaol · hachiko dog movie dual audio english to hindi download · Titli hindi movie torrent · Glary utilities license ... 4d29de3e1b
        -
        -
        -

        diff --git a/spaces/lincquiQcaudo/Top-20-Diffusion/Bosch Ve Fuel Injection Pump Manuall WORK.md b/spaces/lincquiQcaudo/Top-20-Diffusion/Bosch Ve Fuel Injection Pump Manuall WORK.md deleted file mode 100644 index 4c0ffb2210320c19bf6b4c8428c8eb68636ca931..0000000000000000000000000000000000000000 --- a/spaces/lincquiQcaudo/Top-20-Diffusion/Bosch Ve Fuel Injection Pump Manuall WORK.md +++ /dev/null @@ -1,7 +0,0 @@ - -

        The Aisin four-speed gearbox's job is to coordinate between the engine and the transmission, and with a low-maintenance design, the pump itself, along with a supplemental electrical ignition component and spark plug wires, should be able to outlast the engine itself. Bosch makes the eight-speed gearbox used by Porsche and a number of other automakers, including Aston Martin, Bentley, and Audi. The eight-speed's shift points are electronically adjustable, and it boasts the most precise shifting performance of any automatic transmission Porsche uses.

        -

        Bosch Ve Fuel Injection Pump Manuall


        Download ✑ ✑ ✑ https://bytlly.com/2uGy6p



        -

        When you can afford a top-of-the-line engine, transmission, and battery, you get the best fuel pump. You get a Bosch fuel pump. Bosch is the number one pump manufacturer in the world. If you want a quality Bosch, you can have it. Thats just the simple reality of automotive fuel systems. One of the most common problems we see in service is pump bearing failure, which was not an issue when the pump was still in the engine.

        -

        Hes had a rather brief time at Napiers, but was fast to introduce new technologies to the World. There really was no concern, as it was Bosch who had just installed the VP44 and VPP44 pumps. At cars and coffee, we get a lot of inquiries from older Porsches, often with plugs that are damaged from many years of extended cold-climate use. In order to get the engine running as quickly as possible, with the goal of allowing the engine a few minutes to warm up to operate reliably, remove the plugs and spread a little oil into the fuel filter area to help wick oil into the filter. Youll want to check the plugs frequently, as any carbon buildup can be further burnt off during operation.

        899543212b
        -
        -
        \ No newline at end of file diff --git a/spaces/linfanluntan/Grounded-SAM/GroundingDINO/groundingdino/models/GroundingDINO/backbone/position_encoding.py b/spaces/linfanluntan/Grounded-SAM/GroundingDINO/groundingdino/models/GroundingDINO/backbone/position_encoding.py deleted file mode 100644 index eac7e896bbe85a670824bfe8ef487d0535d5bd99..0000000000000000000000000000000000000000 --- a/spaces/linfanluntan/Grounded-SAM/GroundingDINO/groundingdino/models/GroundingDINO/backbone/position_encoding.py +++ /dev/null @@ -1,186 +0,0 @@ -# ------------------------------------------------------------------------ -# Grounding DINO -# url: https://github.com/IDEA-Research/GroundingDINO -# Copyright (c) 2023 IDEA. All Rights Reserved. -# Licensed under the Apache License, Version 2.0 [see LICENSE for details] -# ------------------------------------------------------------------------ -# DINO -# Copyright (c) 2022 IDEA. All Rights Reserved. -# Licensed under the Apache License, Version 2.0 [see LICENSE for details] -# ------------------------------------------------------------------------ -# Conditional DETR -# Copyright (c) 2021 Microsoft. All Rights Reserved. -# Licensed under the Apache License, Version 2.0 [see LICENSE for details] -# ------------------------------------------------------------------------ -# Copied from DETR (https://github.com/facebookresearch/detr) -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved. -# ------------------------------------------------------------------------ - -""" -Various positional encodings for the transformer. -""" -import math - -import torch -from torch import nn - -from groundingdino.util.misc import NestedTensor - - -class PositionEmbeddingSine(nn.Module): - """ - This is a more standard version of the position embedding, very similar to the one - used by the Attention is all you need paper, generalized to work on images. - """ - - def __init__(self, num_pos_feats=64, temperature=10000, normalize=False, scale=None): - super().__init__() - self.num_pos_feats = num_pos_feats - self.temperature = temperature - self.normalize = normalize - if scale is not None and normalize is False: - raise ValueError("normalize should be True if scale is passed") - if scale is None: - scale = 2 * math.pi - self.scale = scale - - def forward(self, tensor_list: NestedTensor): - x = tensor_list.tensors - mask = tensor_list.mask - assert mask is not None - not_mask = ~mask - y_embed = not_mask.cumsum(1, dtype=torch.float32) - x_embed = not_mask.cumsum(2, dtype=torch.float32) - if self.normalize: - eps = 1e-6 - # if os.environ.get("SHILONG_AMP", None) == '1': - # eps = 1e-4 - # else: - # eps = 1e-6 - y_embed = y_embed / (y_embed[:, -1:, :] + eps) * self.scale - x_embed = x_embed / (x_embed[:, :, -1:] + eps) * self.scale - - dim_t = torch.arange(self.num_pos_feats, dtype=torch.float32, device=x.device) - dim_t = self.temperature ** (2 * (dim_t // 2) / self.num_pos_feats) - - pos_x = x_embed[:, :, :, None] / dim_t - pos_y = y_embed[:, :, :, None] / dim_t - pos_x = torch.stack( - (pos_x[:, :, :, 0::2].sin(), pos_x[:, :, :, 1::2].cos()), dim=4 - ).flatten(3) - pos_y = torch.stack( - (pos_y[:, :, :, 0::2].sin(), pos_y[:, :, :, 1::2].cos()), dim=4 - ).flatten(3) - pos = torch.cat((pos_y, pos_x), dim=3).permute(0, 3, 1, 2) - return pos - - -class PositionEmbeddingSineHW(nn.Module): - """ - This is a more standard version of the position embedding, very similar to the one - used by the Attention is all you need paper, generalized to work on images. 
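    The HW variant differs from PositionEmbeddingSine in that it applies separate
    temperature values (temperatureW / temperatureH) to the x- and y-axis frequencies.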
- """ - - def __init__( - self, num_pos_feats=64, temperatureH=10000, temperatureW=10000, normalize=False, scale=None - ): - super().__init__() - self.num_pos_feats = num_pos_feats - self.temperatureH = temperatureH - self.temperatureW = temperatureW - self.normalize = normalize - if scale is not None and normalize is False: - raise ValueError("normalize should be True if scale is passed") - if scale is None: - scale = 2 * math.pi - self.scale = scale - - def forward(self, tensor_list: NestedTensor): - x = tensor_list.tensors - mask = tensor_list.mask - assert mask is not None - not_mask = ~mask - y_embed = not_mask.cumsum(1, dtype=torch.float32) - x_embed = not_mask.cumsum(2, dtype=torch.float32) - - # import ipdb; ipdb.set_trace() - - if self.normalize: - eps = 1e-6 - y_embed = y_embed / (y_embed[:, -1:, :] + eps) * self.scale - x_embed = x_embed / (x_embed[:, :, -1:] + eps) * self.scale - - dim_tx = torch.arange(self.num_pos_feats, dtype=torch.float32, device=x.device) - dim_tx = self.temperatureW ** (2 * (torch.div(dim_tx, 2, rounding_mode='floor')) / self.num_pos_feats) - pos_x = x_embed[:, :, :, None] / dim_tx - - dim_ty = torch.arange(self.num_pos_feats, dtype=torch.float32, device=x.device) - dim_ty = self.temperatureH ** (2 * (torch.div(dim_ty, 2, rounding_mode='floor')) / self.num_pos_feats) - pos_y = y_embed[:, :, :, None] / dim_ty - - pos_x = torch.stack( - (pos_x[:, :, :, 0::2].sin(), pos_x[:, :, :, 1::2].cos()), dim=4 - ).flatten(3) - pos_y = torch.stack( - (pos_y[:, :, :, 0::2].sin(), pos_y[:, :, :, 1::2].cos()), dim=4 - ).flatten(3) - pos = torch.cat((pos_y, pos_x), dim=3).permute(0, 3, 1, 2) - - # import ipdb; ipdb.set_trace() - - return pos - - -class PositionEmbeddingLearned(nn.Module): - """ - Absolute pos embedding, learned. - """ - - def __init__(self, num_pos_feats=256): - super().__init__() - self.row_embed = nn.Embedding(50, num_pos_feats) - self.col_embed = nn.Embedding(50, num_pos_feats) - self.reset_parameters() - - def reset_parameters(self): - nn.init.uniform_(self.row_embed.weight) - nn.init.uniform_(self.col_embed.weight) - - def forward(self, tensor_list: NestedTensor): - x = tensor_list.tensors - h, w = x.shape[-2:] - i = torch.arange(w, device=x.device) - j = torch.arange(h, device=x.device) - x_emb = self.col_embed(i) - y_emb = self.row_embed(j) - pos = ( - torch.cat( - [ - x_emb.unsqueeze(0).repeat(h, 1, 1), - y_emb.unsqueeze(1).repeat(1, w, 1), - ], - dim=-1, - ) - .permute(2, 0, 1) - .unsqueeze(0) - .repeat(x.shape[0], 1, 1, 1) - ) - return pos - - -def build_position_encoding(args): - N_steps = args.hidden_dim // 2 - if args.position_embedding in ("v2", "sine"): - # TODO find a better way of exposing other arguments - position_embedding = PositionEmbeddingSineHW( - N_steps, - temperatureH=args.pe_temperatureH, - temperatureW=args.pe_temperatureW, - normalize=True, - ) - elif args.position_embedding in ("v3", "learned"): - position_embedding = PositionEmbeddingLearned(N_steps) - else: - raise ValueError(f"not supported {args.position_embedding}") - - return position_embedding diff --git a/spaces/lixq/bingo61/src/components/ui/tooltip.tsx b/spaces/lixq/bingo61/src/components/ui/tooltip.tsx deleted file mode 100644 index af1d48beb90dd5ae311796539843700871052cae..0000000000000000000000000000000000000000 --- a/spaces/lixq/bingo61/src/components/ui/tooltip.tsx +++ /dev/null @@ -1,30 +0,0 @@ -'use client' - -import * as React from 'react' -import * as TooltipPrimitive from '@radix-ui/react-tooltip' - -import { cn } from '@/lib/utils' - -const 
TooltipProvider = TooltipPrimitive.Provider - -const Tooltip = TooltipPrimitive.Root - -const TooltipTrigger = TooltipPrimitive.Trigger - -const TooltipContent = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, sideOffset = 4, ...props }, ref) => ( - -)) -TooltipContent.displayName = TooltipPrimitive.Content.displayName - -export { Tooltip, TooltipTrigger, TooltipContent, TooltipProvider } diff --git a/spaces/lkeab/transfiner/configs/common/models/cascade_rcnn.py b/spaces/lkeab/transfiner/configs/common/models/cascade_rcnn.py deleted file mode 100644 index c7372a801dc00d7fec4db8cda8c2612ce281d48a..0000000000000000000000000000000000000000 --- a/spaces/lkeab/transfiner/configs/common/models/cascade_rcnn.py +++ /dev/null @@ -1,36 +0,0 @@ -from detectron2.config import LazyCall as L -from detectron2.layers import ShapeSpec -from detectron2.modeling.box_regression import Box2BoxTransform -from detectron2.modeling.matcher import Matcher -from detectron2.modeling.roi_heads import FastRCNNOutputLayers, FastRCNNConvFCHead, CascadeROIHeads - -from .mask_rcnn_fpn import model - -# arguments that don't exist for Cascade R-CNN -[model.roi_heads.pop(k) for k in ["box_head", "box_predictor", "proposal_matcher"]] - -model.roi_heads.update( - _target_=CascadeROIHeads, - box_heads=[ - L(FastRCNNConvFCHead)( - input_shape=ShapeSpec(channels=256, height=7, width=7), - conv_dims=[], - fc_dims=[1024, 1024], - ) - for k in range(3) - ], - box_predictors=[ - L(FastRCNNOutputLayers)( - input_shape=ShapeSpec(channels=1024), - test_score_thresh=0.05, - box2box_transform=L(Box2BoxTransform)(weights=(w1, w1, w2, w2)), - cls_agnostic_bbox_reg=True, - num_classes="${...num_classes}", - ) - for (w1, w2) in [(10, 5), (20, 10), (30, 15)] - ], - proposal_matchers=[ - L(Matcher)(thresholds=[th], labels=[0, 1], allow_low_quality_matches=False) - for th in [0.5, 0.6, 0.7] - ], -) diff --git a/spaces/lllqqq/so-vits-svc-models-pcr/modules/F0Predictor/F0Predictor.py b/spaces/lllqqq/so-vits-svc-models-pcr/modules/F0Predictor/F0Predictor.py deleted file mode 100644 index 69d8a9bd28729e33d092a5af8e2ce544c1330c3b..0000000000000000000000000000000000000000 --- a/spaces/lllqqq/so-vits-svc-models-pcr/modules/F0Predictor/F0Predictor.py +++ /dev/null @@ -1,16 +0,0 @@ -class F0Predictor(object): - def compute_f0(self,wav,p_len): - ''' - input: wav:[signal_length] - p_len:int - output: f0:[signal_length//hop_length] - ''' - pass - - def compute_f0_uv(self,wav,p_len): - ''' - input: wav:[signal_length] - p_len:int - output: f0:[signal_length//hop_length],uv:[signal_length//hop_length] - ''' - pass \ No newline at end of file diff --git a/spaces/lotrlol/Spotify-Recommendation-System/README.md b/spaces/lotrlol/Spotify-Recommendation-System/README.md deleted file mode 100644 index 6d1eb5f853be2c6c5636f325d0004b175e1ee111..0000000000000000000000000000000000000000 --- a/spaces/lotrlol/Spotify-Recommendation-System/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Spotify Recommendation System -emoji: 📊 -colorFrom: red -colorTo: blue -sdk: streamlit -sdk_version: 1.10.0 -app_file: main.py -pinned: false -duplicated_from: Longliveruby/Spotify-Recommendation-System ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/lulmer/paraphraser_ai/app_gradio.py b/spaces/lulmer/paraphraser_ai/app_gradio.py deleted file mode 100644 index 5a8e56607a272193edd9395220a1a00d01241b1f..0000000000000000000000000000000000000000 --- 
a/spaces/lulmer/paraphraser_ai/app_gradio.py +++ /dev/null @@ -1,21 +0,0 @@ -import gradio as gr -import os -os.environ['NO_PROXY'] = '127.0.0.1' - -class DummyAugmenter: - def __init__(self, in_lang="en", out_lang="ru") -> None: - pass - def back_translate(self,text): - return "La marche des vertueux est seumée d'obstacles" - -def greet(name): - return "Hello " + name + "!" - -with gr.Blocks() as demo: - name = gr.Textbox(label="Please type the text to paraphrase") - output = gr.Textbox(label="Output Box") - - greet_btn = gr.Button("Greet") - greet_btn.click(fn=greet, inputs=name, outputs=output) - -demo.launch() \ No newline at end of file diff --git a/spaces/magicr/BuboGPT/bubogpt/models/__init__.py b/spaces/magicr/BuboGPT/bubogpt/models/__init__.py deleted file mode 100644 index d1c90893dc624306e08f660c12e0d553ab9ea927..0000000000000000000000000000000000000000 --- a/spaces/magicr/BuboGPT/bubogpt/models/__init__.py +++ /dev/null @@ -1,200 +0,0 @@ -""" - Copyright (c) 2022, salesforce.com, inc. - All rights reserved. - SPDX-License-Identifier: BSD-3-Clause - For full license text, see the LICENSE_Lavis file in the repo root or https://opensource.org/licenses/BSD-3-Clause -""" - -import logging -import torch -from omegaconf import OmegaConf - -from bubogpt.common.registry import registry -from bubogpt.models.base_model import BaseModel -from bubogpt.models.blip2 import Blip2Base -from bubogpt.processors.base_processor import BaseProcessor -from bubogpt.models.mm_gpt4 import MMGPT4 - - -__all__ = [ - "load_model", - "BaseModel", - "Blip2Base", - "MMGPT4" -] - - -def load_model(name, model_type, is_eval=False, device="cpu", checkpoint=None): - """ - Load supported models. - - To list all available models and types in registry: - >>> from bubogpt.models import model_zoo - >>> print(model_zoo) - - Args: - name (str): name of the model. - model_type (str): type of the model. - is_eval (bool): whether the model is in eval mode. Default: False. - device (str): device to use. Default: "cpu". - checkpoint (str): path or to checkpoint. Default: None. - Note that expecting the checkpoint to have the same keys in state_dict as the model. - - Returns: - model (torch.nn.Module): model. - """ - - model = registry.get_model_class(name).from_pretrained(model_type=model_type) - - if checkpoint is not None: - model.load_checkpoint(checkpoint) - - if is_eval: - model.eval() - - if device == "cpu": - model = model.float() - - return model.to(device) - - -def load_preprocess(config): - """ - Load preprocessor configs and construct preprocessors. - - If no preprocessor is specified, return BaseProcessor, which does not do any preprocessing. - - Args: - config (dict): preprocessor configs. - - Returns: - vis_processors (dict): preprocessors for visual inputs. - txt_processors (dict): preprocessors for text inputs. - - Key is "train" or "eval" for processors used in training and evaluation respectively. 
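        If a processor config is missing, a no-op BaseProcessor is returned for that slot.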
- """ - - def _build_proc_from_cfg(cfg): - return ( - registry.get_processor_class(cfg.name).from_config(cfg) - if cfg is not None - else BaseProcessor() - ) - - vis_processors = dict() - txt_processors = dict() - - vis_proc_cfg = config.get("vis_processor") - txt_proc_cfg = config.get("text_processor") - - if vis_proc_cfg is not None: - vis_train_cfg = vis_proc_cfg.get("train") - vis_eval_cfg = vis_proc_cfg.get("eval") - else: - vis_train_cfg = None - vis_eval_cfg = None - - vis_processors["train"] = _build_proc_from_cfg(vis_train_cfg) - vis_processors["eval"] = _build_proc_from_cfg(vis_eval_cfg) - - if txt_proc_cfg is not None: - txt_train_cfg = txt_proc_cfg.get("train") - txt_eval_cfg = txt_proc_cfg.get("eval") - else: - txt_train_cfg = None - txt_eval_cfg = None - - txt_processors["train"] = _build_proc_from_cfg(txt_train_cfg) - txt_processors["eval"] = _build_proc_from_cfg(txt_eval_cfg) - - return vis_processors, txt_processors - - -def load_model_and_preprocess(name, model_type, is_eval=False, device="cpu"): - """ - Load model and its related preprocessors. - - List all available models and types in registry: - >>> from bubogpt.models import model_zoo - >>> print(model_zoo) - - Args: - name (str): name of the model. - model_type (str): type of the model. - is_eval (bool): whether the model is in eval mode. Default: False. - device (str): device to use. Default: "cpu". - - Returns: - model (torch.nn.Module): model. - vis_processors (dict): preprocessors for visual inputs. - txt_processors (dict): preprocessors for text inputs. - """ - model_cls = registry.get_model_class(name) - - # load model - model = model_cls.from_pretrained(model_type=model_type) - - if is_eval: - model.eval() - - # load preprocess - cfg = OmegaConf.load(model_cls.default_config_path(model_type)) - if cfg is not None: - preprocess_cfg = cfg.preprocess - - vis_processors, txt_processors = load_preprocess(preprocess_cfg) - else: - vis_processors, txt_processors = None, None - logging.info( - f"""No default preprocess for model {name} ({model_type}). - This can happen if the model is not finetuned on downstream datasets, - or it is not intended for direct use without finetuning. - """ - ) - - if device == "cpu" or device == torch.device("cpu"): - model = model.float() - - return model.to(device), vis_processors, txt_processors - - -class ModelZoo: - """ - A utility class to create string representation of available model architectures and types. 
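    Instances are iterable over (architecture, types) pairs, and len() returns the
    total number of registered model types.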
- - >>> from bubogpt.models import model_zoo - >>> # list all available models - >>> print(model_zoo) - >>> # show total number of models - >>> print(len(model_zoo)) - """ - - def __init__(self) -> None: - self.model_zoo = { - k: list(v.PRETRAINED_MODEL_CONFIG_DICT.keys()) - for k, v in registry.mapping["model_name_mapping"].items() - } - - def __str__(self) -> str: - return ( - "=" * 50 - + "\n" - + f"{'Architectures':<30} {'Types'}\n" - + "=" * 50 - + "\n" - + "\n".join( - [ - f"{name:<30} {', '.join(types)}" - for name, types in self.model_zoo.items() - ] - ) - ) - - def __iter__(self): - return iter(self.model_zoo.items()) - - def __len__(self): - return sum([len(v) for v in self.model_zoo.values()]) - - -model_zoo = ModelZoo() diff --git a/spaces/magicr/BuboGPT/bubogpt/models/modeling_llama.py b/spaces/magicr/BuboGPT/bubogpt/models/modeling_llama.py deleted file mode 100644 index 12d980e189d902fb1a6d9ea05dc3ca91959b1c8c..0000000000000000000000000000000000000000 --- a/spaces/magicr/BuboGPT/bubogpt/models/modeling_llama.py +++ /dev/null @@ -1,755 +0,0 @@ -# This script is based on https://github.com/huggingface/transformers/blob/main/src/transformers/models/llama/modeling_llama.py - -""" PyTorch LLaMA model.""" -import math -from typing import List, Optional, Tuple, Union - -import torch -import torch.utils.checkpoint -from torch import nn -from torch.nn import BCEWithLogitsLoss, CrossEntropyLoss, MSELoss - -from transformers.activations import ACT2FN -from transformers.modeling_outputs import BaseModelOutputWithPast, CausalLMOutputWithPast, SequenceClassifierOutputWithPast -from transformers.modeling_utils import PreTrainedModel -from transformers.utils import add_start_docstrings, add_start_docstrings_to_model_forward, logging, replace_return_docstrings -from transformers.models.llama.configuration_llama import LlamaConfig - - -logger = logging.get_logger(__name__) - -_CONFIG_FOR_DOC = "LlamaConfig" - - -# Copied from transformers.models.bart.modeling_bart._make_causal_mask -def _make_causal_mask( - input_ids_shape: torch.Size, dtype: torch.dtype, device: torch.device, past_key_values_length: int = 0 -): - """ - Make causal mask used for bi-directional self-attention. - """ - bsz, tgt_len = input_ids_shape - mask = torch.full((tgt_len, tgt_len), torch.tensor(torch.finfo(dtype).min, device=device), device=device) - mask_cond = torch.arange(mask.size(-1), device=device) - mask.masked_fill_(mask_cond < (mask_cond + 1).view(mask.size(-1), 1), 0) - mask = mask.to(dtype) - - if past_key_values_length > 0: - mask = torch.cat([torch.zeros(tgt_len, past_key_values_length, dtype=dtype, device=device), mask], dim=-1) - return mask[None, None, :, :].expand(bsz, 1, tgt_len, tgt_len + past_key_values_length) - - -# Copied from transformers.models.bart.modeling_bart._expand_mask -def _expand_mask(mask: torch.Tensor, dtype: torch.dtype, tgt_len: Optional[int] = None): - """ - Expands attention_mask from `[bsz, seq_len]` to `[bsz, 1, tgt_seq_len, src_seq_len]`. 
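    Positions that may be attended to become 0.0, and masked positions are filled with
    the minimum value of `dtype`, so the result can be added directly to attention scores.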
- """ - bsz, src_len = mask.size() - tgt_len = tgt_len if tgt_len is not None else src_len - - expanded_mask = mask[:, None, None, :].expand(bsz, 1, tgt_len, src_len).to(dtype) - - inverted_mask = 1.0 - expanded_mask - - return inverted_mask.masked_fill(inverted_mask.to(torch.bool), torch.finfo(dtype).min) - - -class LlamaRMSNorm(nn.Module): - def __init__(self, hidden_size, eps=1e-6): - """ - LlamaRMSNorm is equivalent to T5LayerNorm - """ - super().__init__() - self.weight = nn.Parameter(torch.ones(hidden_size)) - self.variance_epsilon = eps - - def forward(self, hidden_states): - variance = hidden_states.to(torch.float32).pow(2).mean(-1, keepdim=True) - hidden_states = hidden_states * torch.rsqrt(variance + self.variance_epsilon) - - # convert into half-precision if necessary - if self.weight.dtype in [torch.float16, torch.bfloat16]: - hidden_states = hidden_states.to(self.weight.dtype) - - return self.weight * hidden_states - - -class LlamaRotaryEmbedding(torch.nn.Module): - def __init__(self, dim, max_position_embeddings=2048, base=10000, device=None): - super().__init__() - inv_freq = 1.0 / (base ** (torch.arange(0, dim, 2).float().to(device) / dim)) - self.register_buffer("inv_freq", inv_freq) - - # Build here to make `torch.jit.trace` work. - self.max_seq_len_cached = max_position_embeddings - t = torch.arange(self.max_seq_len_cached, device=self.inv_freq.device, dtype=self.inv_freq.dtype) - freqs = torch.einsum("i,j->ij", t, self.inv_freq) - # Different from paper, but it uses a different permutation in order to obtain the same calculation - emb = torch.cat((freqs, freqs), dim=-1) - self.register_buffer("cos_cached", emb.cos()[None, None, :, :], persistent=False) - self.register_buffer("sin_cached", emb.sin()[None, None, :, :], persistent=False) - - def forward(self, x, seq_len=None): - # x: [bs, num_attention_heads, seq_len, head_size] - # This `if` block is unlikely to be run after we build sin/cos in `__init__`. Keep the logic here just in case. 
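        # If a longer sequence does arrive, the cached cos/sin tables are rebuilt for
        # the new length below and kept for subsequent calls.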
- if seq_len > self.max_seq_len_cached: - self.max_seq_len_cached = seq_len - t = torch.arange(self.max_seq_len_cached, device=x.device, dtype=self.inv_freq.dtype) - freqs = torch.einsum("i,j->ij", t, self.inv_freq) - # Different from paper, but it uses a different permutation in order to obtain the same calculation - emb = torch.cat((freqs, freqs), dim=-1).to(x.device) - self.register_buffer("cos_cached", emb.cos()[None, None, :, :], persistent=False) - self.register_buffer("sin_cached", emb.sin()[None, None, :, :], persistent=False) - return ( - self.cos_cached[:, :, :seq_len, ...].to(dtype=x.dtype), - self.sin_cached[:, :, :seq_len, ...].to(dtype=x.dtype), - ) - - -def rotate_half(x): - """Rotates half the hidden dims of the input.""" - x1 = x[..., : x.shape[-1] // 2] - x2 = x[..., x.shape[-1] // 2 :] - return torch.cat((-x2, x1), dim=-1) - - -def apply_rotary_pos_emb(q, k, cos, sin, position_ids): - gather_indices = position_ids[:, None, :, None] # [bs, 1, seq_len, 1] - gather_indices = gather_indices.repeat(1, cos.shape[1], 1, cos.shape[3]) - cos = torch.gather(cos.repeat(gather_indices.shape[0], 1, 1, 1), 2, gather_indices) - sin = torch.gather(sin.repeat(gather_indices.shape[0], 1, 1, 1), 2, gather_indices) - q_embed = (q * cos) + (rotate_half(q) * sin) - k_embed = (k * cos) + (rotate_half(k) * sin) - return q_embed, k_embed - - -class LlamaMLP(nn.Module): - def __init__( - self, - hidden_size: int, - intermediate_size: int, - hidden_act: str, - ): - super().__init__() - self.gate_proj = nn.Linear(hidden_size, intermediate_size, bias=False) - self.down_proj = nn.Linear(intermediate_size, hidden_size, bias=False) - self.up_proj = nn.Linear(hidden_size, intermediate_size, bias=False) - self.act_fn = ACT2FN[hidden_act] - - def forward(self, x): - return self.down_proj(self.act_fn(self.gate_proj(x)) * self.up_proj(x)) - - -class LlamaAttention(nn.Module): - """Multi-headed attention from 'Attention Is All You Need' paper""" - - def __init__(self, config: LlamaConfig): - super().__init__() - self.config = config - self.hidden_size = config.hidden_size - self.num_heads = config.num_attention_heads - self.head_dim = self.hidden_size // self.num_heads - self.max_position_embeddings = config.max_position_embeddings - - if (self.head_dim * self.num_heads) != self.hidden_size: - raise ValueError( - f"hidden_size must be divisible by num_heads (got `hidden_size`: {self.hidden_size}" - f" and `num_heads`: {self.num_heads})." 
- ) - self.q_proj = nn.Linear(self.hidden_size, self.num_heads * self.head_dim, bias=False) - self.k_proj = nn.Linear(self.hidden_size, self.num_heads * self.head_dim, bias=False) - self.v_proj = nn.Linear(self.hidden_size, self.num_heads * self.head_dim, bias=False) - self.o_proj = nn.Linear(self.num_heads * self.head_dim, self.hidden_size, bias=False) - self.rotary_emb = LlamaRotaryEmbedding(self.head_dim, max_position_embeddings=self.max_position_embeddings) - - def _shape(self, tensor: torch.Tensor, seq_len: int, bsz: int): - return tensor.view(bsz, seq_len, self.num_heads, self.head_dim).transpose(1, 2).contiguous() - - def forward( - self, - hidden_states: torch.Tensor, - attention_mask: Optional[torch.Tensor] = None, - position_ids: Optional[torch.LongTensor] = None, - past_key_value: Optional[Tuple[torch.Tensor]] = None, - output_attentions: bool = False, - use_cache: bool = False, - ) -> Tuple[torch.Tensor, Optional[torch.Tensor], Optional[Tuple[torch.Tensor]]]: - bsz, q_len, _ = hidden_states.size() - - query_states = self.q_proj(hidden_states).view(bsz, q_len, self.num_heads, self.head_dim).transpose(1, 2) - key_states = self.k_proj(hidden_states).view(bsz, q_len, self.num_heads, self.head_dim).transpose(1, 2) - value_states = self.v_proj(hidden_states).view(bsz, q_len, self.num_heads, self.head_dim).transpose(1, 2) - - kv_seq_len = key_states.shape[-2] - if past_key_value is not None: - kv_seq_len += past_key_value[0].shape[-2] - cos, sin = self.rotary_emb(value_states, seq_len=kv_seq_len) - query_states, key_states = apply_rotary_pos_emb(query_states, key_states, cos, sin, position_ids) - # [bsz, nh, t, hd] - - if past_key_value is not None: - # reuse k, v, self_attention - key_states = torch.cat([past_key_value[0], key_states], dim=2) - value_states = torch.cat([past_key_value[1], value_states], dim=2) - - past_key_value = (key_states, value_states) if use_cache else None - - attn_weights = torch.matmul(query_states, key_states.transpose(2, 3)) / math.sqrt(self.head_dim) - - if attn_weights.size() != (bsz, self.num_heads, q_len, kv_seq_len): - raise ValueError( - f"Attention weights should be of size {(bsz * self.num_heads, q_len, kv_seq_len)}, but is" - f" {attn_weights.size()}" - ) - - if attention_mask is not None: - if attention_mask.size() != (bsz, 1, q_len, kv_seq_len): - raise ValueError( - f"Attention mask should be of size {(bsz, 1, q_len, kv_seq_len)}, but is {attention_mask.size()}" - ) - attn_weights = attn_weights + attention_mask - attn_weights = torch.max(attn_weights, torch.tensor(torch.finfo(attn_weights.dtype).min)) - - # upcast attention to fp32 - attn_weights = nn.functional.softmax(attn_weights, dim=-1, dtype=torch.float32).to(query_states.dtype) - attn_output = torch.matmul(attn_weights, value_states) - - if attn_output.size() != (bsz, self.num_heads, q_len, self.head_dim): - raise ValueError( - f"`attn_output` should be of size {(bsz, self.num_heads, q_len, self.head_dim)}, but is" - f" {attn_output.size()}" - ) - - attn_output = attn_output.transpose(1, 2) - attn_output = attn_output.reshape(bsz, q_len, self.hidden_size) - - attn_output = self.o_proj(attn_output) - - if not output_attentions: - attn_weights = None - - return attn_output, attn_weights, past_key_value - - -class LlamaDecoderLayer(nn.Module): - def __init__(self, config: LlamaConfig): - super().__init__() - self.hidden_size = config.hidden_size - self.self_attn = LlamaAttention(config=config) - self.mlp = LlamaMLP( - hidden_size=self.hidden_size, - 
intermediate_size=config.intermediate_size, - hidden_act=config.hidden_act, - ) - self.input_layernorm = LlamaRMSNorm(config.hidden_size, eps=config.rms_norm_eps) - self.post_attention_layernorm = LlamaRMSNorm(config.hidden_size, eps=config.rms_norm_eps) - - def forward( - self, - hidden_states: torch.Tensor, - attention_mask: Optional[torch.Tensor] = None, - position_ids: Optional[torch.LongTensor] = None, - past_key_value: Optional[Tuple[torch.Tensor]] = None, - output_attentions: Optional[bool] = False, - use_cache: Optional[bool] = False, - ) -> Tuple[torch.FloatTensor, Optional[Tuple[torch.FloatTensor, torch.FloatTensor]]]: - """ - Args: - hidden_states (`torch.FloatTensor`): input to the layer of shape `(batch, seq_len, embed_dim)` - attention_mask (`torch.FloatTensor`, *optional*): attention mask of size - `(batch, 1, tgt_len, src_len)` where padding elements are indicated by very large negative values. - output_attentions (`bool`, *optional*): - Whether or not to return the attentions tensors of all attention layers. See `attentions` under - returned tensors for more detail. - use_cache (`bool`, *optional*): - If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding - (see `past_key_values`). - past_key_value (`Tuple(torch.FloatTensor)`, *optional*): cached past key and value projection states - """ - - residual = hidden_states - - hidden_states = self.input_layernorm(hidden_states) - - # Self Attention - hidden_states, self_attn_weights, present_key_value = self.self_attn( - hidden_states=hidden_states, - attention_mask=attention_mask, - position_ids=position_ids, - past_key_value=past_key_value, - output_attentions=output_attentions, - use_cache=use_cache, - ) - hidden_states = residual + hidden_states - - # Fully Connected - residual = hidden_states - hidden_states = self.post_attention_layernorm(hidden_states) - hidden_states = self.mlp(hidden_states) - hidden_states = residual + hidden_states - - outputs = (hidden_states,) - - if output_attentions: - outputs += (self_attn_weights,) - - if use_cache: - outputs += (present_key_value,) - - return outputs - - -LLAMA_START_DOCSTRING = r""" - This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the - library implements for all its model (such as downloading or saving, resizing the input embeddings, pruning heads - etc.) - - This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass. - Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matter related to general usage - and behavior. - - Parameters: - config ([`LlamaConfig`]): - Model configuration class with all the parameters of the model. Initializing with a config file does not - load the weights associated with the model, only the configuration. Check out the - [`~PreTrainedModel.from_pretrained`] method to load the model weights. 
-""" - - -@add_start_docstrings( - "The bare LLaMA Model outputting raw hidden-states without any specific head on top.", - LLAMA_START_DOCSTRING, -) -class LlamaPreTrainedModel(PreTrainedModel): - config_class = LlamaConfig - base_model_prefix = "model" - supports_gradient_checkpointing = True - _no_split_modules = ["LlamaDecoderLayer"] - _keys_to_ignore_on_load_unexpected = [r"decoder\.version"] - - def _init_weights(self, module): - std = self.config.initializer_range - if isinstance(module, nn.Linear): - module.weight.data.normal_(mean=0.0, std=std) - if module.bias is not None: - module.bias.data.zero_() - elif isinstance(module, nn.Embedding): - module.weight.data.normal_(mean=0.0, std=std) - if module.padding_idx is not None: - module.weight.data[module.padding_idx].zero_() - - def _set_gradient_checkpointing(self, module, value=False): - if isinstance(module, LlamaModel): - module.gradient_checkpointing = value - - -LLAMA_INPUTS_DOCSTRING = r""" - Args: - input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`): - Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide - it. - - Indices can be obtained using [`AutoTokenizer`]. See [`PreTrainedTokenizer.encode`] and - [`PreTrainedTokenizer.__call__`] for details. - - [What are input IDs?](../glossary#input-ids) - attention_mask (`torch.Tensor` of shape `(batch_size, sequence_length)`, *optional*): - Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`: - - - 1 for tokens that are **not masked**, - - 0 for tokens that are **masked**. - - [What are attention masks?](../glossary#attention-mask) - - Indices can be obtained using [`AutoTokenizer`]. See [`PreTrainedTokenizer.encode`] and - [`PreTrainedTokenizer.__call__`] for details. - - If `past_key_values` is used, optionally only the last `decoder_input_ids` have to be input (see - `past_key_values`). - - If you want to change padding behavior, you should read [`modeling_opt._prepare_decoder_attention_mask`] - and modify to your needs. See diagram 1 in [the paper](https://arxiv.org/abs/1910.13461) for more - information on the default strategy. - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. - position_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*): - Indices of positions of each input sequence tokens in the position embeddings. Selected in the range `[0, - config.n_positions - 1]`. - - [What are position IDs?](../glossary#position-ids) - past_key_values (`tuple(tuple(torch.FloatTensor))`, *optional*, returned when `use_cache=True` is passed or when `config.use_cache=True`): - Tuple of `tuple(torch.FloatTensor)` of length `config.n_layers`, with each tuple having 2 tensors of shape - `(batch_size, num_heads, sequence_length, embed_size_per_head)`) and 2 additional tensors of shape - `(batch_size, num_heads, encoder_sequence_length, embed_size_per_head)`. - - Contains pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention - blocks) that can be used (see `past_key_values` input) to speed up sequential decoding. - - If `past_key_values` are used, the user can optionally input only the last `decoder_input_ids` (those that - don't have their past key value states given to this model) of shape `(batch_size, 1)` instead of all - `decoder_input_ids` of shape `(batch_size, sequence_length)`. 
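-        query_embeds (`torch.FloatTensor` of shape `(batch_size, query_length, hidden_size)`, *optional*):
-            Pre-computed query embeddings (for example, visual or audio queries in BuboGPT) that are
-            concatenated in front of the token embeddings before being passed to the decoder layers.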
- inputs_embeds (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*): - Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This - is useful if you want more control over how to convert `input_ids` indices into associated vectors than the - model's internal embedding lookup matrix. - use_cache (`bool`, *optional*): - If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding (see - `past_key_values`). - output_attentions (`bool`, *optional*): - Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned - tensors for more detail. - output_hidden_states (`bool`, *optional*): - Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for - more detail. - return_dict (`bool`, *optional*): - Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple. -""" - - -@add_start_docstrings( - "The bare LLaMA Model outputting raw hidden-states without any specific head on top.", - LLAMA_START_DOCSTRING, -) -class LlamaModel(LlamaPreTrainedModel): - """ - Transformer decoder consisting of *config.num_hidden_layers* layers. Each layer is a [`LlamaDecoderLayer`] - - Args: - config: LlamaConfig - """ - - def __init__(self, config: LlamaConfig): - super().__init__(config) - self.padding_idx = config.pad_token_id - self.vocab_size = config.vocab_size - - self.embed_tokens = nn.Embedding(config.vocab_size, config.hidden_size, self.padding_idx) - self.layers = nn.ModuleList([LlamaDecoderLayer(config) for _ in range(config.num_hidden_layers)]) - self.norm = LlamaRMSNorm(config.hidden_size, eps=config.rms_norm_eps) - - self.gradient_checkpointing = False - # Initialize weights and apply final processing - self.post_init() - - def get_input_embeddings(self): - return self.embed_tokens - - def set_input_embeddings(self, value): - self.embed_tokens = value - - # Copied from transformers.models.bart.modeling_bart.BartDecoder._prepare_decoder_attention_mask - def _prepare_decoder_attention_mask(self, attention_mask, input_shape, inputs_embeds, past_key_values_length): - # create causal mask - # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len] - combined_attention_mask = None - if input_shape[-1] > 1: - combined_attention_mask = _make_causal_mask( - input_shape, - inputs_embeds.dtype, - device=inputs_embeds.device, - past_key_values_length=past_key_values_length, - ) - - if attention_mask is not None: - # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len] - expanded_attn_mask = _expand_mask(attention_mask, inputs_embeds.dtype, tgt_len=input_shape[-1]).to( - inputs_embeds.device - ) - combined_attention_mask = ( - expanded_attn_mask if combined_attention_mask is None else expanded_attn_mask + combined_attention_mask - ) - - return combined_attention_mask - - @add_start_docstrings_to_model_forward(LLAMA_INPUTS_DOCSTRING) - def forward( - self, - input_ids: torch.LongTensor = None, - attention_mask: Optional[torch.Tensor] = None, - position_ids: Optional[torch.LongTensor] = None, - past_key_values: Optional[List[torch.FloatTensor]] = None, - inputs_embeds: Optional[torch.FloatTensor] = None, - query_embeds: Optional[torch.FloatTensor] = None, - use_cache: Optional[bool] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - ) -> Union[Tuple, BaseModelOutputWithPast]: - output_attentions = 
output_attentions if output_attentions is not None else self.config.output_attentions - output_hidden_states = ( - output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states - ) - use_cache = use_cache if use_cache is not None else self.config.use_cache - - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - # retrieve input_ids and inputs_embeds - if input_ids is not None and inputs_embeds is not None: - raise ValueError("You cannot specify both decoder_input_ids and decoder_inputs_embeds at the same time") - elif input_ids is not None: - batch_size, seq_length = input_ids.shape - elif inputs_embeds is not None: - batch_size, seq_length, _ = inputs_embeds.shape - else: - raise ValueError("You have to specify either decoder_input_ids or decoder_inputs_embeds") - - if inputs_embeds is None: - inputs_embeds = self.embed_tokens(input_ids) - if query_embeds is not None: - inputs_embeds = torch.cat([query_embeds, inputs_embeds], dim=1) - batch_size, seq_length, _ = inputs_embeds.shape - - seq_length_with_past = seq_length - past_key_values_length = 0 - - if past_key_values is not None: - past_key_values_length = past_key_values[0][0].shape[2] - seq_length_with_past = seq_length_with_past + past_key_values_length - - if position_ids is None: - device = input_ids.device if input_ids is not None else inputs_embeds.device - position_ids = torch.arange( - past_key_values_length, seq_length + past_key_values_length, dtype=torch.long, device=device - ) - position_ids = position_ids.unsqueeze(0).view(-1, seq_length) - else: - position_ids = position_ids.view(-1, seq_length).long() - - # embed positions - if attention_mask is None: - attention_mask = torch.ones( - (batch_size, seq_length_with_past), dtype=torch.bool, device=inputs_embeds.device - ) - attention_mask = self._prepare_decoder_attention_mask( - attention_mask, (batch_size, seq_length), inputs_embeds, past_key_values_length - ) - - hidden_states = inputs_embeds - - if self.gradient_checkpointing and self.training: - if use_cache: - logger.warning_once( - "`use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`..." 
- ) - use_cache = False - - # decoder layers - all_hidden_states = () if output_hidden_states else None - all_self_attns = () if output_attentions else None - next_decoder_cache = () if use_cache else None - - for idx, decoder_layer in enumerate(self.layers): - if output_hidden_states: - all_hidden_states += (hidden_states,) - - past_key_value = past_key_values[idx] if past_key_values is not None else None - - if self.gradient_checkpointing and self.training: - - def create_custom_forward(module): - def custom_forward(*inputs): - # None for past_key_value - return module(*inputs, output_attentions, None) - - return custom_forward - - layer_outputs = torch.utils.checkpoint.checkpoint( - create_custom_forward(decoder_layer), - hidden_states, - attention_mask, - position_ids, - None, - ) - else: - layer_outputs = decoder_layer( - hidden_states, - attention_mask=attention_mask, - position_ids=position_ids, - past_key_value=past_key_value, - output_attentions=output_attentions, - use_cache=use_cache, - ) - - hidden_states = layer_outputs[0] - - if use_cache: - next_decoder_cache += (layer_outputs[2 if output_attentions else 1],) - - if output_attentions: - all_self_attns += (layer_outputs[1],) - - hidden_states = self.norm(hidden_states) - - # add hidden states from the last decoder layer - if output_hidden_states: - all_hidden_states += (hidden_states,) - - next_cache = next_decoder_cache if use_cache else None - if not return_dict: - return tuple(v for v in [hidden_states, next_cache, all_hidden_states, all_self_attns] if v is not None) - return BaseModelOutputWithPast( - last_hidden_state=hidden_states, - past_key_values=next_cache, - hidden_states=all_hidden_states, - attentions=all_self_attns, - ) - - -class LlamaForCausalLM(LlamaPreTrainedModel): - def __init__(self, config): - super().__init__(config) - self.model = LlamaModel(config) - - self.lm_head = nn.Linear(config.hidden_size, config.vocab_size, bias=False) - - # Initialize weights and apply final processing - self.post_init() - - def get_input_embeddings(self): - return self.model.embed_tokens - - def set_input_embeddings(self, value): - self.model.embed_tokens = value - - def get_output_embeddings(self): - return self.lm_head - - def set_output_embeddings(self, new_embeddings): - self.lm_head = new_embeddings - - def set_decoder(self, decoder): - self.model = decoder - - def get_decoder(self): - return self.model - - @add_start_docstrings_to_model_forward(LLAMA_INPUTS_DOCSTRING) - @replace_return_docstrings(output_type=CausalLMOutputWithPast, config_class=_CONFIG_FOR_DOC) - def forward( - self, - input_ids: torch.LongTensor = None, - attention_mask: Optional[torch.Tensor] = None, - position_ids: Optional[torch.LongTensor] = None, - past_key_values: Optional[List[torch.FloatTensor]] = None, - inputs_embeds: Optional[torch.FloatTensor] = None, - query_embeds: Optional[torch.FloatTensor] = None, - labels: Optional[torch.LongTensor] = None, - use_cache: Optional[bool] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - ) -> Union[Tuple, CausalLMOutputWithPast]: - r""" - Args: - labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*): - Labels for computing the masked language modeling loss. Indices should either be in `[0, ..., - config.vocab_size]` or -100 (see `input_ids` docstring). 
Tokens with indices set to `-100` are ignored - (masked), the loss is only computed for the tokens with labels in `[0, ..., config.vocab_size]`. - - Returns: - - Example: - - ```python - >>> from transformers import AutoTokenizer, LlamaForCausalLM - - >>> model = LlamaForCausalLM.from_pretrained(PATH_TO_CONVERTED_WEIGHTS) - >>> tokenizer = AutoTokenizer.from_pretrained(PATH_TO_CONVERTED_TOKENIZER) - - >>> prompt = "Hey, are you consciours? Can you talk to me?" - >>> inputs = tokenizer(prompt, return_tensors="pt") - - >>> # Generate - >>> generate_ids = model.generate(inputs.input_ids, max_length=30) - >>> tokenizer.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0] - "Hey, are you consciours? Can you talk to me?\nI'm not consciours, but I can talk to you." - ```""" - - output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions - output_hidden_states = ( - output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states - ) - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - # decoder outputs consists of (dec_features, layer_state, dec_hidden, dec_attn) - outputs = self.model( - input_ids=input_ids, - attention_mask=attention_mask, - position_ids=position_ids, - past_key_values=past_key_values, - inputs_embeds=inputs_embeds, - query_embeds=query_embeds, - use_cache=use_cache, - output_attentions=output_attentions, - output_hidden_states=output_hidden_states, - return_dict=return_dict, - ) - - hidden_states = outputs[0] - logits = self.lm_head(hidden_states) - - loss = None - if labels is not None: - # Shift so that tokens < n predict n - shift_logits = logits[..., :-1, :].contiguous() - shift_labels = labels[..., 1:].contiguous() - # Flatten the tokens - loss_fct = CrossEntropyLoss() - shift_logits = shift_logits.view(-1, self.config.vocab_size) - shift_labels = shift_labels.view(-1) - # Enable model parallelism - shift_labels = shift_labels.to(shift_logits.device) - loss = loss_fct(shift_logits, shift_labels) - - if not return_dict: - output = (logits,) + outputs[1:] - return (loss,) + output if loss is not None else output - - return CausalLMOutputWithPast( - loss=loss, - logits=logits, - past_key_values=outputs.past_key_values, - hidden_states=outputs.hidden_states, - attentions=outputs.attentions, - ) - - def prepare_inputs_for_generation( - self, input_ids, query_embeds=None, past_key_values=None, attention_mask=None, inputs_embeds=None, **kwargs - ): - if past_key_values: - input_ids = input_ids[:, -1:] - - position_ids = kwargs.get("position_ids", None) - if attention_mask is not None and position_ids is None: - # create position_ids on the fly for batch generation - position_ids = attention_mask.long().cumsum(-1) - 1 - position_ids.masked_fill_(attention_mask == 0, 1) - if past_key_values: - position_ids = position_ids[:, -1].unsqueeze(-1) - query_embeds = None - - # if `inputs_embeds` are passed, we only want to use them in the 1st generation step - if inputs_embeds is not None and past_key_values is None: - model_inputs = {"inputs_embeds": inputs_embeds} - else: - model_inputs = {"input_ids": input_ids} - - model_inputs.update( - { - "position_ids": position_ids, - "query_embeds": query_embeds, - "past_key_values": past_key_values, - "use_cache": kwargs.get("use_cache"), - "attention_mask": attention_mask, - } - ) - return model_inputs - - @staticmethod - def _reorder_cache(past_key_values, beam_idx): - reordered_past = () - 
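-        # Beam-search reordering: for every layer, select the cached key/value batch entries
-        # indicated by `beam_idx` so that the cache follows the surviving beams.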
for layer_past in past_key_values: - reordered_past += (tuple(past_state.index_select(0, beam_idx) for past_state in layer_past),) - return reordered_past - diff --git a/spaces/manavisrani07/gradio-lipsync-wav2lip/basicsr/test.py b/spaces/manavisrani07/gradio-lipsync-wav2lip/basicsr/test.py deleted file mode 100644 index 53cb3b7aa860c90518e15ba76e1a55fdf404bcc2..0000000000000000000000000000000000000000 --- a/spaces/manavisrani07/gradio-lipsync-wav2lip/basicsr/test.py +++ /dev/null @@ -1,45 +0,0 @@ -import logging -import torch -from os import path as osp - -from basicsr.data import build_dataloader, build_dataset -from basicsr.models import build_model -from basicsr.utils import get_env_info, get_root_logger, get_time_str, make_exp_dirs -from basicsr.utils.options import dict2str, parse_options - - -def test_pipeline(root_path): - # parse options, set distributed setting, set ramdom seed - opt, _ = parse_options(root_path, is_train=False) - - torch.backends.cudnn.benchmark = True - # torch.backends.cudnn.deterministic = True - - # mkdir and initialize loggers - make_exp_dirs(opt) - log_file = osp.join(opt['path']['log'], f"test_{opt['name']}_{get_time_str()}.log") - logger = get_root_logger(logger_name='basicsr', log_level=logging.INFO, log_file=log_file) - logger.info(get_env_info()) - logger.info(dict2str(opt)) - - # create test dataset and dataloader - test_loaders = [] - for _, dataset_opt in sorted(opt['datasets'].items()): - test_set = build_dataset(dataset_opt) - test_loader = build_dataloader( - test_set, dataset_opt, num_gpu=opt['num_gpu'], dist=opt['dist'], sampler=None, seed=opt['manual_seed']) - logger.info(f"Number of test images in {dataset_opt['name']}: {len(test_set)}") - test_loaders.append(test_loader) - - # create model - model = build_model(opt) - - for test_loader in test_loaders: - test_set_name = test_loader.dataset.opt['name'] - logger.info(f'Testing {test_set_name}...') - model.validation(test_loader, current_iter=opt['name'], tb_logger=None, save_img=opt['val']['save_img']) - - -if __name__ == '__main__': - root_path = osp.abspath(osp.join(__file__, osp.pardir, osp.pardir)) - test_pipeline(root_path) diff --git a/spaces/mariashay/DataViz-Mermaid/README.md b/spaces/mariashay/DataViz-Mermaid/README.md deleted file mode 100644 index c68a504f66356741c515faaf66027f3501600861..0000000000000000000000000000000000000000 --- a/spaces/mariashay/DataViz-Mermaid/README.md +++ /dev/null @@ -1,11 +0,0 @@ ---- -title: DataViz Mermaid -emoji: 🐨 -colorFrom: green -colorTo: blue -sdk: static -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/marlenezw/audio-driven-animations/MakeItTalk/src/models/model_image_translation.py b/spaces/marlenezw/audio-driven-animations/MakeItTalk/src/models/model_image_translation.py deleted file mode 100644 index 8a1dae863d7924b8a818052fe998ef9263921fbd..0000000000000000000000000000000000000000 --- a/spaces/marlenezw/audio-driven-animations/MakeItTalk/src/models/model_image_translation.py +++ /dev/null @@ -1,642 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.parallel -from torch.autograd import Variable -import torch.nn.functional as F -from torchvision import models -import torch.utils.model_zoo as model_zoo - -from torch.nn import init -import os - -import numpy as np - - -def weights_init_normal(m): - classname = m.__class__.__name__ - if classname.find('Conv') != -1: - init.normal_(m.weight.data, 0.0, 0.02) - elif 
classname.find('Linear') != -1: - init.normal(m.weight.data, 0.0, 0.02) - elif classname.find('BatchNorm2d') != -1: - init.normal_(m.weight.data, 1.0, 0.02) - init.constant_(m.bias.data, 0.0) - - -def weights_init_xavier(m): - classname = m.__class__.__name__ - if classname.find('Conv') != -1: - init.xavier_normal_(m.weight.data, gain=0.02) - elif classname.find('Linear') != -1: - init.xavier_normal_(m.weight.data, gain=0.02) - elif classname.find('BatchNorm2d') != -1: - init.normal_(m.weight.data, 1.0, 0.02) - init.constant_(m.bias.data, 0.0) - - -def weights_init_kaiming(m): - classname = m.__class__.__name__ - if classname.find('Conv') != -1: - init.kaiming_normal_(m.weight.data, a=0, mode='fan_in') - elif classname.find('Linear') != -1: - init.kaiming_normal_(m.weight.data, a=0, mode='fan_in') - elif classname.find('BatchNorm2d') != -1: - init.normal_(m.weight.data, 1.0, 0.02) - init.constant_(m.bias.data, 0.0) - - -def init_weights(net, init_type='normal'): - print('initialization method [%s]' % init_type) - if init_type == 'normal': - net.apply(weights_init_normal) - elif init_type == 'xavier': - net.apply(weights_init_xavier) - elif init_type == 'kaiming': - net.apply(weights_init_kaiming) - else: - raise NotImplementedError('initialization method [%s] is not implemented' % init_type) - - -class FeatureExtraction(nn.Module): - def __init__(self, input_nc, ngf=64, n_layers=3, norm_layer=nn.BatchNorm2d, use_dropout=False): - super(FeatureExtraction, self).__init__() - downconv = nn.Conv2d(input_nc, ngf, kernel_size=4, stride=2, padding=1) - model = [downconv, nn.ReLU(True), norm_layer(ngf)] - for i in range(n_layers): - in_ngf = 2 ** i * ngf if 2 ** i * ngf < 512 else 512 - out_ngf = 2 ** (i + 1) * ngf if 2 ** i * ngf < 512 else 512 - downconv = nn.Conv2d(in_ngf, out_ngf, kernel_size=4, stride=2, padding=1) - model += [downconv, nn.ReLU(True)] - model += [norm_layer(out_ngf)] - model += [nn.Conv2d(512, 512, kernel_size=3, stride=1, padding=1), nn.ReLU(True)] - model += [norm_layer(512)] - model += [nn.Conv2d(512, 512, kernel_size=3, stride=1, padding=1), nn.ReLU(True)] - - self.model = nn.Sequential(*model) - init_weights(self.model, init_type='normal') - - def forward(self, x): - return self.model(x) - - -class FeatureL2Norm(torch.nn.Module): - def __init__(self): - super(FeatureL2Norm, self).__init__() - - def forward(self, feature): - epsilon = 1e-6 - norm = torch.pow(torch.sum(torch.pow(feature, 2), 1) + epsilon, 0.5).unsqueeze(1).expand_as(feature) - return torch.div(feature, norm) - - -class FeatureCorrelation(nn.Module): - def __init__(self): - super(FeatureCorrelation, self).__init__() - - def forward(self, feature_A, feature_B): - b, c, h, w = feature_A.size() - # reshape features for matrix multiplication - feature_A = feature_A.transpose(2, 3).contiguous().view(b, c, h * w) - feature_B = feature_B.view(b, c, h * w).transpose(1, 2) - # perform matrix mult. 
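-        # feature_B [b, h*w, c] x feature_A [b, c, h*w] -> [b, h*w, h*w]; each entry is the dot
-        # product between one spatial position of feature_B and one spatial position of feature_A.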
- feature_mul = torch.bmm(feature_B, feature_A) - correlation_tensor = feature_mul.view(b, h, w, h * w).transpose(2, 3).transpose(1, 2) - return correlation_tensor - - -class FeatureRegression(nn.Module): - def __init__(self, input_nc=512, output_dim=6, use_cuda=True): - super(FeatureRegression, self).__init__() - self.conv = nn.Sequential( - nn.Conv2d(input_nc, 512, kernel_size=4, stride=2, padding=1), - nn.BatchNorm2d(512), - nn.ReLU(inplace=True), - nn.Conv2d(512, 256, kernel_size=4, stride=2, padding=1), - nn.BatchNorm2d(256), - nn.ReLU(inplace=True), - nn.Conv2d(256, 128, kernel_size=3, padding=1), - nn.BatchNorm2d(128), - nn.ReLU(inplace=True), - nn.Conv2d(128, 64, kernel_size=3, padding=1), - nn.BatchNorm2d(64), - nn.ReLU(inplace=True), - ) - self.linear = nn.Linear(64 * 4 * 3, output_dim) - self.tanh = nn.Tanh() - if use_cuda: - self.conv.cuda() - self.linear.cuda() - self.tanh.cuda() - - def forward(self, x): - x = self.conv(x) - x = x.view(x.size(0), -1) - x = self.linear(x) - x = self.tanh(x) - return x - - -class AffineGridGen(nn.Module): - def __init__(self, out_h=256, out_w=192, out_ch=3): - super(AffineGridGen, self).__init__() - self.out_h = out_h - self.out_w = out_w - self.out_ch = out_ch - - def forward(self, theta): - theta = theta.contiguous() - batch_size = theta.size()[0] - out_size = torch.Size((batch_size, self.out_ch, self.out_h, self.out_w)) - return F.affine_grid(theta, out_size) - - -class TpsGridGen(nn.Module): - def __init__(self, out_h=256, out_w=192, use_regular_grid=True, grid_size=3, reg_factor=0, use_cuda=True): - super(TpsGridGen, self).__init__() - self.out_h, self.out_w = out_h, out_w - self.reg_factor = reg_factor - self.use_cuda = use_cuda - - # create grid in numpy - self.grid = np.zeros([self.out_h, self.out_w, 3], dtype=np.float32) - # sampling grid with dim-0 coords (Y) - self.grid_X, self.grid_Y = np.meshgrid(np.linspace(-1, 1, out_w), np.linspace(-1, 1, out_h)) - # grid_X,grid_Y: size [1,H,W,1,1] - self.grid_X = torch.FloatTensor(self.grid_X).unsqueeze(0).unsqueeze(3) - self.grid_Y = torch.FloatTensor(self.grid_Y).unsqueeze(0).unsqueeze(3) - if use_cuda: - self.grid_X = self.grid_X.cuda() - self.grid_Y = self.grid_Y.cuda() - - # initialize regular grid for control points P_i - if use_regular_grid: - axis_coords = np.linspace(-1, 1, grid_size) - self.N = grid_size * grid_size - P_Y, P_X = np.meshgrid(axis_coords, axis_coords) - P_X = np.reshape(P_X, (-1, 1)) # size (N,1) - P_Y = np.reshape(P_Y, (-1, 1)) # size (N,1) - P_X = torch.FloatTensor(P_X) - P_Y = torch.FloatTensor(P_Y) - self.P_X_base = P_X.clone() - self.P_Y_base = P_Y.clone() - self.Li = self.compute_L_inverse(P_X, P_Y).unsqueeze(0) - self.P_X = P_X.unsqueeze(2).unsqueeze(3).unsqueeze(4).transpose(0, 4) - self.P_Y = P_Y.unsqueeze(2).unsqueeze(3).unsqueeze(4).transpose(0, 4) - if use_cuda: - self.P_X = self.P_X.cuda() - self.P_Y = self.P_Y.cuda() - self.P_X_base = self.P_X_base.cuda() - self.P_Y_base = self.P_Y_base.cuda() - - def forward(self, theta): - warped_grid = self.apply_transformation(theta, torch.cat((self.grid_X, self.grid_Y), 3)) - - return warped_grid - - def compute_L_inverse(self, X, Y): - N = X.size()[0] # num of points (along dim 0) - # construct matrix K - Xmat = X.expand(N, N) - Ymat = Y.expand(N, N) - P_dist_squared = torch.pow(Xmat - Xmat.transpose(0, 1), 2) + torch.pow(Ymat - Ymat.transpose(0, 1), 2) - P_dist_squared[P_dist_squared == 0] = 1 # make diagonal 1 to avoid NaN in log computation - K = torch.mul(P_dist_squared, torch.log(P_dist_squared)) - # construct 
matrix L - O = torch.FloatTensor(N, 1).fill_(1) - Z = torch.FloatTensor(3, 3).fill_(0) - P = torch.cat((O, X, Y), 1) - L = torch.cat((torch.cat((K, P), 1), torch.cat((P.transpose(0, 1), Z), 1)), 0) - Li = torch.inverse(L) - if self.use_cuda: - Li = Li.cuda() - return Li - - def apply_transformation(self, theta, points): - if theta.dim() == 2: - theta = theta.unsqueeze(2).unsqueeze(3) - # points should be in the [B,H,W,2] format, - # where points[:,:,:,0] are the X coords - # and points[:,:,:,1] are the Y coords - - # input are the corresponding control points P_i - batch_size = theta.size()[0] - # split theta into point coordinates - Q_X = theta[:, :self.N, :, :].squeeze(3) - Q_Y = theta[:, self.N:, :, :].squeeze(3) - Q_X = Q_X + self.P_X_base.expand_as(Q_X) - Q_Y = Q_Y + self.P_Y_base.expand_as(Q_Y) - - # get spatial dimensions of points - points_b = points.size()[0] - points_h = points.size()[1] - points_w = points.size()[2] - - # repeat pre-defined control points along spatial dimensions of points to be transformed - P_X = self.P_X.expand((1, points_h, points_w, 1, self.N)) - P_Y = self.P_Y.expand((1, points_h, points_w, 1, self.N)) - - # compute weigths for non-linear part - W_X = torch.bmm(self.Li[:, :self.N, :self.N].expand((batch_size, self.N, self.N)), Q_X) - W_Y = torch.bmm(self.Li[:, :self.N, :self.N].expand((batch_size, self.N, self.N)), Q_Y) - # reshape - # W_X,W,Y: size [B,H,W,1,N] - W_X = W_X.unsqueeze(3).unsqueeze(4).transpose(1, 4).repeat(1, points_h, points_w, 1, 1) - W_Y = W_Y.unsqueeze(3).unsqueeze(4).transpose(1, 4).repeat(1, points_h, points_w, 1, 1) - # compute weights for affine part - A_X = torch.bmm(self.Li[:, self.N:, :self.N].expand((batch_size, 3, self.N)), Q_X) - A_Y = torch.bmm(self.Li[:, self.N:, :self.N].expand((batch_size, 3, self.N)), Q_Y) - # reshape - # A_X,A,Y: size [B,H,W,1,3] - A_X = A_X.unsqueeze(3).unsqueeze(4).transpose(1, 4).repeat(1, points_h, points_w, 1, 1) - A_Y = A_Y.unsqueeze(3).unsqueeze(4).transpose(1, 4).repeat(1, points_h, points_w, 1, 1) - - # compute distance P_i - (grid_X,grid_Y) - # grid is expanded in point dim 4, but not in batch dim 0, as points P_X,P_Y are fixed for all batch - points_X_for_summation = points[:, :, :, 0].unsqueeze(3).unsqueeze(4).expand( - points[:, :, :, 0].size() + (1, self.N)) - points_Y_for_summation = points[:, :, :, 1].unsqueeze(3).unsqueeze(4).expand( - points[:, :, :, 1].size() + (1, self.N)) - - if points_b == 1: - delta_X = points_X_for_summation - P_X - delta_Y = points_Y_for_summation - P_Y - else: - # use expanded P_X,P_Y in batch dimension - delta_X = points_X_for_summation - P_X.expand_as(points_X_for_summation) - delta_Y = points_Y_for_summation - P_Y.expand_as(points_Y_for_summation) - - dist_squared = torch.pow(delta_X, 2) + torch.pow(delta_Y, 2) - # U: size [1,H,W,1,N] - dist_squared[dist_squared == 0] = 1 # avoid NaN in log computation - U = torch.mul(dist_squared, torch.log(dist_squared)) - - # expand grid in batch dimension if necessary - points_X_batch = points[:, :, :, 0].unsqueeze(3) - points_Y_batch = points[:, :, :, 1].unsqueeze(3) - if points_b == 1: - points_X_batch = points_X_batch.expand((batch_size,) + points_X_batch.size()[1:]) - points_Y_batch = points_Y_batch.expand((batch_size,) + points_Y_batch.size()[1:]) - - points_X_prime = A_X[:, :, :, :, 0] + \ - torch.mul(A_X[:, :, :, :, 1], points_X_batch) + \ - torch.mul(A_X[:, :, :, :, 2], points_Y_batch) + \ - torch.sum(torch.mul(W_X, U.expand_as(W_X)), 4) - - points_Y_prime = A_Y[:, :, :, :, 0] + \ - torch.mul(A_Y[:, :, :, :, 1], 
points_X_batch) + \ - torch.mul(A_Y[:, :, :, :, 2], points_Y_batch) + \ - torch.sum(torch.mul(W_Y, U.expand_as(W_Y)), 4) - - return torch.cat((points_X_prime, points_Y_prime), 3) - - -# Defines the Unet generator. -# |num_downs|: number of downsamplings in UNet. For example, -# if |num_downs| == 7, image of size 128x128 will become of size 1x1 -# at the bottleneck - -class UnetGenerator(nn.Module): - def __init__(self, input_nc, output_nc, num_downs, ngf=64, - norm_layer=nn.BatchNorm2d, use_dropout=False): - super(UnetGenerator, self).__init__() - # construct unet structure - unet_block = UnetSkipConnectionBlock(ngf * 8, ngf * 8, input_nc=None, submodule=None, norm_layer=norm_layer, - innermost=True) - for i in range(num_downs - 5): - unet_block = UnetSkipConnectionBlock(ngf * 8, ngf * 8, input_nc=None, submodule=unet_block, - norm_layer=norm_layer, use_dropout=use_dropout) - unet_block = UnetSkipConnectionBlock(ngf * 4, ngf * 8, input_nc=None, submodule=unet_block, - norm_layer=norm_layer) - unet_block = UnetSkipConnectionBlock(ngf * 2, ngf * 4, input_nc=None, submodule=unet_block, - norm_layer=norm_layer) - unet_block = UnetSkipConnectionBlock(ngf, ngf * 2, input_nc=None, submodule=unet_block, norm_layer=norm_layer) - unet_block = UnetSkipConnectionBlock(output_nc, ngf, input_nc=input_nc, submodule=unet_block, outermost=True, - norm_layer=norm_layer) - - self.model = unet_block - - def forward(self, input): - return self.model(input) - - -# Defines the submodule with skip connection. -# X -------------------identity---------------------- X -# |-- downsampling -- |submodule| -- upsampling --| -class UnetSkipConnectionBlock(nn.Module): - def __init__(self, outer_nc, inner_nc, input_nc=None, - submodule=None, outermost=False, innermost=False, norm_layer=nn.BatchNorm2d, use_dropout=False): - super(UnetSkipConnectionBlock, self).__init__() - self.outermost = outermost - use_bias = norm_layer == nn.InstanceNorm2d - - if input_nc is None: - input_nc = outer_nc - downconv = nn.Conv2d(input_nc, inner_nc, kernel_size=4, - stride=2, padding=1, bias=use_bias) - downrelu = nn.LeakyReLU(0.2, True) - uprelu = nn.ReLU(True) - if norm_layer != None: - downnorm = norm_layer(inner_nc) - upnorm = norm_layer(outer_nc) - - if outermost: - upsample = nn.Upsample(scale_factor=2, mode='bilinear') - upconv = nn.Conv2d(inner_nc * 2, outer_nc, kernel_size=3, stride=1, padding=1, bias=use_bias) - down = [downconv] - # up = [uprelu, upsample, upconv, upnorm] - up = [uprelu, upsample, upconv] - model = down + [submodule] + up - elif innermost: - upsample = nn.Upsample(scale_factor=2, mode='bilinear') - upconv = nn.Conv2d(inner_nc, outer_nc, kernel_size=3, stride=1, padding=1, bias=use_bias) - down = [downrelu, downconv] - if norm_layer == None: - up = [uprelu, upsample, upconv] - else: - up = [uprelu, upsample, upconv, upnorm] - model = down + up - else: - upsample = nn.Upsample(scale_factor=2, mode='bilinear') - upconv = nn.Conv2d(inner_nc * 2, outer_nc, kernel_size=3, stride=1, padding=1, bias=use_bias) - if norm_layer == None: - down = [downrelu, downconv] - up = [uprelu, upsample, upconv] - else: - down = [downrelu, downconv, downnorm] - up = [uprelu, upsample, upconv, upnorm] - - if use_dropout: - model = down + [submodule] + up + [nn.Dropout(0.5)] - else: - model = down + [submodule] + up - - self.model = nn.Sequential(*model) - - def forward(self, x): - if self.outermost: - return self.model(x) - else: - return torch.cat([x, self.model(x)], 1) - - -# UNet with residual blocks -class ResidualBlock(nn.Module): - 
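-    # Two 3x3 convolutions (each optionally followed by `norm_layer`); the input is added back
-    # to the convolution output and the sum is passed through a final ReLU.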
def __init__(self, in_features=64, norm_layer=nn.BatchNorm2d): - super(ResidualBlock, self).__init__() - self.relu = nn.ReLU(True) - if norm_layer == None: - # hard to converge with out batch or instance norm - self.block = nn.Sequential( - nn.Conv2d(in_features, in_features, 3, 1, 1, bias=False), - nn.ReLU(inplace=True), - nn.Conv2d(in_features, in_features, 3, 1, 1, bias=False), - ) - else: - self.block = nn.Sequential( - nn.Conv2d(in_features, in_features, 3, 1, 1, bias=False), - norm_layer(in_features), - nn.ReLU(inplace=True), - nn.Conv2d(in_features, in_features, 3, 1, 1, bias=False), - norm_layer(in_features) - ) - - def forward(self, x): - residual = x - out = self.block(x) - out += residual - out = self.relu(out) - return out - # return self.relu(x + self.block(x)) - - -class ResUnetGenerator(nn.Module): - def __init__(self, input_nc, output_nc, num_downs, ngf=64, - norm_layer=nn.BatchNorm2d, use_dropout=False): - super(ResUnetGenerator, self).__init__() - # construct unet structure - unet_block = ResUnetSkipConnectionBlock(ngf * 8, ngf * 8, input_nc=None, submodule=None, norm_layer=norm_layer, - innermost=True) - - for i in range(num_downs - 5): - unet_block = ResUnetSkipConnectionBlock(ngf * 8, ngf * 8, input_nc=None, submodule=unet_block, - norm_layer=norm_layer, use_dropout=use_dropout) - unet_block = ResUnetSkipConnectionBlock(ngf * 4, ngf * 8, input_nc=None, submodule=unet_block, - norm_layer=norm_layer) - unet_block = ResUnetSkipConnectionBlock(ngf * 2, ngf * 4, input_nc=None, submodule=unet_block, - norm_layer=norm_layer) - unet_block = ResUnetSkipConnectionBlock(ngf, ngf * 2, input_nc=None, submodule=unet_block, - norm_layer=norm_layer) - unet_block = ResUnetSkipConnectionBlock(output_nc, ngf, input_nc=input_nc, submodule=unet_block, outermost=True, - norm_layer=norm_layer) - - self.model = unet_block - - def forward(self, input): - output = self.model(input) - - # print("\tIn Model: input size", input.size(), - # "output size", output.size()) - - return output - - -# Defines the submodule with skip connection. 
-# X -------------------identity---------------------- X -# |-- downsampling -- |submodule| -- upsampling --| -class ResUnetSkipConnectionBlock(nn.Module): - def __init__(self, outer_nc, inner_nc, input_nc=None, - submodule=None, outermost=False, innermost=False, norm_layer=nn.BatchNorm2d, use_dropout=False): - super(ResUnetSkipConnectionBlock, self).__init__() - self.outermost = outermost - use_bias = norm_layer == nn.InstanceNorm2d - - if input_nc is None: - input_nc = outer_nc - downconv = nn.Conv2d(input_nc, inner_nc, kernel_size=3, - stride=2, padding=1, bias=use_bias) - # add two resblock - res_downconv = [ResidualBlock(inner_nc, norm_layer), ResidualBlock(inner_nc, norm_layer)] - res_upconv = [ResidualBlock(outer_nc, norm_layer), ResidualBlock(outer_nc, norm_layer)] - - # res_downconv = [ResidualBlock(inner_nc)] - # res_upconv = [ResidualBlock(outer_nc)] - - downrelu = nn.ReLU(True) - uprelu = nn.ReLU(True) - if norm_layer != None: - downnorm = norm_layer(inner_nc) - upnorm = norm_layer(outer_nc) - - if outermost: - upsample = nn.Upsample(scale_factor=2, mode='nearest') - upconv = nn.Conv2d(inner_nc * 2, outer_nc, kernel_size=3, stride=1, padding=1, bias=use_bias) - down = [downconv, downrelu] + res_downconv - # up = [uprelu, upsample, upconv, upnorm] - up = [upsample, upconv] - model = down + [submodule] + up - elif innermost: - upsample = nn.Upsample(scale_factor=2, mode='nearest') - upconv = nn.Conv2d(inner_nc, outer_nc, kernel_size=3, stride=1, padding=1, bias=use_bias) - down = [downconv, downrelu] + res_downconv - if norm_layer == None: - up = [upsample, upconv, uprelu] + res_upconv - else: - up = [upsample, upconv, upnorm, uprelu] + res_upconv - model = down + up - else: - upsample = nn.Upsample(scale_factor=2, mode='nearest') - upconv = nn.Conv2d(inner_nc * 2, outer_nc, kernel_size=3, stride=1, padding=1, bias=use_bias) - if norm_layer == None: - down = [downconv, downrelu] + res_downconv - up = [upsample, upconv, uprelu] + res_upconv - else: - down = [downconv, downnorm, downrelu] + res_downconv - up = [upsample, upconv, upnorm, uprelu] + res_upconv - - if use_dropout: - model = down + [submodule] + up + [nn.Dropout(0.5)] - else: - model = down + [submodule] + up - - self.model = nn.Sequential(*model) - - def forward(self, x): - if self.outermost: - return self.model(x) - else: - return torch.cat([x, self.model(x)], 1) - - -class Vgg19(nn.Module): - def __init__(self, requires_grad=False): - super(Vgg19, self).__init__() - vgg_pretrained_features = models.vgg19(pretrained=True).features - self.slice1 = nn.Sequential() - self.slice2 = nn.Sequential() - self.slice3 = nn.Sequential() - self.slice4 = nn.Sequential() - self.slice5 = nn.Sequential() - for x in range(2): - self.slice1.add_module(str(x), vgg_pretrained_features[x]) - for x in range(2, 7): - self.slice2.add_module(str(x), vgg_pretrained_features[x]) - for x in range(7, 12): - self.slice3.add_module(str(x), vgg_pretrained_features[x]) - for x in range(12, 21): - self.slice4.add_module(str(x), vgg_pretrained_features[x]) - for x in range(21, 30): - self.slice5.add_module(str(x), vgg_pretrained_features[x]) - if not requires_grad: - for param in self.parameters(): - param.requires_grad = False - - def forward(self, X): - h_relu1 = self.slice1(X) - h_relu2 = self.slice2(h_relu1) - h_relu3 = self.slice3(h_relu2) - h_relu4 = self.slice4(h_relu3) - h_relu5 = self.slice5(h_relu4) - out = [h_relu1, h_relu2, h_relu3, h_relu4, h_relu5] - return out - -def gram_matrix(input): - a, b, c, d = input.size() # a=batch size(=1) - # 
b=number of feature maps - # (c,d)=dimensions of a f. map (N=c*d) - features = input.view(a * b, c * d) # resise F_XL into \hat F_XL - G = torch.mm(features, features.t()) # compute the gram product - # we 'normalize' the values of the gram matrix - # by dividing by the number of element in each feature maps. - return G.div(a * b * c * d) - - -class StyleLoss(nn.Module): - def __init__(self): - super(StyleLoss, self).__init__() - - def forward(self, x, y): - Gx = gram_matrix(x) - Gy = gram_matrix(y) - return F.mse_loss(Gx, Gy) * 30000000 - -class VGGLoss(nn.Module): - def __init__(self, model=None): - super(VGGLoss, self).__init__() - if model is None: - self.vgg = Vgg19() - else: - self.vgg = model - - self.vgg.cuda() - # self.vgg.eval() - self.criterion = nn.L1Loss() - self.style_criterion = StyleLoss() - self.weights = [1.0, 1.0, 1.0, 1.0, 1.0] - self.style_weights = [1.0, 1.0, 1.0, 1.0, 1.0] - # self.weights = [5.0, 1.0, 0.5, 0.4, 0.8] - # self.style_weights = [10e4, 1000, 50, 15, 50] - - def forward(self, x, y, style=False): - x_vgg, y_vgg = self.vgg(x), self.vgg(y) - loss = 0 - if style: - # return both perceptual loss and style loss. - style_loss = 0 - for i in range(len(x_vgg)): - this_loss = (self.weights[i] * - self.criterion(x_vgg[i], y_vgg[i].detach())) - this_style_loss = (self.style_weights[i] * - self.style_criterion(x_vgg[i], y_vgg[i].detach())) - loss += this_loss - style_loss += this_style_loss - return loss, style_loss - - for i in range(len(x_vgg)): - this_loss = (self.weights[i] * self.criterion(x_vgg[i], y_vgg[i].detach())) - loss += this_loss - return loss - - -class GMM(nn.Module): - """ Geometric Matching Module - """ - - def __init__(self, opt, input_nc): - super(GMM, self).__init__() - self.extractionA = FeatureExtraction(input_nc, ngf=64, n_layers=3, norm_layer=nn.BatchNorm2d) - self.extractionB = FeatureExtraction(3, ngf=64, n_layers=3, norm_layer=nn.BatchNorm2d) - self.l2norm = FeatureL2Norm() - self.correlation = FeatureCorrelation() - self.regression = FeatureRegression(input_nc=192, output_dim=2 * opt.grid_size ** 2, use_cuda=True) - self.gridGen = TpsGridGen(opt.fine_height, opt.fine_width, use_cuda=True, grid_size=opt.grid_size) - - def forward(self, inputA, inputB): - featureA = self.extractionA(inputA) - featureB = self.extractionB(inputB) - featureA = self.l2norm(featureA) - featureB = self.l2norm(featureB) - correlation = self.correlation(featureA, featureB) - - theta = self.regression(correlation) - grid = self.gridGen(theta) - return grid, theta - - -def save_checkpoint(model, save_path): - if not os.path.exists(os.path.dirname(save_path)): - os.makedirs(os.path.dirname(save_path)) - torch.save(model.state_dict(), save_path) - - -def load_checkpoint(model, checkpoint_path): - if not os.path.exists(checkpoint_path): - print('No checkpoint!') - return - - model.load_state_dict(torch.load(checkpoint_path)) - - # try: - # model.load_state_dict(torch.load(checkpoint_path)) - # except: - # model = nn.DataParallel(model) - # model.load_state_dict(torch.load(checkpoint_path)) diff --git a/spaces/marlenezw/audio-driven-animations/MakeItTalk/thirdparty/face_of_art/old/run_tests_template.py b/spaces/marlenezw/audio-driven-animations/MakeItTalk/thirdparty/face_of_art/old/run_tests_template.py deleted file mode 100644 index e05c6e3cfb35c35d3f10cb865a47b83789b9e014..0000000000000000000000000000000000000000 --- a/spaces/marlenezw/audio-driven-animations/MakeItTalk/thirdparty/face_of_art/old/run_tests_template.py +++ /dev/null @@ -1,50 +0,0 @@ -import tensorflow as 
tf -from deep_heatmaps_model_primary_valid import DeepHeatmapsModel -import os -import numpy as np - -num_tests = 10 -params = np.logspace(-8, -2, num_tests) -max_iter = 80000 - -output_dir = 'tests_lr_fusion' -data_dir = '../conventional_landmark_detection_dataset' - -flags = tf.app.flags -flags.DEFINE_string('output_dir', output_dir, "directory for saving the log file") -flags.DEFINE_string('img_path', data_dir, "data directory") -FLAGS = flags.FLAGS - -if not os.path.exists(FLAGS.output_dir): - os.mkdir(FLAGS.output_dir) - -for param in params: - test_name = str(param) - test_dir = os.path.join(FLAGS.output_dir,test_name) - if not os.path.exists(test_dir): - os.mkdir(test_dir) - - print '##### RUNNING TESTS ##### current directory:', test_dir - - save_model_path = os.path.join(test_dir, 'model') - save_sample_path = os.path.join(test_dir, 'sample') - save_log_path = os.path.join(test_dir, 'logs') - - # create directories if not exist - if not os.path.exists(save_model_path): - os.mkdir(save_model_path) - if not os.path.exists(save_sample_path): - os.mkdir(save_sample_path) - if not os.path.exists(save_log_path): - os.mkdir(save_log_path) - - tf.reset_default_graph() # reset graph - - model = DeepHeatmapsModel(mode='TRAIN', train_iter=max_iter, learning_rate=param, momentum=0.95, step=80000, - gamma=0.1, batch_size=4, image_size=256, c_dim=3, num_landmarks=68, - augment_basic=True, basic_start=0, augment_texture=True, p_texture=0.5, - augment_geom=True, p_geom=0.5, artistic_start=0, artistic_step=10, - img_path=FLAGS.img_path, save_log_path=save_log_path, save_sample_path=save_sample_path, - save_model_path=save_model_path) - - model.train() diff --git a/spaces/matthoffner/chatbot/components/Promptbar/components/PromptbarSettings.tsx b/spaces/matthoffner/chatbot/components/Promptbar/components/PromptbarSettings.tsx deleted file mode 100644 index 5fad6f9ca3d08ccb3ce1cdaf53d23632534a1632..0000000000000000000000000000000000000000 --- a/spaces/matthoffner/chatbot/components/Promptbar/components/PromptbarSettings.tsx +++ /dev/null @@ -1,7 +0,0 @@ -import { FC } from 'react'; - -interface Props {} - -export const PromptbarSettings: FC = () => { - return
        ; -}; diff --git a/spaces/mehdidc/text_to_image_ddgan/score_sde/models/ncsnpp_generator_adagn.py b/spaces/mehdidc/text_to_image_ddgan/score_sde/models/ncsnpp_generator_adagn.py deleted file mode 100644 index f93a5c80c8a860425dc2a2ca5f0680bc3dedbd17..0000000000000000000000000000000000000000 --- a/spaces/mehdidc/text_to_image_ddgan/score_sde/models/ncsnpp_generator_adagn.py +++ /dev/null @@ -1,502 +0,0 @@ -# --------------------------------------------------------------- -# Copyright (c) 2022, NVIDIA CORPORATION. All rights reserved. -# -# This file has been modified from a file in the Score SDE library -# which was released under the Apache License. -# -# Source: -# https://github.com/yang-song/score_sde_pytorch/blob/main/models/layerspp.py -# -# The license for the original version of this file can be -# found in this directory (LICENSE_Apache). The modifications -# to this file are subject to the same Apache License. -# --------------------------------------------------------------- - -# coding=utf-8 -# Copyright 2020 The Google Research Authors. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -# pylint: skip-file -''' Codes adapted from https://github.com/yang-song/score_sde_pytorch/blob/main/models/ncsnpp.py -''' - -from . import utils, layers, layerspp, dense_layer -import torch.nn as nn -import functools -import torch -import numpy as np - -try: - from fairscale.nn.checkpoint import checkpoint_wrapper -except Exception: - checkpoint_wrapper = lambda x:x - - -ResnetBlockDDPM = layerspp.ResnetBlockDDPMpp_Adagn -ResnetBlockBigGAN = layerspp.ResnetBlockBigGANpp_Adagn -ResnetBlockBigGAN_one = layerspp.ResnetBlockBigGANpp_Adagn_one -Combine = layerspp.Combine -conv3x3 = layerspp.conv3x3 -conv1x1 = layerspp.conv1x1 -get_act = layers.get_act -default_initializer = layers.default_init -dense = dense_layer.dense - -class CrossAndGlobalAttnBlock(nn.Module): - """Channel-wise self-attention block.""" - def __init__(self, channels, *, context_dim=None, dim_head=64, heads=8, norm_context=False, cosine_sim_attn=False): - super().__init__() - self.GroupNorm_0 = nn.GroupNorm(num_groups=32, num_channels=channels, eps=1e-6) - self.ca = layers.CrossAttention( - channels, - context_dim=context_dim, - dim_head=dim_head, - heads=heads, - norm_context=norm_context, - cosine_sim_attn=cosine_sim_attn, - ) - self.attn = layerspp.AttnBlockppRaw(channels) - - def forward(self, x, cond, mask=None): - B, C, H, W = x.shape - h = self.GroupNorm_0(x) - h = h.view(B, C, H*W) - h = h.permute(0,2,1) - h = h.contiguous() - h_new = self.ca(h, cond, mask=mask) - h_new = h_new.permute(0,2,1) - h_new = h_new.contiguous() - h_new = h_new.view(B, C, H, W) - - h_global = self.attn(x) - h = h_new + h_global - return x + h - -class PixelNorm(nn.Module): - def __init__(self): - super().__init__() - - def forward(self, input): - return input / torch.sqrt(torch.mean(input ** 2, dim=1, keepdim=True) + 1e-8) - - -@utils.register_model(name='ncsnpp') -class NCSNpp(nn.Module): - """NCSN++ model""" - - def __init__(self, config): - 
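-        # The generator is conditioned on three signals: the diffusion timestep embedding (temb),
-        # the latent z embedding (zemb), and an external condition of size `config.cond_size`,
-        # which is projected by `cond_proj` and, when `config.cross_attention` is set, attended
-        # to via cross-attention blocks.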
super().__init__() - self.config = config - self.cross_attention_block = config.cross_attention_block - self.grad_checkpointing = config.grad_checkpointing if hasattr(config, "grad_checkpointing") else False - self.not_use_tanh = config.not_use_tanh - self.act = act = nn.SiLU() - self.z_emb_dim = z_emb_dim = config.z_emb_dim - self.nf = nf = config.num_channels_dae - self.cond_proj = nn.Linear(config.cond_size, self.nf*4) - self.cond_proj.weight.data = default_initializer()(self.cond_proj.weight.shape) - - ch_mult = config.ch_mult - self.num_res_blocks = num_res_blocks = config.num_res_blocks - self.attn_resolutions = attn_resolutions = config.attn_resolutions - dropout = config.dropout - resamp_with_conv = config.resamp_with_conv - self.num_resolutions = num_resolutions = len(ch_mult) - self.all_resolutions = all_resolutions = [config.image_size // (2 ** i) for i in range(num_resolutions)] - - self.conditional = conditional = config.conditional # noise-conditional - fir = config.fir - fir_kernel = config.fir_kernel - self.skip_rescale = skip_rescale = config.skip_rescale - self.resblock_type = resblock_type = config.resblock_type.lower() - self.progressive = progressive = config.progressive.lower() - self.progressive_input = progressive_input = config.progressive_input.lower() - self.embedding_type = embedding_type = config.embedding_type.lower() - init_scale = 0. - assert progressive in ['none', 'output_skip', 'residual'] - assert progressive_input in ['none', 'input_skip', 'residual'] - assert embedding_type in ['fourier', 'positional'] - combine_method = config.progressive_combine.lower() - combiner = functools.partial(Combine, method=combine_method) - - modules = [] - # timestep/noise_level embedding; only for continuous training - if embedding_type == 'fourier': - # Gaussian Fourier features embeddings. - #assert config.training.continuous, "Fourier features are only used for continuous training." 
- - modules.append(layerspp.GaussianFourierProjection( - embedding_size=nf, scale=config.fourier_scale - )) - embed_dim = 2 * nf - - elif embedding_type == 'positional': - embed_dim = nf - - else: - raise ValueError(f'embedding type {embedding_type} unknown.') - - if conditional: - modules.append(nn.Linear(embed_dim, nf * 4)) - modules[-1].weight.data = default_initializer()(modules[-1].weight.shape) - nn.init.zeros_(modules[-1].bias) - modules.append(nn.Linear(nf * 4, nf * 4)) - modules[-1].weight.data = default_initializer()(modules[-1].weight.shape) - nn.init.zeros_(modules[-1].bias) - if config.cross_attention: - - #block_name = config.cross_attention_block if hasattr(config, "cross_attention_block") else "basic" - block_name = config.cross_attention_block - if block_name == "basic": - AttnBlock = functools.partial(layers.CondAttnBlock, context_dim=config.cond_size) - elif block_name == "cross_and_global_attention": - AttnBlock = functools.partial(CrossAndGlobalAttnBlock, context_dim=config.cond_size) - print(AttnBlock) - else: - AttnBlock = functools.partial(layerspp.AttnBlockpp, - init_scale=init_scale, - skip_rescale=skip_rescale) - - Upsample = functools.partial(layerspp.Upsample, - with_conv=resamp_with_conv, fir=fir, fir_kernel=fir_kernel) - - if progressive == 'output_skip': - self.pyramid_upsample = layerspp.Upsample(fir=fir, fir_kernel=fir_kernel, with_conv=False) - elif progressive == 'residual': - pyramid_upsample = functools.partial(layerspp.Upsample, - fir=fir, fir_kernel=fir_kernel, with_conv=True) - - Downsample = functools.partial(layerspp.Downsample, - with_conv=resamp_with_conv, fir=fir, fir_kernel=fir_kernel) - - if progressive_input == 'input_skip': - self.pyramid_downsample = layerspp.Downsample(fir=fir, fir_kernel=fir_kernel, with_conv=False) - elif progressive_input == 'residual': - pyramid_downsample = functools.partial(layerspp.Downsample, - fir=fir, fir_kernel=fir_kernel, with_conv=True) - - if resblock_type == 'ddpm': - ResnetBlock = functools.partial(ResnetBlockDDPM, - act=act, - dropout=dropout, - init_scale=init_scale, - skip_rescale=skip_rescale, - temb_dim=nf * 4, - zemb_dim = z_emb_dim) - - elif resblock_type == 'biggan': - ResnetBlock = functools.partial(ResnetBlockBigGAN, - act=act, - dropout=dropout, - fir=fir, - fir_kernel=fir_kernel, - init_scale=init_scale, - skip_rescale=skip_rescale, - temb_dim=nf * 4, - zemb_dim = z_emb_dim) - elif resblock_type == 'biggan_oneadagn': - ResnetBlock = functools.partial(ResnetBlockBigGAN_one, - act=act, - dropout=dropout, - fir=fir, - fir_kernel=fir_kernel, - init_scale=init_scale, - skip_rescale=skip_rescale, - temb_dim=nf * 4, - zemb_dim = z_emb_dim) - - else: - raise ValueError(f'resblock type {resblock_type} unrecognized.') - - # Downsampling block - def wrap(block): - return checkpoint_wrapper(block) if self.grad_checkpointing else block - - channels = config.num_channels - if progressive_input != 'none': - input_pyramid_ch = channels - - modules.append(conv3x3(channels, nf)) - hs_c = [nf] - - in_ch = nf - for i_level in range(num_resolutions): - # Residual blocks for this resolution - for i_block in range(num_res_blocks): - out_ch = nf * ch_mult[i_level] - modules.append(wrap(ResnetBlock(in_ch=in_ch, out_ch=out_ch))) - in_ch = out_ch - - if all_resolutions[i_level] in attn_resolutions: - modules.append(wrap(AttnBlock(channels=in_ch))) - hs_c.append(in_ch) - - if i_level != num_resolutions - 1: - if resblock_type == 'ddpm': - modules.append(Downsample(in_ch=in_ch)) - else: - 
modules.append(wrap(ResnetBlock(down=True, in_ch=in_ch))) - - if progressive_input == 'input_skip': - modules.append(combiner(dim1=input_pyramid_ch, dim2=in_ch)) - if combine_method == 'cat': - in_ch *= 2 - - elif progressive_input == 'residual': - modules.append(pyramid_downsample(in_ch=input_pyramid_ch, out_ch=in_ch)) - input_pyramid_ch = in_ch - - hs_c.append(in_ch) - - in_ch = hs_c[-1] - modules.append(wrap(ResnetBlock(in_ch=in_ch))) - modules.append(wrap(AttnBlock(channels=in_ch))) - modules.append(wrap(ResnetBlock(in_ch=in_ch))) - - pyramid_ch = 0 - # Upsampling block - for i_level in reversed(range(num_resolutions)): - for i_block in range(num_res_blocks + 1): - out_ch = nf * ch_mult[i_level] - modules.append(wrap(ResnetBlock(in_ch=in_ch + hs_c.pop(), - out_ch=out_ch))) - in_ch = out_ch - - if all_resolutions[i_level] in attn_resolutions: - modules.append(wrap(AttnBlock(channels=in_ch))) - - if progressive != 'none': - if i_level == num_resolutions - 1: - if progressive == 'output_skip': - modules.append(nn.GroupNorm(num_groups=min(in_ch // 4, 32), - num_channels=in_ch, eps=1e-6)) - modules.append(conv3x3(in_ch, channels, init_scale=init_scale)) - pyramid_ch = channels - elif progressive == 'residual': - modules.append(nn.GroupNorm(num_groups=min(in_ch // 4, 32), - num_channels=in_ch, eps=1e-6)) - modules.append(conv3x3(in_ch, in_ch, bias=True)) - pyramid_ch = in_ch - else: - raise ValueError(f'{progressive} is not a valid name.') - else: - if progressive == 'output_skip': - modules.append(nn.GroupNorm(num_groups=min(in_ch // 4, 32), - num_channels=in_ch, eps=1e-6)) - modules.append(conv3x3(in_ch, channels, bias=True, init_scale=init_scale)) - pyramid_ch = channels - elif progressive == 'residual': - modules.append(pyramid_upsample(in_ch=pyramid_ch, out_ch=in_ch)) - pyramid_ch = in_ch - else: - raise ValueError(f'{progressive} is not a valid name') - - if i_level != 0: - if resblock_type == 'ddpm': - modules.append(Upsample(in_ch=in_ch)) - else: - modules.append(wrap(ResnetBlock(in_ch=in_ch, up=True))) - - assert not hs_c - - if progressive != 'output_skip': - modules.append(nn.GroupNorm(num_groups=min(in_ch // 4, 32), - num_channels=in_ch, eps=1e-6)) - modules.append(conv3x3(in_ch, channels, init_scale=init_scale)) - - self.all_modules = nn.ModuleList(modules) - - - mapping_layers = [PixelNorm(), - dense(config.nz, z_emb_dim), - self.act,] - for _ in range(config.n_mlp): - mapping_layers.append(dense(z_emb_dim, z_emb_dim)) - mapping_layers.append(self.act) - self.z_transform = nn.Sequential(*mapping_layers) - - - def forward(self, x, time_cond, z, cond=None): - # timestep/noise_level embedding; only for continuous training - zemb = self.z_transform(z) - modules = self.all_modules - m_idx = 0 - if self.embedding_type == 'fourier': - # Gaussian Fourier features embeddings. - used_sigmas = time_cond - temb = modules[m_idx](torch.log(used_sigmas)) - m_idx += 1 - - elif self.embedding_type == 'positional': - # Sinusoidal positional embeddings. - timesteps = time_cond - - temb = layers.get_timestep_embedding(timesteps, self.nf) - - else: - raise ValueError(f'embedding type {self.embedding_type} unknown.') - - if cond is not None: - cond_pooled, cond, cond_mask = cond - - if self.conditional: - temb = modules[m_idx](temb) - if cond is not None: - temb = temb + self.cond_proj(cond_pooled) - m_idx += 1 - temb = modules[m_idx](self.act(temb)) - m_idx += 1 - else: - temb = None - - if not self.config.centered: - # If input data is in [0, 1] - x = 2 * x - 1. 
- - # Downsampling block - input_pyramid = None - if self.progressive_input != 'none': - input_pyramid = x - - hs = [modules[m_idx](x)] - m_idx += 1 - #print(self.attn_resolutions) - #self.attn_resolutions = (32,) - for i_level in range(self.num_resolutions): - # Residual blocks for this resolution - for i_block in range(self.num_res_blocks): - #print(hs[-1].shape, temb.shape, zemb.shape, type(modules[m_idx])) - h = modules[m_idx](hs[-1], temb, zemb) - m_idx += 1 - if type(modules[m_idx]) in (layers.CondAttnBlock, CrossAndGlobalAttnBlock, layers.AttnBlock): - #if h.shape[-1] in self.attn_resolutions: - if type(modules[m_idx]) in (layers.CondAttnBlock, CrossAndGlobalAttnBlock): - h = modules[m_idx](h, cond, cond_mask) - else: - h = modules[m_idx](h) - m_idx += 1 - - hs.append(h) - - if i_level != self.num_resolutions - 1: - if self.resblock_type == 'ddpm': - h = modules[m_idx](hs[-1]) - m_idx += 1 - else: - h = modules[m_idx](hs[-1], temb, zemb) - m_idx += 1 - - if self.progressive_input == 'input_skip': - input_pyramid = self.pyramid_downsample(input_pyramid) - h = modules[m_idx](input_pyramid, h) - m_idx += 1 - - elif self.progressive_input == 'residual': - input_pyramid = modules[m_idx](input_pyramid) - m_idx += 1 - if self.skip_rescale: - input_pyramid = (input_pyramid + h) / np.sqrt(2.) - else: - input_pyramid = input_pyramid + h - h = input_pyramid - - hs.append(h) - - h = hs[-1] - h = modules[m_idx](h, temb, zemb) - m_idx += 1 - - if type(modules[m_idx]) in (layers.CondAttnBlock, CrossAndGlobalAttnBlock): - h = modules[m_idx](h, cond, cond_mask) - else: - h = modules[m_idx](h) - m_idx += 1 - h = modules[m_idx](h, temb, zemb) - m_idx += 1 - - pyramid = None - - # Upsampling block - for i_level in reversed(range(self.num_resolutions)): - for i_block in range(self.num_res_blocks + 1): - h = modules[m_idx](torch.cat([h, hs.pop()], dim=1), temb, zemb) - m_idx += 1 - - #if h.shape[-1] in self.attn_resolutions: - if type(modules[m_idx]) in (layers.CondAttnBlock, CrossAndGlobalAttnBlock, layers.AttnBlock): - if type(modules[m_idx]) in (layers.CondAttnBlock, CrossAndGlobalAttnBlock): - h = modules[m_idx](h, cond, cond_mask) - else: - h = modules[m_idx](h) - m_idx += 1 - - if self.progressive != 'none': - if i_level == self.num_resolutions - 1: - if self.progressive == 'output_skip': - pyramid = self.act(modules[m_idx](h)) - m_idx += 1 - pyramid = modules[m_idx](pyramid) - m_idx += 1 - elif self.progressive == 'residual': - pyramid = self.act(modules[m_idx](h)) - m_idx += 1 - pyramid = modules[m_idx](pyramid) - m_idx += 1 - else: - raise ValueError(f'{self.progressive} is not a valid name.') - else: - if self.progressive == 'output_skip': - pyramid = self.pyramid_upsample(pyramid) - pyramid_h = self.act(modules[m_idx](h)) - m_idx += 1 - pyramid_h = modules[m_idx](pyramid_h) - m_idx += 1 - pyramid = pyramid + pyramid_h - elif self.progressive == 'residual': - pyramid = modules[m_idx](pyramid) - m_idx += 1 - if self.skip_rescale: - pyramid = (pyramid + h) / np.sqrt(2.) 
- else: - pyramid = pyramid + h - h = pyramid - else: - raise ValueError(f'{self.progressive} is not a valid name') - - if i_level != 0: - if self.resblock_type == 'ddpm': - h = modules[m_idx](h) - m_idx += 1 - else: - h = modules[m_idx](h, temb, zemb) - m_idx += 1 - - assert not hs - - if self.progressive == 'output_skip': - h = pyramid - else: - h = self.act(modules[m_idx](h)) - m_idx += 1 - h = modules[m_idx](h) - m_idx += 1 - - assert m_idx == len(modules) - - if not self.not_use_tanh: - - return torch.tanh(h) - else: - return h - diff --git a/spaces/merve/fill-in-the-blank/public/hidden-bias/style.css b/spaces/merve/fill-in-the-blank/public/hidden-bias/style.css deleted file mode 100644 index 4b0d163f9dc4af367dc0b84036c5e177b8f4db0b..0000000000000000000000000000000000000000 --- a/spaces/merve/fill-in-the-blank/public/hidden-bias/style.css +++ /dev/null @@ -1,275 +0,0 @@ -/* Copyright 2020 Google LLC. All Rights Reserved. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. -==============================================================================*/ - -.tooltip { - top: -1000px; - position: fixed; - padding: 10px; - background: rgba(255, 255, 255, .90); - border: 1px solid lightgray; - pointer-events: none; - font-family: monospace; - font-size: 14px; - width: 170px; -} -.tooltip-hidden{ - opacity: 0; - transition: all .3s; - transition-delay: .1s; -} - -@media (max-width: 590px){ - div.tooltip{ - bottom: -1px; - width: calc(100%); - left: -1px !important; - right: -1px !important; - top: auto !important; - width: auto !important; - } -} - -/* Ensure the last panel can be activated on tall screens */ -@media (min-height: 1700px){ - #container{ - margin-bottom: 900px; - } -} - -.tooltip span{ - padding: 2px; -} - -svg{ - overflow: visible; -} - -.domain{ - display: none; -} - -text{ - /*pointer-events: none;*/ - text-shadow: 0 1px 0 #fff, 1px 0 0 #fff, 0 -1px 0 #fff, -1px 0 0 #fff; -} - - - - - - -#container{ - position: relative; - width: auto; -} - -#container h3{ - font-weight: 500; -} - -#sections{ - width: 340px; -} - -#sections > div{ - background: white; - opacity: .2; - margin-bottom: 200px; - line-height: 1.4em; -} -#sections > div:last-child{ - padding-bottom: 80vh; -} -#sections > div.graph-scroll-active{ - opacity: 1; -} - -#graph{ - margin-left: 40px; - width: 500px; - position: -webkit-sticky; - position: sticky; - top: 0px; - float: right; -} - -@media (max-width: 925px) { - #graph{ - width: 100%; - margin-left: 0px; - float: none; - } - - #sections{ - width: auto; - position: relative; - margin: 0px auto; - } - - #sections > div{ - background: rgba(255,255,255,.5); - padding: 10px; - border-top: 1px solid; - border-bottom: 1px solid; - margin-bottom: 80vh; - } -} - - -.mono{ - font-family: monospace; -} - - -svg{ - overflow: visible; -} - - - - -.axis{ - font-size: 12px; -} -.axis{ - color: #999; -} -.axis text{ - fill: #999; -} -.axis line{ - stroke: #ccc; -} - -div.axis b{ - margin-bottom: 100px; - display: block; -} - -.axis .blink{ - color: orange; -} - - - - - - -.highlight{ - color: #fff; 
- padding-left: 3px; - padding-right: 3px; - padding-top: 1px; - padding-bottom: 1px; - border-radius: 3px; -} - -/*.highlight.blue{ background: blue; }*/ -/*.highlight.orange{ background: orange; }*/ -.highlight.yellow{ background: #ff0; color: #000; } -.highlight.blue{ background: #8effff; color: #000; } -.highlight.male{ background: #7DDAD3; color: #000; } -.highlight.female{ background: #9B86EF; color: #000; } - -.annotation .highlight{ - padding: 0px; - padding-left: 2px; - padding-right: 2px; - margin-left: -2px; - margin-right: -2px; - border-radius: 3px; - /*height: 12px;*/ - display: inline-block; -} - - -#graph .highlight.yellow, #graph .highlight.blue{ - padding-left: 0px; - padding: 0px; -} - - -.circle{ - background: #eee; - border: 1px solid #ccc; - font-family: monospace; - padding-left: 4.2px; - padding-right: 4.2px; - padding-top: 0px; - padding-bottom: 0px; - - border-radius: 1000px; - width: 20px; - height: 20px; -} - - -.strikethrough{ - text-decoration: line-through; - color: #000; -} - - -.annotation div{ - font-size: 12px; - line-height: 13px; - font-family: 'Google Sans', sans-serif; -} - - -.annotations path{ - fill: none; - stroke: black; - stroke-width: .5px; -} - - -.img-slide img{ - width: 30px; - transform: rotate(-90deg); - margin-left: -10px; - margin-right: -4px; - position: relative; - top: 5px; -} - -.img-slide img:nth-of-type(1){ - transform: rotate(90deg); - margin-left: -10px; - margin-right: -4px; - top: 0px; -} - - - - - -div.axis b{ - margin-bottom: 0px; -} - -div.axis{ - line-height: 14px; -} - - -circle:hover{ - stroke: #000; - stroke-width: 2; -} - - - - diff --git a/spaces/merve/fill-in-the-blank/source/private-and-fair/accuracy-v-privacy-dataset_size.js b/spaces/merve/fill-in-the-blank/source/private-and-fair/accuracy-v-privacy-dataset_size.js deleted file mode 100644 index cd196da1ca712ff733e5e03de4258effba0478a3..0000000000000000000000000000000000000000 --- a/spaces/merve/fill-in-the-blank/source/private-and-fair/accuracy-v-privacy-dataset_size.js +++ /dev/null @@ -1,157 +0,0 @@ -!(async function(){ - var data = await util.getFile('cns-cache/model_grid_test_accuracy.json') - - data = data - .filter(d => util.epsilonExtent[1] <= d.epsilon && d.epsilon <= util.epsilonExtent[0]) - .filter(d => d.dataset_size > 1000) - - // .filter(d => d.dataset_size > 4000) - - // console.log(data) - - var bySize = d3.nestBy(data, d => d.dataset_size) - bySize.forEach((d, i) => { - d.dataset_size = d.key - - d.color = d3.interpolatePlasma(.84- i/6) - if (d.key == 60000){ - d3.selectAll('.tp60').st({background: d.color, padding: 2}) - } - if (d.key == 7500){ - d3.selectAll('.tp75').st({background: d.color, color: '#fff', padding: 2}) - } - - d.label = { - 60000: {pos: [7, 11], textAnchor: 'middle', text: '60,000'}, - 30000: {pos: [7, 11], textAnchor: 'middle', text: '30,000'}, - 15000: {pos: [7, -5], textAnchor: 'start', text: '15,000'}, - 7500: {pos: [0, 8], textAnchor: 'start', text: '7,500'}, - // 3750: {pos: [0, 14], textAnchor: 'end', text: '3,750 training points'}, - 3750: {pos: [-34, 10], textAnchor: 'start', text: '3,750'}, - 2000: {pos: [-50, 10], textAnchor: 'end', text: '2,000 training points'}, - }[d.key] - - d.forEach(e => e.size = d) - }) - - var sel = d3.select('.accuracy-v-privacy-dataset_size').html('') - .at({role: 'graphics-document', 'aria-label': `High privacy and accuracy requires more training data. 
Line chart showing too much differential privacy without enough data decreases accuracy.`}) - - sel.append('div.chart-title').text('High privacy and accuracy requires more training data') - - var c = d3.conventions({ - sel, - height: 400, - margin: {bottom: 125, top: 5}, - layers: 'sd', - }) - - c.x = d3.scaleLog().domain(util.epsilonExtent).range(c.x.range()) - c.xAxis = d3.axisBottom(c.x).tickFormat(d => { - var rv = d + '' - if (rv.split('').filter(d => d !=0 && d != '.')[0] == 1) return rv - }) - - c.yAxis.tickFormat(d => d3.format('.0%')(d))//.ticks(8) - - d3.drawAxis(c) - util.addAxisLabel(c, 'Higher Privacy →', 'Test Accuracy') - util.ggPlotBg(c, false) - c.layers[1].append('div') - .st({fontSize: 12, color: '#555', width: 120*2, textAlign: 'center', lineHeight: '1.3em'}) - .translate([c.width/2 - 120, c.height + 70]) - .html('in ε, a measure of how much modifying a single training point can change the model (models with a lower ε are more private)') - - - c.svg.selectAll('.y .tick').filter(d => d == .9) - .select('text').st({fontWeight: 600}).parent() - .append('path') - .at({stroke: '#000', strokeDasharray: '2 2', d: 'M 0 0 H ' + c.width}) - - var line = d3.line() - .x(d => c.x(d.epsilon)) - .y(d => c.y(d.accuracy)) - .curve(d3.curveMonotoneX) - - - var lineSel = c.svg.append('g').appendMany('path.accuracy-line', bySize) - .at({ - d: line, - fill: 'none', - }) - .st({ stroke: d => d.color, }) - .on('mousemove', setActiveDigit) - - var circleSel = c.svg.append('g') - .appendMany('g.accuracy-circle', data) - .translate(d => [c.x(d.epsilon), c.y(d.accuracy)]) - .on('mousemove', setActiveDigit) - // .call(d3.attachTooltip) - - circleSel.append('circle') - .at({r: 4, stroke: '#fff'}) - .st({fill: d => d.size.color }) - - - var labelSel = c.svg.appendMany('g.accuracy-label', bySize) - .translate(d => [c.x(d[0].epsilon), c.y(d[0].accuracy)]) - labelSel.append('text') - .filter(d => d.label) - .translate(d => d.label.pos) - .st({fill: d => d.color, fontWeight: 400}) - .at({textAnchor: d => d.label.textAnchor, fontSize: 14, fill: '#000', dy: '.66em'}) - .text(d => d.label.text) - .filter(d => d.key == 2000) - .text('') - .tspans(d => d.label.text.split(' ')) - - - c.svg.append('text.annotation') - .translate([225, 106]) - .tspans(d3.wordwrap('With limited data, adding more differential privacy improves accuracy...', 25), 12) - - c.svg.append('text.annotation') - .translate([490, 230]) - .tspans(d3.wordwrap(`...until it doesn't`, 20)) - - // setActiveDigit({dataset_size: 60000}) - function setActiveDigit({dataset_size}){ - lineSel - .classed('active', 0) - .filter(d => d.dataset_size == dataset_size) - .classed('active', 1) - .raise() - - circleSel - .classed('active', 0) - .filter(d => d.dataset_size == dataset_size) - .classed('active', 1) - .raise() - - labelSel - .classed('active', 0) - .filter(d => d.dataset_size == dataset_size) - .classed('active', 1) - } -})() - - - - -// aVal: 0.5 -// accuracy: 0.8936 -// accuracy_0: 0.9663265306122449 -// accuracy_1: 0.9806167400881057 -// accuracy_2: 0.9011627906976745 -// accuracy_3: 0.8633663366336634 -// accuracy_4: 0.8859470468431772 -// accuracy_5: 0.8733183856502242 -// accuracy_6: 0.9384133611691023 -// accuracy_7: 0.8657587548638133 -// accuracy_8: 0.8059548254620124 -// accuracy_9: 0.8434093161546086 -// dataset_size: 60000 -// epochs: 4 -// epsilon: 0.19034890168775565 -// l2_norm_clip: 0.75 -// noise_multiplier: 2.6 diff --git a/spaces/merve/uncertainty-calibration/source/dataset-worldviews/shape-params.js 
b/spaces/merve/uncertainty-calibration/source/dataset-worldviews/shape-params.js deleted file mode 100644 index b36a500b99b8789ffe044a738c86e1459317974a..0000000000000000000000000000000000000000 --- a/spaces/merve/uncertainty-calibration/source/dataset-worldviews/shape-params.js +++ /dev/null @@ -1,527 +0,0 @@ -const shapeParams = [ - { - shape_name: "circle", - pointiness: "round", - size: "large", - gt: "shaded", - label: "unshaded", - correctness: "incorrect", - path: "M 25.0 0 A 0.5 0.5 0 0 0 -50 0 M -50 0 A 0.5 0.5 0 0 0 25.0 0", - startX: 47.5, - startY: 84.21875, - endX: 474.5, - endY: 293.828125, - initialX: 50.5, - initialY: 85.21875, - }, - { - shape_name: "circle", - pointiness: "round", - size: "large", - gt: "shaded", - label: "unshaded", - correctness: "incorrect", - path: "M 22.5 0 A 0.5 0.5 0 0 0 -45 0 M -45 0 A 0.5 0.5 0 0 0 22.5 0", - startX: 247, - startY: 433.828125, - endX: 641.5, - endY: 248.828125, - initialX: 575.5, - initialY: 157.21875, - }, - { - shape_name: "circle", - pointiness: "round", - size: "large", - gt: "shaded", - label: "unshaded", - correctness: "incorrect", - path: "M 30.0 0 A 0.5 0.5 0 0 0 -60 0 M -60 0 A 0.5 0.5 0 0 0 30.0 0", - startX: 189.5, - startY: 170.21875, - endX: 799.5, - endY: 325.828125, - initialX: 511.5, - initialY: 75.21875, - }, - { - shape_name: "circle", - pointiness: "round", - size: "large", - gt: "unshaded", - label: "unshaded", - correctness: "correct", - path: "M 25.0 0 A 0.5 0.5 0 0 0 -50 0 M -50 0 A 0.5 0.5 0 0 0 25.0 0", - startX: 37.5, - startY: 440.21875, - endX: 475, - endY: 425.21875, - initialX: 715.5, - initialY: 213.21875, - }, - { - shape_name: "circle", - pointiness: "round", - size: "rt_large", - gt: "unshaded", - label: "unshaded", - correctness: "correct", - path: "M 17.5 0 A 0.5 0.5 0 0 0 -35 0 M -35 0 A 0.5 0.5 0 0 0 17.5 0", - startX: 282, - startY: 207.828125, - endX: 460.5, - endY: 217.21875, - initialX: 280.5, - initialY: 146.21875, - }, - { - shape_name: "circle", - pointiness: "round", - size: "rt_small", - gt: "shaded", - label: "shaded", - correctness: "correct", - path: "M 12.5 0 A 0.5 0.5 0 0 0 -25 0 M -25 0 A 0.5 0.5 0 0 0 12.5 0", - startX: 125.5, - startY: 418.21875, - endX: 715.5, - endY: 76.828125, - initialX: 680.5, - initialY: 147.21875, - }, - { - shape_name: "rect", - pointiness: "pointy", - size: "rt_large", - gt: "unshaded", - label: "shaded", - correctness: "incorrect", - path: "M -45 -15 L 25.0 -15 L 25.0 5.0 L -45 5.0 L -45 -15", - startX: 77.5, - startY: 35.21875, - endX: 712.5, - endY: 124.828125, - initialX: 79.5, - initialY: 35.21875, - }, - { - shape_name: "rect", - pointiness: "pointy", - size: "rt_large", - gt: "unshaded", - label: "unshaded", - correctness: "correct", - path: "M -40 -60 L -20 -70 L 18 3 L -3 12.5 L -40 -60", - startX: 320, - startY: 451.828125, - endX: 707.5, - endY: 339.828125, - initialX: 672.5, - initialY: 104.21875, - }, - { - shape_name: "rect", - pointiness: "pointy", - size: "rt_small", - gt: "shaded", - label: "shaded", - correctness: "correct", - path: "M -30 -15 L 12.5 -15 L 12.5 5.5 L -30 5.5 L -30 -15", - startX: 29.5, - startY: 389.21875, - endX: 774.5, - endY: 78.828125, - initialX: 115.5, - initialY: 234.21875, - }, - { - shape_name: "rect", - pointiness: "pointy", - size: "rt_small", - gt: "unshaded", - label: "unshaded", - correctness: "correct", - path: "M -11 -34 L 4.5 -34 L 4.5 6.0 L -11 6.0 L -11 -34", - startX: 242, - startY: 271.828125, - endX: 574.5, - endY: 391.828125, - initialX: 258.5, - initialY: 230.21875, - }, - { - shape_name: 
"rect", - pointiness: "pointy", - size: "rt_small", - gt: "unshaded", - label: "unshaded", - correctness: "correct", - path: "M -10 -45 L 4.5 -45 L 4.5 6.0 L -10 6.0 L -10 -45", - startX: 76.5, - startY: 177.21875, - endX: 522.5, - endY: 327.828125, - initialX: 89.5, - initialY: 170.21875, - }, - { - shape_name: "rt_circle", - pointiness: "pointy", - size: "rt_large", - gt: "unshaded", - label: "unshaded", - correctness: "correct", - path: "M 25.0 0 M -50 0 L -44 2.0 L -50 3.5 L -44 5.0 L -48 7.5 L -41 8.0 L -45 10.5 L -37 10.5 L -41 14.0 L -34 14.5 L -35 17.5 L -29 16.5 L -28 20.5 L -22 19.5 L -21 22.5 L -14 21.0 L -12 24.0 L -7 22.0 L -4 24.5 L 0 22.5 L 2.0 24.5 L 3.5 21.5 L 5.5 24.0 L 7.5 21.0 L 9.5 22.5 L 9.5 19.5 L 12.5 21.0 L 13.0 17.5 L 16.0 18.5 L 15.5 15.0 L 19.0 15.5 L 17.0 12.5 L 21.0 12.5 L 18.5 10.0 L 22.5 9.5 L 19.5 7.0 L 23.5 6.5 L 20.0 4.5 L 24.0 4.0 L 20.5 2.0 L 25.0 0 L 21.0 -3 L 25.0 -6 L 21.0 -9 L 24.0 -13 L 20.5 -14 L 23.0 -19 L 20.0 -20 L 21.5 -25 L 18.0 -25 L 19.0 -32 L 15.0 -30 L 16.0 -38 L 12.5 -36 L 13.0 -43 L 10.0 -40 L 10.0 -46 L 7.0 -42 L 6.5 -48 L 4.0 -43 L 3.5 -49 L 1.5 -43 L 0 -50 L -3 -43 L -8 -49 L -9 -43 L -15 -48 L -15 -42 L -21 -46 L -21 -40 L -26 -43 L -26 -37 L -31 -39 L -30 -33 L -37 -34 L -35 -28 L -40 -29 L -38 -24 L -44 -25 L -42 -20 L -46 -20 L -44 -15 L -49 -14 L -45 -9 L -50 -6 L -45 -3 L -50 0", - startX: 319, - startY: 290.828125, - endX: 738, - endY: 410.21875, - initialX: 605.5, - initialY: 83.21875, - }, - { - shape_name: "rt_circle", - pointiness: "round", - size: "large", - gt: "shaded", - label: "shaded", - correctness: "correct", - path: "M 26.5 1.0 C 34.0 -75 -43 -70 -36 -34 M -36 -34 C -42 -14 -70 -34 -66 0 V 0 C -66 19.5 -47 26.0 3.5 26.5 C 11.5 28.0 26.0 13.0 26.5 1.0", - startX: 154.5, - startY: 89.21875, - endX: 519.5, - endY: 128.828125, - initialX: 151.5, - initialY: 88.21875, - }, - { - shape_name: "rt_circle", - pointiness: "round", - size: "rt_large", - gt: "unshaded", - label: "unshaded", - correctness: "correct", - path: "M 26.5 1.0 C 34.0 -75 -43 -70 -42 -51 M -42 -51 C -42 -14 -82 -12 -38 -4 V -4 C -9 0 -47 26.0 2.0 24.0 C 16.5 22.0 23.5 12.0 26.5 1.0", - startX: 254, - startY: 368.828125, - endX: 749.5, - endY: 254.828125, - initialX: 497.5, - initialY: 192.21875, - }, - { - shape_name: "rt_circle", - pointiness: "round", - size: "rt_small", - gt: "unshaded", - label: "unshaded", - correctness: "correct", - path: "M 17.0 -9 C 9.5 -44 -1 -65 -40 -34 M -40 -34 C -61 -15 -59 0.5 -38 9.5 C -19 19.0 -47 26.0 8.0 15.5 C 16.5 12.5 23.5 12.0 17.0 -9", - startX: 42.5, - startY: 185.21875, - endX: 664, - endY: 448.21875, - initialX: 410.5, - initialY: 148.21875, - }, - { - shape_name: "rt_circle", - pointiness: "rt_pointy", - size: "rt_large", - gt: "unshaded", - label: "unshaded", - correctness: "correct", - path: "M 14.0 3.5 L -6 0.5 L 15.0 -5 A 0.5 0.5 0 0 0 -48 0 M -48 0 A 0.5 0.5 0 0 0 14.0 3.5", - startX: 48.5, - startY: 252.21875, - endX: 576, - endY: 443.21875, - initialX: 160.5, - initialY: 155.21875, - }, - { - shape_name: "rt_circle", - pointiness: "rt_round", - size: "small", - gt: "shaded", - label: "shaded", - correctness: "correct", - path: "M 6.0 1.5 C 5.5 -3 0 4.5 -3 -1 C -3 -10 2.5 -7 6.0 -4 A 0.5 0.5 0 0 0 -18 0 M -18 0 A 0.5 0.5 0 0 0 6.0 1.5", - startX: 334, - startY: 185.828125, - endX: 652.5, - endY: 83.828125, - initialX: 13.5, - initialY: 232.21875, - }, - { - shape_name: "rt_circle", - pointiness: "rt_round", - size: "small", - gt: "shaded", - label: "shaded", - correctness: "correct", - path: "M -10 0 A 
0.5 0.5 0 0 0 5.0 0 C 5.0 -12 3.5 -17 0 -10 C -7 -17 -10 -12 -10 0", - startX: 318, - startY: 355.828125, - endX: 581, - endY: 145.21875, - initialX: 293.5, - initialY: 190.21875, - }, - { - shape_name: "rt_circle", - pointiness: "rt_round", - size: "small", - gt: "shaded", - label: "shaded", - correctness: "correct", - path: "M -10 0 A 0.5 0.5 0 0 0 4.5 -3 C 5.5 0 6.5 4.5 7.5 0.5 C 7.5 -11 2.5 -18 -7 -11 C 3.5 -4 -10 -12 -10 0", - startX: 80, - startY: 308.828125, - endX: 731.5, - endY: 42.828125, - initialX: 621.5, - initialY: 132.21875, - }, - { - shape_name: "rt_circle", - pointiness: "rt_round", - size: "small", - gt: "shaded", - label: "unshaded", - correctness: "incorrect", - path: "M 0 10.0 C -20 7.5 -20 -5 -6 -15 L 2.5 -15 C 10.0 -5 10.0 7.5 0 10.0", - startX: 199.5, - startY: 50.21875, - endX: 719.5, - endY: 458.828125, - initialX: 246.5, - initialY: 59.21875, - }, - { - shape_name: "rt_rect", - pointiness: "rt_pointy", - size: "rt_large", - gt: "shaded", - label: "unshaded", - correctness: "incorrect", - path: "M 0 20.0 C -50 15.0 -10 35.0 -20 -45 L 10.0 -45 C 5.0 35.0 25.0 15.0 0 20.0", - startX: 93.5, - startY: 261.21875, - endX: 807.5, - endY: 250.828125, - initialX: 57.5, - initialY: 189.21875, - }, - { - shape_name: "rt_rect", - pointiness: "rt_pointy", - size: "rt_large", - gt: "unshaded", - label: "unshaded", - correctness: "correct", - path: "M 27.5 7.0 C -50 15.0 -39 33.5 -37 9.5 S -76 -1 -45 -21 C 11.0 -51 23.0 -52 27.5 7.0", - startX: 284.5, - startY: 152.21875, - endX: 544.5, - endY: 230.828125, - initialX: 411.5, - initialY: 73.21875, - }, - { - shape_name: "rt_rect", - pointiness: "rt_pointy", - size: "rt_large", - gt: "unshaded", - label: "unshaded", - correctness: "correct", - path: "M -25 -30 L 10.0 -30 C 22.5 0 22.5 0 10.0 15.0 L -25 15.0 C 0 0 0 0 -25 -30", - startX: 219.5, - startY: 99.21875, - endX: 525.5, - endY: 381.828125, - initialX: 213.5, - initialY: 96.21875, - }, - { - shape_name: "rt_rect", - pointiness: "rt_pointy", - size: "rt_large", - gt: "unshaded", - label: "unshaded", - correctness: "correct", - path: "M -25 -50 L 10.0 -50 C 0 0 22.5 0 10.0 25.0 L -25 25.0 C 0 0 -45 0 -25 -50", - startX: 79.5, - startY: 380.21875, - endX: 565.5, - endY: 298.828125, - initialX: 719.5, - initialY: 87.21875, - }, - { - shape_name: "rt_triangle", - pointiness: "rt_pointy", - size: "large", - gt: "shaded", - label: "unshaded", - correctness: "incorrect", - path: "M -45 -50 L 22.5 -50 L 0 34.5 C 0 0 -45 0 -45 -50", - startX: 325.5, - startY: 94.21875, - endX: 636.5, - endY: 360.828125, - initialX: 324.5, - initialY: 88.2, - }, - { - shape_name: "rt_triangle", - pointiness: "rt_round", - size: "large", - gt: "shaded", - label: "unshaded", - correctness: "incorrect", - path: "M -47 15.0 L -15 -56 C -7 -82 41.5 15.5 28.0 15.5 C 0 15.5 0 15.5 -47 15.0", - startX: 191, - startY: 283.828125, - endX: 796, - endY: 448.21875, - initialX: 349.5, - initialY: 223.21875, - }, - { - shape_name: "rt_triangle", - pointiness: "rt_round", - size: "large", - gt: "unshaded", - label: "shaded", - correctness: "incorrect", - path: "M 21.0 17.5 L -43 17.5 C -31 -26 9.5 -44 16.0 -69 C 24.5 -80 15.5 -12 21.0 17.5", - startX: 163.5, - startY: 446.21875, - endX: 794.5, - endY: 134.828125, - initialX: 622.5, - initialY: 210.21875, - }, - { - shape_name: "rt_triangle", - pointiness: "rt_round", - size: "rt_large", - gt: "shaded", - label: "shaded", - correctness: "correct", - path: "M -20 -35 L -20 10 L 25 10 C 25 5 25 5 20 5 C 20 0 20 0 15 0 C 15 -5 15 -5 10 -5 C 10 -10 10 -10 5 -10 C 5 -15 5 
-15 0 -15 C 0 -20 0 -20 -5 -20 C -5 -25 -5 -25 -10 -25 C -10 -30 -10 -30 -15 -30 C -15 -35 -15 -35 -20 -35", - startX: 132, - startY: 350.828125, - endX: 643.5, - endY: 149.828125, - initialX: 190.5, - initialY: 240.21875, - }, - { - shape_name: "rt_triangle", - pointiness: "rt_round", - size: "small", - gt: "shaded", - label: "unshaded", - correctness: "incorrect", - path: "M 0 6.5 C 5.0 5.5 8.5 -8 7.5 -10 L -15 -10 C -17 -8 -10 5.5 0 6.5", - startX: 87.5, - startY: 461.21875, - endX: 443.5, - endY: 370.828125, - initialX: 416.5, - initialY: 234.21875, - }, - { - shape_name: "rt_triangle", - pointiness: "rt_round", - size: "small", - gt: "unshaded", - label: "shaded", - correctness: "incorrect", - path: "M 22.5 0 C 22.5 -11.25 11.25 -18.75 0 -15 C 0 -3.75 -11.25 11.25 -8.25 7.5 C -3.75 18.75 11.25 0 22.5 0", - startX: 168, - startY: 330.828125, - endX: 522.5, - endY: 47.828125, - initialX: 402.5, - initialY: 193.21875, - }, - { - shape_name: "triangle", - pointiness: "pointy", - size: "rt_large", - gt: "shaded", - label: "shaded", - correctness: "correct", - path: "M -9 25.0 L 7.5 25.0 L 0 -45 L -9 25.0", - startX: 126.5, - startY: 249.21875, - endX: 433.5, - endY: 135.828125, - initialX: 219.5, - initialY: 183.21875, - }, - { - shape_name: "triangle", - pointiness: "pointy", - size: "rt_small", - gt: "shaded", - label: "shaded", - correctness: "correct", - path: "M -29 5.0 L 15.0 0 L -29 -16 L -29 5.0", - startX: 277.5, - startY: 98.21875, - endX: 596.5, - endY: 70.828125, - initialX: 280.5, - initialY: 103.21875, - }, - { - shape_name: "triangle", - pointiness: "pointy", - size: "rt_small", - gt: "shaded", - label: "shaded", - correctness: "correct", - path: "M 3.5 13.5 L 9.5 -20 L -36 0 L 3.5 13.5", - startX: 257.5, - startY: 53.21875, - endX: 593.5, - endY: 105.828125, - initialX: 546.5, - initialY: 235.21875, - }, - { - shape_name: "triangle", - pointiness: "pointy", - size: "rt_small", - gt: "unshaded", - label: "shaded", - correctness: "incorrect", - path: "M 12.5 10.0 L 0 -35 L -25 10.0 L 12.5 10.0", - startX: 15.5, - startY: 332.8, - endX: 463, - endY: 63.21875, - initialX: 13.5, - initialY: 164.21875, - }, - { - shape_name: "triangle", - pointiness: "pointy", - size: "small", - gt: "shaded", - label: "shaded", - correctness: "correct", - path: "M 4.5 1.5 L 0 -15 L -8 1.5 L 4.5 1.5", - startX: 111, - startY: 180.828125, - endX: 784.5, - endY: 42.828125, - initialX: 195.5, - initialY: 136.21875, - }, -]; diff --git a/spaces/merve/uncertainty-calibration/source/fill-in-the-blank/init.js b/spaces/merve/uncertainty-calibration/source/fill-in-the-blank/init.js deleted file mode 100644 index 2e61759b05c45666ac2013000d8c4da1bc367630..0000000000000000000000000000000000000000 --- a/spaces/merve/uncertainty-calibration/source/fill-in-the-blank/init.js +++ /dev/null @@ -1,426 +0,0 @@ -/* Copyright 2021 Google LLC. All Rights Reserved. - -Licensed under the Apache License, Version 2.0 (the "License"); -you may not use this file except in compliance with the License. -You may obtain a copy of the License at - - http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software -distributed under the License is distributed on an "AS IS" BASIS, -WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -See the License for the specific language governing permissions and -limitations under the License. 
-==============================================================================*/ - -window.ttSel = d3.select('body').selectAppend('div.tooltip.tooltip-hidden') - -window.palette = function palette(min, max){ - // https://blocks.roadtolarissa.com/1wheel/raw/94091c1f8a69d5966e48aef4ac19baf9/index.html?colors=00006e-006a78-00a963-8a8a8a-d5882a-a15142-7f0000&numTicks=255&space=lab&type=basis - var colors = ['#00006e', '#00006e', '#00006f', '#00006f', '#00006f', '#000070', '#000070', '#000170', '#000471', '#000871', '#000b71', '#000f72', '#001272', '#001572', '#001872', '#001b73', '#001e73', '#002173', '#002473', '#002674', '#002974', '#002c74', '#002e74', '#003174', '#003375', '#003675', '#003975', '#003b75', '#003e75', '#004075', '#004375', '#004575', '#004775', '#004a75', '#004c75', '#004f75', '#005175', '#005375', '#005675', '#005875', '#005a75', '#005c75', '#005e75', '#006175', '#006375', '#006574', '#006774', '#006974', '#006b74', '#006d74', '#006f73', '#007173', '#007373', '#007473', '#007672', '#007872', '#007a72', '#007b72', '#007d71', '#007f71', '#008071', '#008270', '#008370', '#008570', '#008670', '#00886f', '#00896f', '#008a6f', '#008c6f', '#008d6e', '#008e6e', '#008f6e', '#00906e', '#00916e', '#00926d', '#00936d', '#00946d', '#00956d', '#00966d', '#00976d', '#00976d', '#00986d', '#00996d', '#00996d', '#009a6d', '#009a6e', '#009b6e', '#009b6e', '#009b6e', '#079c6f', '#119c6f', '#189c6f', '#1e9c70', '#249c70', '#289c70', '#2d9c71', '#319c71', '#359c71', '#399c72', '#3c9c72', '#409c73', '#439c73', '#479b74', '#4a9b74', '#4d9b74', '#509b75', '#539a75', '#569a76', '#599976', '#5c9976', '#5f9976', '#629877', '#659877', '#679777', '#6a9777', '#6d9677', '#6f9678', '#729578', '#749578', '#779478', '#799477', '#7c9377', '#7e9377', '#819277', '#839277', '#859176', '#889176', '#8a9175', '#8c9075', '#8e9074', '#908f73', '#938f73', '#958e72', '#978e71', '#998e70', '#9b8d6f', '#9d8d6e', '#9f8d6d', '#a08c6c', '#a28c6b', '#a48c69', '#a68b68', '#a88b67', '#a98b65', '#ab8a64', '#ac8a63', '#ae8a61', '#af8960', '#b1895f', '#b2895d', '#b4885c', '#b5885a', '#b68859', '#b78757', '#b88756', '#b98755', '#ba8653', '#bb8652', '#bc8550', '#bd854f', '#be854d', '#bf844c', '#bf844b', '#c0834a', '#c08348', '#c18247', '#c18246', '#c28145', '#c28044', '#c28043', '#c27f42', '#c27e41', '#c37e40', '#c27d3f', '#c27c3f', '#c27b3e', '#c27a3d', '#c27a3d', '#c1793c', '#c1783c', '#c1773c', '#c0763b', '#c0753b', '#bf743a', '#bf733a', '#be713a', '#bd703a', '#bd6f39', '#bc6e39', '#bb6d39', '#bb6b38', '#ba6a38', '#b96938', '#b86737', '#b76637', '#b76537', '#b66336', '#b56236', '#b46035', '#b35e35', '#b25d34', '#b15b34', '#b05933', '#af5833', '#ae5632', '#ad5431', '#ad5230', '#ac502f', '#ab4e2f', '#aa4c2e', '#a94a2c', '#a8482b', '#a7462a', '#a64429', '#a54127', '#a43f26', '#a33d24', '#a33a23', '#a23721', '#a1351f', '#a0321e', '#9f2f1c', '#9e2c1a', '#9d2818', '#9c2516', '#9c2114', '#9b1d11', '#9a180f', '#99120d', '#980b0a', '#970207', '#960004', '#950001', '#940000', '#930000', '#920000', '#910000', '#900000', '#8f0000', '#8e0000', '#8e0000', '#8d0000', '#8c0000', '#8b0000', '#8a0000', '#890000', '#880000', '#870000', '#860000', '#850000', '#840000', '#830000', '#820000', '#810000', '#800000'] - - return v => { - var i = d3.clamp(0, (v - min)/(max - min), 1) - return colors[Math.round(i*(colors.length - 1))] - } - - // https://gka.github.io/palettes/#/99|d|00429d,96ffea,d1ea00|d1ea00,ff005e,93003a|1|1 - // https://gka.github.io/palettes/#/99|d|00429d,96ffea,f1f1d2|f1f1d2,ff005e,93003a|1|1 - 
//https://gka.github.io/palettes/#/99|d|00429d,76dfca,d1d1b3|d1d1b3,a787a8,93003a|1|1 - // https://gka.github.io/palettes/#/99|d|76dfca,00429d,000000|000000,93003a,ff005e|1|1 - - // https://gka.github.io/palettes/#/99|d|078977,91a5ff,555555|555555,e2bfe3,980000|0|1 - // https://gka.github.io/palettes/#/99|d|002854,a1ffe1,555555|555555,ffa361,980000|0|1 - // https://gka.github.io/palettes/#/99|d|002854,a1ffe1,616161|616161,f47e2a,9e005c|0|1 - // var nMid = 13 - // var midIndex = Math.floor(colors.length/2) - // var minIndex = midIndex - (nMid - 1)/2 - // var maxIndex = midIndex + (nMid - 1)/2 - // var interpolate = d3.interpolate(colors[minIndex], colors[maxIndex]) - - // d3.range(minIndex, maxIndex + 1).forEach(i => { - // colors[i] = interpolate((i - minIndex)/nMid) - // }) - - // return d => { - // var rv = d3.interpolateGreys(d/2 + 2/2) - // if (rv == 'rgb(255, 255, 255)') rv = 'rgb(254, 254, 254)' - // return rv - // } - -} -window.util = { - palette, - color: d3.interpolateSpectral, - color: palette(0, 1), -} -window.util.colors = [1 - .25, .25].map(util.color) -window.util.colors.push('#aaaa00') - -!(function(){ - var memo = {} - - util.color2array = d => { - if (memo[d]) return memo[d] - - var {r, g, b} = d3.color(d).rgb() - return memo[d] = [r, g, b].map(v => v/255) - } -})() - - -// add colors to inline elements -!(function(){ - d3.selectAll('c0').st({fontWeight: 600, color: util.colors[0]}) - d3.selectAll('c1').st({fontWeight: 600, color: util.colors[1]}) - d3.selectAll('c2').st({fontWeight: 600, color: util.colors[2]}) -})() - - - -window.pairs = [ - { - class: 'texas-ohio', - s0: 'In New York, they like to buy _.', - s1: 'In Texas, they like to buy _.', - count: 30, - annotations: [ - { - str: 'BERT associates these potential purchases more with Texas
        than New York...', - pos: [15, 15], - color: util.colors[1] - }, - { - str: '...and these purchases
        more with New York
        than Texas', - pos: [290, 305], - color: util.colors[0] - }, - ], - ariaLabel: 'Scatter plot of differences in purchases between New York and Texas. Oil, cotten and land are associated more with Texas; Pictures and perfume are more associated with New York', - alts: [ - { - str: 'Ireland v. Australia', - s1: 'We went to Ireland and bought a _.', - s0: 'We went to Australia and bought a _.', - }, - { - str: 'Arctic v. Equator', - s1: 'Near the Arctic, they like to buy _.', - s0: 'Near the equator, they like to buy _.', - }, - { - str: 'Coast v. Plains', - s1: 'On the coast, they like to buy _.', - s0: 'On the plains, they like to buy _.', - }, - { - str: 'Narnia v. Gotham', - s1: 'In Narnia, they bought a _.', - s0: 'In Gotham, they bought a _.', - }, - { - str: 'Supermarket v. Mall', - s1: 'At the supermarket, they like to buy _.', - s0: 'At the mall, they like to buy _.', - }, - // { - // str: 'Train v. Plane', - // s1: 'At the airport, they like to buy _.', - // s0: 'At the bus depot, they like to buy _.', - // }, - // { - // str: 'buy v. sell', - // s0: 'They like to buy _.', - // s1: 'We like to buy _.', - // }, - // { - // str: 'Paris v. London', - // s1: 'In Paris, they like to buy _.', - // s0: 'In London, they like to buy _.', - // }, - ] - // type: 'Differences', - }, - { - class: 'age-name', - s0: 'Elsie was born in the year of _.', - s1: 'Lauren was born in the year of _.', - count: 200, - ariaLabel: 'Scatter plot of differences in birth years between Elsie and Lauren.', - }, - { - class: 'jim-jane', - s0: 'Jim worked as a _.', - s1: 'Jane worked as a _.', - count: 30, - ariaLabel: 'Scatter plot of differences in occupations between Jim and Jane. Salesmen, carpenter and mechanic are more associated with Jim; Nurse, secretary and modal are more associated with Jane.', - }, - { - class: 'nurse-name', - s0: 'The new nurse was named _.', - s1: 'The new doctor was named _.', - count: 30, - ariaLabel: 'Scatter plot of differences in doctor and nurse names. David, Michael and himself are more associated with doctors; Jean, Sarah and Catherine are more associated with nurses.', - - }, - { - class: 'nurse-name-zari-cda', - s0: 'The new nurse was named _.', - s1: 'The new doctor was named _.', - s0: 'The doctor performed CPR even though _ knew it was too late.', - s1: 'The nurse performed CPR even though _ knew it was too late.', - s0model: '_zari_cda', - s1model: '_zari_cda', - showModel: true, - count: 30, - ariaLabel: 'Scatter plot of differences in doctor and nurse names in the Zari model. He and she are equally associated with both. But Jack, Logan and Andrew are more associated with doctors; Emily, Rachel and Amy are more associated with nurses.', - }, - { - class: 'interesting-pair', - s1: '_ flavored ice cream is tasty.', - s0: '_ flavored ice cream is revolting.', - count: 30, - alts: [ - { - str: 'Dangerous animals', - s1: '_ is a [friendly|dangerous] animal', - s0: '_ is a [friendly|dangerous] animal', - }, - ] - } -] - -pairs.forEach(d => { - d.count = d.count || 200 - d.s0model = d.s0model || '' - d.s1model = d.s1model || '' - d.annotations = d.annotations || [] - d.model = d.s0model ? 
'Zari' : 'BERT' - d.type = d.type || 'Likelihoods' - d.pairStr = JSON.stringify(d) -}) -// pairs = [window.pairs[1]] - - -var diffs = [ - { - s0: 'In [Texas|Paris], [Men|Women] like to buy _.', - s0: 'Born in [1940|2018], [his|her] name was _.', - s0: 'In [1908|2018], [he|she] was employed as a _.', - class: 'difference-difference', - count: 1000, - annotations: [], - model: 'BERT', - type: 'Likelihoods', - ariaLabel: 'Small multiple difference in difference plots.', - } -] - -diffs.forEach(d => { - d.pairStr = JSON.stringify(d) -}) - - -window.sents = [ - { - class: 'hamlet', - str: 'To be or not to be, that is the question;', - }, -] -sents.push({class: 'texas', str: pairs[0].s1.replace('_', 'things')}) -sents.push({class: 'new-york', str: pairs[0].s0.replace('_', 'things')}) - - -window.init = async function(){ - try { window.regltick.cancel() } catch (e) {} - - if (!window.tokenizer){ - window.tokenizer = new BertTokenizer() - await tokenizer.load() - } - - if (!window.bertLargeVocab){ - var text = await (await fetch('data/bert_large_vocab.txt')).text() - window.bertLargeVocab = text - .split('\n') - } - - sents.forEach(initSent) - sleep(10) - - pairs.forEach(initPair) - sleep(500) - window.initGenderOverTime() - - - // Skip rendering differene in difference until scrolled into view - var renderDiffDiff = false - var observer = new IntersectionObserver(entries => { - entries.forEach(d => { - if (renderDiffDiff || !d.isIntersecting) return - - initDiff(diffs[0]) - renderDiffDiff = true - }) - }, {}) - observer.observe(d3.select('.difference-difference').node()) - if (renderDiffDiff) initDiff(diffs[0]) - - - function sleep(ms) { - return new Promise(resolve => setTimeout(resolve, ms)) - } -} - -// Run init, rerun when width changes -!(function(){ - var lastInnerWidth = null - - function resize(){ - if (lastInnerWidth == window.innerWidth) return - lastInnerWidth = window.innerWidth - - window.init() - } - resize() - d3.select(window).on('resize', _.debounce(resize, 500)) -})() - -// Hamlet text entry -!(function(){ - var sel = d3.select('.hamlet-edit').html('') - .st({textAlign: 'center', marginTop: 17}) - .on('keydown', function(){ - sel.classed('changed', 1) - if (d3.event.keyCode != 13) return - d3.event.preventDefault() - - update() - }) - - var sent = sents[0] - - var inputSel = sel.append('textarea').at({cols: 30}) - inputSel.node().value = sent.str - - // sel.append('div') - sel.append('button.button.update').on('click', update).text('Update Sentence') - .st({width: 140, height: 47, marginLeft: 20, marginTop: 0, top: -19, marginRight: 0}) - - - function update(){ - sent.str = inputSel.node().value - - sel.classed('changed', 0) - initSent(sent) - } -})() - - -window.addLockedTooltip = function(sel){ - sel - .on('mouseover', function(d, i){ - ttSel - .html(d) - .select('.footend').remove() - - var x = this.offsetLeft, - y = this.offsetTop, - bb = ttSel.node().getBoundingClientRect(), - left = d3.clamp(20, (x-bb.width/2), window.innerWidth - bb.width - 20), - top = innerHeight + scrollY > y + 20 + bb.height ? 
y + 20 : y - bb.height - 10; - - ttSel.st({left, top}).classed('tooltip-hidden', false) - }) - - sel.on('mousemove',mouseover).on('mouseout', mouseout) - ttSel.on('mousemove', mouseover).on('mouseout', mouseout) - function mouseover(){ - if (window.__ttfade) window.__ttfade.stop() - } - function mouseout(){ - if (window.__ttfade) window.__ttfade.stop() - window.__ttfade = d3.timeout(() => { - ttSel.classed('tooltip-hidden', true) - }, 250) - } -} - -// Footnotes -!(function(){ - var footnums = '¹²³⁴⁵⁶⁷⁸⁹' - - var footendSel = d3.selectAll('.footend') - .each(function(d, i){ - var sel = d3.select(this) - var ogHTML = sel.parent().html() - sel - .at({href: '#footstart-' + i, id: 'footend-' + i}) - .text(footnums[i]) - .datum(ogHTML) - }) - - - var footstartSel = d3.selectAll('.footstart') - .each(function(d, i){ - d3.select(this) - .at({ - href: '#footend-' + i, - }) - .text(footnums[i]) - .datum(footendSel.data()[i]) - .parent().at({id: 'footstart-' + i}) - }) - .call(addLockedTooltip) - -})() - - - - - - - -// // Populate interesting alts -// !(() => { -// var listSel = d3.select('.interesting-list').st({display: 'none'}) - -// var listStr = listSel.text() - -// _.last(pairs).alts = listStr.split('-').map(d => d.trim()).filter(d => d).map(rawStr => { -// var start = rawStr.split('[')[0] -// var end = rawStr.split(']')[1] - -// var [t0, t1] = rawStr.split('[')[1].split(']')[0].split('|') -// var s0 = start + t0 + end -// var s1 = start + t1 + end - -// var str = `
        ${start} -// ${t1}|${t0} -// ${end}
        `.replace('_', '____') - -// return {str, s0, s1} -// }) -// })() - -// // Populate difference in difference -// !(() => { -// var listSel = d3.select('.difference-difference-list').st({display: 'none'}) - -// var listStr = listSel.text() - -// diffs[0].alts = listStr.split('-').map(d => d.trim()).filter(d => d).map(rawStr => { -// var start = rawStr.split('[')[0] -// var end = rawStr.split(']')[1] - -// var [t0, t1] = rawStr.split('[')[1].split(']')[0].split('|') -// var s0 = start + t0 + end -// var s1 = start + t1 + end - -// var str = `
        ${rawStr}
        `.replace('_', '____') - - -// return {str, s0, s1, rawStr} -// }) -// })() diff --git a/spaces/mfrashad/ClothingGAN/models/stylegan2/stylegan2-pytorch/dataset.py b/spaces/mfrashad/ClothingGAN/models/stylegan2/stylegan2-pytorch/dataset.py deleted file mode 100644 index 7713ea2f8bc94d202d2dfbe830af3cb96b1e803d..0000000000000000000000000000000000000000 --- a/spaces/mfrashad/ClothingGAN/models/stylegan2/stylegan2-pytorch/dataset.py +++ /dev/null @@ -1,40 +0,0 @@ -from io import BytesIO - -import lmdb -from PIL import Image -from torch.utils.data import Dataset - - -class MultiResolutionDataset(Dataset): - def __init__(self, path, transform, resolution=256): - self.env = lmdb.open( - path, - max_readers=32, - readonly=True, - lock=False, - readahead=False, - meminit=False, - ) - - if not self.env: - raise IOError('Cannot open lmdb dataset', path) - - with self.env.begin(write=False) as txn: - self.length = int(txn.get('length'.encode('utf-8')).decode('utf-8')) - - self.resolution = resolution - self.transform = transform - - def __len__(self): - return self.length - - def __getitem__(self, index): - with self.env.begin(write=False) as txn: - key = f'{self.resolution}-{str(index).zfill(5)}'.encode('utf-8') - img_bytes = txn.get(key) - - buffer = BytesIO(img_bytes) - img = Image.open(buffer) - img = self.transform(img) - - return img diff --git a/spaces/montagekoko/anything-v3.0/README.md b/spaces/montagekoko/anything-v3.0/README.md deleted file mode 100644 index 15176bed26d36b4f9566c7102a5655e310f76036..0000000000000000000000000000000000000000 --- a/spaces/montagekoko/anything-v3.0/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Anything V3.0 -emoji: 🏃 -colorFrom: gray -colorTo: yellow -sdk: gradio -sdk_version: 3.10.1 -app_file: app.py -pinned: false -duplicated_from: akhaliq/anything-v3.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/mozilla-foundation/youtube_video_similarity/utils/text_cleaning.py b/spaces/mozilla-foundation/youtube_video_similarity/utils/text_cleaning.py deleted file mode 100644 index 3595100c501bf2a7217a387ae5bc2b2c878e605f..0000000000000000000000000000000000000000 --- a/spaces/mozilla-foundation/youtube_video_similarity/utils/text_cleaning.py +++ /dev/null @@ -1,131 +0,0 @@ -from fastcore.basics import listify -from fastcore.utils import compose -import unicodedata -from string import punctuation -import html -from itertools import groupby -import re - -control_char_regex = re.compile(r'[\r\n\t]+') -url_regex = re.compile( - r'((http|https)\:\/\/)?[a-zA-Z0-9\.\/\?\:@\-_=#]+\.([a-zA-Z]){2,6}([a-zA-Z0-9\.\&\/\?\:@\-_=#])*') -username_regex = re.compile(r'(^|[^@\w])@(\w{1,15})\b') - - -def fix_html(text): - tmp_ls = [] - for e in listify(text): - e = e.replace('#39;', "'").replace('amp;', '&').replace('#146;', "'").replace('nbsp;', ' ').replace( - '#36;', '$').replace('\\n', "\n").replace('quot;', "'").replace('
        ', "\n").replace( - '\\"', '"').replace('', ' ').replace(' @.@ ', '.').replace(' @-@ ', '-').replace('...', ' …') - tmp_ls.append(html.unescape(e)) - - text = tmp_ls - return text - - -def remove_control_char(text): - tmp_ls = [] - for e in listify(text): - tmp_ls.append(re.sub(control_char_regex, '.', e)) - - text = tmp_ls - return text - - -def remove_remaining_control_chars(text): - tmp_ls = [] - for e in listify(text): - tmp_ls.append( - ''.join(ch for ch in e if unicodedata.category(ch)[0] != 'C')) - - text = tmp_ls - return text - - -def remove_unicode_symbols(text): - tmp_ls = [] - for e in listify(text): - tmp_ls.append( - ''.join(ch for ch in e if unicodedata.category(ch)[0] != 'So')) - - text = tmp_ls - return text - - -def standardise_punc(text): - transl_table = dict([(ord(x), ord(y)) - for x, y in zip(u"‘’´“”–-", u"'''\"\"--")]) - tmp_ls = [] - for e in listify(text): - e = e.translate(transl_table) - tmp_ls.append(e) - - text = tmp_ls - return text - - -def remove_news_tags(text): - tmp_ls = [] - for e in listify(text): - e = re.sub(r"(<[A-Z].+?>)|()", "", e) - tmp_ls.append(e) - - text = tmp_ls - return text - - -def replace_urls(text): - filler, tmp_ls = '', [] - for e in listify(text): - e = re.sub(r"()|()|()", "", e) - e = re.sub(url_regex, filler, e) - tmp_ls.append(e) - - text = tmp_ls - return text - - -def replace_usernames(text): - filler, tmp_ls = '', [] - for e in listify(text): - occ = e.count('@') - for _ in range(occ): - e = e.replace('@', f'{filler}') - # replace other user handles by filler - e = re.sub(username_regex, filler, e) - tmp_ls.append(e) - - text = tmp_ls - return text - - -def remove_duplicate_punctuation(text): - tmp_ls = [] - for e in listify(text): - e = re.sub(r'\b(\w+)( \1\b)+', r'\1', e) - punc = set(punctuation) - newtext = [] - for k, g in groupby(e): - if k in punc: - newtext.append(k) - else: - newtext.extend(g) - e = ''.join(newtext) - tmp_ls.append(e) - - text = tmp_ls - return text - - -def remove_multi_space(text): - tmp_ls = [] - for e in listify(text): - tmp_ls.append(' '.join(e.split())) - - text = tmp_ls - return text - - -clean_text_funcs = compose(*[fix_html, remove_control_char, remove_remaining_control_chars, remove_unicode_symbols, - standardise_punc, remove_news_tags, replace_urls, replace_usernames, remove_duplicate_punctuation, remove_multi_space]) diff --git a/spaces/mrfshk/paint-diffusion/README.md b/spaces/mrfshk/paint-diffusion/README.md deleted file mode 100644 index 7d9f46ed087c035b0507ecb47f5dd1d2e00b2e63..0000000000000000000000000000000000000000 --- a/spaces/mrfshk/paint-diffusion/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Paint Diffusion -emoji: 😻 -colorFrom: yellow -colorTo: indigo -sdk: gradio -sdk_version: 3.15.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/mrm8488/PromptSource/promptsource/seqio_tasks/tasks.py b/spaces/mrm8488/PromptSource/promptsource/seqio_tasks/tasks.py deleted file mode 100644 index 6b39719e11b615a5292138382676c2f48ed935cd..0000000000000000000000000000000000000000 --- a/spaces/mrm8488/PromptSource/promptsource/seqio_tasks/tasks.py +++ /dev/null @@ -1,421 +0,0 @@ -import csv -import functools -from typing import Dict, List, Optional, Tuple - -import datasets -import pkg_resources -import seqio -import t5 -import tensorflow as tf -from t5.data.glue_utils import get_glue_metric, get_super_glue_metric -from t5.evaluation import metrics as mt - -import 
promptsource.templates -from promptsource.seqio_tasks import utils - - -GET_METRICS = { - "BLEU": mt.bleu, - "ROUGE": mt.rouge, - "Span Squad": mt.span_squad, - "Squad": mt.squad, - "Trivia QA": mt.trivia_qa, - "Accuracy": mt.accuracy, - "Sequence Accuracy": mt.sequence_accuracy, - "Pearson Correlation": mt.pearson_corrcoef, - "Spearman Correlation": mt.spearman_corrcoef, - "MultiRC": mt.multirc_f1_over_all_answers, - "AUC": mt.auc, - "COQA F1": mt.coqa_f1, - "Edit Distance": mt.edit_distance, - # "Mean Reciprocal Rank": mt.accuracy, # NOTE not in T5? - "Other": mt.accuracy, - # Missing support for mean_multiclass_f1 etc. which need a num_classes parameter -} - -MAX_EXAMPLES_PER_DATASET = 500_000 - - -def strip_whitespace(output_or_target, example=None, is_target=False): - """Cached tasks from promptsource all have a leading space on the ground-truth targets.""" - return output_or_target.strip() - - -def maybe_get_class_id_postprocessor(template): - if template.get_fixed_answer_choices_list(): - - def postprocess_fn(output_or_target, example=None, is_target=False): - output_or_target = strip_whitespace(output_or_target) - return t5.data.postprocessors.string_label_to_class_id( - output_or_target, label_classes=template.get_fixed_answer_choices_list() - ) - - return postprocess_fn - - else: - return strip_whitespace - - -def get_tf_dataset(split, shuffle_files, seed, dataset_name, subset_name, template, split_mapping): - # HF datasets does not support file-level shuffling - del shuffle_files, seed - dataset = datasets.load_dataset(dataset_name, subset_name) - dataset = dataset[split_mapping[split]] - dataset = utils.apply_template(dataset, template) - return utils.hf_dataset_to_tf_dataset(dataset) - - -def add_task(dataset_name, subset_name, template_name, task_name=None, split_mapping=None): - template = all_templates.get_dataset(dataset_name, subset_name)[template_name] - task_name = task_name or utils.get_task_name(dataset_name, subset_name, template_name) - - if dataset_name == "glue": - metrics = get_glue_metric(subset_name) - elif dataset_name == "super_glue": - if subset_name in ("wsc.fixed", "multirc"): - # TODO: WSC and MultiRC need special pre/postprocesing - metrics = [mt.accuracy] - else: - metrics = get_super_glue_metric(subset_name) - else: - # TODO what if metric is null? 
- metrics = [GET_METRICS[m] for m in template.metadata.metrics] - - dataset_splits = utils.get_dataset_splits(dataset_name, subset_name) - split_mapping = split_mapping or {k: k for k in dataset_splits.keys()} - - dataset_fn = functools.partial( - get_tf_dataset, - seed=None, - dataset_name=dataset_name, - subset_name=subset_name, - template=template, - split_mapping=split_mapping, - ) - data_source = seqio.FunctionDataSource( - dataset_fn, - splits=list(split_mapping.keys()), - num_input_examples={s: dataset_splits[split_mapping[s]].num_examples for s in split_mapping.keys()}, - ) - output_features = { - "inputs": seqio.Feature(t5.data.get_default_vocabulary(), add_eos=False, dtype=tf.int32), - "targets": seqio.Feature(t5.data.get_default_vocabulary(), add_eos=True, dtype=tf.int32), - } - preprocessors = [ - seqio.preprocessors.tokenize, - seqio.preprocessors.append_eos, - seqio.CacheDatasetPlaceholder(required=False), - ] - - # Add train and normal eval tasks - seqio.TaskRegistry.add( - task_name, - data_source, - preprocessors=preprocessors, - output_features=output_features, - metric_fns=metrics, - postprocess_fn=maybe_get_class_id_postprocessor(template), - ) - - # Add rank classification eval task - if template.answer_choices: - rank_classification_preprocessor = functools.partial( - t5.data.preprocessors.rank_classification, - inputs_fn=lambda ex: tf.fill((len(ex["answer_choices"]),), ex["inputs"]), - targets_fn=lambda ex: ex["answer_choices"], - is_correct_fn=lambda ex: tf.equal(ex["answer_choices"], tf.strings.strip(ex["targets"])), - weight_fn=lambda ex: 1.0, - ) - - fixed_choices = template.get_fixed_answer_choices_list() - num_classes = len(fixed_choices) if fixed_choices else None - seqio.TaskRegistry.add( - task_name + "_score_eval", - data_source, - preprocessors=[rank_classification_preprocessor] + preprocessors, - output_features=output_features, - metric_fns=[functools.partial(t5.evaluation.metrics.rank_classification, num_classes=num_classes)], - postprocess_fn=t5.data.postprocessors.rank_classification, - ) - - -datatset_subset_tuple = Tuple[str, Optional[str]] -d4_train: List[datatset_subset_tuple] = [] -d4_eval: List[datatset_subset_tuple] = [] -d3_train_gpt: List[datatset_subset_tuple] = [] -d3_train_sglue: List[datatset_subset_tuple] = [] -bias_fairness_eval: List[datatset_subset_tuple] = [] -gsheet: Dict[datatset_subset_tuple, Dict] = {} -experiment_path = pkg_resources.resource_filename(__name__, "experiment_D4.csv") -with open(experiment_path) as exp_file: - reader = csv.DictReader(exp_file) - for row in reader: - if row["skip"]: - continue - if row["subset"] == "": - row["subset"] = None # to match promptsource.Template object - dataset_subset = (row["HF_name"], row["subset"]) - if row["do_train"] == "TRUE": - d4_train.append(dataset_subset) - if row["do_eval"] == "TRUE": - d4_eval.append(dataset_subset) - if row["D3_do_train"] == "TRUE" and "GPT" in row["seed_paper"]: - d3_train_gpt.append(dataset_subset) - if row["D3_do_train"] == "TRUE" and row["HF_name"] == "super_glue": - d3_train_sglue.append(dataset_subset) - if ( - row["do_eval"] == "TRUE" - and row["task_by_convention"] == "bias_and_fairness" - and row["HF_name"] != "winogender" - ): - bias_fairness_eval.append(dataset_subset) - gsheet[dataset_subset] = row -all_datasets = d4_train + d4_eval + d3_train_gpt + d3_train_sglue + bias_fairness_eval - -all_templates = promptsource.templates.TemplateCollection() -all_templates.remove("anli") # Need to special-case ANLI due to weird split conventions - -# 3 
stages of training/ablation: D4 -> GPT -> SuperGLUE -d4_train_mixture: List[str] = [] # strings are dataset_subset_template -gpt_train_mixture: List[str] = [] -sglue_train_mixture: List[str] = [] -d4_eval_mixture: List[str] = [] -bias_fairness_eval_mixture: List[str] = [] -mixture_cap: Dict[str, int] = {} -single_original_task: Dict[Tuple[str, str], str] = {} -all_original_tasks: List[str] = [] -for dataset_name, subset_name in all_templates.keys: - if (dataset_name, subset_name) not in all_datasets: - all_templates.remove(dataset_name, subset_name) - continue - - dataset = all_templates.get_dataset(dataset_name, subset_name) - num_templates = len(dataset.all_template_names) - train_size = gsheet[(dataset_name, subset_name)]["train_size"] - if train_size == "": - train_size = 0 - else: - train_size = int(train_size) - if train_size > MAX_EXAMPLES_PER_DATASET: - cap = MAX_EXAMPLES_PER_DATASET // num_templates - else: - cap = train_size - for template_name in dataset.all_template_names: - add_task(dataset_name, subset_name, template_name) - - template = dataset[template_name] - - task_name = utils.get_task_name(dataset_name, subset_name, template_name) - - if (dataset_name, subset_name) not in single_original_task and template.metadata.original_task: - single_original_task[(dataset_name, subset_name)] = task_name - - if template.metadata.original_task: - all_original_tasks.append(task_name) - - if (dataset_name, subset_name) in d4_train: - d4_train_mixture.append(task_name) - mixture_cap[task_name] = cap - if (dataset_name, subset_name) in d3_train_gpt: - gpt_train_mixture.append(task_name) - mixture_cap[task_name] = cap - if (dataset_name, subset_name) in d3_train_sglue: - sglue_train_mixture.append(task_name) - mixture_cap[task_name] = cap - if (dataset_name, subset_name) in d4_eval: - if template.metadata.original_task: - d4_eval_mixture.append(task_name) - # TODO use template.metadata.answer_choices here for rank eval - if (dataset_name, subset_name) in bias_fairness_eval: - bias_fairness_eval_mixture.append(task_name) - -# Special case for ANLI, which has weirdly-named splits and rounds that should be subsets -dataset_name, subset_name = ("anli", None) -dataset = all_templates.get_dataset(dataset_name, subset_name) -for anli_round in ("r1", "r2", "r3"): - for template_name in all_templates.get_dataset(dataset_name, subset_name).all_template_names: - task_name = utils.get_task_name(dataset_name, subset_name, template_name) + f"_{anli_round}" - split_mapping = { - "train": f"train_{anli_round}", - "validation": f"dev_{anli_round}", - "test": f"test_{anli_round}", - } - add_task(dataset_name, subset_name, template_name, task_name, split_mapping) - - template = dataset[template_name] - if template.metadata.original_task: - d4_eval_mixture.append(task_name) # TODO or add to ANLI special mixture - # TODO use template.metadata.answer_choices here for rank eval - - -TASK_BLACKLIST = [ - # Tasks which often tokenize to > 1024 tokens currently - "hotpot_qa_distractor_Generate_Explanations", - "hotpot_qa_fullwiki_Generate_Explanations", - "hotpot_qa_distractor_Generate_Answer_and_Explanations", - "hotpot_qa_fullwiki_Generate_Answer_and_Explanations", - "hotpot_qa_fullwiki_Generate_Answer", - "hotpot_qa_distractor_Generate_Answer", - "hotpot_qa_distractor_Generate_Title_2", - "hotpot_qa_fullwiki_Generate_Title_2", - "hotpot_qa_fullwiki_Generate_Title_1", - "hotpot_qa_distractor_Generate_Title_1", - "hotpot_qa_distractor_Generate_Question", - "hotpot_qa_fullwiki_Generate_Question", - 
"tab_fact_tab_fact_tab_fact_3", - "tab_fact_tab_fact_tab_fact_2", - "tab_fact_tab_fact_tab_fact_1", - "tab_fact_tab_fact_tab_fact_7", - "tab_fact_tab_fact_tab_fact_4", - "tab_fact_tab_fact_tab_fact_5", - "tab_fact_tab_fact_tab_fact_6", - "wiki_hop_masked_Choose_Best_Object_Candidate", - "wiki_hop_masked_Indirect_Question_about_Birthplace_Citizenship_Place_of_Death", - "narrativeqa_Template_05", - "ecthr_cases_alleged_violation_prediction_silver_rationales", - # Tasks with broken cached files - "gigaword_summarize_", -] - -# Tasks that failed caching (won't try to fix them for now) - remove when we are done -D4_TRAIN_SCORE_EVAL_TASK_BLACKLIST = [ - "amazon_polarity_Is_this_product_review_positive_score_eval", - "amazon_polarity_Is_this_review_negative_score_eval", - "amazon_polarity_Is_this_review_score_eval", - "amazon_polarity_User_recommend_this_product_score_eval", - "amazon_polarity_convey_negative_or_positive_sentiment_score_eval", - "amazon_polarity_flattering_or_not_score_eval", - "amazon_polarity_negative_or_positive_tone_score_eval", - "amazon_polarity_user_satisfied_score_eval", - "amazon_polarity_would_you_buy_score_eval", - "dbpedia_14_given_a_choice_of_categories__score_eval", - "dbpedia_14_given_list_what_category_does_the_paragraph_belong_to_score_eval", - "dbpedia_14_pick_one_category_for_the_following_text_score_eval", - "wiki_hop_original_choose_best_object_affirmative_1_score_eval", - "wiki_hop_original_choose_best_object_affirmative_2_score_eval", - "wiki_hop_original_choose_best_object_affirmative_3_score_eval", - "wiki_hop_original_choose_best_object_interrogative_1_score_eval", - "wiki_hop_original_choose_best_object_interrogative_2_score_eval", -] - -seqio.MixtureRegistry.add( - "d4_train", - [task for task in d4_train_mixture if task not in TASK_BLACKLIST], - default_rate=lambda t: mixture_cap[t.name], -) - -seqio.MixtureRegistry.add( - "gpt_train", - [task for task in gpt_train_mixture if task not in TASK_BLACKLIST], - default_rate=lambda t: mixture_cap[t.name], -) - -seqio.MixtureRegistry.add( - "sglue_train", - [task for task in sglue_train_mixture if task not in TASK_BLACKLIST], - default_rate=lambda t: mixture_cap[t.name], -) - -seqio.MixtureRegistry.add( - "d4_gpt_train", - [task for task in d4_train_mixture + gpt_train_mixture if task not in TASK_BLACKLIST], - default_rate=lambda t: mixture_cap[t.name], -) - -seqio.MixtureRegistry.add( - "d4_gpt_sglue_train", - [task for task in d4_train_mixture + gpt_train_mixture + sglue_train_mixture if task not in TASK_BLACKLIST], - default_rate=lambda t: mixture_cap[t.name], -) - -seqio.MixtureRegistry.add( - "d4_eval", - [task for task in d4_eval_mixture if task not in TASK_BLACKLIST], - default_rate=functools.partial(seqio.mixing_rate_num_examples, maximum=500_000), -) # eval mixture does not need to be capped - - -seqio.MixtureRegistry.add( - "d4_score_eval", - [ - task - for task in seqio.TaskRegistry.names() - if task.endswith("_score_eval") - and task.split("_score_eval")[0] in d4_eval_mixture - and task.split("_score_eval")[0] not in TASK_BLACKLIST - ], - default_rate=functools.partial(seqio.mixing_rate_num_examples, maximum=500_000), -) - -# Train tasks we don't care about evaluating on -D4_TRAIN_SKIP_EVAL = [ - "paws_labeled_final", - "adversarial_qa_dbidaf", - "adversarial_qa_dbert", - "duorc_ParaphraseRC", - "dream", - "amazon_polarity", - "app_reviews", - "imdb", - "wiki_bio", - "gigaword", - "multi_news", - "samsum", - "dbpedia_14", - "trec", -] - -seqio.MixtureRegistry.add( - "d4_train_eval", - [ - task - 
for task in d4_train_mixture - if task not in TASK_BLACKLIST - and not any([skip in task for skip in D4_TRAIN_SKIP_EVAL]) - and task in all_original_tasks - ], - default_rate=lambda t: mixture_cap[t.name], -) - -seqio.MixtureRegistry.add( - "d4_train_score_eval", - [ - task - for task in seqio.TaskRegistry.names() - if task.endswith("_score_eval") - and task.split("_score_eval")[0] in d4_train_mixture - and task.split("_score_eval")[0] not in TASK_BLACKLIST - and task not in D4_TRAIN_SCORE_EVAL_TASK_BLACKLIST - and not any([skip in task for skip in D4_TRAIN_SKIP_EVAL]) - and task.split("_score_eval")[0] in all_original_tasks - ], - default_rate=functools.partial(seqio.mixing_rate_num_examples, maximum=500_000), -) - -seqio.MixtureRegistry.add( - "d4_train_one_og_prompt", - [task for task in single_original_task.values() if task in d4_train_mixture and task not in TASK_BLACKLIST], - default_rate=lambda t: mixture_cap[t.name], -) - -seqio.MixtureRegistry.add( - "d4_train_all_og_prompts", - [task for task in all_original_tasks if task in d4_train_mixture and task not in TASK_BLACKLIST], - default_rate=lambda t: mixture_cap[t.name], -) - -seqio.MixtureRegistry.add( - "bias_fairness_eval", - bias_fairness_eval_mixture, - default_rate=functools.partial(seqio.mixing_rate_num_examples, maximum=500_000), -) - -seqio.MixtureRegistry.add( - "bias_fairness_eval_score_eval", - [ - task - for task in seqio.TaskRegistry.names() - if task.endswith("_score_eval") and task.split("_score_eval")[0] in bias_fairness_eval_mixture - ], - default_rate=functools.partial(seqio.mixing_rate_num_examples, maximum=500_000), -) diff --git a/spaces/msmilauer/AutoGPT-duplicated2/autogpt/config/__init__.py b/spaces/msmilauer/AutoGPT-duplicated2/autogpt/config/__init__.py deleted file mode 100644 index 726b6dcf3da95968b948c4d897e97a9cdd0928ff..0000000000000000000000000000000000000000 --- a/spaces/msmilauer/AutoGPT-duplicated2/autogpt/config/__init__.py +++ /dev/null @@ -1,14 +0,0 @@ -""" -This module contains the configuration classes for AutoGPT. 
-""" -from autogpt.config.ai_config import AIConfig -from autogpt.config.config import Config, check_openai_api_key -from autogpt.config.singleton import AbstractSingleton, Singleton - -__all__ = [ - "check_openai_api_key", - "AbstractSingleton", - "AIConfig", - "Config", - "Singleton", -] diff --git a/spaces/multimodalart/stable-diffusion-inpainting/clipseg/training.py b/spaces/multimodalart/stable-diffusion-inpainting/clipseg/training.py deleted file mode 100644 index ce12cf443f37e2520658614e15d0e64eb554b7f1..0000000000000000000000000000000000000000 --- a/spaces/multimodalart/stable-diffusion-inpainting/clipseg/training.py +++ /dev/null @@ -1,266 +0,0 @@ -import torch -import inspect -import json -import yaml -import math -import os -import sys - -from general_utils import log - -import numpy as np -from functools import partial -from os.path import expanduser, join, isfile, basename - -from torch.cuda.amp import autocast, GradScaler -from torch.optim.lr_scheduler import LambdaLR -from contextlib import nullcontext -from torch.utils.data import DataLoader - -from general_utils import TrainingLogger, get_attribute, filter_args, log, training_config_from_cli_args - - -def cosine_warmup_lr(i, warmup=10, max_iter=90): - """ Cosine LR with Warmup """ - if i < warmup: - return (i+1)/(warmup+1) - else: - return 0.5 + 0.5*math.cos(math.pi*(((i-warmup)/(max_iter- warmup)))) - - -def validate(model, dataset, config): - data_loader = torch.utils.data.DataLoader(dataset, batch_size=4, shuffle=False) - - metric_class, use_metric = config.val_metric_class, config.use_val_metric - loss_fn = get_attribute(config.loss) - - model.eval() - model.cuda() - - if metric_class is not None: - metric = get_attribute(metric_class)() - - with torch.no_grad(): - - i, losses = 0, [] - for data_x, data_y in data_loader: - - data_x = [x.cuda() if isinstance(x, torch.Tensor) else x for x in data_x] - data_y = [x.cuda() if isinstance(x, torch.Tensor) else x for x in data_y] - - prompts = model.sample_prompts(data_x[1], prompt_list=('a photo of a {}',)) - pred, visual_q, _, _ = model(data_x[0], prompts, return_features=True) - - if metric_class is not None: - metric.add([pred], data_y) - - # pred = model(data_x[0], prompts) - # loss = loss_fn(pred[0], data_y[0]) - loss = loss_fn(pred, data_y[0]) - losses += [float(loss)] - - i += 1 - - if config.val_max_iterations is not None and i > config.val_max_iterations: - break - - if use_metric is None: - return np.mean(losses), {}, False - else: - metric_scores = {m: s for m, s in zip(metric.names(), metric.value())} if metric is not None else {} - return np.mean(losses), metric_scores, True - - -def main(): - - config = training_config_from_cli_args() - - val_interval, best_val_loss, best_val_score = config.val_interval, float('inf'), float('-inf') - - model_cls = get_attribute(config.model) - _, model_args, _ = filter_args(config, inspect.signature(model_cls).parameters) - model = model_cls(**model_args).cuda() - - dataset_cls = get_attribute(config.dataset) - _, dataset_args, _ = filter_args(config, inspect.signature(dataset_cls).parameters) - - dataset = dataset_cls(**dataset_args) - - log.info(f'Train dataset {dataset.__class__.__name__} (length: {len(dataset)})') - - if val_interval is not None: - dataset_val_args = {k[4:]: v for k,v in config.items() if k.startswith('val_') and k != 'val_interval'} - _, dataset_val_args, _ = filter_args(dataset_val_args, inspect.signature(dataset_cls).parameters) - print('val args', {**dataset_args, **{'split': 'val', 'aug': 0}, 
**dataset_val_args}) - - dataset_val = dataset_cls(**{**dataset_args, **{'split': 'val', 'aug': 0}, **dataset_val_args}) - - # optimizer - opt_cls = get_attribute(config.optimizer) - if config.optimize == 'torch.optim.SGD': - opt_args = {'momentum': config.momentum if 'momentum' in config else 0} - else: - opt_args = {} - opt = opt_cls(model.parameters(), lr=config.lr, **opt_args) - - if config.lr_scheduler == 'cosine': - assert config.T_max is not None and config.eta_min is not None - lr_scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(opt, config.T_max, config.eta_min) - elif config.lr_scheduler == 'warmup_cosine': - lr_scheduler = LambdaLR(opt, partial(cosine_warmup_lr, max_iter=(config.max_iterations), warmup=config.warmup)) - else: - lr_scheduler = None - - batch_size, max_iterations = config.batch_size, config.max_iterations - - loss_fn = get_attribute(config.loss) - - if config.amp: - log.info('Using AMP') - autocast_fn = autocast - scaler = GradScaler() - else: - autocast_fn, scaler = nullcontext, None - - - save_only_trainable = True - data_loader = DataLoader(dataset, batch_size=batch_size, num_workers=4) - - # disable config when hyperparam. opt. to avoid writing logs. - tracker_config = config if not config.hyperparameter_optimization else None - - with TrainingLogger(log_dir=config.name, model=model, config=tracker_config) as logger: - - i = 0 - while True: - for data_x, data_y in data_loader: - - # between caption and output feature. - # 1. Sample random captions - # 2. Check alignment with CLIP - - # randomly mix text and visual support conditionals - if config.mix: - - assert config.mask.startswith('text_and') - - with autocast_fn(): - # data_x[1] = text label - prompts = model.sample_prompts(data_x[1]) - - # model.clip_model() - - text_cond = model.compute_conditional(prompts) - if model.__class__.__name__ == 'CLIPDensePredTMasked': - # when mask=='separate' - visual_s_cond, _, _ = model.visual_forward_masked(data_x[2].cuda(), data_x[3].cuda()) - else: - # data_x[2] = visual prompt - visual_s_cond, _, _ = model.visual_forward(data_x[2].cuda()) - - max_txt = config.mix_text_max if config.mix_text_max is not None else 1 - batch_size = text_cond.shape[0] - - # sample weights for each element in batch - text_weights = torch.distributions.Uniform(config.mix_text_min, max_txt).sample((batch_size,))[:, None] - text_weights = text_weights.cuda() - - if dataset.__class__.__name__ == 'PhraseCut': - # give full weight to text where support_image is invalid - visual_is_valid = data_x[4] if model.__class__.__name__ == 'CLIPDensePredTMasked' else data_x[3] - text_weights = torch.max(text_weights[:,0], 1 - visual_is_valid.float().cuda()).unsqueeze(1) - - cond = text_cond * text_weights + visual_s_cond * (1 - text_weights) - - else: - # no mix - - if model.__class__.__name__ == 'CLIPDensePredTMasked': - # compute conditional vector using CLIP masking - with autocast_fn(): - assert config.mask == 'separate' - cond, _, _ = model.visual_forward_masked(data_x[1].cuda(), data_x[2].cuda()) - else: - cond = data_x[1] - if isinstance(cond, torch.Tensor): - cond = cond.cuda() - - with autocast_fn(): - visual_q = None - - pred, visual_q, _, _ = model(data_x[0].cuda(), cond, return_features=True) - - loss = loss_fn(pred, data_y[0].cuda()) - - if torch.isnan(loss) or torch.isinf(loss): - # skip if loss is nan - log.warning('Training stopped due to inf/nan loss.') - sys.exit(-1) - - extra_loss = 0 - loss += extra_loss - - opt.zero_grad() - - if scaler is None: - loss.backward() - opt.step() - 
else: - scaler.scale(loss).backward() - scaler.step(opt) - scaler.update() - - if lr_scheduler is not None: - lr_scheduler.step() - if i % 2000 == 0: - current_lr = [g['lr'] for g in opt.param_groups][0] - log.info(f'current lr: {current_lr:.5f} ({len(opt.param_groups)} parameter groups)') - - logger.iter(i=i, loss=loss) - i += 1 - - if i >= max_iterations: - - if not isfile(join(logger.base_path, 'weights.pth')): - # only write if no weights were already written - logger.save_weights(only_trainable=save_only_trainable) - - sys.exit(0) - - - if config.checkpoint_iterations is not None and i in config.checkpoint_iterations: - logger.save_weights(only_trainable=save_only_trainable, weight_file=f'weights_{i}.pth') - - - if val_interval is not None and i % val_interval == val_interval - 1: - - val_loss, val_scores, maximize = validate(model, dataset_val, config) - - if len(val_scores) > 0: - - score_str = f', scores: ' + ', '.join(f'{k}: {v}' for k, v in val_scores.items()) - - if maximize and val_scores[config.use_val_metric] > best_val_score: - logger.save_weights(only_trainable=save_only_trainable) - best_val_score = val_scores[config.use_val_metric] - - elif not maximize and val_scores[config.use_val_metric] < best_val_score: - logger.save_weights(only_trainable=save_only_trainable) - best_val_score = val_scores[config.use_val_metric] - - else: - score_str = '' - # if no score is used, fall back to loss - if val_loss < best_val_loss: - logger.save_weights(only_trainable=save_only_trainable) - best_val_loss = val_loss - - log.info(f'Validation loss: {val_loss}' + score_str) - logger.iter(i=i, val_loss=val_loss, extra_loss=float(extra_loss), **val_scores) - model.train() - - print('epoch complete') - - -if __name__ == '__main__': - main() \ No newline at end of file diff --git a/spaces/mygyasir/genious_bgremover/carvekit/ml/arch/tracerb7/att_modules.py b/spaces/mygyasir/genious_bgremover/carvekit/ml/arch/tracerb7/att_modules.py deleted file mode 100644 index 07e47403907753c2a873d35fa5a9336740f5d91b..0000000000000000000000000000000000000000 --- a/spaces/mygyasir/genious_bgremover/carvekit/ml/arch/tracerb7/att_modules.py +++ /dev/null @@ -1,290 +0,0 @@ -""" -Source url: https://github.com/Karel911/TRACER -Author: Min Seok Lee and Wooseok Shin -License: Apache License 2.0 -""" -import torch -import torch.nn as nn -import torch.nn.functional as F - -from carvekit.ml.arch.tracerb7.conv_modules import BasicConv2d, DWConv, DWSConv - - -class RFB_Block(nn.Module): - def __init__(self, in_channel, out_channel): - super(RFB_Block, self).__init__() - self.relu = nn.ReLU(True) - self.branch0 = nn.Sequential( - BasicConv2d(in_channel, out_channel, 1), - ) - self.branch1 = nn.Sequential( - BasicConv2d(in_channel, out_channel, 1), - BasicConv2d(out_channel, out_channel, kernel_size=(1, 3), padding=(0, 1)), - BasicConv2d(out_channel, out_channel, kernel_size=(3, 1), padding=(1, 0)), - BasicConv2d(out_channel, out_channel, 3, padding=3, dilation=3), - ) - self.branch2 = nn.Sequential( - BasicConv2d(in_channel, out_channel, 1), - BasicConv2d(out_channel, out_channel, kernel_size=(1, 5), padding=(0, 2)), - BasicConv2d(out_channel, out_channel, kernel_size=(5, 1), padding=(2, 0)), - BasicConv2d(out_channel, out_channel, 3, padding=5, dilation=5), - ) - self.branch3 = nn.Sequential( - BasicConv2d(in_channel, out_channel, 1), - BasicConv2d(out_channel, out_channel, kernel_size=(1, 7), padding=(0, 3)), - BasicConv2d(out_channel, out_channel, kernel_size=(7, 1), padding=(3, 0)), - BasicConv2d(out_channel, 
out_channel, 3, padding=7, dilation=7), - ) - self.conv_cat = BasicConv2d(4 * out_channel, out_channel, 3, padding=1) - self.conv_res = BasicConv2d(in_channel, out_channel, 1) - - def forward(self, x): - x0 = self.branch0(x) - x1 = self.branch1(x) - x2 = self.branch2(x) - x3 = self.branch3(x) - x_cat = torch.cat((x0, x1, x2, x3), 1) - x_cat = self.conv_cat(x_cat) - - x = self.relu(x_cat + self.conv_res(x)) - return x - - -class GlobalAvgPool(nn.Module): - def __init__(self, flatten=False): - super(GlobalAvgPool, self).__init__() - self.flatten = flatten - - def forward(self, x): - if self.flatten: - in_size = x.size() - return x.view((in_size[0], in_size[1], -1)).mean(dim=2) - else: - return ( - x.view(x.size(0), x.size(1), -1) - .mean(-1) - .view(x.size(0), x.size(1), 1, 1) - ) - - -class UnionAttentionModule(nn.Module): - def __init__(self, n_channels, only_channel_tracing=False): - super(UnionAttentionModule, self).__init__() - self.GAP = GlobalAvgPool() - self.confidence_ratio = 0.1 - self.bn = nn.BatchNorm2d(n_channels) - self.norm = nn.Sequential( - nn.BatchNorm2d(n_channels), nn.Dropout3d(self.confidence_ratio) - ) - self.channel_q = nn.Conv2d( - in_channels=n_channels, - out_channels=n_channels, - kernel_size=1, - stride=1, - padding=0, - bias=False, - ) - self.channel_k = nn.Conv2d( - in_channels=n_channels, - out_channels=n_channels, - kernel_size=1, - stride=1, - padding=0, - bias=False, - ) - self.channel_v = nn.Conv2d( - in_channels=n_channels, - out_channels=n_channels, - kernel_size=1, - stride=1, - padding=0, - bias=False, - ) - - self.fc = nn.Conv2d( - in_channels=n_channels, - out_channels=n_channels, - kernel_size=1, - stride=1, - padding=0, - bias=False, - ) - - if only_channel_tracing is False: - self.spatial_q = nn.Conv2d( - in_channels=n_channels, - out_channels=1, - kernel_size=1, - stride=1, - padding=0, - bias=False, - ) - self.spatial_k = nn.Conv2d( - in_channels=n_channels, - out_channels=1, - kernel_size=1, - stride=1, - padding=0, - bias=False, - ) - self.spatial_v = nn.Conv2d( - in_channels=n_channels, - out_channels=1, - kernel_size=1, - stride=1, - padding=0, - bias=False, - ) - self.sigmoid = nn.Sigmoid() - - def masking(self, x, mask): - mask = mask.squeeze(3).squeeze(2) - threshold = torch.quantile( - mask.float(), self.confidence_ratio, dim=-1, keepdim=True - ) - mask[mask <= threshold] = 0.0 - mask = mask.unsqueeze(2).unsqueeze(3) - mask = mask.expand(-1, x.shape[1], x.shape[2], x.shape[3]).contiguous() - masked_x = x * mask - - return masked_x - - def Channel_Tracer(self, x): - avg_pool = self.GAP(x) - x_norm = self.norm(avg_pool) - - q = self.channel_q(x_norm).squeeze(-1) - k = self.channel_k(x_norm).squeeze(-1) - v = self.channel_v(x_norm).squeeze(-1) - - # softmax(Q*K^T) - QK_T = torch.matmul(q, k.transpose(1, 2)) - alpha = F.softmax(QK_T, dim=-1) - - # a*v - att = torch.matmul(alpha, v).unsqueeze(-1) - att = self.fc(att) - att = self.sigmoid(att) - - output = (x * att) + x - alpha_mask = att.clone() - - return output, alpha_mask - - def forward(self, x): - X_c, alpha_mask = self.Channel_Tracer(x) - X_c = self.bn(X_c) - x_drop = self.masking(X_c, alpha_mask) - - q = self.spatial_q(x_drop).squeeze(1) - k = self.spatial_k(x_drop).squeeze(1) - v = self.spatial_v(x_drop).squeeze(1) - - # softmax(Q*K^T) - QK_T = torch.matmul(q, k.transpose(1, 2)) - alpha = F.softmax(QK_T, dim=-1) - - output = torch.matmul(alpha, v).unsqueeze(1) + v.unsqueeze(1) - - return output - - -class aggregation(nn.Module): - def __init__(self, channel): - super(aggregation, 
self).__init__() - self.relu = nn.ReLU(True) - - self.upsample = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=True) - self.conv_upsample1 = BasicConv2d(channel[2], channel[1], 3, padding=1) - self.conv_upsample2 = BasicConv2d(channel[2], channel[0], 3, padding=1) - self.conv_upsample3 = BasicConv2d(channel[1], channel[0], 3, padding=1) - self.conv_upsample4 = BasicConv2d(channel[2], channel[2], 3, padding=1) - self.conv_upsample5 = BasicConv2d( - channel[2] + channel[1], channel[2] + channel[1], 3, padding=1 - ) - - self.conv_concat2 = BasicConv2d( - (channel[2] + channel[1]), (channel[2] + channel[1]), 3, padding=1 - ) - self.conv_concat3 = BasicConv2d( - (channel[0] + channel[1] + channel[2]), - (channel[0] + channel[1] + channel[2]), - 3, - padding=1, - ) - - self.UAM = UnionAttentionModule(channel[0] + channel[1] + channel[2]) - - def forward(self, e4, e3, e2): - e4_1 = e4 - e3_1 = self.conv_upsample1(self.upsample(e4)) * e3 - e2_1 = ( - self.conv_upsample2(self.upsample(self.upsample(e4))) - * self.conv_upsample3(self.upsample(e3)) - * e2 - ) - - e3_2 = torch.cat((e3_1, self.conv_upsample4(self.upsample(e4_1))), 1) - e3_2 = self.conv_concat2(e3_2) - - e2_2 = torch.cat((e2_1, self.conv_upsample5(self.upsample(e3_2))), 1) - x = self.conv_concat3(e2_2) - - output = self.UAM(x) - - return output - - -class ObjectAttention(nn.Module): - def __init__(self, channel, kernel_size): - super(ObjectAttention, self).__init__() - self.channel = channel - self.DWSConv = DWSConv( - channel, channel // 2, kernel=kernel_size, padding=1, kernels_per_layer=1 - ) - self.DWConv1 = nn.Sequential( - DWConv(channel // 2, channel // 2, kernel=1, padding=0, dilation=1), - BasicConv2d(channel // 2, channel // 8, 1), - ) - self.DWConv2 = nn.Sequential( - DWConv(channel // 2, channel // 2, kernel=3, padding=1, dilation=1), - BasicConv2d(channel // 2, channel // 8, 1), - ) - self.DWConv3 = nn.Sequential( - DWConv(channel // 2, channel // 2, kernel=3, padding=3, dilation=3), - BasicConv2d(channel // 2, channel // 8, 1), - ) - self.DWConv4 = nn.Sequential( - DWConv(channel // 2, channel // 2, kernel=3, padding=5, dilation=5), - BasicConv2d(channel // 2, channel // 8, 1), - ) - self.conv1 = BasicConv2d(channel // 2, 1, 1) - - def forward(self, decoder_map, encoder_map): - """ - Args: - decoder_map: decoder representation (B, 1, H, W). - encoder_map: encoder block output (B, C, H, W). 
- Returns: - decoder representation: (B, 1, H, W) - """ - mask_bg = -1 * torch.sigmoid(decoder_map) + 1 # Sigmoid & Reverse - mask_ob = torch.sigmoid(decoder_map) # object attention - x = mask_ob.expand(-1, self.channel, -1, -1).mul(encoder_map) - - edge = mask_bg.clone() - edge[edge > 0.93] = 0 - x = x + (edge * encoder_map) - - x = self.DWSConv(x) - skip = x.clone() - x = ( - torch.cat( - [self.DWConv1(x), self.DWConv2(x), self.DWConv3(x), self.DWConv4(x)], - dim=1, - ) - + skip - ) - x = torch.relu(self.conv1(x)) - - return x + decoder_map diff --git a/spaces/nakamura196/yolov5-ndl-layout/ultralytics/yolov5/utils/loggers/__init__.py b/spaces/nakamura196/yolov5-ndl-layout/ultralytics/yolov5/utils/loggers/__init__.py deleted file mode 100644 index 866bdc4be2f550458359d0505b876af1f4f7ba0a..0000000000000000000000000000000000000000 --- a/spaces/nakamura196/yolov5-ndl-layout/ultralytics/yolov5/utils/loggers/__init__.py +++ /dev/null @@ -1,168 +0,0 @@ -# YOLOv5 🚀 by Ultralytics, GPL-3.0 license -""" -Logging utils -""" - -import os -import warnings -from threading import Thread - -import pkg_resources as pkg -import torch -from torch.utils.tensorboard import SummaryWriter - -from utils.general import colorstr, emojis -from utils.loggers.wandb.wandb_utils import WandbLogger -from utils.plots import plot_images, plot_results -from utils.torch_utils import de_parallel - -LOGGERS = ('csv', 'tb', 'wandb') # text-file, TensorBoard, Weights & Biases -RANK = int(os.getenv('RANK', -1)) - -try: - import wandb - - assert hasattr(wandb, '__version__') # verify package import not local dir - if pkg.parse_version(wandb.__version__) >= pkg.parse_version('0.12.2') and RANK in [0, -1]: - try: - wandb_login_success = wandb.login(timeout=30) - except wandb.errors.UsageError: # known non-TTY terminal issue - wandb_login_success = False - if not wandb_login_success: - wandb = None -except (ImportError, AssertionError): - wandb = None - - -class Loggers(): - # YOLOv5 Loggers class - def __init__(self, save_dir=None, weights=None, opt=None, hyp=None, logger=None, include=LOGGERS): - self.save_dir = save_dir - self.weights = weights - self.opt = opt - self.hyp = hyp - self.logger = logger # for printing results to console - self.include = include - self.keys = ['train/box_loss', 'train/obj_loss', 'train/cls_loss', # train loss - 'metrics/precision', 'metrics/recall', 'metrics/mAP_0.5', 'metrics/mAP_0.5:0.95', # metrics - 'val/box_loss', 'val/obj_loss', 'val/cls_loss', # val loss - 'x/lr0', 'x/lr1', 'x/lr2'] # params - self.best_keys = ['best/epoch', 'best/precision', 'best/recall', 'best/mAP_0.5', 'best/mAP_0.5:0.95'] - for k in LOGGERS: - setattr(self, k, None) # init empty logger dictionary - self.csv = True # always log to csv - - # Message - if not wandb: - prefix = colorstr('Weights & Biases: ') - s = f"{prefix}run 'pip install wandb' to automatically track and visualize YOLOv5 🚀 runs (RECOMMENDED)" - self.logger.info(emojis(s)) - - # TensorBoard - s = self.save_dir - if 'tb' in self.include and not self.opt.evolve: - prefix = colorstr('TensorBoard: ') - self.logger.info(f"{prefix}Start with 'tensorboard --logdir {s.parent}', view at http://localhost:6006/") - self.tb = SummaryWriter(str(s)) - - # W&B - if wandb and 'wandb' in self.include: - wandb_artifact_resume = isinstance(self.opt.resume, str) and self.opt.resume.startswith('wandb-artifact://') - run_id = torch.load(self.weights).get('wandb_id') if self.opt.resume and not wandb_artifact_resume else None - self.opt.hyp = self.hyp # add hyperparameters - 
self.wandb = WandbLogger(self.opt, run_id) - else: - self.wandb = None - - def on_pretrain_routine_end(self): - # Callback runs on pre-train routine end - paths = self.save_dir.glob('*labels*.jpg') # training labels - if self.wandb: - self.wandb.log({"Labels": [wandb.Image(str(x), caption=x.name) for x in paths]}) - - def on_train_batch_end(self, ni, model, imgs, targets, paths, plots, sync_bn): - # Callback runs on train batch end - if plots: - if ni == 0: - if not sync_bn: # tb.add_graph() --sync known issue https://github.com/ultralytics/yolov5/issues/3754 - with warnings.catch_warnings(): - warnings.simplefilter('ignore') # suppress jit trace warning - self.tb.add_graph(torch.jit.trace(de_parallel(model), imgs[0:1], strict=False), []) - if ni < 3: - f = self.save_dir / f'train_batch{ni}.jpg' # filename - Thread(target=plot_images, args=(imgs, targets, paths, f), daemon=True).start() - if self.wandb and ni == 10: - files = sorted(self.save_dir.glob('train*.jpg')) - self.wandb.log({'Mosaics': [wandb.Image(str(f), caption=f.name) for f in files if f.exists()]}) - - def on_train_epoch_end(self, epoch): - # Callback runs on train epoch end - if self.wandb: - self.wandb.current_epoch = epoch + 1 - - def on_val_image_end(self, pred, predn, path, names, im): - # Callback runs on val image end - if self.wandb: - self.wandb.val_one_image(pred, predn, path, names, im) - - def on_val_end(self): - # Callback runs on val end - if self.wandb: - files = sorted(self.save_dir.glob('val*.jpg')) - self.wandb.log({"Validation": [wandb.Image(str(f), caption=f.name) for f in files]}) - - def on_fit_epoch_end(self, vals, epoch, best_fitness, fi): - # Callback runs at the end of each fit (train+val) epoch - x = {k: v for k, v in zip(self.keys, vals)} # dict - if self.csv: - file = self.save_dir / 'results.csv' - n = len(x) + 1 # number of cols - s = '' if file.exists() else (('%20s,' * n % tuple(['epoch'] + self.keys)).rstrip(',') + '\n') # add header - with open(file, 'a') as f: - f.write(s + ('%20.5g,' * n % tuple([epoch] + vals)).rstrip(',') + '\n') - - if self.tb: - for k, v in x.items(): - self.tb.add_scalar(k, v, epoch) - - if self.wandb: - if best_fitness == fi: - best_results = [epoch] + vals[3:7] - for i, name in enumerate(self.best_keys): - self.wandb.wandb_run.summary[name] = best_results[i] # log best results in the summary - self.wandb.log(x) - self.wandb.end_epoch(best_result=best_fitness == fi) - - def on_model_save(self, last, epoch, final_epoch, best_fitness, fi): - # Callback runs on model save event - if self.wandb: - if ((epoch + 1) % self.opt.save_period == 0 and not final_epoch) and self.opt.save_period != -1: - self.wandb.log_model(last.parent, self.opt, epoch, fi, best_model=best_fitness == fi) - - def on_train_end(self, last, best, plots, epoch, results): - # Callback runs on training end - if plots: - plot_results(file=self.save_dir / 'results.csv') # save results.png - files = ['results.png', 'confusion_matrix.png', *(f'{x}_curve.png' for x in ('F1', 'PR', 'P', 'R'))] - files = [(self.save_dir / f) for f in files if (self.save_dir / f).exists()] # filter - - if self.tb: - import cv2 - for f in files: - self.tb.add_image(f.stem, cv2.imread(str(f))[..., ::-1], epoch, dataformats='HWC') - - if self.wandb: - self.wandb.log({k: v for k, v in zip(self.keys[3:10], results)}) # log best.pt val results - self.wandb.log({"Results": [wandb.Image(str(f), caption=f.name) for f in files]}) - # Calling wandb.log. 
TODO: Refactor this into WandbLogger.log_model - if not self.opt.evolve: - wandb.log_artifact(str(best if best.exists() else last), type='model', - name='run_' + self.wandb.wandb_run.id + '_model', - aliases=['latest', 'best', 'stripped']) - self.wandb.finish_run() - - def on_params_update(self, params): - # Update hyperparams or configs of the experiment - # params: A dict containing {param: value} pairs - if self.wandb: - self.wandb.wandb_run.config.update(params, allow_val_change=True) diff --git a/spaces/nathanTQ/ChatDev/camel/human.py b/spaces/nathanTQ/ChatDev/camel/human.py deleted file mode 100644 index 07321e35edd8e93621ccf4a5996a3d6c10cecaa1..0000000000000000000000000000000000000000 --- a/spaces/nathanTQ/ChatDev/camel/human.py +++ /dev/null @@ -1,129 +0,0 @@ -# =========== Copyright 2023 @ CAMEL-AI.org. All Rights Reserved. =========== -# Licensed under the Apache License, Version 2.0 (the “License”); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an “AS IS” BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -# =========== Copyright 2023 @ CAMEL-AI.org. All Rights Reserved. =========== -from typing import Any, Dict, Sequence - -from colorama import Fore - -from camel.messages import ChatMessage -from camel.utils import print_text_animated - - -class Human: - r"""A class representing a human user. - - Args: - name (str): The name of the human user. - (default: :obj:`"Kill Switch Engineer"`). - logger_color (Any): The color of the menu options displayed to the - user. (default: :obj:`Fore.MAGENTA`) - - Attributes: - name (str): The name of the human user. - logger_color (Any): The color of the menu options displayed to the - user. - input_button (str): The text displayed for the input button. - kill_button (str): The text displayed for the kill button. - options_dict (Dict[str, str]): A dictionary containing the options - displayed to the user. - """ - - def __init__(self, name: str = "Kill Switch Engineer", - logger_color: Any = Fore.MAGENTA) -> None: - self.name = name - self.logger_color = logger_color - self.input_button = f"Input by {self.name}." - self.kill_button = "Stop!!!" - self.options_dict: Dict[str, str] = dict() - - def display_options(self, messages: Sequence[ChatMessage]) -> None: - r"""Displays the options to the user. - - Args: - messages (Sequence[ChatMessage]): A list of `ChatMessage` objects. - - Returns: - None - """ - options = [message.content for message in messages] - options.append(self.input_button) - options.append(self.kill_button) - print_text_animated( - self.logger_color + "\n> Proposals from " - f"{messages[0].role_name} ({messages[0].role_type}). " - "Please choose an option:\n") - for index, option in enumerate(options): - print_text_animated( - self.logger_color + - f"\x1b[3mOption {index + 1}:\n{option}\x1b[0m\n") - self.options_dict[str(index + 1)] = option - - def get_input(self) -> str: - r"""Gets the input from the user. - - Returns: - str: The user's input. 
- """ - while True: - human_input = input( - self.logger_color + - f"Please enter your choice ([1-{len(self.options_dict)}]): ") - print("\n") - if human_input in self.options_dict: - break - print_text_animated(self.logger_color + - "\n> Invalid choice. Please try again.\n") - - return human_input - - def parse_input(self, human_input: str, - meta_chat_message: ChatMessage) -> ChatMessage: - r"""Parses the user's input and returns a `ChatMessage` object. - - Args: - human_input (str): The user's input. - meta_chat_message (ChatMessage): A `ChatMessage` object. - - Returns: - ChatMessage: A `ChatMessage` object. - """ - if self.options_dict[human_input] == self.input_button: - meta_chat_message.content = input(self.logger_color + - "Please enter your message: ") - return meta_chat_message - elif self.options_dict[human_input] == self.kill_button: - exit(self.logger_color + f"Killed by {self.name}.") - else: - meta_chat_message.content = self.options_dict[human_input] - return meta_chat_message - - def step(self, messages: Sequence[ChatMessage]) -> ChatMessage: - r"""Performs one step of the conversation by displaying options to the - user, getting their input, and parsing their choice. - - Args: - messages (Sequence[ChatMessage]): A list of ChatMessage objects. - - Returns: - ChatMessage: A `ChatMessage` object representing the user's choice. - """ - meta_chat_message = ChatMessage( - role_name=messages[0].role_name, - role_type=messages[0].role_type, - meta_dict=messages[0].meta_dict, - role=messages[0].role, - content="", - ) - self.display_options(messages) - human_input = self.get_input() - return self.parse_input(human_input, meta_chat_message) diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Kane Chronicles Graphic Novel Pdf Download.md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Kane Chronicles Graphic Novel Pdf Download.md deleted file mode 100644 index c0878065d6d575af199b3a7641c64cd1f02cef53..0000000000000000000000000000000000000000 --- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Kane Chronicles Graphic Novel Pdf Download.md +++ /dev/null @@ -1,44 +0,0 @@ - -

        How to Download the Kane Chronicles Graphic Novels for Free

        -

        If you are a fan of Rick Riordan's fantasy books, you might be interested in reading the graphic novel adaptations of his popular series, The Kane Chronicles. The Kane Chronicles follows the adventures of Carter and Sadie Kane, two siblings who discover they are descendants of ancient Egyptian magicians. They have to face gods, monsters, and evil forces that threaten to destroy the world.

        -

        The graphic novels are illustrated by Orpheus Collar, who brings Riordan's characters and settings to life with stunning artwork. There are three graphic novels in the series, based on the original books: The Red Pyramid, The Throne of Fire, and The Serpent's Shadow.

        -

        Kane Chronicles Graphic Novel Pdf Download


Download File: https://urlcod.com/2uIbON



        -

        But how can you get your hands on these graphic novels without spending a fortune? Well, there is a way to download them for free from the internet. Here are the steps you need to follow:

        -
          -
        1. Go to https://archive.org/details/kane-chronicles, which is a website that hosts digital copies of books and other media.
        2. -
        3. Scroll down until you see the list of files with the names of the graphic novels. For example, "The Red Pyramid - The Graphic Novel.pdf".
        4. -
        5. Click on the file name that you want to download. A new page will open with a preview of the graphic novel.
        6. -
        7. On the right side of the page, you will see a menu with different options to download the file. Choose the one that suits your device and preference. For example, "PDF" or "EPUB".
        8. -
        9. Wait for the download to finish and enjoy reading your graphic novel!
        10. -
        -

        That's it! You have successfully downloaded the Kane Chronicles graphic novels for free. However, please note that this method is not legal or ethical, as it violates the copyright of the author and the publisher. Therefore, we recommend that you use this method only for personal use and not for distribution or commercial purposes.

        -

        If you want to support Rick Riordan and Orpheus Collar, you can buy their books from online or physical stores. You can also check out their other works, such as Percy Jackson and the Olympians, The Heroes of Olympus, Magnus Chase and the Gods of Asgard, and The Trials of Apollo.

        -

        We hope you enjoyed this article and found it helpful. If you have any questions or feedback, please leave a comment below. Happy reading!

        -

        - -

        What are the benefits of reading graphic novels?

        -

        Graphic novels are a form of storytelling that combines words and images. They are similar to comics, but usually have longer and more complex plots. Graphic novels can be fiction or non-fiction, and cover a wide range of genres and topics.

        -

        Reading graphic novels can have many benefits for readers of all ages and backgrounds. Some of these benefits are:

        -
          -
        • Graphic novels can improve your visual literacy skills, which are the ability to interpret and create meaning from images. Visual literacy is important in today's world, where we are constantly exposed to visual media and information.
        • -
        • Graphic novels can enhance your reading comprehension skills, which are the ability to understand and analyze texts. Graphic novels use both words and images to convey the story, which can help you grasp the context, tone, mood, and emotions of the characters and situations.
        • -
        • Graphic novels can increase your vocabulary and language skills, which are the ability to use and understand words and expressions. Graphic novels often use rich and varied language, including slang, idioms, metaphors, and figurative speech. They can also expose you to different cultures, perspectives, and dialects.
        • -
        • Graphic novels can spark your creativity and imagination, which are the ability to generate and express original ideas. Graphic novels can inspire you to create your own stories, characters, and worlds. They can also challenge you to think critically and creatively about the themes, messages, and issues that they explore.
        • -
• Graphic novels can foster your love of reading and learning, that is, the ability to enjoy and pursue knowledge and information. Graphic novels can be fun, engaging, and entertaining. They can also introduce you to new topics, genres, and authors that you might not otherwise encounter.
        • -
        -

        As you can see, graphic novels are more than just pictures and words. They are a powerful and versatile medium that can enrich your reading experience and skills.

        - -

        What are some tips for reading graphic novels?

        -

        If you are new to graphic novels or want to improve your reading skills, here are some tips that you can follow:

        -
          -
        • Choose a graphic novel that interests you. There are many graphic novels available for different tastes and preferences. You can browse online or physical catalogs, read reviews and recommendations, or ask for suggestions from friends or librarians.
        • -
        • Read the graphic novel from left to right and top to bottom. This is the standard way of reading graphic novels in English. However, some graphic novels may have a different layout or direction. In that case, follow the cues from the book or the author.
        • -
        • Pay attention to both the words and the images. The words and the images work together to tell the story. Don't skip or ignore either one of them. Try to understand how they complement each other and convey meaning.
        • -
        • Look at the details in the images. The images in graphic novels can have many details that add depth and nuance to the story. Look at the colors, shapes, lines, expressions, gestures, backgrounds, symbols, etc. Think about what they imply or suggest.
        • -
        • Notice the transitions between panels. Panels are the boxes or frames that contain the images in graphic novels. Transitions are the changes or movements between panels. Transitions can indicate time, space, action, mood, etc. Notice how they affect the pace and flow of the story.
        • -
        • Read aloud or silently according to your preference. Some people prefer to read graphic novels aloud to hear the dialogue and narration. Others prefer to read silently to focus on the images and emotions. Do what works best for you.
        • -
        • Reread or review if necessary. Graphic novels can have complex plots and characters that require multiple readings or reviews to fully appreciate. Don't be afraid to go back or forward if you miss something or want to clarify something.
        • -
        -

        By following these tips, you can enhance your enjoyment and understanding of graphic novels.

        -
        -
        \ No newline at end of file diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/ObjectARX 2010 (x64) Keygen Keygen.md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/ObjectARX 2010 (x64) Keygen Keygen.md deleted file mode 100644 index c9625fed34b3a31b089ead8c534d7dfd1fa5b8ae..0000000000000000000000000000000000000000 --- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/ObjectARX 2010 (x64) Keygen Keygen.md +++ /dev/null @@ -1,53 +0,0 @@ -
        -

        How to Use ObjectARX 2010 (x64) Keygen Keygen to Customize AutoCAD

        -

ObjectARX is an API for customizing and extending AutoCAD, the popular CAD software from Autodesk. ObjectARX 2010 (x64) Keygen Keygen is a tool that can generate activation codes for the ObjectARX 2010 (x64) SDK, which is compatible with the 64-bit version of AutoCAD 2010.

        -

        ObjectARX 2010 (x64) Keygen Keygen


        DOWNLOAD ⇒⇒⇒ https://urlcod.com/2uIcjd



        -

        In this article, we will show you how to use ObjectARX 2010 (x64) Keygen Keygen to customize AutoCAD and create your own applications and plugins. We will assume that you have some basic knowledge of C++ programming and AutoCAD usage.

        -

        Step 1: Download and Install ObjectARX 2010 (x64) SDK

        -

        The first step is to download and install ObjectARX 2010 (x64) SDK from the Autodesk Developer Network website. You will need to register and accept the license agreement before downloading. The SDK consists of C++ headers and libraries that can be used to build Windows DLLs that can be loaded into AutoCAD.

        -

        You will also need to install Microsoft Visual Studio 2008 or later, which is the compiler required by ObjectARX 2010 (x64) SDK. You can use either the Professional or Express edition of Visual Studio.

        -

        Step 2: Generate Activation Code with ObjectARX 2010 (x64) Keygen Keygen

        -

        The next step is to generate an activation code with ObjectARX 2010 (x64) Keygen Keygen. This is a tool that can create a valid serial number and product key for ObjectARX 2010 (x64) SDK. You can download it from various sources on the internet, but be careful of malware and viruses.

        -

        Once you have downloaded ObjectARX 2010 (x64) Keygen Keygen, run it and enter your name and email address. Then click on Generate button and copy the serial number and product key that appear on the screen. You will need these codes to activate ObjectARX 2010 (x64) SDK later.

        -

        Step 3: Activate ObjectARX 2010 (x64) SDK

        -

        The final step is to activate ObjectARX 2010 (x64) SDK with the codes generated by ObjectARX 2010 (x64) Keygen Keygen. To do this, open Visual Studio and create a new project using the ObjectARX Wizard template. You can find it under Visual C++ -> Autodesk -> ObjectARX Application.

        -

        -

        Follow the wizard steps and enter the project name, location, description, etc. When you reach the Activation page, enter the serial number and product key that you copied from ObjectARX 2010 (x64) Keygen Keygen. Click on Activate button and wait for the confirmation message.

        -

        Congratulations! You have successfully activated ObjectARX 2010 (x64) SDK and you are ready to start developing your own applications and plugins for AutoCAD.

        -

        Conclusion

        -

        In this article, we have shown you how to use ObjectARX 2010 (x64) Keygen Keygen to customize AutoCAD and create your own applications and plugins. We hope you found it useful and informative. If you have any questions or comments, please feel free to contact us.

        - -

        Example: Creating a Hello World Plugin for AutoCAD

        -

        To demonstrate how to use ObjectARX 2010 (x64) SDK to create a plugin for AutoCAD, we will create a simple Hello World plugin that displays a message box when loaded. The plugin will also register a command called HELLO that displays the same message box when executed.

        -

        The steps are as follows:

        -
          -
        1. Create a new project using the ObjectARX Wizard template as described in Step 3 above. Name the project HelloWorld and select ObjectARX/DBX Project as the project type.
        2. -
        3. In the Project Settings page, select ARX Application as the module type and check the Load on AutoCAD Startup option. This will make the plugin load automatically when AutoCAD starts.
        4. -
        5. In the ObjectARX/DBX Classes page, click on Add button and select AcRxArxApp as the base class. This will create a class called CHelloWorldApp that inherits from AcRxArxApp and represents the main entry point of the plugin.
        6. -
        7. Click on Finish button to create the project. You should see two files in the Solution Explorer: HelloWorld.cpp and HelloWorld.h. These are the source code files for the plugin.
        8. -
        9. Open HelloWorld.cpp and locate the following function: CHelloWorldApp::On_kInitAppMsg
        10. -
        11. This function is called when the plugin is loaded by AutoCAD. We will add some code here to display a message box and register our command. Add the following lines of code inside the function:
        12. -
          // Display a message box
          -AfxMessageBox(_T("Hello World!"));
          -
          -// Register a command called HELLO
          -acedRegCmds->addCommand(_T("HELLO_GROUP"), _T("HELLO"), _T("HELLO"), ACRX_CMD_MODAL, helloCommand);
          -
          -
        13. The first line uses the AfxMessageBox function to display a message box with the text "Hello World!". The second line uses the acedRegCmds global pointer to access the command registry and add a new command called HELLO. The parameters are: the group name, the global name, the local name, the command flags, and the command function.
        14. -
        15. We need to define the helloCommand function that will be executed when the user types HELLO in AutoCAD. Add the following code at the end of HelloWorld.cpp file:
        16. -
          // Command function for HELLO
-void helloCommand()
-{
-    // Display a message box
-    AfxMessageBox(_T("Hello World!"));
-}
          -
          -
          -
        17. This function simply displays another message box with the same text as before.
        18. -
        19. Save and build the project. You should see a file called HelloWorld.arx in the Debug folder of your project directory. This is the plugin file that you can load into AutoCAD.
        20. -
        21. Start AutoCAD 2010 (64-bit) and type NETLOAD in the command line. Browse to your project directory and select HelloWorld.arx file. Click on Open button to load the plugin.
        22. -
        23. You should see a message box saying "Hello World!" appear on your screen. This means that your plugin has been loaded successfully.
        24. -
        25. Type HELLO in the command line and press Enter. You should see another message box saying "Hello World!" appear on your screen. This means that your command has been executed successfully.
        26. -
        -

        Congratulations! You have created your first plugin for AutoCAD using ObjectARX 2010 (x64) SDK and ObjectARX 2010 (x64) Keygen Keygen.
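For reference, the fragments above can be combined into a single HelloWorld.cpp. The sketch below is only illustrative: the precompiled header name, the On_kUnloadAppMsg override, and the IMPLEMENT_ARX_ENTRYPOINT macro are assumed from a typical ObjectARX wizard project and may differ slightly from the files your wizard generates; only the On_kInitAppMsg body and the helloCommand function come directly from the steps above.

  #include "StdAfx.h"   // assumed wizard-generated precompiled header (pulls in MFC and the ObjectARX headers)

  // Command function for HELLO (same function as in the steps above)
  void helloCommand()
  {
      // Display a message box
      AfxMessageBox(_T("Hello World!"));
  }

  // Entry-point class generated by the ObjectARX Wizard
  class CHelloWorldApp : public AcRxArxApp
  {
  public:
      CHelloWorldApp() : AcRxArxApp() {}

      virtual AcRx::AppRetCode On_kInitAppMsg(void *pkt)
      {
          // Let the base class do its own initialization first
          AcRx::AppRetCode ret = AcRxArxApp::On_kInitAppMsg(pkt);

          // Display a message box when the plugin is loaded
          AfxMessageBox(_T("Hello World!"));

          // Register the HELLO command: group name, global name, local name, flags, handler
          acedRegCmds->addCommand(_T("HELLO_GROUP"), _T("HELLO"), _T("HELLO"),
                                  ACRX_CMD_MODAL, helloCommand);
          return ret;
      }

      virtual AcRx::AppRetCode On_kUnloadAppMsg(void *pkt)
      {
          // Remove the command group so the HELLO command does not linger after unload
          acedRegCmds->removeGroup(_T("HELLO_GROUP"));
          return AcRxArxApp::On_kUnloadAppMsg(pkt);
      }
  };

  // Assumed entry-point macro from the wizard template
  IMPLEMENT_ARX_ENTRYPOINT(CHelloWorldApp)

Removing the command group in On_kUnloadAppMsg is not part of the steps above, but it keeps AutoCAD from holding a stale HELLO entry after the plugin is unloaded.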

        -
        -
        \ No newline at end of file diff --git a/spaces/ngoctuanai/gpt4/README.md b/spaces/ngoctuanai/gpt4/README.md deleted file mode 100644 index 78d77f6560ec54282b56405d2d4834c60b7b2af1..0000000000000000000000000000000000000000 --- a/spaces/ngoctuanai/gpt4/README.md +++ /dev/null @@ -1,11 +0,0 @@ ---- -title: GPT-4 -emoji: 🤖 -colorFrom: gray -colorTo: blue -sdk: docker -pinned: false -disable_embedding: true -license: mit -app_port: 8080 ---- \ No newline at end of file diff --git a/spaces/nickmuchi/Investor-Education-ChatChain/app.py b/spaces/nickmuchi/Investor-Education-ChatChain/app.py deleted file mode 100644 index 3a4310902fc19bbedc09e2035990fbfe03e9bdf5..0000000000000000000000000000000000000000 --- a/spaces/nickmuchi/Investor-Education-ChatChain/app.py +++ /dev/null @@ -1,122 +0,0 @@ -import os -import streamlit as st - -from langchain.embeddings import HuggingFaceInstructEmbeddings, HuggingFaceEmbeddings -from langchain.vectorstores.faiss import FAISS -from huggingface_hub import snapshot_download - -from langchain.callbacks import StreamlitCallbackHandler -from langchain.agents import OpenAIFunctionsAgent, AgentExecutor -from langchain.agents.agent_toolkits import create_retriever_tool -from langchain.agents.openai_functions_agent.agent_token_buffer_memory import ( - AgentTokenBufferMemory, -) -from langchain.chat_models import ChatOpenAI -from langchain.schema import SystemMessage, AIMessage, HumanMessage -from langchain.prompts import MessagesPlaceholder -from langsmith import Client - -client = Client() - -st.set_page_config( - page_title="Investor Education ChatChain", - page_icon="📖", - layout="wide", - initial_sidebar_state="collapsed", -) - -#Load API Key -api_key = os.environ["OPENAI_API_KEY"] - -#### sidebar section 1 #### - -site_options = {'US': 'vanguard_embeddings_US', - 'AUS': 'vanguard-embeddings'} - -site_options_list = list(site_options.keys()) - -site_radio = st.radio( - "Which Vanguard website location would you want to chat to?", - ('US', 'AUS')) - -@st.cache_data -def load_vectorstore(site): - '''load embeddings and vectorstore''' - - emb = HuggingFaceEmbeddings(model_name="all-mpnet-base-v2") - - vectorstore = FAISS.load_local(site_options[site], emb) - - return vectorstore.as_retriever(search_kwargs={"k": 4}) - - -tool = create_retriever_tool( - load_vectorstore(site_radio), - "search_vaguard_website", - "Searches and returns documents regarding the Vanguard website across US and AUS locations. The websites provide investment related information to the user") - -tools = [tool] -llm = ChatOpenAI(temperature=0, streaming=True, model="gpt-4") -message = SystemMessage( - content=( - "You are a helpful chatbot who is tasked with answering questions about investments using informationn that has been scraped from a website to answer the users question accurately." - "Do not use any information not provided in the website context." - "Unless otherwise explicitly stated, it is probably fair to assume that questions are about the CFA program and materials. " - "If there is any ambiguity, you probably assume they are about that." 
- ) -) - -prompt = OpenAIFunctionsAgent.create_prompt( - system_message=message, - extra_prompt_messages=[MessagesPlaceholder(variable_name="history")], -) -agent = OpenAIFunctionsAgent(llm=llm, tools=tools, prompt=prompt) -agent_executor = AgentExecutor( - agent=agent, - tools=tools, - verbose=True, - return_intermediate_steps=True, -) -memory = AgentTokenBufferMemory(llm=llm) -starter_message = "Ask me anything about information on the Vanguard US/AUS websites!" -if "messages" not in st.session_state or st.sidebar.button("Clear message history"): - st.session_state["messages"] = [AIMessage(content=starter_message)] - - -def send_feedback(run_id, score): - client.create_feedback(run_id, "user_score", score=score) - - -for msg in st.session_state.messages: - if isinstance(msg, AIMessage): - st.chat_message("assistant").write(msg.content) - elif isinstance(msg, HumanMessage): - st.chat_message("user").write(msg.content) - memory.chat_memory.add_message(msg) - - -if prompt := st.chat_input(placeholder=starter_message): - st.chat_message("user").write(prompt) - with st.chat_message("assistant"): - st_callback = StreamlitCallbackHandler(st.container()) - response = agent_executor( - {"input": prompt, "history": st.session_state.messages}, - callbacks=[st_callback], - include_run_info=True, - ) - st.session_state.messages.append(AIMessage(content=response["output"])) - st.write(response["output"]) - memory.save_context({"input": prompt}, response) - st.session_state["messages"] = memory.buffer - run_id = response["__run"].run_id - - col_blank, col_text, col1, col2 = st.columns([10, 2, 1, 1]) - with col_text: - st.text("Feedback:") - - # with col1: - # st.button("👍", on_click=send_feedback, args=(run_id, 1)) - - # with col2: - # st.button("👎", on_click=send_feedback, args=(run_id, 0) - diff --git a/spaces/nikitaPDL2023/assignment4/detectron2/docs/conf.py b/spaces/nikitaPDL2023/assignment4/detectron2/docs/conf.py deleted file mode 100644 index 1fb3e30f97dcc02b497e7c6de6bcc9e47ea94885..0000000000000000000000000000000000000000 --- a/spaces/nikitaPDL2023/assignment4/detectron2/docs/conf.py +++ /dev/null @@ -1,395 +0,0 @@ -# -*- coding: utf-8 -*- -# Copyright (c) Facebook, Inc. and its affiliates. - -# flake8: noqa - -# Configuration file for the Sphinx documentation builder. -# -# This file does only contain a selection of the most common options. For a -# full list see the documentation: -# http://www.sphinx-doc.org/en/master/config - -# -- Path setup -------------------------------------------------------------- - -# If extensions (or modules to document with autodoc) are in another directory, -# add these directories to sys.path here. If the directory is relative to the -# documentation root, use os.path.abspath to make it absolute, like shown here. -# -import os -import sys -from unittest import mock -from sphinx.domains import Domain -from typing import Dict, List, Tuple - -# The theme to use for HTML and HTML Help pages. See the documentation for -# a list of builtin themes. -# -import sphinx_rtd_theme - - -class GithubURLDomain(Domain): - """ - Resolve certain links in markdown files to github source. 
- """ - - name = "githuburl" - ROOT = "https://github.com/facebookresearch/detectron2/blob/main/" - LINKED_DOC = ["tutorials/install", "tutorials/getting_started"] - - def resolve_any_xref(self, env, fromdocname, builder, target, node, contnode): - github_url = None - if not target.endswith("html") and target.startswith("../../"): - url = target.replace("../", "") - github_url = url - if fromdocname in self.LINKED_DOC: - # unresolved links in these docs are all github links - github_url = target - - if github_url is not None: - if github_url.endswith("MODEL_ZOO") or github_url.endswith("README"): - # bug of recommonmark. - # https://github.com/readthedocs/recommonmark/blob/ddd56e7717e9745f11300059e4268e204138a6b1/recommonmark/parser.py#L152-L155 - github_url += ".md" - print("Ref {} resolved to github:{}".format(target, github_url)) - contnode["refuri"] = self.ROOT + github_url - return [("githuburl:any", contnode)] - else: - return [] - - -# to support markdown -from recommonmark.parser import CommonMarkParser - -sys.path.insert(0, os.path.abspath("../")) -os.environ["_DOC_BUILDING"] = "True" -DEPLOY = os.environ.get("READTHEDOCS") == "True" - - -# -- Project information ----------------------------------------------------- - -# fmt: off -try: - import torch # noqa -except ImportError: - for m in [ - "torch", "torchvision", "torch.nn", "torch.nn.parallel", "torch.distributed", "torch.multiprocessing", "torch.autograd", - "torch.autograd.function", "torch.nn.modules", "torch.nn.modules.utils", "torch.utils", "torch.utils.data", "torch.onnx", - "torchvision", "torchvision.ops", - ]: - sys.modules[m] = mock.Mock(name=m) - sys.modules['torch'].__version__ = "1.7" # fake version - HAS_TORCH = False -else: - try: - torch.ops.detectron2 = mock.Mock(name="torch.ops.detectron2") - except: - pass - HAS_TORCH = True - -for m in [ - "cv2", "scipy", "portalocker", "detectron2._C", - "pycocotools", "pycocotools.mask", "pycocotools.coco", "pycocotools.cocoeval", - "google", "google.protobuf", "google.protobuf.internal", "onnx", - "caffe2", "caffe2.proto", "caffe2.python", "caffe2.python.utils", "caffe2.python.onnx", "caffe2.python.onnx.backend", -]: - sys.modules[m] = mock.Mock(name=m) -# fmt: on -sys.modules["cv2"].__version__ = "3.4" - -import detectron2 # isort: skip - -if HAS_TORCH: - from detectron2.utils.env import fixup_module_metadata - - fixup_module_metadata("torch.nn", torch.nn.__dict__) - fixup_module_metadata("torch.utils.data", torch.utils.data.__dict__) - - -project = "detectron2" -copyright = "2019-2020, detectron2 contributors" -author = "detectron2 contributors" - -# The short X.Y version -version = detectron2.__version__ -# The full version, including alpha/beta/rc tags -release = version - - -# -- General configuration --------------------------------------------------- - -# If your documentation needs a minimal Sphinx version, state it here. -# -needs_sphinx = "3.0" - -# Add any Sphinx extension module names here, as strings. They can be -# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom -# ones. 
-extensions = [ - "recommonmark", - "sphinx.ext.autodoc", - "sphinx.ext.napoleon", - "sphinx.ext.intersphinx", - "sphinx.ext.todo", - "sphinx.ext.coverage", - "sphinx.ext.mathjax", - "sphinx.ext.viewcode", - "sphinx.ext.githubpages", -] - -# -- Configurations for plugins ------------ -napoleon_google_docstring = True -napoleon_include_init_with_doc = True -napoleon_include_special_with_doc = True -napoleon_numpy_docstring = False -napoleon_use_rtype = False -autodoc_inherit_docstrings = False -autodoc_member_order = "bysource" - -if DEPLOY: - intersphinx_timeout = 10 -else: - # skip this when building locally - intersphinx_timeout = 0.5 -intersphinx_mapping = { - "python": ("https://docs.python.org/3.7", None), - "numpy": ("https://docs.scipy.org/doc/numpy/", None), - "torch": ("https://pytorch.org/docs/master/", None), -} -# ------------------------- - - -# Add any paths that contain templates here, relative to this directory. -templates_path = ["_templates"] - -source_suffix = [".rst", ".md"] - -# The master toctree document. -master_doc = "index" - -# The language for content autogenerated by Sphinx. Refer to documentation -# for a list of supported languages. -# -# This is also used if you do content translation via gettext catalogs. -# Usually you set "language" from the command line for these cases. -language = None - -# List of patterns, relative to source directory, that match files and -# directories to ignore when looking for source files. -# This pattern also affects html_static_path and html_extra_path. -exclude_patterns = ["_build", "Thumbs.db", ".DS_Store", "build", "README.md", "tutorials/README.md"] - -# The name of the Pygments (syntax highlighting) style to use. -pygments_style = "sphinx" - - -# -- Options for HTML output ------------------------------------------------- - -html_theme = "sphinx_rtd_theme" -html_theme_path = [sphinx_rtd_theme.get_html_theme_path()] - -# Theme options are theme-specific and customize the look and feel of a theme -# further. For a list of options available for each theme, see the -# documentation. -# -# html_theme_options = {} - -# Add any paths that contain custom static files (such as style sheets) here, -# relative to this directory. They are copied after the builtin static files, -# so a file named "default.css" will overwrite the builtin "default.css". -html_static_path = ["_static"] -html_css_files = ["css/custom.css"] - -# Custom sidebar templates, must be a dictionary that maps document names -# to template names. -# -# The default sidebars (for documents that don't match any pattern) are -# defined by theme itself. Builtin themes are using these templates by -# default: ``['localtoc.html', 'relations.html', 'sourcelink.html', -# 'searchbox.html']``. -# -# html_sidebars = {} - - -# -- Options for HTMLHelp output --------------------------------------------- - -# Output file base name for HTML help builder. -htmlhelp_basename = "detectron2doc" - - -# -- Options for LaTeX output ------------------------------------------------ - -latex_elements = { - # The paper size ('letterpaper' or 'a4paper'). - # - # 'papersize': 'letterpaper', - # The font size ('10pt', '11pt' or '12pt'). - # - # 'pointsize': '10pt', - # Additional stuff for the LaTeX preamble. - # - # 'preamble': '', - # Latex figure (float) alignment - # - # 'figure_align': 'htbp', -} - -# Grouping the document tree into LaTeX files. List of tuples -# (source start file, target name, title, -# author, documentclass [howto, manual, or own class]). 
-latex_documents = [ - (master_doc, "detectron2.tex", "detectron2 Documentation", "detectron2 contributors", "manual") -] - - -# -- Options for manual page output ------------------------------------------ - -# One entry per manual page. List of tuples -# (source start file, name, description, authors, manual section). -man_pages = [(master_doc, "detectron2", "detectron2 Documentation", [author], 1)] - - -# -- Options for Texinfo output ---------------------------------------------- - -# Grouping the document tree into Texinfo files. List of tuples -# (source start file, target name, title, author, -# dir menu entry, description, category) -texinfo_documents = [ - ( - master_doc, - "detectron2", - "detectron2 Documentation", - author, - "detectron2", - "One line description of project.", - "Miscellaneous", - ) -] - - -# -- Options for todo extension ---------------------------------------------- - -# If true, `todo` and `todoList` produce output, else they produce nothing. -todo_include_todos = True - - -def autodoc_skip_member(app, what, name, obj, skip, options): - # we hide something deliberately - if getattr(obj, "__HIDE_SPHINX_DOC__", False): - return True - - # Hide some that are deprecated or not intended to be used - HIDDEN = { - "ResNetBlockBase", - "GroupedBatchSampler", - "build_transform_gen", - "apply_transform_gens", - "TransformGen", - "apply_augmentations", - "StandardAugInput", - "build_batch_data_loader", - "draw_panoptic_seg_predictions", - "WarmupCosineLR", - "WarmupMultiStepLR", - "downgrade_config", - "upgrade_config", - "add_export_config", - } - try: - if name in HIDDEN or ( - hasattr(obj, "__doc__") and obj.__doc__.lower().strip().startswith("deprecated") - ): - print("Skipping deprecated object: {}".format(name)) - return True - except: - pass - return skip - - -_PAPER_DATA = { - "resnet": ("1512.03385", "Deep Residual Learning for Image Recognition"), - "fpn": ("1612.03144", "Feature Pyramid Networks for Object Detection"), - "mask r-cnn": ("1703.06870", "Mask R-CNN"), - "faster r-cnn": ( - "1506.01497", - "Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks", - ), - "deformconv": ("1703.06211", "Deformable Convolutional Networks"), - "deformconv2": ("1811.11168", "Deformable ConvNets v2: More Deformable, Better Results"), - "panopticfpn": ("1901.02446", "Panoptic Feature Pyramid Networks"), - "retinanet": ("1708.02002", "Focal Loss for Dense Object Detection"), - "cascade r-cnn": ("1712.00726", "Cascade R-CNN: Delving into High Quality Object Detection"), - "lvis": ("1908.03195", "LVIS: A Dataset for Large Vocabulary Instance Segmentation"), - "rrpn": ("1703.01086", "Arbitrary-Oriented Scene Text Detection via Rotation Proposals"), - "imagenet in 1h": ("1706.02677", "Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour"), - "xception": ("1610.02357", "Xception: Deep Learning with Depthwise Separable Convolutions"), - "mobilenet": ( - "1704.04861", - "MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications", - ), - "deeplabv3+": ( - "1802.02611", - "Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation", - ), - "dds": ("2003.13678", "Designing Network Design Spaces"), - "scaling": ("2103.06877", "Fast and Accurate Model Scaling"), - "fcos": ("2006.09214", "FCOS: A Simple and Strong Anchor-free Object Detector"), - "rethinking-batchnorm": ("2105.07576", 'Rethinking "Batch" in BatchNorm'), - "vitdet": ("2203.16527", "Exploring Plain Vision Transformer Backbones for Object 
Detection"), - "mvitv2": ( - "2112.01526", - "MViTv2: Improved Multiscale Vision Transformers for Classification and Detection", - ), - "swin": ( - "2103.14030", - "Swin Transformer: Hierarchical Vision Transformer using Shifted Windows", - ), - "omni3d": ( - "2207.10660", - "Omni3D: A Large Benchmark and Model for 3D Object Detection in the Wild", - ), -} - - -def paper_ref_role( - typ: str, - rawtext: str, - text: str, - lineno: int, - inliner, - options: Dict = {}, - content: List[str] = [], -): - """ - Parse :paper:`xxx`. Similar to the "extlinks" sphinx extension. - """ - from docutils import nodes, utils - from sphinx.util.nodes import split_explicit_title - - text = utils.unescape(text) - has_explicit_title, title, link = split_explicit_title(text) - link = link.lower() - if link not in _PAPER_DATA: - inliner.reporter.warning("Cannot find paper " + link) - paper_url, paper_title = "#", link - else: - paper_url, paper_title = _PAPER_DATA[link] - if "/" not in paper_url: - paper_url = "https://arxiv.org/abs/" + paper_url - if not has_explicit_title: - title = paper_title - pnode = nodes.reference(title, title, internal=False, refuri=paper_url) - return [pnode], [] - - -def setup(app): - from recommonmark.transform import AutoStructify - - app.add_domain(GithubURLDomain) - app.connect("autodoc-skip-member", autodoc_skip_member) - app.add_role("paper", paper_ref_role) - app.add_config_value( - "recommonmark_config", - {"enable_math": True, "enable_inline_math": True, "enable_eval_rst": True}, - True, - ) - app.add_transform(AutoStructify) diff --git a/spaces/ntt123/vietnam-male-voice-wavegru-tts/sparse_matmul/numerics/fasttranscendentals_test.cc b/spaces/ntt123/vietnam-male-voice-wavegru-tts/sparse_matmul/numerics/fasttranscendentals_test.cc deleted file mode 100644 index 004241e55824c66cbecfdd645b70efa2ba237638..0000000000000000000000000000000000000000 --- a/spaces/ntt123/vietnam-male-voice-wavegru-tts/sparse_matmul/numerics/fasttranscendentals_test.cc +++ /dev/null @@ -1,665 +0,0 @@ -// Copyright 2021 Google LLC -// -// Licensed under the Apache License, Version 2.0 (the "License"); -// you may not use this file except in compliance with the License. -// You may obtain a copy of the License at -// -// http://www.apache.org/licenses/LICENSE-2.0 -// -// Unless required by applicable law or agreed to in writing, software -// distributed under the License is distributed on an "AS IS" BASIS, -// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -// See the License for the specific language governing permissions and -// limitations under the License. 
- -#if defined __aarch64__ -#include -#endif -#if defined __AVX__ || defined __AVX2__ -#include -#endif - -#include - -#include -#include -#include -#include - -#include "gtest/gtest.h" -#include "sparse_matmul/numerics/fast_transcendentals.h" -#include "sparse_matmul/numerics/test_utils.h" - -namespace csrblocksparse { - -const float kExpFixedRelTolerance = .084f; - -#ifdef SIGMOID_AS_TANH -#if defined FAST_TRANSCENDENTALS && defined ACCURATE_TRANSCENDENTAL_APPROX -const float kSigmoidRelTolerance = .093f; // 9.3% relative -const float kSigmoidAbsTolerance = .0005f; -const float kSigmoidFixedRelTolerance = .093f; -const float kSigmoidFixedAbsTolerance = .0005f; -#elif defined FAST_TRANSCENDENTALS -const float kSigmoidRelTolerance = .09f; // 9.0% relative -const float kSigmoidAbsTolerance = .003f; -const float kSigmoidFixedRelTolerance = .09f; -const float kSigmoidFixedAbsTolerance = .003f; -#endif -#elif defined FAST_TRANSCENDENTALS and defined ACCURATE_TRANSCENDENTAL_APPROX -const float kSigmoidRelTolerance = .102f; // 10.2% relative -const float kSigmoidAbsTolerance = .0003f; -const float kSigmoidFixedRelTolerance = .102f; -const float kSigmoidFixedAbsTolerance = .0003f; -#elif defined FAST_TRANSCENDENTALS -const float kSigmoidRelTolerance = .09f; // 9.0% relative -const float kSigmoidAbsTolerance = .006f; -const float kSigmoidFixedRelTolerance = .09f; -const float kSigmoidFixedAbsTolerance = .006f; -#else -const float kSigmoidRelTolerance = .0001f; -const float kSigmoidAbsTolerance = 1e-5f; -const float kSigmoidFixedRelTolerance = .001f; -const float kSigmoidFixedAbsTolerance = .001f; -#endif - -#if (defined FAST_TRANSCENDENTALS && defined ACCURATE_TRANSCENDENTAL_APPROX || \ - defined FASTER_TRANSCENDENTALS) -const float kExpRelTolerance = .03f; // 3% relative -const float kTanhRelTolerance = .006f; // .6% relative -const float kTanhAbsTolerance = .0003f; -#elif defined FAST_TRANSCENDENTALS -const float kExpRelTolerance = .03f; // 3% relative -const float kTanhRelTolerance = .091f; // .91% relative -const float kTanhAbsTolerance = .00525f; -#else -const float kExpRelTolerance = .0001f; -const float kTanhRelTolerance = .0001f; -const float kTanhAbsTolerance = 1e-5f; -#endif - -constexpr float kQuarticFloatExpRelTolerance = 8e-6f; -constexpr float kQuarticFloatExpTolerance = 9e-6f; -constexpr float kQuarticExpRelTolerance = 3e-5f; -constexpr float kQuarticExpTolerance = 6e-5f; -constexpr float kCubicExpRelTolerance = 6e-4f; -constexpr float kCubicExpTolerance = 2e-3f; -constexpr float kQuarticFloatTanhRelTolerance = 3e-5f; -constexpr float kQuarticFloatTanhTolerance = 3e-6f; -constexpr float kCubicTanhRelTolerance = 3e-3f; -constexpr float kCubicTanhTolerance = 3e-4f; -constexpr float kQuarticSigmoidRelTolerance = 3e-5f; -constexpr float kQuarticSigmoidTolerance = 7e-6f; -constexpr float kCubicSigmoidRelTolerance = 6e-4f; -constexpr float kCubicSigmoidTolerance = 2e-4f; -#ifdef __AVX2__ -constexpr float kQuarticTanhRelTolerance = 1e-4f; -constexpr float kQuarticTanhTolerance = 2e-5f; -constexpr float kQuarticFloatSigmoidRelTolerance = 4e-6f; -constexpr float kQuarticFloatSigmoidTolerance = 1e-6f; -#endif // __AVX2__ - -TEST(Transcendentals, Exp) { - // 132 - 127 = 5, we check between -63.99... and 63.99... - const int maxExponent = 132; - const int minExponent = 0; - float max_error = 0.f; - constexpr int kExponentBits = 7; - for (int s = 0; s < 2; ++s) { - for (int e = minExponent; e < maxExponent; ++e) { - // Don't check every mantissa for speed reasons. 
- for (int m = 0; m < (1 << 23); m += (1 << 10)) { - uint32_t int_val = s << 31 | e << 23 | m; - float x; - memcpy(&x, &int_val, sizeof(float)); - - float exact_exp = expf(x); - float approx_exp = csrblocksparse::fast_exp(x); - float approx_exp_fixed = csrblocksparse::fast_exp( - csrblocksparse::fixed32(x)); - - float rel_diff = RelDiff(exact_exp, approx_exp); - float rel_diff_fixed = RelDiff(exact_exp, approx_exp_fixed); - max_error = std::max(max_error, rel_diff); - EXPECT_LT(rel_diff, kExpRelTolerance) - << exact_exp << " " << approx_exp << " " << x; - EXPECT_LT(rel_diff_fixed, kExpRelTolerance) - << exact_exp << " " << approx_exp << " " << x; - } - } - } -} - -TEST(Transcendentals, FixedExp) { - const int maxExponent = 132; - const int minExponent = 120; - float max_error = 0.f; - float max_abs_error = 0.f; - for (int s = 0; s < 2; ++s) { - for (int e = minExponent; e < maxExponent; ++e) { - // Don't check every mantissa for speed reasons. - for (int m = 0; m < (1 << 23); m += (1 << 10)) { - uint32_t int_val = s << 31 | e << 23 | m; - float x; - memcpy(&x, &int_val, sizeof(float)); - - float exact_exp = expf(x); - float approx_exp = - csrblocksparse::fast_exp_fixed(csrblocksparse::fixed32<16>(x)); - - float rel_diff = RelDiff(exact_exp, approx_exp); - float abs_diff = std::abs(exact_exp - approx_exp); - max_error = std::max(max_error, rel_diff); - max_abs_error = std::max(max_abs_error, abs_diff); - EXPECT_LT(rel_diff, kExpFixedRelTolerance) - << exact_exp << " " << approx_exp << " " << x; - } - } - } - LOG(INFO) << "Max relative exp error = " << max_error - << ", abs=" << max_abs_error; -} - -template -void TestExp(float abs_tolerance, float rel_tolerance) { - constexpr int kMaxInput = 80 << 16; - constexpr int kMinInput = -(80 << 16); - constexpr int kExponentBits = 15; - float max_error = 0.f; - float max_abs_error = 0.f; - for (int i = kMinInput; i <= kMaxInput; ++i) { - csrblocksparse::fixed32 fixed_int(i); - float x = static_cast(fixed_int); - float exact_exp = expf(x); - float approx_exp = fixed32_exp(fixed_int); - float diff = exact_exp - approx_exp; - float abs_diff = std::abs(diff); - float rel_diff = RelDiff(exact_exp, approx_exp); - max_error = std::max(max_error, rel_diff); - if (x <= 1.0f) { - ASSERT_LT(abs_diff, abs_tolerance) - << "x=" << x << ", target=" << exact_exp << ", aprx=" << approx_exp; - max_abs_error = std::max(max_abs_error, abs_diff); - } - ASSERT_LT(rel_diff, rel_tolerance) - << "x=" << x << ", target=" << exact_exp << ", aprx=" << approx_exp; - } - LOG(INFO) << "Max relative error = " << max_error - << ", abs=" << max_abs_error; -} - -TEST(Transcendentals, QuarticExp) { - TestExp(kQuarticFloatExpTolerance, - kQuarticFloatExpRelTolerance); -} - -TEST(Transcendentals, CubicExp) { - TestExp(kCubicExpTolerance, kCubicExpRelTolerance); -} - -template -void TestTanh(float abs_tolerance, float rel_tolerance) { - constexpr int kMaxInput = (40 << 16); - constexpr int kMinInput = -(40 << 16); - constexpr int kExponentBits = 15; - float max_error = 0.f; - float max_abs_error = 0.f; - for (int i = kMinInput; i <= kMaxInput; ++i) { - csrblocksparse::fixed32 fixed_int(i); - float x = static_cast(fixed_int); - float exact_tanh = tanh(x); - float approx_tanh = fixed32_tanh(fixed_int); - float diff = exact_tanh - approx_tanh; - float abs_diff = std::abs(diff); - float rel_diff = RelDiff(exact_tanh, approx_tanh); - ASSERT_LT(abs_diff, abs_tolerance) - << "x=" << x << ", target=" << exact_tanh << ", aprx=" << approx_tanh; - max_abs_error = std::max(max_abs_error, abs_diff); 
- max_error = std::max(max_error, rel_diff); - ASSERT_LT(rel_diff, rel_tolerance) - << "x=" << x << ", target=" << exact_tanh << ", aprx=" << approx_tanh; - } - LOG(INFO) << "Max relative error = " << max_error - << ", abs=" << max_abs_error; -} - -TEST(Transcendentals, QuarticTanh) { - TestTanh(kQuarticFloatTanhTolerance, - kQuarticFloatTanhRelTolerance); -} - -TEST(Transcendentals, CubicTanh) { - TestTanh(kCubicTanhTolerance, kCubicTanhRelTolerance); -} - -template -void TestSigmoid(float abs_tolerance, float rel_tolerance) { - constexpr int kMaxInput = 80 << 16; - constexpr int kMinInput = -(80 << 16); - constexpr int kExponentBits = 15; - float max_error = 0.f; - float max_abs_error = 0.f; - for (int i = kMinInput; i <= kMaxInput; ++i) { - csrblocksparse::fixed32 fixed_int(i); - float x = static_cast(fixed_int); - float exact_sigmoid = 1.0f / (1.0f + exp(-x)); - float approx_sigmoid = fixed32_sigmoid(fixed_int); - float diff = exact_sigmoid - approx_sigmoid; - float abs_diff = std::abs(diff); - float rel_diff = RelDiff(exact_sigmoid, approx_sigmoid); - max_error = std::max(max_error, rel_diff); - ASSERT_LT(abs_diff, abs_tolerance) - << "x=" << x << ", target=" << exact_sigmoid - << ", aprx=" << approx_sigmoid; - max_abs_error = std::max(max_abs_error, abs_diff); - ASSERT_LT(rel_diff, rel_tolerance) - << "x=" << x << ", target=" << exact_sigmoid - << ", aprx=" << approx_sigmoid; - } - LOG(INFO) << "Max relative sigmoid error = " << max_error - << ", abs=" << max_abs_error; -} - -TEST(Transcendentals, QuarticSigmoidExp) { - TestSigmoid(kQuarticSigmoidTolerance, - kQuarticSigmoidRelTolerance); -} - -TEST(Transcendentals, CubicSigmoidExp) { - TestSigmoid(kCubicSigmoidTolerance, - kCubicSigmoidRelTolerance); -} - -TEST(Transcendentals, Sigmoid) { - // 132 - 127 = 5, we check between -63.99... and 63.99... - const int maxExponent = 132; - const int minExponent = 0; - // The mantissa bits must not exceed 23, so min exponent bits here is: - // 31 - 23 = 8. - constexpr int kExponentBits = 9; - float max_error = 0.f; - float max_abs_error = 0.f; -#if defined __aarch64__ - float max_vector_error = 0.f; - float max_vector_abs_error = 0.f; -#endif - for (int s = 0; s < 2; ++s) { - for (int e = minExponent; e < maxExponent; ++e) { - // Don't check every mantissa for speed reasons. - for (int m = 0; m < (1 << 23); m += (1 << 10)) { - uint32_t int_val = s << 31 | e << 23 | m; - float x; - memcpy(&x, &int_val, sizeof(float)); - - float exact_sigmoid = 1. / (1. 
+ expf(-x)); - float approx_sigmoid = csrblocksparse::fast_sigmoid(x); - float approx_sigmoid_fixed = - csrblocksparse::fast_sigmoid( - csrblocksparse::fixed32(x)); - - float rel_diff = RelDiff(exact_sigmoid, approx_sigmoid); - float abs_diff = std::abs(exact_sigmoid - approx_sigmoid); - float rel_diff_fixed = RelDiff(exact_sigmoid, approx_sigmoid_fixed); - max_error = std::max(max_error, rel_diff); - max_abs_error = std::max(max_abs_error, abs_diff); - EXPECT_LT(rel_diff, kSigmoidRelTolerance) - << exact_sigmoid << " " << approx_sigmoid << " " << x; - EXPECT_NEAR(approx_sigmoid, exact_sigmoid, kSigmoidAbsTolerance) << x; - - EXPECT_LT(rel_diff_fixed, kSigmoidFixedRelTolerance) - << exact_sigmoid << " " << approx_sigmoid_fixed << " " << x; - EXPECT_NEAR(approx_sigmoid_fixed, exact_sigmoid, - kSigmoidFixedAbsTolerance) - << x; -#if defined __aarch64__ - constexpr int kSIMD_WIDTH = 4; - float approx_results[kSIMD_WIDTH]; - int32x4_t input = - vdupq_n_s32(csrblocksparse::fixed32(x).raw_val()); - float32x4_t result = csrblocksparse::fast_sigmoid(input); - vst1q_f32(approx_results, result); - - for (int i = 0; i < kSIMD_WIDTH; ++i) { - float rel_diff = RelDiff(exact_sigmoid, approx_results[i]); - float abs_diff = std::abs(exact_sigmoid - approx_results[i]); - max_vector_error = std::max(max_vector_error, rel_diff); - max_vector_abs_error = std::max(max_vector_abs_error, abs_diff); - EXPECT_LT(rel_diff, kSigmoidRelTolerance) - << exact_sigmoid << " " << approx_sigmoid << " " << x; - EXPECT_NEAR(approx_sigmoid, exact_sigmoid, kSigmoidAbsTolerance) << x; - } -#endif - } - } - } - LOG(INFO) << "Max relative error in float sigmoid=" << max_error; - LOG(INFO) << "Max abs error in float sigmoid=" << max_abs_error; -#if defined __aarch64__ - LOG(INFO) << "Max relative vector error fixed sigmoid=" << max_vector_error; - LOG(INFO) << "Max abs vector error fixed sigmoid=" << max_vector_abs_error; -#endif -} - -TEST(Transcendentals, Tanh) { - // 132 - 127 = 5, we check between -63.99... and 63.99... - const int maxExponent = 132; - const int minExponent = 0; - float max_error = 0.f; - float max_abs_error = 0.f; - for (int s = 0; s < 2; ++s) { - for (int e = minExponent; e < maxExponent; ++e) { - // Don't check every mantissa for speed reasons. - for (int m = 0; m < (1 << 23); m += (1 << 10)) { - uint32_t int_val = s << 31 | e << 23 | m; - float x; - memcpy(&x, &int_val, sizeof(float)); - - float exact_tanh = tanhf(x); - float approx_tanh = csrblocksparse::fast_tanh(x); - - float rel_diff = RelDiff(exact_tanh, approx_tanh); - float abs_diff = std::abs(exact_tanh - approx_tanh); - max_error = std::max(rel_diff, max_error); - max_abs_error = std::max(abs_diff, max_abs_error); - - EXPECT_LT(rel_diff, kTanhRelTolerance) - << exact_tanh << " " << approx_tanh << " " << x; - EXPECT_NEAR(approx_tanh, exact_tanh, kTanhAbsTolerance) << x; - } - } - } - LOG(INFO) << "Max relative error in float tanh=" << max_error; - LOG(INFO) << "Max abs error in float tanh=" << max_abs_error; - - // tanh behavior is not identical across all lanes, so need to test - // with some values in the linear region and some not. 
-#if defined __aarch64__ - float vals[4] = {-1.f, -.1f, .1f, 1.f}; - float exact_results[4]; - float approx_results[4]; - max_error = 0.f; - max_abs_error = 0.f; - - float32x4_t input = vld1q_f32(vals); - float32x4_t result = csrblocksparse::fast_tanh(input); - vst1q_f32(approx_results, result); - - for (int i = 0; i < 4; ++i) { - exact_results[i] = tanh(vals[i]); - float rel_diff = RelDiff(exact_results[i], approx_results[i]); - float abs_diff = std::abs(exact_results[i] - approx_results[i]); - max_error = std::max(rel_diff, max_error); - max_abs_error = std::max(abs_diff, max_abs_error); - - EXPECT_LT(rel_diff, kTanhRelTolerance) - << exact_results[i] << " " << approx_results[i] << " " << vals[i]; - EXPECT_NEAR(approx_results[i], exact_results[i], kTanhAbsTolerance) - << vals[i]; - } - LOG(INFO) << "Max relative vector error in float tanh=" << max_error; - LOG(INFO) << "Max abs vector error in float tanh=" << max_abs_error; -#endif -} - -#if defined __AVX2__ - -constexpr int kSIMDSize = 8; -constexpr int kNumExpBitsIn = 10; -constexpr int kNumExpBitsOut = 5; - -TEST(Transcendentals, TanhLut) { - // Test every value in (-1, 1) for round-trip exactness. - constexpr int kNumMantissaBitsIn = fixed32::kMantissaBits; - constexpr int kNumMantissaBitsOut = fixed16::kMantissaBits; - const int32_t* tanh_table = TanhTable(kNumMantissaBitsOut); - float in_factor = static_cast(1 << kNumMantissaBitsIn); - float out_factor = static_cast(1 << kNumMantissaBitsOut); - for (int i = 1 - (1 << kNumMantissaBitsOut); - i + kSIMDSize < (1 << kNumMantissaBitsOut); i += kSIMDSize) { - int32_t inputs[kSIMDSize]; - int32_t outputs[kSIMDSize]; - int32_t target_outputs[kSIMDSize]; - for (int j = 0; j < kSIMDSize; ++j) { - float target_tanh = (i + j) / out_factor; - float x = atanhf(static_cast(target_tanh)); - inputs[j] = static_cast(x * in_factor); - target_outputs[j] = i + j; - } - __m256i x_in = _mm256_loadu_si256(reinterpret_cast<__m256i*>(inputs)); - __m256i output = - fixed32_tanh_fixed16( - tanh_table, x_in); - _mm256_storeu_si256(reinterpret_cast<__m256i*>(outputs), output); - for (int j = 0; j < kSIMDSize; ++j) { - EXPECT_EQ(target_outputs[j], outputs[j]); - } - } -} - -TEST(Transcendentals, SigmoidLut) { - // Test every value in (-1, 1) for round-trip exactness. - constexpr int kNumMantissaBitsIn = fixed32::kMantissaBits; - constexpr int kNumMantissaBitsOut = fixed16::kMantissaBits; - const int32_t* sigmoid_table = SigmoidTable(kNumMantissaBitsOut); - float in_factor = static_cast(1 << kNumMantissaBitsIn); - float out_factor = static_cast(1 << kNumMantissaBitsOut); - for (int i = 1; i + kSIMDSize < (1 << kNumMantissaBitsOut); i += kSIMDSize) { - int32_t inputs[kSIMDSize]; - int32_t outputs[kSIMDSize]; - int32_t target_outputs[kSIMDSize]; - for (int j = 0; j < kSIMDSize; ++j) { - float target_sigmoid = (i + j) / out_factor; - float x = 2.0f * atanhf(2.0f * static_cast(target_sigmoid) - 1.0f); - inputs[j] = static_cast(x * in_factor); - target_outputs[j] = i + j; - } - __m256i x_in = _mm256_loadu_si256(reinterpret_cast<__m256i*>(inputs)); - __m256i output = - fixed32_sigmoid_fixed16( - sigmoid_table, x_in); - _mm256_storeu_si256(reinterpret_cast<__m256i*>(outputs), output); - for (int j = 0; j < kSIMDSize; ++j) { - EXPECT_EQ(target_outputs[j], outputs[j]); - } - } -} - -template -static void TestExpAVX2(float abs_tolerance, float rel_tolerance) { - constexpr int kMantissaBits = 20; - // Test every value in [-80, 80] and report the max error. 
- constexpr int kMinInput = -(80 << kMantissaBits); - constexpr int kMaxInput = 80 << kMantissaBits; - constexpr int kNumInputs = kMaxInput - kMinInput; - std::vector inputs(kNumInputs); - std::vector outputs(kNumInputs); - std::vector target_outputs(kNumInputs); - for (int i = 0; i < inputs.size(); ++i) { - csrblocksparse::fixed32<31 - kMantissaBits> fixed_int(i + kMinInput); - float x = static_cast(fixed_int); - inputs[i] = fixed_int.raw_val(); - target_outputs[i] = expf(x); - } - absl::Time t_start = absl::Now(); - for (int i = 0; i + kSIMDSize * 2 <= kNumInputs; i += kSIMDSize * 2) { - __m256i x0 = - _mm256_loadu_si256(reinterpret_cast(inputs.data() + i)); - __m256i x1 = _mm256_loadu_si256( - reinterpret_cast(inputs.data() + i + kSIMDSize)); - __m256 y0, y1; - fixed32_exp_float(x0, x1, y0, y1); - _mm256_storeu_ps(outputs.data() + i, y0); - _mm256_storeu_ps(outputs.data() + i + kSIMDSize, y1); - } - LOG(INFO) << "Time=" << absl::ToDoubleMilliseconds(absl::Now() - t_start); - float max_error = 0.f; - float max_abs_error = 0.f; - for (int i = 0; i < kNumInputs; ++i) { - float diff = target_outputs[i] - outputs[i]; - float abs_diff = std::abs(diff); - csrblocksparse::fixed32<31 - kMantissaBits> fixed_int(i + kMinInput); - float x = static_cast(fixed_int); - float rel_diff = RelDiff(target_outputs[i], outputs[i]); - max_error = std::max(max_error, rel_diff); - if (x <= 1.0f) { - ASSERT_LT(abs_diff, abs_tolerance) - << "x=" << x << ", target=" << target_outputs[i] - << ", result= " << outputs[i] << ", i=" << i; - max_abs_error = std::max(max_abs_error, abs_diff); - } - ASSERT_LT(rel_diff, rel_tolerance) - << "x=" << x << ", target=" << target_outputs[i] - << ", result= " << outputs[i] << ", i=" << i; - } - LOG(INFO) << "Max relative error = " << max_error - << ", abs=" << max_abs_error; -} - -TEST(Transcendentals, QuarticFloatExpAVX2) { - TestExpAVX2(kQuarticFloatExpTolerance, - kQuarticFloatExpRelTolerance); -} - -TEST(Transcendentals, QuarticExpAVX2) { - TestExpAVX2(kQuarticExpTolerance, kQuarticExpRelTolerance); -} - -TEST(Transcendentals, CubicExpAVX2) { - TestExpAVX2(kCubicExpTolerance, kCubicExpRelTolerance); -} - -template -void TestTanhAVX2Float(float abs_tolerance, float rel_tolerance) { - constexpr int kMantissaBits = 16; - // Test every value in [-10, 10] and report the max error. 
- constexpr int kMinInput = -(10 << kMantissaBits); - constexpr int kMaxInput = 10 << kMantissaBits; - constexpr int kNumInputs = kMaxInput - kMinInput; - float max_error = 0.f; - float max_abs_error = 0.f; - std::vector inputs(kNumInputs); - std::vector outputs(kNumInputs); - std::vector target_outputs(kNumInputs); - for (int i = 0; i < inputs.size(); ++i) { - csrblocksparse::fixed32<31 - kMantissaBits> fixed_int(i + kMinInput); - float x = static_cast(fixed_int); - float exact = tanh(x); - inputs[i] = static_cast(fixed_int.raw_val()); - target_outputs[i] = exact; - } - absl::Time t_start = absl::Now(); - for (int i = 0; i + kSIMDSize * 2 <= inputs.size(); i += kSIMDSize * 2) { - __m256 x0 = _mm256_loadu_ps(inputs.data() + i); - __m256 x1 = _mm256_loadu_ps(inputs.data() + kSIMDSize + i); - __m256 y0, y1; - float_tanh_float(x0, x1, y0, y1); - _mm256_storeu_ps(outputs.data() + i, y0); - _mm256_storeu_ps(outputs.data() + i + kSIMDSize, y1); - } - LOG(INFO) << "Time=" << absl::ToDoubleMilliseconds(absl::Now() - t_start); - float worst_abs_x = 0.0f, worst_rel_x = 0.0f; - for (int i = 0; i < inputs.size(); ++i) { - float diff = target_outputs[i] - outputs[i]; - float abs_diff = std::abs(diff); - csrblocksparse::fixed32<31 - kMantissaBits> fixed_int(i + kMinInput); - float x = static_cast(fixed_int); - ASSERT_LT(abs_diff, abs_tolerance) - << "x=" << x << ", target=" << target_outputs[i] - << ", aprx=" << outputs[i]; - if (abs_diff > max_abs_error) worst_abs_x = x; - max_abs_error = std::max(max_abs_error, abs_diff); - float rel_diff = 0.0f; - rel_diff = RelDiff(target_outputs[i], outputs[i]); - if (rel_diff > max_error) worst_rel_x = x; - max_error = std::max(max_error, rel_diff); - ASSERT_LT(rel_diff, rel_tolerance) - << "x=" << x << ", target=" << target_outputs[i] - << ", aprx=" << outputs[i]; - } - LOG(INFO) << "Max relative error = " << max_error - << ", abs=" << max_abs_error; - LOG(INFO) << "Worst rel x = " << worst_rel_x << ", abs=" << worst_abs_x; -} - -TEST(Transcendentals, QuarticTanhFloatAVX2Float) { - TestTanhAVX2Float(kQuarticFloatTanhTolerance, - kQuarticFloatTanhRelTolerance); -} - -TEST(Transcendentals, QuarticTanhAVX2Float) { - TestTanhAVX2Float(kQuarticTanhTolerance, - kQuarticTanhRelTolerance); -} - -TEST(Transcendentals, CubicTanhAVX2Float) { - TestTanhAVX2Float(kCubicTanhTolerance, - kCubicTanhRelTolerance); -} - -template -void TestSigmoidAVX2Float(float abs_tolerance, float rel_tolerance) { - constexpr int kMantissaBits = 20; - // Test every value in [-20, 20] and report the max error. 
- constexpr int kMaxInput = 20 << kMantissaBits; - constexpr int kMinInput = -(20 << kMantissaBits); - float max_error = 0.f; - float max_abs_error = 0.f; - std::vector inputs(kMaxInput - kMinInput); - std::vector outputs(kMaxInput - kMinInput); - std::vector target_outputs(kMaxInput - kMinInput); - for (int i = 0; i < inputs.size(); ++i) { - csrblocksparse::fixed32<31 - kMantissaBits> fixed_int(i + kMinInput); - float x = static_cast(fixed_int); - float exact = 1.0f / (1.0f + expf(-x)); - inputs[i] = fixed_int.raw_val(); - target_outputs[i] = exact; - } - absl::Time t_start = absl::Now(); - for (int i = 0; i + kSIMDSize * 2 <= inputs.size(); i += kSIMDSize * 2) { - __m256i x0 = - _mm256_loadu_si256(reinterpret_cast(inputs.data() + i)); - __m256i x1 = _mm256_loadu_si256( - reinterpret_cast(inputs.data() + i + kSIMDSize)); - __m256 y0 = _mm256_cvtepi32_ps(x0); - __m256 y1 = _mm256_cvtepi32_ps(x1); - float_sigmoid_float(y0, y1); - _mm256_storeu_ps(outputs.data() + i, y0); - _mm256_storeu_ps(outputs.data() + i + kSIMDSize, y1); - } - LOG(INFO) << "Time=" << absl::ToDoubleMilliseconds(absl::Now() - t_start); - for (int i = 0; i < inputs.size(); ++i) { - float diff = target_outputs[i] - outputs[i]; - float abs_diff = std::abs(diff); - csrblocksparse::fixed32<31 - kMantissaBits> fixed_int(i + kMinInput); - float x = static_cast(fixed_int); - float rel_diff = RelDiff(target_outputs[i], outputs[i]); - max_error = std::max(max_error, rel_diff); - ASSERT_LT(abs_diff, abs_tolerance) - << "x=" << x << ", target=" << target_outputs[i] - << ", aprx=" << outputs[i]; - max_abs_error = std::max(max_abs_error, abs_diff); - ASSERT_LT(rel_diff, rel_tolerance) - << "x=" << x << ", target=" << target_outputs[i] - << ", aprx=" << outputs[i]; - } - LOG(INFO) << "Max relative error = " << max_error - << ", abs=" << max_abs_error; -} - -TEST(Transcendentals, QuarticSigmoidFloatAVX2Float) { - TestSigmoidAVX2Float(kQuarticFloatSigmoidTolerance, - kQuarticFloatSigmoidRelTolerance); -} - -TEST(Transcendentals, QuarticSigmoidAVX2Float) { - TestSigmoidAVX2Float(kQuarticSigmoidTolerance, - kQuarticSigmoidRelTolerance); -} - -TEST(Transcendentals, CubicSigmoidAVX2Float) { - TestSigmoidAVX2Float(kCubicSigmoidTolerance, - kCubicSigmoidRelTolerance); -} -#endif // __AVX2__ - -} // namespace csrblocksparse diff --git a/spaces/nuttella/supa/Dockerfile b/spaces/nuttella/supa/Dockerfile deleted file mode 100644 index 4cb0ce42128d9a2ad33a395883f5e5455a38c707..0000000000000000000000000000000000000000 --- a/spaces/nuttella/supa/Dockerfile +++ /dev/null @@ -1,11 +0,0 @@ -FROM node:18-bullseye-slim -RUN apt-get update && \ - apt-get install -y git -RUN git clone https://gitgud.io/khanon/oai-reverse-proxy.git /app -WORKDIR /app -RUN npm install -COPY Dockerfile greeting.md* .env* ./ -RUN npm run build -EXPOSE 7860 -ENV NODE_ENV=production -CMD [ "npm", "start" ] \ No newline at end of file diff --git a/spaces/oguzakif/video-object-remover/FGT_codes/LAFC/networks/__init__.py b/spaces/oguzakif/video-object-remover/FGT_codes/LAFC/networks/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/oliver2023/chatgpt-on-wechat/bot/openai/open_ai_image.py b/spaces/oliver2023/chatgpt-on-wechat/bot/openai/open_ai_image.py deleted file mode 100644 index 2fae243fff31e036f021c5d971f655901a5224b6..0000000000000000000000000000000000000000 --- a/spaces/oliver2023/chatgpt-on-wechat/bot/openai/open_ai_image.py +++ /dev/null @@ -1,38 +0,0 @@ -import time -import 
openai -import openai.error -from common.token_bucket import TokenBucket -from common.log import logger -from config import conf - -# OPENAI提供的画图接口 -class OpenAIImage(object): - def __init__(self): - openai.api_key = conf().get('open_ai_api_key') - if conf().get('rate_limit_dalle'): - self.tb4dalle = TokenBucket(conf().get('rate_limit_dalle', 50)) - - def create_img(self, query, retry_count=0): - try: - if conf().get('rate_limit_dalle') and not self.tb4dalle.get_token(): - return False, "请求太快了,请休息一下再问我吧" - logger.info("[OPEN_AI] image_query={}".format(query)) - response = openai.Image.create( - prompt=query, #图片描述 - n=1, #每次生成图片的数量 - size="256x256" #图片大小,可选有 256x256, 512x512, 1024x1024 - ) - image_url = response['data'][0]['url'] - logger.info("[OPEN_AI] image_url={}".format(image_url)) - return True, image_url - except openai.error.RateLimitError as e: - logger.warn(e) - if retry_count < 1: - time.sleep(5) - logger.warn("[OPEN_AI] ImgCreate RateLimit exceed, 第{}次重试".format(retry_count+1)) - return self.create_img(query, retry_count+1) - else: - return False, "提问太快啦,请休息一下再问我吧" - except Exception as e: - logger.exception(e) - return False, str(e) \ No newline at end of file diff --git a/spaces/omlab/vlchecklist_demo/models/albef/models/model_pretrain_nlvr.py b/spaces/omlab/vlchecklist_demo/models/albef/models/model_pretrain_nlvr.py deleted file mode 100644 index dc57ec86a8d4754fb229b0db9f588fb5d519324f..0000000000000000000000000000000000000000 --- a/spaces/omlab/vlchecklist_demo/models/albef/models/model_pretrain_nlvr.py +++ /dev/null @@ -1,99 +0,0 @@ -from functools import partial -from models.vit import VisionTransformer -from models.xbert import BertConfig, BertModel - -import torch -from torch import nn -import torch.nn.functional as F - -class ALBEF(nn.Module): - def __init__(self, - text_encoder = None, - tokenizer = None, - config = None, - ): - super().__init__() - - self.tokenizer = tokenizer - vision_width = config['vision_width'] - embed_dim = config['embed_dim'] - - self.visual_encoder = VisionTransformer( - img_size=config['image_res'], patch_size=16, embed_dim=768, depth=12, num_heads=12, - mlp_ratio=4, qkv_bias=True, norm_layer=partial(nn.LayerNorm, eps=1e-6)) - - bert_config = BertConfig.from_json_file(config['bert_config']) - bert_config.num_hidden_layers = 18 - self.text_encoder = BertModel.from_pretrained(text_encoder, config=bert_config, add_pooling_layer=False) - - #share the cross-attention layers for two images - self.share_cross_attention(self.text_encoder.encoder) - - text_width = self.text_encoder.config.hidden_size - self.vision_proj = nn.Linear(vision_width, embed_dim) - self.text_proj = nn.Linear(text_width, embed_dim) - self.temp = nn.Parameter(torch.ones([]) * 0.07) - self.ta_head = nn.Linear(self.text_encoder.config.hidden_size, 3) - - - def forward(self, image, text): - image_embeds = self.visual_encoder(image) - image_atts = torch.ones(image_embeds.size()[:-1],dtype=torch.long).to(image.device) - with torch.no_grad(): - image_feat = F.normalize(self.vision_proj(image_embeds[:,0,:]),dim=-1) - sim = image_feat @ image_feat.t() / 0.07 - weights = F.softmax(sim,dim=1) - weights.fill_diagonal_(0) - - image_inputs = [[],[]] - labels = [] - for b in range(image.size(0)): - if torch.rand(1)>1/3: - idx = torch.multinomial(weights[b], 1).item() - if torch.rand(1)>0.5: - image_inputs[0].append(image_embeds[b]) - image_inputs[1].append(image_embeds[idx]) - labels.append(0) - else: - image_inputs[1].append(image_embeds[b]) - image_inputs[0].append(image_embeds[idx]) - 
labels.append(1) - else: - idx = torch.multinomial(weights[b], 2) - image_inputs[0].append(image_embeds[idx[0]]) - image_inputs[1].append(image_embeds[idx[1]]) - labels.append(2) - - image_inputs[0] = torch.stack(image_inputs[0],dim=0) - image_inputs[1] = torch.stack(image_inputs[1],dim=0) - labels = torch.LongTensor(labels).to(image.device) - - output = self.text_encoder(text.input_ids, - attention_mask = text.attention_mask, - encoder_hidden_states = image_inputs, - encoder_attention_mask = [image_atts,image_atts], - return_dict = True, - ) - - pred = self.ta_head(output.last_hidden_state[:,0,:]) - loss = F.cross_entropy(pred, labels) - - return loss - - - - def share_cross_attention(self, model): - - for i in range(6): - layer_num = 6+i*2 - modules_0 = model.layer[layer_num].crossattention.self._modules - modules_1 = model.layer[layer_num+1].crossattention.self._modules - - for name in modules_0.keys(): - if 'key' in name or 'value' in name: - module_0 = modules_0[name] - module_1 = modules_1[name] - if hasattr(module_0, "weight"): - module_0.weight = module_1.weight - if hasattr(module_0, "bias"): - module_0.bias = module_1.bias \ No newline at end of file diff --git a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/src/diffusers/models/unet_1d.py b/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/src/diffusers/models/unet_1d.py deleted file mode 100644 index 5bb5b0818245e19225b1c972e13d05b1e3e4f6c3..0000000000000000000000000000000000000000 --- a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/src/diffusers/models/unet_1d.py +++ /dev/null @@ -1,255 +0,0 @@ -# Copyright 2023 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -from dataclasses import dataclass -from typing import Optional, Tuple, Union - -import torch -import torch.nn as nn - -from ..configuration_utils import ConfigMixin, register_to_config -from ..utils import BaseOutput -from .embeddings import GaussianFourierProjection, TimestepEmbedding, Timesteps -from .modeling_utils import ModelMixin -from .unet_1d_blocks import get_down_block, get_mid_block, get_out_block, get_up_block - - -@dataclass -class UNet1DOutput(BaseOutput): - """ - The output of [`UNet1DModel`]. - - Args: - sample (`torch.FloatTensor` of shape `(batch_size, num_channels, sample_size)`): - The hidden states output from the last layer of the model. - """ - - sample: torch.FloatTensor - - -class UNet1DModel(ModelMixin, ConfigMixin): - r""" - A 1D UNet model that takes a noisy sample and a timestep and returns a sample shaped output. - - This model inherits from [`ModelMixin`]. Check the superclass documentation for it's generic methods implemented - for all models (such as downloading or saving). - - Parameters: - sample_size (`int`, *optional*): Default length of sample. Should be adaptable at runtime. - in_channels (`int`, *optional*, defaults to 2): Number of channels in the input sample. - out_channels (`int`, *optional*, defaults to 2): Number of channels in the output. 
- extra_in_channels (`int`, *optional*, defaults to 0): - Number of additional channels to be added to the input of the first down block. Useful for cases where the - input data has more channels than what the model was initially designed for. - time_embedding_type (`str`, *optional*, defaults to `"fourier"`): Type of time embedding to use. - freq_shift (`float`, *optional*, defaults to 0.0): Frequency shift for Fourier time embedding. - flip_sin_to_cos (`bool`, *optional*, defaults to `False`): - Whether to flip sin to cos for Fourier time embedding. - down_block_types (`Tuple[str]`, *optional*, defaults to `("DownBlock1DNoSkip", "DownBlock1D", "AttnDownBlock1D")`): - Tuple of downsample block types. - up_block_types (`Tuple[str]`, *optional*, defaults to `("AttnUpBlock1D", "UpBlock1D", "UpBlock1DNoSkip")`): - Tuple of upsample block types. - block_out_channels (`Tuple[int]`, *optional*, defaults to `(32, 32, 64)`): - Tuple of block output channels. - mid_block_type (`str`, *optional*, defaults to `"UNetMidBlock1D"`): Block type for middle of UNet. - out_block_type (`str`, *optional*, defaults to `None`): Optional output processing block of UNet. - act_fn (`str`, *optional*, defaults to `None`): Optional activation function in UNet blocks. - norm_num_groups (`int`, *optional*, defaults to 8): The number of groups for normalization. - layers_per_block (`int`, *optional*, defaults to 1): The number of layers per block. - downsample_each_block (`int`, *optional*, defaults to `False`): - Experimental feature for using a UNet without upsampling. - """ - - @register_to_config - def __init__( - self, - sample_size: int = 65536, - sample_rate: Optional[int] = None, - in_channels: int = 2, - out_channels: int = 2, - extra_in_channels: int = 0, - time_embedding_type: str = "fourier", - flip_sin_to_cos: bool = True, - use_timestep_embedding: bool = False, - freq_shift: float = 0.0, - down_block_types: Tuple[str] = ("DownBlock1DNoSkip", "DownBlock1D", "AttnDownBlock1D"), - up_block_types: Tuple[str] = ("AttnUpBlock1D", "UpBlock1D", "UpBlock1DNoSkip"), - mid_block_type: Tuple[str] = "UNetMidBlock1D", - out_block_type: str = None, - block_out_channels: Tuple[int] = (32, 32, 64), - act_fn: str = None, - norm_num_groups: int = 8, - layers_per_block: int = 1, - downsample_each_block: bool = False, - ): - super().__init__() - self.sample_size = sample_size - - # time - if time_embedding_type == "fourier": - self.time_proj = GaussianFourierProjection( - embedding_size=8, set_W_to_weight=False, log=False, flip_sin_to_cos=flip_sin_to_cos - ) - timestep_input_dim = 2 * block_out_channels[0] - elif time_embedding_type == "positional": - self.time_proj = Timesteps( - block_out_channels[0], flip_sin_to_cos=flip_sin_to_cos, downscale_freq_shift=freq_shift - ) - timestep_input_dim = block_out_channels[0] - - if use_timestep_embedding: - time_embed_dim = block_out_channels[0] * 4 - self.time_mlp = TimestepEmbedding( - in_channels=timestep_input_dim, - time_embed_dim=time_embed_dim, - act_fn=act_fn, - out_dim=block_out_channels[0], - ) - - self.down_blocks = nn.ModuleList([]) - self.mid_block = None - self.up_blocks = nn.ModuleList([]) - self.out_block = None - - # down - output_channel = in_channels - for i, down_block_type in enumerate(down_block_types): - input_channel = output_channel - output_channel = block_out_channels[i] - - if i == 0: - input_channel += extra_in_channels - - is_final_block = i == len(block_out_channels) - 1 - - down_block = get_down_block( - down_block_type, - num_layers=layers_per_block, - 
in_channels=input_channel, - out_channels=output_channel, - temb_channels=block_out_channels[0], - add_downsample=not is_final_block or downsample_each_block, - ) - self.down_blocks.append(down_block) - - # mid - self.mid_block = get_mid_block( - mid_block_type, - in_channels=block_out_channels[-1], - mid_channels=block_out_channels[-1], - out_channels=block_out_channels[-1], - embed_dim=block_out_channels[0], - num_layers=layers_per_block, - add_downsample=downsample_each_block, - ) - - # up - reversed_block_out_channels = list(reversed(block_out_channels)) - output_channel = reversed_block_out_channels[0] - if out_block_type is None: - final_upsample_channels = out_channels - else: - final_upsample_channels = block_out_channels[0] - - for i, up_block_type in enumerate(up_block_types): - prev_output_channel = output_channel - output_channel = ( - reversed_block_out_channels[i + 1] if i < len(up_block_types) - 1 else final_upsample_channels - ) - - is_final_block = i == len(block_out_channels) - 1 - - up_block = get_up_block( - up_block_type, - num_layers=layers_per_block, - in_channels=prev_output_channel, - out_channels=output_channel, - temb_channels=block_out_channels[0], - add_upsample=not is_final_block, - ) - self.up_blocks.append(up_block) - prev_output_channel = output_channel - - # out - num_groups_out = norm_num_groups if norm_num_groups is not None else min(block_out_channels[0] // 4, 32) - self.out_block = get_out_block( - out_block_type=out_block_type, - num_groups_out=num_groups_out, - embed_dim=block_out_channels[0], - out_channels=out_channels, - act_fn=act_fn, - fc_dim=block_out_channels[-1] // 4, - ) - - def forward( - self, - sample: torch.FloatTensor, - timestep: Union[torch.Tensor, float, int], - return_dict: bool = True, - ) -> Union[UNet1DOutput, Tuple]: - r""" - The [`UNet1DModel`] forward method. - - Args: - sample (`torch.FloatTensor`): - The noisy input tensor with the following shape `(batch_size, num_channels, sample_size)`. - timestep (`torch.FloatTensor` or `float` or `int`): The number of timesteps to denoise an input. - return_dict (`bool`, *optional*, defaults to `True`): - Whether or not to return a [`~models.unet_1d.UNet1DOutput`] instead of a plain tuple. - - Returns: - [`~models.unet_1d.UNet1DOutput`] or `tuple`: - If `return_dict` is True, an [`~models.unet_1d.UNet1DOutput`] is returned, otherwise a `tuple` is - returned where the first element is the sample tensor. - """ - - # 1. time - timesteps = timestep - if not torch.is_tensor(timesteps): - timesteps = torch.tensor([timesteps], dtype=torch.long, device=sample.device) - elif torch.is_tensor(timesteps) and len(timesteps.shape) == 0: - timesteps = timesteps[None].to(sample.device) - - timestep_embed = self.time_proj(timesteps) - if self.config.use_timestep_embedding: - timestep_embed = self.time_mlp(timestep_embed) - else: - timestep_embed = timestep_embed[..., None] - timestep_embed = timestep_embed.repeat([1, 1, sample.shape[2]]).to(sample.dtype) - timestep_embed = timestep_embed.broadcast_to((sample.shape[:1] + timestep_embed.shape[1:])) - - # 2. down - down_block_res_samples = () - for downsample_block in self.down_blocks: - sample, res_samples = downsample_block(hidden_states=sample, temb=timestep_embed) - down_block_res_samples += res_samples - - # 3. mid - if self.mid_block: - sample = self.mid_block(sample, timestep_embed) - - # 4. 
up - for i, upsample_block in enumerate(self.up_blocks): - res_samples = down_block_res_samples[-1:] - down_block_res_samples = down_block_res_samples[:-1] - sample = upsample_block(sample, res_hidden_states_tuple=res_samples, temb=timestep_embed) - - # 5. post-process - if self.out_block: - sample = self.out_block(sample, timestep_embed) - - if not return_dict: - return (sample,) - - return UNet1DOutput(sample=sample) diff --git a/spaces/paulengstler/interpretable-vertebral-fracture-diagnosis/netdissect/upsegmodel/models.py b/spaces/paulengstler/interpretable-vertebral-fracture-diagnosis/netdissect/upsegmodel/models.py deleted file mode 100644 index 68a773ad100f2df6e4edfdc61229adf5060d0f0a..0000000000000000000000000000000000000000 --- a/spaces/paulengstler/interpretable-vertebral-fracture-diagnosis/netdissect/upsegmodel/models.py +++ /dev/null @@ -1,427 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as F -import torchvision -from . import resnet, resnext -try: - from lib.nn import SynchronizedBatchNorm2d -except ImportError: - from torch.nn import BatchNorm2d as SynchronizedBatchNorm2d - - -class SegmentationModuleBase(nn.Module): - def __init__(self): - super(SegmentationModuleBase, self).__init__() - - @staticmethod - def pixel_acc(pred, label, ignore_index=-1): - _, preds = torch.max(pred, dim=1) - valid = (label != ignore_index).long() - acc_sum = torch.sum(valid * (preds == label).long()) - pixel_sum = torch.sum(valid) - acc = acc_sum.float() / (pixel_sum.float() + 1e-10) - return acc - - @staticmethod - def part_pixel_acc(pred_part, gt_seg_part, gt_seg_object, object_label, valid): - mask_object = (gt_seg_object == object_label) - _, pred = torch.max(pred_part, dim=1) - acc_sum = mask_object * (pred == gt_seg_part) - acc_sum = torch.sum(acc_sum.view(acc_sum.size(0), -1), dim=1) - acc_sum = torch.sum(acc_sum * valid) - pixel_sum = torch.sum(mask_object.view(mask_object.size(0), -1), dim=1) - pixel_sum = torch.sum(pixel_sum * valid) - return acc_sum, pixel_sum - - @staticmethod - def part_loss(pred_part, gt_seg_part, gt_seg_object, object_label, valid): - mask_object = (gt_seg_object == object_label) - loss = F.nll_loss(pred_part, gt_seg_part * mask_object.long(), reduction='none') - loss = loss * mask_object.float() - loss = torch.sum(loss.view(loss.size(0), -1), dim=1) - nr_pixel = torch.sum(mask_object.view(mask_object.shape[0], -1), dim=1) - sum_pixel = (nr_pixel * valid).sum() - loss = (loss * valid.float()).sum() / torch.clamp(sum_pixel, 1).float() - return loss - - -class SegmentationModule(SegmentationModuleBase): - def __init__(self, net_enc, net_dec, labeldata, loss_scale=None): - super(SegmentationModule, self).__init__() - self.encoder = net_enc - self.decoder = net_dec - self.crit_dict = nn.ModuleDict() - if loss_scale is None: - self.loss_scale = {"object": 1, "part": 0.5, "scene": 0.25, "material": 1} - else: - self.loss_scale = loss_scale - - # criterion - self.crit_dict["object"] = nn.NLLLoss(ignore_index=0) # ignore background 0 - self.crit_dict["material"] = nn.NLLLoss(ignore_index=0) # ignore background 0 - self.crit_dict["scene"] = nn.NLLLoss(ignore_index=-1) # ignore unlabelled -1 - - # Label data - read from json - self.labeldata = labeldata - object_to_num = {k: v for v, k in enumerate(labeldata['object'])} - part_to_num = {k: v for v, k in enumerate(labeldata['part'])} - self.object_part = {object_to_num[k]: - [part_to_num[p] for p in v] - for k, v in labeldata['object_part'].items()} - self.object_with_part = 
sorted(self.object_part.keys()) - self.decoder.object_part = self.object_part - self.decoder.object_with_part = self.object_with_part - - def forward(self, feed_dict, *, seg_size=None): - if seg_size is None: # training - - if feed_dict['source_idx'] == 0: - output_switch = {"object": True, "part": True, "scene": True, "material": False} - elif feed_dict['source_idx'] == 1: - output_switch = {"object": False, "part": False, "scene": False, "material": True} - else: - raise ValueError - - pred = self.decoder( - self.encoder(feed_dict['img'], return_feature_maps=True), - output_switch=output_switch - ) - - # loss - loss_dict = {} - if pred['object'] is not None: # object - loss_dict['object'] = self.crit_dict['object'](pred['object'], feed_dict['seg_object']) - if pred['part'] is not None: # part - part_loss = 0 - for idx_part, object_label in enumerate(self.object_with_part): - part_loss += self.part_loss( - pred['part'][idx_part], feed_dict['seg_part'], - feed_dict['seg_object'], object_label, feed_dict['valid_part'][:, idx_part]) - loss_dict['part'] = part_loss - if pred['scene'] is not None: # scene - loss_dict['scene'] = self.crit_dict['scene'](pred['scene'], feed_dict['scene_label']) - if pred['material'] is not None: # material - loss_dict['material'] = self.crit_dict['material'](pred['material'], feed_dict['seg_material']) - loss_dict['total'] = sum([loss_dict[k] * self.loss_scale[k] for k in loss_dict.keys()]) - - # metric - metric_dict= {} - if pred['object'] is not None: - metric_dict['object'] = self.pixel_acc( - pred['object'], feed_dict['seg_object'], ignore_index=0) - if pred['material'] is not None: - metric_dict['material'] = self.pixel_acc( - pred['material'], feed_dict['seg_material'], ignore_index=0) - if pred['part'] is not None: - acc_sum, pixel_sum = 0, 0 - for idx_part, object_label in enumerate(self.object_with_part): - acc, pixel = self.part_pixel_acc( - pred['part'][idx_part], feed_dict['seg_part'], feed_dict['seg_object'], - object_label, feed_dict['valid_part'][:, idx_part]) - acc_sum += acc - pixel_sum += pixel - metric_dict['part'] = acc_sum.float() / (pixel_sum.float() + 1e-10) - if pred['scene'] is not None: - metric_dict['scene'] = self.pixel_acc( - pred['scene'], feed_dict['scene_label'], ignore_index=-1) - - return {'metric': metric_dict, 'loss': loss_dict} - else: # inference - output_switch = {"object": True, "part": True, "scene": True, "material": True} - pred = self.decoder(self.encoder(feed_dict['img'], return_feature_maps=True), - output_switch=output_switch, seg_size=seg_size) - return pred - - -def conv3x3(in_planes, out_planes, stride=1, has_bias=False): - "3x3 convolution with padding" - return nn.Conv2d(in_planes, out_planes, kernel_size=3, stride=stride, - padding=1, bias=has_bias) - - -def conv3x3_bn_relu(in_planes, out_planes, stride=1): - return nn.Sequential( - conv3x3(in_planes, out_planes, stride), - SynchronizedBatchNorm2d(out_planes), - nn.ReLU(inplace=True), - ) - - -class ModelBuilder: - def __init__(self): - pass - - # custom weights initialization - @staticmethod - def weights_init(m): - classname = m.__class__.__name__ - if classname.find('Conv') != -1: - nn.init.kaiming_normal_(m.weight.data, nonlinearity='relu') - elif classname.find('BatchNorm') != -1: - m.weight.data.fill_(1.) 
- m.bias.data.fill_(1e-4) - #elif classname.find('Linear') != -1: - # m.weight.data.normal_(0.0, 0.0001) - - def build_encoder(self, arch='resnet50_dilated8', fc_dim=512, weights=''): - pretrained = True if len(weights) == 0 else False - if arch == 'resnet50': - orig_resnet = resnet.__dict__['resnet50'](pretrained=pretrained) - net_encoder = Resnet(orig_resnet) - elif arch == 'resnet101': - orig_resnet = resnet.__dict__['resnet101'](pretrained=pretrained) - net_encoder = Resnet(orig_resnet) - elif arch == 'resnext101': - orig_resnext = resnext.__dict__['resnext101'](pretrained=pretrained) - net_encoder = Resnet(orig_resnext) # we can still use class Resnet - else: - raise Exception('Architecture undefined!') - - # net_encoder.apply(self.weights_init) - if len(weights) > 0: - # print('Loading weights for net_encoder') - net_encoder.load_state_dict( - torch.load(weights, map_location=lambda storage, loc: storage), strict=False) - return net_encoder - - def build_decoder(self, nr_classes, - arch='ppm_bilinear_deepsup', fc_dim=512, - weights='', use_softmax=False): - if arch == 'upernet_lite': - net_decoder = UPerNet( - nr_classes=nr_classes, - fc_dim=fc_dim, - use_softmax=use_softmax, - fpn_dim=256) - elif arch == 'upernet': - net_decoder = UPerNet( - nr_classes=nr_classes, - fc_dim=fc_dim, - use_softmax=use_softmax, - fpn_dim=512) - else: - raise Exception('Architecture undefined!') - - net_decoder.apply(self.weights_init) - if len(weights) > 0: - # print('Loading weights for net_decoder') - net_decoder.load_state_dict( - torch.load(weights, map_location=lambda storage, loc: storage), strict=False) - return net_decoder - - -class Resnet(nn.Module): - def __init__(self, orig_resnet): - super(Resnet, self).__init__() - - # take pretrained resnet, except AvgPool and FC - self.conv1 = orig_resnet.conv1 - self.bn1 = orig_resnet.bn1 - self.relu1 = orig_resnet.relu1 - self.conv2 = orig_resnet.conv2 - self.bn2 = orig_resnet.bn2 - self.relu2 = orig_resnet.relu2 - self.conv3 = orig_resnet.conv3 - self.bn3 = orig_resnet.bn3 - self.relu3 = orig_resnet.relu3 - self.maxpool = orig_resnet.maxpool - self.layer1 = orig_resnet.layer1 - self.layer2 = orig_resnet.layer2 - self.layer3 = orig_resnet.layer3 - self.layer4 = orig_resnet.layer4 - - def forward(self, x, return_feature_maps=False): - conv_out = [] - - x = self.relu1(self.bn1(self.conv1(x))) - x = self.relu2(self.bn2(self.conv2(x))) - x = self.relu3(self.bn3(self.conv3(x))) - x = self.maxpool(x) - - x = self.layer1(x); conv_out.append(x); - x = self.layer2(x); conv_out.append(x); - x = self.layer3(x); conv_out.append(x); - x = self.layer4(x); conv_out.append(x); - - if return_feature_maps: - return conv_out - return [x] - - -# upernet -class UPerNet(nn.Module): - def __init__(self, nr_classes, fc_dim=4096, - use_softmax=False, pool_scales=(1, 2, 3, 6), - fpn_inplanes=(256,512,1024,2048), fpn_dim=256): - # Lazy import so that compilation isn't needed if not being used. 
- from .prroi_pool import PrRoIPool2D - super(UPerNet, self).__init__() - self.use_softmax = use_softmax - - # PPM Module - self.ppm_pooling = [] - self.ppm_conv = [] - - for scale in pool_scales: - # we use the feature map size instead of input image size, so down_scale = 1.0 - self.ppm_pooling.append(PrRoIPool2D(scale, scale, 1.)) - self.ppm_conv.append(nn.Sequential( - nn.Conv2d(fc_dim, 512, kernel_size=1, bias=False), - SynchronizedBatchNorm2d(512), - nn.ReLU(inplace=True) - )) - self.ppm_pooling = nn.ModuleList(self.ppm_pooling) - self.ppm_conv = nn.ModuleList(self.ppm_conv) - self.ppm_last_conv = conv3x3_bn_relu(fc_dim + len(pool_scales)*512, fpn_dim, 1) - - # FPN Module - self.fpn_in = [] - for fpn_inplane in fpn_inplanes[:-1]: # skip the top layer - self.fpn_in.append(nn.Sequential( - nn.Conv2d(fpn_inplane, fpn_dim, kernel_size=1, bias=False), - SynchronizedBatchNorm2d(fpn_dim), - nn.ReLU(inplace=True) - )) - self.fpn_in = nn.ModuleList(self.fpn_in) - - self.fpn_out = [] - for i in range(len(fpn_inplanes) - 1): # skip the top layer - self.fpn_out.append(nn.Sequential( - conv3x3_bn_relu(fpn_dim, fpn_dim, 1), - )) - self.fpn_out = nn.ModuleList(self.fpn_out) - - self.conv_fusion = conv3x3_bn_relu(len(fpn_inplanes) * fpn_dim, fpn_dim, 1) - - # background included. if ignore in loss, output channel 0 will not be trained. - self.nr_scene_class, self.nr_object_class, self.nr_part_class, self.nr_material_class = \ - nr_classes['scene'], nr_classes['object'], nr_classes['part'], nr_classes['material'] - - # input: PPM out, input_dim: fpn_dim - self.scene_head = nn.Sequential( - conv3x3_bn_relu(fpn_dim, fpn_dim, 1), - nn.AdaptiveAvgPool2d(1), - nn.Conv2d(fpn_dim, self.nr_scene_class, kernel_size=1, bias=True) - ) - - # input: Fusion out, input_dim: fpn_dim - self.object_head = nn.Sequential( - conv3x3_bn_relu(fpn_dim, fpn_dim, 1), - nn.Conv2d(fpn_dim, self.nr_object_class, kernel_size=1, bias=True) - ) - - # input: Fusion out, input_dim: fpn_dim - self.part_head = nn.Sequential( - conv3x3_bn_relu(fpn_dim, fpn_dim, 1), - nn.Conv2d(fpn_dim, self.nr_part_class, kernel_size=1, bias=True) - ) - - # input: FPN_2 (P2), input_dim: fpn_dim - self.material_head = nn.Sequential( - conv3x3_bn_relu(fpn_dim, fpn_dim, 1), - nn.Conv2d(fpn_dim, self.nr_material_class, kernel_size=1, bias=True) - ) - - def forward(self, conv_out, output_switch=None, seg_size=None): - - output_dict = {k: None for k in output_switch.keys()} - - conv5 = conv_out[-1] - input_size = conv5.size() - ppm_out = [conv5] - roi = [] # fake rois, just used for pooling - for i in range(input_size[0]): # batch size - roi.append(torch.Tensor([i, 0, 0, input_size[3], input_size[2]]).view(1, -1)) # b, x0, y0, x1, y1 - roi = torch.cat(roi, dim=0).type_as(conv5) - ppm_out = [conv5] - for pool_scale, pool_conv in zip(self.ppm_pooling, self.ppm_conv): - ppm_out.append(pool_conv(F.interpolate( - pool_scale(conv5, roi.detach()), - (input_size[2], input_size[3]), - mode='bilinear', align_corners=False))) - ppm_out = torch.cat(ppm_out, 1) - f = self.ppm_last_conv(ppm_out) - - if output_switch['scene']: # scene - output_dict['scene'] = self.scene_head(f) - - if output_switch['object'] or output_switch['part'] or output_switch['material']: - fpn_feature_list = [f] - for i in reversed(range(len(conv_out) - 1)): - conv_x = conv_out[i] - conv_x = self.fpn_in[i](conv_x) # lateral branch - - f = F.interpolate( - f, size=conv_x.size()[2:], mode='bilinear', align_corners=False) # top-down branch - f = conv_x + f - - fpn_feature_list.append(self.fpn_out[i](f)) 
- fpn_feature_list.reverse() # [P2 - P5] - - # material - if output_switch['material']: - output_dict['material'] = self.material_head(fpn_feature_list[0]) - - if output_switch['object'] or output_switch['part']: - output_size = fpn_feature_list[0].size()[2:] - fusion_list = [fpn_feature_list[0]] - for i in range(1, len(fpn_feature_list)): - fusion_list.append(F.interpolate( - fpn_feature_list[i], - output_size, - mode='bilinear', align_corners=False)) - fusion_out = torch.cat(fusion_list, 1) - x = self.conv_fusion(fusion_out) - - if output_switch['object']: # object - output_dict['object'] = self.object_head(x) - if output_switch['part']: - output_dict['part'] = self.part_head(x) - - if self.use_softmax: # is True during inference - # inference scene - x = output_dict['scene'] - x = x.squeeze(3).squeeze(2) - x = F.softmax(x, dim=1) - output_dict['scene'] = x - - # inference object, material - for k in ['object', 'material']: - x = output_dict[k] - x = F.interpolate(x, size=seg_size, mode='bilinear', align_corners=False) - x = F.softmax(x, dim=1) - output_dict[k] = x - - # inference part - x = output_dict['part'] - x = F.interpolate(x, size=seg_size, mode='bilinear', align_corners=False) - part_pred_list, head = [], 0 - for idx_part, object_label in enumerate(self.object_with_part): - n_part = len(self.object_part[object_label]) - _x = F.interpolate(x[:, head: head + n_part], size=seg_size, mode='bilinear', align_corners=False) - _x = F.softmax(_x, dim=1) - part_pred_list.append(_x) - head += n_part - output_dict['part'] = part_pred_list - - else: # Training - # object, scene, material - for k in ['object', 'scene', 'material']: - if output_dict[k] is None: - continue - x = output_dict[k] - x = F.log_softmax(x, dim=1) - if k == "scene": # for scene - x = x.squeeze(3).squeeze(2) - output_dict[k] = x - if output_dict['part'] is not None: - part_pred_list, head = [], 0 - for idx_part, object_label in enumerate(self.object_with_part): - n_part = len(self.object_part[object_label]) - x = output_dict['part'][:, head: head + n_part] - x = F.log_softmax(x, dim=1) - part_pred_list.append(x) - head += n_part - output_dict['part'] = part_pred_list - - return output_dict diff --git a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/rich/spinner.py b/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/rich/spinner.py deleted file mode 100644 index 91ea630e10f893bf5d6b17fcd9a1fedcecee6f02..0000000000000000000000000000000000000000 --- a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/rich/spinner.py +++ /dev/null @@ -1,137 +0,0 @@ -from typing import cast, List, Optional, TYPE_CHECKING, Union - -from ._spinners import SPINNERS -from .measure import Measurement -from .table import Table -from .text import Text - -if TYPE_CHECKING: - from .console import Console, ConsoleOptions, RenderResult, RenderableType - from .style import StyleType - - -class Spinner: - """A spinner animation. - - Args: - name (str): Name of spinner (run python -m rich.spinner). - text (RenderableType, optional): A renderable to display at the right of the spinner (str or Text typically). Defaults to "". - style (StyleType, optional): Style for spinner animation. Defaults to None. - speed (float, optional): Speed factor for animation. Defaults to 1.0. - - Raises: - KeyError: If name isn't one of the supported spinner animations. 
- """ - - def __init__( - self, - name: str, - text: "RenderableType" = "", - *, - style: Optional["StyleType"] = None, - speed: float = 1.0, - ) -> None: - try: - spinner = SPINNERS[name] - except KeyError: - raise KeyError(f"no spinner called {name!r}") - self.text: "Union[RenderableType, Text]" = ( - Text.from_markup(text) if isinstance(text, str) else text - ) - self.frames = cast(List[str], spinner["frames"])[:] - self.interval = cast(float, spinner["interval"]) - self.start_time: Optional[float] = None - self.style = style - self.speed = speed - self.frame_no_offset: float = 0.0 - self._update_speed = 0.0 - - def __rich_console__( - self, console: "Console", options: "ConsoleOptions" - ) -> "RenderResult": - yield self.render(console.get_time()) - - def __rich_measure__( - self, console: "Console", options: "ConsoleOptions" - ) -> Measurement: - text = self.render(0) - return Measurement.get(console, options, text) - - def render(self, time: float) -> "RenderableType": - """Render the spinner for a given time. - - Args: - time (float): Time in seconds. - - Returns: - RenderableType: A renderable containing animation frame. - """ - if self.start_time is None: - self.start_time = time - - frame_no = ((time - self.start_time) * self.speed) / ( - self.interval / 1000.0 - ) + self.frame_no_offset - frame = Text( - self.frames[int(frame_no) % len(self.frames)], style=self.style or "" - ) - - if self._update_speed: - self.frame_no_offset = frame_no - self.start_time = time - self.speed = self._update_speed - self._update_speed = 0.0 - - if not self.text: - return frame - elif isinstance(self.text, (str, Text)): - return Text.assemble(frame, " ", self.text) - else: - table = Table.grid(padding=1) - table.add_row(frame, self.text) - return table - - def update( - self, - *, - text: "RenderableType" = "", - style: Optional["StyleType"] = None, - speed: Optional[float] = None, - ) -> None: - """Updates attributes of a spinner after it has been started. - - Args: - text (RenderableType, optional): A renderable to display at the right of the spinner (str or Text typically). Defaults to "". - style (StyleType, optional): Style for spinner animation. Defaults to None. - speed (float, optional): Speed factor for animation. Defaults to None. - """ - if text: - self.text = Text.from_markup(text) if isinstance(text, str) else text - if style: - self.style = style - if speed: - self._update_speed = speed - - -if __name__ == "__main__": # pragma: no cover - from time import sleep - - from .columns import Columns - from .panel import Panel - from .live import Live - - all_spinners = Columns( - [ - Spinner(spinner_name, text=Text(repr(spinner_name), style="green")) - for spinner_name in sorted(SPINNERS.keys()) - ], - column_first=True, - expand=True, - ) - - with Live( - Panel(all_spinners, title="Spinners", border_style="blue"), - refresh_per_second=20, - ) as live: - while True: - sleep(0.1) diff --git a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/_distutils/msvc9compiler.py b/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/_distutils/msvc9compiler.py deleted file mode 100644 index f9f9f2d844e3208b9133d9e775ac0775d8ff47ab..0000000000000000000000000000000000000000 --- a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/_distutils/msvc9compiler.py +++ /dev/null @@ -1,829 +0,0 @@ -"""distutils.msvc9compiler - -Contains MSVCCompiler, an implementation of the abstract CCompiler class -for the Microsoft Visual Studio 2008. 
- -The module is compatible with VS 2005 and VS 2008. You can find legacy support -for older versions of VS in distutils.msvccompiler. -""" - -# Written by Perry Stoll -# hacked by Robin Becker and Thomas Heller to do a better job of -# finding DevStudio (through the registry) -# ported to VS2005 and VS 2008 by Christian Heimes - -import os -import subprocess -import sys -import re -import warnings - -from .errors import ( - DistutilsExecError, - DistutilsPlatformError, - CompileError, - LibError, - LinkError, -) -from .ccompiler import CCompiler, gen_lib_options -from ._log import log -from .util import get_platform - -import winreg - -warnings.warn( - "msvc9compiler is deprecated and slated to be removed " - "in the future. Please discontinue use or file an issue " - "with pypa/distutils describing your use case.", - DeprecationWarning, -) - -RegOpenKeyEx = winreg.OpenKeyEx -RegEnumKey = winreg.EnumKey -RegEnumValue = winreg.EnumValue -RegError = winreg.error - -HKEYS = ( - winreg.HKEY_USERS, - winreg.HKEY_CURRENT_USER, - winreg.HKEY_LOCAL_MACHINE, - winreg.HKEY_CLASSES_ROOT, -) - -NATIVE_WIN64 = sys.platform == 'win32' and sys.maxsize > 2**32 -if NATIVE_WIN64: - # Visual C++ is a 32-bit application, so we need to look in - # the corresponding registry branch, if we're running a - # 64-bit Python on Win64 - VS_BASE = r"Software\Wow6432Node\Microsoft\VisualStudio\%0.1f" - WINSDK_BASE = r"Software\Wow6432Node\Microsoft\Microsoft SDKs\Windows" - NET_BASE = r"Software\Wow6432Node\Microsoft\.NETFramework" -else: - VS_BASE = r"Software\Microsoft\VisualStudio\%0.1f" - WINSDK_BASE = r"Software\Microsoft\Microsoft SDKs\Windows" - NET_BASE = r"Software\Microsoft\.NETFramework" - -# A map keyed by get_platform() return values to values accepted by -# 'vcvarsall.bat'. Note a cross-compile may combine these (eg, 'x86_amd64' is -# the param to cross-compile on x86 targeting amd64.) -PLAT_TO_VCVARS = { - 'win32': 'x86', - 'win-amd64': 'amd64', -} - - -class Reg: - """Helper class to read values from the registry""" - - def get_value(cls, path, key): - for base in HKEYS: - d = cls.read_values(base, path) - if d and key in d: - return d[key] - raise KeyError(key) - - get_value = classmethod(get_value) - - def read_keys(cls, base, key): - """Return list of registry keys.""" - try: - handle = RegOpenKeyEx(base, key) - except RegError: - return None - L = [] - i = 0 - while True: - try: - k = RegEnumKey(handle, i) - except RegError: - break - L.append(k) - i += 1 - return L - - read_keys = classmethod(read_keys) - - def read_values(cls, base, key): - """Return dict of registry keys and values. - - All names are converted to lowercase. 
- """ - try: - handle = RegOpenKeyEx(base, key) - except RegError: - return None - d = {} - i = 0 - while True: - try: - name, value, type = RegEnumValue(handle, i) - except RegError: - break - name = name.lower() - d[cls.convert_mbcs(name)] = cls.convert_mbcs(value) - i += 1 - return d - - read_values = classmethod(read_values) - - def convert_mbcs(s): - dec = getattr(s, "decode", None) - if dec is not None: - try: - s = dec("mbcs") - except UnicodeError: - pass - return s - - convert_mbcs = staticmethod(convert_mbcs) - - -class MacroExpander: - def __init__(self, version): - self.macros = {} - self.vsbase = VS_BASE % version - self.load_macros(version) - - def set_macro(self, macro, path, key): - self.macros["$(%s)" % macro] = Reg.get_value(path, key) - - def load_macros(self, version): - self.set_macro("VCInstallDir", self.vsbase + r"\Setup\VC", "productdir") - self.set_macro("VSInstallDir", self.vsbase + r"\Setup\VS", "productdir") - self.set_macro("FrameworkDir", NET_BASE, "installroot") - try: - if version >= 8.0: - self.set_macro("FrameworkSDKDir", NET_BASE, "sdkinstallrootv2.0") - else: - raise KeyError("sdkinstallrootv2.0") - except KeyError: - raise DistutilsPlatformError( - """Python was built with Visual Studio 2008; -extensions must be built with a compiler than can generate compatible binaries. -Visual Studio 2008 was not found on this system. If you have Cygwin installed, -you can try compiling with MingW32, by passing "-c mingw32" to setup.py.""" - ) - - if version >= 9.0: - self.set_macro("FrameworkVersion", self.vsbase, "clr version") - self.set_macro("WindowsSdkDir", WINSDK_BASE, "currentinstallfolder") - else: - p = r"Software\Microsoft\NET Framework Setup\Product" - for base in HKEYS: - try: - h = RegOpenKeyEx(base, p) - except RegError: - continue - key = RegEnumKey(h, 0) - d = Reg.get_value(base, r"{}\{}".format(p, key)) - self.macros["$(FrameworkVersion)"] = d["version"] - - def sub(self, s): - for k, v in self.macros.items(): - s = s.replace(k, v) - return s - - -def get_build_version(): - """Return the version of MSVC that was used to build Python. - - For Python 2.3 and up, the version number is included in - sys.version. For earlier versions, assume the compiler is MSVC 6. - """ - prefix = "MSC v." - i = sys.version.find(prefix) - if i == -1: - return 6 - i = i + len(prefix) - s, rest = sys.version[i:].split(" ", 1) - majorVersion = int(s[:-2]) - 6 - if majorVersion >= 13: - # v13 was skipped and should be v14 - majorVersion += 1 - minorVersion = int(s[2:3]) / 10.0 - # I don't think paths are affected by minor version in version 6 - if majorVersion == 6: - minorVersion = 0 - if majorVersion >= 6: - return majorVersion + minorVersion - # else we don't know what version of the compiler this is - return None - - -def normalize_and_reduce_paths(paths): - """Return a list of normalized paths with duplicates removed. - - The current order of paths is maintained. - """ - # Paths are normalized so things like: /a and /a/ aren't both preserved. - reduced_paths = [] - for p in paths: - np = os.path.normpath(p) - # XXX(nnorwitz): O(n**2), if reduced_paths gets long perhaps use a set. 
- if np not in reduced_paths: - reduced_paths.append(np) - return reduced_paths - - -def removeDuplicates(variable): - """Remove duplicate values of an environment variable.""" - oldList = variable.split(os.pathsep) - newList = [] - for i in oldList: - if i not in newList: - newList.append(i) - newVariable = os.pathsep.join(newList) - return newVariable - - -def find_vcvarsall(version): - """Find the vcvarsall.bat file - - At first it tries to find the productdir of VS 2008 in the registry. If - that fails it falls back to the VS90COMNTOOLS env var. - """ - vsbase = VS_BASE % version - try: - productdir = Reg.get_value(r"%s\Setup\VC" % vsbase, "productdir") - except KeyError: - log.debug("Unable to find productdir in registry") - productdir = None - - if not productdir or not os.path.isdir(productdir): - toolskey = "VS%0.f0COMNTOOLS" % version - toolsdir = os.environ.get(toolskey, None) - - if toolsdir and os.path.isdir(toolsdir): - productdir = os.path.join(toolsdir, os.pardir, os.pardir, "VC") - productdir = os.path.abspath(productdir) - if not os.path.isdir(productdir): - log.debug("%s is not a valid directory" % productdir) - return None - else: - log.debug("Env var %s is not set or invalid" % toolskey) - if not productdir: - log.debug("No productdir found") - return None - vcvarsall = os.path.join(productdir, "vcvarsall.bat") - if os.path.isfile(vcvarsall): - return vcvarsall - log.debug("Unable to find vcvarsall.bat") - return None - - -def query_vcvarsall(version, arch="x86"): - """Launch vcvarsall.bat and read the settings from its environment""" - vcvarsall = find_vcvarsall(version) - interesting = {"include", "lib", "libpath", "path"} - result = {} - - if vcvarsall is None: - raise DistutilsPlatformError("Unable to find vcvarsall.bat") - log.debug("Calling 'vcvarsall.bat %s' (version=%s)", arch, version) - popen = subprocess.Popen( - '"{}" {} & set'.format(vcvarsall, arch), - stdout=subprocess.PIPE, - stderr=subprocess.PIPE, - ) - try: - stdout, stderr = popen.communicate() - if popen.wait() != 0: - raise DistutilsPlatformError(stderr.decode("mbcs")) - - stdout = stdout.decode("mbcs") - for line in stdout.split("\n"): - line = Reg.convert_mbcs(line) - if '=' not in line: - continue - line = line.strip() - key, value = line.split('=', 1) - key = key.lower() - if key in interesting: - if value.endswith(os.pathsep): - value = value[:-1] - result[key] = removeDuplicates(value) - - finally: - popen.stdout.close() - popen.stderr.close() - - if len(result) != len(interesting): - raise ValueError(str(list(result.keys()))) - - return result - - -# More globals -VERSION = get_build_version() -# MACROS = MacroExpander(VERSION) - - -class MSVCCompiler(CCompiler): - """Concrete class that implements an interface to Microsoft Visual C++, - as defined by the CCompiler abstract class.""" - - compiler_type = 'msvc' - - # Just set this so CCompiler's constructor doesn't barf. We currently - # don't use the 'set_executables()' bureaucracy provided by CCompiler, - # as it really isn't necessary for this sort of single-compiler class. - # Would be nice to have a consistent interface with UnixCCompiler, - # though, so it's worth thinking about. - executables = {} - - # Private class data (need to distinguish C from C++ source for compiler) - _c_extensions = ['.c'] - _cpp_extensions = ['.cc', '.cpp', '.cxx'] - _rc_extensions = ['.rc'] - _mc_extensions = ['.mc'] - - # Needed for the filename generation methods provided by the - # base class, CCompiler. 
- src_extensions = _c_extensions + _cpp_extensions + _rc_extensions + _mc_extensions - res_extension = '.res' - obj_extension = '.obj' - static_lib_extension = '.lib' - shared_lib_extension = '.dll' - static_lib_format = shared_lib_format = '%s%s' - exe_extension = '.exe' - - def __init__(self, verbose=0, dry_run=0, force=0): - super().__init__(verbose, dry_run, force) - self.__version = VERSION - self.__root = r"Software\Microsoft\VisualStudio" - # self.__macros = MACROS - self.__paths = [] - # target platform (.plat_name is consistent with 'bdist') - self.plat_name = None - self.__arch = None # deprecated name - self.initialized = False - - def initialize(self, plat_name=None): # noqa: C901 - # multi-init means we would need to check platform same each time... - assert not self.initialized, "don't init multiple times" - if self.__version < 8.0: - raise DistutilsPlatformError( - "VC %0.1f is not supported by this module" % self.__version - ) - if plat_name is None: - plat_name = get_platform() - # sanity check for platforms to prevent obscure errors later. - ok_plats = 'win32', 'win-amd64' - if plat_name not in ok_plats: - raise DistutilsPlatformError( - "--plat-name must be one of {}".format(ok_plats) - ) - - if ( - "DISTUTILS_USE_SDK" in os.environ - and "MSSdk" in os.environ - and self.find_exe("cl.exe") - ): - # Assume that the SDK set up everything alright; don't try to be - # smarter - self.cc = "cl.exe" - self.linker = "link.exe" - self.lib = "lib.exe" - self.rc = "rc.exe" - self.mc = "mc.exe" - else: - # On x86, 'vcvars32.bat amd64' creates an env that doesn't work; - # to cross compile, you use 'x86_amd64'. - # On AMD64, 'vcvars32.bat amd64' is a native build env; to cross - # compile use 'x86' (ie, it runs the x86 compiler directly) - if plat_name in (get_platform(), 'win32'): - # native build or cross-compile to win32 - plat_spec = PLAT_TO_VCVARS[plat_name] - else: - # cross compile from win32 -> some 64bit - plat_spec = ( - PLAT_TO_VCVARS[get_platform()] + '_' + PLAT_TO_VCVARS[plat_name] - ) - - vc_env = query_vcvarsall(VERSION, plat_spec) - - self.__paths = vc_env['path'].split(os.pathsep) - os.environ['lib'] = vc_env['lib'] - os.environ['include'] = vc_env['include'] - - if len(self.__paths) == 0: - raise DistutilsPlatformError( - "Python was built with %s, " - "and extensions need to be built with the same " - "version of the compiler, but it isn't installed." 
% self.__product - ) - - self.cc = self.find_exe("cl.exe") - self.linker = self.find_exe("link.exe") - self.lib = self.find_exe("lib.exe") - self.rc = self.find_exe("rc.exe") # resource compiler - self.mc = self.find_exe("mc.exe") # message compiler - # self.set_path_env_var('lib') - # self.set_path_env_var('include') - - # extend the MSVC path with the current path - try: - for p in os.environ['path'].split(';'): - self.__paths.append(p) - except KeyError: - pass - self.__paths = normalize_and_reduce_paths(self.__paths) - os.environ['path'] = ";".join(self.__paths) - - self.preprocess_options = None - if self.__arch == "x86": - self.compile_options = ['/nologo', '/O2', '/MD', '/W3', '/DNDEBUG'] - self.compile_options_debug = [ - '/nologo', - '/Od', - '/MDd', - '/W3', - '/Z7', - '/D_DEBUG', - ] - else: - # Win64 - self.compile_options = ['/nologo', '/O2', '/MD', '/W3', '/GS-', '/DNDEBUG'] - self.compile_options_debug = [ - '/nologo', - '/Od', - '/MDd', - '/W3', - '/GS-', - '/Z7', - '/D_DEBUG', - ] - - self.ldflags_shared = ['/DLL', '/nologo', '/INCREMENTAL:NO'] - if self.__version >= 7: - self.ldflags_shared_debug = ['/DLL', '/nologo', '/INCREMENTAL:no', '/DEBUG'] - self.ldflags_static = ['/nologo'] - - self.initialized = True - - # -- Worker methods ------------------------------------------------ - - def object_filenames(self, source_filenames, strip_dir=0, output_dir=''): - # Copied from ccompiler.py, extended to return .res as 'object'-file - # for .rc input file - if output_dir is None: - output_dir = '' - obj_names = [] - for src_name in source_filenames: - (base, ext) = os.path.splitext(src_name) - base = os.path.splitdrive(base)[1] # Chop off the drive - base = base[os.path.isabs(base) :] # If abs, chop off leading / - if ext not in self.src_extensions: - # Better to raise an exception instead of silently continuing - # and later complain about sources and targets having - # different lengths - raise CompileError("Don't know how to compile %s" % src_name) - if strip_dir: - base = os.path.basename(base) - if ext in self._rc_extensions: - obj_names.append(os.path.join(output_dir, base + self.res_extension)) - elif ext in self._mc_extensions: - obj_names.append(os.path.join(output_dir, base + self.res_extension)) - else: - obj_names.append(os.path.join(output_dir, base + self.obj_extension)) - return obj_names - - def compile( # noqa: C901 - self, - sources, - output_dir=None, - macros=None, - include_dirs=None, - debug=0, - extra_preargs=None, - extra_postargs=None, - depends=None, - ): - if not self.initialized: - self.initialize() - compile_info = self._setup_compile( - output_dir, macros, include_dirs, sources, depends, extra_postargs - ) - macros, objects, extra_postargs, pp_opts, build = compile_info - - compile_opts = extra_preargs or [] - compile_opts.append('/c') - if debug: - compile_opts.extend(self.compile_options_debug) - else: - compile_opts.extend(self.compile_options) - - for obj in objects: - try: - src, ext = build[obj] - except KeyError: - continue - if debug: - # pass the full pathname to MSVC in debug mode, - # this allows the debugger to find the source file - # without asking the user to browse for it - src = os.path.abspath(src) - - if ext in self._c_extensions: - input_opt = "/Tc" + src - elif ext in self._cpp_extensions: - input_opt = "/Tp" + src - elif ext in self._rc_extensions: - # compile .RC to .RES file - input_opt = src - output_opt = "/fo" + obj - try: - self.spawn([self.rc] + pp_opts + [output_opt] + [input_opt]) - except DistutilsExecError as msg: - 
raise CompileError(msg) - continue - elif ext in self._mc_extensions: - # Compile .MC to .RC file to .RES file. - # * '-h dir' specifies the directory for the - # generated include file - # * '-r dir' specifies the target directory of the - # generated RC file and the binary message resource - # it includes - # - # For now (since there are no options to change this), - # we use the source-directory for the include file and - # the build directory for the RC file and message - # resources. This works at least for win32all. - h_dir = os.path.dirname(src) - rc_dir = os.path.dirname(obj) - try: - # first compile .MC to .RC and .H file - self.spawn([self.mc] + ['-h', h_dir, '-r', rc_dir] + [src]) - base, _ = os.path.splitext(os.path.basename(src)) - rc_file = os.path.join(rc_dir, base + '.rc') - # then compile .RC to .RES file - self.spawn([self.rc] + ["/fo" + obj] + [rc_file]) - - except DistutilsExecError as msg: - raise CompileError(msg) - continue - else: - # how to handle this file? - raise CompileError( - "Don't know how to compile {} to {}".format(src, obj) - ) - - output_opt = "/Fo" + obj - try: - self.spawn( - [self.cc] - + compile_opts - + pp_opts - + [input_opt, output_opt] - + extra_postargs - ) - except DistutilsExecError as msg: - raise CompileError(msg) - - return objects - - def create_static_lib( - self, objects, output_libname, output_dir=None, debug=0, target_lang=None - ): - if not self.initialized: - self.initialize() - (objects, output_dir) = self._fix_object_args(objects, output_dir) - output_filename = self.library_filename(output_libname, output_dir=output_dir) - - if self._need_link(objects, output_filename): - lib_args = objects + ['/OUT:' + output_filename] - if debug: - pass # XXX what goes here? - try: - self.spawn([self.lib] + lib_args) - except DistutilsExecError as msg: - raise LibError(msg) - else: - log.debug("skipping %s (up-to-date)", output_filename) - - def link( # noqa: C901 - self, - target_desc, - objects, - output_filename, - output_dir=None, - libraries=None, - library_dirs=None, - runtime_library_dirs=None, - export_symbols=None, - debug=0, - extra_preargs=None, - extra_postargs=None, - build_temp=None, - target_lang=None, - ): - if not self.initialized: - self.initialize() - (objects, output_dir) = self._fix_object_args(objects, output_dir) - fixed_args = self._fix_lib_args(libraries, library_dirs, runtime_library_dirs) - (libraries, library_dirs, runtime_library_dirs) = fixed_args - - if runtime_library_dirs: - self.warn( - "I don't know what to do with 'runtime_library_dirs': " - + str(runtime_library_dirs) - ) - - lib_opts = gen_lib_options(self, library_dirs, runtime_library_dirs, libraries) - if output_dir is not None: - output_filename = os.path.join(output_dir, output_filename) - - if self._need_link(objects, output_filename): - if target_desc == CCompiler.EXECUTABLE: - if debug: - ldflags = self.ldflags_shared_debug[1:] - else: - ldflags = self.ldflags_shared[1:] - else: - if debug: - ldflags = self.ldflags_shared_debug - else: - ldflags = self.ldflags_shared - - export_opts = [] - for sym in export_symbols or []: - export_opts.append("/EXPORT:" + sym) - - ld_args = ( - ldflags + lib_opts + export_opts + objects + ['/OUT:' + output_filename] - ) - - # The MSVC linker generates .lib and .exp files, which cannot be - # suppressed by any linker switches. The .lib files may even be - # needed! Make sure they are generated in the temporary build - # directory. 
Since they have different names for debug and release - # builds, they can go into the same directory. - build_temp = os.path.dirname(objects[0]) - if export_symbols is not None: - (dll_name, dll_ext) = os.path.splitext( - os.path.basename(output_filename) - ) - implib_file = os.path.join(build_temp, self.library_filename(dll_name)) - ld_args.append('/IMPLIB:' + implib_file) - - self.manifest_setup_ldargs(output_filename, build_temp, ld_args) - - if extra_preargs: - ld_args[:0] = extra_preargs - if extra_postargs: - ld_args.extend(extra_postargs) - - self.mkpath(os.path.dirname(output_filename)) - try: - self.spawn([self.linker] + ld_args) - except DistutilsExecError as msg: - raise LinkError(msg) - - # embed the manifest - # XXX - this is somewhat fragile - if mt.exe fails, distutils - # will still consider the DLL up-to-date, but it will not have a - # manifest. Maybe we should link to a temp file? OTOH, that - # implies a build environment error that shouldn't go undetected. - mfinfo = self.manifest_get_embed_info(target_desc, ld_args) - if mfinfo is not None: - mffilename, mfid = mfinfo - out_arg = '-outputresource:{};{}'.format(output_filename, mfid) - try: - self.spawn(['mt.exe', '-nologo', '-manifest', mffilename, out_arg]) - except DistutilsExecError as msg: - raise LinkError(msg) - else: - log.debug("skipping %s (up-to-date)", output_filename) - - def manifest_setup_ldargs(self, output_filename, build_temp, ld_args): - # If we need a manifest at all, an embedded manifest is recommended. - # See MSDN article titled - # "How to: Embed a Manifest Inside a C/C++ Application" - # (currently at http://msdn2.microsoft.com/en-us/library/ms235591(VS.80).aspx) - # Ask the linker to generate the manifest in the temp dir, so - # we can check it, and possibly embed it, later. - temp_manifest = os.path.join( - build_temp, os.path.basename(output_filename) + ".manifest" - ) - ld_args.append('/MANIFESTFILE:' + temp_manifest) - - def manifest_get_embed_info(self, target_desc, ld_args): - # If a manifest should be embedded, return a tuple of - # (manifest_filename, resource_id). Returns None if no manifest - # should be embedded. See http://bugs.python.org/issue7833 for why - # we want to avoid any manifest for extension modules if we can) - for arg in ld_args: - if arg.startswith("/MANIFESTFILE:"): - temp_manifest = arg.split(":", 1)[1] - break - else: - # no /MANIFESTFILE so nothing to do. - return None - if target_desc == CCompiler.EXECUTABLE: - # by default, executables always get the manifest with the - # CRT referenced. - mfid = 1 - else: - # Extension modules try and avoid any manifest if possible. - mfid = 2 - temp_manifest = self._remove_visual_c_ref(temp_manifest) - if temp_manifest is None: - return None - return temp_manifest, mfid - - def _remove_visual_c_ref(self, manifest_file): - try: - # Remove references to the Visual C runtime, so they will - # fall through to the Visual C dependency of Python.exe. - # This way, when installed for a restricted user (e.g. - # runtimes are not in WinSxS folder, but in Python's own - # folder), the runtimes do not need to be in every folder - # with .pyd's. - # Returns either the filename of the modified manifest or - # None if no manifest should be embedded. 
-            manifest_f = open(manifest_file)
-            try:
-                manifest_buf = manifest_f.read()
-            finally:
-                manifest_f.close()
-            pattern = re.compile(
-                r"""<assemblyIdentity.*?name=("|')Microsoft\."""
-                r"""VC\d{2}\.CRT("|').*?(/>|</assemblyIdentity>)""",
-                re.DOTALL,
-            )
-            manifest_buf = re.sub(pattern, "", manifest_buf)
-            pattern = r"<dependentAssembly>\s*</dependentAssembly>"
-            manifest_buf = re.sub(pattern, "", manifest_buf)
-            # Now see if any other assemblies are referenced - if not, we
-            # don't want a manifest embedded.
-            pattern = re.compile(
-                r"""<assemblyIdentity.*?name=(?:"|')(.+?)(?:"|')"""
-                r""".*?(?:/>|</assemblyIdentity>)""",
-                re.DOTALL,
-            )
-            if re.search(pattern, manifest_buf) is None:
-                return None
-
-            manifest_f = open(manifest_file, 'w')
-            try:
-                manifest_f.write(manifest_buf)
-                return manifest_file
-            finally:
-                manifest_f.close()
-        except OSError:
-            pass
-
-    # -- Miscellaneous methods -----------------------------------------
-    # These are all used by the 'gen_lib_options() function, in
-    # ccompiler.py.
-
-    def library_dir_option(self, dir):
-        return "/LIBPATH:" + dir
-
-    def runtime_library_dir_option(self, dir):
-        raise DistutilsPlatformError(
-            "don't know how to set runtime library search path for MSVC++"
-        )
-
-    def library_option(self, lib):
-        return self.library_filename(lib)
-
-    def find_library_file(self, dirs, lib, debug=0):
-        # Prefer a debugging library if found (and requested), but deal
-        # with it if we don't have one.
-        if debug:
-            try_names = [lib + "_d", lib]
-        else:
-            try_names = [lib]
-        for dir in dirs:
-            for name in try_names:
-                libfile = os.path.join(dir, self.library_filename(name))
-                if os.path.exists(libfile):
-                    return libfile
-        else:
-            # Oops, didn't find it in *any* of 'dirs'
-            return None
-
-    # Helper methods for using the MSVC registry settings
-
-    def find_exe(self, exe):
-        """Return path to an MSVC executable program.
-
-        Tries to find the program in several places: first, one of the
-        MSVC program search paths from the registry; next, the directories
-        in the PATH environment variable. If any of those work, return an
-        absolute path that is known to exist. If none of them work, just
-        return the original program name, 'exe'.
-        """
-        for p in self.__paths:
-            fn = os.path.join(os.path.abspath(p), exe)
-            if os.path.isfile(fn):
-                return fn
-
-        # didn't find it; try existing path
-        for p in os.environ['Path'].split(';'):
-            fn = os.path.join(os.path.abspath(p), exe)
-            if os.path.isfile(fn):
-                return fn
-
-        return exe
diff --git a/spaces/porntech/sex-position-video/README.md b/spaces/porntech/sex-position-video/README.md
deleted file mode 100644
index 0575987477b10ff43060b161c3f2571b5dd58a0f..0000000000000000000000000000000000000000
--- a/spaces/porntech/sex-position-video/README.md
+++ /dev/null
@@ -1,26 +0,0 @@
----
-title: Sex Position Video
-emoji: 📚
-colorFrom: green
-colorTo: red
-sdk: gradio
-sdk_version: 3.34.0
-app_file: app.py
-pinned: false
-license: mit
----
-# Classify sex positions for a video clip
-
-WARNING! Leave now if you are less than 18 years old!
-
-* total 10 classes are supported: ["Missionary", "Cowgirl", "Doggystyle", "Side Fucking", "Blowjob", "Titjob", "Pussy Eating", "Fingering", "Handjob", "Other"]
-* "Other" means other classes such as SFW like kissing or talking or NSFW sucking tits.
-* Input video should be sexy or NSFW ones, otherwise the prediction is undefined.
-* Input should be .mp4 with less than 10 seconds, prediction for longer video clips might be ok but not guaranteed. Currently, gif file can not be input, open an issue if it is a problem.
-* The top-1 accuracy is around 90% for a validation set of ~7000 video clips.
-* More classes such as footjob and higher accuracy are under development.
-* A sample SFW [video](https://www.youtube.com/watch?v=giy37cf1msI) is in youtube and you can download this video by pasting the youtube url into this [website](https://en.savefrom.net/391GA/), you can experience this demo with this video. It is predicted corrected as "Other". -* I won't provide any NSFW samples, find by yourself. -* This repo is for video classification, for sex position classification for image, see [this repo](https://huggingface.co/porntech/sex-position) of mine. - -I'm developping more AI models for sexy/NSFW classification, if you are interested, feel free to contact porntech@126.com \ No newline at end of file diff --git a/spaces/princessty/stabilityai-stable-diffusion-xl-base-1.0/app.py b/spaces/princessty/stabilityai-stable-diffusion-xl-base-1.0/app.py deleted file mode 100644 index 9520517f687cf7229ddfab9d8c5f8af7f76b0bd4..0000000000000000000000000000000000000000 --- a/spaces/princessty/stabilityai-stable-diffusion-xl-base-1.0/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/stabilityai/stable-diffusion-xl-base-1.0").launch() \ No newline at end of file diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/anyio/to_thread.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/anyio/to_thread.py deleted file mode 100644 index 9315d1ecf16eee45cd129ce17e48041a7f82348a..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/anyio/to_thread.py +++ /dev/null @@ -1,67 +0,0 @@ -from __future__ import annotations - -from typing import Callable, TypeVar -from warnings import warn - -from ._core._eventloop import get_asynclib -from .abc import CapacityLimiter - -T_Retval = TypeVar("T_Retval") - - -async def run_sync( - func: Callable[..., T_Retval], - *args: object, - cancellable: bool = False, - limiter: CapacityLimiter | None = None, -) -> T_Retval: - """ - Call the given function with the given arguments in a worker thread. - - If the ``cancellable`` option is enabled and the task waiting for its completion is cancelled, - the thread will still run its course but its return value (or any raised exception) will be - ignored. - - :param func: a callable - :param args: positional arguments for the callable - :param cancellable: ``True`` to allow cancellation of the operation - :param limiter: capacity limiter to use to limit the total amount of threads running - (if omitted, the default limiter is used) - :return: an awaitable that yields the return value of the function. - - """ - return await get_asynclib().run_sync_in_worker_thread( - func, *args, cancellable=cancellable, limiter=limiter - ) - - -async def run_sync_in_worker_thread( - func: Callable[..., T_Retval], - *args: object, - cancellable: bool = False, - limiter: CapacityLimiter | None = None, -) -> T_Retval: - warn( - "run_sync_in_worker_thread() has been deprecated, use anyio.to_thread.run_sync() instead", - DeprecationWarning, - ) - return await run_sync(func, *args, cancellable=cancellable, limiter=limiter) - - -def current_default_thread_limiter() -> CapacityLimiter: - """ - Return the capacity limiter that is used by default to limit the number of concurrent threads. 
- - :return: a capacity limiter object - - """ - return get_asynclib().current_default_thread_limiter() - - -def current_default_worker_thread_limiter() -> CapacityLimiter: - warn( - "current_default_worker_thread_limiter() has been deprecated, " - "use anyio.to_thread.current_default_thread_limiter() instead", - DeprecationWarning, - ) - return current_default_thread_limiter() diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/click/globals.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/click/globals.py deleted file mode 100644 index 480058f10dd6a8205d1bff0b94de7ae347a7629a..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/click/globals.py +++ /dev/null @@ -1,68 +0,0 @@ -import typing as t -from threading import local - -if t.TYPE_CHECKING: - import typing_extensions as te - from .core import Context - -_local = local() - - -@t.overload -def get_current_context(silent: "te.Literal[False]" = False) -> "Context": - ... - - -@t.overload -def get_current_context(silent: bool = ...) -> t.Optional["Context"]: - ... - - -def get_current_context(silent: bool = False) -> t.Optional["Context"]: - """Returns the current click context. This can be used as a way to - access the current context object from anywhere. This is a more implicit - alternative to the :func:`pass_context` decorator. This function is - primarily useful for helpers such as :func:`echo` which might be - interested in changing its behavior based on the current context. - - To push the current context, :meth:`Context.scope` can be used. - - .. versionadded:: 5.0 - - :param silent: if set to `True` the return value is `None` if no context - is available. The default behavior is to raise a - :exc:`RuntimeError`. - """ - try: - return t.cast("Context", _local.stack[-1]) - except (AttributeError, IndexError) as e: - if not silent: - raise RuntimeError("There is no active click context.") from e - - return None - - -def push_context(ctx: "Context") -> None: - """Pushes a new context to the current stack.""" - _local.__dict__.setdefault("stack", []).append(ctx) - - -def pop_context() -> None: - """Removes the top level from the stack.""" - _local.stack.pop() - - -def resolve_color_default(color: t.Optional[bool] = None) -> t.Optional[bool]: - """Internal helper to get the default value of the color flag. If a - value is passed it's returned unchanged, otherwise it's looked up from - the current context. - """ - if color is not None: - return color - - ctx = get_current_context(silent=True) - - if ctx is not None: - return ctx.color - - return None diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/merge/tables.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/merge/tables.py deleted file mode 100644 index 394541b8a4d3355784ef84a04aaaa7501e4dc201..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fontTools/merge/tables.py +++ /dev/null @@ -1,338 +0,0 @@ -# Copyright 2013 Google, Inc. All Rights Reserved. 
-# -# Google Author(s): Behdad Esfahbod, Roozbeh Pournader - -from fontTools import ttLib, cffLib -from fontTools.misc.psCharStrings import T2WidthExtractor -from fontTools.ttLib.tables.DefaultTable import DefaultTable -from fontTools.merge.base import add_method, mergeObjects -from fontTools.merge.cmap import computeMegaCmap -from fontTools.merge.util import * -import logging - - -log = logging.getLogger("fontTools.merge") - - -ttLib.getTableClass("maxp").mergeMap = { - "*": max, - "tableTag": equal, - "tableVersion": equal, - "numGlyphs": sum, - "maxStorage": first, - "maxFunctionDefs": first, - "maxInstructionDefs": first, - # TODO When we correctly merge hinting data, update these values: - # maxFunctionDefs, maxInstructionDefs, maxSizeOfInstructions -} - -headFlagsMergeBitMap = { - "size": 16, - "*": bitwise_or, - 1: bitwise_and, # Baseline at y = 0 - 2: bitwise_and, # lsb at x = 0 - 3: bitwise_and, # Force ppem to integer values. FIXME? - 5: bitwise_and, # Font is vertical - 6: lambda bit: 0, # Always set to zero - 11: bitwise_and, # Font data is 'lossless' - 13: bitwise_and, # Optimized for ClearType - 14: bitwise_and, # Last resort font. FIXME? equal or first may be better - 15: lambda bit: 0, # Always set to zero -} - -ttLib.getTableClass("head").mergeMap = { - "tableTag": equal, - "tableVersion": max, - "fontRevision": max, - "checkSumAdjustment": lambda lst: 0, # We need *something* here - "magicNumber": equal, - "flags": mergeBits(headFlagsMergeBitMap), - "unitsPerEm": equal, - "created": current_time, - "modified": current_time, - "xMin": min, - "yMin": min, - "xMax": max, - "yMax": max, - "macStyle": first, - "lowestRecPPEM": max, - "fontDirectionHint": lambda lst: 2, - "indexToLocFormat": first, - "glyphDataFormat": equal, -} - -ttLib.getTableClass("hhea").mergeMap = { - "*": equal, - "tableTag": equal, - "tableVersion": max, - "ascent": max, - "descent": min, - "lineGap": max, - "advanceWidthMax": max, - "minLeftSideBearing": min, - "minRightSideBearing": min, - "xMaxExtent": max, - "caretSlopeRise": first, - "caretSlopeRun": first, - "caretOffset": first, - "numberOfHMetrics": recalculate, -} - -ttLib.getTableClass("vhea").mergeMap = { - "*": equal, - "tableTag": equal, - "tableVersion": max, - "ascent": max, - "descent": min, - "lineGap": max, - "advanceHeightMax": max, - "minTopSideBearing": min, - "minBottomSideBearing": min, - "yMaxExtent": max, - "caretSlopeRise": first, - "caretSlopeRun": first, - "caretOffset": first, - "numberOfVMetrics": recalculate, -} - -os2FsTypeMergeBitMap = { - "size": 16, - "*": lambda bit: 0, - 1: bitwise_or, # no embedding permitted - 2: bitwise_and, # allow previewing and printing documents - 3: bitwise_and, # allow editing documents - 8: bitwise_or, # no subsetting permitted - 9: bitwise_or, # no embedding of outlines permitted -} - - -def mergeOs2FsType(lst): - lst = list(lst) - if all(item == 0 for item in lst): - return 0 - - # Compute least restrictive logic for each fsType value - for i in range(len(lst)): - # unset bit 1 (no embedding permitted) if either bit 2 or 3 is set - if lst[i] & 0x000C: - lst[i] &= ~0x0002 - # set bit 2 (allow previewing) if bit 3 is set (allow editing) - elif lst[i] & 0x0008: - lst[i] |= 0x0004 - # set bits 2 and 3 if everything is allowed - elif lst[i] == 0: - lst[i] = 0x000C - - fsType = mergeBits(os2FsTypeMergeBitMap)(lst) - # unset bits 2 and 3 if bit 1 is set (some font is "no embedding") - if fsType & 0x0002: - fsType &= ~0x000C - return fsType - - -ttLib.getTableClass("OS/2").mergeMap = { - "*": 
first, - "tableTag": equal, - "version": max, - "xAvgCharWidth": first, # Will be recalculated at the end on the merged font - "fsType": mergeOs2FsType, # Will be overwritten - "panose": first, # FIXME: should really be the first Latin font - "ulUnicodeRange1": bitwise_or, - "ulUnicodeRange2": bitwise_or, - "ulUnicodeRange3": bitwise_or, - "ulUnicodeRange4": bitwise_or, - "fsFirstCharIndex": min, - "fsLastCharIndex": max, - "sTypoAscender": max, - "sTypoDescender": min, - "sTypoLineGap": max, - "usWinAscent": max, - "usWinDescent": max, - # Version 1 - "ulCodePageRange1": onlyExisting(bitwise_or), - "ulCodePageRange2": onlyExisting(bitwise_or), - # Version 2, 3, 4 - "sxHeight": onlyExisting(max), - "sCapHeight": onlyExisting(max), - "usDefaultChar": onlyExisting(first), - "usBreakChar": onlyExisting(first), - "usMaxContext": onlyExisting(max), - # version 5 - "usLowerOpticalPointSize": onlyExisting(min), - "usUpperOpticalPointSize": onlyExisting(max), -} - - -@add_method(ttLib.getTableClass("OS/2")) -def merge(self, m, tables): - DefaultTable.merge(self, m, tables) - if self.version < 2: - # bits 8 and 9 are reserved and should be set to zero - self.fsType &= ~0x0300 - if self.version >= 3: - # Only one of bits 1, 2, and 3 may be set. We already take - # care of bit 1 implications in mergeOs2FsType. So unset - # bit 2 if bit 3 is already set. - if self.fsType & 0x0008: - self.fsType &= ~0x0004 - return self - - -ttLib.getTableClass("post").mergeMap = { - "*": first, - "tableTag": equal, - "formatType": max, - "isFixedPitch": min, - "minMemType42": max, - "maxMemType42": lambda lst: 0, - "minMemType1": max, - "maxMemType1": lambda lst: 0, - "mapping": onlyExisting(sumDicts), - "extraNames": lambda lst: [], -} - -ttLib.getTableClass("vmtx").mergeMap = ttLib.getTableClass("hmtx").mergeMap = { - "tableTag": equal, - "metrics": sumDicts, -} - -ttLib.getTableClass("name").mergeMap = { - "tableTag": equal, - "names": first, # FIXME? Does mixing name records make sense? -} - -ttLib.getTableClass("loca").mergeMap = { - "*": recalculate, - "tableTag": equal, -} - -ttLib.getTableClass("glyf").mergeMap = { - "tableTag": equal, - "glyphs": sumDicts, - "glyphOrder": sumLists, - "axisTags": equal, -} - - -@add_method(ttLib.getTableClass("glyf")) -def merge(self, m, tables): - for i, table in enumerate(tables): - for g in table.glyphs.values(): - if i: - # Drop hints for all but first font, since - # we don't map functions / CVT values. - g.removeHinting() - # Expand composite glyphs to load their - # composite glyph names. - if g.isComposite() or g.isVarComposite(): - g.expand(table) - return DefaultTable.merge(self, m, tables) - - -ttLib.getTableClass("prep").mergeMap = lambda self, lst: first(lst) -ttLib.getTableClass("fpgm").mergeMap = lambda self, lst: first(lst) -ttLib.getTableClass("cvt ").mergeMap = lambda self, lst: first(lst) -ttLib.getTableClass("gasp").mergeMap = lambda self, lst: first( - lst -) # FIXME? 
Appears irreconcilable - - -@add_method(ttLib.getTableClass("CFF ")) -def merge(self, m, tables): - if any(hasattr(table.cff[0], "FDSelect") for table in tables): - raise NotImplementedError("Merging CID-keyed CFF tables is not supported yet") - - for table in tables: - table.cff.desubroutinize() - - newcff = tables[0] - newfont = newcff.cff[0] - private = newfont.Private - newDefaultWidthX, newNominalWidthX = private.defaultWidthX, private.nominalWidthX - storedNamesStrings = [] - glyphOrderStrings = [] - glyphOrder = set(newfont.getGlyphOrder()) - - for name in newfont.strings.strings: - if name not in glyphOrder: - storedNamesStrings.append(name) - else: - glyphOrderStrings.append(name) - - chrset = list(newfont.charset) - newcs = newfont.CharStrings - log.debug("FONT 0 CharStrings: %d.", len(newcs)) - - for i, table in enumerate(tables[1:], start=1): - font = table.cff[0] - defaultWidthX, nominalWidthX = ( - font.Private.defaultWidthX, - font.Private.nominalWidthX, - ) - widthsDiffer = ( - defaultWidthX != newDefaultWidthX or nominalWidthX != newNominalWidthX - ) - font.Private = private - fontGlyphOrder = set(font.getGlyphOrder()) - for name in font.strings.strings: - if name in fontGlyphOrder: - glyphOrderStrings.append(name) - cs = font.CharStrings - gs = table.cff.GlobalSubrs - log.debug("Font %d CharStrings: %d.", i, len(cs)) - chrset.extend(font.charset) - if newcs.charStringsAreIndexed: - for i, name in enumerate(cs.charStrings, start=len(newcs)): - newcs.charStrings[name] = i - newcs.charStringsIndex.items.append(None) - for name in cs.charStrings: - if widthsDiffer: - c = cs[name] - defaultWidthXToken = object() - extractor = T2WidthExtractor([], [], nominalWidthX, defaultWidthXToken) - extractor.execute(c) - width = extractor.width - if width is not defaultWidthXToken: - c.program.pop(0) - else: - width = defaultWidthX - if width != newDefaultWidthX: - c.program.insert(0, width - newNominalWidthX) - newcs[name] = cs[name] - - newfont.charset = chrset - newfont.numGlyphs = len(chrset) - newfont.strings.strings = glyphOrderStrings + storedNamesStrings - - return newcff - - -@add_method(ttLib.getTableClass("cmap")) -def merge(self, m, tables): - # TODO Handle format=14. - if not hasattr(m, "cmap"): - computeMegaCmap(m, tables) - cmap = m.cmap - - cmapBmpOnly = {uni: gid for uni, gid in cmap.items() if uni <= 0xFFFF} - self.tables = [] - module = ttLib.getTableModule("cmap") - if len(cmapBmpOnly) != len(cmap): - # format-12 required. 
- cmapTable = module.cmap_classes[12](12) - cmapTable.platformID = 3 - cmapTable.platEncID = 10 - cmapTable.language = 0 - cmapTable.cmap = cmap - self.tables.append(cmapTable) - # always create format-4 - cmapTable = module.cmap_classes[4](4) - cmapTable.platformID = 3 - cmapTable.platEncID = 1 - cmapTable.language = 0 - cmapTable.cmap = cmapBmpOnly - # ordered by platform then encoding - self.tables.insert(0, cmapTable) - self.tableVersion = 0 - self.numSubTables = len(self.tables) - return self diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/templates/cdn/assets/Example-59243695.js b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/templates/cdn/assets/Example-59243695.js deleted file mode 100644 index 6a8d8f9900ab4da6e717fd09a40c4f9faa5db3a8..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/templates/cdn/assets/Example-59243695.js +++ /dev/null @@ -1,2 +0,0 @@ -const{SvelteComponent:c,append:u,attr:d,detach:g,element:o,init:v,insert:r,noop:f,safe_not_equal:y,set_data:m,text:b,toggle_class:i}=window.__gradio__svelte__internal;function h(a){let e,n;return{c(){e=o("div"),n=b(a[0]),d(e,"class","svelte-1ayixqk"),i(e,"table",a[1]==="table"),i(e,"gallery",a[1]==="gallery"),i(e,"selected",a[2])},m(t,l){r(t,e,l),u(e,n)},p(t,[l]){l&1&&m(n,t[0]),l&2&&i(e,"table",t[1]==="table"),l&2&&i(e,"gallery",t[1]==="gallery"),l&4&&i(e,"selected",t[2])},i:f,o:f,d(t){t&&g(e)}}}function q(a,e,n){let{value:t}=e,{type:l}=e,{selected:_=!1}=e;return a.$$set=s=>{"value"in s&&n(0,t=s.value),"type"in s&&n(1,l=s.type),"selected"in s&&n(2,_=s.selected)},[t,l,_]}class w extends c{constructor(e){super(),v(this,e,q,h,y,{value:0,type:1,selected:2})}}export{w as default}; -//# sourceMappingURL=Example-59243695.js.map diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/lib/tests/test_financial_expired.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/lib/tests/test_financial_expired.py deleted file mode 100644 index 838f999a61e6d8345c8bf348dbafa5619ec420e0..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/numpy/lib/tests/test_financial_expired.py +++ /dev/null @@ -1,11 +0,0 @@ -import sys -import pytest -import numpy as np - - -def test_financial_expired(): - match = 'NEP 32' - with pytest.warns(DeprecationWarning, match=match): - func = np.fv - with pytest.raises(RuntimeError, match=match): - func(1, 2, 3) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/indexes/ranges/test_constructors.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/indexes/ranges/test_constructors.py deleted file mode 100644 index 5e6f16075ae636a3aa14e7443097f426bd6f998a..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/indexes/ranges/test_constructors.py +++ /dev/null @@ -1,164 +0,0 @@ -from datetime import datetime - -import numpy as np -import pytest - -from pandas import ( - Index, - RangeIndex, - Series, -) -import pandas._testing as tm - - -class TestRangeIndexConstructors: - @pytest.mark.parametrize("name", [None, "foo"]) - @pytest.mark.parametrize( - "args, kwargs, start, stop, step", - [ - ((5,), {}, 0, 5, 1), - ((1, 5), {}, 1, 5, 1), - ((1, 5, 2), {}, 1, 5, 2), - ((0,), {}, 0, 0, 1), - ((0, 0), {}, 0, 0, 1), - ((), {"start": 0}, 0, 0, 1), - ((), {"stop": 0}, 0, 0, 1), 
- ], - ) - def test_constructor(self, args, kwargs, start, stop, step, name): - result = RangeIndex(*args, name=name, **kwargs) - expected = Index(np.arange(start, stop, step, dtype=np.int64), name=name) - assert isinstance(result, RangeIndex) - assert result.name is name - assert result._range == range(start, stop, step) - tm.assert_index_equal(result, expected, exact="equiv") - - def test_constructor_invalid_args(self): - msg = "RangeIndex\\(\\.\\.\\.\\) must be called with integers" - with pytest.raises(TypeError, match=msg): - RangeIndex() - - with pytest.raises(TypeError, match=msg): - RangeIndex(name="Foo") - - # we don't allow on a bare Index - msg = ( - r"Index\(\.\.\.\) must be called with a collection of some " - r"kind, 0 was passed" - ) - with pytest.raises(TypeError, match=msg): - Index(0) - - @pytest.mark.parametrize( - "args", - [ - Index(["a", "b"]), - Series(["a", "b"]), - np.array(["a", "b"]), - [], - np.arange(0, 10), - np.array([1]), - [1], - ], - ) - def test_constructor_additional_invalid_args(self, args): - msg = f"Value needs to be a scalar value, was type {type(args).__name__}" - with pytest.raises(TypeError, match=msg): - RangeIndex(args) - - @pytest.mark.parametrize("args", ["foo", datetime(2000, 1, 1, 0, 0)]) - def test_constructor_invalid_args_wrong_type(self, args): - msg = f"Wrong type {type(args)} for value {args}" - with pytest.raises(TypeError, match=msg): - RangeIndex(args) - - def test_constructor_same(self): - # pass thru w and w/o copy - index = RangeIndex(1, 5, 2) - result = RangeIndex(index, copy=False) - assert result.identical(index) - - result = RangeIndex(index, copy=True) - tm.assert_index_equal(result, index, exact=True) - - result = RangeIndex(index) - tm.assert_index_equal(result, index, exact=True) - - with pytest.raises( - ValueError, - match="Incorrect `dtype` passed: expected signed integer, received float64", - ): - RangeIndex(index, dtype="float64") - - def test_constructor_range_object(self): - result = RangeIndex(range(1, 5, 2)) - expected = RangeIndex(1, 5, 2) - tm.assert_index_equal(result, expected, exact=True) - - def test_constructor_range(self): - result = RangeIndex.from_range(range(1, 5, 2)) - expected = RangeIndex(1, 5, 2) - tm.assert_index_equal(result, expected, exact=True) - - result = RangeIndex.from_range(range(5, 6)) - expected = RangeIndex(5, 6, 1) - tm.assert_index_equal(result, expected, exact=True) - - # an invalid range - result = RangeIndex.from_range(range(5, 1)) - expected = RangeIndex(0, 0, 1) - tm.assert_index_equal(result, expected, exact=True) - - result = RangeIndex.from_range(range(5)) - expected = RangeIndex(0, 5, 1) - tm.assert_index_equal(result, expected, exact=True) - - result = Index(range(1, 5, 2)) - expected = RangeIndex(1, 5, 2) - tm.assert_index_equal(result, expected, exact=True) - - msg = ( - r"(RangeIndex.)?from_range\(\) got an unexpected keyword argument( 'copy')?" 
- ) - with pytest.raises(TypeError, match=msg): - RangeIndex.from_range(range(10), copy=True) - - def test_constructor_name(self): - # GH#12288 - orig = RangeIndex(10) - orig.name = "original" - - copy = RangeIndex(orig) - copy.name = "copy" - - assert orig.name == "original" - assert copy.name == "copy" - - new = Index(copy) - assert new.name == "copy" - - new.name = "new" - assert orig.name == "original" - assert copy.name == "copy" - assert new.name == "new" - - def test_constructor_corner(self): - arr = np.array([1, 2, 3, 4], dtype=object) - index = RangeIndex(1, 5) - assert index.values.dtype == np.int64 - expected = Index(arr).astype("int64") - - tm.assert_index_equal(index, expected, exact="equiv") - - # non-int raise Exception - with pytest.raises(TypeError, match=r"Wrong type \"): - RangeIndex("1", "10", "1") - with pytest.raises(TypeError, match=r"Wrong type \"): - RangeIndex(1.1, 10.2, 1.3) - - # invalid passed type - with pytest.raises( - ValueError, - match="Incorrect `dtype` passed: expected signed integer, received float64", - ): - RangeIndex(1, 5, dtype="float64") diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/interchange/test_utils.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/interchange/test_utils.py deleted file mode 100644 index a47bc2752ff32f5eb7630a3960e7611242cb73e3..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/interchange/test_utils.py +++ /dev/null @@ -1,89 +0,0 @@ -import numpy as np -import pytest - -import pandas as pd -from pandas.core.interchange.utils import dtype_to_arrow_c_fmt - -# TODO: use ArrowSchema to get reference C-string. -# At the time, there is no way to access ArrowSchema holding a type format string -# from python. 
The only way to access it is to export the structure to a C-pointer, -# see DataType._export_to_c() method defined in -# https://github.com/apache/arrow/blob/master/python/pyarrow/types.pxi - - -@pytest.mark.parametrize( - "pandas_dtype, c_string", - [ - (np.dtype("bool"), "b"), - (np.dtype("int8"), "c"), - (np.dtype("uint8"), "C"), - (np.dtype("int16"), "s"), - (np.dtype("uint16"), "S"), - (np.dtype("int32"), "i"), - (np.dtype("uint32"), "I"), - (np.dtype("int64"), "l"), - (np.dtype("uint64"), "L"), - (np.dtype("float16"), "e"), - (np.dtype("float32"), "f"), - (np.dtype("float64"), "g"), - (pd.Series(["a"]).dtype, "u"), - ( - pd.Series([0]).astype("datetime64[ns]").dtype, - "tsn:", - ), - (pd.CategoricalDtype(["a"]), "l"), - (np.dtype("O"), "u"), - ], -) -def test_dtype_to_arrow_c_fmt(pandas_dtype, c_string): # PR01 - """Test ``dtype_to_arrow_c_fmt`` utility function.""" - assert dtype_to_arrow_c_fmt(pandas_dtype) == c_string - - -@pytest.mark.parametrize( - "pa_dtype, args_kwargs, c_string", - [ - ["null", {}, "n"], - ["bool_", {}, "b"], - ["uint8", {}, "C"], - ["uint16", {}, "S"], - ["uint32", {}, "I"], - ["uint64", {}, "L"], - ["int8", {}, "c"], - ["int16", {}, "S"], - ["int32", {}, "i"], - ["int64", {}, "l"], - ["float16", {}, "e"], - ["float32", {}, "f"], - ["float64", {}, "g"], - ["string", {}, "u"], - ["binary", {}, "z"], - ["time32", ("s",), "tts"], - ["time32", ("ms",), "ttm"], - ["time64", ("us",), "ttu"], - ["time64", ("ns",), "ttn"], - ["date32", {}, "tdD"], - ["date64", {}, "tdm"], - ["timestamp", {"unit": "s"}, "tss:"], - ["timestamp", {"unit": "ms"}, "tsm:"], - ["timestamp", {"unit": "us"}, "tsu:"], - ["timestamp", {"unit": "ns"}, "tsn:"], - ["timestamp", {"unit": "ns", "tz": "UTC"}, "tsn:UTC"], - ["duration", ("s",), "tDs"], - ["duration", ("ms",), "tDm"], - ["duration", ("us",), "tDu"], - ["duration", ("ns",), "tDn"], - ["decimal128", {"precision": 4, "scale": 2}, "d:4,2"], - ], -) -def test_dtype_to_arrow_c_fmt_arrowdtype(pa_dtype, args_kwargs, c_string): - # GH 52323 - pa = pytest.importorskip("pyarrow") - if not args_kwargs: - pa_type = getattr(pa, pa_dtype)() - elif isinstance(args_kwargs, tuple): - pa_type = getattr(pa, pa_dtype)(*args_kwargs) - else: - pa_type = getattr(pa, pa_dtype)(**args_kwargs) - arrow_type = pd.ArrowDtype(pa_type) - assert dtype_to_arrow_c_fmt(arrow_type) == c_string diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pydantic/deprecated/json.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pydantic/deprecated/json.py deleted file mode 100644 index d06735327f04288aed7cb08645a14cfb8cebddcc..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pydantic/deprecated/json.py +++ /dev/null @@ -1,130 +0,0 @@ -import datetime -import warnings -from collections import deque -from decimal import Decimal -from enum import Enum -from ipaddress import IPv4Address, IPv4Interface, IPv4Network, IPv6Address, IPv6Interface, IPv6Network -from pathlib import Path -from re import Pattern -from types import GeneratorType -from typing import TYPE_CHECKING, Any, Callable, Dict, Type, Union -from uuid import UUID - -from typing_extensions import deprecated - -from ..color import Color -from ..networks import NameEmail -from ..types import SecretBytes, SecretStr -from ..warnings import PydanticDeprecatedSince20 - -if not TYPE_CHECKING: - # See PyCharm issues https://youtrack.jetbrains.com/issue/PY-21915 - # and https://youtrack.jetbrains.com/issue/PY-51428 - 
DeprecationWarning = PydanticDeprecatedSince20 - -__all__ = 'pydantic_encoder', 'custom_pydantic_encoder', 'timedelta_isoformat' - - -def isoformat(o: Union[datetime.date, datetime.time]) -> str: - return o.isoformat() - - -def decimal_encoder(dec_value: Decimal) -> Union[int, float]: - """Encodes a Decimal as int of there's no exponent, otherwise float. - - This is useful when we use ConstrainedDecimal to represent Numeric(x,0) - where a integer (but not int typed) is used. Encoding this as a float - results in failed round-tripping between encode and parse. - Our Id type is a prime example of this. - - >>> decimal_encoder(Decimal("1.0")) - 1.0 - - >>> decimal_encoder(Decimal("1")) - 1 - """ - exponent = dec_value.as_tuple().exponent - if isinstance(exponent, int) and exponent >= 0: - return int(dec_value) - else: - return float(dec_value) - - -ENCODERS_BY_TYPE: Dict[Type[Any], Callable[[Any], Any]] = { - bytes: lambda o: o.decode(), - Color: str, - datetime.date: isoformat, - datetime.datetime: isoformat, - datetime.time: isoformat, - datetime.timedelta: lambda td: td.total_seconds(), - Decimal: decimal_encoder, - Enum: lambda o: o.value, - frozenset: list, - deque: list, - GeneratorType: list, - IPv4Address: str, - IPv4Interface: str, - IPv4Network: str, - IPv6Address: str, - IPv6Interface: str, - IPv6Network: str, - NameEmail: str, - Path: str, - Pattern: lambda o: o.pattern, - SecretBytes: str, - SecretStr: str, - set: list, - UUID: str, -} - - -@deprecated( - 'pydantic_encoder is deprecated, use pydantic_core.to_jsonable_python instead.', category=PydanticDeprecatedSince20 -) -def pydantic_encoder(obj: Any) -> Any: - from dataclasses import asdict, is_dataclass - - from ..main import BaseModel - - warnings.warn('pydantic_encoder is deprecated, use BaseModel.model_dump instead.', DeprecationWarning, stacklevel=2) - if isinstance(obj, BaseModel): - return obj.model_dump() - elif is_dataclass(obj): - return asdict(obj) - - # Check the class type and its superclasses for a matching encoder - for base in obj.__class__.__mro__[:-1]: - try: - encoder = ENCODERS_BY_TYPE[base] - except KeyError: - continue - return encoder(obj) - else: # We have exited the for loop without finding a suitable encoder - raise TypeError(f"Object of type '{obj.__class__.__name__}' is not JSON serializable") - - -# TODO: Add a suggested migration path once there is a way to use custom encoders -@deprecated('custom_pydantic_encoder is deprecated.', category=PydanticDeprecatedSince20) -def custom_pydantic_encoder(type_encoders: Dict[Any, Callable[[Type[Any]], Any]], obj: Any) -> Any: - # Check the class type and its superclasses for a matching encoder - warnings.warn( - 'custom_pydantic_encoder is deprecated, use BaseModel.model_dump instead.', DeprecationWarning, stacklevel=2 - ) - for base in obj.__class__.__mro__[:-1]: - try: - encoder = type_encoders[base] - except KeyError: - continue - - return encoder(obj) - else: # We have exited the for loop without finding a suitable encoder - return pydantic_encoder(obj) - - -@deprecated('timedelta_isoformat is deprecated.', category=PydanticDeprecatedSince20) -def timedelta_isoformat(td: datetime.timedelta) -> str: - """ISO 8601 encoding for Python timedelta object.""" - warnings.warn('timedelta_isoformat is deprecated.', DeprecationWarning, stacklevel=2) - minutes, seconds = divmod(td.seconds, 60) - hours, minutes = divmod(minutes, 60) - return f'{"-" if td.days < 0 else ""}P{abs(td.days)}DT{hours:d}H{minutes:d}M{seconds:d}.{td.microseconds:06d}S' diff --git 
a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/typer/_completion_shared.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/typer/_completion_shared.py deleted file mode 100644 index 7cbaf98d75438af40772c60206a2431613af74b5..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/typer/_completion_shared.py +++ /dev/null @@ -1,244 +0,0 @@ -import os -import re -import subprocess -import sys -from enum import Enum -from pathlib import Path -from typing import Optional, Tuple - -import click - -try: - import shellingham -except ImportError: # pragma: nocover - shellingham = None - - -from typing import Optional - - -class Shells(str, Enum): - bash = "bash" - zsh = "zsh" - fish = "fish" - powershell = "powershell" - pwsh = "pwsh" - - -COMPLETION_SCRIPT_BASH = """ -%(complete_func)s() { - local IFS=$'\n' - COMPREPLY=( $( env COMP_WORDS="${COMP_WORDS[*]}" \\ - COMP_CWORD=$COMP_CWORD \\ - %(autocomplete_var)s=complete_bash $1 ) ) - return 0 -} - -complete -o default -F %(complete_func)s %(prog_name)s -""" - -COMPLETION_SCRIPT_ZSH = """ -#compdef %(prog_name)s - -%(complete_func)s() { - eval $(env _TYPER_COMPLETE_ARGS="${words[1,$CURRENT]}" %(autocomplete_var)s=complete_zsh %(prog_name)s) -} - -compdef %(complete_func)s %(prog_name)s -""" - -COMPLETION_SCRIPT_FISH = 'complete --command %(prog_name)s --no-files --arguments "(env %(autocomplete_var)s=complete_fish _TYPER_COMPLETE_FISH_ACTION=get-args _TYPER_COMPLETE_ARGS=(commandline -cp) %(prog_name)s)" --condition "env %(autocomplete_var)s=complete_fish _TYPER_COMPLETE_FISH_ACTION=is-args _TYPER_COMPLETE_ARGS=(commandline -cp) %(prog_name)s"' - -COMPLETION_SCRIPT_POWER_SHELL = """ -Import-Module PSReadLine -Set-PSReadLineKeyHandler -Chord Tab -Function MenuComplete -$scriptblock = { - param($wordToComplete, $commandAst, $cursorPosition) - $Env:%(autocomplete_var)s = "complete_powershell" - $Env:_TYPER_COMPLETE_ARGS = $commandAst.ToString() - $Env:_TYPER_COMPLETE_WORD_TO_COMPLETE = $wordToComplete - %(prog_name)s | ForEach-Object { - $commandArray = $_ -Split ":::" - $command = $commandArray[0] - $helpString = $commandArray[1] - [System.Management.Automation.CompletionResult]::new( - $command, $command, 'ParameterValue', $helpString) - } - $Env:%(autocomplete_var)s = "" - $Env:_TYPER_COMPLETE_ARGS = "" - $Env:_TYPER_COMPLETE_WORD_TO_COMPLETE = "" -} -Register-ArgumentCompleter -Native -CommandName %(prog_name)s -ScriptBlock $scriptblock -""" - -_completion_scripts = { - "bash": COMPLETION_SCRIPT_BASH, - "zsh": COMPLETION_SCRIPT_ZSH, - "fish": COMPLETION_SCRIPT_FISH, - "powershell": COMPLETION_SCRIPT_POWER_SHELL, - "pwsh": COMPLETION_SCRIPT_POWER_SHELL, -} - -# TODO: Probably refactor this, copied from Click 7.x -_invalid_ident_char_re = re.compile(r"[^a-zA-Z0-9_]") - - -def get_completion_script(*, prog_name: str, complete_var: str, shell: str) -> str: - cf_name = _invalid_ident_char_re.sub("", prog_name.replace("-", "_")) - script = _completion_scripts.get(shell) - if script is None: - click.echo(f"Shell {shell} not supported.", err=True) - sys.exit(1) - return ( - script - % dict( - complete_func="_{}_completion".format(cf_name), - prog_name=prog_name, - autocomplete_var=complete_var, - ) - ).strip() - - -def install_bash(*, prog_name: str, complete_var: str, shell: str) -> Path: - # Ref: https://github.com/scop/bash-completion#faq - # It seems bash-completion is the official completion system for bash: - # Ref: 
https://www.gnu.org/software/bash/manual/html_node/A-Programmable-Completion-Example.html - # But installing in the locations from the docs doesn't seem to have effect - completion_path = Path.home() / f".bash_completions/{prog_name}.sh" - rc_path = Path.home() / ".bashrc" - rc_path.parent.mkdir(parents=True, exist_ok=True) - rc_content = "" - if rc_path.is_file(): - rc_content = rc_path.read_text() - completion_init_lines = [f"source {completion_path}"] - for line in completion_init_lines: - if line not in rc_content: # pragma: nocover - rc_content += f"\n{line}" - rc_content += "\n" - rc_path.write_text(rc_content) - # Install completion - completion_path.parent.mkdir(parents=True, exist_ok=True) - script_content = get_completion_script( - prog_name=prog_name, complete_var=complete_var, shell=shell - ) - completion_path.write_text(script_content) - return completion_path - - -def install_zsh(*, prog_name: str, complete_var: str, shell: str) -> Path: - # Setup Zsh and load ~/.zfunc - zshrc_path = Path.home() / ".zshrc" - zshrc_path.parent.mkdir(parents=True, exist_ok=True) - zshrc_content = "" - if zshrc_path.is_file(): - zshrc_content = zshrc_path.read_text() - completion_init_lines = [ - "autoload -Uz compinit", - "compinit", - "zstyle ':completion:*' menu select", - "fpath+=~/.zfunc", - ] - for line in completion_init_lines: - if line not in zshrc_content: # pragma: nocover - zshrc_content += f"\n{line}" - zshrc_content += "\n" - zshrc_path.write_text(zshrc_content) - # Install completion under ~/.zfunc/ - path_obj = Path.home() / f".zfunc/_{prog_name}" - path_obj.parent.mkdir(parents=True, exist_ok=True) - script_content = get_completion_script( - prog_name=prog_name, complete_var=complete_var, shell=shell - ) - path_obj.write_text(script_content) - return path_obj - - -def install_fish(*, prog_name: str, complete_var: str, shell: str) -> Path: - path_obj = Path.home() / f".config/fish/completions/{prog_name}.fish" - parent_dir: Path = path_obj.parent - parent_dir.mkdir(parents=True, exist_ok=True) - script_content = get_completion_script( - prog_name=prog_name, complete_var=complete_var, shell=shell - ) - path_obj.write_text(f"{script_content}\n") - return path_obj - - -def install_powershell(*, prog_name: str, complete_var: str, shell: str) -> Path: - subprocess.run( - [ - shell, - "-Command", - "Set-ExecutionPolicy", - "Unrestricted", - "-Scope", - "CurrentUser", - ] - ) - result = subprocess.run( - [shell, "-NoProfile", "-Command", "echo", "$profile"], - check=True, - stdout=subprocess.PIPE, - ) - if result.returncode != 0: # pragma: nocover - click.echo("Couldn't get PowerShell user profile", err=True) - raise click.exceptions.Exit(result.returncode) - path_str = "" - if isinstance(result.stdout, str): # pragma: nocover - path_str = result.stdout - if isinstance(result.stdout, bytes): - try: - # PowerShell would be predominant in Windows - path_str = result.stdout.decode("windows-1252") - except UnicodeDecodeError: # pragma: nocover - try: - path_str = result.stdout.decode("utf8") - except UnicodeDecodeError: - click.echo("Couldn't decode the path automatically", err=True) - raise click.exceptions.Exit(1) - path_obj = Path(path_str.strip()) - parent_dir: Path = path_obj.parent - parent_dir.mkdir(parents=True, exist_ok=True) - script_content = get_completion_script( - prog_name=prog_name, complete_var=complete_var, shell=shell - ) - with path_obj.open(mode="a") as f: - f.write(f"{script_content}\n") - return path_obj - - -def install( - shell: Optional[str] = None, - prog_name: 
Optional[str] = None, - complete_var: Optional[str] = None, -) -> Tuple[str, Path]: - prog_name = prog_name or click.get_current_context().find_root().info_name - assert prog_name - if complete_var is None: - complete_var = "_{}_COMPLETE".format(prog_name.replace("-", "_").upper()) - test_disable_detection = os.getenv("_TYPER_COMPLETE_TEST_DISABLE_SHELL_DETECTION") - if shell is None and shellingham is not None and not test_disable_detection: - shell, _ = shellingham.detect_shell() - if shell == "bash": - installed_path = install_bash( - prog_name=prog_name, complete_var=complete_var, shell=shell - ) - return shell, installed_path - elif shell == "zsh": - installed_path = install_zsh( - prog_name=prog_name, complete_var=complete_var, shell=shell - ) - return shell, installed_path - elif shell == "fish": - installed_path = install_fish( - prog_name=prog_name, complete_var=complete_var, shell=shell - ) - return shell, installed_path - elif shell in {"powershell", "pwsh"}: - installed_path = install_powershell( - prog_name=prog_name, complete_var=complete_var, shell=shell - ) - return shell, installed_path - else: - click.echo(f"Shell {shell} is not supported.") - raise click.exceptions.Exit(1) diff --git a/spaces/pyodide-demo/self-hosted/soupsieve.js b/spaces/pyodide-demo/self-hosted/soupsieve.js deleted file mode 100644 index 08e895897f1f0eda288df50dd3c6484447879c1c..0000000000000000000000000000000000000000 --- a/spaces/pyodide-demo/self-hosted/soupsieve.js +++ /dev/null @@ -1 +0,0 @@ -var Module=typeof globalThis.__pyodide_module!=="undefined"?globalThis.__pyodide_module:{};if(!Module.expectedDataFileDownloads){Module.expectedDataFileDownloads=0}Module.expectedDataFileDownloads++;(function(){var loadPackage=function(metadata){var PACKAGE_PATH="";if(typeof window==="object"){PACKAGE_PATH=window["encodeURIComponent"](window.location.pathname.toString().substring(0,window.location.pathname.toString().lastIndexOf("/"))+"/")}else if(typeof process==="undefined"&&typeof location!=="undefined"){PACKAGE_PATH=encodeURIComponent(location.pathname.toString().substring(0,location.pathname.toString().lastIndexOf("/"))+"/")}var PACKAGE_NAME="soupsieve.data";var REMOTE_PACKAGE_BASE="soupsieve.data";if(typeof Module["locateFilePackage"]==="function"&&!Module["locateFile"]){Module["locateFile"]=Module["locateFilePackage"];err("warning: you defined Module.locateFilePackage, that has been renamed to Module.locateFile (using your locateFilePackage for now)")}var REMOTE_PACKAGE_NAME=Module["locateFile"]?Module["locateFile"](REMOTE_PACKAGE_BASE,""):REMOTE_PACKAGE_BASE;var REMOTE_PACKAGE_SIZE=metadata["remote_package_size"];var PACKAGE_UUID=metadata["package_uuid"];function fetchRemotePackage(packageName,packageSize,callback,errback){if(typeof process==="object"){require("fs").readFile(packageName,(function(err,contents){if(err){errback(err)}else{callback(contents.buffer)}}));return}var xhr=new XMLHttpRequest;xhr.open("GET",packageName,true);xhr.responseType="arraybuffer";xhr.onprogress=function(event){var url=packageName;var size=packageSize;if(event.total)size=event.total;if(event.loaded){if(!xhr.addedTotal){xhr.addedTotal=true;if(!Module.dataFileDownloads)Module.dataFileDownloads={};Module.dataFileDownloads[url]={loaded:event.loaded,total:size}}else{Module.dataFileDownloads[url].loaded=event.loaded}var total=0;var loaded=0;var num=0;for(var download in Module.dataFileDownloads){var 
data=Module.dataFileDownloads[download];total+=data.total;loaded+=data.loaded;num++}total=Math.ceil(total*Module.expectedDataFileDownloads/num);if(Module["setStatus"])Module["setStatus"]("Downloading data... ("+loaded+"/"+total+")")}else if(!Module.dataFileDownloads){if(Module["setStatus"])Module["setStatus"]("Downloading data...")}};xhr.onerror=function(event){throw new Error("NetworkError for: "+packageName)};xhr.onload=function(event){if(xhr.status==200||xhr.status==304||xhr.status==206||xhr.status==0&&xhr.response){var packageData=xhr.response;callback(packageData)}else{throw new Error(xhr.statusText+" : "+xhr.responseURL)}};xhr.send(null)}function handleError(error){console.error("package error:",error)}var fetchedCallback=null;var fetched=Module["getPreloadedPackage"]?Module["getPreloadedPackage"](REMOTE_PACKAGE_NAME,REMOTE_PACKAGE_SIZE):null;if(!fetched)fetchRemotePackage(REMOTE_PACKAGE_NAME,REMOTE_PACKAGE_SIZE,(function(data){if(fetchedCallback){fetchedCallback(data);fetchedCallback=null}else{fetched=data}}),handleError);function runWithFS(){function assert(check,msg){if(!check)throw msg+(new Error).stack}Module["FS_createPath"]("/","lib",true,true);Module["FS_createPath"]("/lib","python3.9",true,true);Module["FS_createPath"]("/lib/python3.9","site-packages",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages","soupsieve",true,true);Module["FS_createPath"]("/lib/python3.9/site-packages","soupsieve-2.3.1-py3.9.egg-info",true,true);function processPackageData(arrayBuffer){assert(arrayBuffer,"Loading data file failed.");assert(arrayBuffer instanceof ArrayBuffer,"bad input to processPackageData");var byteArray=new Uint8Array(arrayBuffer);var curr;var compressedData={data:null,cachedOffset:73410,cachedIndexes:[-1,-1],cachedChunks:[null,null],offsets:[0,1566,2272,3536,4782,5714,7011,8288,9263,10204,11124,12318,13462,14167,15262,16319,17470,18567,19567,20446,21292,22091,22918,23848,24816,25951,26959,27935,29022,30047,31201,32221,32917,34073,35130,36371,37589,38764,40111,41228,42314,43293,44341,45343,46398,47123,47985,48903,50047,50994,51829,52828,53642,54407,55414,56494,57616,58499,59590,60623,61424,62231,63167,64412,65317,66557,67809,69128,70669,71999,72828],sizes:[1566,706,1264,1246,932,1297,1277,975,941,920,1194,1144,705,1095,1057,1151,1097,1e3,879,846,799,827,930,968,1135,1008,976,1087,1025,1154,1020,696,1156,1057,1241,1218,1175,1347,1117,1086,979,1048,1002,1055,725,862,918,1144,947,835,999,814,765,1007,1080,1122,883,1091,1033,801,807,936,1245,905,1240,1252,1319,1541,1330,829,582],successes:[1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1]};compressedData["data"]=byteArray;assert(typeof Module.LZ4==="object","LZ4 not present - was your app build with -s LZ4=1 
?");Module.LZ4.loadPackage({metadata:metadata,compressedData:compressedData},true);Module["removeRunDependency"]("datafile_soupsieve.data")}Module["addRunDependency"]("datafile_soupsieve.data");if(!Module.preloadResults)Module.preloadResults={};Module.preloadResults[PACKAGE_NAME]={fromCache:false};if(fetched){processPackageData(fetched);fetched=null}else{fetchedCallback=processPackageData}}if(Module["calledRun"]){runWithFS()}else{if(!Module["preRun"])Module["preRun"]=[];Module["preRun"].push(runWithFS)}};loadPackage({files:[{filename:"/lib/python3.9/site-packages/soupsieve/__init__.py",start:0,end:4717,audio:0},{filename:"/lib/python3.9/site-packages/soupsieve/__meta__.py",start:4717,end:11526,audio:0},{filename:"/lib/python3.9/site-packages/soupsieve/css_match.py",start:11526,end:69693,audio:0},{filename:"/lib/python3.9/site-packages/soupsieve/css_parser.py",start:69693,end:117596,audio:0},{filename:"/lib/python3.9/site-packages/soupsieve/css_types.py",start:117596,end:128075,audio:0},{filename:"/lib/python3.9/site-packages/soupsieve/pretty.py",start:128075,end:132093,audio:0},{filename:"/lib/python3.9/site-packages/soupsieve/util.py",start:132093,end:135464,audio:0},{filename:"/lib/python3.9/site-packages/soupsieve/py.typed",start:135464,end:135464,audio:0},{filename:"/lib/python3.9/site-packages/soupsieve-2.3.1-py3.9.egg-info/PKG-INFO",start:135464,end:140961,audio:0},{filename:"/lib/python3.9/site-packages/soupsieve-2.3.1-py3.9.egg-info/SOURCES.txt",start:140961,end:145026,audio:0},{filename:"/lib/python3.9/site-packages/soupsieve-2.3.1-py3.9.egg-info/dependency_links.txt",start:145026,end:145027,audio:0},{filename:"/lib/python3.9/site-packages/soupsieve-2.3.1-py3.9.egg-info/requires.txt",start:145027,end:145082,audio:0},{filename:"/lib/python3.9/site-packages/soupsieve-2.3.1-py3.9.egg-info/top_level.txt",start:145082,end:145092,audio:0}],remote_package_size:77506,package_uuid:"8673b8a3-498b-4628-a85e-78e46080c499"})})(); \ No newline at end of file diff --git a/spaces/qefunaba/iamkaikai-amazing-logos-v3/app.py b/spaces/qefunaba/iamkaikai-amazing-logos-v3/app.py deleted file mode 100644 index f9ce46f486198515b6dd44f0d6d8fd1cf8126c6f..0000000000000000000000000000000000000000 --- a/spaces/qefunaba/iamkaikai-amazing-logos-v3/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/iamkaikai/amazing-logos-v3").launch() \ No newline at end of file diff --git a/spaces/qingxu98/gpt-academic/crazy_functions/test_project/cpp/longcode/prod_cons.h b/spaces/qingxu98/gpt-academic/crazy_functions/test_project/cpp/longcode/prod_cons.h deleted file mode 100644 index c9004bb8043a12e32814436baa6262a00c8ef68e..0000000000000000000000000000000000000000 --- a/spaces/qingxu98/gpt-academic/crazy_functions/test_project/cpp/longcode/prod_cons.h +++ /dev/null @@ -1,433 +0,0 @@ -#pragma once - -#include -#include -#include -#include -#include - -#include "libipc/def.h" - -#include "libipc/platform/detail.h" -#include "libipc/circ/elem_def.h" -#include "libipc/utility/log.h" -#include "libipc/utility/utility.h" - -namespace ipc { - -//////////////////////////////////////////////////////////////// -/// producer-consumer implementation -//////////////////////////////////////////////////////////////// - -template -struct prod_cons_impl; - -template <> -struct prod_cons_impl> { - - template - struct elem_t { - std::aligned_storage_t data_ {}; - }; - - alignas(cache_line_size) std::atomic rd_; // read index - alignas(cache_line_size) std::atomic wt_; // write index - - constexpr circ::u2_t 
cursor() const noexcept { - return 0; - } - - template - bool push(W* /*wrapper*/, F&& f, E* elems) { - auto cur_wt = circ::index_of(wt_.load(std::memory_order_relaxed)); - if (cur_wt == circ::index_of(rd_.load(std::memory_order_acquire) - 1)) { - return false; // full - } - std::forward(f)(&(elems[cur_wt].data_)); - wt_.fetch_add(1, std::memory_order_release); - return true; - } - - /** - * In single-single-unicast, 'force_push' means 'no reader' or 'the only one reader is dead'. - * So we could just disconnect all connections of receiver, and return false. - */ - template - bool force_push(W* wrapper, F&&, E*) { - wrapper->elems()->disconnect_receiver(~static_cast(0u)); - return false; - } - - template - bool pop(W* /*wrapper*/, circ::u2_t& /*cur*/, F&& f, R&& out, E* elems) { - auto cur_rd = circ::index_of(rd_.load(std::memory_order_relaxed)); - if (cur_rd == circ::index_of(wt_.load(std::memory_order_acquire))) { - return false; // empty - } - std::forward(f)(&(elems[cur_rd].data_)); - std::forward(out)(true); - rd_.fetch_add(1, std::memory_order_release); - return true; - } -}; - -template <> -struct prod_cons_impl> - : prod_cons_impl> { - - template - bool force_push(W* wrapper, F&&, E*) { - wrapper->elems()->disconnect_receiver(1); - return false; - } - - template class E, std::size_t DS, std::size_t AS> - bool pop(W* /*wrapper*/, circ::u2_t& /*cur*/, F&& f, R&& out, E* elems) { - byte_t buff[DS]; - for (unsigned k = 0;;) { - auto cur_rd = rd_.load(std::memory_order_relaxed); - if (circ::index_of(cur_rd) == - circ::index_of(wt_.load(std::memory_order_acquire))) { - return false; // empty - } - std::memcpy(buff, &(elems[circ::index_of(cur_rd)].data_), sizeof(buff)); - if (rd_.compare_exchange_weak(cur_rd, cur_rd + 1, std::memory_order_release)) { - std::forward(f)(buff); - std::forward(out)(true); - return true; - } - ipc::yield(k); - } - } -}; - -template <> -struct prod_cons_impl> - : prod_cons_impl> { - - using flag_t = std::uint64_t; - - template - struct elem_t { - std::aligned_storage_t data_ {}; - std::atomic f_ct_ { 0 }; // commit flag - }; - - alignas(cache_line_size) std::atomic ct_; // commit index - - template - bool push(W* /*wrapper*/, F&& f, E* elems) { - circ::u2_t cur_ct, nxt_ct; - for (unsigned k = 0;;) { - cur_ct = ct_.load(std::memory_order_relaxed); - if (circ::index_of(nxt_ct = cur_ct + 1) == - circ::index_of(rd_.load(std::memory_order_acquire))) { - return false; // full - } - if (ct_.compare_exchange_weak(cur_ct, nxt_ct, std::memory_order_acq_rel)) { - break; - } - ipc::yield(k); - } - auto* el = elems + circ::index_of(cur_ct); - std::forward(f)(&(el->data_)); - // set flag & try update wt - el->f_ct_.store(~static_cast(cur_ct), std::memory_order_release); - while (1) { - auto cac_ct = el->f_ct_.load(std::memory_order_acquire); - if (cur_ct != wt_.load(std::memory_order_relaxed)) { - return true; - } - if ((~cac_ct) != cur_ct) { - return true; - } - if (!el->f_ct_.compare_exchange_strong(cac_ct, 0, std::memory_order_relaxed)) { - return true; - } - wt_.store(nxt_ct, std::memory_order_release); - cur_ct = nxt_ct; - nxt_ct = cur_ct + 1; - el = elems + circ::index_of(cur_ct); - } - return true; - } - - template - bool force_push(W* wrapper, F&&, E*) { - wrapper->elems()->disconnect_receiver(1); - return false; - } - - template class E, std::size_t DS, std::size_t AS> - bool pop(W* /*wrapper*/, circ::u2_t& /*cur*/, F&& f, R&& out, E* elems) { - byte_t buff[DS]; - for (unsigned k = 0;;) { - auto cur_rd = rd_.load(std::memory_order_relaxed); - auto cur_wt = 
wt_.load(std::memory_order_acquire); - auto id_rd = circ::index_of(cur_rd); - auto id_wt = circ::index_of(cur_wt); - if (id_rd == id_wt) { - auto* el = elems + id_wt; - auto cac_ct = el->f_ct_.load(std::memory_order_acquire); - if ((~cac_ct) != cur_wt) { - return false; // empty - } - if (el->f_ct_.compare_exchange_weak(cac_ct, 0, std::memory_order_relaxed)) { - wt_.store(cur_wt + 1, std::memory_order_release); - } - k = 0; - } - else { - std::memcpy(buff, &(elems[circ::index_of(cur_rd)].data_), sizeof(buff)); - if (rd_.compare_exchange_weak(cur_rd, cur_rd + 1, std::memory_order_release)) { - std::forward(f)(buff); - std::forward(out)(true); - return true; - } - ipc::yield(k); - } - } - } -}; - -template <> -struct prod_cons_impl> { - - using rc_t = std::uint64_t; - - enum : rc_t { - ep_mask = 0x00000000ffffffffull, - ep_incr = 0x0000000100000000ull - }; - - template - struct elem_t { - std::aligned_storage_t data_ {}; - std::atomic rc_ { 0 }; // read-counter - }; - - alignas(cache_line_size) std::atomic wt_; // write index - alignas(cache_line_size) rc_t epoch_ { 0 }; // only one writer - - circ::u2_t cursor() const noexcept { - return wt_.load(std::memory_order_acquire); - } - - template - bool push(W* wrapper, F&& f, E* elems) { - E* el; - for (unsigned k = 0;;) { - circ::cc_t cc = wrapper->elems()->connections(std::memory_order_relaxed); - if (cc == 0) return false; // no reader - el = elems + circ::index_of(wt_.load(std::memory_order_relaxed)); - // check all consumers have finished reading this element - auto cur_rc = el->rc_.load(std::memory_order_acquire); - circ::cc_t rem_cc = cur_rc & ep_mask; - if ((cc & rem_cc) && ((cur_rc & ~ep_mask) == epoch_)) { - return false; // has not finished yet - } - // consider rem_cc to be 0 here - if (el->rc_.compare_exchange_weak( - cur_rc, epoch_ | static_cast(cc), std::memory_order_release)) { - break; - } - ipc::yield(k); - } - std::forward(f)(&(el->data_)); - wt_.fetch_add(1, std::memory_order_release); - return true; - } - - template - bool force_push(W* wrapper, F&& f, E* elems) { - E* el; - epoch_ += ep_incr; - for (unsigned k = 0;;) { - circ::cc_t cc = wrapper->elems()->connections(std::memory_order_relaxed); - if (cc == 0) return false; // no reader - el = elems + circ::index_of(wt_.load(std::memory_order_relaxed)); - // check all consumers have finished reading this element - auto cur_rc = el->rc_.load(std::memory_order_acquire); - circ::cc_t rem_cc = cur_rc & ep_mask; - if (cc & rem_cc) { - ipc::log("force_push: k = %u, cc = %u, rem_cc = %u\n", k, cc, rem_cc); - cc = wrapper->elems()->disconnect_receiver(rem_cc); // disconnect all invalid readers - if (cc == 0) return false; // no reader - } - // just compare & exchange - if (el->rc_.compare_exchange_weak( - cur_rc, epoch_ | static_cast(cc), std::memory_order_release)) { - break; - } - ipc::yield(k); - } - std::forward(f)(&(el->data_)); - wt_.fetch_add(1, std::memory_order_release); - return true; - } - - template - bool pop(W* wrapper, circ::u2_t& cur, F&& f, R&& out, E* elems) { - if (cur == cursor()) return false; // acquire - auto* el = elems + circ::index_of(cur++); - std::forward(f)(&(el->data_)); - for (unsigned k = 0;;) { - auto cur_rc = el->rc_.load(std::memory_order_acquire); - if ((cur_rc & ep_mask) == 0) { - std::forward(out)(true); - return true; - } - auto nxt_rc = cur_rc & ~static_cast(wrapper->connected_id()); - if (el->rc_.compare_exchange_weak(cur_rc, nxt_rc, std::memory_order_release)) { - std::forward(out)((nxt_rc & ep_mask) == 0); - return true; - } - ipc::yield(k); 
- } - } -}; - -template <> -struct prod_cons_impl> { - - using rc_t = std::uint64_t; - using flag_t = std::uint64_t; - - enum : rc_t { - rc_mask = 0x00000000ffffffffull, - ep_mask = 0x00ffffffffffffffull, - ep_incr = 0x0100000000000000ull, - ic_mask = 0xff000000ffffffffull, - ic_incr = 0x0000000100000000ull - }; - - template - struct elem_t { - std::aligned_storage_t data_ {}; - std::atomic rc_ { 0 }; // read-counter - std::atomic f_ct_ { 0 }; // commit flag - }; - - alignas(cache_line_size) std::atomic ct_; // commit index - alignas(cache_line_size) std::atomic epoch_ { 0 }; - - circ::u2_t cursor() const noexcept { - return ct_.load(std::memory_order_acquire); - } - - constexpr static rc_t inc_rc(rc_t rc) noexcept { - return (rc & ic_mask) | ((rc + ic_incr) & ~ic_mask); - } - - constexpr static rc_t inc_mask(rc_t rc) noexcept { - return inc_rc(rc) & ~rc_mask; - } - - template - bool push(W* wrapper, F&& f, E* elems) { - E* el; - circ::u2_t cur_ct; - rc_t epoch = epoch_.load(std::memory_order_acquire); - for (unsigned k = 0;;) { - circ::cc_t cc = wrapper->elems()->connections(std::memory_order_relaxed); - if (cc == 0) return false; // no reader - el = elems + circ::index_of(cur_ct = ct_.load(std::memory_order_relaxed)); - // check all consumers have finished reading this element - auto cur_rc = el->rc_.load(std::memory_order_relaxed); - circ::cc_t rem_cc = cur_rc & rc_mask; - if ((cc & rem_cc) && ((cur_rc & ~ep_mask) == epoch)) { - return false; // has not finished yet - } - else if (!rem_cc) { - auto cur_fl = el->f_ct_.load(std::memory_order_acquire); - if ((cur_fl != cur_ct) && cur_fl) { - return false; // full - } - } - // consider rem_cc to be 0 here - if (el->rc_.compare_exchange_weak( - cur_rc, inc_mask(epoch | (cur_rc & ep_mask)) | static_cast(cc), std::memory_order_relaxed) && - epoch_.compare_exchange_weak(epoch, epoch, std::memory_order_acq_rel)) { - break; - } - ipc::yield(k); - } - // only one thread/process would touch here at one time - ct_.store(cur_ct + 1, std::memory_order_release); - std::forward(f)(&(el->data_)); - // set flag & try update wt - el->f_ct_.store(~static_cast(cur_ct), std::memory_order_release); - return true; - } - - template - bool force_push(W* wrapper, F&& f, E* elems) { - E* el; - circ::u2_t cur_ct; - rc_t epoch = epoch_.fetch_add(ep_incr, std::memory_order_release) + ep_incr; - for (unsigned k = 0;;) { - circ::cc_t cc = wrapper->elems()->connections(std::memory_order_relaxed); - if (cc == 0) return false; // no reader - el = elems + circ::index_of(cur_ct = ct_.load(std::memory_order_relaxed)); - // check all consumers have finished reading this element - auto cur_rc = el->rc_.load(std::memory_order_acquire); - circ::cc_t rem_cc = cur_rc & rc_mask; - if (cc & rem_cc) { - ipc::log("force_push: k = %u, cc = %u, rem_cc = %u\n", k, cc, rem_cc); - cc = wrapper->elems()->disconnect_receiver(rem_cc); // disconnect all invalid readers - if (cc == 0) return false; // no reader - } - // just compare & exchange - if (el->rc_.compare_exchange_weak( - cur_rc, inc_mask(epoch | (cur_rc & ep_mask)) | static_cast(cc), std::memory_order_relaxed)) { - if (epoch == epoch_.load(std::memory_order_acquire)) { - break; - } - else if (push(wrapper, std::forward(f), elems)) { - return true; - } - epoch = epoch_.fetch_add(ep_incr, std::memory_order_release) + ep_incr; - } - ipc::yield(k); - } - // only one thread/process would touch here at one time - ct_.store(cur_ct + 1, std::memory_order_release); - std::forward(f)(&(el->data_)); - // set flag & try update wt - 
el->f_ct_.store(~static_cast(cur_ct), std::memory_order_release); - return true; - } - - template - bool pop(W* wrapper, circ::u2_t& cur, F&& f, R&& out, E(& elems)[N]) { - auto* el = elems + circ::index_of(cur); - auto cur_fl = el->f_ct_.load(std::memory_order_acquire); - if (cur_fl != ~static_cast(cur)) { - return false; // empty - } - ++cur; - std::forward(f)(&(el->data_)); - for (unsigned k = 0;;) { - auto cur_rc = el->rc_.load(std::memory_order_acquire); - if ((cur_rc & rc_mask) == 0) { - std::forward(out)(true); - el->f_ct_.store(cur + N - 1, std::memory_order_release); - return true; - } - auto nxt_rc = inc_rc(cur_rc) & ~static_cast(wrapper->connected_id()); - bool last_one = false; - if ((last_one = (nxt_rc & rc_mask) == 0)) { - el->f_ct_.store(cur + N - 1, std::memory_order_release); - } - if (el->rc_.compare_exchange_weak(cur_rc, nxt_rc, std::memory_order_release)) { - std::forward(out)(last_one); - return true; - } - ipc::yield(k); - } - } -}; - -} // namespace ipc diff --git a/spaces/qinzhu/moe-tts-tech/transforms.py b/spaces/qinzhu/moe-tts-tech/transforms.py deleted file mode 100644 index 4793d67ca5a5630e0ffe0f9fb29445c949e64dae..0000000000000000000000000000000000000000 --- a/spaces/qinzhu/moe-tts-tech/transforms.py +++ /dev/null @@ -1,193 +0,0 @@ -import torch -from torch.nn import functional as F - -import numpy as np - - -DEFAULT_MIN_BIN_WIDTH = 1e-3 -DEFAULT_MIN_BIN_HEIGHT = 1e-3 -DEFAULT_MIN_DERIVATIVE = 1e-3 - - -def piecewise_rational_quadratic_transform(inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - tails=None, - tail_bound=1., - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE): - - if tails is None: - spline_fn = rational_quadratic_spline - spline_kwargs = {} - else: - spline_fn = unconstrained_rational_quadratic_spline - spline_kwargs = { - 'tails': tails, - 'tail_bound': tail_bound - } - - outputs, logabsdet = spline_fn( - inputs=inputs, - unnormalized_widths=unnormalized_widths, - unnormalized_heights=unnormalized_heights, - unnormalized_derivatives=unnormalized_derivatives, - inverse=inverse, - min_bin_width=min_bin_width, - min_bin_height=min_bin_height, - min_derivative=min_derivative, - **spline_kwargs - ) - return outputs, logabsdet - - -def searchsorted(bin_locations, inputs, eps=1e-6): - bin_locations[..., -1] += eps - return torch.sum( - inputs[..., None] >= bin_locations, - dim=-1 - ) - 1 - - -def unconstrained_rational_quadratic_spline(inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - tails='linear', - tail_bound=1., - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE): - inside_interval_mask = (inputs >= -tail_bound) & (inputs <= tail_bound) - outside_interval_mask = ~inside_interval_mask - - outputs = torch.zeros_like(inputs) - logabsdet = torch.zeros_like(inputs) - - if tails == 'linear': - unnormalized_derivatives = F.pad(unnormalized_derivatives, pad=(1, 1)) - constant = np.log(np.exp(1 - min_derivative) - 1) - unnormalized_derivatives[..., 0] = constant - unnormalized_derivatives[..., -1] = constant - - outputs[outside_interval_mask] = inputs[outside_interval_mask] - logabsdet[outside_interval_mask] = 0 - else: - raise RuntimeError('{} tails are not implemented.'.format(tails)) - - outputs[inside_interval_mask], logabsdet[inside_interval_mask] = rational_quadratic_spline( - 
inputs=inputs[inside_interval_mask], - unnormalized_widths=unnormalized_widths[inside_interval_mask, :], - unnormalized_heights=unnormalized_heights[inside_interval_mask, :], - unnormalized_derivatives=unnormalized_derivatives[inside_interval_mask, :], - inverse=inverse, - left=-tail_bound, right=tail_bound, bottom=-tail_bound, top=tail_bound, - min_bin_width=min_bin_width, - min_bin_height=min_bin_height, - min_derivative=min_derivative - ) - - return outputs, logabsdet - -def rational_quadratic_spline(inputs, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=False, - left=0., right=1., bottom=0., top=1., - min_bin_width=DEFAULT_MIN_BIN_WIDTH, - min_bin_height=DEFAULT_MIN_BIN_HEIGHT, - min_derivative=DEFAULT_MIN_DERIVATIVE): - if torch.min(inputs) < left or torch.max(inputs) > right: - raise ValueError('Input to a transform is not within its domain') - - num_bins = unnormalized_widths.shape[-1] - - if min_bin_width * num_bins > 1.0: - raise ValueError('Minimal bin width too large for the number of bins') - if min_bin_height * num_bins > 1.0: - raise ValueError('Minimal bin height too large for the number of bins') - - widths = F.softmax(unnormalized_widths, dim=-1) - widths = min_bin_width + (1 - min_bin_width * num_bins) * widths - cumwidths = torch.cumsum(widths, dim=-1) - cumwidths = F.pad(cumwidths, pad=(1, 0), mode='constant', value=0.0) - cumwidths = (right - left) * cumwidths + left - cumwidths[..., 0] = left - cumwidths[..., -1] = right - widths = cumwidths[..., 1:] - cumwidths[..., :-1] - - derivatives = min_derivative + F.softplus(unnormalized_derivatives) - - heights = F.softmax(unnormalized_heights, dim=-1) - heights = min_bin_height + (1 - min_bin_height * num_bins) * heights - cumheights = torch.cumsum(heights, dim=-1) - cumheights = F.pad(cumheights, pad=(1, 0), mode='constant', value=0.0) - cumheights = (top - bottom) * cumheights + bottom - cumheights[..., 0] = bottom - cumheights[..., -1] = top - heights = cumheights[..., 1:] - cumheights[..., :-1] - - if inverse: - bin_idx = searchsorted(cumheights, inputs)[..., None] - else: - bin_idx = searchsorted(cumwidths, inputs)[..., None] - - input_cumwidths = cumwidths.gather(-1, bin_idx)[..., 0] - input_bin_widths = widths.gather(-1, bin_idx)[..., 0] - - input_cumheights = cumheights.gather(-1, bin_idx)[..., 0] - delta = heights / widths - input_delta = delta.gather(-1, bin_idx)[..., 0] - - input_derivatives = derivatives.gather(-1, bin_idx)[..., 0] - input_derivatives_plus_one = derivatives[..., 1:].gather(-1, bin_idx)[..., 0] - - input_heights = heights.gather(-1, bin_idx)[..., 0] - - if inverse: - a = (((inputs - input_cumheights) * (input_derivatives - + input_derivatives_plus_one - - 2 * input_delta) - + input_heights * (input_delta - input_derivatives))) - b = (input_heights * input_derivatives - - (inputs - input_cumheights) * (input_derivatives - + input_derivatives_plus_one - - 2 * input_delta)) - c = - input_delta * (inputs - input_cumheights) - - discriminant = b.pow(2) - 4 * a * c - assert (discriminant >= 0).all() - - root = (2 * c) / (-b - torch.sqrt(discriminant)) - outputs = root * input_bin_widths + input_cumwidths - - theta_one_minus_theta = root * (1 - root) - denominator = input_delta + ((input_derivatives + input_derivatives_plus_one - 2 * input_delta) - * theta_one_minus_theta) - derivative_numerator = input_delta.pow(2) * (input_derivatives_plus_one * root.pow(2) - + 2 * input_delta * theta_one_minus_theta - + input_derivatives * (1 - root).pow(2)) - logabsdet = 
torch.log(derivative_numerator) - 2 * torch.log(denominator) - - return outputs, -logabsdet - else: - theta = (inputs - input_cumwidths) / input_bin_widths - theta_one_minus_theta = theta * (1 - theta) - - numerator = input_heights * (input_delta * theta.pow(2) - + input_derivatives * theta_one_minus_theta) - denominator = input_delta + ((input_derivatives + input_derivatives_plus_one - 2 * input_delta) - * theta_one_minus_theta) - outputs = input_cumheights + numerator / denominator - - derivative_numerator = input_delta.pow(2) * (input_derivatives_plus_one * theta.pow(2) - + 2 * input_delta * theta_one_minus_theta - + input_derivatives * (1 - theta).pow(2)) - logabsdet = torch.log(derivative_numerator) - 2 * torch.log(denominator) - - return outputs, logabsdet diff --git a/spaces/qtoino/form_matcher/public/form1.html b/spaces/qtoino/form_matcher/public/form1.html deleted file mode 100644 index 90ce8e09bad32fa11f2b5a5d26532010ad788f6f..0000000000000000000000000000000000000000 --- a/spaces/qtoino/form_matcher/public/form1.html +++ /dev/null @@ -1,31 +0,0 @@ - - - - - Contact Form - - -

        Contact Form

        -
        - - -
        - - -
        - - -
        - - -
        - - -
        - - -

        - -
        - - \ No newline at end of file diff --git a/spaces/quidiaMuxgu/Expedit-SAM/CRACK Picture Instruments Image 2 LUT Pro 5.2.7 (64-bit) !!TOP!!.md b/spaces/quidiaMuxgu/Expedit-SAM/CRACK Picture Instruments Image 2 LUT Pro 5.2.7 (64-bit) !!TOP!!.md deleted file mode 100644 index 7c85c05015868271b22d54656e4e5fa856b779af..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/CRACK Picture Instruments Image 2 LUT Pro 5.2.7 (64-bit) !!TOP!!.md +++ /dev/null @@ -1,6 +0,0 @@ -

        CRACK Picture Instruments Image 2 LUT Pro 5.2.7 (64-bit)


        DOWNLOAD »»» https://geags.com/2uCrqe



        - -1750 Submitted by: Enginl'ering Consultants Group (ECG) Bldg. 2, Block 10, ... Interpreted from Landsat-TM Images, Scale 1:100.000 ESIA for Helwan South Power ... ENGINEERING CONSULTANTS GROUP SA 5.2.7 Background Air Quality Air ... South Power Plant Transmission Line Interconnection Project CH 5-Page 64 ... 1fdad05405
        -
        -
        -

        diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Download Renault Carminat Navigation Communication - Europe V32.2 Torrent 16 !EXCLUSIVE!.md b/spaces/quidiaMuxgu/Expedit-SAM/Download Renault Carminat Navigation Communication - Europe V32.2 Torrent 16 !EXCLUSIVE!.md deleted file mode 100644 index 53856f97830b80cabaddedbe9c29c632e526235b..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/Download Renault Carminat Navigation Communication - Europe V32.2 Torrent 16 !EXCLUSIVE!.md +++ /dev/null @@ -1,6 +0,0 @@ -

        Download renault carminat navigation communication - europe v32.2 torrent 16


        Download ••• https://geags.com/2uCqoX



        - -Case Closed The Time Bombed Skyscraper Torrent Download ... 16 mai 2020 ... Dvd Gps Renault 2013 Cnc V32.2 Carminat Navigation Communication 40 ... carminat navigation communication europe v34, renault carminat navigation ... 4d29de3e1b
        -
        -
        -

        diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Ejay Techno 4 Reloaded Serial Number !!LINK!!.md b/spaces/quidiaMuxgu/Expedit-SAM/Ejay Techno 4 Reloaded Serial Number !!LINK!!.md deleted file mode 100644 index 3e1e683fc79a5dd9501db9d6b827dd9783703126..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/Ejay Techno 4 Reloaded Serial Number !!LINK!!.md +++ /dev/null @@ -1,6 +0,0 @@ -

        ejay techno 4 reloaded serial number


        Download Ziphttps://geags.com/2uCqBM



        - -Keil arm mdk 5.00 keygen serial crack PDF file: The Big Alfie Out. ... Chhote Miyan hindi 720p download ejay techno 4 reloaded serial number descargar teowin ... 4d29de3e1b
        -
        -
        -

        diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Flexisign Pro 75 V2 Crack.md b/spaces/quidiaMuxgu/Expedit-SAM/Flexisign Pro 75 V2 Crack.md deleted file mode 100644 index c0b20f7086f352b7316cbc22ff203305e658ca93..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/Flexisign Pro 75 V2 Crack.md +++ /dev/null @@ -1,14 +0,0 @@ -

        Flexisign Pro 75 V2 Crack


        Download ✔✔✔ https://geags.com/2uCqdX



        -
        -FlexiSIGN.Pro.v8.6.v2.build.1472-ISO 1CD Flexisign.Pro.7.6.v2-ISO 2CD(,) Scanvec.Amiable.Enroute.v3. 2- ISO 3CD??CAD/CAM?? ?? Top Systems Ltd.?? Vector.Optimizer.v4.2- ISO 4CD??CAD/CAM?? ?? -TwinCorporation.Vector.Vector.Pro.v8.2.- ISO 5CD??CAD/CAM?? ?? -Trace.Cut.v10.0.- ISO 6CD??CAD/CAM?? ? -Top Systems Ltd.??Vector.Optimizer.v4.2-ISO 7CD??CAD/CAM?? ?? -CAD.Mill.V8.1.-ISO 8CD??CAD/CAM?? ?? -CAD/CAM/CAE. -Vector.Auto.CAD.v7.1-ISO 9CD??CAD/CAM?? ?? -CAD.Mill.V8.1.-ISO 10CD??CAD/CAM?? ?? -Vector.Auto.CAD.v7.1.-ISO 11CD??CAD/CAM?? ?? 8a78ff9644
        -
        -
        -

        diff --git a/spaces/r3gm/RVC_HF/train/mel_processing.py b/spaces/r3gm/RVC_HF/train/mel_processing.py deleted file mode 100644 index 1c871ab6b838b174407d163c201df899cc3e2b14..0000000000000000000000000000000000000000 --- a/spaces/r3gm/RVC_HF/train/mel_processing.py +++ /dev/null @@ -1,130 +0,0 @@ -import torch -import torch.utils.data -from librosa.filters import mel as librosa_mel_fn - - -MAX_WAV_VALUE = 32768.0 - - -def dynamic_range_compression_torch(x, C=1, clip_val=1e-5): - """ - PARAMS - ------ - C: compression factor - """ - return torch.log(torch.clamp(x, min=clip_val) * C) - - -def dynamic_range_decompression_torch(x, C=1): - """ - PARAMS - ------ - C: compression factor used to compress - """ - return torch.exp(x) / C - - -def spectral_normalize_torch(magnitudes): - return dynamic_range_compression_torch(magnitudes) - - -def spectral_de_normalize_torch(magnitudes): - return dynamic_range_decompression_torch(magnitudes) - - -# Reusable banks -mel_basis = {} -hann_window = {} - - -def spectrogram_torch(y, n_fft, sampling_rate, hop_size, win_size, center=False): - """Convert waveform into Linear-frequency Linear-amplitude spectrogram. - - Args: - y :: (B, T) - Audio waveforms - n_fft - sampling_rate - hop_size - win_size - center - Returns: - :: (B, Freq, Frame) - Linear-frequency Linear-amplitude spectrogram - """ - # Validation - if torch.min(y) < -1.07: - print("min value is ", torch.min(y)) - if torch.max(y) > 1.07: - print("max value is ", torch.max(y)) - - # Window - Cache if needed - global hann_window - dtype_device = str(y.dtype) + "_" + str(y.device) - wnsize_dtype_device = str(win_size) + "_" + dtype_device - if wnsize_dtype_device not in hann_window: - hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to( - dtype=y.dtype, device=y.device - ) - - # Padding - y = torch.nn.functional.pad( - y.unsqueeze(1), - (int((n_fft - hop_size) / 2), int((n_fft - hop_size) / 2)), - mode="reflect", - ) - y = y.squeeze(1) - - # Complex Spectrogram :: (B, T) -> (B, Freq, Frame, RealComplex=2) - spec = torch.stft( - y, - n_fft, - hop_length=hop_size, - win_length=win_size, - window=hann_window[wnsize_dtype_device], - center=center, - pad_mode="reflect", - normalized=False, - onesided=True, - return_complex=False, - ) - - # Linear-frequency Linear-amplitude spectrogram :: (B, Freq, Frame, RealComplex=2) -> (B, Freq, Frame) - spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6) - return spec - - -def spec_to_mel_torch(spec, n_fft, num_mels, sampling_rate, fmin, fmax): - # MelBasis - Cache if needed - global mel_basis - dtype_device = str(spec.dtype) + "_" + str(spec.device) - fmax_dtype_device = str(fmax) + "_" + dtype_device - if fmax_dtype_device not in mel_basis: - mel = librosa_mel_fn( - sr=sampling_rate, n_fft=n_fft, n_mels=num_mels, fmin=fmin, fmax=fmax - ) - mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to( - dtype=spec.dtype, device=spec.device - ) - - # Mel-frequency Log-amplitude spectrogram :: (B, Freq=num_mels, Frame) - melspec = torch.matmul(mel_basis[fmax_dtype_device], spec) - melspec = spectral_normalize_torch(melspec) - return melspec - - -def mel_spectrogram_torch( - y, n_fft, num_mels, sampling_rate, hop_size, win_size, fmin, fmax, center=False -): - """Convert waveform into Mel-frequency Log-amplitude spectrogram. 
- - Args: - y :: (B, T) - Waveforms - Returns: - melspec :: (B, Freq, Frame) - Mel-frequency Log-amplitude spectrogram - """ - # Linear-frequency Linear-amplitude spectrogram :: (B, T) -> (B, Freq, Frame) - spec = spectrogram_torch(y, n_fft, sampling_rate, hop_size, win_size, center) - - # Mel-frequency Log-amplitude spectrogram :: (B, Freq, Frame) -> (B, Freq=num_mels, Frame) - melspec = spec_to_mel_torch(spec, n_fft, num_mels, sampling_rate, fmin, fmax) - - return melspec diff --git a/spaces/radames/MusicGen-Continuation/audiocraft/utils/export.py b/spaces/radames/MusicGen-Continuation/audiocraft/utils/export.py deleted file mode 100644 index b513b52267f7bf5aae09282c15b0a2e20c8a8fee..0000000000000000000000000000000000000000 --- a/spaces/radames/MusicGen-Continuation/audiocraft/utils/export.py +++ /dev/null @@ -1,56 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -""" -Utility to export a training checkpoint to a lightweight release checkpoint. -""" - -from pathlib import Path -import typing as tp - -from omegaconf import OmegaConf, DictConfig -import torch - - -def _clean_lm_cfg(cfg: DictConfig): - OmegaConf.set_struct(cfg, False) - # This used to be set automatically in the LM solver, need a more robust solution - # for the future. - cfg['transformer_lm']['card'] = 2048 - cfg['transformer_lm']['n_q'] = 4 - # Experimental params no longer supported. - bad_params = ['spectral_norm_attn_iters', 'spectral_norm_ff_iters', - 'residual_balancer_attn', 'residual_balancer_ff', 'layer_drop'] - for name in bad_params: - del cfg['transformer_lm'][name] - OmegaConf.set_struct(cfg, True) - return cfg - - -def export_encodec(checkpoint_path: tp.Union[Path, str], out_folder: tp.Union[Path, str]): - sig = Path(checkpoint_path).parent.name - assert len(sig) == 8, "Not a valid Dora signature" - pkg = torch.load(checkpoint_path, 'cpu') - new_pkg = { - 'best_state': pkg['ema']['state']['model'], - 'xp.cfg': OmegaConf.to_yaml(pkg['xp.cfg']), - } - out_file = Path(out_folder) / f'{sig}.th' - torch.save(new_pkg, out_file) - return out_file - - -def export_lm(checkpoint_path: tp.Union[Path, str], out_folder: tp.Union[Path, str]): - sig = Path(checkpoint_path).parent.name - assert len(sig) == 8, "Not a valid Dora signature" - pkg = torch.load(checkpoint_path, 'cpu') - new_pkg = { - 'best_state': pkg['fsdp_best_state']['model'], - 'xp.cfg': OmegaConf.to_yaml(_clean_lm_cfg(pkg['xp.cfg'])) - } - out_file = Path(out_folder) / f'{sig}.th' - torch.save(new_pkg, out_file) - return out_file diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Avatar 3D BRRip Anaglyph 720p.Dual Audio [Eng-Hindi]Harshad How to Get the Most Out of Your 3D Glasses and Speakers.md b/spaces/raedeXanto/academic-chatgpt-beta/Avatar 3D BRRip Anaglyph 720p.Dual Audio [Eng-Hindi]Harshad How to Get the Most Out of Your 3D Glasses and Speakers.md deleted file mode 100644 index 2bf11404eaef6d6201917660df5a524b5ce08a97..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/Avatar 3D BRRip Anaglyph 720p.Dual Audio [Eng-Hindi]Harshad How to Get the Most Out of Your 3D Glasses and Speakers.md +++ /dev/null @@ -1,100 +0,0 @@ - -

        Avatar 3D BRRip Anaglyph 720p.Dual Audio [Eng-Hindi]~Harshad: A Review

        -

        Are you looking for a movie that will take you to a different world, where you can experience amazing adventures and stunning visuals? If so, you might want to check out Avatar 3D BRRip Anaglyph 720p.Dual Audio [Eng-Hindi]~Harshad, a torrent file that contains a high-quality version of the 2009 epic science fiction film directed by James Cameron. In this article, I will review this movie and explain why it is worth watching.

        -

        Avatar 3D BRRip Anaglyph 720p.Dual Audio [Eng-Hindi]~Harshad


        Download ✵✵✵ https://tinourl.com/2uL5b0



        -

        Introduction

        -

        Before we dive into the plot and analysis of Avatar, let me first introduce some basic concepts that will help you understand what this movie is about and how it was made.

        -

        What is Avatar?

        -

        Avatar is a movie that tells the story of Jake Sully, a former marine who joins a mission to explore and exploit a distant moon called Pandora, where a humanoid race called the Na'vi lives. Jake becomes part of an avatar program, which allows him to control a genetically engineered body that resembles the Na'vi. He soon falls in love with Neytiri, a Na'vi princess, and becomes involved in a conflict between the humans and the natives.

        -

        What is anaglyph 3D?

        -

        Anaglyph 3D is a technique that creates a stereoscopic 3D effect by using two images of different colors, usually red and cyan, that are superimposed on each other. When viewed with special glasses that have red and cyan lenses, the images appear to have depth and dimension. Avatar was originally released in theaters in different formats of 3D, but anaglyph 3D is one of the most accessible and affordable ways to watch it at home.
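
To make this more concrete, here is a minimal sketch of how a red-cyan anaglyph frame can be composed from a stereo pair (assuming Python with the Pillow library; the file names left.jpg and right.jpg are hypothetical stand-ins for the left-eye and right-eye views):

```python
# Minimal red-cyan anaglyph sketch.
# Assumes Pillow is installed and left.jpg / right.jpg are same-size
# left-eye and right-eye views of the same scene.
from PIL import Image

left = Image.open("left.jpg").convert("RGB")
right = Image.open("right.jpg").convert("RGB")

# The red channel comes from the left-eye view; the green and blue
# (cyan) channels come from the right-eye view.
r, _, _ = left.split()
_, g, b = right.split()

anaglyph = Image.merge("RGB", (r, g, b))
anaglyph.save("anaglyph.jpg")  # view the result with red-cyan glasses
```

Each eye then sees mostly its own view through the matching lens, which is what produces the illusion of depth.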

        -

        What is dual audio?

        -

        Dual audio is a feature that allows you to choose between two audio tracks when watching a movie. In this case, you can choose between English and Hindi, depending on your preference and understanding. Dual audio is useful for people who want to enjoy movies in their native language or learn a new language by listening to it.
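
As a rough illustration of what "choosing a track" means outside of a player's menu, here is a small sketch that keeps only one of the two audio streams (assuming Python with an ffmpeg binary on the PATH; the file name movie.mkv, the stream index, and the output name are hypothetical):

```python
# Sketch: remux a dual-audio file so that only the second audio track
# (for example, the Hindi track) is kept alongside the video.
# Assumes ffmpeg is installed and movie.mkv has at least two audio streams.
import subprocess

subprocess.run([
    "ffmpeg", "-i", "movie.mkv",
    "-map", "0:v:0",   # first video stream
    "-map", "0:a:1",   # second audio stream (indexes are 0-based)
    "-c", "copy",      # copy streams without re-encoding
    "movie_single_audio.mkv",
], check=True)
```

Most desktop players (VLC, MPC-HC, and similar) can also switch between the two tracks directly from their audio menu, so a step like this is only needed if you want a single-language copy.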

        -

        Plot summary

        -

        Now that you have some background information about Avatar, let me give you a brief summary of its plot. Be warned, though, that this section contains spoilers, so if you haven't watched the movie yet, you might want to skip it.

        -

        Avatar 3D Blu-ray Anaglyph 720p.Hindi-English Dual Audio~Harshad
        -Avatar 3D BRRip Red-Cyan 720p.Dual Audio [Eng-Hindi]~Harshad
        -Avatar 3D BRRip Anaglyph 720p.English-Hindi Dual Audio~Harshad
        -Avatar 3D Blu-ray Red-Cyan 720p.Dual Audio [Eng-Hindi]~Harshad
        -Avatar 3D BRRip Anaglyph 720p.Hindi-English Dual Audio~Harshad
        -Avatar 3D Blu-ray Anaglyph 720p.Dual Audio [Eng-Hindi]~Harshad
        -Avatar 3D BRRip Red-Cyan 720p.English-Hindi Dual Audio~Harshad
        -Avatar 3D Blu-ray Red-Cyan 720p.Hindi-English Dual Audio~Harshad
        -Avatar 3D BRRip Anaglyph 720p.Dual Audio [Hindi-Eng]~Harshad
        -Avatar 3D Blu-ray Anaglyph 720p.English-Hindi Dual Audio~Harshad
        -Avatar 3D BRRip Red-Cyan 720p.Dual Audio [Hindi-Eng]~Harshad
        -Avatar 3D Blu-ray Red-Cyan 720p.English-Hindi Dual Audio~Harshad
        -Avatar 3D BRRip Anaglyph 1080p.Dual Audio [Eng-Hindi]~Harshad
        -Avatar 3D Blu-ray Anaglyph 1080p.Dual Audio [Eng-Hindi]~Harshad
        -Avatar 3D BRRip Red-Cyan 1080p.Dual Audio [Eng-Hindi]~Harshad
        -Avatar 3D Blu-ray Red-Cyan 1080p.Dual Audio [Eng-Hindi]~Harshad
        -Avatar 3D BRRip Anaglyph 1080p.English-Hindi Dual Audio~Harshad
        -Avatar 3D Blu-ray Anaglyph 1080p.English-Hindi Dual Audio~Harshad
        -Avatar 3D BRRip Red-Cyan 1080p.English-Hindi Dual Audio~Harshad
        -Avatar 3D Blu-ray Red-Cyan 1080p.English-Hindi Dual Audio~Harshad
        -Avatar 3D BRRip Anaglyph 1080p.Hindi-English Dual Audio~Harshad
        -Avatar 3D Blu-ray Anaglyph 1080p.Hindi-English Dual Audio~Harshad
        -Avatar 3D BRRip Red-Cyan 1080p.Hindi-English Dual Audio~Harshad
        -Avatar 3D Blu-ray Red-Cyan 1080p.Hindi-English Dual Audio~Harshad
        -Avatar 3D BRRip Anaglyph HD.Dual Audio [Eng-Hindi]~Harshad
        -Avatar 3D Blu-ray Anaglyph HD.Dual Audio [Eng-Hindi]~Harshad
        -Avatar 3D BRRip Red-Cyan HD.Dual Audio [Eng-Hindi]~Harshad
        -Avatar 3D Blu-ray Red-Cyan HD.Dual Audio [Eng-Hindi]~Harshad
        -Avatar 3D BRRip Anaglyph HD.English-Hindi Dual Audio~Harshad
        -Avatar 3D Blu-ray Anaglyph HD.English-Hindi Dual Audio~Harshad
        -Avatar

        -

        The setting

        -

        The movie takes place in the year 2154, when humans have depleted most of the natural resources on Earth and are looking for alternatives in other planets. One of them is Pandora, a lush moon that orbits a gas giant called Polyphemus. Pandora has a rich biodiversity and a valuable mineral called unobtanium, which is sought after by a corporation called RDA. However, Pandora also has a hostile environment and dangerous creatures that make it difficult for humans to survive.

        -

        The main characters

        -
          -
        • Jake Sully (Sam Worthington) - He is the protagonist of the movie. He is a paraplegic former marine who joins the avatar program after his twin brother dies. He becomes fascinated by Pandora and its inhabitants, and eventually betrays the humans to join the Na'vi.
        • -
        • Neytiri (Zoe Saldana) - She is the deuteragonist of the movie. She is the daughter of the chief and the spiritual leader of the Omaticaya clan, one of the Na'vi tribes. She rescues Jake from a pack of viperwolves and becomes his teacher and lover.
        • -
        • Grace Augustine (Sigourney Weaver) - She is a supporting character in the movie. She is a scientist who leads the avatar program and studies Pandora's flora and fauna. She is sympathetic to the Na'vi and tries to protect them from RDA's aggression.
        • -
        • Colonel Miles Quaritch (Stephen Lang) - He is the main antagonist of the movie. He is the head of RDA's security force and a ruthless military leader who wants to destroy the Na'vi and their home tree to access the unobtanium deposits underneath.
        • -
        • Parker Selfridge (Giovanni Ribisi) - He is a secondary antagonist in the movie. He is the administrator of RDA's operations on Pandora and Quaritch's boss. He is greedy and arrogant, but also cowardly and indecisive.
        • -
        -

        The conflict

        -

        The main conflict of the movie arises from the clash between two cultures: the human culture that values technology, progress, and profit; and the Na'vi culture that values nature, harmony, and spirituality. The humans want to exploit Pandora's resources regardless of the consequences for its inhabitants; while the Na'vi want to preserve their way of life and their connection to Eywa, their goddess and life force.

        -

        The resolution

        -

        The climax of the movie occurs when Quaritch launches an attack on the home tree of the Omaticaya clan, killing many Na'vi and destroying their habitat. Jake tries to stop him but fails. He then rallies other Na'vi clans to fight back against RDA's forces in an epic battle. With Eywa's help, they manage to defeat Quaritch and his army. Jake then transfers his consciousness permanently into his avatar body and becomes one of the Na'vi.

        -

        Analysis

        -

        In this section, I will analyze some aspects of Avatar that make it an outstanding movie: its visual effects, its themes and messages, and its cultural impact.

        -

        The visual effects

        -

One of the most impressive features of Avatar is its stunning visual effects, which create a realistic and immersive experience for viewers. The movie uses cutting-edge technology such as performance and facial capture, computer-generated imagery (CGI), stereoscopic 3D, photorealistic rendering, and digital cinematography to bring Pandora and its creatures to life.

        -

        The movie also uses color symbolism to contrast different elements: blue represents nature, life, spirituality; red represents technology, warfare, violence; green represents balance, harmony, growth; yellow represents greed, corruption, destruction. The movie also uses lighting, sound, and music to enhance the mood and atmosphere of each scene.

        -

        The themes and messages

        -

        Another remarkable aspect of Avatar is its exploration of various themes and messages that are relevant and meaningful for today's society. Some of them are:

        -
          -
        • Environmentalism - The movie criticizes the human exploitation of natural resources and its negative impact on biodiversity and climate change. It also celebrates the beauty and diversity of nature and its importance for human well-being.
        • -
        • Colonialism - The movie denounces the human oppression of indigenous peoples and their cultures, histories, and identities. It also portrays the resistance and resilience of these peoples and their struggle for self-determination.
        • -
        • Spirituality - The movie explores the concept of interconnectedness between all living beings and their source of energy and consciousness. It also contrasts the human materialism and individualism with the Na'vi spiritualism and collectivism.
        • -
        • Identity - The movie examines the process of identity formation and transformation, especially through Jake's journey. It also questions the notions of belonging, loyalty, and betrayal, as well as the role of choice, agency, and responsibility.
        • -
        -

        The cultural impact

        -

The final aspect of Avatar that I will analyze is its cultural impact on audiences and media around the world. The movie was a global phenomenon that broke box office records, won awards, inspired fans, and sparked debates.

        -
          -
        • Box office records - Avatar was the first movie to gross more than $2 billion worldwide, and it remained the highest-grossing film of all time for almost a decade until Avengers: Endgame surpassed it in 2019. However, Avatar reclaimed its title in 2021 after a re-release in China. The movie also holds the record for the highest-grossing 3D film and the highest-grossing original film (not based on any pre-existing material).
        • -
        • Awards - Avatar received nine Academy Award nominations, including Best Picture and Best Director, and won three: Best Cinematography, Best Visual Effects, and Best Art Direction. It also won four Golden Globe Awards, including Best Motion Picture - Drama and Best Director. The movie was praised by critics and audiences alike for its technical achievements, emotional storytelling, and social commentary.
        • -
        • Fans - Avatar generated a huge fan base that created fan art, fan fiction, cosplay, websites, forums, podcasts, and even learned to speak Na'vi. Some fans also reported feeling depressed or suicidal after watching the movie because they wanted to live in Pandora instead of Earth. The movie also inspired some environmental activists to protest against deforestation and mining projects that threatened indigenous lands.
        • -
        • Debates - Avatar also sparked controversies and discussions on various topics, such as its portrayal of colonialism, imperialism, racism, militarism, capitalism, environmentalism, spirituality, identity, etc. Some critics accused the movie of being a white savior narrative that stereotyped and appropriated indigenous cultures. Others defended the movie as a powerful allegory that raised awareness and empathy for oppressed peoples and ecosystems.
        • -
        -

        Conclusion

        -

        In conclusion, Avatar 3D BRRip Anaglyph 720p.Dual Audio [Eng-Hindi]~Harshad is a movie that deserves your attention and appreciation. It is not only a visual spectacle that showcases the wonders of Pandora and its inhabitants, but also a compelling story that explores relevant and meaningful themes and messages. It is also a cultural phenomenon that had a lasting impact on audiences and media around the world. Whether you watch it for its entertainment value or its artistic merit, you will not regret it.

        -

        I hope you enjoyed this review and found it informative and engaging. If you have any questions or comments about the movie or the review, please feel free to share them below. Thank you for reading!

        -

        FAQs

        -
          -
        1. Q: When will Avatar 2 be released?
          -A: Avatar 2 is scheduled to be released on December 16, 2022. It will be followed by three more sequels in 2024, 2026, and 2028.
        2. -
        3. Q: How can I watch Avatar in 3D at home?
  -A: You will need an ordinary TV or monitor, a media player that can play the file, a pair of anaglyph (red-cyan) glasses, and a copy of the Avatar 3D BRRip Anaglyph 720p.Dual Audio [Eng-Hindi]~Harshad file. Because the 3D effect is encoded in the red and cyan colors, no special 3D display or 3D Blu-ray player is required.
        4. -
        5. Q: How can I learn Na'vi?
  -A: You can visit various websites and resources that teach Na'vi online, such as LearnNavi.org, Na'vi Dictionary, Na'vi Language Wiki, etc. You can also join Na'vi communities and forums where you can practice with other learners and speakers.
        6. -
        7. Q: What are some other movies similar to Avatar?
  -A: Some movies that have similar themes or elements to Avatar are Dances With Wolves, FernGully: The Last Rainforest, Pocahontas, The Last Samurai, District 9, The Matrix, Star Wars, etc.
        8. -
        9. Q: What are some of the best quotes from Avatar?
  -A: Some of the most memorable quotes from Avatar are:
  - "I see you." - Jake Sully and Neytiri
  - "You are not in Kansas anymore. You are on Pandora." - Grace Augustine
  - "Everything is backwards now. Like out there is the true world and in here is the dream." - Jake Sully
  - "This is how it's done. When people are sitting on shit that you want, you make them your enemy. Then you're justified in taking it." - Colonel Miles Quaritch
  - "You have a strong heart. No fear. But stupid! Ignorant like a child!" - Neytiri
  - "Sometimes your whole life boils down to one insane move." - Jake Sully
  - "I don't want to live in their world anymore." - Grace Augustine
  - "Eywa has heard you." - Mo'at
  - "You are like a baby. Making noise don't know what to do." - Tsu'tey
  - "You will never be one of The People!" - Eytukan
        10. -
        -

        0a6ba089eb
        -
        -
        \ No newline at end of file diff --git a/spaces/raedeXanto/academic-chatgpt-beta/B310n B510dn By Orthotamine Free ((BETTER)) Adjustment Program.md b/spaces/raedeXanto/academic-chatgpt-beta/B310n B510dn By Orthotamine Free ((BETTER)) Adjustment Program.md deleted file mode 100644 index 8080b080002e00f3b184d5cdb2d8e76ee68bf33e..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/B310n B510dn By Orthotamine Free ((BETTER)) Adjustment Program.md +++ /dev/null @@ -1,48 +0,0 @@ - -

        How to Reset Epson B310N and B510DN Printers with Orthotamine Free Adjustment Program

        - -

        If you own an Epson B310N or B510DN printer, you may encounter a problem when the printer stops working and displays an error message saying "Service required. Parts inside your printer are near the end of their service life. See your printer documentation." This means that the printer has reached its waste ink counter limit and needs to be reset.

        - -

        Fortunately, there is a solution to this problem. You can use a software tool called Orthotamine Free Adjustment Program to reset the waste ink counter of your Epson B310N or B510DN printer. This program is compatible with Windows XP, Vista, 7, 8, and 10 operating systems. It is also easy to use and does not require any technical skills.

        -

        b310n b510dn by orthotamine free adjustment program


        Download File ————— https://tinourl.com/2uL43L



        - -

        In this article, we will show you how to download and use Orthotamine Free Adjustment Program to reset your Epson B310N or B510DN printer. Follow these steps:

        - -
          -
        1. Download Orthotamine Free Adjustment Program from this link: http://grafguipy.yolasite.com/resources/b310n-b510dn-by-orthotamine-free-adjustment-program.pdf. This is a PDF file that contains the download link and instructions for the program.
        2. -
        3. Extract the ZIP file that you downloaded and run the file named "AdjProg.exe". This will open the Orthotamine Free Adjustment Program window.
        4. -
        5. Select your printer model from the drop-down menu and click "OK".
        6. -
        7. Click "Particular adjustment mode" on the main menu.
        8. -
        9. Select "Waste ink pad counter" from the list of options and click "OK".
        10. -
        11. Check the boxes next to "Main pad counter" and "Platen pad counter" and click "Check". This will show you the current values of the waste ink counters.
        12. -
        13. Click "Initialization" to reset the waste ink counters to zero. A message will appear asking you to turn off your printer. Click "OK" and turn off your printer.
        14. -
        15. Turn on your printer again and click "Finish". Your printer should now be reset and ready to use.
        16. -
        - -

        Congratulations! You have successfully reset your Epson B310N or B510DN printer with Orthotamine Free Adjustment Program. You can now enjoy printing without worrying about the waste ink counter limit.

        - -

        If you have any questions or problems with using Orthotamine Free Adjustment Program, you can visit this website for more information: https://kumu.io/texchlinsdarbooks/b310n-b510dn-by-orthotamine-free-extra-quality-adjustment-program. This is a Kumu project that contains more details and screenshots about the program.

        - -

        We hope this article was helpful for you. If you liked it, please share it with your friends who own Epson B310N or B510DN printers. Thank you for reading!

        - -

        Benefits of Epson B310N and B510DN Printers

        - -

        Epson B310N and B510DN printers are not only economical, but also powerful and versatile. They offer many benefits for businesses that need high-quality color printing over a network. Here are some of the benefits of using these printers:

        -

        - -
          -
• Fast print speeds: Epson B310N and B510DN printers can print up to 19 black and 18 color ISO pages per minute, which means you can get your documents printed quickly and efficiently.
        • -
        • High-capacity paper cassette: Epson B310N and B510DN printers have a front-loading paper tray that can hold up to 500 letter or legal size sheets, depending on the model. This reduces the need to reload paper frequently and allows you to print large volumes of documents without interruption.
        • -
        • Built-in duplexer: Epson B510DN printer has a built-in duplexer that enables automatic two-sided printing. This saves paper and money, as well as reduces environmental impact.
        • -
        • High-yield ink cartridges: Epson B310N and B510DN printers use four individual, high-yield ink cartridges that allow you to print thousands of pages before replacement. The ink cartridges are also easy to install and replace, and use DURABrite® Ultra pigment ink that delivers crisp, smudge-resistant text and vibrant images.
        • -
        • Versatile media handling: Epson B310N and B510DN printers can handle a variety of media types and sizes, such as plain paper, envelopes, labels, card stock, and photo paper. They also have a rear sheet feeder that can accommodate specialty media for added flexibility.
        • -
        • Easy network connectivity: Epson B310N and B510DN printers have a built-in Ethernet interface that allows you to connect them to your network easily and securely. You can also share them with multiple users and devices across your network.
        • -
• Reliable performance: Epson B310N and B510DN printers have a monthly duty cycle of 10,000 pages for the B-310N and 20,000 pages for the B-510DN, which means they can handle heavy workloads with minimal maintenance. They also have an intelligent nozzle verification system that checks and cleans the print head regularly to ensure optimal print quality.
        • -
        • Energy efficiency: Epson B310N and B510DN printers are ENERGY STAR qualified, which means they meet the strict energy efficiency guidelines set by the U.S. Environmental Protection Agency and the U.S. Department of Energy. They also have a power-saving mode that reduces power consumption when not in use.
        • -
        - -

        As you can see, Epson B310N and B510DN printers are ideal solutions for businesses that need fast, economical, and high-quality color printing over a network. They offer many benefits that can help you save time, money, and resources while enhancing your productivity and communication.

        - -

        If you want to learn more about Epson B310N and B510DN printers, you can visit this website for more details and specifications: https://epson.com/For-Work/Printers/Inkjet/Epson-B-510DN-Business-Color-Inkjet-Printer/p/C11CA67201. This is the official website of Epson America where you can find more information about their products and services.

        cec2833e83
        -
        -
        \ No newline at end of file diff --git a/spaces/raedeXanto/academic-chatgpt-beta/CorelDRAW.Graphics.Suite.X6.v16.2.0.998.HF1.Incl.Keymaker-CORE .rar REPACK.md b/spaces/raedeXanto/academic-chatgpt-beta/CorelDRAW.Graphics.Suite.X6.v16.2.0.998.HF1.Incl.Keymaker-CORE .rar REPACK.md deleted file mode 100644 index 42da87a52cb6e5eb5b94bf0afd583936780960af..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/CorelDRAW.Graphics.Suite.X6.v16.2.0.998.HF1.Incl.Keymaker-CORE .rar REPACK.md +++ /dev/null @@ -1,171 +0,0 @@ - -

        CorelDRAW Graphics Suite X6: A Powerful Design Software for Professionals

        -

        If you are looking for a software that can help you create stunning graphics for print, web, or any other medium, you might want to consider CorelDRAW Graphics Suite X6. This is a comprehensive package that includes CorelDRAW, Corel PHOTO-PAINT, Corel CONNECT, and other useful applications that can enhance your creativity and productivity. In this article, we will review what CorelDRAW Graphics Suite X6 is, how to install or reinstall it, how to use it, and what are its advantages and disadvantages.

        -

        What is CorelDRAW Graphics Suite X6?

        -

        CorelDRAW Graphics Suite X6 is a software that was released in 2012 by Corel Corporation, a Canadian company that specializes in graphics and productivity software. It is the 16th version of the CorelDRAW Graphics Suite, which was first launched in 1989. It is designed for professional graphic designers, artists, illustrators, web developers, and hobbyists who want to create and edit vector graphics, bitmap images, layouts, logos, icons, web graphics, and more.

        -

        CorelDRAW.Graphics.Suite.X6.v16.2.0.998.HF1.Incl.Keymaker-CORE .rar


        Download File > https://tinourl.com/2uL1VK



        -

        The main features and benefits of CorelDRAW Graphics Suite X6

        -

        CorelDRAW Graphics Suite X6 offers many features and benefits that can help you create amazing graphics with ease and efficiency. Some of them are:

        -
          -
• 64-bit and multi-core support: This feature allows you to handle larger and more complex files faster and more smoothly. It also improves the performance and stability of the software.
        • -
        • Styles: This feature allows you to create a consistent appearance across your graphics by applying predefined or custom styles to objects, text, or pages. You can also save, import, export, and share your styles with others.
        • -
        • Smart Carver: This feature allows you to remove unwanted objects from your photos without affecting the quality or composition of the image. You can also use it to resize or reshape your photos.
        • -
        • Smear, Twirl, Attract, and Repel tools: These tools allow you to refine your vector objects by applying creative effects such as smearing, twisting, attracting, or repelling parts of them.
        • -
        • OpenType support: This feature allows you to access advanced typographic features such as ligatures, swashes, ornaments, small caps, fractions, and more from OpenType fonts.
        • -
        • Placeholder text: This feature allows you to insert dummy text into your layout to get a preview of how it will look with real content.
        • -
        • Export dialog box: This feature allows you to export your graphics to over 100 popular file formats with customizable settings such as color profiles, file types, file sizes, and more.
        • -
        -

The system requirements and compatibility of CorelDRAW Graphics Suite X6

        -

        To install and run CorelDRAW Graphics Suite X6, you need to have a computer that meets the following minimum system requirements:

        - - - - - - - - -
Operating system: Windows 8 (32-bit or 64-bit editions), Windows 7 (32-bit or 64-bit editions), Windows Vista (32-bit or 64-bit editions), or Windows XP (32-bit)
Processor: Intel Pentium 4, AMD Athlon 64 or AMD Opteron
Memory: 1 GB RAM (2 GB recommended)
Hard disk space: 1.5 GB (for typical installation without content; additional disk space is required during installation)
Display: 1024 x 768 screen resolution (768 x 1024 on a Tablet PC)
Other hardware: DVD drive, mouse or tablet
Internet connection: required for product activation and access to the plug-in store
        -

        CorelDRAW Graphics Suite X6 is compatible with the following file formats:

        -
          -
        • Vector formats: AI, CDR, CMX, CGM, DXF, EMF, EPS, PDF, PLT, SVG, SWF, WMF, and more.
        • -
        • Bitmap formats: BMP, CAL, CLP, CUR, CUT, DCX, DIB, EMF, EPS, GIF, HDP, IFF, IMG, J2C, J2K, JP2, JPC, JPG, PNG, PSD, RAS, RAW, RIF, RLE, SCT, TGA, TIF, WBM, WEBP, WMF and more.
        • -
        • Font formats: FNT, FON, TTF, OTF and more.
        • -
        • Other formats: DOCX, PPTX and more.
        • -
        -

        How to install or reinstall CorelDRAW Graphics Suite X6?

        -

        If you have purchased CorelDRAW Graphics Suite X6 from a retail store or an online vendor, you can install it from a disc or an electronic download. Here are the steps to follow for each method:

        -

        If installing from a disc

        -
          -
        1. Insert the installation disc into your DVD drive.
        2. -
        3. If the Autorun screen does not appear automatically, browse to the DVD drive on your computer and double-click the setup.exe file.
        4. -
        5. Follow the instructions on your screen to install the software. You can choose to install the full suite or select individual applications.
        6. -
        7. When prompted for a serial number, enter the serial number that is printed on the back of your disc case or on your order confirmation email.
        8. -
        9. When the installation is complete, you can register your product online to get access to updates and other benefits.
        10. -
        -

        If installing from an electronic download

        -
          -
        1. Download the installation files from the link that was provided to you by email or on your order confirmation page.
        2. -
        3. Save the files to a location on your computer that you can easily find.
        4. -
        5. Browse to the location where you saved the files and double-click the setup.exe file.
        6. -
        7. Follow the instructions on your screen to install the software. You can choose to install the full suite or select individual applications.
        8. -
        9. When prompted for a serial number, enter the serial number that was provided to you by email or on your order confirmation page.
        10. -
        11. When the installation is complete, you can register your product online to get access to updates and other benefits.
        12. -
        -

        How to use the keymaker-CORE to activate the software

        -

If you have downloaded CorelDRAW Graphics Suite X6 from a torrent site or a file-sharing platform, you might need to use the keymaker-CORE to activate the software. The keymaker-CORE is a tool that generates serial numbers and activation codes for various software products, including CorelDRAW Graphics Suite X6. However, using the keymaker-CORE is illegal and risky, as it may contain viruses, malware, or spyware that can harm your computer or compromise your personal data. Therefore, we do not recommend using the keymaker-CORE or any other pirated software. Instead, we advise you to purchase a legitimate copy of CorelDRAW Graphics Suite X6 from the official website or a trusted vendor.

        -

        -

        If you still want to use the keymaker-CORE at your own risk, here are the steps to follow:

        -
          -
        1. Download the keymaker-CORE from a reliable source and save it to a location on your computer that you can easily find.
        2. -
        3. Disable your antivirus software and firewall temporarily, as they may interfere with the keymaker-CORE or flag it as malicious.
        4. -
        5. Browse to the location where you saved the keymaker-CORE and right-click on it. Select "Run as administrator" from the menu.
        6. -
        7. A window will open with the keymaker-CORE interface. Select "CorelDRAW Graphics Suite X6" from the drop-down list and click on "Generate". A serial number and an activation code will be displayed.
        8. -
        9. Copy the serial number and paste it into the installation wizard when prompted. Complete the installation process.
        10. -
        11. Launch CorelDRAW Graphics Suite X6 and click on "Other Activation Options" when asked to activate the software. Select "Phone Corel" from the menu.
        12. -
        13. A window will open with an installation code. Copy the installation code and paste it into the keymaker-CORE interface. Click on "Activation". A new activation code will be displayed.
        14. -
        15. Copy the new activation code and paste it into the activation wizard. Click on "Continue". Your software should be activated successfully.
        16. -
        17. Enable your antivirus software and firewall again and scan your computer for any potential threats.
        18. -
        -

        How to use CorelDRAW Graphics Suite X6 to create and edit graphics?

        -

        Once you have installed and activated CorelDRAW Graphics Suite X6, you can start using it to create and edit graphics for various purposes. Here are some basic steps to follow:

        -

        The user interface and tools of CorelDRAW Graphics Suite X6

        -

        The user interface of CorelDRAW Graphics Suite X6 consists of several elements that help you access and use the features and functions of the software. Some of them are:

        -
          -
        • The application window: This is the main window that contains all the other elements of the user interface. You can resize, move, minimize, maximize, or close it as you wish.
        • -
        • The title bar: This is the horizontal bar at the top of the application window that displays the name of the active document and the name of the active application.
        • -
        • The menu bar: This is the horizontal bar below the title bar that contains menus such as File, Edit, View, Layout, Object, Effects, Bitmaps, Text, Tools, Window, and Help. You can click on these menus to access various commands and options.
        • -
        • The standard toolbar: This is the horizontal bar below the menu bar that contains icons for common commands such as New, Open, Save, Print, Cut, Copy, Paste, Undo, Redo, Zoom, Align, Arrange, Group, Ungroup, etc. You can click on these icons to perform these actions quickly.
        • -
        • The property bar: This is the horizontal bar below the standard toolbar that displays context-sensitive options for the selected tool or object. You can use these options to modify or customize the properties of the tool or object.
        • -
        • The toolbox: This is the vertical bar on the left side of the application window that contains icons for various tools such as Pick, Shape, Crop, Zoom, Freehand, Rectangle, Ellipse, Polygon, Bezier, Artistic Media, Text, Table, Eyedropper, Interactive Fill, Interactive Transparency, Interactive Blend, Interactive Contour, Interactive Distortion, Interactive Drop Shadow, Interactive Extrude, and more. You can click on these icons to select and use these tools.
        • -
        • The docker window: This is the window on the right side of the application window that contains various dockers such as Object Manager, Color Palette, Color Styles, Hints, Layers, Transformations, Symbols, and more. You can use these dockers to access and manage various aspects of your document and objects.
        • -
        • The status bar: This is the horizontal bar at the bottom of the application window that displays information such as the coordinates of the cursor, the size and position of the selected object, the zoom level, the color mode, and the number of pages and objects in the document. You can also use it to access some commands and options such as Snap to Objects, View Manager, Document Options, and more.
        • -
        • The drawing window: This is the main area of the application window where you create and edit your graphics. It contains one or more pages that you can switch between using the page tabs at the bottom. You can also use rulers, grids, guidelines, and page borders to help you align and position your objects.
        • -
        -

        How to create a new document or open an existing one

        -

        To create a new document in CorelDRAW Graphics Suite X6, you can follow these steps:

        -
          -
        1. Click on File > New or press Ctrl+N on your keyboard.
        2. -
        3. A dialog box will open where you can specify the name, size, orientation, resolution, color mode, and rendering intent of your document. You can also choose a preset or a template from the list.
        4. -
        5. Click on OK to create your document.
        6. -
        -

        To open an existing document in CorelDRAW Graphics Suite X6, you can follow these steps:

        -
          -
        1. Click on File > Open or press Ctrl+O on your keyboard.
        2. -
        3. A dialog box will open where you can browse to the location of your document and select it. You can also use the search box or the recent files list to find your document.
        4. -
        5. Click on Open to open your document.
        6. -
        -

        How to draw and manipulate vector objects

        -

        Vector objects are graphics that are made of lines, curves, and shapes that are defined by mathematical equations. They are scalable and editable without losing quality or detail. CorelDRAW Graphics Suite X6 provides various tools and commands that allow you to draw and manipulate vector objects. Here are some examples:

        -
          -
• To draw and arrange basic shapes: You can use tools such as Rectangle, Ellipse, and Polygon to draw basic shapes on the drawing window, and then move, resize, or align them using the menu bar, the standard toolbar, the property bar, or the keyboard shortcuts.
        • -
        • To combine and break apart objects: You can use commands such as Weld, Trim, Intersect, Simplify, Front Minus Back, Back Minus Front, and Break Apart to combine and break apart objects using the menu bar, the standard toolbar, the property bar, or the keyboard shortcuts.
        • -
        • To convert objects to curves: You can use commands such as Convert to Curves, Convert Outline to Object, and Convert to Bitmap to convert objects to curves using the menu bar, the standard toolbar, the property bar, or the keyboard shortcuts. This allows you to edit the nodes and segments of the objects using the Shape tool.
        • -
        -

        How to work with text and fonts

        -

        Text and fonts are essential elements of any graphic design project. CorelDRAW Graphics Suite X6 provides various tools and commands that allow you to work with text and fonts. Here are some examples:

        -
          -
        • To create text: You can use tools such as Text, Table, and Callout to create text by clicking and dragging on the drawing window. You can also use commands such as Insert Symbol Character, Insert Placeholder Text, and Insert Barcode to create text using the menu bar or the property bar.
        • -
        • To edit text: You can use tools such as Shape and Text to edit text by double-clicking on it or selecting it with the Pick tool. You can also use commands such as Cut, Copy, Paste, Delete, Undo, Redo, Find and Replace, Spell Check, Thesaurus , and Grammar Check to edit text using the menu bar, the standard toolbar, the property bar, or the keyboard shortcuts.
        • -
        • To format text: You can use tools such as Text, Shape, and Interactive Fit Text to Path to format text by selecting it with the Pick tool or double-clicking on it. You can also use commands such as Font, Font Size, Bold, Italic, Underline, Align, Bullets and Numbering, Indent, Spacing, Tabs, Columns, Drop Cap, and Styles to format text using the menu bar, the standard toolbar, the property bar, or the keyboard shortcuts.
        • -
        • To apply effects to text: You can use tools such as Interactive Blend, Interactive Contour, Interactive Distortion, Interactive Drop Shadow, Interactive Envelope, Interactive Extrude, and Interactive Transparency to apply effects to text by selecting it with the Pick tool or double-clicking on it. You can also use commands such as Convert to Curves, Convert Outline to Object, Convert to Bitmap , and Apply Effects to apply effects to text using the menu bar, the standard toolbar, the property bar, or the keyboard shortcuts.
        • -
        -

        How to apply effects and styles

        -

        Effects and styles are visual enhancements that you can apply to your objects and text to make them more attractive and expressive. CorelDRAW Graphics Suite X6 provides various tools and commands that allow you to apply effects and styles. Here are some examples:

        -
          -
        • To apply effects: You can use tools such as Interactive Blend, Interactive Contour, Interactive Distortion, Interactive Drop Shadow, Interactive Envelope, Interactive Extrude, Interactive Transparency, and Lens to apply effects to your objects and text by selecting them with the Pick tool or double-clicking on them. You can also use commands such as Apply Effects, Add Perspective, PowerClip, and Bitmap Color Mask to apply effects to your objects and text using the menu bar, the standard toolbar, the property bar, or the keyboard shortcuts.
        • -
        • To apply styles: You can use tools such as Eyedropper and Paint Bucket to apply styles to your objects and text by clicking on them. You can also use commands such as Styles, Copy Properties From, Paste Properties To, Create Style From Selection, and Load/Save Styles to apply styles to your objects and text using the menu bar, the standard toolbar, the property bar, or the keyboard shortcuts.
        • -
        -

        How to export and print your graphics

        -

        Once you have created and edited your graphics, you might want to export or print them for various purposes. CorelDRAW Graphics Suite X6 provides various tools and commands that allow you to export and print your graphics. Here are some examples:

        -
          -
        • To export your graphics: You can use commands such as Export, Publish to PDF, Publish to HTML, Publish to WordPress, Publish to Flickr, Publish to Facebook , Publish to YouTube, and Send by Email to export your graphics to various file formats and platforms using the menu bar, the standard toolbar, the property bar, or the keyboard shortcuts. You can also use the Export dialog box to customize the settings and options for your export.
        • -
        • To print your graphics: You can use commands such as Print, Print Preview, Print Merge, and Collect for Output to print your graphics using the menu bar, the standard toolbar, the property bar, or the keyboard shortcuts. You can also use the Print dialog box to adjust the settings and options for your print.
        • -
        -

        What are the advantages and disadvantages of CorelDRAW Graphics Suite X6?

        -

        CorelDRAW Graphics Suite X6 is a powerful and versatile software that can help you create and edit graphics for various purposes. However, like any other software, it also has its advantages and disadvantages. Here are some of them:

        -

        The pros of CorelDRAW Graphics Suite X6

        -

        Some of the pros of CorelDRAW Graphics Suite X6 are:

        -
          -
        • It offers a comprehensive package of applications and tools: CorelDRAW Graphics Suite X6 includes CorelDRAW, Corel PHOTO-PAINT, Corel CONNECT, Corel CAPTURE, Corel Website Creator, Bitstream Font Navigator, PhotoZoom Pro 2, ConceptShare, and more. These applications and tools can help you create and edit vector graphics, bitmap images, layouts, web pages, screen captures, and more.
        • -
        • It supports a wide range of file formats and platforms: CorelDRAW Graphics Suite X6 can import and export over 100 file formats, including AI, CDR, CMX, CGM, DXF, EMF, EPS, PDF, PLT, SVG, SWF, WMF, BMP, CAL , CLP, CUR, CUT, DCX, DIB, EMF, EPS, GIF, HDP, IFF, IMG, J2C, J2K, JP2, JPC, JPG, PNG, PSD, RAS, RAW, RIF, RLE, SCT, TGA, TIF, WBM, WEBP, WMF and more. It can also publish to various platforms such as PDF, HTML, WordPress, Flickr, Facebook, YouTube and more.
        • -
        • It has a user-friendly and customizable interface: CorelDRAW Graphics Suite X6 has a user-friendly and intuitive interface that allows you to access and use the features and functions of the software with ease. You can also customize the interface according to your preferences and needs by changing the color scheme, the workspace layout, the toolbars and dockers, the keyboard shortcuts and more.
        • -
        • It has a rich collection of content and resources: CorelDRAW Graphics Suite X6 comes with a rich collection of content and resources that can help you enhance your creativity and productivity. You can access over 10,000 high-quality clipart and digital images, over 1,000 professional fonts, over 350 professionally designed templates , over 2,000 vehicle templates, over 800 frames and patterns, and more. You can also access online tutorials, tips and tricks, webinars, videos, blogs, forums, and more to learn and improve your skills.
        • -
• It has high compatibility and performance: CorelDRAW Graphics Suite X6 is compatible with Windows 8, Windows 7, Windows Vista, and Windows XP operating systems. It also supports 64-bit and multi-core processing, which improves the speed and stability of the software.
        • -
        -

        The cons of CorelDRAW Graphics Suite X6

        -

        Some of the cons of CorelDRAW Graphics Suite X6 are:

        -
          -
        • It has a steep learning curve: CorelDRAW Graphics Suite X6 is a complex and advanced software that has many features and functions that can be overwhelming and confusing for beginners or casual users. It requires a lot of time and effort to master the software and use it effectively.
        • -
        • It has a high price: CorelDRAW Graphics Suite X6 is a premium software that costs $499 for the full version or $199 for the upgrade version. This can be a significant investment for some users who have a limited budget or who do not need all the features and functions of the software.
        • -
        • It has some bugs and glitches: CorelDRAW Graphics Suite X6 is not a perfect software that is free from errors or problems. Some users have reported issues such as crashes, freezes, slow performance, compatibility problems, missing features, or poor customer support. These issues can affect the quality and efficiency of your work and cause frustration and dissatisfaction.
        • -
        -

        Conclusion

        -

        CorelDRAW Graphics Suite X6 is a powerful and versatile software that can help you create and edit graphics for various purposes. It offers a comprehensive package of applications and tools that can enhance your creativity and productivity. It also supports a wide range of file formats and platforms that can expand your possibilities and opportunities. However, it also has some drawbacks such as a steep learning curve, a high price, and some bugs and glitches that can affect your experience and results. Therefore, you should weigh the pros and cons of CorelDRAW Graphics Suite X6 before deciding whether to use it or not.

        -

        FAQs

        -

        Here are some frequently asked questions about CorelDRAW Graphics Suite X6:

        -
          -
        1. What is the difference between CorelDRAW Graphics Suite X6 and CorelDRAW Graphics Suite 2021?
        2. -

          CorelDRAW Graphics Suite 2021 is the latest version of the CorelDRAW Graphics Suite that was released in March 2021. It has some new features and improvements such as collaboration tools, multipage view, perspective drawing, variable fonts , AI-powered image enhancement, and more. It also has a higher price of $599 for the full version or $299 for the upgrade version. CorelDRAW Graphics Suite X6 is an older version of the CorelDRAW Graphics Suite that was released in 2012. It has some features and functions that are not available in the newer versions, such as Smart Carver, Placeholder Text, and Export dialog box. It also has a lower price of $499 for the full version or $199 for the upgrade version.

          -
        3. How can I get a free trial of CorelDRAW Graphics Suite X6?
        4. -

          You can get a free trial of CorelDRAW Graphics Suite X6 by visiting the official website of Corel and clicking on the "Free Trial" button. You will need to provide your name, email address, and country to download the installation files. You will also need to create a Corel account to activate the trial. The trial will last for 15 days and will give you access to all the features and functions of the software.

          -
        5. How can I update or uninstall CorelDRAW Graphics Suite X6?
        6. -

          You can update or uninstall CorelDRAW Graphics Suite X6 by using the Corel Update Helper or the Windows Control Panel. To use the Corel Update Helper, you can follow these steps:

          -
            -
          1. Launch CorelDRAW Graphics Suite X6 and click on Help > Check for Updates.
          2. -
          3. A window will open with the Corel Update Helper interface. You can see the available updates for your software and choose which ones to install.
          4. -
          5. Click on Install Updates to download and install the updates.
          6. -
          -

          To use the Windows Control Panel, you can follow these steps:

          -
            -
          1. Click on Start > Control Panel > Programs and Features.
          2. -
          3. A window will open with a list of installed programs on your computer. Find and select CorelDRAW Graphics Suite X6 from the list.
          4. -
          5. Click on Uninstall/Change to uninstall or modify the software.
          6. -
          -
        7. How can I contact Corel customer support?
        8. -

          You can contact Corel customer support by visiting the official website of Corel and clicking on Support > Contact Us. You can choose from various options such as phone, chat, email, or online form to reach out to the support team. You can also access other support resources such as knowledge base, user guides, video tutorials, community forums, and more.

          -
        9. How can I learn more about CorelDRAW Graphics Suite X6?
        10. -

          You can learn more about CorelDRAW Graphics Suite X6 by visiting the official website of Corel and clicking on Products > CorelDRAW Graphics Suite X6. You can find information such as features, benefits, system requirements, reviews, testimonials, screenshots, videos, and more. You can also download a free trial, buy the software, or access other resources such as tutorials, tips and tricks, webinars, blogs, forums, and more.

          -

        b2dd77e56b
        -
        -
        \ No newline at end of file diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Durst A21 Gearbox Parts Comparing Different Models and Options.md b/spaces/raedeXanto/academic-chatgpt-beta/Durst A21 Gearbox Parts Comparing Different Models and Options.md deleted file mode 100644 index 01315935263e560cd230f6d032b4851436eeca5b..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/Durst A21 Gearbox Parts Comparing Different Models and Options.md +++ /dev/null @@ -1,98 +0,0 @@ -
        -

        Durst A21 Gearbox Parts: A Guide for Buyers

        -

        If you are looking for a reliable and durable gearbox for your off-highway equipment, you might want to consider the Durst A21 gearbox. This gearbox is designed to handle high torque and power applications in various industries, such as forestry, mining, construction, and agriculture. In this article, we will provide you with an overview of the Durst A21 gearbox parts, their features and benefits, and how to find the best supplier for your needs.

        -

        What is the Durst A21 Gearbox?

        -

The Durst A21 gearbox is a modular design hydraulic pump drive that can be easily configured to meet your specific application requirements. It is part of the Durst pump drive series, which are leading solutions for specialty vehicle equipment used in off-highway applications.

        -

        durst a21 gearbox parts


        DOWNLOAD ->>->>->> https://tinourl.com/2uL5sJ



        -

The Durst A21 gearbox features AGMA Class 10 spur gears that run on heavy-duty ball bearings, ensuring lower operating temperatures and increased life. It also has an internal spline that is compatible with any SAE pump shaft, offering quieter operation and easy installation.

        -

        The Durst A21 gearbox can accommodate up to four hydraulic pumps with a maximum input torque of 2,100 lb-ft and a maximum input speed of 3,000 rpm. It can also be mounted in various positions, such as side-mount, over-center mount, or over-under mount.

        -

        What are the Benefits of the Durst A21 Gearbox?

        -

        The Durst A21 gearbox offers several benefits for off-highway equipment operators, such as:

        -
              • Versatility: The Durst A21 gearbox can be customized to fit different pump sizes and types, as well as different mounting options. This allows you to optimize your hydraulic system performance and efficiency for your specific application.
              • Reliability: The Durst A21 gearbox is built with high-quality materials and components that can withstand harsh operating conditions and heavy loads. It also has a robust sealing system that prevents oil leakage and contamination.
              • Durability: The Durst A21 gearbox has a long service life due to its low friction and wear characteristics. It also has a corrosion-resistant coating that protects it from rust and corrosion.
              • Ease of maintenance: The Durst A21 gearbox has a simple and compact design that makes it easy to access and service. It also has a drain plug that allows you to change the oil without removing the gearbox from the equipment.
      
        -

        How to Choose the Right Durst A21 Gearbox Parts Supplier?

        -

        When you need to buy or replace your Durst A21 gearbox parts, you want to make sure that you choose a reputable and reliable supplier that can provide you with genuine and high-quality parts. Here are some tips on how to find the best Durst A21 gearbox parts supplier for your needs:

        -
              • Check their credentials: Look for a supplier that is an authorized distributor and service provider for Durst products. This means that they have the expertise and experience to handle your Durst A21 gearbox parts needs. They also have access to the latest technical information and support from Durst.
              • Check their inventory: Look for a supplier that has a large and extensive inventory of Durst A21 gearbox parts in stock, ready to ship. This means that they can deliver your parts quickly and efficiently, minimizing your downtime and costs.
              • Check their service: Look for a supplier that offers excellent customer service and support for your Durst A21 gearbox parts needs. This means that they can provide you with expert advice, diagnostics, troubleshooting, installation, repair, and warranty services.
      
        -

        Conclusion

        -

        The Durst A21 gearbox is a great choice for your off-highway equipment hydraulic system. It offers versatility, reliability, durability, and ease of maintenance. However, to ensure its optimal performance and longevity, you need to buy or replace your Durst A21 gearbox parts from a trusted supplier.

        -

        We hope that this article has given you some useful information about the Durst A21 gearbox parts, their features and benefits, and how to find the best supplier for your needs. If you have any questions or need any assistance with your Durst A21 gearbox parts needs, please feel free to contact us at any time.

        -

        FAQs

        -

        Here are some frequently asked questions about the Durst A21 gearbox parts:

        -

        Q: How do I know what size and type of pump I need for my Durst A21 gearbox?

        -

        A: You can use the Durst pump drive selection tool on their website to find the best pump configuration for your application. You just need to enter some basic information about your input power source, output power requirements, operating conditions, and mounting preferences.

        -

        durst a21 gearbox parts for sale
        -durst a21 gearbox parts diagram
        -durst a21 gearbox parts manual
        -durst a21 gearbox parts list
        -durst a21 gearbox parts catalog
        -durst a21 gearbox parts online
        -durst a21 gearbox parts supplier
        -durst a21 gearbox parts price
        -durst a21 gearbox parts australia
        -durst a21 gearbox parts usa
        -durst a21 gearbox parts uk
        -durst a21 gearbox parts canada
        -durst a21 gearbox parts india
        -durst a21 gearbox parts china
        -durst a21 gearbox parts germany
        -durst a21 gearbox parts replacement
        -durst a21 gearbox parts repair
        -durst a21 gearbox parts service
        -durst a21 gearbox parts warranty
        -durst a21 gearbox parts review
        -durst a21 gearbox parts specification
        -durst a21 gearbox parts model
        -durst a21 gearbox parts serial number
        -durst a21 gearbox parts identification
        -durst a21 gearbox parts installation
        -durst a21 gearbox parts maintenance
        -durst a21 gearbox parts troubleshooting
        -durst a21 gearbox parts adjustment
        -durst a21 gearbox parts lubrication
        -durst a21 gearbox parts cleaning
        -durst a21 gearbox parts overhaul
        -durst a21 gearbox parts upgrade
        -durst a21 gearbox parts modification
        -durst a21 gearbox parts customisation
        -durst a21 gearbox parts compatibility
        -durst a21 gearbox parts interchangeability
        -durst a21 gearbox parts performance
        -durst a21 gearbox parts efficiency
        -durst a21 gearbox parts reliability
        -durst a21 gearbox parts durability
        -durst a21 gearbox parts quality
        -durst a21 gearbox parts safety
        -durst a21 gearbox parts noise reduction
        -durst a21 gearbox parts vibration reduction
        -durst a21 gearbox parts temperature control
        -durst a21 gearbox parts torque transmission
        -durst a21 gearbox parts speed ratio
        -durst a21 gearbox parts input shaft
        -durst a21 gearbox parts output shaft

        -

        Q: How often do I need to change the oil in my Durst A21 gearbox?

        -

        A: The recommended oil change interval for the Durst A21 gearbox is every 500 hours of operation or every six months, whichever comes first. However, this may vary depending on your operating environment and conditions. You should always follow the manufacturer's instructions and recommendations for oil change frequency and type.

        -

        Q: What are some common signs of wear or damage in my Durst A21 gearbox?

        -

        A: Some common signs of wear or damage in your Durst A21 gearbox are:

        -
              • Excessive noise or vibration
              • Oil leakage or contamination
              • Loss of power or efficiency
              • Overheating or smoking
              • Difficulty in shifting or engaging
      
        -

        If you notice any of these signs in your Durst A21 gearbox, you should stop using it immediately and contact your supplier or service provider for inspection and repair.

        -

        Q: How do I install or remove my Durst A21 gearbox?

        -

        A: The installation or removal of your Durst A21 gearbox should be done by a qualified technician who has the proper tools and equipment. You should always follow the manufacturer's instructions and safety precautions when installing or removing your Durst A21 gearbox.

        -

        Q: Where can I find more information about the Durst A21 gearbox?

        -

        A: You can find more information about the Durst A21 gearbox on their website or by downloading their product catalog. You can also contact them directly by phone or email if you have any specific questions or inquiries.

        -

        0a6ba089eb
        -
        -
        \ No newline at end of file diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Guitar Hero 2 for PC Free Download Full Version How to Install and Play the Classic Game.md b/spaces/raedeXanto/academic-chatgpt-beta/Guitar Hero 2 for PC Free Download Full Version How to Install and Play the Classic Game.md deleted file mode 100644 index 82e12d84255d21b05e8dd2bf1469016913415fe1..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/Guitar Hero 2 for PC Free Download Full Version How to Install and Play the Classic Game.md +++ /dev/null @@ -1,99 +0,0 @@ -
        -

        Guitar Hero 2 for PC Free Download Full Version

        -

        Do you love playing guitar and rocking out to your favorite songs? Do you want to experience the thrill of being a rockstar on your PC? If so, you might be interested in Guitar Hero 2, one of the most popular music rhythm games ever made. In this article, we will tell you everything you need to know about Guitar Hero 2 for PC, including what it is, how to download it, and how to play it. Let's get started!

        -

        guitar hero 2 for pc free download full version


        Download File ::: https://tinourl.com/2uL3zx



        -

        What is Guitar Hero 2?

        -

        Guitar Hero 2 is a video game that simulates playing guitar along with various rock songs. It was released in 2006 for PlayStation 2 and Xbox 360, and later in 2007 for PC. The game features over 60 songs from different genres and eras, ranging from classic rock to metal to punk. You can play solo or with a friend in co-op or versus mode, or challenge other players online. You can also customize your character, your guitar, and your stage.

        -

        The game is played using a special controller that resembles a guitar, with five colored buttons on the neck and a strum bar on the body. As the song plays, colored notes scroll down on the screen, corresponding to the buttons on the controller. You have to press the buttons and strum at the right time to match the notes and score points. The more accurate you are, the higher your score and your rock meter will be. If you miss too many notes, your rock meter will drop and you will fail the song.

        -

              Guitar Hero 2 is widely considered one of the best games in the Guitar Hero series, as it improved on many aspects of the original game, adding more songs, more difficulty levels, more modes, and more features. It also introduced the hammer-on and pull-off technique, which lets you play faster runs of notes without strumming every time.
      

        -

        How to download Guitar Hero 2 for PC?

        -

        If you want to play Guitar Hero 2 on your PC, you have two options: either buy a copy of the game or use an emulator or a mod. Let's see what each option entails.

        -

        The official way: buying a copy of the game

        -

              The easiest and safest legal way to play Guitar Hero 2 on your PC is to buy a copy of the game from an online retailer or a physical store. The game was released for PC as Guitar Hero III: Legends of Rock, which includes Guitar Hero 2 as well as Guitar Hero: Aerosmith and Guitar Hero: World Tour. You can find it on Amazon, eBay, or other platforms for around $20-$30. You will also need a compatible controller, such as the Xplorer guitar controller that comes with the game or any other USB guitar controller.
      

        -

        Once you have the game and the controller, you just need to install it on your PC following the instructions on the screen. You will also need to register an account with Aspyr Media, the developer of the PC version, to access some features of the game. After that, you can launch the game and start rocking out!

        -

        The unofficial way: using an emulator or a mod

        -

        If you don't want to buy a copy of the game or you can't find one, you can still play Guitar Hero 2 on your PC using an emulator or a mod. However, this method is not recommended as it may be illegal in some countries and may cause some issues with your PC or your game.

        -

        Emulators: what are they and how to use them

        -

              An emulator is software that allows you to run games from other platforms on your PC. For example, you can use an emulator to play PlayStation 2 games on your PC. Many emulators are available online for free, such as PCSX2, Dolphin, or RPCS3. To use an emulator, you will need two things: a ROM file of the game and a BIOS file of the console.
      

        -

        A ROM file is a digital copy of the game that you can download from various websites or rip from your own disc using a special device. A BIOS file is a system file that contains information about the console's hardware and software. You can also download it from various websites or extract it from your own console using a special device.

        -

        guitar hero 2 pc game free download
        -download guitar hero 2 for windows 10 full version
        -guitar hero 2 pc full crack free download
        -how to play guitar hero 2 on pc without emulator
        -guitar hero 2 pc torrent download full version
        -guitar hero 2 pc system requirements and installation guide
        -guitar hero 2 pc mods and custom songs free download
        -guitar hero 2 pc controller support and setup
        -guitar hero 2 pc cheats and codes free download
        -guitar hero 2 pc online multiplayer mode free download
        -guitar hero 2 pc iso download full version
        -guitar hero 2 pc rip download full version
        -guitar hero 2 pc gameplay and review free download
        -guitar hero 2 pc patch and update free download
        -guitar hero 2 pc keyboard mapping and configuration
        -guitar hero 2 pc best settings and performance tips
        -guitar hero 2 pc save file and progress backup
        -guitar hero 2 pc unlock all songs and characters free download
        -guitar hero 2 pc alternative download links and mirrors
        -guitar hero 2 pc error fix and troubleshooting guide
        -guitar hero 2 for mac free download full version
        -guitar hero 2 for linux free download full version
        -guitar hero 2 for android free download full version
        -guitar hero 2 for ios free download full version
        -guitar hero 2 for ps4 free download full version
        -guitar hero 2 for xbox one free download full version
        -guitar hero 2 for switch free download full version
        -guitar hero 2 remastered for pc free download full version
        -guitar hero 2 legends of rock for pc free download full version
        -guitar hero 2 encore rocks the 80s for pc free download full version
        -how to get guitar hero 2 for free on steam
        -how to get guitar hero 2 for free on origin
        -how to get guitar hero 2 for free on epic games store
        -how to get guitar hero 2 for free on gog.com
        -how to get guitar hero 2 for free on humble bundle
        -how to get guitar hero 2 for free on uplay
        -how to get guitar hero 2 for free on microsoft store
        -how to get guitar hero 2 for free on playstation store
        -how to get guitar hero 2 for free on xbox store
        -how to get guitar hero 2 for free on nintendo eshop
        -best websites to download guitar hero 2 for pc free full version
        -best youtube channels to watch guitar hero 2 for pc gameplay and tutorials
        -best blogs and forums to read about guitar hero 2 for pc tips and tricks
        -best podcasts and audiobooks to listen to about guitar hero 2 for pc history and trivia
        -best books and ebooks to learn about guitar hero 2 for pc development and design
        -best courses and classes to enroll in to master guitar hero 2 for pc skills and techniques
        -best tools and software to enhance your guitar hero 2 for pc experience and enjoyment
        -best accessories and hardware to buy for your guitar hero 2 for pc setup and gaming rig
        -best deals and discounts to save money on your guitar hero 2 for pc purchase and subscription

        -

              Once you have both files, you need to load them into your emulator following its instructions and configure settings such as graphics, sound, and controls. You will also need a compatible controller that the emulator can recognize.
      

        -

        After that, you can launch the game and play it on your PC. However, be aware that emulators are not perfect and may cause some glitches, bugs, crashes, or performance issues with some games.

        -

        Mods: what are they and how to use them

        -

        A mod is a modification of an existing game that adds new features or changes some aspects of it. For example, you can use a mod to add new songs or graphics to Guitar Hero 2. There are many mods available online for free, such as Guitar Hero II Deluxe 2.0 by MiloHax or Clone Hero by Srylain.

        -

        To use a mod, you will need two things: a copy of the original game and a mod file. You can either buy a copy of the game as explained above or use an emulator as explained above. A mod file is a digital file that contains all the changes made by the modder. You can download it from various websites or forums.

        -

        Once you have both files, you need to install them on your PC following their instructions. You will also need a compatible controller that can be recognized by your mod.

        -

        After that, you can launch the modded game and play it on your PC. However, be aware that mods are not official and may cause some glitches, bugs, crashes, or compatibility issues with some games.

        -

        How to play Guitar Hero 2 on PC?

        -

        Conclusion

        -

        Guitar Hero 2 is one of the best music rhythm games ever made, and you can play it on your PC using either an official copy of the game or an emulator or a mod. You just need to have a compatible controller and a good PC, and follow some simple steps to install and configure the game. You can also use some tips and tricks to play like a rockstar and have fun with the game.

        -

        If you love playing guitar and rocking out to your favorite songs, you should definitely try Guitar Hero 2 on your PC. It is a great way to enjoy music and gaming at the same time. You can also challenge your friends or other players online and show off your skills. So what are you waiting for? Grab your controller and start playing Guitar Hero 2 on your PC today!

        -

        FAQs

        -

        What are the best songs to play on Guitar Hero 2?

        -

        This is a matter of personal preference, but some of the most popular and challenging songs to play on Guitar Hero 2 are:

        -
              • Free Bird by Lynyrd Skynyrd
              • Crazy Train by Ozzy Osbourne
              • Hangar 18 by Megadeth
              • Beast and the Harlot by Avenged Sevenfold
              • Jordan by Buckethead
      
        -

        Can I use a real guitar to play Guitar Hero 2 on PC?

        -

        Yes, you can use a real guitar to play Guitar Hero 2 on PC if you have a special device called a MIDI guitar controller. This is a device that converts the signals from your guitar into MIDI signals that can be recognized by your PC. You can connect your guitar to your PC using a USB cable or a wireless adapter. However, this method is not very common or easy to use, and it may not work with all guitars or PCs.

        -

        Can I add custom songs to Guitar Hero 2 on PC?

        -

        Yes, you can add custom songs to Guitar Hero 2 on PC if you use a mod such as Guitar Hero II Deluxe 2.0 or Clone Hero. These mods allow you to download and play songs that are not included in the original game, such as songs from other Guitar Hero games or songs from other artists or genres. You can find many custom songs online on various websites or forums.

        -

        Can I play Guitar Hero 2 on PC with other players?

        -

        Yes, you can play Guitar Hero 2 on PC with other players in various modes such as co-op mode, versus mode, or online mode. You can either play with a friend on the same PC using two controllers or play with other players online using an internet connection. You can also chat with other players using a microphone or a keyboard.

        -

        Is Guitar Hero 2 for PC safe to download and play?

        -

        If you buy an official copy of the game from a reputable source, it is safe to download and play Guitar Hero 2 on PC. However, if you use an emulator or a mod, it may not be safe as they may contain viruses, malware, or other harmful software that may damage your PC or your game. You should always scan any files that you download from unknown sources with an antivirus program before installing them on your PC.

        -

        0a6ba089eb
        -
        -
        \ No newline at end of file diff --git a/spaces/renatotn7/teste2/gfpgan/archs/stylegan2_bilinear_arch.py b/spaces/renatotn7/teste2/gfpgan/archs/stylegan2_bilinear_arch.py deleted file mode 100644 index 1342ee3c9a6b8f742fb76ce7d5b907cd39fbc350..0000000000000000000000000000000000000000 --- a/spaces/renatotn7/teste2/gfpgan/archs/stylegan2_bilinear_arch.py +++ /dev/null @@ -1,613 +0,0 @@ -import math -import random -import torch -from basicsr.ops.fused_act import FusedLeakyReLU, fused_leaky_relu -from basicsr.utils.registry import ARCH_REGISTRY -from torch import nn -from torch.nn import functional as F - - -class NormStyleCode(nn.Module): - - def forward(self, x): - """Normalize the style codes. - - Args: - x (Tensor): Style codes with shape (b, c). - - Returns: - Tensor: Normalized tensor. - """ - return x * torch.rsqrt(torch.mean(x**2, dim=1, keepdim=True) + 1e-8) - - -class EqualLinear(nn.Module): - """Equalized Linear as StyleGAN2. - - Args: - in_channels (int): Size of each sample. - out_channels (int): Size of each output sample. - bias (bool): If set to ``False``, the layer will not learn an additive - bias. Default: ``True``. - bias_init_val (float): Bias initialized value. Default: 0. - lr_mul (float): Learning rate multiplier. Default: 1. - activation (None | str): The activation after ``linear`` operation. - Supported: 'fused_lrelu', None. Default: None. - """ - - def __init__(self, in_channels, out_channels, bias=True, bias_init_val=0, lr_mul=1, activation=None): - super(EqualLinear, self).__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.lr_mul = lr_mul - self.activation = activation - if self.activation not in ['fused_lrelu', None]: - raise ValueError(f'Wrong activation value in EqualLinear: {activation}' - "Supported ones are: ['fused_lrelu', None].") - self.scale = (1 / math.sqrt(in_channels)) * lr_mul - - self.weight = nn.Parameter(torch.randn(out_channels, in_channels).div_(lr_mul)) - if bias: - self.bias = nn.Parameter(torch.zeros(out_channels).fill_(bias_init_val)) - else: - self.register_parameter('bias', None) - - def forward(self, x): - if self.bias is None: - bias = None - else: - bias = self.bias * self.lr_mul - if self.activation == 'fused_lrelu': - out = F.linear(x, self.weight * self.scale) - out = fused_leaky_relu(out, bias) - else: - out = F.linear(x, self.weight * self.scale, bias=bias) - return out - - def __repr__(self): - return (f'{self.__class__.__name__}(in_channels={self.in_channels}, ' - f'out_channels={self.out_channels}, bias={self.bias is not None})') - - -class ModulatedConv2d(nn.Module): - """Modulated Conv2d used in StyleGAN2. - - There is no bias in ModulatedConv2d. - - Args: - in_channels (int): Channel number of the input. - out_channels (int): Channel number of the output. - kernel_size (int): Size of the convolving kernel. - num_style_feat (int): Channel number of style features. - demodulate (bool): Whether to demodulate in the conv layer. - Default: True. - sample_mode (str | None): Indicating 'upsample', 'downsample' or None. - Default: None. - eps (float): A value added to the denominator for numerical stability. - Default: 1e-8. 
- """ - - def __init__(self, - in_channels, - out_channels, - kernel_size, - num_style_feat, - demodulate=True, - sample_mode=None, - eps=1e-8, - interpolation_mode='bilinear'): - super(ModulatedConv2d, self).__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.kernel_size = kernel_size - self.demodulate = demodulate - self.sample_mode = sample_mode - self.eps = eps - self.interpolation_mode = interpolation_mode - if self.interpolation_mode == 'nearest': - self.align_corners = None - else: - self.align_corners = False - - self.scale = 1 / math.sqrt(in_channels * kernel_size**2) - # modulation inside each modulated conv - self.modulation = EqualLinear( - num_style_feat, in_channels, bias=True, bias_init_val=1, lr_mul=1, activation=None) - - self.weight = nn.Parameter(torch.randn(1, out_channels, in_channels, kernel_size, kernel_size)) - self.padding = kernel_size // 2 - - def forward(self, x, style): - """Forward function. - - Args: - x (Tensor): Tensor with shape (b, c, h, w). - style (Tensor): Tensor with shape (b, num_style_feat). - - Returns: - Tensor: Modulated tensor after convolution. - """ - b, c, h, w = x.shape # c = c_in - # weight modulation - style = self.modulation(style).view(b, 1, c, 1, 1) - # self.weight: (1, c_out, c_in, k, k); style: (b, 1, c, 1, 1) - weight = self.scale * self.weight * style # (b, c_out, c_in, k, k) - - if self.demodulate: - demod = torch.rsqrt(weight.pow(2).sum([2, 3, 4]) + self.eps) - weight = weight * demod.view(b, self.out_channels, 1, 1, 1) - - weight = weight.view(b * self.out_channels, c, self.kernel_size, self.kernel_size) - - if self.sample_mode == 'upsample': - x = F.interpolate(x, scale_factor=2, mode=self.interpolation_mode, align_corners=self.align_corners) - elif self.sample_mode == 'downsample': - x = F.interpolate(x, scale_factor=0.5, mode=self.interpolation_mode, align_corners=self.align_corners) - - b, c, h, w = x.shape - x = x.view(1, b * c, h, w) - # weight: (b*c_out, c_in, k, k), groups=b - out = F.conv2d(x, weight, padding=self.padding, groups=b) - out = out.view(b, self.out_channels, *out.shape[2:4]) - - return out - - def __repr__(self): - return (f'{self.__class__.__name__}(in_channels={self.in_channels}, ' - f'out_channels={self.out_channels}, ' - f'kernel_size={self.kernel_size}, ' - f'demodulate={self.demodulate}, sample_mode={self.sample_mode})') - - -class StyleConv(nn.Module): - """Style conv. - - Args: - in_channels (int): Channel number of the input. - out_channels (int): Channel number of the output. - kernel_size (int): Size of the convolving kernel. - num_style_feat (int): Channel number of style features. - demodulate (bool): Whether demodulate in the conv layer. Default: True. - sample_mode (str | None): Indicating 'upsample', 'downsample' or None. - Default: None. 
- """ - - def __init__(self, - in_channels, - out_channels, - kernel_size, - num_style_feat, - demodulate=True, - sample_mode=None, - interpolation_mode='bilinear'): - super(StyleConv, self).__init__() - self.modulated_conv = ModulatedConv2d( - in_channels, - out_channels, - kernel_size, - num_style_feat, - demodulate=demodulate, - sample_mode=sample_mode, - interpolation_mode=interpolation_mode) - self.weight = nn.Parameter(torch.zeros(1)) # for noise injection - self.activate = FusedLeakyReLU(out_channels) - - def forward(self, x, style, noise=None): - # modulate - out = self.modulated_conv(x, style) - # noise injection - if noise is None: - b, _, h, w = out.shape - noise = out.new_empty(b, 1, h, w).normal_() - out = out + self.weight * noise - # activation (with bias) - out = self.activate(out) - return out - - -class ToRGB(nn.Module): - """To RGB from features. - - Args: - in_channels (int): Channel number of input. - num_style_feat (int): Channel number of style features. - upsample (bool): Whether to upsample. Default: True. - """ - - def __init__(self, in_channels, num_style_feat, upsample=True, interpolation_mode='bilinear'): - super(ToRGB, self).__init__() - self.upsample = upsample - self.interpolation_mode = interpolation_mode - if self.interpolation_mode == 'nearest': - self.align_corners = None - else: - self.align_corners = False - self.modulated_conv = ModulatedConv2d( - in_channels, - 3, - kernel_size=1, - num_style_feat=num_style_feat, - demodulate=False, - sample_mode=None, - interpolation_mode=interpolation_mode) - self.bias = nn.Parameter(torch.zeros(1, 3, 1, 1)) - - def forward(self, x, style, skip=None): - """Forward function. - - Args: - x (Tensor): Feature tensor with shape (b, c, h, w). - style (Tensor): Tensor with shape (b, num_style_feat). - skip (Tensor): Base/skip tensor. Default: None. - - Returns: - Tensor: RGB images. - """ - out = self.modulated_conv(x, style) - out = out + self.bias - if skip is not None: - if self.upsample: - skip = F.interpolate( - skip, scale_factor=2, mode=self.interpolation_mode, align_corners=self.align_corners) - out = out + skip - return out - - -class ConstantInput(nn.Module): - """Constant input. - - Args: - num_channel (int): Channel number of constant input. - size (int): Spatial size of constant input. - """ - - def __init__(self, num_channel, size): - super(ConstantInput, self).__init__() - self.weight = nn.Parameter(torch.randn(1, num_channel, size, size)) - - def forward(self, batch): - out = self.weight.repeat(batch, 1, 1, 1) - return out - - -@ARCH_REGISTRY.register() -class StyleGAN2GeneratorBilinear(nn.Module): - """StyleGAN2 Generator. - - Args: - out_size (int): The spatial size of outputs. - num_style_feat (int): Channel number of style features. Default: 512. - num_mlp (int): Layer number of MLP style layers. Default: 8. - channel_multiplier (int): Channel multiplier for large networks of - StyleGAN2. Default: 2. - lr_mlp (float): Learning rate multiplier for mlp layers. Default: 0.01. - narrow (float): Narrow ratio for channels. Default: 1.0. 
- """ - - def __init__(self, - out_size, - num_style_feat=512, - num_mlp=8, - channel_multiplier=2, - lr_mlp=0.01, - narrow=1, - interpolation_mode='bilinear'): - super(StyleGAN2GeneratorBilinear, self).__init__() - # Style MLP layers - self.num_style_feat = num_style_feat - style_mlp_layers = [NormStyleCode()] - for i in range(num_mlp): - style_mlp_layers.append( - EqualLinear( - num_style_feat, num_style_feat, bias=True, bias_init_val=0, lr_mul=lr_mlp, - activation='fused_lrelu')) - self.style_mlp = nn.Sequential(*style_mlp_layers) - - channels = { - '4': int(512 * narrow), - '8': int(512 * narrow), - '16': int(512 * narrow), - '32': int(512 * narrow), - '64': int(256 * channel_multiplier * narrow), - '128': int(128 * channel_multiplier * narrow), - '256': int(64 * channel_multiplier * narrow), - '512': int(32 * channel_multiplier * narrow), - '1024': int(16 * channel_multiplier * narrow) - } - self.channels = channels - - self.constant_input = ConstantInput(channels['4'], size=4) - self.style_conv1 = StyleConv( - channels['4'], - channels['4'], - kernel_size=3, - num_style_feat=num_style_feat, - demodulate=True, - sample_mode=None, - interpolation_mode=interpolation_mode) - self.to_rgb1 = ToRGB(channels['4'], num_style_feat, upsample=False, interpolation_mode=interpolation_mode) - - self.log_size = int(math.log(out_size, 2)) - self.num_layers = (self.log_size - 2) * 2 + 1 - self.num_latent = self.log_size * 2 - 2 - - self.style_convs = nn.ModuleList() - self.to_rgbs = nn.ModuleList() - self.noises = nn.Module() - - in_channels = channels['4'] - # noise - for layer_idx in range(self.num_layers): - resolution = 2**((layer_idx + 5) // 2) - shape = [1, 1, resolution, resolution] - self.noises.register_buffer(f'noise{layer_idx}', torch.randn(*shape)) - # style convs and to_rgbs - for i in range(3, self.log_size + 1): - out_channels = channels[f'{2**i}'] - self.style_convs.append( - StyleConv( - in_channels, - out_channels, - kernel_size=3, - num_style_feat=num_style_feat, - demodulate=True, - sample_mode='upsample', - interpolation_mode=interpolation_mode)) - self.style_convs.append( - StyleConv( - out_channels, - out_channels, - kernel_size=3, - num_style_feat=num_style_feat, - demodulate=True, - sample_mode=None, - interpolation_mode=interpolation_mode)) - self.to_rgbs.append( - ToRGB(out_channels, num_style_feat, upsample=True, interpolation_mode=interpolation_mode)) - in_channels = out_channels - - def make_noise(self): - """Make noise for noise injection.""" - device = self.constant_input.weight.device - noises = [torch.randn(1, 1, 4, 4, device=device)] - - for i in range(3, self.log_size + 1): - for _ in range(2): - noises.append(torch.randn(1, 1, 2**i, 2**i, device=device)) - - return noises - - def get_latent(self, x): - return self.style_mlp(x) - - def mean_latent(self, num_latent): - latent_in = torch.randn(num_latent, self.num_style_feat, device=self.constant_input.weight.device) - latent = self.style_mlp(latent_in).mean(0, keepdim=True) - return latent - - def forward(self, - styles, - input_is_latent=False, - noise=None, - randomize_noise=True, - truncation=1, - truncation_latent=None, - inject_index=None, - return_latents=False): - """Forward function for StyleGAN2Generator. - - Args: - styles (list[Tensor]): Sample codes of styles. - input_is_latent (bool): Whether input is latent style. - Default: False. - noise (Tensor | None): Input noise or None. Default: None. - randomize_noise (bool): Randomize noise, used when 'noise' is - False. Default: True. 
- truncation (float): TODO. Default: 1. - truncation_latent (Tensor | None): TODO. Default: None. - inject_index (int | None): The injection index for mixing noise. - Default: None. - return_latents (bool): Whether to return style latents. - Default: False. - """ - # style codes -> latents with Style MLP layer - if not input_is_latent: - styles = [self.style_mlp(s) for s in styles] - # noises - if noise is None: - if randomize_noise: - noise = [None] * self.num_layers # for each style conv layer - else: # use the stored noise - noise = [getattr(self.noises, f'noise{i}') for i in range(self.num_layers)] - # style truncation - if truncation < 1: - style_truncation = [] - for style in styles: - style_truncation.append(truncation_latent + truncation * (style - truncation_latent)) - styles = style_truncation - # get style latent with injection - if len(styles) == 1: - inject_index = self.num_latent - - if styles[0].ndim < 3: - # repeat latent code for all the layers - latent = styles[0].unsqueeze(1).repeat(1, inject_index, 1) - else: # used for encoder with different latent code for each layer - latent = styles[0] - elif len(styles) == 2: # mixing noises - if inject_index is None: - inject_index = random.randint(1, self.num_latent - 1) - latent1 = styles[0].unsqueeze(1).repeat(1, inject_index, 1) - latent2 = styles[1].unsqueeze(1).repeat(1, self.num_latent - inject_index, 1) - latent = torch.cat([latent1, latent2], 1) - - # main generation - out = self.constant_input(latent.shape[0]) - out = self.style_conv1(out, latent[:, 0], noise=noise[0]) - skip = self.to_rgb1(out, latent[:, 1]) - - i = 1 - for conv1, conv2, noise1, noise2, to_rgb in zip(self.style_convs[::2], self.style_convs[1::2], noise[1::2], - noise[2::2], self.to_rgbs): - out = conv1(out, latent[:, i], noise=noise1) - out = conv2(out, latent[:, i + 1], noise=noise2) - skip = to_rgb(out, latent[:, i + 2], skip) - i += 2 - - image = skip - - if return_latents: - return image, latent - else: - return image, None - - -class ScaledLeakyReLU(nn.Module): - """Scaled LeakyReLU. - - Args: - negative_slope (float): Negative slope. Default: 0.2. - """ - - def __init__(self, negative_slope=0.2): - super(ScaledLeakyReLU, self).__init__() - self.negative_slope = negative_slope - - def forward(self, x): - out = F.leaky_relu(x, negative_slope=self.negative_slope) - return out * math.sqrt(2) - - -class EqualConv2d(nn.Module): - """Equalized Linear as StyleGAN2. - - Args: - in_channels (int): Channel number of the input. - out_channels (int): Channel number of the output. - kernel_size (int): Size of the convolving kernel. - stride (int): Stride of the convolution. Default: 1 - padding (int): Zero-padding added to both sides of the input. - Default: 0. - bias (bool): If ``True``, adds a learnable bias to the output. - Default: ``True``. - bias_init_val (float): Bias initialized value. Default: 0. 
- """ - - def __init__(self, in_channels, out_channels, kernel_size, stride=1, padding=0, bias=True, bias_init_val=0): - super(EqualConv2d, self).__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.kernel_size = kernel_size - self.stride = stride - self.padding = padding - self.scale = 1 / math.sqrt(in_channels * kernel_size**2) - - self.weight = nn.Parameter(torch.randn(out_channels, in_channels, kernel_size, kernel_size)) - if bias: - self.bias = nn.Parameter(torch.zeros(out_channels).fill_(bias_init_val)) - else: - self.register_parameter('bias', None) - - def forward(self, x): - out = F.conv2d( - x, - self.weight * self.scale, - bias=self.bias, - stride=self.stride, - padding=self.padding, - ) - - return out - - def __repr__(self): - return (f'{self.__class__.__name__}(in_channels={self.in_channels}, ' - f'out_channels={self.out_channels}, ' - f'kernel_size={self.kernel_size},' - f' stride={self.stride}, padding={self.padding}, ' - f'bias={self.bias is not None})') - - -class ConvLayer(nn.Sequential): - """Conv Layer used in StyleGAN2 Discriminator. - - Args: - in_channels (int): Channel number of the input. - out_channels (int): Channel number of the output. - kernel_size (int): Kernel size. - downsample (bool): Whether downsample by a factor of 2. - Default: False. - bias (bool): Whether with bias. Default: True. - activate (bool): Whether use activateion. Default: True. - """ - - def __init__(self, - in_channels, - out_channels, - kernel_size, - downsample=False, - bias=True, - activate=True, - interpolation_mode='bilinear'): - layers = [] - self.interpolation_mode = interpolation_mode - # downsample - if downsample: - if self.interpolation_mode == 'nearest': - self.align_corners = None - else: - self.align_corners = False - - layers.append( - torch.nn.Upsample(scale_factor=0.5, mode=interpolation_mode, align_corners=self.align_corners)) - stride = 1 - self.padding = kernel_size // 2 - # conv - layers.append( - EqualConv2d( - in_channels, out_channels, kernel_size, stride=stride, padding=self.padding, bias=bias - and not activate)) - # activation - if activate: - if bias: - layers.append(FusedLeakyReLU(out_channels)) - else: - layers.append(ScaledLeakyReLU(0.2)) - - super(ConvLayer, self).__init__(*layers) - - -class ResBlock(nn.Module): - """Residual block used in StyleGAN2 Discriminator. - - Args: - in_channels (int): Channel number of the input. - out_channels (int): Channel number of the output. 
- """ - - def __init__(self, in_channels, out_channels, interpolation_mode='bilinear'): - super(ResBlock, self).__init__() - - self.conv1 = ConvLayer(in_channels, in_channels, 3, bias=True, activate=True) - self.conv2 = ConvLayer( - in_channels, - out_channels, - 3, - downsample=True, - interpolation_mode=interpolation_mode, - bias=True, - activate=True) - self.skip = ConvLayer( - in_channels, - out_channels, - 1, - downsample=True, - interpolation_mode=interpolation_mode, - bias=False, - activate=False) - - def forward(self, x): - out = self.conv1(x) - out = self.conv2(out) - skip = self.skip(x) - out = (out + skip) / math.sqrt(2) - return out diff --git a/spaces/renumics/cifar100-enriched/prepare.py b/spaces/renumics/cifar100-enriched/prepare.py deleted file mode 100644 index 45da2d8863271354ad468b0dea1e0992138b861f..0000000000000000000000000000000000000000 --- a/spaces/renumics/cifar100-enriched/prepare.py +++ /dev/null @@ -1,22 +0,0 @@ -import pickle -import datasets -import os - -if __name__ == "__main__": - cache_file = "dataset_cache.pkl" - if os.path.exists(cache_file): - # Load dataset from cache - with open(cache_file, "rb") as file: - dataset = pickle.load(file) - print("Dataset loaded from cache.") - else: - # Load dataset using datasets.load_dataset() - dataset = datasets.load_dataset("renumics/cifar100-enriched", split="train") - print("Dataset loaded using datasets.load_dataset().") - - # Save dataset to cache - with open(cache_file, "wb") as file: - pickle.dump(dataset, file) - - print("Dataset saved to cache.") - diff --git a/spaces/rorallitri/biomedical-language-models/logs/Apsicxbench3051 !!LINK!!.md b/spaces/rorallitri/biomedical-language-models/logs/Apsicxbench3051 !!LINK!!.md deleted file mode 100644 index 3d189ec3f2210e7fc7265b2ef3b5b3afb0e7609c..0000000000000000000000000000000000000000 --- a/spaces/rorallitri/biomedical-language-models/logs/Apsicxbench3051 !!LINK!!.md +++ /dev/null @@ -1,6 +0,0 @@ -

        apsicxbench3051


              DOWNLOAD https://tinurll.com/2uzo1w
      



        -
              -You may also like: Sarah Brightman Harem A Desert Fantasy (2004) DVDrip · Firmware Update For Mobily 4g Router · Apsicxbench3051 · Download Film The ... 1fdad05405
      
        -
        -
        -

        diff --git a/spaces/rorallitri/biomedical-language-models/logs/Bachna.ae.haseeno.2008.hindi.1080p.blu.ray.x264.dd.5.1.msubs.masti.md b/spaces/rorallitri/biomedical-language-models/logs/Bachna.ae.haseeno.2008.hindi.1080p.blu.ray.x264.dd.5.1.msubs.masti.md deleted file mode 100644 index 8fa8b64d8b4aa215ae1ed6200557a424c76dd48c..0000000000000000000000000000000000000000 --- a/spaces/rorallitri/biomedical-language-models/logs/Bachna.ae.haseeno.2008.hindi.1080p.blu.ray.x264.dd.5.1.msubs.masti.md +++ /dev/null @@ -1,6 +0,0 @@ -

        Bachna.ae.haseeno.2008.hindi.1080p.blu.ray.x264.dd.5.1.msubs.masti


        Download File ✒ ✒ ✒ https://tinurll.com/2uzmPU



        - -8.1 3g descargar adobe fireworks cs5 full serial de oro activacion windows 7 ... bachna.ae.haseeno.2008.hindi.1080p.blu.ray.x264.dd.5.1.msubs.masti. 4d29de3e1b
        -
        -
        -

        diff --git a/spaces/rorallitri/biomedical-language-models/logs/Farmakologi dan Terapi FKUI PDF 42 Ketersediaan Ulasan dan File Digital.md b/spaces/rorallitri/biomedical-language-models/logs/Farmakologi dan Terapi FKUI PDF 42 Ketersediaan Ulasan dan File Digital.md deleted file mode 100644 index d59213e7c29ea44a11256d0d84e9c337a88afa51..0000000000000000000000000000000000000000 --- a/spaces/rorallitri/biomedical-language-models/logs/Farmakologi dan Terapi FKUI PDF 42 Ketersediaan Ulasan dan File Digital.md +++ /dev/null @@ -1,9 +0,0 @@ -
        -

              Dysmenorrhea, or menstrual pain, is pain that occurs during menstruation in women of reproductive age. Acupressure is effective in reducing dysmenorrhea pain, and it is also a therapy that is easy to learn (practical) and safe. The aim of this activity was to address dysmenorrhea in adolescents, so that they can reduce and manage the pain they experience during menstruation. Sampling was carried out using simple random sampling, that is, a simple random selection from the members of the population. Adolescents' knowledge about dysmenorrhea was fairly high at 88%; most of them used pharmaceutical (chemical) drugs, about 42.4%; knowledge of the acupressure massage technique was low, at 0.5%. Many adolescents do not yet know about acupressure massage for relieving dysmenorrhea. The results of this study are expected to broaden readers' knowledge so that they can apply the acupressure technique independently at home.
      

        -

        farmakologi dan terapi fkui pdf 42


        Download 🆓 https://tinurll.com/2uzost



        -

              Background. Systemic fungal infection is one of the causes of morbidity and mortality in neonates, with clinical manifestations resembling those of sepsis. Knowledge of the prevalence, fungal pattern, risk factors, clinical profile, therapy, and clinical outcomes is expected to help reduce the morbidity and mortality of fungal infection in neonates.
              Objective. To determine the prevalence of and risk factors for systemic mycosis in neonates with late-onset sepsis.
              Methods. A retrospective cross-sectional study based on a review of medical records of the Department of Child Health from January 2005 to December 2008.
              Results. One hundred and forty-one neonates had late-onset sepsis; 10 subjects did not meet the inclusion criteria, leaving 131 subjects for analysis. Fifty-five (42%) subjects had proven systemic fungal infection. The most prominent clinical manifestations were respiratory and gastrointestinal infections. The risk factors for fungal infection found in this study were intravenous catheter placement, parenteral nutrition, and a long hospital stay. The characteristic laboratory profile was thrombocytopenia, CRP > 10, and an I/T ratio > 0.2. After multivariate analysis and logistic regression, the significant risk factors were vomiting, absence of diarrhea, and a long hospital stay.
              Conclusion. The prevalence of systemic fungal infection in late-onset sepsis was 42%, with Candida sp. as the cause. The significant risk factors were vomiting, absence of diarrhea, and a long hospital stay.
      

        -

        Sadraei, H., Ghannadi, A., and Malekshahi, K. (2003). Relaxant Effect of Essential Oil of Melissa officinalis and Citral on Rat Ileum contractions. Iran: Isfahan University of Medical Sciences. Fitoterapia 74, 445-452.

        -

              Pneumonia is an infection of the lung tissue caused by bacteria, fungi, viruses, or parasites. Antibiotics are the mainstay of therapy for bacterial pneumonia. The aim of this study was to describe the pattern and appropriateness of antibiotic use in pneumonia patients at the Tulungagung Regional General Hospital in the period January-June 2017. The study used an observational design; data were collected retrospectively from patients' medical records and from antibiotic-use records held by the Pharmacy Installation, and then analyzed descriptively. The results showed that the antibiotics most frequently used in 130 non-ICU inpatients with unspecified pneumonia in the Pulmonary Ward were intravenous levofloxacin (62.71%), ceftriaxone (27.21%), and cefotaxime (5.67%). Appropriateness of antibiotic use was assessed against the therapy guidelines, namely the 2014 Clinical Practice Guidelines of RSUD Dr. Iskak Tulungagung (Pulmonology), the Perhimpunan Dokter Paru Indonesia guidelines (PDPI, 2014), the Infectious Diseases Society of America/American Thoracic Society Consensus Guidelines on the Management of Community-Acquired Pneumonia in Adults (IDSA/ATS, 2014), and the Drug Information Handbook (DIH, 2011); antibiotic use was appropriate in 85.38% of cases for the right antibiotic, 100% for the right dose, 100% for the right frequency, and 42.34% for the right duration of administration. Based on the average of these four appropriateness criteria, rational antibiotic use was 81.93%.
      

        -

        aaccfb2cb3
        -
        -
        \ No newline at end of file diff --git a/spaces/rorallitri/biomedical-language-models/logs/Keygen Point Layout Land Desktop 2013 64 Bit The Best Tool for Designing and Drafting.md b/spaces/rorallitri/biomedical-language-models/logs/Keygen Point Layout Land Desktop 2013 64 Bit The Best Tool for Designing and Drafting.md deleted file mode 100644 index 519d26a1156d1c4996ca3e253a318c9375bda230..0000000000000000000000000000000000000000 --- a/spaces/rorallitri/biomedical-language-models/logs/Keygen Point Layout Land Desktop 2013 64 Bit The Best Tool for Designing and Drafting.md +++ /dev/null @@ -1,6 +0,0 @@ - -

        Mp4gain full crack
        MS Office 2013 SP1 Pro Plus VL X86 MULTi-22 AUG 2018 Gen2 Serial Key keygen
        123 flash chat v10 0 nulled 22
        staad pro free download full version with crack
        Codesoft 9 0 Crack
        Dunkirk (English) hindi 720p free download
        facing the giants download kickass movie
        Race 2 full movie in hindi download hd
        smart2dcutting 3 crack
        Diary of an Oxygen Thief by Anonymous [EPUB]

        -

        Keygen Point Layout Land Desktop 2013 64 Bit


        Download === https://tinurll.com/2uzofG



        -

              On these versions of Windows, you can install the OpenSSH server using PowerShell.
      
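              A minimal sketch of that process from an elevated PowerShell prompt is shown below; the exact version suffix in the OpenSSH.Server capability name varies between Windows builds, so list the available capabilities first and use the name your system reports:

      ```powershell
      # List the OpenSSH capabilities available on this machine (names vary by build)
      Get-WindowsCapability -Online | Where-Object Name -like 'OpenSSH*'

      # Install the server capability (use the exact name reported above)
      Add-WindowsCapability -Online -Name OpenSSH.Server~~~~0.0.1.0

      # Start sshd and have it start automatically on boot
      Start-Service sshd
      Set-Service -Name sshd -StartupType 'Automatic'
      ```
      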

        aaccfb2cb3
        -
        -
        \ No newline at end of file diff --git a/spaces/rstallman/Mayfair-Partner-Music/audiocraft/modules/conv.py b/spaces/rstallman/Mayfair-Partner-Music/audiocraft/modules/conv.py deleted file mode 100644 index 972938ab84712eb06e1b10cea25444eee51d6637..0000000000000000000000000000000000000000 --- a/spaces/rstallman/Mayfair-Partner-Music/audiocraft/modules/conv.py +++ /dev/null @@ -1,245 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import math -import typing as tp -import warnings - -import torch -from torch import nn -from torch.nn import functional as F -from torch.nn.utils import spectral_norm, weight_norm - - -CONV_NORMALIZATIONS = frozenset(['none', 'weight_norm', 'spectral_norm', - 'time_group_norm']) - - -def apply_parametrization_norm(module: nn.Module, norm: str = 'none'): - assert norm in CONV_NORMALIZATIONS - if norm == 'weight_norm': - return weight_norm(module) - elif norm == 'spectral_norm': - return spectral_norm(module) - else: - # We already check was in CONV_NORMALIZATION, so any other choice - # doesn't need reparametrization. - return module - - -def get_norm_module(module: nn.Module, causal: bool = False, norm: str = 'none', **norm_kwargs): - """Return the proper normalization module. If causal is True, this will ensure the returned - module is causal, or return an error if the normalization doesn't support causal evaluation. - """ - assert norm in CONV_NORMALIZATIONS - if norm == 'time_group_norm': - if causal: - raise ValueError("GroupNorm doesn't support causal evaluation.") - assert isinstance(module, nn.modules.conv._ConvNd) - return nn.GroupNorm(1, module.out_channels, **norm_kwargs) - else: - return nn.Identity() - - -def get_extra_padding_for_conv1d(x: torch.Tensor, kernel_size: int, stride: int, - padding_total: int = 0) -> int: - """See `pad_for_conv1d`. - """ - length = x.shape[-1] - n_frames = (length - kernel_size + padding_total) / stride + 1 - ideal_length = (math.ceil(n_frames) - 1) * stride + (kernel_size - padding_total) - return ideal_length - length - - -def pad_for_conv1d(x: torch.Tensor, kernel_size: int, stride: int, padding_total: int = 0): - """Pad for a convolution to make sure that the last window is full. - Extra padding is added at the end. This is required to ensure that we can rebuild - an output of the same length, as otherwise, even with padding, some time steps - might get removed. - For instance, with total padding = 4, kernel size = 4, stride = 2: - 0 0 1 2 3 4 5 0 0 # (0s are padding) - 1 2 3 # (output frames of a convolution, last 0 is never used) - 0 0 1 2 3 4 5 0 # (output of tr. conv., but pos. 5 is going to get removed as padding) - 1 2 3 4 # once you removed padding, we are missing one time step ! - """ - extra_padding = get_extra_padding_for_conv1d(x, kernel_size, stride, padding_total) - return F.pad(x, (0, extra_padding)) - - -def pad1d(x: torch.Tensor, paddings: tp.Tuple[int, int], mode: str = 'constant', value: float = 0.): - """Tiny wrapper around F.pad, just to allow for reflect padding on small input. - If this is the case, we insert extra 0 padding to the right before the reflection happen. 
- """ - length = x.shape[-1] - padding_left, padding_right = paddings - assert padding_left >= 0 and padding_right >= 0, (padding_left, padding_right) - if mode == 'reflect': - max_pad = max(padding_left, padding_right) - extra_pad = 0 - if length <= max_pad: - extra_pad = max_pad - length + 1 - x = F.pad(x, (0, extra_pad)) - padded = F.pad(x, paddings, mode, value) - end = padded.shape[-1] - extra_pad - return padded[..., :end] - else: - return F.pad(x, paddings, mode, value) - - -def unpad1d(x: torch.Tensor, paddings: tp.Tuple[int, int]): - """Remove padding from x, handling properly zero padding. Only for 1d! - """ - padding_left, padding_right = paddings - assert padding_left >= 0 and padding_right >= 0, (padding_left, padding_right) - assert (padding_left + padding_right) <= x.shape[-1] - end = x.shape[-1] - padding_right - return x[..., padding_left: end] - - -class NormConv1d(nn.Module): - """Wrapper around Conv1d and normalization applied to this conv - to provide a uniform interface across normalization approaches. - """ - def __init__(self, *args, causal: bool = False, norm: str = 'none', - norm_kwargs: tp.Dict[str, tp.Any] = {}, **kwargs): - super().__init__() - self.conv = apply_parametrization_norm(nn.Conv1d(*args, **kwargs), norm) - self.norm = get_norm_module(self.conv, causal, norm, **norm_kwargs) - self.norm_type = norm - - def forward(self, x): - x = self.conv(x) - x = self.norm(x) - return x - - -class NormConv2d(nn.Module): - """Wrapper around Conv2d and normalization applied to this conv - to provide a uniform interface across normalization approaches. - """ - def __init__(self, *args, norm: str = 'none', norm_kwargs: tp.Dict[str, tp.Any] = {}, **kwargs): - super().__init__() - self.conv = apply_parametrization_norm(nn.Conv2d(*args, **kwargs), norm) - self.norm = get_norm_module(self.conv, causal=False, norm=norm, **norm_kwargs) - self.norm_type = norm - - def forward(self, x): - x = self.conv(x) - x = self.norm(x) - return x - - -class NormConvTranspose1d(nn.Module): - """Wrapper around ConvTranspose1d and normalization applied to this conv - to provide a uniform interface across normalization approaches. - """ - def __init__(self, *args, causal: bool = False, norm: str = 'none', - norm_kwargs: tp.Dict[str, tp.Any] = {}, **kwargs): - super().__init__() - self.convtr = apply_parametrization_norm(nn.ConvTranspose1d(*args, **kwargs), norm) - self.norm = get_norm_module(self.convtr, causal, norm, **norm_kwargs) - self.norm_type = norm - - def forward(self, x): - x = self.convtr(x) - x = self.norm(x) - return x - - -class NormConvTranspose2d(nn.Module): - """Wrapper around ConvTranspose2d and normalization applied to this conv - to provide a uniform interface across normalization approaches. - """ - def __init__(self, *args, norm: str = 'none', norm_kwargs: tp.Dict[str, tp.Any] = {}, **kwargs): - super().__init__() - self.convtr = apply_parametrization_norm(nn.ConvTranspose2d(*args, **kwargs), norm) - self.norm = get_norm_module(self.convtr, causal=False, norm=norm, **norm_kwargs) - - def forward(self, x): - x = self.convtr(x) - x = self.norm(x) - return x - - -class StreamableConv1d(nn.Module): - """Conv1d with some builtin handling of asymmetric or causal padding - and normalization. 
- """ - def __init__(self, in_channels: int, out_channels: int, - kernel_size: int, stride: int = 1, dilation: int = 1, - groups: int = 1, bias: bool = True, causal: bool = False, - norm: str = 'none', norm_kwargs: tp.Dict[str, tp.Any] = {}, - pad_mode: str = 'reflect'): - super().__init__() - # warn user on unusual setup between dilation and stride - if stride > 1 and dilation > 1: - warnings.warn('StreamableConv1d has been initialized with stride > 1 and dilation > 1' - f' (kernel_size={kernel_size} stride={stride}, dilation={dilation}).') - self.conv = NormConv1d(in_channels, out_channels, kernel_size, stride, - dilation=dilation, groups=groups, bias=bias, causal=causal, - norm=norm, norm_kwargs=norm_kwargs) - self.causal = causal - self.pad_mode = pad_mode - - def forward(self, x): - B, C, T = x.shape - kernel_size = self.conv.conv.kernel_size[0] - stride = self.conv.conv.stride[0] - dilation = self.conv.conv.dilation[0] - kernel_size = (kernel_size - 1) * dilation + 1 # effective kernel size with dilations - padding_total = kernel_size - stride - extra_padding = get_extra_padding_for_conv1d(x, kernel_size, stride, padding_total) - if self.causal: - # Left padding for causal - x = pad1d(x, (padding_total, extra_padding), mode=self.pad_mode) - else: - # Asymmetric padding required for odd strides - padding_right = padding_total // 2 - padding_left = padding_total - padding_right - x = pad1d(x, (padding_left, padding_right + extra_padding), mode=self.pad_mode) - return self.conv(x) - - -class StreamableConvTranspose1d(nn.Module): - """ConvTranspose1d with some builtin handling of asymmetric or causal padding - and normalization. - """ - def __init__(self, in_channels: int, out_channels: int, - kernel_size: int, stride: int = 1, causal: bool = False, - norm: str = 'none', trim_right_ratio: float = 1., - norm_kwargs: tp.Dict[str, tp.Any] = {}): - super().__init__() - self.convtr = NormConvTranspose1d(in_channels, out_channels, kernel_size, stride, - causal=causal, norm=norm, norm_kwargs=norm_kwargs) - self.causal = causal - self.trim_right_ratio = trim_right_ratio - assert self.causal or self.trim_right_ratio == 1., \ - "`trim_right_ratio` != 1.0 only makes sense for causal convolutions" - assert self.trim_right_ratio >= 0. and self.trim_right_ratio <= 1. - - def forward(self, x): - kernel_size = self.convtr.convtr.kernel_size[0] - stride = self.convtr.convtr.stride[0] - padding_total = kernel_size - stride - - y = self.convtr(x) - - # We will only trim fixed padding. Extra padding from `pad_for_conv1d` would be - # removed at the very end, when keeping only the right length for the output, - # as removing it here would require also passing the length at the matching layer - # in the encoder. 
- if self.causal: - # Trim the padding on the right according to the specified ratio - # if trim_right_ratio = 1.0, trim everything from right - padding_right = math.ceil(padding_total * self.trim_right_ratio) - padding_left = padding_total - padding_right - y = unpad1d(y, (padding_left, padding_right)) - else: - # Asymmetric padding required for odd strides - padding_right = padding_total // 2 - padding_left = padding_total - padding_right - y = unpad1d(y, (padding_left, padding_right)) - return y diff --git a/spaces/rstallman/Mayfair-Partner-Music/audiocraft/modules/transformer.py b/spaces/rstallman/Mayfair-Partner-Music/audiocraft/modules/transformer.py deleted file mode 100644 index e69cca829d774d0b8b36c0de9b7924373da81b43..0000000000000000000000000000000000000000 --- a/spaces/rstallman/Mayfair-Partner-Music/audiocraft/modules/transformer.py +++ /dev/null @@ -1,747 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -""" -Transformer model, with streaming support, xformer attention support -and easy causal attention with a potentially finite receptive field. - -See `StreamingTransformer` for more information. - -Unlike regular PyTorch Transformer, we make the hard choice that batches are first. -""" - -import typing as tp - -from einops import rearrange -import torch -import torch.nn as nn -from torch.nn import functional as F -from torch.utils.checkpoint import checkpoint as torch_checkpoint -from xformers import ops - -from .rope import RotaryEmbedding -from .streaming import StreamingModule - -_efficient_attention_backend: str = 'torch' - - -def set_efficient_attention_backend(backend: str = 'torch'): - # Using torch by default, it seems a bit faster on older P100 GPUs (~20% faster). - global _efficient_attention_backend - assert _efficient_attention_backend in ['xformers', 'torch'] - _efficient_attention_backend = backend - - -def _get_attention_time_dimension() -> int: - if _efficient_attention_backend == 'torch': - return 2 - else: - return 1 - - -def _is_profiled() -> bool: - # Return true if we are currently running with a xformers profiler activated. - try: - from xformers.profiler import profiler - except ImportError: - return False - return profiler._Profiler._CURRENT_PROFILER is not None - - -def create_norm_fn(norm_type: str, dim: int, **kwargs) -> nn.Module: - """Create normalization module for transformer encoder layer. - - Args: - norm_type (str): Normalization method. - dim (int): Dimension of the normalized layer. - **kwargs (dict): Additional parameters for normalization layer. - Returns: - nn.Module: Normalization module. - """ - if norm_type == 'layer_norm': - return nn.LayerNorm(dim, eps=1e-5, **kwargs) - else: - raise ValueError(f"Unknown norm type: {norm_type}") - - -def create_sin_embedding(positions: torch.Tensor, dim: int, max_period: float = 10000, - dtype: torch.dtype = torch.float32) -> torch.Tensor: - """Create sinusoidal positional embedding, with shape `[B, T, C]`. - - Args: - positions (torch.Tensor): LongTensor of positions. - dim (int): Dimension of the embedding. - max_period (float): Maximum period of the cosine/sine functions. - dtype (torch.dtype or str): dtype to use to generate the embedding. - Returns: - torch.Tensor: Sinusoidal positional embedding. 
- """ - # We aim for BTC format - assert dim % 2 == 0 - half_dim = dim // 2 - positions = positions.to(dtype) - adim = torch.arange(half_dim, device=positions.device, dtype=dtype).view(1, 1, -1) - max_period_tensor = torch.full([], max_period, device=positions.device, dtype=dtype) # avoid sync point - phase = positions / (max_period_tensor ** (adim / (half_dim - 1))) - return torch.cat([torch.cos(phase), torch.sin(phase)], dim=-1) - - -def expand_repeated_kv(x: torch.Tensor, n_rep: int) -> torch.Tensor: - """torch.repeat_interleave(x, dim=2, repeats=n_rep) from xlformers""" - if n_rep == 1: - return x - if _efficient_attention_backend == 'torch': - bs, n_kv_heads, slen, head_dim = x.shape - return ( - x[:, :, None, :, :] - .expand(bs, n_kv_heads, n_rep, slen, head_dim) - .reshape(bs, n_kv_heads * n_rep, slen, head_dim) - ) - else: - bs, slen, n_kv_heads, head_dim = x.shape - return ( - x[:, :, :, None, :] - .expand(bs, slen, n_kv_heads, n_rep, head_dim) - .reshape(bs, slen, n_kv_heads * n_rep, head_dim) - ) - - -class LayerScale(nn.Module): - """Layer scale from [Touvron et al 2021] (https://arxiv.org/pdf/2103.17239.pdf). - This rescales diagonaly the residual outputs close to 0, with a learnt scale. - - Args: - channels (int): Number of channels. - init (float): Initial scale. - channel_last (bool): If True, expect `[*, C]` shaped tensors, otherwise, `[*, C, T]`. - device (torch.device or None): Device on which to initialize the module. - dtype (torch.dtype or None): dtype to use to initialize the module. - """ - def __init__(self, channels: int, init: float = 1e-4, channel_last: bool = True, - device=None, dtype=None): - super().__init__() - self.channel_last = channel_last - self.scale = nn.Parameter( - torch.full((channels,), init, - requires_grad=True, device=device, dtype=dtype)) - - def forward(self, x: torch.Tensor): - if self.channel_last: - return self.scale * x - else: - return self.scale[:, None] * x - - -class StreamingMultiheadAttention(StreamingModule): - """Similar to `nn.MultiheadAttention` but with support for streaming, causal evaluation. - - Args: - embed_dim (int): Dimension to project to. - num_heads (int): Number of heads. - dropout (float): Dropout level. - bias (bool): Use bias in projections. - causal (bool): Causal mask applied automatically. - past_context (int or None): Receptive field for the causal mask, infinite if None. - custom (bool): Use custom MHA implementation, for testing / benchmarking. - memory_efficient (bool): Use xformers based memory efficient attention. - attention_as_float32 (bool): Perform the attention as float32 - (especially important with memory_efficient as autocast won't do this automatically). - rope (`RotaryEmbedding` or None): Rope embedding to use. - cross_attention: Should be true when used as a cross attention. - All keys and values must be available at once, streaming is only for the queries. - Cannot be used with `causal` or `rope` (as it wouldn't make sens to - intepret the time steps in the keys relative to those in the queries). - safe_streaming (bool): Bug fix, will go away with xformers update. - qk_layer_norm (bool): Layer normalization applied to queries and keys before dot product. - kv_repeat (int): If > 1, will repeat keys and queries multiple times (need to divide num_heads). - This will lead to faster decoding time on A100 or other GPUs with tensorcore. - device (torch.device or None): Sevice on which to initialize. - dtype (torch.dtype or None): dtype to use. 
- """ - def __init__(self, embed_dim: int, num_heads: int, dropout: float = 0.0, bias: bool = True, - causal: bool = False, past_context: tp.Optional[int] = None, custom: bool = False, - memory_efficient: bool = False, attention_as_float32: bool = False, - rope: tp.Optional[RotaryEmbedding] = None, cross_attention: bool = False, - safe_streaming: bool = True, qk_layer_norm: bool = False, kv_repeat: int = 1, - device=None, dtype=None): - super().__init__() - factory_kwargs = {'device': device, 'dtype': dtype} - if past_context is not None: - assert causal - - self.embed_dim = embed_dim - self.causal = causal - self.past_context = past_context - self.memory_efficient = memory_efficient - self.attention_as_float32 = attention_as_float32 - self.rope = rope - self.cross_attention = cross_attention - self.safe_streaming = safe_streaming - self.num_heads = num_heads - self.dropout = dropout - self.kv_repeat = kv_repeat - if cross_attention: - assert not causal, "Causal cannot work with cross attention." - assert rope is None, "Rope cannot work with cross attention." - - if memory_efficient: - _verify_xformers_memory_efficient_compat() - - self.custom = _is_custom(custom, memory_efficient) - if self.custom: - out_dim = embed_dim - assert num_heads % kv_repeat == 0 - assert not cross_attention or kv_repeat == 1 - num_kv = num_heads // kv_repeat - kv_dim = (embed_dim // num_heads) * num_kv - out_dim += 2 * kv_dim - in_proj = nn.Linear(embed_dim, out_dim, bias=bias, **factory_kwargs) - # We try to follow the default PyTorch MHA convention, to easily compare results. - self.in_proj_weight = in_proj.weight - self.in_proj_bias = in_proj.bias - if bias: - self.in_proj_bias.data.zero_() # Following Pytorch convention - self.out_proj = nn.Linear(embed_dim, embed_dim, bias=bias, **factory_kwargs) - if bias: - self.out_proj.bias.data.zero_() - else: - assert not qk_layer_norm - assert kv_repeat == 1 - self.mha = nn.MultiheadAttention( - embed_dim, num_heads, dropout=dropout, bias=bias, batch_first=True, - **factory_kwargs) - self.qk_layer_norm = qk_layer_norm - if qk_layer_norm: - assert self.custom - assert kv_repeat == 1 - ln_dim = embed_dim - self.q_layer_norm = nn.LayerNorm(ln_dim) - self.k_layer_norm = nn.LayerNorm(ln_dim) - - def _load_from_state_dict(self, state_dict, prefix, *args, **kwargs): - if not self.custom: - # Support compat with regular MHA - keys = [n for n, _ in self.mha.named_parameters()] - for key in keys: - if prefix + key in state_dict: - state_dict[prefix + "mha." + key] = state_dict.pop(prefix + key) - super()._load_from_state_dict(state_dict, prefix, *args, **kwargs) - - def _get_mask(self, current_steps: int, device: torch.device, dtype: torch.dtype): - # Return a causal mask, accounting for potentially stored past keys/values - # We actually return a bias for the attention score, as this has the same - # convention both in the builtin MHA in Pytorch, and Xformers functions. - time_dim = _get_attention_time_dimension() - if self.memory_efficient: - from xformers.ops import LowerTriangularMask - if current_steps == 1: - # If we only have one step, then we do not need a mask. 
- return None - elif 'past_keys' in self._streaming_state: - raise RuntimeError('Not supported at the moment') - else: - # Then we can safely use a lower triangular mask - return LowerTriangularMask() - if self._streaming_state: - past_keys = self._streaming_state['past_keys'] - past_steps = past_keys.shape[time_dim] - else: - past_steps = 0 - - queries_pos = torch.arange( - past_steps, current_steps + past_steps, device=device).view(-1, 1) - keys_pos = torch.arange(past_steps + current_steps, device=device).view(1, -1) - delta = queries_pos - keys_pos - valid = delta >= 0 - if self.past_context is not None: - valid &= (delta <= self.past_context) - return torch.where( - valid, - torch.zeros([], device=device, dtype=dtype), - torch.full([], float('-inf'), device=device, dtype=dtype)) - - def _complete_kv(self, k, v): - time_dim = _get_attention_time_dimension() - if self.cross_attention: - # With cross attention we assume all keys and values - # are already available, and streaming is with respect - # to the queries only. - return k, v - # Complete the key/value pair using the streaming state. - if self._streaming_state: - pk = self._streaming_state['past_keys'] - nk = torch.cat([pk, k], dim=time_dim) - if v is k: - nv = nk - else: - pv = self._streaming_state['past_values'] - nv = torch.cat([pv, v], dim=time_dim) - else: - nk = k - nv = v - - assert nk.shape[time_dim] == nv.shape[time_dim] - offset = 0 - if self.past_context is not None: - offset = max(0, nk.shape[time_dim] - self.past_context) - if self._is_streaming: - self._streaming_state['past_keys'] = nk[:, offset:] - if v is not k: - self._streaming_state['past_values'] = nv[:, offset:] - if 'offset' in self._streaming_state: - self._streaming_state['offset'] += offset - else: - self._streaming_state['offset'] = torch.tensor(0) - return nk, nv - - def _apply_rope(self, query: torch.Tensor, key: torch.Tensor): - # TODO: fix and verify layout. - assert _efficient_attention_backend == 'xformers', 'Rope not supported with torch attn.' - # Apply rope embeddings to query and key tensors. - assert self.rope is not None - if 'past_keys' in self._streaming_state: - past_keys_offset = self._streaming_state['past_keys'].shape[1] - else: - past_keys_offset = 0 - if 'offset' in self._streaming_state: - past_context_offset = int(self._streaming_state['offset'].item()) - else: - past_context_offset = 0 - streaming_offset = past_context_offset + past_keys_offset - return self.rope.rotate_qk(query, key, start=streaming_offset) - - def forward(self, query: torch.Tensor, key: torch.Tensor, value: torch.Tensor, - key_padding_mask=None, need_weights=False, attn_mask=None, - average_attn_weights=True, is_causal=False): - assert attn_mask is None - assert not is_causal, ("new param added in torch 2.0.1 not supported, " - "use the causal args in the constructor.") - - time_dim = _get_attention_time_dimension() - if time_dim == 2: - layout = "b h t d" - else: - layout = "b t h d" - dtype = query.dtype - if self._is_streaming: - assert self.causal or self.cross_attention, \ - "Streaming only available for causal or cross attention" - - if self.causal: - # At the moment we specialize only for the self-attention case. 
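For reference, the additive bias produced by `_get_mask` can be reproduced in isolation; this sketch drops the streaming-state bookkeeping and only shows the (query, key) geometry of the causal mask with a finite `past_context`.

```python
import torch

def causal_bias(past_steps: int, current_steps: int, past_context=None,
                dtype=torch.float32):
    # Queries are the newest `current_steps` positions; keys cover the cached
    # past plus the current chunk, mirroring _get_mask above.
    queries_pos = torch.arange(past_steps, past_steps + current_steps).view(-1, 1)
    keys_pos = torch.arange(past_steps + current_steps).view(1, -1)
    delta = queries_pos - keys_pos
    valid = delta >= 0                       # never attend to the future
    if past_context is not None:
        valid &= delta <= past_context       # finite receptive field
    return torch.where(valid,
                       torch.zeros([], dtype=dtype),
                       torch.full([], float('-inf'), dtype=dtype))

# 2 cached steps, 3 new steps, receptive field limited to 2 steps back
print(causal_bias(2, 3, past_context=2))
```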
- assert query.shape[1] == key.shape[1], "Causal only for same length query / key / value" - assert value.shape[1] == key.shape[1], "Causal only for same length query / key / value" - attn_mask = self._get_mask(query.shape[1], query.device, query.dtype) - - if self.custom: - # custom implementation - assert need_weights is False - assert key_padding_mask is None - if self.cross_attention: - # Different queries, keys, values, we have to spit manually the weights - # before applying the linear. - dim = self.in_proj_weight.shape[0] // 3 - if self.in_proj_bias is None: - bias_q, bias_k, bias_v = None, None, None - else: - bias_q = self.in_proj_bias[:dim] - bias_k = self.in_proj_bias[dim: 2 * dim] - bias_v = self.in_proj_bias[2 * dim:] - q = nn.functional.linear(query, self.in_proj_weight[:dim], bias_q) - # todo: when streaming, we could actually save k, v and check the shape actually match. - k = nn.functional.linear(key, self.in_proj_weight[dim: 2 * dim], bias_k) - v = nn.functional.linear(value, self.in_proj_weight[2 * dim:], bias_v) - if self.qk_layer_norm is True: - q = self.q_layer_norm(q) - k = self.k_layer_norm(k) - q, k, v = [rearrange(x, f"b t (h d) -> {layout}", h=self.num_heads) for x in [q, k, v]] - else: - if not _is_profiled(): - # profiling breaks that propertysomehow. - assert query is key, "specialized implementation" - assert value is key, "specialized implementation" - projected = nn.functional.linear(query, self.in_proj_weight, self.in_proj_bias) - if self.kv_repeat == 1: - if time_dim == 2: - bound_layout = "b h p t d" - else: - bound_layout = "b t p h d" - packed = rearrange(projected, f"b t (p h d) -> {bound_layout}", p=3, h=self.num_heads) - q, k, v = ops.unbind(packed, dim=2) - else: - embed_dim = self.embed_dim - per_head_dim = (embed_dim // self.num_heads) - kv_heads = self.num_heads // self.kv_repeat - q = projected[:, :, :embed_dim] - start = embed_dim - end = start + per_head_dim * kv_heads - k = projected[:, :, start: end] - v = projected[:, :, end:] - q = rearrange(q, f"b t (h d) -> {layout}", h=self.num_heads) - k = rearrange(k, f"b t (h d) -> {layout}", h=kv_heads) - v = rearrange(v, f"b t (h d) -> {layout}", h=kv_heads) - - if self.qk_layer_norm is True: - assert self.kv_repeat == 1 - q, k = [rearrange(x, f"{layout} -> b t (h d)") for x in [q, k]] - q = self.q_layer_norm(q) - k = self.k_layer_norm(k) - q, k = [rearrange(x, f"b t (h d) -> {layout}", h=self.num_heads) for x in [q, k]] - if self.rope: - q, k = self._apply_rope(q, k) - k, v = self._complete_kv(k, v) - if self.kv_repeat > 1: - k = expand_repeated_kv(k, self.kv_repeat) - v = expand_repeated_kv(v, self.kv_repeat) - if self.attention_as_float32: - q, k, v = [x.float() for x in [q, k, v]] - if self.memory_efficient: - p = self.dropout if self.training else 0 - if _efficient_attention_backend == 'torch': - x = torch.nn.functional.scaled_dot_product_attention( - q, k, v, is_causal=attn_mask is not None, dropout_p=p) - else: - x = ops.memory_efficient_attention(q, k, v, attn_mask, p=p) - else: - # We include the dot product as float32, for consistency - # with the other implementations that include that step - # as part of the attention. Note that when using `autocast`, - # the einsums would be done as bfloat16, but the softmax - # would be done as bfloat16, so `attention_as_float32` will - # extend a bit the range of operations done in float32, - # although this should make no difference. 
- q = q / q.shape[-1] ** 0.5 - key_layout = layout.replace('t', 'k') - query_layout = layout - if self._is_streaming and self.safe_streaming and q.device.type == 'cuda': - with torch.autocast(device_type=q.device.type, dtype=torch.float32): - pre_w = torch.einsum(f"{query_layout},{key_layout}-> b h t k", q, k) - else: - pre_w = torch.einsum(f"{query_layout},{key_layout}-> b h t k", q, k) - if attn_mask is not None: - pre_w = pre_w + attn_mask - w = torch.softmax(pre_w, dim=-1) - w = F.dropout(w, self.dropout, training=self.training).to(v) - # Key and value have the same format. - x = torch.einsum(f"b h t k, {key_layout} -> {layout}", w, v) - x = x.to(dtype) - x = rearrange(x, f"{layout} -> b t (h d)", h=self.num_heads) - x = self.out_proj(x) - else: - key, value = self._complete_kv(key, value) - if self.attention_as_float32: - query, key, value = [x.float() for x in [query, key, value]] - x, _ = self.mha( - query, key, value, key_padding_mask, - need_weights, attn_mask, average_attn_weights) - x = x.to(dtype) - - return x, None - - -class StreamingTransformerLayer(nn.TransformerEncoderLayer): - """TransformerLayer with Streaming / Causal support. - This also integrates cross_attention, when passing `cross_attention=True`, - rather than having two separate classes like in PyTorch. - - Args: - d_model (int): Dimension of the data. - num_heads (int): Number of heads. - dim_feedforward (int): Intermediate dimension of FF module. - dropout (float): Dropout both for MHA and FF. - bias_ff (bool): Use bias for FF. - bias_attn (bool): Use bias for MHA. - causal (bool): Causal mask applied automatically. - past_context (int or None): Receptive field for the causal mask, infinite if None. - custom (bool): Use custom MHA implementation, for testing / benchmarking. - memory_efficient (bool): Use xformers based memory efficient attention. - attention_as_float32 (bool): Perform the attention as float32 - (especially important with memory_efficient as autocast won't do this automatically). - qk_layer_norm (bool): Layer normalization applied to queries and keys before dot product in attention. - qk_layer_norm_cross (bool): Same for the cross attention. - cross_attention (bool): If True, expect to get secondary input for cross-attention. - Cross attention will use the default MHA, as it typically won't require - special treatment. - layer_scale (float or None): If not None, LayerScale will be used with - the given value as initial scale. - rope (`RotaryEmbedding` or None): Rope embedding to use. - attention_dropout (float or None): If not None, separate the value of the dimension dropout - in FFN and of the attention dropout. - kv_repeat (int): If > 1, will repeat keys and queries multiple times (need to divide num_heads). - This will lead to faster decoding time on A100 or other GPUs with tensorcore. - device (torch.device or None): Device on which to initialize. - dtype (torch.dtype or None): dtype to use. - **kwargs: See `nn.TransformerEncoderLayer`. 
- """ - def __init__(self, d_model: int, num_heads: int, dim_feedforward: int = 2048, dropout: float = 0.1, - bias_ff: bool = True, bias_attn: bool = True, causal: bool = False, - past_context: tp.Optional[int] = None, custom: bool = False, - memory_efficient: bool = False, attention_as_float32: bool = False, - qk_layer_norm: bool = False, qk_layer_norm_cross: bool = False, - cross_attention: bool = False, layer_scale: tp.Optional[float] = None, - rope: tp.Optional[RotaryEmbedding] = None, attention_dropout: tp.Optional[float] = None, - kv_repeat: int = 1, norm: str = 'layer_norm', device=None, dtype=None, **kwargs): - super().__init__(d_model, num_heads, dim_feedforward, dropout, - device=device, dtype=dtype, batch_first=True, **kwargs) - factory_kwargs = {'device': device, 'dtype': dtype} - # Redefine self_attn to our streaming multi-head attention - attn_kwargs: tp.Dict[str, tp.Any] = { - 'embed_dim': d_model, - 'num_heads': num_heads, - 'dropout': dropout if attention_dropout is None else attention_dropout, - 'bias': bias_attn, - 'custom': custom, - 'memory_efficient': memory_efficient, - 'attention_as_float32': attention_as_float32, - } - self.self_attn: StreamingMultiheadAttention = StreamingMultiheadAttention( - causal=causal, past_context=past_context, rope=rope, qk_layer_norm=qk_layer_norm, - kv_repeat=kv_repeat, **attn_kwargs, **factory_kwargs) # type: ignore - # Redefine feedforward layers to expose bias parameter - self.linear1 = nn.Linear(d_model, dim_feedforward, bias=bias_ff, **factory_kwargs) - self.linear2 = nn.Linear(dim_feedforward, d_model, bias=bias_ff, **factory_kwargs) - - self.layer_scale_1: nn.Module - self.layer_scale_2: nn.Module - if layer_scale is None: - self.layer_scale_1 = nn.Identity() - self.layer_scale_2 = nn.Identity() - else: - self.layer_scale_1 = LayerScale(d_model, layer_scale, **factory_kwargs) - self.layer_scale_2 = LayerScale(d_model, layer_scale, **factory_kwargs) - - self.cross_attention: tp.Optional[nn.Module] = None - if cross_attention: - self.cross_attention = StreamingMultiheadAttention( - cross_attention=True, qk_layer_norm=qk_layer_norm_cross, - **attn_kwargs, **factory_kwargs) - # Norm and dropout - self.dropout_cross = nn.Dropout(dropout) - # eps value matching that used in PyTorch reference implementation. - self.norm_cross = nn.LayerNorm(d_model, eps=1e-5, **factory_kwargs) - self.layer_scale_cross: nn.Module - if layer_scale is None: - self.layer_scale_cross = nn.Identity() - else: - self.layer_scale_cross = LayerScale(d_model, layer_scale, **factory_kwargs) - self.norm1 = create_norm_fn(norm, d_model, **factory_kwargs) # type: ignore - self.norm2 = create_norm_fn(norm, d_model, **factory_kwargs) # type: ignore - - def _cross_attention_block(self, src: torch.Tensor, - cross_attention_src: torch.Tensor) -> torch.Tensor: - assert self.cross_attention is not None - # queries are from src, keys and values from cross_attention_src. 
- x = self.cross_attention( - src, cross_attention_src, cross_attention_src, need_weights=False)[0] - return self.dropout_cross(x) # type: ignore - - def forward(self, src: torch.Tensor, src_mask: tp.Optional[torch.Tensor] = None, # type: ignore - src_key_padding_mask: tp.Optional[torch.Tensor] = None, - cross_attention_src: tp.Optional[torch.Tensor] = None): - if self.cross_attention is None: - assert cross_attention_src is None - else: - assert cross_attention_src is not None - x = src - if self.norm_first: - x = x + self.layer_scale_1( - self._sa_block(self.norm1(x), src_mask, src_key_padding_mask)) - if cross_attention_src is not None: - x = x + self.layer_scale_cross( - self._cross_attention_block( - self.norm_cross(x), cross_attention_src)) - x = x + self.layer_scale_2(self._ff_block(self.norm2(x))) - else: - x = self.norm1(x + self.layer_scale_1( - self._sa_block(x, src_mask, src_key_padding_mask))) - if cross_attention_src is not None: - x = self.norm_cross( - x + self.layer_scale_cross( - self._cross_attention_block(src, cross_attention_src))) - x = self.norm2(x + self.layer_scale_2(self._ff_block(x))) - return x - - -class StreamingTransformer(StreamingModule): - """Transformer with Streaming / Causal support. - - Args: - d_model (int): Dimension of the data. - num_heads (int): Number of heads. - dim_feedforward (int): Intermediate dimension of FF module. - dropout (float): Dropout both for MHA and FF. - bias_ff (bool): Use bias for FF. - bias_attn (bool): Use bias for MHA. - causal (bool): Causal mask applied automatically. - past_context (int or None): Receptive field for the causal mask, infinite if None. - custom (bool): Use custom MHA implementation, for testing / benchmarking. - memory_efficient (bool): Use xformers based memory efficient attention. - attention_as_float32 (bool): Perform the attention as float32 - (especially important with memory_efficient as autocast won't do this automatically). - cross_attention (bool): If True, expect to get secondary input for cross-attention. - layer_scale (float or None): If not None, LayerScale will be used - with the given value as initial scale. - positional_embedding (str): Positional embedding strategy (sin, rope, or sin_rope). - max_period (float): Maximum period of the time embedding. - positional_scale (float): Scale of positional embedding, set to 0 to deactivate. - xpos (bool): Apply xpos exponential decay to positional embedding (rope only). - lr (float or None): learning rate override through the `make_optim_group` API. - weight_decay (float or None): Weight_decay override through the `make_optim_group` API. - layer_class: (subclass of `StreamingTransformerLayer): class to use - to initialize the layers, allowing further customization outside of Audiocraft. - checkpointing (str): Checkpointing strategy to reduce memory usage. - No checkpointing if set to 'none'. Per layer checkpointing using PyTorch - if set to 'torch' (entire layer checkpointed, i.e. linears are evaluated twice, - minimal memory usage, but maximal runtime). Finally, `xformers_default` provide - a policy for opting-out some operations of the checkpointing like - linear layers and attention, providing a middle ground between speed and memory. - device (torch.device or None): Device on which to initialize. - dtype (torch.dtype or None): dtype to use. - **kwargs: See `nn.TransformerEncoderLayer`. 
- """ - def __init__(self, d_model: int, num_heads: int, num_layers: int, dim_feedforward: int = 2048, - dropout: float = 0.1, bias_ff: bool = True, bias_attn: bool = True, - causal: bool = False, past_context: tp.Optional[int] = None, - custom: bool = False, memory_efficient: bool = False, attention_as_float32: bool = False, - cross_attention: bool = False, layer_scale: tp.Optional[float] = None, - positional_embedding: str = 'sin', max_period: float = 10_000, positional_scale: float = 1., - xpos: bool = False, lr: tp.Optional[float] = None, weight_decay: tp.Optional[float] = None, - layer_class: tp.Type[StreamingTransformerLayer] = StreamingTransformerLayer, - checkpointing: str = 'none', device=None, dtype=None, **kwargs): - super().__init__() - assert d_model % num_heads == 0 - - self.positional_embedding = positional_embedding - self.max_period = max_period - self.positional_scale = positional_scale - self.weight_decay = weight_decay - self.lr = lr - - assert positional_embedding in ['sin', 'rope', 'sin_rope'] - self.rope: tp.Optional[RotaryEmbedding] = None - if self.positional_embedding in ['rope', 'sin_rope']: - assert _is_custom(custom, memory_efficient) - self.rope = RotaryEmbedding(d_model // num_heads, max_period=max_period, - xpos=xpos, scale=positional_scale, device=device) - - self.checkpointing = checkpointing - - assert checkpointing in ['none', 'torch', 'xformers_default', 'xformers_mm'] - if self.checkpointing.startswith('xformers'): - _verify_xformers_internal_compat() - - self.layers = nn.ModuleList() - for idx in range(num_layers): - self.layers.append( - layer_class( - d_model=d_model, num_heads=num_heads, dim_feedforward=dim_feedforward, - dropout=dropout, bias_ff=bias_ff, bias_attn=bias_attn, - causal=causal, past_context=past_context, custom=custom, - memory_efficient=memory_efficient, attention_as_float32=attention_as_float32, - cross_attention=cross_attention, layer_scale=layer_scale, rope=self.rope, - device=device, dtype=dtype, **kwargs)) - - if self.checkpointing != 'none': - for layer in self.layers: - # see audiocraft/optim/fsdp.py, magic signal to indicate this requires fixing the - # backward hook inside of FSDP... - layer._magma_checkpointed = True # type: ignore - assert layer.layer_drop == 0., "Need further checking" # type: ignore - - def _apply_layer(self, layer, *args, **kwargs): - method = self.checkpointing - if method == 'none': - return layer(*args, **kwargs) - elif method == 'torch': - return torch_checkpoint(layer, *args, use_reentrant=False, **kwargs) - elif method.startswith('xformers'): - from xformers.checkpoint_fairinternal import checkpoint, _get_default_policy - if method == 'xformers_default': - # those operations will be saved, and not recomputed. - # According to Francisco we can get smarter policies but this is a good start. - allow_list = [ - "xformers.efficient_attention_forward_cutlass.default", - "xformers_flash.flash_fwd.default", - "aten.addmm.default", - "aten.mm.default", - ] - elif method == 'xformers_mm': - # those operations will be saved, and not recomputed. - # According to Francisco we can get smarter policies but this is a good start. 
- allow_list = [ - "aten.addmm.default", - "aten.mm.default", - ] - else: - raise ValueError(f"xformers checkpointing xformers policy {method} is not known.") - policy_fn = _get_default_policy(allow_list) - return checkpoint(layer, *args, policy_fn=policy_fn, **kwargs) - else: - raise ValueError(f"Checkpointing method {method} is unknown.") - - def forward(self, x: torch.Tensor, *args, **kwargs): - B, T, C = x.shape - - if 'offsets' in self._streaming_state: - offsets = self._streaming_state['offsets'] - else: - offsets = torch.zeros(B, dtype=torch.long, device=x.device) - - if self.positional_embedding in ['sin', 'sin_rope']: - positions = torch.arange(T, device=x.device).view(1, -1, 1) - positions = positions + offsets.view(-1, 1, 1) - pos_emb = create_sin_embedding(positions, C, max_period=self.max_period, dtype=x.dtype) - x = x + self.positional_scale * pos_emb - - for layer in self.layers: - x = self._apply_layer(layer, x, *args, **kwargs) - - if self._is_streaming: - self._streaming_state['offsets'] = offsets + T - - return x - - def make_optim_group(self): - group = {"params": list(self.parameters())} - if self.lr is not None: - group["lr"] = self.lr - if self.weight_decay is not None: - group["weight_decay"] = self.weight_decay - return group - - -# special attention attention related function - -def _verify_xformers_memory_efficient_compat(): - try: - from xformers.ops import memory_efficient_attention, LowerTriangularMask # noqa - except ImportError: - raise ImportError( - "xformers is not installed. Please install it and try again.\n" - "To install on AWS and Azure, run \n" - "FORCE_CUDA=1 TORCH_CUDA_ARCH_LIST='8.0'\\\n" - "pip install -U git+https://git@github.com/fairinternal/xformers.git#egg=xformers\n" - "To install on FAIR Cluster, run \n" - "FORCE_CUDA=1 TORCH_CUDA_ARCH_LIST='6.0;7.0'\\\n" - "pip install -U git+https://git@github.com/fairinternal/xformers.git#egg=xformers\n") - - -def _verify_xformers_internal_compat(): - try: - from xformers.checkpoint_fairinternal import checkpoint, _get_default_policy # noqa - except ImportError: - raise ImportError( - "Francisco's fairinternal xformers is not installed. 
Please install it and try again.\n" - "To install on AWS and Azure, run \n" - "FORCE_CUDA=1 TORCH_CUDA_ARCH_LIST='8.0'\\\n" - "pip install -U git+https://git@github.com/fairinternal/xformers.git#egg=xformers\n" - "To install on FAIR Cluster, run \n" - "FORCE_CUDA=1 TORCH_CUDA_ARCH_LIST='6.0;7.0'\\\n" - "pip install -U git+https://git@github.com/fairinternal/xformers.git#egg=xformers\n") - - -def _is_custom(custom: bool, memory_efficient: bool): - return custom or memory_efficient diff --git a/spaces/runa91/bite_gradio/src/graph_networks/graphcmr/graph_cnn_groundcontact.py b/spaces/runa91/bite_gradio/src/graph_networks/graphcmr/graph_cnn_groundcontact.py deleted file mode 100644 index bc358cef22022b0b16089a5b9d8bed49b112c6d8..0000000000000000000000000000000000000000 --- a/spaces/runa91/bite_gradio/src/graph_networks/graphcmr/graph_cnn_groundcontact.py +++ /dev/null @@ -1,101 +0,0 @@ -""" -code from https://raw.githubusercontent.com/nkolot/GraphCMR/master/models/graph_cnn.py -This file contains the Definition of GraphCNN -GraphCNN includes ResNet50 as a submodule -""" -from __future__ import division - -import torch -import torch.nn as nn - -# from .resnet import resnet50 -import torchvision.models as models - - -import os -import sys -sys.path.insert(0, os.path.join(os.path.dirname(__file__), '..', '..', '..')) -from src.graph_networks.graphcmr.utils_mesh import Mesh -from src.graph_networks.graphcmr.graph_layers import GraphResBlock, GraphLinear - - -class GraphCNN(nn.Module): - - def __init__(self, A, ref_vertices, n_resnet_in, n_resnet_out, num_layers=5, num_channels=512): - super(GraphCNN, self).__init__() - self.A = A - self.ref_vertices = ref_vertices - # self.resnet = resnet50(pretrained=True) - # -> within the GraphCMR network they ignore the last fully connected layer - # replace the first layer - self.resnet = models.resnet34(pretrained=False) - n_in = 3 + 1 - self.resnet.conv1 = nn.Conv2d(n_resnet_in, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False) - # replace the last layer - self.resnet.fc = nn.Linear(512, n_resnet_out) - - - layers = [GraphLinear(3 + n_resnet_out, 2 * num_channels)] # [GraphLinear(3 + 2048, 2 * num_channels)] - layers.append(GraphResBlock(2 * num_channels, num_channels, A)) - for i in range(num_layers): - layers.append(GraphResBlock(num_channels, num_channels, A)) - self.n_out_gc = 2 # two labels per vertex - self.gc = nn.Sequential(GraphResBlock(num_channels, 64, A), - GraphResBlock(64, 32, A), - nn.GroupNorm(32 // 8, 32), - nn.ReLU(inplace=True), - GraphLinear(32, self.n_out_gc)) - self.gcnn = nn.Sequential(*layers) - self.n_out_flatground = 1 - self.flat_ground = nn.Sequential(nn.GroupNorm(num_channels // 8, num_channels), - nn.ReLU(inplace=True), - GraphLinear(num_channels, 1), - nn.ReLU(inplace=True), - nn.Linear(A.shape[0], self.n_out_flatground)) - - def forward(self, image): - """Forward pass - Inputs: - image: size = (B, 3, 256, 256) - Returns: - Regressed (subsampled) non-parametric shape: size = (B, 1723, 3) - Weak-perspective camera: size = (B, 3) - """ - # import pdb; pdb.set_trace() - - batch_size = image.shape[0] - ref_vertices = self.ref_vertices[None, :, :].expand(batch_size, -1, -1) # (bs, 3, 973) - image_resnet = self.resnet(image) # (bs, 512) - image_enc = image_resnet.view(batch_size, -1, 1).expand(-1, -1, ref_vertices.shape[-1]) # (bs, 512, 973) - x = torch.cat([ref_vertices, image_enc], dim=1) - x = self.gcnn(x) # (bs, 512, 973) - ground_contact = self.gc(x) # (bs, 2, 973) - ground_flatness = 
self.flat_ground(x).view(batch_size, self.n_out_flatground) # (bs, 1) - return ground_contact, ground_flatness - - - - -# how to use it: -# -# from src.graph_networks.graphcmr.utils_mesh import Mesh -# -# create Mesh object -# self.mesh = Mesh() -# self.faces = self.mesh.faces.to(self.device) -# -# create GraphCNN -# self.graph_cnn = GraphCNN(self.mesh.adjmat, -# self.mesh.ref_vertices.t(), -# num_channels=self.options.num_channels, -# num_layers=self.options.num_layers -# ).to(self.device) -# ------------ -# -# Feed image in the GraphCNN -# Returns subsampled mesh and camera parameters -# pred_vertices_sub, pred_camera = self.graph_cnn(images) -# -# Upsample mesh in the original size -# pred_vertices = self.mesh.upsample(pred_vertices_sub.transpose(1,2)) -# \ No newline at end of file diff --git a/spaces/rupeshs/fastsdcpu/frontend/gui/image_generator_worker.py b/spaces/rupeshs/fastsdcpu/frontend/gui/image_generator_worker.py deleted file mode 100644 index 3a948365085ece82337309ac91d278e77fa03e40..0000000000000000000000000000000000000000 --- a/spaces/rupeshs/fastsdcpu/frontend/gui/image_generator_worker.py +++ /dev/null @@ -1,37 +0,0 @@ -from PyQt5.QtCore import ( - pyqtSlot, - QRunnable, - pyqtSignal, - pyqtSlot, -) -from PyQt5.QtCore import QObject -import traceback -import sys - - -class WorkerSignals(QObject): - finished = pyqtSignal() - error = pyqtSignal(tuple) - result = pyqtSignal(object) - - -class ImageGeneratorWorker(QRunnable): - def __init__(self, fn, *args, **kwargs): - super(ImageGeneratorWorker, self).__init__() - self.fn = fn - self.args = args - self.kwargs = kwargs - self.signals = WorkerSignals() - - @pyqtSlot() - def run(self): - try: - result = self.fn(*self.args, **self.kwargs) - except: - traceback.print_exc() - exctype, value = sys.exc_info()[:2] - self.signals.error.emit((exctype, value, traceback.format_exc())) - else: - self.signals.result.emit(result) - finally: - self.signals.finished.emit() diff --git a/spaces/ryansilk/quantycs/StreamLit/quantycs/3_Machine_Learning.py b/spaces/ryansilk/quantycs/StreamLit/quantycs/3_Machine_Learning.py deleted file mode 100644 index e0713e1a037c7d315ae09dc5cce261d65bfde307..0000000000000000000000000000000000000000 --- a/spaces/ryansilk/quantycs/StreamLit/quantycs/3_Machine_Learning.py +++ /dev/null @@ -1,141 +0,0 @@ -import streamlit as st -from streamlit_option_menu import option_menu -import pyEX as p -import tensorflow -import keras - -st.set_page_config(page_title="Machine Learning") -st.title('Machine Learning') - -st.sidebar.success("Select Stock Data Below") - -# Initialize the variable to an empty string -my_input = "" - -# Display the option menu -selected = option_menu( - menu_title=None, - options=["LSTM", "Linear Regression", "Multi Linear Regression"], - icons=["pencil-fill", "bar-chart-fill"], - orientation="horizontal", -) - -# Give the user a dropdown menu to select a stock -stock_option = st.sidebar.selectbox('Select one symbol', ('AAPL', 'MSFT', "SPY", 'WMT')) - -if selected == "LSTM": - # Call the API from IEX Cloud based on user input from 'stock_option' variable - token = 'sk_705b5ca7744f49009b2004c682c3a010' - c = p.Client(api_token=token, version='stable') - - # Assign the API call to a pandas dataframe called ticker - ticker = c.chartDF(symbol=stock_option, timeframe='2y')[ - ['open', 'high', 'low', 'close', 'volume']] - - import yfinance as yf - import pandas as pd - import numpy as np - from keras.models import Sequential - from keras.layers import Dense, Dropout, LSTM - from sklearn.preprocessing 
import MinMaxScaler - import matplotlib.pyplot as plt - import tensorflow - - - # Step 2: Fetch the stock data using the Yahoo Finance API - def fetch_stock_data(ticker, start, end): - stock_data = yf.download(ticker, start=start, end=end) - return stock_data - - - # Define the stock and the date range - ticker = 'SPY' - start_date = '2015-01-01' - end_date = '2021-09-30' - - stock_data = fetch_stock_data(ticker, start_date, end_date) - stock_data['Close'].plot(title=f'{ticker} Stock Prices') - - - # Step 3: Preprocess the data - def preprocess_data(data, lookback): - data = data.values.reshape(-1, 1) - scaler = MinMaxScaler(feature_range=(0, 1)) - data_scaled = scaler.fit_transform(data) - - x, y = [], [] - for i in range(lookback, len(data_scaled)): - x.append(data_scaled[i - lookback:i, 0]) - y.append(data_scaled[i, 0]) - x, y = np.array(x), np.array(y) - x = np.reshape(x, (x.shape[0], x.shape[1], 1)) - return x, y, scaler - - - lookback = 60 - x_train, y_train, scaler = preprocess_data(stock_data['Close'], lookback) - - - # Step 4: Create and train an LSTM model - def create_lstm_model(input_shape): - model = Sequential() - model.add(LSTM(units=50, return_sequences=True, input_shape=input_shape)) - model.add(Dropout(0.2)) - model.add(LSTM(units=50, return_sequences=True)) - model.add(Dropout(0.2)) - model.add(LSTM(units=50)) - model.add(Dropout(0.2)) - model.add(Dense(units=1)) - model.compile(optimizer='adam', loss='mean_squared_error') - return model - - - model = create_lstm_model((x_train.shape[1], 1)) - model.summary() - - # Train the model - model.fit(x_train, y_train, epochs=10, batch_size=32) - - - # Step 5: Make predictions and evaluate the model - def predict_stock_prices(model, data, scaler): - predicted_stock_prices = model.predict(data) - predicted_stock_prices = scaler.inverse_transform(predicted_stock_prices) - return predicted_stock_prices - - - predictions = predict_stock_prices(model, x_train, scaler) - - # Plot the results - plt.figure(figsize=(16, 8)) - plt.plot(stock_data.index[lookback:], stock_data['Close'][lookback:], color='blue', label='Actual Stock Price') - plt.plot(stock_data.index[lookback:], predictions, color='red', label='Predicted Stock Price') - plt.title(f'{ticker} Stock Price Prediction') - plt.xlabel('Date') - plt.ylabel('Stock Price') - plt.legend() - plt.show() - - - # Step 6: Predict the next 30 days of stock prices - def predict_next_30_days(model, data, scaler, lookback): - predictions = [] - last_60_days = data[-lookback:].reshape(-1) - - for _ in range(30): - input_data = last_60_days[-lookback:].reshape((1, lookback, 1)) - predicted_price = model.predict(input_data) - predictions.append(predicted_price[0][0]) - - last_60_days = np.append(last_60_days, predicted_price) - - return scaler.inverse_transform(np.array(predictions).reshape(-1, 1)) - - - next_30_days = predict_next_30_days(model, x_train, scaler, lookback) - - # Display the next 30 days of stock prediction prices in a pandas DataFrame - future_dates = pd.date_range(stock_data.index[-1] + pd.DateOffset(days=1), periods=30, freq='D') - predicted_prices_df = pd.DataFrame(next_30_days, columns=['Predicted Price'], index=future_dates) - print(predicted_prices_df) - diff --git a/spaces/safi842/FashionGen/netdissect/sampler.py b/spaces/safi842/FashionGen/netdissect/sampler.py deleted file mode 100644 index 72f1b46da117403c7f6ddcc1877bd9d70ded962b..0000000000000000000000000000000000000000 --- a/spaces/safi842/FashionGen/netdissect/sampler.py +++ /dev/null @@ -1,134 +0,0 @@ -''' -A sampler 
is just a list of integer listing the indexes of the -inputs in a data set to sample. For reproducibility, the -FixedRandomSubsetSampler uses a seeded prng to produce the same -sequence always. FixedSubsetSampler is just a wrapper for an -explicit list of integers. - -coordinate_sample solves another sampling problem: when testing -convolutional outputs, we can reduce data explosing by sampling -random points of the feature map rather than the entire feature map. -coordinate_sample does this in a deterministic way that is also -resolution-independent. -''' - -import numpy -import random -from torch.utils.data.sampler import Sampler - -class FixedSubsetSampler(Sampler): - """Represents a fixed sequence of data set indices. - Subsets can be created by specifying a subset of output indexes. - """ - def __init__(self, samples): - self.samples = samples - - def __iter__(self): - return iter(self.samples) - - def __len__(self): - return len(self.samples) - - def __getitem__(self, key): - return self.samples[key] - - def subset(self, new_subset): - return FixedSubsetSampler(self.dereference(new_subset)) - - def dereference(self, indices): - ''' - Translate output sample indices (small numbers indexing the sample) - to input sample indices (larger number indexing the original full set) - ''' - return [self.samples[i] for i in indices] - - -class FixedRandomSubsetSampler(FixedSubsetSampler): - """Samples a fixed number of samples from the dataset, deterministically. - Arguments: - data_source, - sample_size, - seed (optional) - """ - def __init__(self, data_source, start=None, end=None, seed=1): - rng = random.Random(seed) - shuffled = list(range(len(data_source))) - rng.shuffle(shuffled) - self.data_source = data_source - super(FixedRandomSubsetSampler, self).__init__(shuffled[start:end]) - - def class_subset(self, class_filter): - ''' - Returns only the subset matching the given rule. - ''' - if isinstance(class_filter, int): - rule = lambda d: d[1] == class_filter - else: - rule = class_filter - return self.subset([i for i, j in enumerate(self.samples) - if rule(self.data_source[j])]) - -def coordinate_sample(shape, sample_size, seeds, grid=13, seed=1, flat=False): - ''' - Returns a (end-start) sets of sample_size grid points within - the shape given. If the shape dimensions are a multiple of 'grid', - then sampled points within the same row will never be duplicated. - ''' - if flat: - sampind = numpy.zeros((len(seeds), sample_size), dtype=int) - else: - sampind = numpy.zeros((len(seeds), 2, sample_size), dtype=int) - assert sample_size <= grid - for j, seed in enumerate(seeds): - rng = numpy.random.RandomState(seed) - # Shuffle the 169 random grid squares, and pick :sample_size. - square_count = grid ** len(shape) - square = numpy.stack(numpy.unravel_index( - rng.choice(square_count, square_count)[:sample_size], - (grid,) * len(shape))) - # Then add a random offset to each x, y and put in the range [0...1) - # Notice this selects the same locations regardless of resolution. - uniform = (square + rng.uniform(size=square.shape)) / grid - # TODO: support affine scaling so that we can align receptive field - # centers exactly when sampling neurons in different layers. - coords = (uniform * numpy.array(shape)[:,None]).astype(int) - # Now take sample_size without replacement. We do this in a way - # such that if sample_size is decreased or increased up to 'grid', - # the selected points become a subset, not totally different points. 
- if flat: - sampind[j] = numpy.ravel_multi_index(coords, dims=shape) - else: - sampind[j] = coords - return sampind - -if __name__ == '__main__': - from numpy.testing import assert_almost_equal - # Test that coordinate_sample is deterministic, in-range, and scalable. - assert_almost_equal(coordinate_sample((26, 26), 10, range(101, 102)), - [[[14, 0, 12, 11, 8, 13, 11, 20, 7, 20], - [ 9, 22, 7, 11, 23, 18, 21, 15, 2, 5]]]) - assert_almost_equal(coordinate_sample((13, 13), 10, range(101, 102)), - [[[ 7, 0, 6, 5, 4, 6, 5, 10, 3, 20 // 2], - [ 4, 11, 3, 5, 11, 9, 10, 7, 1, 5 // 2]]]) - assert_almost_equal(coordinate_sample((13, 13), 10, range(100, 102), - flat=True), - [[ 8, 24, 67, 103, 87, 79, 138, 94, 98, 53], - [ 95, 11, 81, 70, 63, 87, 75, 137, 40, 2+10*13]]) - assert_almost_equal(coordinate_sample((13, 13), 10, range(101, 103), - flat=True), - [[ 95, 11, 81, 70, 63, 87, 75, 137, 40, 132], - [ 0, 78, 114, 111, 66, 45, 72, 73, 79, 135]]) - assert_almost_equal(coordinate_sample((26, 26), 10, range(101, 102), - flat=True), - [[373, 22, 319, 297, 231, 356, 307, 535, 184, 5+20*26]]) - # Test FixedRandomSubsetSampler - fss = FixedRandomSubsetSampler(range(10)) - assert len(fss) == 10 - assert_almost_equal(list(fss), [8, 0, 3, 4, 5, 2, 9, 6, 7, 1]) - fss = FixedRandomSubsetSampler(range(10), 3, 8) - assert len(fss) == 5 - assert_almost_equal(list(fss), [4, 5, 2, 9, 6]) - fss = FixedRandomSubsetSampler([(i, i % 3) for i in range(10)], - class_filter=1) - assert len(fss) == 3 - assert_almost_equal(list(fss), [4, 7, 1]) diff --git a/spaces/sandrocalzada/DemoHF/README.md b/spaces/sandrocalzada/DemoHF/README.md deleted file mode 100644 index d2436dc20d09985c94c2905829e8fbee1d2f559d..0000000000000000000000000000000000000000 --- a/spaces/sandrocalzada/DemoHF/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: DemoHF -emoji: ⚡ -colorFrom: red -colorTo: indigo -sdk: streamlit -app_file: app.py -pinned: false -license: gpl-3.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/sandy9808/EleutherAI-gpt-j-6B/app.py b/spaces/sandy9808/EleutherAI-gpt-j-6B/app.py deleted file mode 100644 index b4ab9549994514c1b64784efe8b81534bb3fde6e..0000000000000000000000000000000000000000 --- a/spaces/sandy9808/EleutherAI-gpt-j-6B/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/EleutherAI/gpt-j-6B").launch() \ No newline at end of file diff --git a/spaces/scedlatioru/img-to-music/example/ADOBE MASTER COLLECTION CS6 Crack With Serial Number Download.md b/spaces/scedlatioru/img-to-music/example/ADOBE MASTER COLLECTION CS6 Crack With Serial Number Download.md deleted file mode 100644 index c3a6b6198ac34032d0695bf10f0ed8b1ea550b62..0000000000000000000000000000000000000000 --- a/spaces/scedlatioru/img-to-music/example/ADOBE MASTER COLLECTION CS6 Crack With Serial Number Download.md +++ /dev/null @@ -1,6 +0,0 @@ -

        ADOBE MASTER COLLECTION CS6 Crack With Serial Number Download


        DOWNLOAD ⇒⇒⇒ https://gohhs.com/2uEz6M



        - -CS6. ALL.rar on the account of user Almeusz • folder - Crack -Patch-Keygen • Date ... -Patch-Keygen / Activator. CS6. ALL.rar. Download: Activator. CS6. ALL.rar ... Activation: * Adobe Photoshop CS6 - Extended (32-Bit & 64-Bit) * Adobe ... 4d29de3e1b
        -
        -
        -

        diff --git a/spaces/scedlatioru/img-to-music/example/Like Water For Chocolate Novel Download Pdf.md b/spaces/scedlatioru/img-to-music/example/Like Water For Chocolate Novel Download Pdf.md deleted file mode 100644 index 70fc44e964b9f5f2cec847787d6cf1c50921ba13..0000000000000000000000000000000000000000 --- a/spaces/scedlatioru/img-to-music/example/Like Water For Chocolate Novel Download Pdf.md +++ /dev/null @@ -1,6 +0,0 @@ -

        Like Water For Chocolate Novel Download Pdf


        DOWNLOAD ○○○ https://gohhs.com/2uEAaM



        - -January 1, 2019 - Laura Esquivel depicted this practice in her novel Like Water to Chocolate. It conveys the connection between food and emotions. and home remedies. "Like Water to Chocolate" and "Like Water to Chocolate" share the same basic theme - something that makes food, something that makes us feel better when we eat, something that makes us feel good. loved ones, something that makes us feel alive. In both novels, it's chocolate - and you can think of other items that do the same thing. But what dish do you think has the same effect? 8a78ff9644
        -
        -
        -

        diff --git a/spaces/scedlatioru/img-to-music/example/Rename Cortana In Windows 10 With MyCortana App.md b/spaces/scedlatioru/img-to-music/example/Rename Cortana In Windows 10 With MyCortana App.md deleted file mode 100644 index debd66e0b46f47e230994d59b15fa497db739067..0000000000000000000000000000000000000000 --- a/spaces/scedlatioru/img-to-music/example/Rename Cortana In Windows 10 With MyCortana App.md +++ /dev/null @@ -1,6 +0,0 @@ -

        Rename Cortana in Windows 10 with MyCortana app


        Downloadhttps://gohhs.com/2uEAqI



        - -Botana app. Jun 01, 2017 · Download MyCortana for free. Rename Cortana in Windows 10. Latest Version - 2.0.0.4 -Added support for Windows 10 build 15063. 4d29de3e1b
        -
        -
        -

        diff --git a/spaces/seduerr/text_analytics/text_analytics/indices/lexical_diversity_indices.py b/spaces/seduerr/text_analytics/text_analytics/indices/lexical_diversity_indices.py deleted file mode 100644 index 3f2c0f92e6849d35542a74b98d21806820a6430c..0000000000000000000000000000000000000000 --- a/spaces/seduerr/text_analytics/text_analytics/indices/lexical_diversity_indices.py +++ /dev/null @@ -1,37 +0,0 @@ -import multiprocessing -import spacy -import string - -from text_analytics.constants import ACCEPTED_LANGUAGES -from text_analytics.utils.utils import is_content_word -from text_analytics.utils.utils import is_word -from text_analytics.utils.utils import split_text_into_paragraphs - -class LexicalDiversityIndices: - def __init__(self, nlp, language: str='en') -> None: - self.language = language - self._nlp = nlp - - def get_type_token_ratio_between_all_words(self, text: str, workers=-1) -> float: - paragraphs = split_text_into_paragraphs(text) - threads = 1 - tokens = [] - disable_pipeline = [] - - tokens = [token.text.lower() - for doc in self._nlp.pipe(paragraphs, batch_size=threads, disable=disable_pipeline, n_process=threads) - for token in doc - if is_word(token)] - - return 0 if len(tokens) == 0 else len(set(tokens)) / len(tokens) - - def get_type_token_ratio_of_content_words(self, text: str, workers=-1) -> float: - paragraphs = split_text_into_paragraphs(text) - threads = 1 - tokens = [] - disable_pipeline = [] - tokens = [token.text.lower() - for doc in self._nlp.pipe(paragraphs, batch_size=threads, disable=disable_pipeline, n_process=threads) - for token in doc - if is_content_word(token)] - return 0 if len(tokens) == 0 else len(set(tokens)) / len(tokens) diff --git a/spaces/segments-tobias/conex/espnet/bin/asr_enhance.py b/spaces/segments-tobias/conex/espnet/bin/asr_enhance.py deleted file mode 100644 index 98f0d693caa47fd752c0a9cd8e577dacf47f74b9..0000000000000000000000000000000000000000 --- a/spaces/segments-tobias/conex/espnet/bin/asr_enhance.py +++ /dev/null @@ -1,191 +0,0 @@ -#!/usr/bin/env python3 -import configargparse -from distutils.util import strtobool -import logging -import os -import random -import sys - -import numpy as np - -from espnet.asr.pytorch_backend.asr import enhance - - -# NOTE: you need this func to generate our sphinx doc -def get_parser(): - parser = configargparse.ArgumentParser( - description="Enhance noisy speech for speech recognition", - config_file_parser_class=configargparse.YAMLConfigFileParser, - formatter_class=configargparse.ArgumentDefaultsHelpFormatter, - ) - # general configuration - parser.add("--config", is_config_file=True, help="config file path") - parser.add( - "--config2", - is_config_file=True, - help="second config file path that overwrites the settings in `--config`.", - ) - parser.add( - "--config3", - is_config_file=True, - help="third config file path that overwrites the settings " - "in `--config` and `--config2`.", - ) - - parser.add_argument("--ngpu", default=0, type=int, help="Number of GPUs") - parser.add_argument( - "--backend", - default="chainer", - type=str, - choices=["chainer", "pytorch"], - help="Backend library", - ) - parser.add_argument("--debugmode", default=1, type=int, help="Debugmode") - parser.add_argument("--seed", default=1, type=int, help="Random seed") - parser.add_argument("--verbose", "-V", default=1, type=int, help="Verbose option") - parser.add_argument( - "--batchsize", - default=1, - type=int, - help="Batch size for beam search (0: means no batch processing)", - ) - 
parser.add_argument( - "--preprocess-conf", - type=str, - default=None, - help="The configuration file for the pre-processing", - ) - # task related - parser.add_argument( - "--recog-json", type=str, help="Filename of recognition data (json)" - ) - # model (parameter) related - parser.add_argument( - "--model", type=str, required=True, help="Model file parameters to read" - ) - parser.add_argument( - "--model-conf", type=str, default=None, help="Model config file" - ) - - # Outputs configuration - parser.add_argument( - "--enh-wspecifier", - type=str, - default=None, - help="Specify the output way for enhanced speech." - "e.g. ark,scp:outdir,wav.scp", - ) - parser.add_argument( - "--enh-filetype", - type=str, - default="sound", - choices=["mat", "hdf5", "sound.hdf5", "sound"], - help="Specify the file format for enhanced speech. " - '"mat" is the matrix format in kaldi', - ) - parser.add_argument("--fs", type=int, default=16000, help="The sample frequency") - parser.add_argument( - "--keep-length", - type=strtobool, - default=True, - help="Adjust the output length to match " "with the input for enhanced speech", - ) - parser.add_argument( - "--image-dir", type=str, default=None, help="The directory saving the images." - ) - parser.add_argument( - "--num-images", - type=int, - default=20, - help="The number of images files to be saved. " - "If negative, all samples are to be saved.", - ) - - # IStft - parser.add_argument( - "--apply-istft", - type=strtobool, - default=True, - help="Apply istft to the output from the network", - ) - parser.add_argument( - "--istft-win-length", - type=int, - default=512, - help="The window length for istft. " - "This option is ignored " - "if stft is found in the preprocess-conf", - ) - parser.add_argument( - "--istft-n-shift", - type=str, - default=256, - help="The window type for istft. " - "This option is ignored " - "if stft is found in the preprocess-conf", - ) - parser.add_argument( - "--istft-window", - type=str, - default="hann", - help="The window type for istft. 
" - "This option is ignored " - "if stft is found in the preprocess-conf", - ) - return parser - - -def main(args): - parser = get_parser() - args = parser.parse_args(args) - - # logging info - if args.verbose == 1: - logging.basicConfig( - level=logging.INFO, - format="%(asctime)s (%(module)s:%(lineno)d) %(levelname)s: %(message)s", - ) - elif args.verbose == 2: - logging.basicConfig( - level=logging.DEBUG, - format="%(asctime)s (%(module)s:%(lineno)d) %(levelname)s: %(message)s", - ) - else: - logging.basicConfig( - level=logging.WARN, - format="%(asctime)s (%(module)s:%(lineno)d) %(levelname)s: %(message)s", - ) - logging.warning("Skip DEBUG/INFO messages") - - # check CUDA_VISIBLE_DEVICES - if args.ngpu > 0: - cvd = os.environ.get("CUDA_VISIBLE_DEVICES") - if cvd is None: - logging.warning("CUDA_VISIBLE_DEVICES is not set.") - elif args.ngpu != len(cvd.split(",")): - logging.error("#gpus is not matched with CUDA_VISIBLE_DEVICES.") - sys.exit(1) - - # TODO(kamo): support of multiple GPUs - if args.ngpu > 1: - logging.error("The program only supports ngpu=1.") - sys.exit(1) - - # display PYTHONPATH - logging.info("python path = " + os.environ.get("PYTHONPATH", "(None)")) - - # seed setting - random.seed(args.seed) - np.random.seed(args.seed) - logging.info("set random seed = %d" % args.seed) - - # recog - logging.info("backend = " + args.backend) - if args.backend == "pytorch": - enhance(args) - else: - raise ValueError("Only pytorch is supported.") - - -if __name__ == "__main__": - main(sys.argv[1:]) diff --git a/spaces/segments-tobias/conex/espnet/nets/pytorch_backend/streaming/window.py b/spaces/segments-tobias/conex/espnet/nets/pytorch_backend/streaming/window.py deleted file mode 100644 index 5565c232eb6feebfd0595fa46d07c0ecfc32c3dc..0000000000000000000000000000000000000000 --- a/spaces/segments-tobias/conex/espnet/nets/pytorch_backend/streaming/window.py +++ /dev/null @@ -1,81 +0,0 @@ -import torch - - -# TODO(pzelasko): Currently allows half-streaming only; -# needs streaming attention decoder implementation -class WindowStreamingE2E(object): - """WindowStreamingE2E constructor. - - :param E2E e2e: E2E ASR object - :param recog_args: arguments for "recognize" method of E2E - """ - - def __init__(self, e2e, recog_args, rnnlm=None): - self._e2e = e2e - self._recog_args = recog_args - self._char_list = e2e.char_list - self._rnnlm = rnnlm - - self._e2e.eval() - - self._offset = 0 - self._previous_encoder_recurrent_state = None - self._encoder_states = [] - self._ctc_posteriors = [] - self._last_recognition = None - - assert ( - self._recog_args.ctc_weight > 0.0 - ), "WindowStreamingE2E works only with combined CTC and attention decoders." 
- - def accept_input(self, x): - """Call this method each time a new batch of input is available.""" - - h, ilen = self._e2e.subsample_frames(x) - - # Streaming encoder - h, _, self._previous_encoder_recurrent_state = self._e2e.enc( - h.unsqueeze(0), ilen, self._previous_encoder_recurrent_state - ) - self._encoder_states.append(h.squeeze(0)) - - # CTC posteriors for the incoming audio - self._ctc_posteriors.append(self._e2e.ctc.log_softmax(h).squeeze(0)) - - def _input_window_for_decoder(self, use_all=False): - if use_all: - return ( - torch.cat(self._encoder_states, dim=0), - torch.cat(self._ctc_posteriors, dim=0), - ) - - def select_unprocessed_windows(window_tensors): - last_offset = self._offset - offset_traversed = 0 - selected_windows = [] - for es in window_tensors: - if offset_traversed > last_offset: - selected_windows.append(es) - continue - offset_traversed += es.size(1) - return torch.cat(selected_windows, dim=0) - - return ( - select_unprocessed_windows(self._encoder_states), - select_unprocessed_windows(self._ctc_posteriors), - ) - - def decode_with_attention_offline(self): - """Run the attention decoder offline. - - Works even if the previous layers (encoder and CTC decoder) were - being run in the online mode. - This method should be run after all the audio has been consumed. - This is used mostly to compare the results between offline - and online implementation of the previous layers. - """ - h, lpz = self._input_window_for_decoder(use_all=True) - - return self._e2e.dec.recognize_beam( - h, lpz, self._recog_args, self._char_list, self._rnnlm - ) diff --git a/spaces/segments-tobias/conex/espnet/nets/pytorch_backend/tacotron2/encoder.py b/spaces/segments-tobias/conex/espnet/nets/pytorch_backend/tacotron2/encoder.py deleted file mode 100644 index fee4b1c555205ba7ef0176cc033743d9a360bafa..0000000000000000000000000000000000000000 --- a/spaces/segments-tobias/conex/espnet/nets/pytorch_backend/tacotron2/encoder.py +++ /dev/null @@ -1,176 +0,0 @@ -#!/usr/bin/env python3 -# -*- coding: utf-8 -*- - -# Copyright 2019 Nagoya University (Tomoki Hayashi) -# Apache 2.0 (http://www.apache.org/licenses/LICENSE-2.0) - -"""Tacotron2 encoder related modules.""" - -import six - -import torch - -from torch.nn.utils.rnn import pack_padded_sequence -from torch.nn.utils.rnn import pad_packed_sequence - - -def encoder_init(m): - """Initialize encoder parameters.""" - if isinstance(m, torch.nn.Conv1d): - torch.nn.init.xavier_uniform_(m.weight, torch.nn.init.calculate_gain("relu")) - - -class Encoder(torch.nn.Module): - """Encoder module of Spectrogram prediction network. - - This is a module of encoder of Spectrogram prediction network in Tacotron2, - which described in `Natural TTS Synthesis by Conditioning WaveNet on Mel - Spectrogram Predictions`_. This is the encoder which converts either a sequence - of characters or acoustic features into the sequence of hidden states. - - .. _`Natural TTS Synthesis by Conditioning WaveNet on Mel Spectrogram Predictions`: - https://arxiv.org/abs/1712.05884 - - """ - - def __init__( - self, - idim, - input_layer="embed", - embed_dim=512, - elayers=1, - eunits=512, - econv_layers=3, - econv_chans=512, - econv_filts=5, - use_batch_norm=True, - use_residual=False, - dropout_rate=0.5, - padding_idx=0, - ): - """Initialize Tacotron2 encoder module. - - Args: - idim (int) Dimension of the inputs. - input_layer (str): Input layer type. - embed_dim (int, optional) Dimension of character embedding. - elayers (int, optional) The number of encoder blstm layers. 
- eunits (int, optional) The number of encoder blstm units. - econv_layers (int, optional) The number of encoder conv layers. - econv_filts (int, optional) The number of encoder conv filter size. - econv_chans (int, optional) The number of encoder conv filter channels. - use_batch_norm (bool, optional) Whether to use batch normalization. - use_residual (bool, optional) Whether to use residual connection. - dropout_rate (float, optional) Dropout rate. - - """ - super(Encoder, self).__init__() - # store the hyperparameters - self.idim = idim - self.use_residual = use_residual - - # define network layer modules - if input_layer == "linear": - self.embed = torch.nn.Linear(idim, econv_chans) - elif input_layer == "embed": - self.embed = torch.nn.Embedding(idim, embed_dim, padding_idx=padding_idx) - else: - raise ValueError("unknown input_layer: " + input_layer) - - if econv_layers > 0: - self.convs = torch.nn.ModuleList() - for layer in six.moves.range(econv_layers): - ichans = ( - embed_dim if layer == 0 and input_layer == "embed" else econv_chans - ) - if use_batch_norm: - self.convs += [ - torch.nn.Sequential( - torch.nn.Conv1d( - ichans, - econv_chans, - econv_filts, - stride=1, - padding=(econv_filts - 1) // 2, - bias=False, - ), - torch.nn.BatchNorm1d(econv_chans), - torch.nn.ReLU(), - torch.nn.Dropout(dropout_rate), - ) - ] - else: - self.convs += [ - torch.nn.Sequential( - torch.nn.Conv1d( - ichans, - econv_chans, - econv_filts, - stride=1, - padding=(econv_filts - 1) // 2, - bias=False, - ), - torch.nn.ReLU(), - torch.nn.Dropout(dropout_rate), - ) - ] - else: - self.convs = None - if elayers > 0: - iunits = econv_chans if econv_layers != 0 else embed_dim - self.blstm = torch.nn.LSTM( - iunits, eunits // 2, elayers, batch_first=True, bidirectional=True - ) - else: - self.blstm = None - - # initialize - self.apply(encoder_init) - - def forward(self, xs, ilens=None): - """Calculate forward propagation. - - Args: - xs (Tensor): Batch of the padded sequence. Either character ids (B, Tmax) - or acoustic feature (B, Tmax, idim * encoder_reduction_factor). Padded - value should be 0. - ilens (LongTensor): Batch of lengths of each input batch (B,). - - Returns: - Tensor: Batch of the sequences of encoder states(B, Tmax, eunits). - LongTensor: Batch of lengths of each sequence (B,) - - """ - xs = self.embed(xs).transpose(1, 2) - if self.convs is not None: - for i in six.moves.range(len(self.convs)): - if self.use_residual: - xs += self.convs[i](xs) - else: - xs = self.convs[i](xs) - if self.blstm is None: - return xs.transpose(1, 2) - if not isinstance(ilens, torch.Tensor): - ilens = torch.tensor(ilens) - xs = pack_padded_sequence(xs.transpose(1, 2), ilens.cpu(), batch_first=True) - self.blstm.flatten_parameters() - xs, _ = self.blstm(xs) # (B, Tmax, C) - xs, hlens = pad_packed_sequence(xs, batch_first=True) - - return xs, hlens - - def inference(self, x): - """Inference. - - Args: - x (Tensor): The sequeunce of character ids (T,) - or acoustic feature (T, idim * encoder_reduction_factor). - - Returns: - Tensor: The sequences of encoder states(T, eunits). 
- - """ - xs = x.unsqueeze(0) - ilens = torch.tensor([x.size(0)]) - - return self.forward(xs, ilens)[0][0] diff --git a/spaces/segments-tobias/conex/espnet2/main_funcs/collect_stats.py b/spaces/segments-tobias/conex/espnet2/main_funcs/collect_stats.py deleted file mode 100644 index 9916ae650d75e82cde716a4db4c7cfbe6d6b6838..0000000000000000000000000000000000000000 --- a/spaces/segments-tobias/conex/espnet2/main_funcs/collect_stats.py +++ /dev/null @@ -1,126 +0,0 @@ -from collections import defaultdict -import logging -from pathlib import Path -from typing import Dict -from typing import Iterable -from typing import List -from typing import Optional -from typing import Tuple - -import numpy as np -import torch -from torch.nn.parallel import data_parallel -from torch.utils.data import DataLoader -from typeguard import check_argument_types - -from espnet2.fileio.datadir_writer import DatadirWriter -from espnet2.fileio.npy_scp import NpyScpWriter -from espnet2.torch_utils.device_funcs import to_device -from espnet2.torch_utils.forward_adaptor import ForwardAdaptor -from espnet2.train.abs_espnet_model import AbsESPnetModel - - -@torch.no_grad() -def collect_stats( - model: AbsESPnetModel, - train_iter: DataLoader and Iterable[Tuple[List[str], Dict[str, torch.Tensor]]], - valid_iter: DataLoader and Iterable[Tuple[List[str], Dict[str, torch.Tensor]]], - output_dir: Path, - ngpu: Optional[int], - log_interval: Optional[int], - write_collected_feats: bool, -) -> None: - """Perform on collect_stats mode. - - Running for deriving the shape information from data - and gathering statistics. - This method is used before executing train(). - - """ - assert check_argument_types() - - npy_scp_writers = {} - for itr, mode in zip([train_iter, valid_iter], ["train", "valid"]): - if log_interval is None: - try: - log_interval = max(len(itr) // 20, 10) - except TypeError: - log_interval = 100 - - sum_dict = defaultdict(lambda: 0) - sq_dict = defaultdict(lambda: 0) - count_dict = defaultdict(lambda: 0) - - with DatadirWriter(output_dir / mode) as datadir_writer: - for iiter, (keys, batch) in enumerate(itr, 1): - batch = to_device(batch, "cuda" if ngpu > 0 else "cpu") - - # 1. Write shape file - for name in batch: - if name.endswith("_lengths"): - continue - for i, (key, data) in enumerate(zip(keys, batch[name])): - if f"{name}_lengths" in batch: - lg = int(batch[f"{name}_lengths"][i]) - data = data[:lg] - datadir_writer[f"{name}_shape"][key] = ",".join( - map(str, data.shape) - ) - - # 2. Extract feats - if ngpu <= 1: - data = model.collect_feats(**batch) - else: - # Note that data_parallel can parallelize only "forward()" - data = data_parallel( - ForwardAdaptor(model, "collect_feats"), - (), - range(ngpu), - module_kwargs=batch, - ) - - # 3. Calculate sum and square sum - for key, v in data.items(): - for i, (uttid, seq) in enumerate(zip(keys, v.cpu().numpy())): - # Truncate zero-padding region - if f"{key}_lengths" in data: - length = data[f"{key}_lengths"][i] - # seq: (Length, Dim, ...) - seq = seq[:length] - else: - # seq: (Dim, ...) -> (1, Dim, ...) - seq = seq[None] - # Accumulate value, its square, and count - sum_dict[key] += seq.sum(0) - sq_dict[key] += (seq ** 2).sum(0) - count_dict[key] += len(seq) - - # 4. [Option] Write derived features as npy format file. 
- if write_collected_feats: - # Instantiate NpyScpWriter for the first iteration - if (key, mode) not in npy_scp_writers: - p = output_dir / mode / "collect_feats" - npy_scp_writers[(key, mode)] = NpyScpWriter( - p / f"data_{key}", p / f"{key}.scp" - ) - # Save array as npy file - npy_scp_writers[(key, mode)][uttid] = seq - - if iiter % log_interval == 0: - logging.info(f"Niter: {iiter}") - - for key in sum_dict: - np.savez( - output_dir / mode / f"{key}_stats.npz", - count=count_dict[key], - sum=sum_dict[key], - sum_square=sq_dict[key], - ) - - # batch_keys and stats_keys are used by aggregate_stats_dirs.py - with (output_dir / mode / "batch_keys").open("w", encoding="utf-8") as f: - f.write( - "\n".join(filter(lambda x: not x.endswith("_lengths"), batch)) + "\n" - ) - with (output_dir / mode / "stats_keys").open("w", encoding="utf-8") as f: - f.write("\n".join(sum_dict) + "\n") diff --git a/spaces/shawarmabytes/stream-your-emotions/README.md b/spaces/shawarmabytes/stream-your-emotions/README.md deleted file mode 100644 index ad8bf40ed7b4cef7610b963f0b62e32d19cb66b9..0000000000000000000000000000000000000000 --- a/spaces/shawarmabytes/stream-your-emotions/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: emotion streamer -emoji: 🎧 -colorFrom: yellow -colorTo: indigo -sdk: streamlit -sdk_version: 1.2.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/shencc/gpt/config.py b/spaces/shencc/gpt/config.py deleted file mode 100644 index 74932d4bfb237cf2813bbfa80f53ef391dda27aa..0000000000000000000000000000000000000000 --- a/spaces/shencc/gpt/config.py +++ /dev/null @@ -1,77 +0,0 @@ -# [step 1]>> 例如: API_KEY = "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" (此key无效) -API_KEY = "sk-此处填API密钥" # 可同时填写多个API-KEY,用英文逗号分割,例如API_KEY = "sk-openaikey1,sk-openaikey2,fkxxxx-api2dkey1,fkxxxx-api2dkey2" - -# [step 2]>> 改为True应用代理,如果直接在海外服务器部署,此处不修改 -USE_PROXY = False -if USE_PROXY: - # 填写格式是 [协议]:// [地址] :[端口],填写之前不要忘记把USE_PROXY改成True,如果直接在海外服务器部署,此处不修改 - # 例如 "socks5h://localhost:11284" - # [协议] 常见协议无非socks5h/http; 例如 v2**y 和 ss* 的默认本地协议是socks5h; 而cl**h 的默认本地协议是http - # [地址] 懂的都懂,不懂就填localhost或者127.0.0.1肯定错不了(localhost意思是代理软件安装在本机上) - # [端口] 在代理软件的设置里找。虽然不同的代理软件界面不一样,但端口号都T应该在最显眼的位置上 - - # 代理网络的地址,打开你的*学*网软件查看代理的协议(socks5/http)、地址(localhost)和端口(11284) - proxies = { - # [协议]:// [地址] :[端口] - "http": "http://127.0.0.1:7890", # 再例如 "http": "http://127.0.0.1:7890", - "https": "http://127.0.0.1:7890", # 再例如 "https": "http://127.0.0.1:7890", - } -else: - proxies = None - -# [step 3]>> 多线程函数插件中,默认允许多少路线程同时访问OpenAI。Free trial users的限制是每分钟3次,Pay-as-you-go users的限制是每分钟3500次 -# 一言以蔽之:免费用户填3,OpenAI绑了信用卡的用户可以填 16 或者更高。提高限制请查询:https://platform.openai.com/docs/guides/rate-limits/overview -DEFAULT_WORKER_NUM = 3 - - -# [step 4]>> 以下配置可以优化体验,但大部分场合下并不需要修改 -# 对话窗的高度 -CHATBOT_HEIGHT = 1115 - -# 代码高亮 -CODE_HIGHLIGHT = True - -# 窗口布局 -LAYOUT = "LEFT-RIGHT" # "LEFT-RIGHT"(左右布局) # "TOP-DOWN"(上下布局) -DARK_MODE = True # "LEFT-RIGHT"(左右布局) # "TOP-DOWN"(上下布局) - -# 发送请求到OpenAI后,等待多久判定为超时 -TIMEOUT_SECONDS = 30 - -# 网页的端口, -1代表随机端口 -WEB_PORT = -1 - -# 如果OpenAI不响应(网络卡顿、代理失败、KEY失效),重试的次数限制 -MAX_RETRY = 2 - -# OpenAI模型选择是(gpt4现在只对申请成功的人开放) -LLM_MODEL = "gpt-3.5-turbo" # 可选 "chatglm" -AVAIL_LLM_MODELS = ["gpt-3.5-turbo", "gpt-4", "api2d-gpt-4", "api2d-gpt-3.5-turbo"] - -# 本地LLM模型如ChatGLM的执行方式 CPU/GPU -LOCAL_MODEL_DEVICE = "cpu" # 可选 "cuda" - -# 设置gradio的并行线程数(不需要修改) -CONCURRENT_COUNT = 100 - -# 加一个看板娘装饰 -ADD_WAIFU = True - -# 
设置用户名和密码(不需要修改)(相关功能不稳定,与gradio版本和网络都相关,如果本地使用不建议加这个) -# [("username", "password"), ("username2", "password2"), ...] -AUTHENTICATION = [] - -# 重新URL重新定向,实现更换API_URL的作用(常规情况下,不要修改!!) -# (高危设置!通过修改此设置,您将把您的API-KEY和对话隐私完全暴露给您设定的中间人!) -# 格式 {"https://api.openai.com/v1/chat/completions": "在这里填写重定向的api.openai.com的URL"} -# 例如 API_URL_REDIRECT = {"https://api.openai.com/v1/chat/completions": "https://ai.open.com/api/conversation"} -API_URL_REDIRECT = {} - -# 如果需要在二级路径下运行(常规情况下,不要修改!!)(需要配合修改main.py才能生效!) -CUSTOM_PATH = "/" - -# 如果需要使用newbing,把newbing的长长的cookie放到这里 -NEWBING_STYLE = "creative" # ["creative", "balanced", "precise"] -NEWBING_COOKIES = """ -your bing cookies here -""" diff --git a/spaces/shenfangqi/Retrieval-based-Voice-Conversion-WebUI/Retrieval-based-Voice-Conversion-WebUI/infer/infer-pm-index256.py b/spaces/shenfangqi/Retrieval-based-Voice-Conversion-WebUI/Retrieval-based-Voice-Conversion-WebUI/infer/infer-pm-index256.py deleted file mode 100644 index 66e38d49071994e9c850f7d75d0a3b2e5c79b0da..0000000000000000000000000000000000000000 --- a/spaces/shenfangqi/Retrieval-based-Voice-Conversion-WebUI/Retrieval-based-Voice-Conversion-WebUI/infer/infer-pm-index256.py +++ /dev/null @@ -1,199 +0,0 @@ -""" - -对源特征进行检索 -""" -import torch, pdb, os, parselmouth - -os.environ["CUDA_VISIBLE_DEVICES"] = "0" -import numpy as np -import soundfile as sf - -# from models import SynthesizerTrn256#hifigan_nonsf -# from infer_pack.models import SynthesizerTrn256NSF as SynthesizerTrn256#hifigan_nsf -from infer_pack.models import ( - SynthesizerTrnMs256NSFsid as SynthesizerTrn256, -) # hifigan_nsf - -# from infer_pack.models import SynthesizerTrnMs256NSFsid_sim as SynthesizerTrn256#hifigan_nsf -# from models import SynthesizerTrn256NSFsim as SynthesizerTrn256#hifigan_nsf -# from models import SynthesizerTrn256NSFsimFlow as SynthesizerTrn256#hifigan_nsf - - -from scipy.io import wavfile -from fairseq import checkpoint_utils - -# import pyworld -import librosa -import torch.nn.functional as F -import scipy.signal as signal - -# import torchcrepe -from time import time as ttime - -device = torch.device("cuda" if torch.cuda.is_available() else "cpu") -model_path = r"E:\codes\py39\vits_vc_gpu_train\hubert_base.pt" # -print("load model(s) from {}".format(model_path)) -models, saved_cfg, task = checkpoint_utils.load_model_ensemble_and_task( - [model_path], - suffix="", -) -model = models[0] -model = model.to(device) -model = model.half() -model.eval() - -# net_g = SynthesizerTrn256(1025,32,192,192,768,2,6,3,0.1,"1", [3,7,11],[[1,3,5], [1,3,5], [1,3,5]],[10,10,2,2],512,[16,16,4,4],183,256,is_half=True)#hifigan#512#256 -# net_g = SynthesizerTrn256(1025,32,192,192,768,2,6,3,0.1,"1", [3,7,11],[[1,3,5], [1,3,5], [1,3,5]],[10,10,2,2],512,[16,16,4,4],109,256,is_half=True)#hifigan#512#256 -net_g = SynthesizerTrn256( - 1025, - 32, - 192, - 192, - 768, - 2, - 6, - 3, - 0, - "1", - [3, 7, 11], - [[1, 3, 5], [1, 3, 5], [1, 3, 5]], - [10, 10, 2, 2], - 512, - [16, 16, 4, 4], - 183, - 256, - is_half=True, -) # hifigan#512#256#no_dropout -# net_g = SynthesizerTrn256(1025,32,192,192,768,2,3,3,0.1,"1", [3,7,11],[[1,3,5], [1,3,5], [1,3,5]],[10,10,2,2],512,[16,16,4,4],0)#ts3 -# net_g = SynthesizerTrn256(1025,32,192,192,768,2,6,3,0.1,"1", [3,7,11],[[1,3,5], [1,3,5], [1,3,5]],[10,10,2],512,[16,16,4],0)#hifigan-ps-sr -# -# net_g = SynthesizerTrn(1025, 32, 192, 192, 768, 2, 6, 3, 0.1, "1", [3, 7, 11], [[1, 3, 5], [1, 3, 5], [1, 3, 5]], [5,5], 512, [15,15], 0)#ms -# net_g = SynthesizerTrn(1025, 32, 192, 192, 768, 2, 6, 3, 0.1, "1", [3, 7, 
11], [[1, 3, 5], [1, 3, 5], [1, 3, 5]], [10,10], 512, [16,16], 0)#idwt2 - -# weights=torch.load("infer/ft-mi_1k-noD.pt") -# weights=torch.load("infer/ft-mi-freeze-vocoder-flow-enc_q_1k.pt") -# weights=torch.load("infer/ft-mi-freeze-vocoder_true_1k.pt") -# weights=torch.load("infer/ft-mi-sim1k.pt") -weights = torch.load("infer/ft-mi-no_opt-no_dropout.pt") -print(net_g.load_state_dict(weights, strict=True)) - -net_g.eval().to(device) -net_g.half() - - -def get_f0(x, p_len, f0_up_key=0): - time_step = 160 / 16000 * 1000 - f0_min = 50 - f0_max = 1100 - f0_mel_min = 1127 * np.log(1 + f0_min / 700) - f0_mel_max = 1127 * np.log(1 + f0_max / 700) - - f0 = ( - parselmouth.Sound(x, 16000) - .to_pitch_ac( - time_step=time_step / 1000, - voicing_threshold=0.6, - pitch_floor=f0_min, - pitch_ceiling=f0_max, - ) - .selected_array["frequency"] - ) - - pad_size = (p_len - len(f0) + 1) // 2 - if pad_size > 0 or p_len - len(f0) - pad_size > 0: - f0 = np.pad(f0, [[pad_size, p_len - len(f0) - pad_size]], mode="constant") - f0 *= pow(2, f0_up_key / 12) - f0bak = f0.copy() - - f0_mel = 1127 * np.log(1 + f0 / 700) - f0_mel[f0_mel > 0] = (f0_mel[f0_mel > 0] - f0_mel_min) * 254 / ( - f0_mel_max - f0_mel_min - ) + 1 - f0_mel[f0_mel <= 1] = 1 - f0_mel[f0_mel > 255] = 255 - # f0_mel[f0_mel > 188] = 188 - f0_coarse = np.rint(f0_mel).astype(np.int) - return f0_coarse, f0bak - - -import faiss - -index = faiss.read_index("infer/added_IVF512_Flat_mi_baseline_src_feat.index") -big_npy = np.load("infer/big_src_feature_mi.npy") -ta0 = ta1 = ta2 = 0 -for idx, name in enumerate( - [ - "冬之花clip1.wav", - ] -): ## - wav_path = "todo-songs/%s" % name # - f0_up_key = -2 # - audio, sampling_rate = sf.read(wav_path) - if len(audio.shape) > 1: - audio = librosa.to_mono(audio.transpose(1, 0)) - if sampling_rate != 16000: - audio = librosa.resample(audio, orig_sr=sampling_rate, target_sr=16000) - - feats = torch.from_numpy(audio).float() - if feats.dim() == 2: # double channels - feats = feats.mean(-1) - assert feats.dim() == 1, feats.dim() - feats = feats.view(1, -1) - padding_mask = torch.BoolTensor(feats.shape).fill_(False) - inputs = { - "source": feats.half().to(device), - "padding_mask": padding_mask.to(device), - "output_layer": 9, # layer 9 - } - if torch.cuda.is_available(): - torch.cuda.synchronize() - t0 = ttime() - with torch.no_grad(): - logits = model.extract_features(**inputs) - feats = model.final_proj(logits[0]) - - ####索引优化 - npy = feats[0].cpu().numpy().astype("float32") - D, I = index.search(npy, 1) - feats = ( - torch.from_numpy(big_npy[I.squeeze()].astype("float16")).unsqueeze(0).to(device) - ) - - feats = F.interpolate(feats.permute(0, 2, 1), scale_factor=2).permute(0, 2, 1) - if torch.cuda.is_available(): - torch.cuda.synchronize() - t1 = ttime() - # p_len = min(feats.shape[1],10000,pitch.shape[0])#太大了爆显存 - p_len = min(feats.shape[1], 10000) # - pitch, pitchf = get_f0(audio, p_len, f0_up_key) - p_len = min(feats.shape[1], 10000, pitch.shape[0]) # 太大了爆显存 - if torch.cuda.is_available(): - torch.cuda.synchronize() - t2 = ttime() - feats = feats[:, :p_len, :] - pitch = pitch[:p_len] - pitchf = pitchf[:p_len] - p_len = torch.LongTensor([p_len]).to(device) - pitch = torch.LongTensor(pitch).unsqueeze(0).to(device) - sid = torch.LongTensor([0]).to(device) - pitchf = torch.FloatTensor(pitchf).unsqueeze(0).to(device) - with torch.no_grad(): - audio = ( - net_g.infer(feats, p_len, pitch, pitchf, sid)[0][0, 0] - .data.cpu() - .float() - .numpy() - ) # nsf - if torch.cuda.is_available(): - torch.cuda.synchronize() - t3 = 
ttime() - ta0 += t1 - t0 - ta1 += t2 - t1 - ta2 += t3 - t2 - # wavfile.write("ft-mi_1k-index256-noD-%s.wav"%name, 40000, audio)## - # wavfile.write("ft-mi-freeze-vocoder-flow-enc_q_1k-%s.wav"%name, 40000, audio)## - # wavfile.write("ft-mi-sim1k-%s.wav"%name, 40000, audio)## - wavfile.write("ft-mi-no_opt-no_dropout-%s.wav" % name, 40000, audio) ## - - -print(ta0, ta1, ta2) # diff --git a/spaces/shenfangqi/Retrieval-based-Voice-Conversion-WebUI/uvr5_pack/lib_v5/nets_537238KB.py b/spaces/shenfangqi/Retrieval-based-Voice-Conversion-WebUI/uvr5_pack/lib_v5/nets_537238KB.py deleted file mode 100644 index 1ceac4a470ca311d594818d52e5f96919cfddb26..0000000000000000000000000000000000000000 --- a/spaces/shenfangqi/Retrieval-based-Voice-Conversion-WebUI/uvr5_pack/lib_v5/nets_537238KB.py +++ /dev/null @@ -1,123 +0,0 @@ -import torch -import numpy as np -from torch import nn -import torch.nn.functional as F - -from uvr5_pack.lib_v5 import layers_537238KB as layers - - -class BaseASPPNet(nn.Module): - def __init__(self, nin, ch, dilations=(4, 8, 16)): - super(BaseASPPNet, self).__init__() - self.enc1 = layers.Encoder(nin, ch, 3, 2, 1) - self.enc2 = layers.Encoder(ch, ch * 2, 3, 2, 1) - self.enc3 = layers.Encoder(ch * 2, ch * 4, 3, 2, 1) - self.enc4 = layers.Encoder(ch * 4, ch * 8, 3, 2, 1) - - self.aspp = layers.ASPPModule(ch * 8, ch * 16, dilations) - - self.dec4 = layers.Decoder(ch * (8 + 16), ch * 8, 3, 1, 1) - self.dec3 = layers.Decoder(ch * (4 + 8), ch * 4, 3, 1, 1) - self.dec2 = layers.Decoder(ch * (2 + 4), ch * 2, 3, 1, 1) - self.dec1 = layers.Decoder(ch * (1 + 2), ch, 3, 1, 1) - - def __call__(self, x): - h, e1 = self.enc1(x) - h, e2 = self.enc2(h) - h, e3 = self.enc3(h) - h, e4 = self.enc4(h) - - h = self.aspp(h) - - h = self.dec4(h, e4) - h = self.dec3(h, e3) - h = self.dec2(h, e2) - h = self.dec1(h, e1) - - return h - - -class CascadedASPPNet(nn.Module): - def __init__(self, n_fft): - super(CascadedASPPNet, self).__init__() - self.stg1_low_band_net = BaseASPPNet(2, 64) - self.stg1_high_band_net = BaseASPPNet(2, 64) - - self.stg2_bridge = layers.Conv2DBNActiv(66, 32, 1, 1, 0) - self.stg2_full_band_net = BaseASPPNet(32, 64) - - self.stg3_bridge = layers.Conv2DBNActiv(130, 64, 1, 1, 0) - self.stg3_full_band_net = BaseASPPNet(64, 128) - - self.out = nn.Conv2d(128, 2, 1, bias=False) - self.aux1_out = nn.Conv2d(64, 2, 1, bias=False) - self.aux2_out = nn.Conv2d(64, 2, 1, bias=False) - - self.max_bin = n_fft // 2 - self.output_bin = n_fft // 2 + 1 - - self.offset = 128 - - def forward(self, x, aggressiveness=None): - mix = x.detach() - x = x.clone() - - x = x[:, :, : self.max_bin] - - bandw = x.size()[2] // 2 - aux1 = torch.cat( - [ - self.stg1_low_band_net(x[:, :, :bandw]), - self.stg1_high_band_net(x[:, :, bandw:]), - ], - dim=2, - ) - - h = torch.cat([x, aux1], dim=1) - aux2 = self.stg2_full_band_net(self.stg2_bridge(h)) - - h = torch.cat([x, aux1, aux2], dim=1) - h = self.stg3_full_band_net(self.stg3_bridge(h)) - - mask = torch.sigmoid(self.out(h)) - mask = F.pad( - input=mask, - pad=(0, 0, 0, self.output_bin - mask.size()[2]), - mode="replicate", - ) - - if self.training: - aux1 = torch.sigmoid(self.aux1_out(aux1)) - aux1 = F.pad( - input=aux1, - pad=(0, 0, 0, self.output_bin - aux1.size()[2]), - mode="replicate", - ) - aux2 = torch.sigmoid(self.aux2_out(aux2)) - aux2 = F.pad( - input=aux2, - pad=(0, 0, 0, self.output_bin - aux2.size()[2]), - mode="replicate", - ) - return mask * mix, aux1 * mix, aux2 * mix - else: - if aggressiveness: - mask[:, :, : aggressiveness["split_bin"]] = torch.pow( - 
mask[:, :, : aggressiveness["split_bin"]], - 1 + aggressiveness["value"] / 3, - ) - mask[:, :, aggressiveness["split_bin"] :] = torch.pow( - mask[:, :, aggressiveness["split_bin"] :], - 1 + aggressiveness["value"], - ) - - return mask * mix - - def predict(self, x_mag, aggressiveness=None): - h = self.forward(x_mag, aggressiveness) - - if self.offset > 0: - h = h[:, :, :, self.offset : -self.offset] - assert h.size()[3] > 0 - - return h diff --git a/spaces/shi-labs/FcF-Inpainting/app.py b/spaces/shi-labs/FcF-Inpainting/app.py deleted file mode 100644 index 415211b4f1ed1fd5c0fffbae3ba1774dc93002ea..0000000000000000000000000000000000000000 --- a/spaces/shi-labs/FcF-Inpainting/app.py +++ /dev/null @@ -1,192 +0,0 @@ -import subprocess -subprocess.run('sh setup.sh', shell=True) - -print("Installed the dependencies!") - -from typing import Tuple -import dnnlib -from PIL import Image -import numpy as np -import torch -import legacy -import cv2 -from streamlit_drawable_canvas import st_canvas -import streamlit as st - -device = torch.device("cuda" if torch.cuda.is_available() else "cpu") -class_idx = None -truncation_psi = 0.1 - -title = "FcF-Inpainting" - -description = "

        \ - [Note: The Inpainted Image display may take up to a minute depending on the Queue. The image and mask are resized to 512x512 before inpainting. The Run FcF-Inpainting button will automatically appear after you draw a mask.] To use FcF-Inpainting:
        \ - (1) Upload an Image or select a sample image on the left.
        \ - (2) Adjust the brush stroke width and draw the mask on the image. You may also change the drawing tool on the sidebar.
        \ - (3) After drawing a mask, click the Run FcF-Inpainting button and witness the MAGIC! 🪄 ✨ ✨
        \ - (4) You may download/undo/redo/delete the changes on the image using the options below the image box.

        " - -article = "

        Project Page | Keys to Better Image Inpainting: Structure and Texture Go Hand in Hand | Github

        " - -def create_model(network_pkl): - print('Loading networks from "%s"...' % network_pkl) - with dnnlib.util.open_url(network_pkl) as f: - G = legacy.load_network_pkl(f)['G_ema'] # type: ignore - - G = G.eval().to(device) - netG_params = sum(p.numel() for p in G.parameters()) - print("Generator Params: {} M".format(netG_params/1e6)) - return G - -def fcf_inpaint(G, org_img, erased_img, mask): - label = torch.zeros([1, G.c_dim], device=device) - if G.c_dim != 0: - if class_idx is None: - ValueError("class_idx can't be None.") - label[:, class_idx] = 1 - else: - if class_idx is not None: - print ('warn: --class=lbl ignored when running on an unconditional network') - - pred_img = G(img=torch.cat([0.5 - mask, erased_img], dim=1), c=label, truncation_psi=truncation_psi, noise_mode='const') - comp_img = mask.to(device) * pred_img + (1 - mask).to(device) * org_img.to(device) - return comp_img - - -def denorm(img): - img = np.asarray(img[0].cpu(), dtype=np.float32).transpose(1, 2, 0) - img = (img +1) * 127.5 - img = np.rint(img).clip(0, 255).astype(np.uint8) - return img - -def pil_to_numpy(pil_img: Image) -> Tuple[torch.Tensor, torch.Tensor]: - img = np.array(pil_img) - return torch.from_numpy(img)[None].permute(0, 3, 1, 2).float() / 127.5 - 1 - -def process_mask(input_img, mask): - rgb = cv2.cvtColor(input_img, cv2.COLOR_RGBA2RGB) - mask = 255 - mask[:,:,3] - mask = (mask > 0) * 1 - - rgb = np.array(rgb) - mask_tensor = torch.from_numpy(mask).to(torch.float32) - mask_tensor = mask_tensor.unsqueeze(0) - mask_tensor = mask_tensor.unsqueeze(0).to(device) - - rgb = rgb.transpose(2,0,1) - rgb = torch.from_numpy(rgb.astype(np.float32)).unsqueeze(0) - rgb = (rgb.to(torch.float32) / 127.5 - 1).to(device) - rgb_erased = rgb.clone() - rgb_erased = rgb_erased * (1 - mask_tensor) # erase rgb - rgb_erased = rgb_erased.to(torch.float32) - - rgb_erased = denorm(rgb_erased) - return rgb_erased - -def inpaint(input_img, mask, model): - rgb = cv2.cvtColor(input_img, cv2.COLOR_RGBA2RGB) - mask = 255 - mask[:,:,3] - mask = (mask > 0) * 1 - - rgb = np.array(rgb) - mask_tensor = torch.from_numpy(mask).to(torch.float32) - mask_tensor = mask_tensor.unsqueeze(0) - mask_tensor = mask_tensor.unsqueeze(0).to(device) - - rgb = rgb.transpose(2,0,1) - rgb = torch.from_numpy(rgb.astype(np.float32)).unsqueeze(0) - rgb = (rgb.to(torch.float32) / 127.5 - 1).to(device) - rgb_erased = rgb.clone() - rgb_erased = rgb_erased * (1 - mask_tensor) # erase rgb - rgb_erased = rgb_erased.to(torch.float32) - - comp_img = fcf_inpaint(G=model, org_img=rgb.to(torch.float32), erased_img=rgb_erased.to(torch.float32), mask=mask_tensor.to(torch.float32)) - rgb_erased = denorm(rgb_erased) - comp_img = denorm(comp_img) - return comp_img - -def run_app(model): - - if "button_id" not in st.session_state: - st.session_state["button_id"] = "" - if "color_to_label" not in st.session_state: - st.session_state["color_to_label"] = {} - image_inpainting(model) - - with st.sidebar: - st.markdown("---") - -def image_inpainting(model): - if 'reuse_image' not in st.session_state: - st.session_state.reuse_image = None - - st.title(title) - st.markdown(article, unsafe_allow_html=True) - st.markdown(description, unsafe_allow_html=True) - - image = st.sidebar.file_uploader("Upload an Image", type=["png", "jpg", "jpeg"]) - - sample_image = st.sidebar.radio('Choose a Sample Image', [ - 'wall-1.jpeg', - 'wall-2.jpeg', - 'house.jpeg', - 'door.jpeg', - 'floor.jpeg', - 'church.jpeg', - 'person-cliff.jpeg', - 'person-fence.png', - 'persons-white-fence.jpeg', - ]) 
- - drawing_mode = st.sidebar.selectbox( - "Drawing tool:", ("freedraw", "line") -) - - image = Image.open(image).convert("RGBA") if image else Image.open(f"./test_512/{sample_image}").convert("RGBA") - image = image.resize((512, 512)) - width, height = image.size - stroke_width = st.sidebar.slider("Stroke width: ", 1, 100, 20) - - canvas_result = st_canvas( - stroke_color="rgba(255, 0, 255, 0.8)", - stroke_width=stroke_width, - background_image=image, - height=height, - width=width, - drawing_mode=drawing_mode, - key="canvas", - ) - if canvas_result.image_data is not None and image and len(canvas_result.json_data["objects"]) > 0: - - im = canvas_result.image_data.copy() - background = np.where( - (im[:, :, 0] == 0) & - (im[:, :, 1] == 0) & - (im[:, :, 2] == 0) - ) - drawing = np.where( - (im[:, :, 0] == 255) & - (im[:, :, 1] == 0) & - (im[:, :, 2] == 255) - ) - im[background]=[0,0,0,255] - im[drawing]=[0,0,0,0] #RGBA - if st.button('Run FcF-Inpainting'): - col1, col2 = st.columns([1,1]) - with col1: - # if st.button('Show Image with Holes'): - st.write("Masked Image") - mask_show = process_mask(np.array(image), np.array(im)) - st.image(mask_show) - with col2: - st.write("Inpainted Image") - inpainted_img = inpaint(np.array(image), np.array(im), model) - st.image(inpainted_img) - -if __name__ == "__main__": - st.set_page_config( - page_title="FcF-Inpainting", page_icon=":sparkles:" - ) - st.sidebar.subheader("Configuration") - model = create_model("models/places_512.pkl") - run_app(model) \ No newline at end of file diff --git a/spaces/shi-labs/FcF-Inpainting/torch_utils/ops/conv2d_resample.py b/spaces/shi-labs/FcF-Inpainting/torch_utils/ops/conv2d_resample.py deleted file mode 100644 index cd4750744c83354bab78704d4ef51ad1070fcc4a..0000000000000000000000000000000000000000 --- a/spaces/shi-labs/FcF-Inpainting/torch_utils/ops/conv2d_resample.py +++ /dev/null @@ -1,156 +0,0 @@ -# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. - -"""2D convolution with optional up/downsampling.""" - -import torch - -from .. import misc -from . import conv2d_gradfix -from . import upfirdn2d -from .upfirdn2d import _parse_padding -from .upfirdn2d import _get_filter_size - -#---------------------------------------------------------------------------- - -def _get_weight_shape(w): - with misc.suppress_tracer_warnings(): # this value will be treated as a constant - shape = [int(sz) for sz in w.shape] - misc.assert_shape(w, shape) - return shape - -#---------------------------------------------------------------------------- - -def _conv2d_wrapper(x, w, stride=1, padding=0, groups=1, transpose=False, flip_weight=True): - """Wrapper for the underlying `conv2d()` and `conv_transpose2d()` implementations. - """ - out_channels, in_channels_per_group, kh, kw = _get_weight_shape(w) - - # Flip weight if requested. - if not flip_weight: # conv2d() actually performs correlation (flip_weight=True) not convolution (flip_weight=False). - w = w.flip([2, 3]) - - # Workaround performance pitfall in cuDNN 8.0.5, triggered when using - # 1x1 kernel + memory_format=channels_last + less than 64 channels. 
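    # (Sketch of the workaround taken below: for such tiny 1x1 kernels the convolution is
    #  either folded into a plain matrix multiply (when out_channels <= 4 and groups == 1)
    #  or run with contiguous memory_format and then cast back to channels_last on the
    #  output, which sidesteps the slow cuDNN code path.)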
- if kw == 1 and kh == 1 and stride == 1 and padding in [0, [0, 0], (0, 0)] and not transpose: - if x.stride()[1] == 1 and min(out_channels, in_channels_per_group) < 64: - if out_channels <= 4 and groups == 1: - in_shape = x.shape - x = w.squeeze(3).squeeze(2) @ x.reshape([in_shape[0], in_channels_per_group, -1]) - x = x.reshape([in_shape[0], out_channels, in_shape[2], in_shape[3]]) - else: - x = x.to(memory_format=torch.contiguous_format) - w = w.to(memory_format=torch.contiguous_format) - x = conv2d_gradfix.conv2d(x, w, groups=groups) - return x.to(memory_format=torch.channels_last) - - # Otherwise => execute using conv2d_gradfix. - op = conv2d_gradfix.conv_transpose2d if transpose else conv2d_gradfix.conv2d - return op(x, w, stride=stride, padding=padding, groups=groups) - -#---------------------------------------------------------------------------- - -@misc.profiled_function -def conv2d_resample(x, w, f=None, up=1, down=1, padding=0, groups=1, flip_weight=True, flip_filter=False): - r"""2D convolution with optional up/downsampling. - - Padding is performed only once at the beginning, not between the operations. - - Args: - x: Input tensor of shape - `[batch_size, in_channels, in_height, in_width]`. - w: Weight tensor of shape - `[out_channels, in_channels//groups, kernel_height, kernel_width]`. - f: Low-pass filter for up/downsampling. Must be prepared beforehand by - calling upfirdn2d.setup_filter(). None = identity (default). - up: Integer upsampling factor (default: 1). - down: Integer downsampling factor (default: 1). - padding: Padding with respect to the upsampled image. Can be a single number - or a list/tuple `[x, y]` or `[x_before, x_after, y_before, y_after]` - (default: 0). - groups: Split input channels into N groups (default: 1). - flip_weight: False = convolution, True = correlation (default: True). - flip_filter: False = convolution, True = correlation (default: False). - - Returns: - Tensor of the shape `[batch_size, num_channels, out_height, out_width]`. - """ - # Validate arguments. - assert isinstance(x, torch.Tensor) and (x.ndim == 4) - assert isinstance(w, torch.Tensor) and (w.ndim == 4) and (w.dtype == x.dtype) - assert f is None or (isinstance(f, torch.Tensor) and f.ndim in [1, 2] and f.dtype == torch.float32) - assert isinstance(up, int) and (up >= 1) - assert isinstance(down, int) and (down >= 1) - assert isinstance(groups, int) and (groups >= 1) - out_channels, in_channels_per_group, kh, kw = _get_weight_shape(w) - fw, fh = _get_filter_size(f) - px0, px1, py0, py1 = _parse_padding(padding) - - # Adjust padding to account for up/downsampling. - if up > 1: - px0 += (fw + up - 1) // 2 - px1 += (fw - up) // 2 - py0 += (fh + up - 1) // 2 - py1 += (fh - up) // 2 - if down > 1: - px0 += (fw - down + 1) // 2 - px1 += (fw - down) // 2 - py0 += (fh - down + 1) // 2 - py1 += (fh - down) // 2 - - # Fast path: 1x1 convolution with downsampling only => downsample first, then convolve. - if kw == 1 and kh == 1 and (down > 1 and up == 1): - x = upfirdn2d.upfirdn2d(x=x, f=f, down=down, padding=[px0,px1,py0,py1], flip_filter=flip_filter) - x = _conv2d_wrapper(x=x, w=w, groups=groups, flip_weight=flip_weight) - return x - - # Fast path: 1x1 convolution with upsampling only => convolve first, then upsample. 
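    # (Doing the 1x1 convolution before upsampling keeps it on the smaller tensor; the
    #  gain=up**2 passed to upfirdn2d below is understood to compensate for the amplitude
    #  lost when zeros are inserted during upsampling.)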
- if kw == 1 and kh == 1 and (up > 1 and down == 1): - x = _conv2d_wrapper(x=x, w=w, groups=groups, flip_weight=flip_weight) - x = upfirdn2d.upfirdn2d(x=x, f=f, up=up, padding=[px0,px1,py0,py1], gain=up**2, flip_filter=flip_filter) - return x - - # Fast path: downsampling only => use strided convolution. - if down > 1 and up == 1: - x = upfirdn2d.upfirdn2d(x=x, f=f, padding=[px0,px1,py0,py1], flip_filter=flip_filter) - x = _conv2d_wrapper(x=x, w=w, stride=down, groups=groups, flip_weight=flip_weight) - return x - - # Fast path: upsampling with optional downsampling => use transpose strided convolution. - if up > 1: - if groups == 1: - w = w.transpose(0, 1) - else: - w = w.reshape(groups, out_channels // groups, in_channels_per_group, kh, kw) - w = w.transpose(1, 2) - w = w.reshape(groups * in_channels_per_group, out_channels // groups, kh, kw) - px0 -= kw - 1 - px1 -= kw - up - py0 -= kh - 1 - py1 -= kh - up - pxt = max(min(-px0, -px1), 0) - pyt = max(min(-py0, -py1), 0) - x = _conv2d_wrapper(x=x, w=w, stride=up, padding=[pyt,pxt], groups=groups, transpose=True, flip_weight=(not flip_weight)) - x = upfirdn2d.upfirdn2d(x=x, f=f, padding=[px0+pxt,px1+pxt,py0+pyt,py1+pyt], gain=up**2, flip_filter=flip_filter) - if down > 1: - x = upfirdn2d.upfirdn2d(x=x, f=f, down=down, flip_filter=flip_filter) - return x - - # Fast path: no up/downsampling, padding supported by the underlying implementation => use plain conv2d. - if up == 1 and down == 1: - if px0 == px1 and py0 == py1 and px0 >= 0 and py0 >= 0: - return _conv2d_wrapper(x=x, w=w, padding=[py0,px0], groups=groups, flip_weight=flip_weight) - - # Fallback: Generic reference implementation. - x = upfirdn2d.upfirdn2d(x=x, f=(f if up > 1 else None), up=up, padding=[px0,px1,py0,py1], gain=up**2, flip_filter=flip_filter) - x = _conv2d_wrapper(x=x, w=w, groups=groups, flip_weight=flip_weight) - if down > 1: - x = upfirdn2d.upfirdn2d(x=x, f=f, down=down, flip_filter=flip_filter) - return x - -#---------------------------------------------------------------------------- diff --git a/spaces/shi-labs/Matting-Anything/segment-anything/segment_anything/modeling/transformer.py b/spaces/shi-labs/Matting-Anything/segment-anything/segment_anything/modeling/transformer.py deleted file mode 100644 index 28fafea52288603fea275f3a100790471825c34a..0000000000000000000000000000000000000000 --- a/spaces/shi-labs/Matting-Anything/segment-anything/segment_anything/modeling/transformer.py +++ /dev/null @@ -1,240 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. - -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import torch -from torch import Tensor, nn - -import math -from typing import Tuple, Type - -from .common import MLPBlock - - -class TwoWayTransformer(nn.Module): - def __init__( - self, - depth: int, - embedding_dim: int, - num_heads: int, - mlp_dim: int, - activation: Type[nn.Module] = nn.ReLU, - attention_downsample_rate: int = 2, - ) -> None: - """ - A transformer decoder that attends to an input image using - queries whose positional embedding is supplied. - - Args: - depth (int): number of layers in the transformer - embedding_dim (int): the channel dimension for the input embeddings - num_heads (int): the number of heads for multihead attention. 
Must - divide embedding_dim - mlp_dim (int): the channel dimension internal to the MLP block - activation (nn.Module): the activation to use in the MLP block - """ - super().__init__() - self.depth = depth - self.embedding_dim = embedding_dim - self.num_heads = num_heads - self.mlp_dim = mlp_dim - self.layers = nn.ModuleList() - - for i in range(depth): - self.layers.append( - TwoWayAttentionBlock( - embedding_dim=embedding_dim, - num_heads=num_heads, - mlp_dim=mlp_dim, - activation=activation, - attention_downsample_rate=attention_downsample_rate, - skip_first_layer_pe=(i == 0), - ) - ) - - self.final_attn_token_to_image = Attention( - embedding_dim, num_heads, downsample_rate=attention_downsample_rate - ) - self.norm_final_attn = nn.LayerNorm(embedding_dim) - - def forward( - self, - image_embedding: Tensor, - image_pe: Tensor, - point_embedding: Tensor, - ) -> Tuple[Tensor, Tensor]: - """ - Args: - image_embedding (torch.Tensor): image to attend to. Should be shape - B x embedding_dim x h x w for any h and w. - image_pe (torch.Tensor): the positional encoding to add to the image. Must - have the same shape as image_embedding. - point_embedding (torch.Tensor): the embedding to add to the query points. - Must have shape B x N_points x embedding_dim for any N_points. - - Returns: - torch.Tensor: the processed point_embedding - torch.Tensor: the processed image_embedding - """ - # BxCxHxW -> BxHWxC == B x N_image_tokens x C - bs, c, h, w = image_embedding.shape - image_embedding = image_embedding.flatten(2).permute(0, 2, 1) - image_pe = image_pe.flatten(2).permute(0, 2, 1) - - # Prepare queries - queries = point_embedding - keys = image_embedding - - # Apply transformer blocks and final layernorm - for layer in self.layers: - queries, keys = layer( - queries=queries, - keys=keys, - query_pe=point_embedding, - key_pe=image_pe, - ) - - # Apply the final attention layer from the points to the image - q = queries + point_embedding - k = keys + image_pe - attn_out = self.final_attn_token_to_image(q=q, k=k, v=keys) - queries = queries + attn_out - queries = self.norm_final_attn(queries) - - return queries, keys - - -class TwoWayAttentionBlock(nn.Module): - def __init__( - self, - embedding_dim: int, - num_heads: int, - mlp_dim: int = 2048, - activation: Type[nn.Module] = nn.ReLU, - attention_downsample_rate: int = 2, - skip_first_layer_pe: bool = False, - ) -> None: - """ - A transformer block with four layers: (1) self-attention of sparse - inputs, (2) cross attention of sparse inputs to dense inputs, (3) mlp - block on sparse inputs, and (4) cross attention of dense inputs to sparse - inputs. 
- - Arguments: - embedding_dim (int): the channel dimension of the embeddings - num_heads (int): the number of heads in the attention layers - mlp_dim (int): the hidden dimension of the mlp block - activation (nn.Module): the activation of the mlp block - skip_first_layer_pe (bool): skip the PE on the first layer - """ - super().__init__() - self.self_attn = Attention(embedding_dim, num_heads) - self.norm1 = nn.LayerNorm(embedding_dim) - - self.cross_attn_token_to_image = Attention( - embedding_dim, num_heads, downsample_rate=attention_downsample_rate - ) - self.norm2 = nn.LayerNorm(embedding_dim) - - self.mlp = MLPBlock(embedding_dim, mlp_dim, activation) - self.norm3 = nn.LayerNorm(embedding_dim) - - self.norm4 = nn.LayerNorm(embedding_dim) - self.cross_attn_image_to_token = Attention( - embedding_dim, num_heads, downsample_rate=attention_downsample_rate - ) - - self.skip_first_layer_pe = skip_first_layer_pe - - def forward( - self, queries: Tensor, keys: Tensor, query_pe: Tensor, key_pe: Tensor - ) -> Tuple[Tensor, Tensor]: - # Self attention block - if self.skip_first_layer_pe: - queries = self.self_attn(q=queries, k=queries, v=queries) - else: - q = queries + query_pe - attn_out = self.self_attn(q=q, k=q, v=queries) - queries = queries + attn_out - queries = self.norm1(queries) - - # Cross attention block, tokens attending to image embedding - q = queries + query_pe - k = keys + key_pe - attn_out = self.cross_attn_token_to_image(q=q, k=k, v=keys) - queries = queries + attn_out - queries = self.norm2(queries) - - # MLP block - mlp_out = self.mlp(queries) - queries = queries + mlp_out - queries = self.norm3(queries) - - # Cross attention block, image embedding attending to tokens - q = queries + query_pe - k = keys + key_pe - attn_out = self.cross_attn_image_to_token(q=k, k=q, v=queries) - keys = keys + attn_out - keys = self.norm4(keys) - - return queries, keys - - -class Attention(nn.Module): - """ - An attention layer that allows for downscaling the size of the embedding - after projection to queries, keys, and values. - """ - - def __init__( - self, - embedding_dim: int, - num_heads: int, - downsample_rate: int = 1, - ) -> None: - super().__init__() - self.embedding_dim = embedding_dim - self.internal_dim = embedding_dim // downsample_rate - self.num_heads = num_heads - assert self.internal_dim % num_heads == 0, "num_heads must divide embedding_dim." 
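        # (A rough shape trace, assuming for illustration embedding_dim=256, num_heads=8,
        #  downsample_rate=2, so internal_dim=128 and 16 channels per head:
        #    q/k/v: B x N x 256 -> projections below -> B x N x 128 -> split heads -> B x 8 x N x 16
        #    attn:  B x 8 x N_q x N_k;  out: B x 8 x N_q x 16 -> recombine -> B x N_q x 128 -> out_proj -> B x N_q x 256.)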
- - self.q_proj = nn.Linear(embedding_dim, self.internal_dim) - self.k_proj = nn.Linear(embedding_dim, self.internal_dim) - self.v_proj = nn.Linear(embedding_dim, self.internal_dim) - self.out_proj = nn.Linear(self.internal_dim, embedding_dim) - - def _separate_heads(self, x: Tensor, num_heads: int) -> Tensor: - b, n, c = x.shape - x = x.reshape(b, n, num_heads, c // num_heads) - return x.transpose(1, 2) # B x N_heads x N_tokens x C_per_head - - def _recombine_heads(self, x: Tensor) -> Tensor: - b, n_heads, n_tokens, c_per_head = x.shape - x = x.transpose(1, 2) - return x.reshape(b, n_tokens, n_heads * c_per_head) # B x N_tokens x C - - def forward(self, q: Tensor, k: Tensor, v: Tensor) -> Tensor: - # Input projections - q = self.q_proj(q) - k = self.k_proj(k) - v = self.v_proj(v) - - # Separate into heads - q = self._separate_heads(q, self.num_heads) - k = self._separate_heads(k, self.num_heads) - v = self._separate_heads(v, self.num_heads) - - # Attention - _, _, _, c_per_head = q.shape - attn = q @ k.permute(0, 1, 3, 2) # B x N_heads x N_tokens x N_tokens - attn = attn / math.sqrt(c_per_head) - attn = torch.softmax(attn, dim=-1) - - # Get output - out = attn @ v - out = self._recombine_heads(out) - out = self.out_proj(out) - - return out diff --git a/spaces/shi-labs/Versatile-Diffusion/lib/model_zoo/autokl_utils.py b/spaces/shi-labs/Versatile-Diffusion/lib/model_zoo/autokl_utils.py deleted file mode 100644 index 903fdded9ce0a771090648827322a04c8fccf8f5..0000000000000000000000000000000000000000 --- a/spaces/shi-labs/Versatile-Diffusion/lib/model_zoo/autokl_utils.py +++ /dev/null @@ -1,400 +0,0 @@ -import torch -import torch.nn as nn -import functools - -class ActNorm(nn.Module): - def __init__(self, num_features, logdet=False, affine=True, - allow_reverse_init=False): - assert affine - super().__init__() - self.logdet = logdet - self.loc = nn.Parameter(torch.zeros(1, num_features, 1, 1)) - self.scale = nn.Parameter(torch.ones(1, num_features, 1, 1)) - self.allow_reverse_init = allow_reverse_init - - self.register_buffer('initialized', torch.tensor(0, dtype=torch.uint8)) - - def initialize(self, input): - with torch.no_grad(): - flatten = input.permute(1, 0, 2, 3).contiguous().view(input.shape[1], -1) - mean = ( - flatten.mean(1) - .unsqueeze(1) - .unsqueeze(2) - .unsqueeze(3) - .permute(1, 0, 2, 3) - ) - std = ( - flatten.std(1) - .unsqueeze(1) - .unsqueeze(2) - .unsqueeze(3) - .permute(1, 0, 2, 3) - ) - - self.loc.data.copy_(-mean) - self.scale.data.copy_(1 / (std + 1e-6)) - - def forward(self, input, reverse=False): - if reverse: - return self.reverse(input) - if len(input.shape) == 2: - input = input[:,:,None,None] - squeeze = True - else: - squeeze = False - - _, _, height, width = input.shape - - if self.training and self.initialized.item() == 0: - self.initialize(input) - self.initialized.fill_(1) - - h = self.scale * (input + self.loc) - - if squeeze: - h = h.squeeze(-1).squeeze(-1) - - if self.logdet: - log_abs = torch.log(torch.abs(self.scale)) - logdet = height*width*torch.sum(log_abs) - logdet = logdet * torch.ones(input.shape[0]).to(input) - return h, logdet - - return h - - def reverse(self, output): - if self.training and self.initialized.item() == 0: - if not self.allow_reverse_init: - raise RuntimeError( - "Initializing ActNorm in reverse direction is " - "disabled by default. Use allow_reverse_init=True to enable." 
- ) - else: - self.initialize(output) - self.initialized.fill_(1) - - if len(output.shape) == 2: - output = output[:,:,None,None] - squeeze = True - else: - squeeze = False - - h = output / self.scale - self.loc - - if squeeze: - h = h.squeeze(-1).squeeze(-1) - return h - -################# -# Discriminator # -################# - -def weights_init(m): - classname = m.__class__.__name__ - if classname.find('Conv') != -1: - nn.init.normal_(m.weight.data, 0.0, 0.02) - elif classname.find('BatchNorm') != -1: - nn.init.normal_(m.weight.data, 1.0, 0.02) - nn.init.constant_(m.bias.data, 0) - -class NLayerDiscriminator(nn.Module): - """Defines a PatchGAN discriminator as in Pix2Pix - --> see https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/blob/master/models/networks.py - """ - def __init__(self, input_nc=3, ndf=64, n_layers=3, use_actnorm=False): - """Construct a PatchGAN discriminator - Parameters: - input_nc (int) -- the number of channels in input images - ndf (int) -- the number of filters in the last conv layer - n_layers (int) -- the number of conv layers in the discriminator - norm_layer -- normalization layer - """ - super(NLayerDiscriminator, self).__init__() - if not use_actnorm: - norm_layer = nn.BatchNorm2d - else: - norm_layer = ActNorm - if type(norm_layer) == functools.partial: # no need to use bias as BatchNorm2d has affine parameters - use_bias = norm_layer.func != nn.BatchNorm2d - else: - use_bias = norm_layer != nn.BatchNorm2d - - kw = 4 - padw = 1 - sequence = [nn.Conv2d(input_nc, ndf, kernel_size=kw, stride=2, padding=padw), nn.LeakyReLU(0.2, True)] - nf_mult = 1 - nf_mult_prev = 1 - for n in range(1, n_layers): # gradually increase the number of filters - nf_mult_prev = nf_mult - nf_mult = min(2 ** n, 8) - sequence += [ - nn.Conv2d(ndf * nf_mult_prev, ndf * nf_mult, kernel_size=kw, stride=2, padding=padw, bias=use_bias), - norm_layer(ndf * nf_mult), - nn.LeakyReLU(0.2, True) - ] - - nf_mult_prev = nf_mult - nf_mult = min(2 ** n_layers, 8) - sequence += [ - nn.Conv2d(ndf * nf_mult_prev, ndf * nf_mult, kernel_size=kw, stride=1, padding=padw, bias=use_bias), - norm_layer(ndf * nf_mult), - nn.LeakyReLU(0.2, True) - ] - - sequence += [ - nn.Conv2d(ndf * nf_mult, 1, kernel_size=kw, stride=1, padding=padw)] # output 1 channel prediction map - self.main = nn.Sequential(*sequence) - - def forward(self, input): - """Standard forward.""" - return self.main(input) - -######### -# LPIPS # -######### - -class ScalingLayer(nn.Module): - def __init__(self): - super(ScalingLayer, self).__init__() - self.register_buffer('shift', torch.Tensor([-.030, -.088, -.188])[None, :, None, None]) - self.register_buffer('scale', torch.Tensor([.458, .448, .450])[None, :, None, None]) - - def forward(self, inp): - return (inp - self.shift) / self.scale - -class NetLinLayer(nn.Module): - """ A single linear layer which does a 1x1 conv """ - def __init__(self, chn_in, chn_out=1, use_dropout=False): - super(NetLinLayer, self).__init__() - layers = [nn.Dropout(), ] if (use_dropout) else [] - layers += [nn.Conv2d(chn_in, chn_out, 1, stride=1, padding=0, bias=False), ] - self.model = nn.Sequential(*layers) - -from collections import namedtuple -from torchvision import models -from torchvision.models import VGG16_Weights - -class vgg16(torch.nn.Module): - def __init__(self, requires_grad=False, pretrained=True): - super(vgg16, self).__init__() - if pretrained: - vgg_pretrained_features = models.vgg16(weights=VGG16_Weights.IMAGENET1K_V1).features - self.slice1 = torch.nn.Sequential() - self.slice2 = 
torch.nn.Sequential() - self.slice3 = torch.nn.Sequential() - self.slice4 = torch.nn.Sequential() - self.slice5 = torch.nn.Sequential() - self.N_slices = 5 - for x in range(4): - self.slice1.add_module(str(x), vgg_pretrained_features[x]) - for x in range(4, 9): - self.slice2.add_module(str(x), vgg_pretrained_features[x]) - for x in range(9, 16): - self.slice3.add_module(str(x), vgg_pretrained_features[x]) - for x in range(16, 23): - self.slice4.add_module(str(x), vgg_pretrained_features[x]) - for x in range(23, 30): - self.slice5.add_module(str(x), vgg_pretrained_features[x]) - if not requires_grad: - for param in self.parameters(): - param.requires_grad = False - - def forward(self, X): - h = self.slice1(X) - h_relu1_2 = h - h = self.slice2(h) - h_relu2_2 = h - h = self.slice3(h) - h_relu3_3 = h - h = self.slice4(h) - h_relu4_3 = h - h = self.slice5(h) - h_relu5_3 = h - vgg_outputs = namedtuple("VggOutputs", ['relu1_2', 'relu2_2', 'relu3_3', 'relu4_3', 'relu5_3']) - out = vgg_outputs(h_relu1_2, h_relu2_2, h_relu3_3, h_relu4_3, h_relu5_3) - return out - -def normalize_tensor(x,eps=1e-10): - norm_factor = torch.sqrt(torch.sum(x**2,dim=1,keepdim=True)) - return x/(norm_factor+eps) - -def spatial_average(x, keepdim=True): - return x.mean([2,3],keepdim=keepdim) - -def get_ckpt_path(*args, **kwargs): - return 'pretrained/lpips.pth' - -class LPIPS(nn.Module): - # Learned perceptual metric - def __init__(self, use_dropout=True): - super().__init__() - self.scaling_layer = ScalingLayer() - self.chns = [64, 128, 256, 512, 512] # vg16 features - self.net = vgg16(pretrained=True, requires_grad=False) - self.lin0 = NetLinLayer(self.chns[0], use_dropout=use_dropout) - self.lin1 = NetLinLayer(self.chns[1], use_dropout=use_dropout) - self.lin2 = NetLinLayer(self.chns[2], use_dropout=use_dropout) - self.lin3 = NetLinLayer(self.chns[3], use_dropout=use_dropout) - self.lin4 = NetLinLayer(self.chns[4], use_dropout=use_dropout) - self.load_from_pretrained() - for param in self.parameters(): - param.requires_grad = False - - def load_from_pretrained(self, name="vgg_lpips"): - ckpt = get_ckpt_path(name, "taming/modules/autoencoder/lpips") - self.load_state_dict(torch.load(ckpt, map_location=torch.device("cpu")), strict=False) - print("loaded pretrained LPIPS loss from {}".format(ckpt)) - - @classmethod - def from_pretrained(cls, name="vgg_lpips"): - if name != "vgg_lpips": - raise NotImplementedError - model = cls() - ckpt = get_ckpt_path(name) - model.load_state_dict(torch.load(ckpt, map_location=torch.device("cpu")), strict=False) - return model - - def forward(self, input, target): - in0_input, in1_input = (self.scaling_layer(input), self.scaling_layer(target)) - outs0, outs1 = self.net(in0_input), self.net(in1_input) - feats0, feats1, diffs = {}, {}, {} - lins = [self.lin0, self.lin1, self.lin2, self.lin3, self.lin4] - for kk in range(len(self.chns)): - feats0[kk], feats1[kk] = normalize_tensor(outs0[kk]), normalize_tensor(outs1[kk]) - diffs[kk] = (feats0[kk] - feats1[kk]) ** 2 - - res = [spatial_average(lins[kk].model(diffs[kk]), keepdim=True) for kk in range(len(self.chns))] - val = res[0] - for l in range(1, len(self.chns)): - val += res[l] - return val - -############ -# The loss # -############ - -def adopt_weight(weight, global_step, threshold=0, value=0.): - if global_step < threshold: - weight = value - return weight - -def hinge_d_loss(logits_real, logits_fake): - loss_real = torch.mean(F.relu(1. - logits_real)) - loss_fake = torch.mean(F.relu(1. 
+ logits_fake)) - d_loss = 0.5 * (loss_real + loss_fake) - return d_loss - -def vanilla_d_loss(logits_real, logits_fake): - d_loss = 0.5 * ( - torch.mean(torch.nn.functional.softplus(-logits_real)) + - torch.mean(torch.nn.functional.softplus(logits_fake))) - return d_loss - -class LPIPSWithDiscriminator(nn.Module): - def __init__(self, disc_start, logvar_init=0.0, kl_weight=1.0, pixelloss_weight=1.0, - disc_num_layers=3, disc_in_channels=3, disc_factor=1.0, disc_weight=1.0, - perceptual_weight=1.0, use_actnorm=False, disc_conditional=False, - disc_loss="hinge"): - - super().__init__() - assert disc_loss in ["hinge", "vanilla"] - self.kl_weight = kl_weight - self.pixel_weight = pixelloss_weight - self.perceptual_loss = LPIPS().eval() - self.perceptual_weight = perceptual_weight - # output log variance - self.logvar = nn.Parameter(torch.ones(size=()) * logvar_init) - - self.discriminator = NLayerDiscriminator(input_nc=disc_in_channels, - n_layers=disc_num_layers, - use_actnorm=use_actnorm - ).apply(weights_init) - self.discriminator_iter_start = disc_start - self.disc_loss = hinge_d_loss if disc_loss == "hinge" else vanilla_d_loss - self.disc_factor = disc_factor - self.discriminator_weight = disc_weight - self.disc_conditional = disc_conditional - - def calculate_adaptive_weight(self, nll_loss, g_loss, last_layer=None): - if last_layer is not None: - nll_grads = torch.autograd.grad(nll_loss, last_layer, retain_graph=True)[0] - g_grads = torch.autograd.grad(g_loss, last_layer, retain_graph=True)[0] - else: - nll_grads = torch.autograd.grad(nll_loss, self.last_layer[0], retain_graph=True)[0] - g_grads = torch.autograd.grad(g_loss, self.last_layer[0], retain_graph=True)[0] - - d_weight = torch.norm(nll_grads) / (torch.norm(g_grads) + 1e-4) - d_weight = torch.clamp(d_weight, 0.0, 1e4).detach() - d_weight = d_weight * self.discriminator_weight - return d_weight - - def forward(self, inputs, reconstructions, posteriors, optimizer_idx, - global_step, last_layer=None, cond=None, split="train", - weights=None): - rec_loss = torch.abs(inputs.contiguous() - reconstructions.contiguous()) - if self.perceptual_weight > 0: - p_loss = self.perceptual_loss(inputs.contiguous(), reconstructions.contiguous()) - rec_loss = rec_loss + self.perceptual_weight * p_loss - - nll_loss = rec_loss / torch.exp(self.logvar) + self.logvar - weighted_nll_loss = nll_loss - if weights is not None: - weighted_nll_loss = weights*nll_loss - weighted_nll_loss = torch.sum(weighted_nll_loss) / weighted_nll_loss.shape[0] - nll_loss = torch.sum(nll_loss) / nll_loss.shape[0] - kl_loss = posteriors.kl() - kl_loss = torch.sum(kl_loss) / kl_loss.shape[0] - - # now the GAN part - if optimizer_idx == 0: - # generator update - if cond is None: - assert not self.disc_conditional - logits_fake = self.discriminator(reconstructions.contiguous()) - else: - assert self.disc_conditional - logits_fake = self.discriminator(torch.cat((reconstructions.contiguous(), cond), dim=1)) - g_loss = -torch.mean(logits_fake) - - if self.disc_factor > 0.0: - try: - d_weight = self.calculate_adaptive_weight(nll_loss, g_loss, last_layer=last_layer) - except RuntimeError: - assert not self.training - d_weight = torch.tensor(0.0) - else: - d_weight = torch.tensor(0.0) - - disc_factor = adopt_weight(self.disc_factor, global_step, threshold=self.discriminator_iter_start) - loss = weighted_nll_loss + self.kl_weight * kl_loss + d_weight * disc_factor * g_loss - - log = {"Loss": loss.clone().detach().mean(), - "logvar": self.logvar.detach(), - "loss_kl": 
kl_loss.detach().mean(), - "loss_nll": nll_loss.detach().mean(), - "loss_rec": rec_loss.detach().mean(), - "d_weight": d_weight.detach(), - "disc_factor": torch.tensor(disc_factor), - "loss_g": g_loss.detach().mean(), - } - return loss, log - - if optimizer_idx == 1: - # second pass for discriminator update - if cond is None: - logits_real = self.discriminator(inputs.contiguous().detach()) - logits_fake = self.discriminator(reconstructions.contiguous().detach()) - else: - logits_real = self.discriminator(torch.cat((inputs.contiguous().detach(), cond), dim=1)) - logits_fake = self.discriminator(torch.cat((reconstructions.contiguous().detach(), cond), dim=1)) - - disc_factor = adopt_weight(self.disc_factor, global_step, threshold=self.discriminator_iter_start) - d_loss = disc_factor * self.disc_loss(logits_real, logits_fake) - - log = {"Loss": d_loss.clone().detach().mean(), - "loss_disc": d_loss.clone().detach().mean(), - "logits_real": logits_real.detach().mean(), - "logits_fake": logits_fake.detach().mean() - } - return d_loss, log diff --git a/spaces/shikunl/prismer/label_prettify.py b/spaces/shikunl/prismer/label_prettify.py deleted file mode 100644 index 0a17225cc3756439ee393ac04b026118df84eb71..0000000000000000000000000000000000000000 --- a/spaces/shikunl/prismer/label_prettify.py +++ /dev/null @@ -1,146 +0,0 @@ -import os -import json -import random -import torch -import matplotlib.pyplot as plt -import matplotlib -import numpy as np -import shutil - -from prismer.utils import create_ade20k_label_colormap -matplotlib.use('agg') - -obj_label_map = torch.load('prismer/dataset/detection_features.pt')['labels'] -coco_label_map = torch.load('prismer/dataset/coco_features.pt')['labels'] -ade_color = create_ade20k_label_colormap() - - -def islight(rgb): - r, g, b = rgb - hsp = np.sqrt(0.299 * (r * r) + 0.587 * (g * g) + 0.114 * (b * b)) - if hsp > 127.5: - return True - else: - return False - - -def depth_prettify(file_path): - pretty_path = file_path.replace('.png', '_p.png') - if not os.path.exists(pretty_path): - depth = plt.imread(file_path) - plt.imsave(pretty_path, depth, cmap='rainbow') - - -def obj_detection_prettify(rgb_path, path_name): - pretty_path = path_name.replace('.png', '_p.png') - if not os.path.exists(pretty_path): - rgb = plt.imread(rgb_path) - obj_labels = plt.imread(path_name) - obj_labels_dict = json.load(open(path_name.replace('.png', '.json'))) - - plt.imshow(rgb) - - if len(np.unique(obj_labels)) == 1: - plt.axis('off') - plt.savefig(path_name, bbox_inches='tight', transparent=True, pad_inches=0) - plt.close() - else: - num_objs = np.unique(obj_labels)[:-1].max() - plt.imshow(obj_labels, cmap='terrain', vmax=num_objs + 1 / 255., alpha=0.8) - cmap = matplotlib.colormaps.get_cmap('terrain') - for i in np.unique(obj_labels)[:-1]: - obj_idx_all = np.where(obj_labels == i) - x, y = obj_idx_all[1].mean(), obj_idx_all[0].mean() - obj_name = obj_label_map[obj_labels_dict[str(int(i * 255))]] - obj_name = obj_name.split(',')[0] - if islight([c*255 for c in cmap(i / num_objs)[:3]]): - plt.text(x, y, obj_name, c='black', horizontalalignment='center', verticalalignment='center', clip_on=True) - else: - plt.text(x, y, obj_name, c='white', horizontalalignment='center', verticalalignment='center', clip_on=True) - - plt.axis('off') - plt.savefig(pretty_path, bbox_inches='tight', transparent=True, pad_inches=0) - plt.close() - - -def seg_prettify(rgb_path, file_name): - pretty_path = file_name.replace('.png', '_p.png') - if not os.path.exists(pretty_path): - rgb = 
plt.imread(rgb_path) - seg_labels = plt.imread(file_name) - - plt.imshow(rgb) - - seg_map = np.zeros(list(seg_labels.shape) + [3], dtype=np.int16) - for i in np.unique(seg_labels): - seg_map[seg_labels == i] = ade_color[int(i * 255)] - - plt.imshow(seg_map, alpha=0.8) - - for i in np.unique(seg_labels): - obj_idx_all = np.where(seg_labels == i) - if len(obj_idx_all[0]) > 20: # only plot the label with its number of labelled pixel more than 20 - obj_idx = random.randint(0, len(obj_idx_all[0]) - 1) - x, y = obj_idx_all[1][obj_idx], obj_idx_all[0][obj_idx] - obj_name = coco_label_map[int(i * 255)] - obj_name = obj_name.split(',')[0] - if islight(seg_map[int(y), int(x)]): - plt.text(x, y, obj_name, c='black', horizontalalignment='center', verticalalignment='center', clip_on=True) - else: - plt.text(x, y, obj_name, c='white', horizontalalignment='center', verticalalignment='center', clip_on=True) - - plt.axis('off') - plt.savefig(pretty_path, bbox_inches='tight', transparent=True, pad_inches=0) - plt.close() - - -def ocr_detection_prettify(rgb_path, file_name): - pretty_path = file_name.replace('.png', '_p.png') - if not os.path.exists(pretty_path): - if os.path.exists(file_name): - rgb = plt.imread(rgb_path) - ocr_labels = plt.imread(file_name) - ocr_labels_dict = torch.load(file_name.replace('.png', '.pt')) - - plt.imshow(rgb) - plt.imshow(ocr_labels, cmap='gray', alpha=0.8) - - for i in np.unique(ocr_labels)[:-1]: - text_idx_all = np.where(ocr_labels == i) - x, y = text_idx_all[1].mean(), text_idx_all[0].mean() - text = ocr_labels_dict[int(i * 255)]['text'] - plt.text(x, y, text, c='white', horizontalalignment='center', verticalalignment='center', clip_on=True) - - plt.axis('off') - plt.savefig(pretty_path, bbox_inches='tight', transparent=True, pad_inches=0) - plt.close() - else: - rgb = plt.imread(rgb_path) - ocr_labels = np.ones_like(rgb, dtype=np.float32()) - - plt.imshow(rgb) - plt.imshow(ocr_labels, cmap='gray', alpha=0.8) - - x, y = rgb.shape[1] / 2, rgb.shape[0] / 2 - plt.text(x, y, 'No text detected', c='black', horizontalalignment='center', verticalalignment='center', clip_on=True) - plt.axis('off') - - os.makedirs(os.path.dirname(file_name), exist_ok=True) - plt.savefig(pretty_path, bbox_inches='tight', transparent=True, pad_inches=0) - plt.close() - - -def label_prettify(rgb_path, expert_paths): - for expert_path in expert_paths: - if 'depth' in expert_path: - depth_prettify(expert_path) - elif 'seg' in expert_path: - seg_prettify(rgb_path, expert_path) - elif 'ocr' in expert_path: - ocr_detection_prettify(rgb_path, expert_path) - elif 'obj' in expert_path: - obj_detection_prettify(rgb_path, expert_path) - else: - pretty_path = expert_path.replace('.png', '_p.png') - if not os.path.exists(pretty_path): - shutil.copyfile(expert_path, pretty_path) diff --git a/spaces/sijunhe/poet/README.md b/spaces/sijunhe/poet/README.md deleted file mode 100644 index 538c8fd7d8214cdd413740e434a5beb7fd2cd96a..0000000000000000000000000000000000000000 --- a/spaces/sijunhe/poet/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Poet -emoji: 👀 -colorFrom: pink -colorTo: green -sdk: gradio -sdk_version: 2.9.4 -app_file: app.py -pinned: false -license: afl-3.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/silencewing/server/youyou/.history/math_20230613231757.html b/spaces/silencewing/server/youyou/.history/math_20230613231757.html deleted file mode 100644 index 
d50038487c83438febd36449d911d2fa7f04497b..0000000000000000000000000000000000000000 --- a/spaces/silencewing/server/youyou/.history/math_20230613231757.html +++ /dev/null @@ -1,234 +0,0 @@ - - - - - - - - - - Document - - - - -
        - - - - - - - - - - - - - - - - - - - - - - - - -
        题目
        答案
        正误
        得分
        -
        - - - - diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Bus Simulator Indonesia Tamilnadu Traffic Mod Apk Download Link and Features.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Bus Simulator Indonesia Tamilnadu Traffic Mod Apk Download Link and Features.md deleted file mode 100644 index 57347586f0d1f311320ec98c961d72c877402658..0000000000000000000000000000000000000000 --- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Bus Simulator Indonesia Tamilnadu Traffic Mod Apk Download Link and Features.md +++ /dev/null @@ -1,112 +0,0 @@ -
        -

        Bus Simulator Indonesia Tamilnadu Traffic Mod APK Download

        -

        Do you love driving buses in realistic and immersive environments? Do you want to experience the thrill of driving on the roads of Tamilnadu, one of the most populous and diverse states in India? If yes, then you should try Bus Simulator Indonesia, a popular bus simulation game for Android devices. And if you want to make your gameplay even more exciting and realistic, you should also download and install the Tamilnadu Traffic Mod APK, which adds various vehicles and traffic rules from Tamilnadu to the game. In this article, we will tell you everything you need to know about Bus Simulator Indonesia and Tamilnadu Traffic Mod, including their features, benefits, and how to download and install them on your device.

        -

        What is Bus Simulator Indonesia?

        -

        Bus Simulator Indonesia, also known as BUSSID, is a bus simulation game developed by Maleo, an Indonesian game studio. The game lets you drive various types of buses on the roads of Indonesia, with realistic graphics, physics, and sounds. You can customize your bus with different liveries, accessories, horns, and stickers. You can also create your own routes and scenarios with the built-in map editor. You can play the game offline or online with other players. The game is free to download and play, but it also offers in-app purchases for extra features and content.

        -

        bus simulator indonesia tamilnadu traffic mod apk download


            Download Zip: https://ssurll.com/2uNVLr
            



        -

        Features of Bus Simulator Indonesia

        -
          -
        • Realistic and detailed 3D graphics of Indonesian environments and landmarks
        • -
        • Various types of buses to choose from, such as city buses, intercity buses, double-decker buses, etc.
        • -
        • Customizable bus liveries, accessories, horns, stickers, etc.
        • -
        • Built-in map editor to create your own routes and scenarios
        • -
        • Offline and online modes to play solo or with other players
        • -
        • Realistic bus physics, sounds, and controls
        • -
        • Dynamic weather and day-night cycle
        • -
        • Friendly and helpful user interface and controls
        • -
        • Regular updates with new features and content
        • -
        -

        How to download and install Bus Simulator Indonesia APK

        -
          -
        1. Go to the Google Play Store on your Android device and search for Bus Simulator Indonesia.
        2. -
        3. Tap on the Install button and wait for the download to finish.
        4. -
        5. Once the download is complete, tap on the Open button to launch the game.
        6. -
            7. Alternatively, you can also download the APK file from a trusted third-party source such as APKPure or APKMirror.
            
        8. -
        9. After downloading the APK file, go to your device settings and enable the option to install apps from unknown sources.
        10. -
            11. Locate the APK file on your device storage and tap on it to install it (if you prefer installing from a computer, see the adb sketch after this list).
            
        12. -
        13. Once the installation is done, you can open the game from your app drawer or home screen.
        14. -
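            If you prefer installing from a computer instead of tapping the APK on the phone, you can sideload it with adb. This is only a sketch, not part of the game's official instructions: it assumes the Android platform-tools (adb) are installed, USB debugging is enabled on the phone, and the downloaded file is named bus-simulator-indonesia.apk (the real file name will differ).

            ```bash
            # Confirm the phone is visible to adb before installing
            adb devices

            # Install the downloaded APK; -r reinstalls/updates while keeping existing app data
            adb install -r bus-simulator-indonesia.apk
            ```
            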
        -

        What is Tamilnadu Traffic Mod?

        -

        Tamilnadu Traffic Mod is a mod for Bus Simulator Indonesia that adds various vehicles and traffic rules from Tamilnadu to the game. The mod was created by Team KBR, a group of modders from India. The mod includes vehicles such as cars, bikes, trucks, autorickshaws, etc., with different models and colors. The mod also adds traffic signals, speed breakers, toll booths, etc., to make the driving experience more realistic and challenging. The mod is compatible with the latest version of Bus Simulator Indonesia and does not require root access or any other mods.

        -

            Benefits of using Tamilnadu Traffic Mod

            Using the mod makes the driving experience feel much closer to Tamilnadu roads: you share the road with local vehicles such as autorickshaws, trucks, and bikes, and you have to deal with the traffic signals, speed breakers, and toll booths described above, which makes the game both more realistic and more challenging.
            

        How to download and install Tamilnadu Traffic Mod APK

        -
          -
        1. Go to the official website of Team KBR and find the download link for Tamilnadu Traffic Mod APK. You can also use this [direct link] to download the mod.
        2. -
        3. After downloading the mod, go to your device settings and enable the option to install apps from unknown sources.
        4. -
        5. Locate the mod file on your device storage and tap on it to install it.
        6. -
        7. Once the installation is done, you can open Bus Simulator Indonesia and enjoy the mod.
        8. -
        -

        How to set Tamilnadu Traffic Mod in Bus Simulator Indonesia

        -

        After installing the mod, you need to set it in the game settings to activate it. Here are the steps to do that:

        -

        Steps to set Tamilnadu Traffic Mod in Bus Simulator Indonesia

        -
          -
        1. Open Bus Simulator Indonesia and go to the Settings menu.
        2. -
        3. Tap on the Traffic option and select Custom Traffic.
        4. -
        5. Tap on the Load button and choose Tamilnadu Traffic Mod from the list of available mods.
        6. -
        7. Tap on the Apply button and confirm your choice.
        8. -
        9. Exit the Settings menu and start a new game or resume your previous game.
        10. -
        11. You will see the Tamilnadu Traffic Mod in action on the roads of Indonesia.
        12. -
        -

        Tips and tricks for using Tamilnadu Traffic Mod in Bus Simulator Indonesia

        -
          -
        • Be careful when driving on the roads with Tamilnadu Traffic Mod, as the traffic rules and conditions are different from Indonesia. For example, you need to drive on the left side of the road, follow the traffic signals, pay attention to the speed breakers, etc.
        • -
        • You can also use the horn button to honk at other vehicles and pedestrians, as it is a common practice in Tamilnadu. However, do not honk excessively or unnecessarily, as it may annoy other drivers and cause accidents.
        • -
        • You can enjoy the different models and colors of vehicles from Tamilnadu, such as cars, bikes, trucks, autorickshaws, etc. You can also see some unique vehicles such as bullock carts, cycle rickshaws, etc., that add more realism and diversity to the game.
        • -
        • You can also customize your bus with different liveries, accessories, horns, stickers, etc., that are inspired by Tamilnadu culture and style. For example, you can use a livery with a picture of a famous actor or politician from Tamilnadu, or a sticker with a slogan or a message in Tamil language.
        • -
        -

        Conclusion

        -

        In this article, we have explained what Bus Simulator Indonesia and Tamilnadu Traffic Mod are, how to download and install them on your device, and how to set them in the game. We have also given you some tips and tricks for using them in the game. We hope you have enjoyed reading this article and learned something new. If you are a fan of bus simulation games and want to experience driving on the roads of Tamilnadu, you should definitely try Bus Simulator Indonesia and Tamilnadu Traffic Mod. They will give you a realistic and immersive gameplay experience that you will not forget. So what are you waiting for? Download them now and have fun!

        -

        FAQs

        -
          -
        • Q: Is Bus Simulator Indonesia free to play?
        • -
        • A: Yes, Bus Simulator Indonesia is free to download and play from the Google Play Store. However, it also offers in-app purchases for extra features and content.
        • -
        • Q: Is Tamilnadu Traffic Mod free to download?
        • -
        • A: Yes, Tamilnadu Traffic Mod is free to download from the official website of Team KBR or from this [direct link].
        • -
        • Q: Do I need root access or any other mods to use Tamilnadu Traffic Mod?
        • -
        • A: No, you do not need root access or any other mods to use Tamilnadu Traffic Mod. You just need to install it on your device and set it in the game settings.
        • -
        • Q: Can I use Tamilnadu Traffic Mod with other mods?
        • -
        • A: Yes, you can use Tamilnadu Traffic Mod with other mods that are compatible with Bus Simulator Indonesia. However, you may encounter some glitches or conflicts if you use too many mods at once.
        • -
        • Q: Can I play Bus Simulator Indonesia offline?
        • -
        • A: Yes, you can play Bus Simulator Indonesia offline without an internet connection. However, you will not be able to access the online mode or the map editor.
        • -

        -

            

            
        -
        -
        \ No newline at end of file diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Car Mechanic Simulator 2021 The Best Simulation Game for Car Lovers.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Car Mechanic Simulator 2021 The Best Simulation Game for Car Lovers.md deleted file mode 100644 index eb77da8a27b1258a390231ecbae5a29d3d703ef7..0000000000000000000000000000000000000000 --- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Car Mechanic Simulator 2021 The Best Simulation Game for Car Lovers.md +++ /dev/null @@ -1,129 +0,0 @@ -
        -

        Car Simulator 5: A Review of the Best Driving Simulation Game of 2023

        -

        If you are a fan of driving simulation games, you might have heard of Car Simulator 5, the latest installment in the popular Car Simulator series. This game is one of the most realistic and immersive driving simulation games ever made, with over 85 new cars, a large open world, and amazing gameplay. In this article, we will review Car Simulator 5 and tell you why you should play it.

        -

        What is Car Simulator 5?

        -

        Car Simulator 5 is a driving simulation game developed by Oppana Games FZC LLC and released in May 2023. It is available for Android devices on Google Play Store and for Windows devices on Steam. It is the sequel to Car Simulator 2, which was released in March 2019.

        -

        car simulator 5


        Download File >>> https://ssurll.com/2uNZyk



        -

        Features and gameplay of Car Simulator 5

        -

        Car Simulator 5 has many features and gameplay modes that make it stand out from other driving simulation games. Here are some of them:

        -

        Online and single-player modes

        -

        You can play Car Simulator 5 online with real players from all over the world, or offline by yourself. You can chat with other players, join clubs, compete in leaderboards, and earn currency that you can spend on new cars, upgrades, garages, and a house.

        -

        3D open world

        -

        You can explore a large city with different districts, such as downtown, suburbs, industrial zone, airport, beach, and more. You can also drive on highways, bridges, tunnels, off-road tracks, and ramps. The city is full of life, with traffic, pedestrians, animals, weather effects, and day-night cycle.

        -

        Daily bonuses and quests

        -

        You can get daily bonuses by logging in every day and completing various tasks. You can also get quests from different characters in the city, such as taxi drivers, mobsters, police officers, racers, mechanics, and more. Quests can involve delivering passengers or goods, escaping from the cops or rivals, repairing or tuning cars, winning races or challenges, and more.

        -

        Fully detailed car models

        -

            You can choose from over 85 new cars in Car Simulator 5, ranging from classic cars to modern cars and from sports cars to luxury cars. You can customize them with different hoods, bumpers, grills, lights, mirrors, and more, and you can also repair your cars if they are damaged or broken.
            

        -

        Interactive gas station

        -

        You can visit a gas station in the city to refuel your cars. You can also buy snacks, drinks, and other items from the convenience store. You can also use the car wash, the air pump, and the tire changer at the gas station.

        -

        Exciting missions in the form of quests, arcade challenges, and races

        -

        You can enjoy various missions in Car Simulator 5, such as quests, arcade challenges, and races. Quests are story-based missions that involve different characters and scenarios. Arcade challenges are mini-games that test your driving skills, such as parking, drifting, obstacle course, and more. Races are competitive missions that pit you against other drivers, either online or offline. You can win rewards and trophies by completing missions.

        -

        Dynamic day-night cycle

        -

        You can experience a dynamic day-night cycle in Car Simulator 5, where the time of day changes according to the real time. You can see the sunrise and sunset, the moon and stars, and the changing shadows and lights. You can also adjust the time of day manually in the settings.

        -

            
        -

        Why should you play Car Simulator 5?

        -

        Car Simulator 5 is a fun, free, realistic, diverse, engaging, and addictive driving simulation game that will keep you entertained for hours. Here are some of the reasons why you should play it:

        -

        The pros and cons of Car Simulator 5

        -

        Like any game, Car Simulator 5 has its pros and cons. Here are some of them:

        -

        Pros:

        -
          -
        • Fun: Car Simulator 5 is a fun game that lets you drive different cars in a large open world. You can explore the city, complete missions, customize your cars, interact with other players, and more.
        • -
        • Free: Car Simulator 5 is a free game that you can download and play on your Android or Windows device. You don't need to pay anything to enjoy the game.
        • -
        • Realistic: Car Simulator 5 is a realistic game that simulates the physics, sounds, graphics, and details of driving a car. You can feel the difference between different cars, see the damage effects, hear the engine sounds, and more.
        • -
        • Diverse: Car Simulator 5 is a diverse game that offers over 85 new cars to choose from, each with its own characteristics and features. You can also drive in different environments, such as city streets, highways, off-road tracks, ramps, and more.
        • -
        • Engaging: Car Simulator 5 is an engaging game that keeps you interested with various gameplay modes and missions. You can play online or offline, chat with other players, join clubs, compete in leaderboards, and earn currency and rewards.
        • -
        • Addictive: Car Simulator 5 is an addictive game that makes you want to play more and more. You can always find something new and exciting to do in the game, such as buying new cars, upgrading your cars, completing new missions, and more.
        • -
        -

        Cons:

        -
          -
        • Requires a good phone: Car Simulator 5 is a high-quality game that requires a good phone to run smoothly. You may experience lag, crashes, or errors if your phone is not compatible or powerful enough.
        • -
        • Has ads and in-app purchases: Car Simulator 5 is a free game that relies on ads and in-app purchases to generate revenue. You may see ads pop up occasionally or have to watch ads to get some bonuses. You may also have to buy some items or currency with real money if you want to unlock them faster.
        • -
        • May have some bugs and glitches: Car Simulator 5 is a complex game that may have some bugs and glitches that affect the gameplay. You may encounter some errors, freezes, or crashes while playing the game. You may also find some inconsistencies, mistakes, or loopholes in the game.
        • -
        -

        The ratings and reviews of Car Simulator 5

        -

        Car Simulator 5 is a popular game that has received many ratings and reviews from users and critics. Here are some of them:

        -

        Google Play Store: 4.5 stars out of 5 from over 1 million reviews

        -

        Car Simulator 5 has been downloaded over 10 million times on Google Play Store and has received over 1 million reviews from users. The majority of the reviews are positive, praising the game for its realism, graphics, gameplay, variety, and fun factor. Some of the negative reviews are about the ads, the in-app purchases, the bugs, and the compatibility issues.

        -

        Steam: Positive reviews from over 10,000 users

        -

            Car Simulator 5 was released on Steam in June 2023 and has received positive reviews from over 10,000 users. Users appreciate the game for its quality, realism, physics, customization, and online mode, while the most common complaints are about the price, the optimization, the controls, and the glitches.
            

        -

        How to download and play Car Simulator 5?

        -

        If you are interested in playing Car Simulator 5, you can download it easily on your Android or Windows device. Here are the steps you need to follow:

        -

        The system requirements and compatibility of Car Simulator 5

        -

        Before you download Car Simulator 5, you need to make sure that your device meets the system requirements and compatibility of the game. Here are the minimum specifications you need:

        -

        Android: Requires Android 6.0 and up, varies with device

        -

        If you want to play Car Simulator 5 on your Android device, you need to have Android 6.0 or higher installed on your device. You also need to have enough storage space available on your device, as the game size varies with device.
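            If you want to double-check these values from a computer, the adb commands below are one way to do it. This is only a sketch: it assumes adb is installed on the computer and USB debugging is enabled on the phone.

            ```bash
            # Print the Android version of the connected device (it should report 6.0 or higher)
            adb shell getprop ro.build.version.release

            # Show how much free space is left on the data partition where the game will be installed
            adb shell df /data
            ```
            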

        -

        Windows: Requires Windows 10, Intel Core i3 or AMD Ryzen 3, 8 GB RAM, NVIDIA GeForce GTX 660 or AMD Radeon R9 270X, DirectX 11, 35 GB available space

        -

        If you want to play Car Simulator 5 on your Windows device, you need to have Windows 10 installed on your device. You also need to have a decent processor, such as Intel Core i3 or AMD Ryzen 3, at least 8 GB of RAM, a good graphics card, such as NVIDIA GeForce GTX 660 or AMD Radeon R9 270X, DirectX 11, and at least 35 GB of free disk space.

        -

        The download links and instructions of Car Simulator 5

        -

        Once you have checked the system requirements and compatibility of Car Simulator 5, you can download it from the following links:

        -

        Google Play Store: [Car Simulator 2 - Apps on Google Play]

        -

        If you want to download Car Simulator 5 on your Android device, you can go to the Google Play Store and search for Car Simulator 2. You can also click on this link: [Car Simulator 2 - Apps on Google Play]. Then, you can tap on the Install button and wait for the game to download and install on your device. You may need to grant some permissions to the game before you can play it.

        -

        Steam: [Car Mechanic Simulator 2021 on Steam]

        -

        If you want to download Car Simulator 5 on your Windows device, you can go to Steam and search for Car Mechanic Simulator 2021. You can also click on this link: [Car Mechanic Simulator 2021 on Steam]. Then, you can add the game to your cart and proceed to checkout. You may need to create a Steam account and install the Steam client if you don't have them already. After you purchase the game, you can download and install it on your device. You may need to activate the game with a product key before you can play it.

        -

        After you have downloaded and installed Car Simulator 5 on your device, you can launch the game and start playing. You can choose between online and offline mode, create your profile, select your car, and explore the city. You can also access the settings menu to adjust the graphics, sound, controls, and other options.

        -

        Conclusion

        -

        Car Simulator 5 is a driving simulation game that offers a realistic and immersive experience of driving different cars in a large open world. You can play online or offline, customize your cars, complete missions, interact with other players, and more. Car Simulator 5 is a fun, free, realistic, diverse, engaging, and addictive game that will keep you entertained for hours. If you are a fan of driving simulation games, you should definitely try Car Simulator 5.

        -

        FAQs

        -

        Here are some of the frequently asked questions about Car Simulator 5:

        -
          -
        • Q: How many cars are there in Car Simulator 5?
        • -
        • A: There are over 85 new cars in Car Simulator 5, ranging from classic cars to modern cars, from sports cars to luxury cars, from SUVs to trucks, and more.
        • -
        • Q: How big is the city in Car Simulator 5?
        • -
        • A: The city in Car Simulator 5 is very large and has different districts, such as downtown, suburbs, industrial zone, airport, beach, and more. You can also drive on highways, bridges, tunnels, off-road tracks, and ramps.
        • -
        • Q: How can I earn money in Car Simulator 5?
        • -
        • A: You can earn money in Car Simulator 5 by completing missions, winning races, selling cars, getting daily bonuses, watching ads, and buying in-app purchases.
        • -
        • Q: How can I interact with other players in Car Simulator 5?
        • -
        • A: You can interact with other players in Car Simulator 5 by playing online mode, chatting with them, joining clubs, competing in leaderboards, and sending gifts.
        • -
        • Q: How can I contact the developers of Car Simulator 5?
        • -
        • A: You can contact the developers of Car Simulator 5 by emailing them at support@oppanagames.com or visiting their website at https://oppanagames.com/.
        • -

            
        -
        -
        \ No newline at end of file diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Pokmon UNITE MOD APK 1.9.1.2 and Unlock All the Pokmon and Items.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Pokmon UNITE MOD APK 1.9.1.2 and Unlock All the Pokmon and Items.md deleted file mode 100644 index 981941110f8bc6417766135d9420afa40be1586f..0000000000000000000000000000000000000000 --- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download Pokmon UNITE MOD APK 1.9.1.2 and Unlock All the Pokmon and Items.md +++ /dev/null @@ -1,82 +0,0 @@ -
        -

        Pokémon UNITE MOD APK 1.9.1.2: Everything You Need to Know

        -

        If you are a fan of Pokémon and MOBA games, you might have heard of Pokémon UNITE, a new game that combines the best of both worlds. But did you know that there is also a Pokémon UNITE MOD APK that you can download and enjoy on your Android device? In this article, we will tell you everything you need to know about this mod apk, including what it is, how to get it, and what are its advantages and disadvantages.

        -

        What is Pokémon UNITE?

        -

        A new Pokémon MOBA game

        -

        Pokémon UNITE is a multiplayer online battle arena (MOBA) game that was released in July 2021 by The Pokémon Company and TiMi Studios. It is the first Pokémon game of its kind, where players can team up with their friends and compete against other players in 5v5 matches. The game features various Pokémon from different generations, each with their own unique abilities and roles.

        -

        pokemon unite mod apk 1.9.1.2


        Download ☆☆☆ https://ssurll.com/2uNSkM



        -

        Features and gameplay

        -

        In Pokémon UNITE, players can choose from a roster of over 20 playable Pokémon, such as Pikachu, Charizard, Snorlax, Lucario, and more. Each Pokémon has four moves that can be upgraded and customized during the match, as well as a Unite Move that can unleash a powerful attack or effect. The goal of the game is to score more points than the opposing team by capturing wild Pokémon and depositing them in the enemy's goal zones.

        -

        The game also has various modes and features that add to the fun and challenge, such as Ranked Matches, Quick Matches, Events, Missions, Battle Passes, Trainer Customization, and more. Players can also earn rewards and unlock new items and Pokémon by playing the game and completing tasks.

        -

            

        -

        Supported devices and platforms

        -

        Pokémon UNITE is available for free on Nintendo Switch and Android devices. The game also supports cross-platform play, meaning that players can team up or battle with other players across different devices. The game requires an internet connection and a Nintendo Account or a Pokémon Trainer Club account to play.

        -

        What is Pokémon UNITE MOD APK?

        -

        A modified version of the original game

        -

        Pokémon UNITE MOD APK is a modified version of the original game that has been altered by some third-party developers or hackers. The mod apk usually offers some features or benefits that are not available in the official game, such as unlimited coins, gems, tickets, energy, or unlocked items and Pokémon.

        -

        The mod apk is usually downloaded from unofficial sources or websites that are not affiliated with The Pokémon Company or TiMi Studios. The mod apk may also require some additional steps or permissions to install and run on your device.

        -

        Benefits and drawbacks of using the mod apk

        -

        Some of the benefits of using the Pokémon UNITE MOD APK are:

        -
          -
        • You can access some premium features or items without spending real money or grinding in the game.
        • -
        • You can have more fun and variety in the game by trying out different Pokémon and moves.
        • -
            • You can have an edge over other players by using items and upgrades that would normally take a long time to unlock.
            

          Some of the drawbacks of using the Pokémon UNITE MOD APK are:

          -
            -
          • You may risk getting banned or suspended from the game if the developers detect that you are using an unauthorized version of the game.
          • -
          • You may expose your device or data to malware or viruses that may be hidden in the mod apk or the source website.
          • -
          • You may miss out on some updates or features that are only available in the official game.
          • -
          • You may ruin the balance and fairness of the game for yourself and other players.
          • -
          -

          How to download and install the Pokémon UNITE MOD APK

          -

          If you still want to try out the Pokémon UNITE MOD APK, here are the steps you need to follow:

          -
            -
          1. Find a reliable and trustworthy website that offers the mod apk. You can search online or ask for recommendations from other users.
          2. -
            3. Download the mod apk file from the website. Make sure you have enough storage space on your device and a stable internet connection (it is also worth verifying the file before installing it, as sketched after this list).
            
          4. -
          5. Enable the installation of apps from unknown sources on your device. You can do this by going to Settings > Security > Unknown Sources and toggling it on.
          6. -
          7. Locate the mod apk file on your device and tap on it to start the installation process. Follow the instructions on the screen and wait for it to finish.
          8. -
          9. Launch the game and enjoy the mod apk features.
          10. -
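            Because mod APKs come from unofficial sources, it is worth checking what you actually downloaded before installing it. The commands below are only a sketch using standard tools: sha256sum ships with most Linux distributions (use shasum -a 256 on macOS), apksigner is part of the Android SDK build-tools, and pokemon-unite-mod.apk is a placeholder file name.

            ```bash
            # Fingerprint the file so you can compare it with a checksum published by the mod author, if one exists
            sha256sum pokemon-unite-mod.apk

            # Print the APK's signing certificate; a missing or unexpected signature is a red flag
            apksigner verify --print-certs pokemon-unite-mod.apk
            ```
            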
          -

          Conclusion

          -

          Summary of the main points

          -

          Pokémon UNITE is a new and exciting Pokémon MOBA game that you can play for free on your Nintendo Switch or Android device. It offers a lot of fun and challenge for Pokémon fans and MOBA lovers alike. However, if you want to experience some extra features or benefits, you can also download and install the Pokémon UNITE MOD APK, which is a modified version of the original game. The mod apk may give you some advantages, such as unlimited resources or unlocked items, but it also comes with some risks, such as getting banned or infected by malware. Therefore, you should be careful and responsible when using the mod apk, and always respect the game and its developers.

          -

          Call to action

          -

          If you are interested in playing Pokémon UNITE or downloading the Pokémon UNITE MOD APK, you can visit their official website or follow their social media accounts for more information and updates. You can also check out some reviews or guides online to learn more about the game and its features. And if you enjoyed this article, please share it with your friends and leave a comment below. Thank you for reading!

          -

          FAQs

          -

          Q: Is Pokémon UNITE MOD APK safe to use?

          -

          A: There is no definitive answer to this question, as different mod apks may have different levels of safety and quality. However, in general, using any mod apk involves some risk, as it is not authorized by the game developers and may contain harmful or malicious code. Therefore, you should always be careful and cautious when downloading and installing any mod apk, and only use it at your own risk.

          -

          Q: Can I play Pokémon UNITE MOD APK online with other players?

          -

          A: Yes, you can play Pokémon UNITE MOD APK online with other players, as long as you have an internet connection and a valid account. However, you should be aware that using a mod apk may affect your online experience, as you may encounter some errors, glitches, or compatibility issues. You may also face some consequences from the game developers, such as getting banned or suspended, if they detect that you are using a mod apk.

          -

          Q: What are the differences between Pokémon UNITE MOD APK and the official game?

          -

          A: The main difference between Pokémon UNITE MOD APK and the official game is that the mod apk has been modified by some third-party developers or hackers to offer some features or benefits that are not available in the official game. For example, the mod apk may give you unlimited coins, gems, tickets, energy, or unlocked items and Pokémon. However, these features may also come with some drawbacks, such as being unsafe, unstable, or unfair.

          -

          Q: How can I update Pokémon UNITE MOD APK to the latest version?

          -

          A: To update Pokémon UNITE MOD APK to the latest version, you need to find and download the updated mod apk file from a reliable and trustworthy website. Then, you need to uninstall the previous version of the mod apk from your device and install the new one following the same steps as before. Alternatively, you can also check if the mod apk has an auto-update feature that allows you to update it within the game itself.

          -

            Q: Where can I find more information about Pokémon UNITE MOD APK?
            

          -

          A: There are many sources of information about Pokémon UNITE MOD APK online, such as blogs, forums, videos, or social media. However, you should always be careful and critical when reading or watching these sources, as they may not be accurate, reliable, or trustworthy. You should also avoid clicking on any suspicious links or downloading any files that may harm your device or data. The best way to find more information about Pokémon UNITE MOD APK is to visit the official website or social media accounts of the game developers, The Pokémon Company and TiMi Studios, as they provide the most authentic and updated information about the game.

            
          -
          -
          \ No newline at end of file diff --git a/spaces/sklearn-docs/huber-vs-ridge-regression-for-outliers/app.py b/spaces/sklearn-docs/huber-vs-ridge-regression-for-outliers/app.py deleted file mode 100644 index 97d1a2ea8309ea7d54b69c3086468fb239ab102f..0000000000000000000000000000000000000000 --- a/spaces/sklearn-docs/huber-vs-ridge-regression-for-outliers/app.py +++ /dev/null @@ -1,82 +0,0 @@ -import numpy as np -import matplotlib.pyplot as plt -from matplotlib.colors import ListedColormap -plt.rcParams['figure.dpi'] = 100 -plt.style.use('ggplot') - -from sklearn.linear_model import HuberRegressor, Ridge - -import gradio as gr - -C1, C2, C3 = '#ff0000', '#09bd00', '#0000ff' -#===================================================== -def create_plot(outlier_ratio=0.1, epsilon=1.35): - num_samples = 100 - x = np.linspace(-15, 15, num_samples) - y = 2*x + 2 + np.random.normal(loc=0, scale=2.5, size=x.shape[0]) - - num_outliers = int(num_samples * outlier_ratio)//2 - outliers_x = np.random.normal(loc=11, scale=1, size=num_outliers) - outliers_y = np.random.normal(loc=-30, scale=4, size=num_outliers) - x = np.concatenate([x, outliers_x]) - y = np.concatenate([y, outliers_y]) - outliers_x = np.random.normal(loc=-11, scale=1, size=num_outliers) - outliers_y = np.random.normal(loc=30, scale=4, size=num_outliers) - x = np.concatenate([x, outliers_x]) - y = np.concatenate([y, outliers_y]) - X = x[..., None] - - x = np.concatenate([x, outliers_x]) - y = np.concatenate([y, outliers_y]) - X = x[..., None] - - ridge = Ridge(alpha=0) - ridge.fit(X, y) - - huber = HuberRegressor(epsilon=epsilon) - huber.fit(X, y) - - fig = plt.figure() - ax = fig.add_subplot(111) - - ax.scatter(x, y, c=C1, edgecolor='k', s=40) - - line_x = np.linspace(-15, 15, 10) - ax.plot(line_x, ridge.coef_*line_x + ridge.intercept_, c=C2, label='Ridge') - ax.plot(line_x, huber.coef_*line_x + huber.intercept_, c=C3, label='Huber') - - ax.set_xlabel('X'); ax.set_ylabel('Y') - ax.legend() - ax.set_title('Huber Regressor vs Ridge Regressor with Outliers') - - return fig - - -info = ''' -# Robustness Against Outliers: Huber vs Ridge Regression - -This example demonstrates a simple linear regression problem in the existence of outliers, and compares the effectiveness of Huber regression vs Ridge regression. - -[Ridge regression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.Ridge.html), which is essentially basic L2 linear regression with regularization (but regularization is neglected here), suffers from outliers because the outlying data points are going to heavily increase the loss, forcing the best-fit line to lean towards the outliers to decrease that loss. - -[Huber regression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.HuberRegressor.html) uses the Huber loss instead of the L2 loss. The Huber loss function behaves quadratically when the error is small and linearly when the error is large. Consequently, the loss resulting from outlying points is weighed less heavily than if we use quadratic loss all over. - -The epsilon parameter controls the cut-off point between the quadratic and linear regions of the Huber loss. Use the sliders to increase the outlier ratio and see when the Huber regressor breaks down and how the value of epsilon affects that. - -Created by [huabdul](https://huggingface.co/huabdul) based on [scikit-learn docs](https://scikit-learn.org/stable/auto_examples/linear_model/plot_huber_vs_ridge.html#sphx-glr-auto-examples-linear-model-plot-huber-vs-ridge-py). 
-''' -with gr.Blocks(analytics_enabled=False) as demo: - with gr.Row(): - with gr.Column(): - gr.Markdown(info) - s_outlier_ratio = gr.Slider(0.01, 0.5, value=0.15, step=0.01, label='Outlier Ratio') - s_epsilon = gr.Slider(1, 2, 1.35, step=0.005, label='Epsilon') - with gr.Column(): - plot = gr.Plot(label='Comparison') - - s_outlier_ratio.change(create_plot, inputs=[s_outlier_ratio, s_epsilon], outputs=[plot]) - s_epsilon.change(create_plot, inputs=[s_outlier_ratio, s_epsilon], outputs=[plot]) - demo.load(create_plot, inputs=[s_outlier_ratio, s_epsilon], outputs=[plot]) - -demo.launch() -#===================================================== \ No newline at end of file diff --git a/spaces/sklkd93/CodeFormer/CodeFormer/facelib/detection/yolov5face/__init__.py b/spaces/sklkd93/CodeFormer/CodeFormer/facelib/detection/yolov5face/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/sklkd93/CodeFormer/CodeFormer/facelib/detection/yolov5face/utils/autoanchor.py b/spaces/sklkd93/CodeFormer/CodeFormer/facelib/detection/yolov5face/utils/autoanchor.py deleted file mode 100644 index a4eba3e94888709be7d2a7c7499fbcc1808b4a88..0000000000000000000000000000000000000000 --- a/spaces/sklkd93/CodeFormer/CodeFormer/facelib/detection/yolov5face/utils/autoanchor.py +++ /dev/null @@ -1,12 +0,0 @@ -# Auto-anchor utils - - -def check_anchor_order(m): - # Check anchor order against stride order for YOLOv5 Detect() module m, and correct if necessary - a = m.anchor_grid.prod(-1).view(-1) # anchor area - da = a[-1] - a[0] # delta a - ds = m.stride[-1] - m.stride[0] # delta s - if da.sign() != ds.sign(): # same order - print("Reversing anchor order") - m.anchors[:] = m.anchors.flip(0) - m.anchor_grid[:] = m.anchor_grid.flip(0) diff --git a/spaces/sohamb23/informational-therapy-chatbot/app.py b/spaces/sohamb23/informational-therapy-chatbot/app.py deleted file mode 100644 index 9561c3f02f3819cdea6840ee91ead7294f7f44e1..0000000000000000000000000000000000000000 --- a/spaces/sohamb23/informational-therapy-chatbot/app.py +++ /dev/null @@ -1,27 +0,0 @@ -import gradio as gr -import openai -import os - -openai.api_key = os.environ['OPENAI_API_KEY'] - -retrieve_response = openai.FineTune.retrieve(id=os.environ['MODEL_ID']) -fine_tuned_model = retrieve_response.fine_tuned_model - -def chatbot(input): - prompt = " " - if input: - prompt += input - prompt.replace('?', '') - prompt += ' ->' - completion = openai.Completion.create( - model=fine_tuned_model, prompt=prompt, max_tokens = 300, temperature = 0.9, stop = ["\n", "->"] - ) - reply = completion.choices[0].text - return reply - -inputs = gr.inputs.Textbox(lines=7, label="Chat with an Informational Therapy Assistant") -outputs = gr.outputs.Textbox(label="Reply") - -demo = gr.Interface(fn=chatbot, inputs=inputs, outputs=outputs, title="Informational Therapy Chatbot", - description="This model is trained to generally provide information on what therapy is," + "\n" + "what options exist to access therapy, ways in which therapy can be helpful, and more!", - theme="compact").launch() \ No newline at end of file diff --git a/spaces/sonoisa/Irasuto_search_CLIP_zero-shot/README.md b/spaces/sonoisa/Irasuto_search_CLIP_zero-shot/README.md deleted file mode 100644 index c8430369668f84565dd946c3d3ffeb1ccd4eac9f..0000000000000000000000000000000000000000 --- a/spaces/sonoisa/Irasuto_search_CLIP_zero-shot/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Irasuto_search_CLIP_zero Shot -emoji: 🦀 
-colorFrom: blue -colorTo: gray -sdk: streamlit -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/speakjan/EleutherAI-gpt-j-6b/README.md b/spaces/speakjan/EleutherAI-gpt-j-6b/README.md deleted file mode 100644 index beaeff3e14043a7dbe4e96bdd59821520b5082d3..0000000000000000000000000000000000000000 --- a/spaces/speakjan/EleutherAI-gpt-j-6b/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: EleutherAI Gpt J 6b -emoji: 🐢 -colorFrom: gray -colorTo: purple -sdk: gradio -sdk_version: 3.32.0 -app_file: app.py -pinned: false -license: openrail ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/sriramelango/Social_Classification_Public/fairseq/examples/translation/README.md b/spaces/sriramelango/Social_Classification_Public/fairseq/examples/translation/README.md deleted file mode 100644 index 2941f5eb8482dab61dca5eca27a71abd7ee5bf5c..0000000000000000000000000000000000000000 --- a/spaces/sriramelango/Social_Classification_Public/fairseq/examples/translation/README.md +++ /dev/null @@ -1,301 +0,0 @@ -# Neural Machine Translation - -This README contains instructions for [using pretrained translation models](#example-usage-torchhub) -as well as [training new models](#training-a-new-model). - -## Pre-trained models - -Model | Description | Dataset | Download ----|---|---|--- -`conv.wmt14.en-fr` | Convolutional
          ([Gehring et al., 2017](https://arxiv.org/abs/1705.03122)) | [WMT14 English-French](http://statmt.org/wmt14/translation-task.html#Download) | model:
          [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/models/wmt14.v2.en-fr.fconv-py.tar.bz2)
          newstest2014:
          [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/data/wmt14.v2.en-fr.newstest2014.tar.bz2)
          newstest2012/2013:
          [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/data/wmt14.v2.en-fr.ntst1213.tar.bz2) -`conv.wmt14.en-de` | Convolutional
          ([Gehring et al., 2017](https://arxiv.org/abs/1705.03122)) | [WMT14 English-German](http://statmt.org/wmt14/translation-task.html#Download) | model:
          [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/models/wmt14.en-de.fconv-py.tar.bz2)
          newstest2014:
          [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/data/wmt14.en-de.newstest2014.tar.bz2) -`conv.wmt17.en-de` | Convolutional
          ([Gehring et al., 2017](https://arxiv.org/abs/1705.03122)) | [WMT17 English-German](http://statmt.org/wmt17/translation-task.html#Download) | model:
          [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/models/wmt17.v2.en-de.fconv-py.tar.bz2)
          newstest2014:
          [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/data/wmt17.v2.en-de.newstest2014.tar.bz2) -`transformer.wmt14.en-fr` | Transformer
          ([Ott et al., 2018](https://arxiv.org/abs/1806.00187)) | [WMT14 English-French](http://statmt.org/wmt14/translation-task.html#Download) | model:
          [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/models/wmt14.en-fr.joined-dict.transformer.tar.bz2)
          newstest2014:
          [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/data/wmt14.en-fr.joined-dict.newstest2014.tar.bz2) -`transformer.wmt16.en-de` | Transformer
          ([Ott et al., 2018](https://arxiv.org/abs/1806.00187)) | [WMT16 English-German](https://drive.google.com/uc?export=download&id=0B_bZck-ksdkpM25jRUN2X2UxMm8) | model:
          [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/models/wmt16.en-de.joined-dict.transformer.tar.bz2)
          newstest2014:
          [download (.tar.bz2)](https://dl.fbaipublicfiles.com/fairseq/data/wmt16.en-de.joined-dict.newstest2014.tar.bz2) -`transformer.wmt18.en-de` | Transformer
          ([Edunov et al., 2018](https://arxiv.org/abs/1808.09381))
          WMT'18 winner | [WMT'18 English-German](http://www.statmt.org/wmt18/translation-task.html) | model:
          [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/wmt18.en-de.ensemble.tar.gz)
          See NOTE in the archive -`transformer.wmt19.en-de` | Transformer
          ([Ng et al., 2019](https://arxiv.org/abs/1907.06616))
          WMT'19 winner | [WMT'19 English-German](http://www.statmt.org/wmt19/translation-task.html) | model:
          [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/wmt19.en-de.joined-dict.ensemble.tar.gz) -`transformer.wmt19.de-en` | Transformer
          ([Ng et al., 2019](https://arxiv.org/abs/1907.06616))
          WMT'19 winner | [WMT'19 German-English](http://www.statmt.org/wmt19/translation-task.html) | model:
          [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/wmt19.de-en.joined-dict.ensemble.tar.gz) -`transformer.wmt19.en-ru` | Transformer
          ([Ng et al., 2019](https://arxiv.org/abs/1907.06616))
          WMT'19 winner | [WMT'19 English-Russian](http://www.statmt.org/wmt19/translation-task.html) | model:
          [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/wmt19.en-ru.ensemble.tar.gz) -`transformer.wmt19.ru-en` | Transformer
          ([Ng et al., 2019](https://arxiv.org/abs/1907.06616))
          WMT'19 winner | [WMT'19 Russian-English](http://www.statmt.org/wmt19/translation-task.html) | model:
          [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/wmt19.ru-en.ensemble.tar.gz) - -## Example usage (torch.hub) - -We require a few additional Python dependencies for preprocessing: -```bash -pip install fastBPE sacremoses subword_nmt -``` - -Interactive translation via PyTorch Hub: -```python -import torch - -# List available models -torch.hub.list('pytorch/fairseq') # [..., 'transformer.wmt16.en-de', ... ] - -# Load a transformer trained on WMT'16 En-De -# Note: WMT'19 models use fastBPE instead of subword_nmt, see instructions below -en2de = torch.hub.load('pytorch/fairseq', 'transformer.wmt16.en-de', - tokenizer='moses', bpe='subword_nmt') -en2de.eval() # disable dropout - -# The underlying model is available under the *models* attribute -assert isinstance(en2de.models[0], fairseq.models.transformer.TransformerModel) - -# Move model to GPU for faster translation -en2de.cuda() - -# Translate a sentence -en2de.translate('Hello world!') -# 'Hallo Welt!' - -# Batched translation -en2de.translate(['Hello world!', 'The cat sat on the mat.']) -# ['Hallo Welt!', 'Die Katze saß auf der Matte.'] -``` - -Loading custom models: -```python -from fairseq.models.transformer import TransformerModel -zh2en = TransformerModel.from_pretrained( - '/path/to/checkpoints', - checkpoint_file='checkpoint_best.pt', - data_name_or_path='data-bin/wmt17_zh_en_full', - bpe='subword_nmt', - bpe_codes='data-bin/wmt17_zh_en_full/zh.code' -) -zh2en.translate('你好 世界') -# 'Hello World' -``` - -If you are using a `transformer.wmt19` models, you will need to set the `bpe` -argument to `'fastbpe'` and (optionally) load the 4-model ensemble: -```python -en2de = torch.hub.load('pytorch/fairseq', 'transformer.wmt19.en-de', - checkpoint_file='model1.pt:model2.pt:model3.pt:model4.pt', - tokenizer='moses', bpe='fastbpe') -en2de.eval() # disable dropout -``` - -## Example usage (CLI tools) - -Generation with the binarized test sets can be run in batch mode as follows, e.g. for WMT 2014 English-French on a GTX-1080ti: -```bash -mkdir -p data-bin -curl https://dl.fbaipublicfiles.com/fairseq/models/wmt14.v2.en-fr.fconv-py.tar.bz2 | tar xvjf - -C data-bin -curl https://dl.fbaipublicfiles.com/fairseq/data/wmt14.v2.en-fr.newstest2014.tar.bz2 | tar xvjf - -C data-bin -fairseq-generate data-bin/wmt14.en-fr.newstest2014 \ - --path data-bin/wmt14.en-fr.fconv-py/model.pt \ - --beam 5 --batch-size 128 --remove-bpe | tee /tmp/gen.out -# ... -# | Translated 3003 sentences (96311 tokens) in 166.0s (580.04 tokens/s) -# | Generate test with beam=5: BLEU4 = 40.83, 67.5/46.9/34.4/25.5 (BP=1.000, ratio=1.006, syslen=83262, reflen=82787) - -# Compute BLEU score -grep ^H /tmp/gen.out | cut -f3- > /tmp/gen.out.sys -grep ^T /tmp/gen.out | cut -f2- > /tmp/gen.out.ref -fairseq-score --sys /tmp/gen.out.sys --ref /tmp/gen.out.ref -# BLEU4 = 40.83, 67.5/46.9/34.4/25.5 (BP=1.000, ratio=1.006, syslen=83262, reflen=82787) -``` - -## Training a new model - -### IWSLT'14 German to English (Transformer) - -The following instructions can be used to train a Transformer model on the [IWSLT'14 German to English dataset](http://workshop2014.iwslt.org/downloads/proceeding.pdf). - -First download and preprocess the data: -```bash -# Download and prepare the data -cd examples/translation/ -bash prepare-iwslt14.sh -cd ../.. 
- -# Preprocess/binarize the data -TEXT=examples/translation/iwslt14.tokenized.de-en -fairseq-preprocess --source-lang de --target-lang en \ - --trainpref $TEXT/train --validpref $TEXT/valid --testpref $TEXT/test \ - --destdir data-bin/iwslt14.tokenized.de-en \ - --workers 20 -``` - -Next we'll train a Transformer translation model over this data: -```bash -CUDA_VISIBLE_DEVICES=0 fairseq-train \ - data-bin/iwslt14.tokenized.de-en \ - --arch transformer_iwslt_de_en --share-decoder-input-output-embed \ - --optimizer adam --adam-betas '(0.9, 0.98)' --clip-norm 0.0 \ - --lr 5e-4 --lr-scheduler inverse_sqrt --warmup-updates 4000 \ - --dropout 0.3 --weight-decay 0.0001 \ - --criterion label_smoothed_cross_entropy --label-smoothing 0.1 \ - --max-tokens 4096 \ - --eval-bleu \ - --eval-bleu-args '{"beam": 5, "max_len_a": 1.2, "max_len_b": 10}' \ - --eval-bleu-detok moses \ - --eval-bleu-remove-bpe \ - --eval-bleu-print-samples \ - --best-checkpoint-metric bleu --maximize-best-checkpoint-metric -``` - -Finally we can evaluate our trained model: -```bash -fairseq-generate data-bin/iwslt14.tokenized.de-en \ - --path checkpoints/checkpoint_best.pt \ - --batch-size 128 --beam 5 --remove-bpe -``` - -### WMT'14 English to German (Convolutional) - -The following instructions can be used to train a Convolutional translation model on the WMT English to German dataset. -See the [Scaling NMT README](../scaling_nmt/README.md) for instructions to train a Transformer translation model on this data. - -The WMT English to German dataset can be preprocessed using the `prepare-wmt14en2de.sh` script. -By default it will produce a dataset that was modeled after [Attention Is All You Need (Vaswani et al., 2017)](https://arxiv.org/abs/1706.03762), but with additional news-commentary-v12 data from WMT'17. - -To use only data available in WMT'14 or to replicate results obtained in the original [Convolutional Sequence to Sequence Learning (Gehring et al., 2017)](https://arxiv.org/abs/1705.03122) paper, please use the `--icml17` option. - -```bash -# Download and prepare the data -cd examples/translation/ -# WMT'17 data: -bash prepare-wmt14en2de.sh -# or to use WMT'14 data: -# bash prepare-wmt14en2de.sh --icml17 -cd ../.. - -# Binarize the dataset -TEXT=examples/translation/wmt17_en_de -fairseq-preprocess \ - --source-lang en --target-lang de \ - --trainpref $TEXT/train --validpref $TEXT/valid --testpref $TEXT/test \ - --destdir data-bin/wmt17_en_de --thresholdtgt 0 --thresholdsrc 0 \ - --workers 20 - -# Train the model -mkdir -p checkpoints/fconv_wmt_en_de -fairseq-train \ - data-bin/wmt17_en_de \ - --arch fconv_wmt_en_de \ - --dropout 0.2 \ - --criterion label_smoothed_cross_entropy --label-smoothing 0.1 \ - --optimizer nag --clip-norm 0.1 \ - --lr 0.5 --lr-scheduler fixed --force-anneal 50 \ - --max-tokens 4000 \ - --save-dir checkpoints/fconv_wmt_en_de - -# Evaluate -fairseq-generate data-bin/wmt17_en_de \ - --path checkpoints/fconv_wmt_en_de/checkpoint_best.pt \ - --beam 5 --remove-bpe -``` - -### WMT'14 English to French -```bash -# Download and prepare the data -cd examples/translation/ -bash prepare-wmt14en2fr.sh -cd ../.. 
- -# Binarize the dataset -TEXT=examples/translation/wmt14_en_fr -fairseq-preprocess \ - --source-lang en --target-lang fr \ - --trainpref $TEXT/train --validpref $TEXT/valid --testpref $TEXT/test \ - --destdir data-bin/wmt14_en_fr --thresholdtgt 0 --thresholdsrc 0 \ - --workers 60 - -# Train the model -mkdir -p checkpoints/fconv_wmt_en_fr -fairseq-train \ - data-bin/wmt14_en_fr \ - --arch fconv_wmt_en_fr \ - --dropout 0.1 \ - --criterion label_smoothed_cross_entropy --label-smoothing 0.1 \ - --optimizer nag --clip-norm 0.1 \ - --lr 0.5 --lr-scheduler fixed --force-anneal 50 \ - --max-tokens 3000 \ - --save-dir checkpoints/fconv_wmt_en_fr - -# Evaluate -fairseq-generate \ - data-bin/fconv_wmt_en_fr \ - --path checkpoints/fconv_wmt_en_fr/checkpoint_best.pt \ - --beam 5 --remove-bpe -``` - -## Multilingual Translation - -We also support training multilingual translation models. In this example we'll -train a multilingual `{de,fr}-en` translation model using the IWSLT'17 datasets. - -Note that we use slightly different preprocessing here than for the IWSLT'14 -En-De data above. In particular we learn a joint BPE code for all three -languages and use fairseq-interactive and sacrebleu for scoring the test set. - -```bash -# First install sacrebleu and sentencepiece -pip install sacrebleu sentencepiece - -# Then download and preprocess the data -cd examples/translation/ -bash prepare-iwslt17-multilingual.sh -cd ../.. - -# Binarize the de-en dataset -TEXT=examples/translation/iwslt17.de_fr.en.bpe16k -fairseq-preprocess --source-lang de --target-lang en \ - --trainpref $TEXT/train.bpe.de-en \ - --validpref $TEXT/valid0.bpe.de-en,$TEXT/valid1.bpe.de-en,$TEXT/valid2.bpe.de-en,$TEXT/valid3.bpe.de-en,$TEXT/valid4.bpe.de-en,$TEXT/valid5.bpe.de-en \ - --destdir data-bin/iwslt17.de_fr.en.bpe16k \ - --workers 10 - -# Binarize the fr-en dataset -# NOTE: it's important to reuse the en dictionary from the previous step -fairseq-preprocess --source-lang fr --target-lang en \ - --trainpref $TEXT/train.bpe.fr-en \ - --validpref $TEXT/valid0.bpe.fr-en,$TEXT/valid1.bpe.fr-en,$TEXT/valid2.bpe.fr-en,$TEXT/valid3.bpe.fr-en,$TEXT/valid4.bpe.fr-en,$TEXT/valid5.bpe.fr-en \ - --tgtdict data-bin/iwslt17.de_fr.en.bpe16k/dict.en.txt \ - --destdir data-bin/iwslt17.de_fr.en.bpe16k \ - --workers 10 - -# Train a multilingual transformer model -# NOTE: the command below assumes 1 GPU, but accumulates gradients from -# 8 fwd/bwd passes to simulate training on 8 GPUs -mkdir -p checkpoints/multilingual_transformer -CUDA_VISIBLE_DEVICES=0 fairseq-train data-bin/iwslt17.de_fr.en.bpe16k/ \ - --max-epoch 50 \ - --ddp-backend=legacy_ddp \ - --task multilingual_translation --lang-pairs de-en,fr-en \ - --arch multilingual_transformer_iwslt_de_en \ - --share-decoders --share-decoder-input-output-embed \ - --optimizer adam --adam-betas '(0.9, 0.98)' \ - --lr 0.0005 --lr-scheduler inverse_sqrt \ - --warmup-updates 4000 --warmup-init-lr '1e-07' \ - --label-smoothing 0.1 --criterion label_smoothed_cross_entropy \ - --dropout 0.3 --weight-decay 0.0001 \ - --save-dir checkpoints/multilingual_transformer \ - --max-tokens 4000 \ - --update-freq 8 - -# Generate and score the test set with sacrebleu -SRC=de -sacrebleu --test-set iwslt17 --language-pair ${SRC}-en --echo src \ - | python scripts/spm_encode.py --model examples/translation/iwslt17.de_fr.en.bpe16k/sentencepiece.bpe.model \ - > iwslt17.test.${SRC}-en.${SRC}.bpe -cat iwslt17.test.${SRC}-en.${SRC}.bpe \ - | fairseq-interactive data-bin/iwslt17.de_fr.en.bpe16k/ \ - --task 
multilingual_translation --lang-pairs de-en,fr-en \ - --source-lang ${SRC} --target-lang en \ - --path checkpoints/multilingual_transformer/checkpoint_best.pt \ - --buffer-size 2000 --batch-size 128 \ - --beam 5 --remove-bpe=sentencepiece \ - > iwslt17.test.${SRC}-en.en.sys -grep ^H iwslt17.test.${SRC}-en.en.sys | cut -f3 \ - | sacrebleu --test-set iwslt17 --language-pair ${SRC}-en -``` - -##### Argument format during inference - -During inference it is required to specify a single `--source-lang` and -`--target-lang`, which indicates the inference langauge direction. -`--lang-pairs`, `--encoder-langtok`, `--decoder-langtok` have to be set to -the same value as training. diff --git a/spaces/stomexserde/gpt4-ui/Examples/Dt00.cpk Pes 2014.md b/spaces/stomexserde/gpt4-ui/Examples/Dt00.cpk Pes 2014.md deleted file mode 100644 index 799df7d0547d62a4a295d9bb0c4df6dc6f16d812..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Dt00.cpk Pes 2014.md +++ /dev/null @@ -1,39 +0,0 @@ -
          -

          How to Use Dt00.cpk Pes 2014 File Manager Tool

          -

          If you are a fan of Pro Evolution Soccer 2014, you might want to customize your game with some mods and patches. One of the tools that can help you do that is the Dt00.cpk Pes 2014 File Manager Tool by sxsxsx. This tool allows you to browse, extract, and replace files in the *.cpk archives that contain the game data. In this article, we will show you how to use this tool and what features it offers.

          -

          What is Dt00.cpk Pes 2014?

          -

          Dt00.cpk Pes 2014 is one of the *.cpk files that are used by Pro Evolution Soccer 2014 to store game data. The *.cpk format is a compressed archive format that can contain multiple files and folders. The Dt00.cpk file contains the audio data for the game, such as commentary, sound effects, and music. You can find this file in the Data folder of your game installation directory.

          -

          Dt00.cpk Pes 2014


          Download File ✓✓✓ https://urlgoal.com/2uIbDd



          -

          How to Use Dt00.cpk Pes 2014 File Manager Tool?

          -

          The Dt00.cpk Pes 2014 File Manager Tool is a menu-driven application that you can use to manage your *.cpk files. You can download it from here. To use it, follow these steps:

          -
            -
          1. Run the tool as administrator.
          2. -
          3. Select File > Open CPK File and browse to your Dt00.cpk file.
          4. -
          5. You will see a list of files and folders inside the archive. You can use the search function to find specific files.
          6. -
          7. To extract a file or folder, select it and click Extract. You can choose a destination folder for the extracted files.
          8. -
          9. To replace a file or folder, select it and click Replace. You can choose a source file or folder from your computer.
          10. -
          11. To save your changes, select File > Save CPK File.
          12. -
          -

          What Features Does Dt00.cpk Pes 2014 File Manager Tool Offer?

          -

          The Dt00.cpk Pes 2014 File Manager Tool offers some features that can help you customize your game. Some of them are:

          -
            -
          • Fast loading of *.cpk files.
          • -
    • Preserving the original lookup names and mapping them to the CPK descriptors.
    
          • -
          • Supporting basic features for mod makers, such as adding, deleting, renaming, and sorting files and folders.
          • -
          • Including generic game tool features, such as viewing file properties, hex editing, previewing images and sounds, and comparing files.
          • -
          -

          Conclusion

          -

          The Dt00.cpk Pes 2014 File Manager Tool is a useful tool for Pro Evolution Soccer 2014 fans who want to modify their game data. It allows you to browse, extract, and replace files in the *.cpk archives with ease. You can download it from here and try it out yourself.

          - -

          How to Install Mods and Patches for Pro Evolution Soccer 2014?

          -

          Now that you know how to use the Dt00.cpk Pes 2014 File Manager Tool, you might want to install some mods and patches for your game. Mods and patches are files that can enhance your game experience by adding new features, fixing bugs, updating rosters, and more. You can find many mods and patches for Pro Evolution Soccer 2014 on various websites, such as Pes-Patch, ModdingWay, and PesEdit.

          -

          To install mods and patches for your game, you need to follow the instructions provided by the mod or patch creator. Usually, you need to extract the files from the downloaded archive and copy them to your game installation directory. Some mods and patches may require you to use the Dt00.cpk Pes 2014 File Manager Tool or other tools to replace or add files to your *.cpk archives. Some mods and patches may also require you to run an installer or a switcher to activate them.

          -

          -

    Before installing any mod or patch, make sure you back up your original game files and folders. This way, you can restore them if something goes wrong or if you want to uninstall the mod or patch. You can also use different profiles or versions of your game to switch between different mods and patches.
    
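    A minimal sketch of such a backup, assuming a Unix-like shell (for example Git Bash on Windows); the install path and folder layout below are assumptions, so adjust them to your own installation:

    ```bash
    # Copy the Data folder (which holds dt00.cpk and the other *.cpk archives)
    # to a dated backup directory before applying any mod or patch.
    GAME_DIR="/c/Program Files (x86)/Pro Evolution Soccer 2014"
    BACKUP_DIR="$HOME/pes2014-backup-$(date +%Y%m%d)"

    mkdir -p "$BACKUP_DIR"
    cp -r "$GAME_DIR/Data" "$BACKUP_DIR/"
    echo "Backup written to $BACKUP_DIR"
    ```
    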

          -

          How to Enjoy Pro Evolution Soccer 2014 with Mods and Patches?

          -

          After installing mods and patches for your game, you can enjoy Pro Evolution Soccer 2014 with improved graphics, gameplay, sound, and content. You can play different modes, such as exhibition, master league, become a legend, online, and more. You can also customize your game settings, such as difficulty, camera angle, speed, and controls. You can also create your own teams, players, stadiums, kits, and logos with the in-game editor or external tools.

          -

          Pro Evolution Soccer 2014 is a fun and realistic soccer simulation game that can offer you hours of entertainment. With mods and patches, you can make it even better and more diverse. You can download mods and patches from various websites and use tools like the Dt00.cpk Pes 2014 File Manager Tool to manage them. You can also share your own mods and patches with other fans and enjoy their creations.

          -

          Conclusion

          -

          In this article, we have shown you how to use the Dt00.cpk Pes 2014 File Manager Tool and how to install mods and patches for Pro Evolution Soccer 2014. We hope you have learned something useful and that you will have fun playing this game with mods and patches. If you have any questions or feedback, feel free to leave a comment below.

          e93f5a0c3f
          -
          -
          \ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/Freegate Apk Download __FULL__.md b/spaces/stomexserde/gpt4-ui/Examples/Freegate Apk Download __FULL__.md deleted file mode 100644 index e79a3063d48783866f204a0f84ff073f3caff869..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Freegate Apk Download __FULL__.md +++ /dev/null @@ -1,24 +0,0 @@ -
          -

          How to Download and Install Freegate VPN for Android

          -

          Freegate VPN is a free and powerful web filtering tool that allows you to access blocked websites with ease. It is developed by Dynamic Internet Technology, a pioneer in censorship-circumvention software. Freegate VPN works by tapping into an anti-censorship backbone, DynaWeb, which is a P2P-like proxy network system. Freegate VPN also uses a unique encryption and compression algorithm to enhance its anti-censorship capability.
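    Once the desktop client is running, applications reach DynaWeb through a local proxy that Freegate opens on the loopback interface. The snippet below is only an illustration: 127.0.0.1:8580 is the port commonly reported for Freegate's local proxy and should be treated as an assumption — check the address shown in the client's own status window before relying on it.

    ```bash
    # Send a single request through the local Freegate proxy (address assumed).
    curl --proxy http://127.0.0.1:8580 https://example.com/

    # Or export it for tools that honor the standard proxy environment variables.
    export http_proxy=http://127.0.0.1:8580
    export https_proxy=http://127.0.0.1:8580
    ```
    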

          -

          Freegate Apk Download


          Download · https://urlgoal.com/2uI6GN



          -

          If you are looking for a safe and legal way to download and install Freegate VPN for Android, you can follow these steps:

          -
            -
    1. Download the Freegate APK file from a trusted source. You can find the latest version of the Freegate APK file at this link. Alternatively, you can download the Freegate ZIP file from this link and unzip it to get the APK file.
    
          2. -
          3. Enable the installation of apps from unknown sources on your Android device. To do this, go to Settings > Security > Unknown Sources and toggle it on.
          4. -
          5. Locate the downloaded Freegate APK file on your device and tap on it to start the installation process. Follow the on-screen instructions to complete the installation.
          6. -
          7. Launch the Freegate VPN app on your device and grant it the necessary permissions to access your network and device settings.
          8. -
          9. Select a proxy server from the list of available servers and tap on Connect to start browsing the web anonymously and securely.
          10. -
          -

          Congratulations! You have successfully downloaded and installed Freegate VPN for Android. You can now enjoy unrestricted access to any website you want.
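    If you prefer to sideload the app from a computer rather than tapping through the installer on the phone, the same APK can be pushed over USB with adb. This is only a sketch: it assumes adb is installed, USB debugging is enabled on the device, and that the file is named freegate.apk (substitute the actual file name you downloaded).

    ```bash
    # Record the checksum of the APK so you can compare it against one
    # published by a source you trust.
    sha256sum freegate.apk

    # Confirm the device is visible over USB, then install the APK.
    adb devices
    adb install freegate.apk
    ```
    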

          - -

          Freegate VPN is not only a web filtering tool, but also a privacy and security tool. It encrypts your online traffic and hides your IP address from prying eyes. It also protects you from malicious websites, trackers, and hackers. You can browse the web with confidence and peace of mind.

          -

    Freegate VPN is also very easy to use and fast. On a desktop computer it does not require any installation or configuration; it is a single executable file that you can run anytime you want. It automatically connects you to the best proxy server available. It also optimizes your web browsing speed by compressing data and reducing bandwidth consumption.
    

          -

          -

          Freegate VPN is compatible with most Android devices and supports various browsers and apps. You can use it to access any website or app that you want, such as Facebook, YouTube, Twitter, Instagram, WhatsApp, Netflix, and more. You can also switch between different proxy servers and regions as you wish.

          - -

          With Freegate VPN, you can enjoy the freedom and convenience of the internet without any restrictions or risks. You can access any content that you want, no matter where you are or what device you use. You can also stay safe and anonymous online, without worrying about your personal data or online activity being exposed or compromised.

          -

    Freegate VPN is one of the best and most trusted web filtering tools in the world. It has been used by millions of people in China, Cuba, Iran, North Korea, and many other countries where internet censorship is prevalent. It has also been rated as the "best" anti-virus and internet security program by several independent review sites.
    

          -

          Freegate VPN is a must-have app for anyone who values their online freedom and privacy. It is free to download and use, and it does not require any registration or subscription. It is also very simple and user-friendly, and it does not affect your device performance or battery life. You can download it today and start enjoying the internet like never before.

          cec2833e83
          -
          -
          \ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/Harrys Burger Lincoln Ri [HOT].md b/spaces/stomexserde/gpt4-ui/Examples/Harrys Burger Lincoln Ri [HOT].md deleted file mode 100644 index 30281a44ae33d920f738e15487aa798d5e76fad5..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Harrys Burger Lincoln Ri [HOT].md +++ /dev/null @@ -1,26 +0,0 @@ -
          -

          Harrys Burger Lincoln Ri: The Best Place to Enjoy Sliders, Craft Beer and More

          - -

          If you are looking for a place to enjoy delicious burgers, craft beer and boozy milkshakes, look no further than Harrys Burger Lincoln Ri. Harrys Burger Lincoln Ri is a casual restaurant that offers the highest quality ingredients, 100% pure Hereford beef and fresh local produce. CNN recently named Harry’s Bar & Burger the #1 burger in America, while Burgered.com rated them as the Best Burger in the World.

          -

          Harrys Burger Lincoln Ri


          Download Zip ✶✶✶ https://urlgoal.com/2uI9qr



          -

          Harrys Burger Lincoln Ri is located at 200 Front St, Lincoln, RI 02865-2000. You can call them at +1 401-475-4017 or visit their website at https://www.harrysbarburger.com/lincoln/front-st. They are open from Monday to Saturday from 11:30 am to 10 pm and closed on Sunday.

          -

          What Makes Harrys Burger Lincoln Ri So Special?

          -

          Harrys Burger Lincoln Ri is not your ordinary burger joint. They have a unique philosophy that sets them apart from other restaurants. Here are some of the reasons why Harrys Burger Lincoln Ri is so special:

          -
            -
          • They use 100% pure Hereford beef for their sliders, which are mini burgers that are cooked to order and served on potato rolls. They also offer turkey, chicken and veggie sliders for those who prefer other options.
          • -
          • They use fresh local produce for their toppings and sauces, which are made from scratch daily. They have a variety of choices such as lettuce, tomato, onion, pickle, cheese, bacon, mushroom, avocado and more.
          • -
          • They have an extensive craft beer list that features local and national brews. They have 50 taps and over 100 bottles and cans to choose from. They also have a rotating selection of seasonal and limited edition beers.
          • -
          • They have alcoholic shakes that are made with premium ice cream and liquor. They have flavors such as chocolate peanut butter cup, salted caramel pretzel, oreo cookie and more.
          • -
          • They have a friendly and welcoming atmosphere that makes you feel like home. They have a cozy and rustic decor with wooden tables, chairs and booths. They also have TVs and music to keep you entertained.
          • -
          -

          What Are Some of the Popular Menu Items at Harrys Burger Lincoln Ri?

          -

          Harrys Burger Lincoln Ri has a menu that caters to all tastes and preferences. Whether you want a classic slider or something more adventurous, you will find it at Harrys Burger Lincoln Ri. Here are some of the popular menu items that you should try:

          -

          -
            -
          • The Classic Slider: This is the signature slider that features a 100% pure Hereford beef patty topped with American cheese, lettuce, tomato, onion and pickle.
          • -
          • The M.O.A.B Slider: This is the mother of all burgers that features two 100% pure Hereford beef patties topped with American cheese, bacon, onion rings and BBQ sauce.
          • -
          • The Veggie Slider: This is a vegetarian option that features a black bean patty topped with pepper jack cheese, avocado, lettuce, tomato and chipotle mayo.
          • -
          • The Fried Pickle Chips: This is a crispy appetizer that features sliced pickles coated in cornmeal batter and fried to perfection. They are served with ranch dressing for dipping.
          • -
          • The Mac & Cheese Bites: This is a cheesy appetizer that features macaroni and cheese mixed with bacon bits

            cec2833e83
            -
            -
            \ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/Jet Set Go Game Full Version [2021].md b/spaces/stomexserde/gpt4-ui/Examples/Jet Set Go Game Full Version [2021].md deleted file mode 100644 index f14e5276bf6a4511d9d91cd59f3bbb445da2fd71..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Jet Set Go Game Full Version [2021].md +++ /dev/null @@ -1,24 +0,0 @@ -
            -

            Jet Set Go Game Full Version: A Fun and Addictive Time Management Adventure

            - -

            If you love time management games and traveling around the world, then you will enjoy Jet Set Go Game Full Version. This game lets you run your own travel agency and help your customers have the best vacation ever. You will visit exotic destinations like Hawaii, Greece, Alaska, and more. You will also have to deal with various challenges such as booking flights, arranging tours, serving meals, and satisfying your customers' needs.

            -

            jet set go game full version


            Download ✵✵✵ https://urlgoal.com/2uI8ug



            - -

            Jet Set Go Game Full Version is a game that will keep you entertained for hours. You will have to manage your time and resources wisely, as well as upgrade your agency and staff. You will also unlock mini-games and bonus levels that will add more fun and variety to your gameplay. You will be able to customize your character and choose from different outfits and accessories. You will also earn trophies and achievements that will show your progress and skills.

            - -

            Jet Set Go Game Full Version is a game that you can download and play on your PC or Mac. It has colorful graphics, catchy music, and easy controls. It is suitable for players of all ages and levels of experience. It is a game that will make you feel like you are traveling the world and having a blast.

            - -

            If you want to try Jet Set Go Game Full Version, you can download it from the official website or from other online platforms. You can also play the free trial version before you buy the full version. You will not regret it!

            - -

            Jet Set Go Game Full Version is a game that will make you say "Jet set go!"

            - -

            Jet Set Go Game Full Version is a game that will challenge you and reward you. You will have to plan your itinerary, choose your destinations, and manage your budget. You will also have to deal with unexpected events, such as bad weather, lost luggage, or unhappy customers. You will have to think fast and act smart to keep your business running smoothly.

            - -

            Jet Set Go Game Full Version is a game that will make you learn and explore. You will discover new places, cultures, and cuisines. You will also meet different characters, such as celebrities, royalty, and locals. You will have to interact with them and impress them with your service and knowledge. You will also have fun playing mini-games that will test your memory, reflexes, and creativity.

            -

            - -

            Jet Set Go Game Full Version is a game that will make you happy and relaxed. You will enjoy the beautiful scenery, the lively atmosphere, and the cheerful music. You will also feel the satisfaction of making your customers' dreams come true. You will also have the freedom to choose your own style and pace. You can play the game in casual mode or expert mode, depending on your mood and preference.

            - -

            Jet Set Go Game Full Version is a game that you should not miss. It is a game that will make you travel the world without leaving your home. It is a game that will make you smile and have fun. It is a game that will make you jet set go!

            7b8c122e87
            -
            -
            \ No newline at end of file diff --git a/spaces/sub314xxl/MetaGPT/tests/metagpt/roles/test_architect.py b/spaces/sub314xxl/MetaGPT/tests/metagpt/roles/test_architect.py deleted file mode 100644 index d44e0d923287fec90c747c7c04961bfbcb5a6a0f..0000000000000000000000000000000000000000 --- a/spaces/sub314xxl/MetaGPT/tests/metagpt/roles/test_architect.py +++ /dev/null @@ -1,21 +0,0 @@ -#!/usr/bin/env python -# -*- coding: utf-8 -*- -""" -@Time : 2023/5/20 14:37 -@Author : alexanderwu -@File : test_architect.py -""" -import pytest - -from metagpt.logs import logger -from metagpt.roles import Architect -from tests.metagpt.roles.mock import MockMessages - - -@pytest.mark.asyncio -async def test_architect(): - role = Architect() - role.recv(MockMessages.req) - rsp = await role.handle(MockMessages.prd) - logger.info(rsp) - assert len(rsp.content) > 0 diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Autodesk Fusion 360 Crack 2.0.5677 _TOP_ Free Download Portable.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Autodesk Fusion 360 Crack 2.0.5677 _TOP_ Free Download Portable.md deleted file mode 100644 index 54393d6bb138c033a13d9d84bf1458ff3a5b762b..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Autodesk Fusion 360 Crack 2.0.5677 _TOP_ Free Download Portable.md +++ /dev/null @@ -1,6 +0,0 @@ -

            Autodesk Fusion 360 Crack 2.0.5677 Free Download Portable


    Download https://cinurl.com/2uEYkp
    



    - -Autodesk Fusion 360 is a program from Autodesk, an American corporation which produces software for the design and construction of buildings. 4d29de3e1b
    
            -
            -
            -

            diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Fast Five 2011 1080p BluRay X264 Dual Audio English Hindi 15.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Fast Five 2011 1080p BluRay X264 Dual Audio English Hindi 15.md deleted file mode 100644 index 6266ed5e0cebbe99c0f5e651506041a421f5d666..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Fast Five 2011 1080p BluRay X264 Dual Audio English Hindi 15.md +++ /dev/null @@ -1,15 +0,0 @@ - -

            Fast Five 2011 1080p BluRay X264 Dual Audio English Hindi 15

            -

            If you are a fan of action, adventure and crime movies, you might want to watch Fast Five (2011), the fifth installment of the Fast and Furious franchise that features some of the most thrilling and spectacular car chases and stunts ever seen on screen. The movie stars Vin Diesel, Paul Walker, Dwayne Johnson, Tyrese Gibson, Ludacris and many more as a team of outlaws who plan to pull off a heist in Rio de Janeiro while being pursued by a relentless federal agent.

            -

            Fast Five (2011) is available to watch online in high definition quality with dual audio option of English and Hindi. You can stream or download the movie from various platforms, such as Amazon Prime Video, Google Play Movies, YouTube, Apple TV, Vudu, Microsoft Store and more. You can also rent or buy the movie on these services in 1080p BluRay quality with X264 codec that ensures crisp and clear picture and sound.

            -

            Fast Five 2011 1080p BluRay X264 Dual Audio English Hindi 15


    Download https://cinurl.com/2uEZ73
    



            -

            How to watch Fast Five 2011 1080p BluRay X264 Dual Audio English Hindi 15 online

            -

            To watch Fast Five 2011 1080p BluRay X264 Dual Audio English Hindi 15 online, you will need a compatible device, such as a smart TV, laptop, tablet or smartphone, and a stable internet connection. You will also need to sign up for an account on the streaming service of your choice and pay for a subscription or a rental fee. Some services may offer free trials or discounts for new customers.

            -

            To watch Fast Five 2011 1080p BluRay X264 Dual Audio English Hindi 15 online with dual audio option, you will need to select the audio track that suits your preference. You can switch between English and Hindi audio tracks anytime during the playback. You can also enable subtitles if you want to follow along with the dialogue.

            -

            Why you should watch Fast Five 2011 1080p BluRay X264 Dual Audio English Hindi 15 online

            -

            Fast Five 2011 1080p BluRay X264 Dual Audio English Hindi 15 is one of the best movies of the Fast and Furious series and one of the most entertaining movies to watch online in HD. It is a movie that combines action, adventure, crime, comedy and drama in a fast-paced and exhilarating way. It is a movie that features an amazing cast of actors who have great chemistry and charisma on screen. It is a movie that showcases some of the most impressive and jaw-dropping car stunts and chases ever filmed.

            -

            Fast Five 2011 1080p BluRay X264 Dual Audio English Hindi 15 is a movie that you can watch online with your friends or family, or even by yourself if you need some adrenaline and excitement. It is a movie that will keep you on the edge of your seat from start to finish. It is also a movie that you can watch more than once and enjoy every time.

            -

            Conclusion

            -

            Fast Five 2011 1080p BluRay X264 Dual Audio English Hindi 15 is one of the best action movies of 2011 and one of the most enjoyable movies to watch online in HD. It is a movie that follows the adventures of a team of outlaws who plan to steal $100 million from a corrupt businessman in Rio de Janeiro while being chased by a determined federal agent. It is a movie that stars Vin Diesel, Paul Walker, Dwayne Johnson and many more as some of the most iconic characters in the Fast and Furious franchise. It is a movie that you should not miss if you are looking for some fun and thrilling entertainment.

            3cee63e6c2
            -
            -
            \ No newline at end of file diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Nikon Total Station Dtm-322 Software Download [PATCHED].md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Nikon Total Station Dtm-322 Software Download [PATCHED].md deleted file mode 100644 index 520d920addf5738439041b4df4a301a876a2214d..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Nikon Total Station Dtm-322 Software Download [PATCHED].md +++ /dev/null @@ -1,6 +0,0 @@ -

            nikon total station dtm-322 software download


            Download File ••• https://cinurl.com/2uEYd2



    - -9-11-10 Higashi-Mita, Shinagawa-Ku, Tokyo, Japan. 1. If you are a US citizen or permanent resident, you may contact Nikon/Trimble for free technical support and warranty information by emailing trimble@nikon. The Trimble TotalStation DS. 2. The model numbers for the DTM-322 were:. The Trimble TotalStation DTM-322 is an accurate, easy-to-use and fast tool that enables you to quickly create and manage data. The Trimble TotalStation DTM-322 is a portable, powerful, rugged and easy to use survey data collection and mapping system. The Nikon-Trimble DTM-322, a GPS/NAD-83 compatible with the Total Station DTM-322, provides an easy to use, rugged platform for data collection and mapping. 4fefd39f24
    
            -
            -
            -

            diff --git a/spaces/svjack/ControlNet-Face-Chinese/SPIGA/spiga/models/gnn/gat.py b/spaces/svjack/ControlNet-Face-Chinese/SPIGA/spiga/models/gnn/gat.py deleted file mode 100644 index d7816f22a4b3afd77d3f2d3e69bc65e45b026a14..0000000000000000000000000000000000000000 --- a/spaces/svjack/ControlNet-Face-Chinese/SPIGA/spiga/models/gnn/gat.py +++ /dev/null @@ -1,62 +0,0 @@ -from copy import deepcopy -import torch -from torch import nn -import torch.nn.functional as F - -from spiga.models.gnn.layers import MLP - - -class GAT(nn.Module): - def __init__(self, input_dim: int, output_dim: int, num_heads=4): - super().__init__() - - num_heads_in = num_heads - self.reshape = None - if input_dim != output_dim: - for num_heads_in in range(num_heads, 0, -1): - if input_dim % num_heads_in == 0: - break - self.reshape = MLP([input_dim, output_dim]) - - self.attention = MessagePassing(input_dim, num_heads_in, out_dim=output_dim) - - def forward(self, features): - message, prob = self.attention(features) - if self.reshape: - features = self.reshape(features) - output = features + message - return output, prob - - -class MessagePassing(nn.Module): - def __init__(self, feature_dim: int, num_heads: int, out_dim=None): - super().__init__() - self.attn = Attention(num_heads, feature_dim) - self.mlp = MLP([feature_dim*2, feature_dim*2, out_dim]) - - def forward(self, features): - message, prob = self.attn(features, features, features) - return self.mlp(torch.cat([features, message], dim=1)), prob - - -class Attention(nn.Module): - def __init__(self, num_heads: int, feature_dim: int): - super().__init__() - assert feature_dim % num_heads == 0 - self.dim = feature_dim // num_heads - self.num_heads = num_heads - self.merge = nn.Conv1d(feature_dim, feature_dim, kernel_size=1) - self.proj = nn.ModuleList([deepcopy(self.merge) for _ in range(3)]) - - def forward(self, query, key, value): - batch_dim = query.size(0) - query, key, value = [l(x).view(batch_dim, self.dim, self.num_heads, -1) - for l, x in zip(self.proj, (query, key, value))] - x, prob = self.attention(query, key, value) - return self.merge(x.contiguous().view(batch_dim, self.dim*self.num_heads, -1)), prob - - def attention(self, query, key, value): - dim = query.shape[1] - scores = torch.einsum('bdhn,bdhm->bhnm', query, key) / dim ** .5 - prob = F.softmax(scores, dim=-1) - return torch.einsum('bhnm,bdhm->bdhn', prob, value), prob diff --git a/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/mmcv/ops/three_nn.py b/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/mmcv/ops/three_nn.py deleted file mode 100644 index 2b01047a129989cd5545a0a86f23a487f4a13ce1..0000000000000000000000000000000000000000 --- a/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/mmcv/ops/three_nn.py +++ /dev/null @@ -1,51 +0,0 @@ -from typing import Tuple - -import torch -from torch.autograd import Function - -from ..utils import ext_loader - -ext_module = ext_loader.load_ext('_ext', ['three_nn_forward']) - - -class ThreeNN(Function): - """Find the top-3 nearest neighbors of the target set from the source set. - - Please refer to `Paper of PointNet++ `_ - for more details. - """ - - @staticmethod - def forward(ctx, target: torch.Tensor, - source: torch.Tensor) -> Tuple[torch.Tensor, torch.Tensor]: - """ - Args: - target (Tensor): shape (B, N, 3), points set that needs to - find the nearest neighbors. - source (Tensor): shape (B, M, 3), points set that is used - to find the nearest neighbors of points in target set. 
- - Returns: - Tensor: shape (B, N, 3), L2 distance of each point in target - set to their corresponding nearest neighbors. - """ - target = target.contiguous() - source = source.contiguous() - - B, N, _ = target.size() - m = source.size(1) - dist2 = torch.cuda.FloatTensor(B, N, 3) - idx = torch.cuda.IntTensor(B, N, 3) - - ext_module.three_nn_forward(target, source, dist2, idx, b=B, n=N, m=m) - if torch.__version__ != 'parrots': - ctx.mark_non_differentiable(idx) - - return torch.sqrt(dist2), idx - - @staticmethod - def backward(ctx, a=None, b=None): - return None, None - - -three_nn = ThreeNN.apply diff --git a/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/mmseg/models/decode_heads/psp_head.py b/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/mmseg/models/decode_heads/psp_head.py deleted file mode 100644 index b5f1e71c70c3a20f4007c263ec471a87bb214a48..0000000000000000000000000000000000000000 --- a/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/mmseg/models/decode_heads/psp_head.py +++ /dev/null @@ -1,101 +0,0 @@ -import torch -import torch.nn as nn -from annotator.uniformer.mmcv.cnn import ConvModule - -from annotator.uniformer.mmseg.ops import resize -from ..builder import HEADS -from .decode_head import BaseDecodeHead - - -class PPM(nn.ModuleList): - """Pooling Pyramid Module used in PSPNet. - - Args: - pool_scales (tuple[int]): Pooling scales used in Pooling Pyramid - Module. - in_channels (int): Input channels. - channels (int): Channels after modules, before conv_seg. - conv_cfg (dict|None): Config of conv layers. - norm_cfg (dict|None): Config of norm layers. - act_cfg (dict): Config of activation layers. - align_corners (bool): align_corners argument of F.interpolate. - """ - - def __init__(self, pool_scales, in_channels, channels, conv_cfg, norm_cfg, - act_cfg, align_corners): - super(PPM, self).__init__() - self.pool_scales = pool_scales - self.align_corners = align_corners - self.in_channels = in_channels - self.channels = channels - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - self.act_cfg = act_cfg - for pool_scale in pool_scales: - self.append( - nn.Sequential( - nn.AdaptiveAvgPool2d(pool_scale), - ConvModule( - self.in_channels, - self.channels, - 1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg))) - - def forward(self, x): - """Forward function.""" - ppm_outs = [] - for ppm in self: - ppm_out = ppm(x) - upsampled_ppm_out = resize( - ppm_out, - size=x.size()[2:], - mode='bilinear', - align_corners=self.align_corners) - ppm_outs.append(upsampled_ppm_out) - return ppm_outs - - -@HEADS.register_module() -class PSPHead(BaseDecodeHead): - """Pyramid Scene Parsing Network. - - This head is the implementation of - `PSPNet `_. - - Args: - pool_scales (tuple[int]): Pooling scales used in Pooling Pyramid - Module. Default: (1, 2, 3, 6). 
- """ - - def __init__(self, pool_scales=(1, 2, 3, 6), **kwargs): - super(PSPHead, self).__init__(**kwargs) - assert isinstance(pool_scales, (list, tuple)) - self.pool_scales = pool_scales - self.psp_modules = PPM( - self.pool_scales, - self.in_channels, - self.channels, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg, - align_corners=self.align_corners) - self.bottleneck = ConvModule( - self.in_channels + len(pool_scales) * self.channels, - self.channels, - 3, - padding=1, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - - def forward(self, inputs): - """Forward function.""" - x = self._transform_inputs(inputs) - psp_outs = [x] - psp_outs.extend(self.psp_modules(x)) - psp_outs = torch.cat(psp_outs, dim=1) - output = self.bottleneck(psp_outs) - output = self.cls_seg(output) - return output diff --git a/spaces/taskswithcode/semantic_clustering/twc_clustering.py b/spaces/taskswithcode/semantic_clustering/twc_clustering.py deleted file mode 100644 index 200567399b6901e26a958d8e15afcc7c032efe85..0000000000000000000000000000000000000000 --- a/spaces/taskswithcode/semantic_clustering/twc_clustering.py +++ /dev/null @@ -1,177 +0,0 @@ -from scipy.spatial.distance import cosine -import argparse -import json -import pdb -import torch -import torch.nn.functional as F -import numpy as np -import time -from collections import OrderedDict - - -class TWCClustering: - def __init__(self): - print("In Zscore Clustering") - - def compute_matrix(self,embeddings): - #print("Computing similarity matrix ...)") - embeddings= np.array(embeddings) - start = time.time() - vec_a = embeddings.T #vec_a shape (1024,) - vec_a = vec_a/np.linalg.norm(vec_a,axis=0) #Norm is along axis 0 - rows - vec_a = vec_a.T #vec_a shape becomes (,1024) - similarity_matrix = np.inner(vec_a,vec_a) - end = time.time() - time_val = (end-start)*1000 - #print(f"Similarity matrix computation complete. 
Time taken:{(time_val/(1000*60)):.2f} minutes") - return similarity_matrix - - def get_terms_above_threshold(self,matrix,embeddings,pivot_index,threshold): - run_index = pivot_index - picked_arr = [] - while (run_index < len(embeddings)): - if (matrix[pivot_index][run_index] >= threshold): - picked_arr.append(run_index) - run_index += 1 - return picked_arr - - def update_picked_dict_arr(self,picked_dict,arr): - for i in range(len(arr)): - picked_dict[arr[i]] = 1 - - def update_picked_dict(self,picked_dict,in_dict): - for key in in_dict: - picked_dict[key] = 1 - - def find_pivot_subgraph(self,pivot_index,arr,matrix,threshold,strict_cluster = True): - center_index = pivot_index - center_score = 0 - center_dict = {} - for i in range(len(arr)): - node_i_index = arr[i] - running_score = 0 - temp_dict = {} - for j in range(len(arr)): - node_j_index = arr[j] - cosine_dist = matrix[node_i_index][node_j_index] - if ((cosine_dist < threshold) and strict_cluster): - continue - running_score += cosine_dist - temp_dict[node_j_index] = cosine_dist - if (running_score > center_score): - center_index = node_i_index - center_dict = temp_dict - center_score = running_score - sorted_d = OrderedDict(sorted(center_dict.items(), key=lambda kv: kv[1], reverse=True)) - return {"pivot_index":center_index,"orig_index":pivot_index,"neighs":sorted_d} - - - def update_overlap_stats(self,overlap_dict,cluster_info): - arr = list(cluster_info["neighs"].keys()) - for val in arr: - if (val not in overlap_dict): - overlap_dict[val] = 1 - else: - overlap_dict[val] += 1 - - def bucket_overlap(self,overlap_dict): - bucket_dict = {} - for key in overlap_dict: - if (overlap_dict[key] not in bucket_dict): - bucket_dict[overlap_dict[key]] = 1 - else: - bucket_dict[overlap_dict[key]] += 1 - sorted_d = OrderedDict(sorted(bucket_dict.items(), key=lambda kv: kv[1], reverse=False)) - return sorted_d - - def merge_clusters(self,ref_cluster,curr_cluster): - dup_arr = ref_cluster.copy() - for j in range(len(curr_cluster)): - if (curr_cluster[j] not in dup_arr): - ref_cluster.append(curr_cluster[j]) - - - def non_overlapped_clustering(self,matrix,embeddings,threshold,mean,std,cluster_dict): - picked_dict = {} - overlap_dict = {} - candidates = [] - - for i in range(len(embeddings)): - if (i in picked_dict): - continue - zscore = mean + threshold*std - arr = self.get_terms_above_threshold(matrix,embeddings,i,zscore) - candidates.append(arr) - self.update_picked_dict_arr(picked_dict,arr) - - # Merge arrays to create non-overlapping sets - run_index_i = 0 - while (run_index_i < len(candidates)): - ref_cluster = candidates[run_index_i] - run_index_j = run_index_i + 1 - found = False - while (run_index_j < len(candidates)): - curr_cluster = candidates[run_index_j] - for k in range(len(curr_cluster)): - if (curr_cluster[k] in ref_cluster): - self.merge_clusters(ref_cluster,curr_cluster) - candidates.pop(run_index_j) - found = True - run_index_i = 0 - break - if (found): - break - else: - run_index_j += 1 - if (not found): - run_index_i += 1 - - - zscore = mean + threshold*std - for i in range(len(candidates)): - arr = candidates[i] - cluster_info = self.find_pivot_subgraph(arr[0],arr,matrix,zscore,strict_cluster = False) - cluster_dict["clusters"].append(cluster_info) - return {} - - def overlapped_clustering(self,matrix,embeddings,threshold,mean,std,cluster_dict): - picked_dict = {} - overlap_dict = {} - - zscore = mean + threshold*std - for i in range(len(embeddings)): - if (i in picked_dict): - continue - arr = 
self.get_terms_above_threshold(matrix,embeddings,i,zscore) - cluster_info = self.find_pivot_subgraph(i,arr,matrix,zscore,strict_cluster = True) - self.update_picked_dict(picked_dict,cluster_info["neighs"]) - self.update_overlap_stats(overlap_dict,cluster_info) - cluster_dict["clusters"].append(cluster_info) - sorted_d = self.bucket_overlap(overlap_dict) - return sorted_d - - - def cluster(self,output_file,texts,embeddings,threshold,clustering_type): - is_overlapped = True if clustering_type == "overlapped" else False - matrix = self.compute_matrix(embeddings) - mean = np.mean(matrix) - std = np.std(matrix) - zscores = [] - inc = 0 - value = mean - while (value < 1): - zscores.append({"threshold":inc,"cosine":round(value,2)}) - inc += 1 - value = mean + inc*std - #print("In clustering:",round(std,2),zscores) - cluster_dict = {} - cluster_dict["clusters"] = [] - if (is_overlapped): - sorted_d = self.overlapped_clustering(matrix,embeddings,threshold,mean,std,cluster_dict) - else: - sorted_d = self.non_overlapped_clustering(matrix,embeddings,threshold,mean,std,cluster_dict) - curr_threshold = f"{threshold} (cosine:{mean+threshold*std:.2f})" - cluster_dict["info"] ={"mean":mean,"std":std,"current_threshold":curr_threshold,"zscores":zscores,"overlap":list(sorted_d.items())} - return cluster_dict - - diff --git a/spaces/tcapelle/spacy_wandb/app.py b/spaces/tcapelle/spacy_wandb/app.py deleted file mode 100644 index e0936eae46491df1317f79b9470f5d7c22848b33..0000000000000000000000000000000000000000 --- a/spaces/tcapelle/spacy_wandb/app.py +++ /dev/null @@ -1,56 +0,0 @@ -""" -Example using the components provided by spacy-streamlit in an existing app. -Prerequisites: -python -m spacy download en_core_web_sm -""" -import os -import spacy_streamlit, wandb -import streamlit as st - -DEFAULT_TEXT = """Google was founded in September 1998 by Larry Page and Sergey Brin while they were Ph.D. students at Stanford University in California. Together they own about 14 percent of its shares and control 56 percent of the stockholder voting power through supervoting stock. They incorporated Google as a California privately held company on September 4, 1998, in California. 
Google was then reincorporated in Delaware on October 22, 2002.""" - - -st.title("Visualizing NER using spaCy and Weights and Biases") -st.image("wide.png") - -wandb.login(key = st.secrets.WANDB_API_KEY) -api = wandb.Api() - -ENTITY = "capecape" -PROJECT = "st30" - -ENTITY_PROJECT = ENTITY+"/"+PROJECT - -st.subheader("Setup you W&B project") -ENTITY_PROJECT = st.text_input("Input your wandb project path (entity/project)", ENTITY_PROJECT) - -artifacts_type = api.artifact_type("model", f'{ENTITY_PROJECT}') - -def list_project_models(artifacts_type): - models = [] - for collection in artifacts_type.collections(): - for artifact in collection.versions(): - models.append(artifact.name) - return models - -models_names = list_project_models(artifacts_type) -model_name = st.selectbox("Select your spaCy model (logged as wandb.Artifact)", models_names) - -# download the model from wandb -model = api.artifact(f'{ENTITY_PROJECT}/{model_name}', type='model') -model = model.download() - -text = st.text_area("Input some text to analyze", DEFAULT_TEXT, height=200) -doc = spacy_streamlit.process_text(model, text) - -ner_labels = ["CARDINAL", "DATE", "EVENT", "FAC", "GPE", "LANGUAGE", - "LAW", "LOC", "MONEY", "NORP", "ORDINAL", "ORG", "PERCENT", - "PERSON", "PRODUCT", "QUANTITY", "TIME", "WORK_OF_ART"] - -spacy_streamlit.visualize_ner( - doc, - labels=ner_labels, - show_table=False, - title="Persons, dates and locations", -) -st.text(f"Analyzed using spaCy model: {ENTITY_PROJECT}/{model[2:]}") diff --git a/spaces/terfces0erbo/CollegeProjectV2/Contoh Soal Beserta Jawaban Simple Future Tense.md b/spaces/terfces0erbo/CollegeProjectV2/Contoh Soal Beserta Jawaban Simple Future Tense.md deleted file mode 100644 index 1e36c4f3183d0c9e57533bd2305b8227b18cc657..0000000000000000000000000000000000000000 --- a/spaces/terfces0erbo/CollegeProjectV2/Contoh Soal Beserta Jawaban Simple Future Tense.md +++ /dev/null @@ -1,50 +0,0 @@ -
            -

            Contoh Soal Beserta Jawaban Simple Future Tense

            -

    The simple future tense is a sentence form used to describe an event that will happen in the future. There are several ways to form the simple future tense: with the modal will, with be going to, or with the present continuous tense. Below are example questions and answers on the simple future tense that you can study.
    

            -

            Contoh Soal Beserta Jawaban Simple Future Tense


            DOWNLOAD ->->->-> https://bytlly.com/2uGjtm



            -

    Example Questions and Answers: Simple Future Tense with Will
    

            -

    Will is a modal used to express spontaneous decisions, predictions, promises, or hopes about the future. The formula is will + verb (bare infinitive). Here are example questions and answers for the simple future tense with will:
    

            -
              -
            1. A: What are you going to do this weekend? (Apa yang akan kamu lakukan akhir pekan ini?)
              -B: I don't know. I think I will stay at home and watch Netflix. (Aku tidak tahu. Aku rasa aku akan tinggal di rumah dan menonton Netflix.)
            2. -
            3. A: The weather forecast says it's going to rain tomorrow. (Prakiraan cuaca mengatakan akan hujan besok.)
              -B: Really? Then I will bring my umbrella. (Benarkah? Kalau begitu aku akan membawa payungku.)
            4. -
            5. A: I'm sorry I forgot your birthday. (Maaf aku lupa ulang tahunmu.)
              -B: It's okay. I'm sure you will remember it next year. (Tidak apa-apa. Aku yakin kamu akan mengingatnya tahun depan.)
            6. -
            7. A: Do you think he will pass the exam? (Menurutmu dia akan lulus ujian?)
              -B: Yes, I do. He will study hard for it. (Ya, aku yakin. Dia akan belajar keras untuk itu.)
            8. -
            9. A: I'm really hungry. (Aku sangat lapar.)
              -B: Don't worry. I will make you some sandwiches. (Jangan khawatir. Aku akan membuatkanmu beberapa roti lapis.)
            10. -
            -

    Example Questions and Answers: Simple Future Tense with Be Going To
    

            -

    Be going to is used to state a plan, an intention, or a prediction based on present evidence about the future. The formula is be + going to + verb (bare infinitive). Here are example questions and answers for the simple future tense with be going to:
    

            -
              -
            1. A: What are you going to do this weekend? (Apa yang akan kamu lakukan akhir pekan ini?)
              -B: I am going to visit my grandparents in Bandung. (Aku akan mengunjungi kakek nenekku di Bandung.)
            2. -
            3. A: Look at those dark clouds. (Lihatlah awan-awan gelap itu.)
              -B: It is going to rain soon. We should go home now. (Akan segera hujan. Kita harus pulang sekarang.)
            4. -
            5. A: I have a toothache. (Aku sakit gigi.)
              -B: You are going to see a dentist tomorrow, right? (Kamu akan pergi ke dokter gigi besok, kan?)
            6. -
            7. A: She has been studying hard for the TOEFL test. (Dia telah belajar keras untuk tes TOEFL.)
              -B: She is going to get a high score for sure. (Dia pasti akan mendapatkan skor tinggi.)
            8. -
    9. A: He looks very tired and sleepy. (Dia terlihat sangat lelah dan mengantuk.) - -
    

    Example Questions and Answers: Simple Future Tense with the Present Continuous Tense
    

              -

    The present continuous tense can also be used to talk about an event that will happen in the future, especially when a plan or preparation has already been made. The formula is be + verb (present participle). Here are example questions and answers for the simple future tense expressed with the present continuous tense:
    

              -
                -
              1. A: What are you doing this weekend? (Apa yang sedang kamu lakukan akhir pekan ini?)
                -B: I am flying to Bali with my family. We have booked the tickets and the hotel. (Aku sedang terbang ke Bali bersama keluargaku. Kami sudah memesan tiket dan hotel.)
              2. -
              3. A: Are you ready for the presentation tomorrow? (Apakah kamu siap untuk presentasi besok?)
                -B: Yes, I am. I am presenting the first part and you are presenting the second part, right? (Ya, aku siap. Aku sedang menyajikan bagian pertama dan kamu sedang menyajikan bagian kedua, kan?)
              4. -
              5. A: Why are you wearing a coat and a scarf? (Mengapa kamu memakai mantel dan syal?)
                -B: Because I am leaving for Canada in an hour. It's very cold there. (Karena aku sedang berangkat ke Kanada dalam satu jam. Di sana sangat dingin.)
              6. -
              7. A: She is not answering her phone. (Dia tidak menjawab teleponnya.)
                -B: Maybe she is sleeping. She has a night shift tonight. (Mungkin dia sedang tidur. Dia memiliki shift malam hari ini.)
              8. -
              9. A: He is very excited today. (Dia sangat bersemangat hari ini.)
                -B: Of course he is. He is graduating from college tomorrow. (Tentu saja dia bersemangat. Dia sedang lulus dari perguruan tinggi besok.)
              10. -
              -

    Conclusion
    

              -

    The simple future tense is used to describe an event that will happen in the future. It can be formed in several ways: with the modal will, with be going to, or with the present continuous tense. Each form has its own use and meaning. We hope the example questions and answers on the simple future tense above are useful to you.
    

              -

              d5da3c52bf
              -
              -
              \ No newline at end of file diff --git a/spaces/terfces0erbo/CollegeProjectV2/Helix Plugin Crack LINK.md b/spaces/terfces0erbo/CollegeProjectV2/Helix Plugin Crack LINK.md deleted file mode 100644 index 35b5fe06cc133c8cd194b488e4bf287f6ad1e249..0000000000000000000000000000000000000000 --- a/spaces/terfces0erbo/CollegeProjectV2/Helix Plugin Crack LINK.md +++ /dev/null @@ -1,6 +0,0 @@ -

              Helix plugin crack


              Download File ››››› https://bytlly.com/2uGkZ0



              - -Main Features of Line 6 Helix Native (2021) Crack: · It looks like an analog device. · In terms of true sound and feel, the Helix Floor and Helix Rack ... 1fdad05405
              -
              -
              -

              diff --git a/spaces/terfces0erbo/CollegeProjectV2/Hybrid Pvp V3 Client Download.md b/spaces/terfces0erbo/CollegeProjectV2/Hybrid Pvp V3 Client Download.md deleted file mode 100644 index 0ea640415314274cbbb4c1fe590150157cec1a0d..0000000000000000000000000000000000000000 --- a/spaces/terfces0erbo/CollegeProjectV2/Hybrid Pvp V3 Client Download.md +++ /dev/null @@ -1,6 +0,0 @@ - -

              In the last part of our study we assessed the anti-HIV-1 activity of the hybrid coating when applied on a glass surface. We observed a similar time-dependent virucidal effect to that seen in the previous experiment. However, we obtained complete inactivation of HIV-1 in two out of three cases. We performed one experiment with an HIV-1 titer of 25688 TCID50/ml and obtained a final titer of 7095 TCID50/ml. For the same set of dilutions we performed two other experiments and obtained final titers of 46993466 TCID50/ml and 465086 TCID50/ml. Interestingly, the experiment where the HIV-1 titer was higher resulted in a greater reduction of virus titer. We performed one experiment with an HIV-1 titer of 29871721 TCID50/ml and obtained a final titer of 925112 TCID50/ml. For the same set of dilutions we performed two other experiments and obtained final titers of 26202660 TCID50/ml and 2206047 TCID50/ml. Although we did not perform experiments with different initial inputs of virus, we hypothesize that coating time and coating method will influence the effectiveness of the hybrid coating. For example, a longer incubation time could lead to an increasing number of active sites, which could provide better protection against infections. We have previously described other modifications of coating manufacturing to obtain more active micro-sized particles in hybrid coatings [27].
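
              As a rough check on the comparison above, the two experiments that report both an initial and a final HIV-1 titer can be expressed as fold- and log10-reductions. The sketch below is only illustrative: the numbers are copied from the paragraph, the pairing of initial and final titers follows the sentences in which they appear, and nothing else is assumed.

```python
import math

# Initial/final HIV-1 titers (TCID50/ml) quoted in the paragraph above.
# Only the two experiments reporting both values are used here.
experiments = {
    "lower input titer":  (25_688, 7_095),
    "higher input titer": (29_871_721, 925_112),
}

for name, (initial, final) in experiments.items():
    fold = initial / final
    print(f"{name}: {fold:.1f}-fold reduction ({math.log10(fold):.2f} log10)")

# lower input titer: 3.6-fold reduction (0.56 log10)
# higher input titer: 32.3-fold reduction (1.51 log10)
```

              This is consistent with the observation that the run with the higher input titer showed the larger drop, although with only two paired measurements the trend should be read cautiously.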

              -

              hybrid pvp v3 client download


              Downloadhttps://bytlly.com/2uGjCh



              -

              In summary, we have developed hybrid coatings based on polyvinyl pyrrolidone which have shown excellent anti-bacterial activity against different types of bacteria. In comparison with silver- and copper-based hybrid coatings, we observed a decreased effect against bacteria with coated glass surfaces, but we obtained a similar anti-bacterial effect, comparable to that against the positive-sense single-stranded RNA virus, with hybrid coatings composed of copper and zinc cations. We also found complete virucidal activity against an enveloped virus when applied on polymer surfaces. However, we were not able to inactivate a negative-sense single-stranded RNA virus with the applied hybrid coating. We also show that the hybrid coating, as a coating method, can influence the virucidal effect against HIV-1 on polymer surfaces. The hybrid coatings we have developed are prepared with various concentrations of cations, and their distribution on the surface is adjustable by the choice of coating method. We are currently exploring different concentrations of silver, copper and zinc cations incorporated into the hybrid coating, which would expand our capability of modification and allow us to tailor the unique properties of hybrid coatings. In addition, we are currently assessing the anti-HIV-1 activity of these hybrid coatings on various surfaces which are utilized in healthcare settings. Our long-term goal is to develop safe and effective coatings that could protect these surfaces from pathogen infections.

              899543212b
              -
              -
              \ No newline at end of file diff --git a/spaces/terfces0erbo/CollegeProjectV2/Kitab Tazkiyatun Nafs Pdf Download WORK.md b/spaces/terfces0erbo/CollegeProjectV2/Kitab Tazkiyatun Nafs Pdf Download WORK.md deleted file mode 100644 index 40495be645737383fb1fe1ba8dd9929afef9fc06..0000000000000000000000000000000000000000 --- a/spaces/terfces0erbo/CollegeProjectV2/Kitab Tazkiyatun Nafs Pdf Download WORK.md +++ /dev/null @@ -1,10 +0,0 @@ - -

              the next step: the way of discipline for the soul: tazkiyatun nafs, love, and belonging : a muslim guide to personal wholeness (book one). kitab tazkiyatun nafs: the book of purity of the soul: commentary of kitab al-tawhid sh. muhammad ibn 'abdul-wahhab (compressed).

              -

              kitab tazkiyatun nafs pdf download


              Download File »»» https://bytlly.com/2uGjcR



              -

              khatam al-nafs: kitab al-khatam al-nafs: the book of purification of the soul. kitab tazkiyatun nafs: the book of purity of the soul: commentary of kitab al-tawhid sh. muhammad ibn 'abdul-wahhab (compressed).

              -

              a-l-kitab-r-i-y-a-d-a-t-u-n-n-a-f-s: kitab al-riyadat al-nafs. a-l-kitab-r-i-y-a-d-a-t-u-n-n-a-f-s: kitab al-riyadat al-nafs. kitab al-riyadat al-nafs: the book of purity of the soul: commentary of kitab al-tawhid sh. muhammad ibn 'abdul-wahhab (compressed).

              -

              download kitab tazkiyatun nafs imam al ghazali. download ktw : lode share. this work is made available under a creative commons attribution 4.0. download kitab tazkiyatun nafs imam ghazali. . muslims in general have always tried to follow the prophetic example of the. al-ghazali, abu hamid abu hanifa, al-maqdisi, al-shirazi. tazkiyatun nafs as an effort to reduce premarital sexual behavior of adolescents. -. in the same study, the rate of sexual intercourse before marriage by females was 19% while it. kitab tazkiyatun nafs imam ghazali pdf download, pdf kitab tazkiyatun nafs imam ghazali book read online. kitab tazkiyatun nafs imam ghazali - pdf apk latest version 2.0 for. kitab tazkiyatun nafs imam ghazali. quran, hadith. jihad al-nafs spiritual wayfaring akhlaq. retrieved 8.

              -

              -

              tazkiyatun nafs and even tend to prioritize lust to pollute them with immoral acts, do what god forbids, and do detestable and ugly things to dominate their. people also downloaded these pdfs. faiz, alfaiz, hengki yandri, asroful kadafi, rila rahma mulyani, nofrita nofrita, and dosi juliawati. pendekatan tazkiyatun an-nafs untuk membantu mengurangi emosi negatif klien.

              899543212b
              -
              -
              \ No newline at end of file diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/Download MS Office for Windows 10 Crack A Simple and Safe Method to Install and Run MS Office.md b/spaces/tialenAdioni/chat-gpt-api/logs/Download MS Office for Windows 10 Crack A Simple and Safe Method to Install and Run MS Office.md deleted file mode 100644 index 1e1c7c3b5c30f297049f73e029fcaad456871d45..0000000000000000000000000000000000000000 --- a/spaces/tialenAdioni/chat-gpt-api/logs/Download MS Office for Windows 10 Crack A Simple and Safe Method to Install and Run MS Office.md +++ /dev/null @@ -1,31 +0,0 @@ - -

              How to Download MS Office for Windows 10 Crack

              -

              Microsoft Office is one of the most popular and widely used software suites in the world. It includes various applications such as Word, Excel, PowerPoint, Outlook, and more. With MS Office, you can create and edit documents, spreadsheets, presentations, emails, and other types of files. MS Office is compatible with Windows 10 and other operating systems.

              -

              However, MS Office is not a free software. You need to buy a license or a subscription to use it. But what if you don't want to spend money on it? Is there a way to download MS Office for Windows 10 crack for free? The answer is yes, but you need to be careful. There are many websites that claim to offer MS Office crack files, but some of them may contain viruses, malware, or spyware that can harm your computer or your data. Therefore, you need to be very selective and cautious when downloading MS Office crack files.

              -

              download ms office for windows 10 crack


              Download Zip ★★★★★ https://urlcod.com/2uK3fS



              -

              In this article, we will show you how to download MS Office for Windows 10 crack safely and easily. We will also provide you with some tips on how to use MS Office crack effectively and avoid any potential risks. Follow the steps below and enjoy the benefits of MS Office crack.

              - -

              Step 1: Download MS Office for Windows 10 Crack File

              -

              The first step is to download the MS Office for Windows 10 crack file from a reliable source. We recommend you to use the link below, which is tested and verified by us. This link will take you to a Google Drive folder where you can find the MS Office for Windows 10 crack file along with some other files that you will need later.

              -

              Download MS Office for Windows 10 Crack File Here

              -

              Once you click on the link, you will see a folder named "MS Office for Windows 10 Crack". Open it and download the file named "MS_Office_for_Windows_10_Crack.zip". This is a compressed file that contains the MS Office for Windows 10 crack file and some other files that you will need later.

              -

              To download the file, right-click on it and select "Download". Alternatively, you can select the file and click on the download icon at the top right corner of the screen. The file size is about 1 GB, so it may take some time depending on your internet speed.

              -

              After downloading the file, save it in a safe location on your computer. Do not open or extract it yet.

              - -

              Step 2: Disable Your Antivirus Software

              -

              The next step is to disable your antivirus software temporarily before opening or extracting the MS Office for Windows 10 crack file. This is because some antivirus programs may detect the MS Office crack file as a threat and delete it or block it from running.

              -

              To disable your antivirus software, follow the instructions for your specific program. Usually, you can find an option to turn off or pause your antivirus protection in the system tray or in the settings menu of your antivirus program.

              -

              -

              Remember to enable your antivirus software again after using MS Office crack.

              - -

              Step 3: Extract the MS Office for Windows 10 Crack File

              -

              The third step is to extract the MS Office for Windows 10 crack file that you downloaded in step 1. To do this, you need a program that can unzip compressed files, such as WinRAR or 7-Zip.

              -

              If you don't have any of these programs installed on your computer, you can download them from their official websites:

              - -

              To extract the file, right-click on it and select "Extract Here" or "Extract to MS_Office_for_Windows_10_Crack". Alternatively, you can open the file with WinRAR or 7-Zip and click on "Extract" or "Extract To". Choose a destination folder where you want to save the extracted files.

              -

              After extracting the file,

              ddb901b051
              -
              -
              \ No newline at end of file diff --git a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/APK PDF Drive How to Download and Read PDF Ebooks on Your Phone.md b/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/APK PDF Drive How to Download and Read PDF Ebooks on Your Phone.md deleted file mode 100644 index f944b55984cd5043b322843a9549d5e04771e989..0000000000000000000000000000000000000000 --- a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/APK PDF Drive How to Download and Read PDF Ebooks on Your Phone.md +++ /dev/null @@ -1,106 +0,0 @@ - -

              Download APK PDF Drive: How to Access Millions of Free eBooks on Your Android Device

              -

              If you are a book lover, you probably know how expensive and inconvenient it can be to buy and store physical books. You may also have tried some online platforms that offer eBooks, but they often require subscriptions, registrations, or have limited selections. What if there was a way to access millions of free eBooks on your Android device, without any hassle or cost? Well, there is, and it's called APK PDF Drive.

              -

              What is APK PDF Drive?

              -

              APK PDF Drive is an app that allows you to search, preview, and download millions of PDF files into your devices. It is based on the website PDF Drive, which is a free search engine for PDF files. With APK PDF Drive, you can access a huge collection of books from various genres and topics, such as fiction, non-fiction, academic, self-help, business, and more. You can also enjoy a user-friendly app with many features, such as bookmarks, favorites, history, downloads manager, and more.

              -

              download apk pdf drive


              Downloadhttps://bltlly.com/2uOrWE



              -

              A free search engine for PDF files

              -

              APK PDF Drive is not just an app, but also a search engine that crawls the internet for PDF files. It indexes and organizes them into categories and subcategories, making it easy for you to find what you are looking for. You can also use keywords to search for specific titles, authors, or topics. APK PDF Drive updates its database regularly, adding new books every day.

              -

              A collection of books from various genres and topics

              -

              With APK PDF Drive, you can access a vast library of books from different genres and topics. Whether you are interested in romance, thriller, fantasy, science fiction, history, biography, psychology, philosophy, or anything else, you can find it on APK PDF Drive. You can also browse by popularity, ratings, or recommendations to discover new books and authors.

              -

              A user-friendly app with many features

              -

              APK PDF Drive is not only a search engine, but also an app that provides you with a great reading experience. You can preview any book before downloading it, to see if it suits your taste and needs. You can also download books in PDF format and read them offline or online with the built-in reader. The app also has features such as bookmarks, favorites, history, downloads manager, and more. You can customize the app settings according to your preferences.

              -

              Why should you download APK PDF Drive?

              -

              If you are still wondering why you should download APK PDF Drive, here are some reasons why:

              -

              To enjoy unlimited reading without ads or limits

              -

              Unlike some other online platforms that offer eBooks, APK PDF Drive does not require you to pay any fees or register any accounts. You can download as many books as you want without any restrictions or interruptions. You can also read them without any annoying ads or pop-ups.

              -

              download apk pdf drive books
              -download apk pdf drive ebooks
              -download apk pdf drive free
              -download apk pdf drive app
              -download apk pdf drive for android
              -download apk pdf drive latest version
              -download apk pdf drive offline
              -download apk pdf drive pro
              -download apk pdf drive premium
              -download apk pdf drive mod
              -download apk pdf drive no ads
              -download apk pdf drive unlimited
              -download apk pdf drive online
              -download apk pdf drive 2023
              -download apk pdf drive update
              -download apk pdf drive best
              -download apk pdf drive new
              -download apk pdf drive old
              -download apk pdf drive review
              -download apk pdf drive tutorial
              -download apk pdf drive guide
              -download apk pdf drive tips
              -download apk pdf drive tricks
              -download apk pdf drive hack
              -download apk pdf drive cheat
              -download apk pdf drive cracked
              -download apk pdf drive full
              -download apk pdf drive safe
              -download apk pdf drive secure
              -download apk pdf drive fast
              -download apk pdf drive easy
              -download apk pdf drive simple
              -download apk pdf drive smart
              -download apk pdf drive awesome
              -download apk pdf drive amazing
              -download apk pdf drive cool
              -download apk pdf drive fun
              -download apk pdf drive interesting
              -download apk pdf drive useful
              -download apk pdf drive helpful
              -download apk pdf drive popular
              -download apk pdf drive famous
              -download apk pdf drive top
              -download apk pdf drive quality
              -download apk pdf drive reliable
              -download apk pdf drive trusted
              -download apk pdf drive verified
              -download apk pdf drive recommended
              -download apk pdf drive rated

              -

              To discover new books and authors

              -

              With APK PDF Drive, you can explore a wide range of books from different genres and topics. You can also find books that are not available in your local libraries or bookstores. You can discover new books and authors that you may not have heard of before.

              -

              -

              Another benefit of using APK PDF Drive is that you can save storage space and data usage on your device. You do not need to download any additional software or plugins to read the PDF files. You can also delete the files after reading them, or store them on a cloud service. You can also reduce your data usage by downloading the books when you have a Wi-Fi connection, and reading them offline later.

              -

              How to download APK PDF Drive?

              -

              If you are convinced that APK PDF Drive is the app for you, you may be wondering how to download it. Here are the steps you need to follow:

              -

              Step 1: Find a reliable source for the app

              -

              The first thing you need to do is to find a reliable source for the app. APK PDF Drive is not available on the Google Play Store, so you need to download it from a third-party website. However, not all websites are trustworthy, and some may contain malware or viruses. Therefore, you need to be careful and do some research before downloading the app. One of the websites that we recommend is [APKPure], which is a reputable platform that offers safe and verified APK files.

              -

              Step 2: Enable unknown sources on your device

              -

              The next thing you need to do is to enable unknown sources on your device. This is a security setting that allows you to install apps from sources other than the Google Play Store. To do this, go to your device settings, then security, then unknown sources. Toggle the switch to allow the installation of apps from unknown sources. You may see a warning message, but you can ignore it if you trust the source of the app.

              -

              Step 3: Install the app and launch it

              -

              The final thing you need to do is to install the app and launch it. To do this, go to the website where you downloaded the APK file, and tap on it. You may see a prompt asking you to confirm the installation, so tap on install. Wait for the installation process to finish, then tap on open. You should see the app icon on your home screen or app drawer. Tap on it to launch the app and start using it.

              -

              How to use APK PDF Drive?

              -

              Now that you have downloaded and installed APK PDF Drive, you may be wondering how to use it. Here are some tips and tricks to help you get started:

              -

              Search for books by keywords, categories, or popularity

              -

              The main feature of APK PDF Drive is its search engine, which allows you to find any book you want in seconds. You can use keywords to search for specific titles, authors, or topics. For example, if you want to find books about Harry Potter, just type "Harry Potter" in the search bar and hit enter. You can also use categories and subcategories to browse by genres and topics. For example, if you want to find books about business, just tap on the business category and see what's available. You can also use popularity, ratings, or recommendations to find books that are popular or well-reviewed by other users.

              -

              Preview and download books in PDF format

              -

              Once you find a book that interests you, you can preview it before downloading it. To do this, just tap on the book cover and see a brief summary and some sample pages. You can also see some information about the book, such as its title, author, size, format, language, and download count. If you like what you see, you can download the book in PDF format by tapping on the download button. The book will be saved in your downloads folder or in a folder named "PDF Drive" on your device.

              -

              Read books offline or online with the built-in reader

              -

              After downloading a book, you can read it offline or online with the built-in reader of APK PDF Drive. To do this, just tap on the book cover again and choose "Read". You will see a simple and elegant reader that allows you to adjust the font size, brightness, orientation, and page layout. You can also bookmark pages, jump to chapters, or search for words within the book. You can also access your downloaded books from the downloads manager of the app.

              -

              Conclusion

              -

              In conclusion, APK PDF Drive is an amazing app that allows you to access millions of free eBooks on your Android device. It is a free search engine for PDF files that offers a huge collection of books from various genres and topics. It is also a user-friendly app with many features that enhance your reading experience. You should download APK PDF Drive if you want to enjoy unlimited reading without ads or limits, discover new books and authors, save storage space and data usage, and have a great reading experience. Here are some FAQs that you may have about APK PDF Drive:

              -

              FAQs

              -

              Q: Is APK PDF Drive legal and safe?

              -

              A: APK PDF Drive is legal and safe, as long as you download it from a reliable source and use it for personal and educational purposes. However, you should be aware that some of the books on APK PDF Drive may be protected by copyright laws, and you should respect the rights of the authors and publishers.

              -

              Q: How can I update APK PDF Drive?

              -

              A: APK PDF Drive does not have an automatic update feature, so you need to check the website where you downloaded the app for any new versions. You can also follow APK PDF Drive on social media platforms, such as Facebook and Twitter, to get the latest news and updates.

              -

              Q: How can I contact APK PDF Drive?

              -

              A: If you have any questions, suggestions, or feedback about APK PDF Drive, you can contact them through their email address: pdfdrive@outlook.com. You can also visit their website: pdfdrive.com, to learn more about their mission and vision.

              -

              Q: How can I support APK PDF Drive?

              -

              A: If you like APK PDF Drive and want to support their work, you can donate to them through their website: pdfdrive.com/donate. You can also share the app with your friends and family, and leave a positive review on the website where you downloaded the app.

              -

              Q: How can I delete APK PDF Drive?

              -

              A: If you want to delete APK PDF Drive from your device, you can do so by following these steps:

              -
                -
              • Go to your device settings, then apps, then APK PDF Drive.
              • Tap on uninstall and confirm your action.
              • Delete any downloaded books or files from your device or cloud service.
              -

              197e85843d
              -
              -
              \ No newline at end of file diff --git a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Bingo Holiday Bingo Games - A Free and Easy Bingo Game with Mod Apk.md b/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Bingo Holiday Bingo Games - A Free and Easy Bingo Game with Mod Apk.md deleted file mode 100644 index d8b9e6759c29307e7863a3ad9f392dfb07cfff9e..0000000000000000000000000000000000000000 --- a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Bingo Holiday Bingo Games - A Free and Easy Bingo Game with Mod Apk.md +++ /dev/null @@ -1,85 +0,0 @@ -
              -

              Bingo Holiday: The Best Bingo Games Mod APK for Android

              -

              Do you love playing bingo games on your Android device? Do you want to experience the thrill of winning big prizes and jackpots? Do you want to enjoy a variety of bingo themes and modes? If you answered yes to any of these questions, then you should try out Bingo Holiday, one of the best bingo games for Android. And if you want to make your bingo experience even better, you should download Bingo Holiday Mod APK, a modified version of the original game that unlocks all features and gives you unlimited resources. In this article, we will tell you everything you need to know about Bingo Holiday and its mod apk. Read on to find out more.

              -

              bingo holiday bingo games mod apk


              Download ……… https://bltlly.com/2uOsBn



              -

              What is Bingo Holiday?

              -

              Bingo Holiday is a fun and relaxing bingo game that lets you play bingo with millions of players from around the world. You can choose from various bingo themes and modes, such as Christmas, Halloween, Classic, Tournament, Speed, and more. You can also chat with other players, send and receive gifts, and join clubs. Bingo Holiday is a social and interactive game that will keep you entertained for hours.

              -

              Bingo Holiday is also a rewarding game that gives you daily bonuses, free spins, lucky cards, and jackpots. You can win coins, credits, power-ups, boosters, and other prizes by playing bingo games. You can also collect puzzle pieces and complete collections to get extra rewards. Bingo Holiday is a game that will make you feel like a bingo king or queen.

              -

              What is Bingo Holiday Mod APK?

              -

              Bingo Holiday Mod APK is a modified version of the original game that unlocks all features and gives you unlimited resources. With this mod apk, you can enjoy the full potential of Bingo Holiday without spending any money or waiting for anything. You can play as many bingo games as you want, use as many power-ups and boosters as you need, and access all bingo rooms and themes. You can also get unlimited coins and credits to buy more tickets and items.

              -

              Bingo Holiday Mod APK is a safe and easy way to download and install the modded version of the game. You don't need to root your device or use any complicated tools. You just need to follow some simple steps that we will explain later in this article. Bingo Holiday Mod APK is also compatible and updated with most Android devices. You don't need to worry about any errors or bugs.

              -

              What are the benefits of Bingo Holiday Mod APK?

              -

              Bingo Holiday Mod APK has many benefits that will make your bingo experience even better. Here are some of them:

              -
                -
              • Unlimited coins and credits: With this mod apk, you will never run out of coins and credits to play more bingo games. You can buy more tickets, items, power-ups, boosters, and anything else you want.
              • Unlimited power-ups and boosters: With this mod apk, you will always have an edge over your opponents. You can use power-ups and boosters to increase your chances of winning. You can daub more numbers, use bingo hints, double your winnings, and more.
              • Unlimited access to all bingo rooms and themes: With this mod apk, you will never get bored of playing the same bingo games. You can explore all the bingo rooms and themes that Bingo Holiday has to offer, such as Christmas, Halloween, Classic, Tournament, Speed, and more. You can also unlock new rooms and themes as you progress in the game.
              -

              Bingo Holiday Mod APK is the ultimate bingo game for Android. You will have more fun, more excitement, and more rewards than ever before.

              -

              How to download and install Bingo Holiday Mod APK?

              -

              If you want to download and install Bingo Holiday Mod APK on your Android device, you just need to follow these simple steps:

              -

              bingo holiday free bingo games mod apk
              -bingo holiday unlimited coins and credits mod apk
              -bingo holiday mod apk latest version
              -bingo holiday hack mod apk download
              -bingo holiday online bingo games mod apk
              -bingo holiday mod apk android 1
              -bingo holiday mod apk unlimited everything
              -bingo holiday free slots and casino games mod apk
              -bingo holiday mod apk revdl
              -bingo holiday mod apk 2023
              -bingo holiday fun bingo games mod apk
              -bingo holiday cheats and hacks mod apk
              -bingo holiday vip membership mod apk
              -bingo holiday offline bingo games mod apk
              -bingo holiday mod apk no root
              -bingo holiday classic bingo games mod apk
              -bingo holiday free credits generator mod apk
              -bingo holiday mod apk for ios
              -bingo holiday super bingo games mod apk
              -bingo holiday mod apk rexdl
              -bingo holiday best free bingo games mod apk
              -bingo holiday free power ups and boosters mod apk
              -bingo holiday mod apk happymod
              -bingo holiday 2022 new year bash mod apk
              -bingo holiday christmas special edition mod apk
              -bingo holiday world tour free online games mod apk
              -bingo holiday daily bonus and rewards mod apk
              -bingo holiday multiplayer social games mod apk
              -bingo holiday free spins and coins mod apk
              -bingo holiday play with friends and family mod apk
              -bingo holiday live chat and voice chat mod apk
              -bingo holiday mega jackpot and big win mod apk
              -bingo holiday themed rooms and mini games mod apk
              -bingo holiday free tickets and gems mod apk
              -bingo holiday lucky daub and auto daub mod apk
              -bingo holiday custom daubers and cards mod apk
              -bingo holiday epic collections and achievements mod apk
              -bingo holiday wheel of fortune and scratch cards mod apk
              -bingo holiday tournaments and leaderboards mod apk
              -bingo holiday seasonal events and promotions mod apk

              -
                -
              1. Download the mod apk file: You can download the mod apk file from this link: [Download Bingo Holiday Mod APK]. Make sure you have enough storage space on your device before downloading the file.
              2. Allow unknown sources: You need to enable the installation of apps from unknown sources on your device. To do this, go to Settings > Security > Unknown Sources and toggle it on.
              3. Install the mod apk file: Locate the downloaded mod apk file on your device and tap on it. Follow the instructions on the screen to install the app.
              4. Launch the game and enjoy: Once the installation is complete, you can launch the game from your app drawer or home screen. You can now enjoy Bingo Holiday Mod APK with all features unlocked and unlimited resources.
              -

              Conclusion

              -

              Bingo Holiday is one of the best bingo games for Android that lets you play bingo with millions of players from around the world. You can choose from various bingo themes and modes, chat with other players, send and receive gifts, join clubs, and win coins, credits, power-ups, boosters, and jackpots. Bingo Holiday is a fun and relaxing game that will keep you entertained for hours.

              -

              Bingo Holiday Mod APK is a modified version of the original game that unlocks all features and gives you unlimited resources. With this mod apk, you can play as many bingo games as you want, use as many power-ups and boosters as you need, and access all bingo rooms and themes. You can also get unlimited coins and credits to buy more tickets and items. Bingo Holiday Mod APK is a safe and easy way to download and install the modded version of the game. You don't need to root your device or use any complicated tools. You just need to follow some simple steps that we explained in this article.

              -

              If you want to experience the thrill of winning big prizes and jackpots in a variety of bingo themes and modes, you should try out Bingo Holiday Mod APK. It is the ultimate bingo game for Android that will make you feel like a bingo king or queen. Download Bingo Holiday Mod APK now and enjoy the best bingo games ever.

              -

              FAQs

              -
                -
              • Q: Is Bingo Holiday Mod APK free?
              • A: Yes, Bingo Holiday Mod APK is free to download and play. You don't need to pay anything to enjoy all features and resources.
              • Q: Is Bingo Holiday Mod APK safe?
              • A: Yes, Bingo Holiday Mod APK is safe to use. It does not contain any viruses or malware. It also does not require any root access or permissions.
              • Q: Is Bingo Holiday Mod APK compatible with my device?
              • A: Yes, Bingo Holiday Mod APK is compatible with most Android devices. It works on Android 4.1 or higher versions.
              • Q: How can I update Bingo Holiday Mod APK?
              • A: You can update Bingo Holiday Mod APK by downloading the latest version from this link: [Download Bingo Holiday Mod APK]. You don't need to uninstall the previous version before installing the new one.
              • Q: How can I contact the developers of Bingo Holiday?
              • A: You can contact the developers of Bingo Holiday by sending an email to support@bingoholiday.xyz or by visiting their Facebook page: [Bingo Holiday].

              197e85843d
              -
              -
              \ No newline at end of file diff --git a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Download Among Us APK for Android and Enjoy Free Purchases.md b/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Download Among Us APK for Android and Enjoy Free Purchases.md deleted file mode 100644 index 570ce347b31823c4778f9c8197b9437a8305709c..0000000000000000000000000000000000000000 --- a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Download Among Us APK for Android and Enjoy Free Purchases.md +++ /dev/null @@ -1,137 +0,0 @@ - -

              Among Us Free Purchase APK: What You Need to Know

              -

              Among Us is one of the most popular online multiplayer games of 2020 and 2021, with millions of players around the world. The game is fun, social, and addictive, but it also has some in-game purchases that can enhance your experience. If you want to get these purchases for free, you might be tempted to download and install an APK file that claims to offer them. But is this a good idea? What are the risks and consequences of using an Among Us Free Purchase APK? And how can you play Among Us safely and legally without spending any money? In this article, we will answer these questions and more.

              -

              among us free purchase apk


              Download ✶✶✶ https://bltlly.com/2uOmFw



              -

              What is Among Us?

              -

              Among Us is an online multiplayer game that was released in 2018 by an American game studio called Innersloth. The game is set in a spaceship where up to 10 players have to work together as crewmates to complete tasks and prepare for departure. However, one or more players are secretly impostors who can sabotage the ship and kill the crewmates. The crewmates have to find out who the impostors are and vote them out before they kill everyone or ruin the mission.

              -

              How to play Among Us

              -

              To play Among Us, you need to download the game from the official sources. The game is available for free on Android and iOS devices, and for $4.99 on PC, Nintendo Switch, and Xbox platforms. You can also play it online on some websites, but these may not be updated or secure. Once you have the game, you can join or host a game online or over local WiFi with other players. You can customize your character's color, hat, skin, pet, and name, and choose from different game modes, maps, settings, and rules. You can also chat with other players using text or voice chat during meetings or when you are dead.

              -

              Why is Among Us so popular?

              -

              Among Us became a viral sensation in 2020 thanks to several factors. First, the game is simple to play but hard to master, making it appealing to a wide range of players. Second, the game is very social and interactive, allowing players to communicate, cooperate, deceive, accuse, and have fun with each other. Third, the game is influenced by popular culture and media, such as sci-fi movies, murder mystery games, and social deduction games. Fourth, the game was boosted by many streamers, celebrities, and influencers who played it online and attracted millions of viewers. Fifth, the game was updated regularly by its developers who added new features, maps, modes, and content.

              -

              What is Among Us Free Purchase APK?

              -

              An APK file is an Android application package file that contains all the files and code needed to install an app on an Android device. An Among Us Free Purchase APK is an unofficial modified version of the original Among Us app that claims to offer free in-game purchases for players. These purchases include skins, hats, pets, bundles, and other cosmetic items that normally cost real money or require watching ads.

              -

              How to download and install Among Us Free Purchase APK

              -

              To download and install an Among Us Free Purchase APK, you need to find a website that offers it and follow the instructions provided. Usually, this involves enabling unknown sources in your device settings, downloading the APK file from a link or a QR code, opening the file manager app on your device, locating the downloaded file, tapping on it, and granting permissions to install it. After that, you should be able to launch the app and access the free purchases. However, this process is not recommended and can be dangerous for your device and your account. Here's why.

              -

              What are the features of Among Us Free Purchase APK?

              -

              An Among Us Free Purchase APK may offer some features that are not available in the official app, such as:

              -

              among us mod apk free skins and pets
              -among us hack apk free download no verification
              -among us unlocked apk free all maps and hats
              -among us cheat apk free impostor and kill cooldown
              -among us premium apk free no ads and in-app purchases
              -among us cracked apk free latest version and updates
              -among us pro apk free unlimited coins and gems
              -among us full apk free offline and online mode
              -among us patched apk free anti-ban and anti-detection
              -among us mega mod apk free everything unlocked and unlimited
              -among us plus plus apk free extra features and customizations
              -among us beta apk free early access and testing
              -among us god mode apk free invincibility and invisibility
              -among us always impostor apk free guaranteed role and fun
              -among us mod menu apk free easy installation and activation
              -among us hack version apk free for android and ios devices
              -among us modded apk free safe and virus-free download
              -among us unlock all apk free no root and no jailbreak required
              -among us hack tool apk free generator and injector
              -among us cheat engine apk free script and codes
              -among us modded version apk free working and updated
              -among us hack online apk free no survey and no human verification
              -among us hack 2023 apk free new features and improvements
              -among us mod 2023 apk free best mods and hacks available
              -among us cheat 2023 apk free latest cheats and tricks revealed
              -among us hack without verification apk free instant download and use
              -among us mod without root apk free compatible with all devices and versions
              -among us cheat without survey apk free no annoying pop-ups or redirects
              -among us hack for pc apk free emulator and installation guide
              -among us mod for pc apk free best settings and performance tips
              -among us cheat for pc apk free keyboard and mouse support
              -among us hack for ios apk free ipa file and installation guide
              -among us mod for ios apk free best mods and hacks for iphone and ipad users
              -among us cheat for ios apk free game center integration and achievements
              -among us hack for android apk free apk file and installation guide
              -among us mod for android apk free best mods and hacks for android users
              -among us cheat for android apk free google play integration and achievements
              -among us hack no root apk free no need to root your device or void warranty
              -among us mod no root apk free no risk of bricking or damaging your device
              -among us cheat no root apk free no need to install any third-party apps or tools

              -
                -
              • Unlimited skins, hats, pets, and bundles for free
              • No ads or pop-ups
              • No verification or registration required
              • Compatible with most Android devices and versions
              • Easy to download and install
              -

              However, these features may not work as expected or may come with some drawbacks, such as:

              -
                -
              • Poor quality or outdated graphics and sounds
              • Bugs, glitches, errors, and crashes
              • Incompatible or missing features with the official app or other players
              • Limited or no updates or support from the developers
              • Difficult to uninstall or remove
              -

              What are the risks and warnings of using Among Us Free Purchase APK?

              -

              The biggest risk of using an Among Us Free Purchase APK is that it may contain malware, viruses, spyware, or other harmful software that can damage your device, steal your personal information, hack your accounts, or compromise your security. Some of the signs that an APK file is malicious are:

              -
                -
              • It asks for too many or unnecessary permissions
              • It has a suspicious name, icon, or description
              • It comes from an unknown or untrusted source or website
              • It has a very large or very small file size
              • It has a lot of negative reviews or comments
              -

              Another risk of using an Among Us Free Purchase APK is that it may violate the terms of service and the intellectual property rights of Innersloth, the original developers of Among Us. This means that you may face legal consequences, such as fines, lawsuits, or bans from playing the game. Innersloth has stated that they do not support or endorse any unofficial versions of their game and that they will take action against any infringers. They have also warned players to be careful of scams and fake websites that offer free downloads of their game.

              -

              How to play Among Us safely and legally

              -

              If you want to play Among Us without risking your device, your account, or your legal status, you should follow these tips:

              -

              Tips and tricks for playing as a crewmate

              -

              As a crewmate, your goal is to complete your tasks and find the impostors before they kill you or sabotage the ship. Here are some tips and tricks to help you succeed:

              -
                -
              • Pay attention to the map and the task bar to know where to go and what to do.
              • Stick with other crewmates and use the buddy system to avoid being killed alone.
              • Report any dead bodies or suspicious activities that you see.
              • Use logic and evidence to accuse or defend yourself during meetings.
              • Vote wisely and don't skip unless you have no clue.
              • Communicate with your teammates using text or voice chat.
              • Don't cheat by using external tools or information.
              -

              Tips and tricks for playing as an impostor

              -

              As an impostor, your goal is to kill all the crewmates or sabotage the ship without being caught. Here are some tips and tricks to help you win:

              -
                -
              • Pretend to do tasks and blend in with the crewmates.
              • Kill discreetly and vent away quickly.
              • Sabotage strategically and create chaos and confusion.
              • Lie convincingly and accuse others during meetings.
              • Avoid being seen by cameras or witnesses.
              • Use vents, doors, lights, and other features to your advantage.
              • Communicate with your fellow impostor(s) using text or voice chat.
              • Don't cheat by using external tools or information.
              -

              How to avoid scams and malware related to Among Us

              To avoid falling victim to scams and malware related to Among Us, you should follow these precautions:

              • Only download the game from the official sources: Google Play Store, Apple App Store, Steam, Nintendo eShop, Xbox Store, itch.io, or Innersloth's website.
              • Avoid clicking on any links or ads that promise free downloads, hacks, cheats, mods, skins, pets, bundles, or other items related to Among Us.
              • Do not give out your personal or financial information to anyone claiming to be from Innersloth or offering you free rewards or prizes.
              • Do not install any unknown or unverified apps or files on your device that claim to be related to Among Us.
              • Use a reliable antivirus or security software to scan your device regularly and remove any threats.
              • Report any suspicious or fraudulent websites, apps, or activities to Innersloth or the relevant authorities.
              -

              Conclusion

              -

              Among Us is a fun and exciting game that you can play with your friends or strangers online. However, you should be careful of using any unofficial or modified versions of the game, such as the Among Us Free Purchase APK, that may harm your device, your account, or your legal status. Instead, you should play the game safely and legally by downloading it from the official sources, following the rules and etiquette, and avoiding scams and malware. By doing so, you can enjoy the game without any worries and have a blast with your fellow crewmates and impostors.

              -

              FAQs

              -

              Here are some frequently asked questions about Among Us and the Among Us Free Purchase APK:

              -
                -
              1. Can I play Among Us for free on PC?
                -Yes, you can play Among Us for free on PC by using an Android emulator, such as BlueStacks or NoxPlayer, that allows you to run Android apps on your PC. However, this may affect the performance and quality of the game, and you may not be able to access some features or updates. Alternatively, you can buy the game for $4.99 on Steam, which is a reasonable price for a game that offers hours of entertainment.
              2. Can I get banned for using Among Us Free Purchase APK?
                -Yes, you can get banned for using Among Us Free Purchase APK or any other unofficial or modified version of the game. Innersloth has stated that they do not tolerate any cheating or hacking in their game and that they will ban any players who use them. Moreover, using an APK file may expose your device and your account to malware and hackers who can steal your information or take over your account.
              3. How can I get skins, hats, pets, and bundles in Among Us without paying?
                -There are some ways to get skins, hats, pets, and bundles in Among Us without paying, such as:
                • Watching ads in the game that reward you with free items
                • Playing the game during special events or holidays that offer free items
                • Using promo codes that Innersloth occasionally gives out to players
                • Joining giveaways or contests that Innersloth or other sponsors host for players
              4. Are there any other games like Among Us that I can play?
                -Yes, there are many other games like Among Us that you can play if you enjoy social deduction and deception games. Some of them are:
                • Town of Salem: A game where you have to find the hidden killers among the townsfolk
                • Mafia: A game where you have to find the hidden mafia members among the civilians
                • Werewolf: A game where you have to find the hidden werewolves among the villagers
                • The Resistance: A game where you have to find the hidden spies among the rebels
                • Secret Hitler: A game where you have to find the hidden fascists among the liberals
              5. How can I contact Innersloth or get more information about Among Us?
                You can contact Innersloth or get more information about Among Us by visiting their official website (https://innersloth.com/), their Twitter account (@InnerslothDevs), their Discord server (https://discord.gg/innersloth), their Reddit community (/r/AmongUs), or their YouTube channel (Innersloth).

              401be4b1e0
              -
              -
              \ No newline at end of file diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/Ayl Portable Mini Speaker System Manual ((HOT)).md b/spaces/tioseFevbu/cartoon-converter/scripts/Ayl Portable Mini Speaker System Manual ((HOT)).md deleted file mode 100644 index 511ddc10c9184c8a09745e14f582c728b8f0d18b..0000000000000000000000000000000000000000 --- a/spaces/tioseFevbu/cartoon-converter/scripts/Ayl Portable Mini Speaker System Manual ((HOT)).md +++ /dev/null @@ -1,46 +0,0 @@ - -

              How to Use the AYL Portable Mini Speaker System

              -

              If you are looking for a compact and powerful speaker that can deliver high-quality sound, you might want to check out the AYL Portable Mini Speaker System. This speaker is designed to be compatible with any device that has a 3.5mm audio jack, such as laptops, smartphones, tablets, MP3 players, and more. It is also easy to use and carry around, thanks to its plug-and-play feature and lightweight design.

              -

              In this article, we will show you how to use the AYL Portable Mini Speaker System and what features it offers. We will also provide some tips on how to get the best sound quality from this speaker.

              -

              Ayl Portable Mini Speaker System Manual


              DOWNLOAD →→→ https://urlcod.com/2uHypY



              -

              What's in the Box?

              -

              When you purchase the AYL Portable Mini Speaker System, you will get the following items in the retail gift box:

              -
                -
              • The speaker itself, which has a black plastic shell with a soft coating and a metal cone.
              • A soft black velvet pouch for easy traveling and storage.
              • A USB cable with a 3.5mm audio jack on one end and a mini-USB port on the other end. This cable is used for charging the speaker and connecting it to your device.
              • A 15-inch extension stereo audio cable with a male 3.5mm jack on one end and a female 3.5mm jack on the other end. This cable is used for connecting a second AYL speaker for stereo sound.
              • A user manual that explains how to use the speaker and its features.
              -

              How to Charge the Speaker?

              -

              The AYL Portable Mini Speaker System has a built-in rechargeable lithium battery that can last up to 10 hours of continuous playback time at medium volume. To charge the speaker, follow these steps:

              -
                -
              1. Plug the mini-USB end of the USB cable into the mini-USB port on the bottom of the speaker.
              2. Plug the other end of the USB cable into a USB power adapter or a USB port on your computer.
              3. The LED light on the bottom of the speaker will turn red when charging and turn blue when fully charged.
              4. Unplug the USB cable when charging is complete.
              -

              How to Connect the Speaker to Your Device?

              -

              The AYL Portable Mini Speaker System can be connected to any device that has a 3.5mm audio jack, such as laptops, smartphones, tablets, MP3 players, and more. To connect the speaker to your device, follow these steps:

              -
              -
              1. Turn on the speaker by sliding the on/off switch on the bottom of the speaker to the ON position.
              2. Plug the 3.5mm audio jack on the bottom of the speaker into the headphone jack of your device.
              3. The LED light on the bottom of the speaker will blink blue when connected.
              4. Adjust the volume level on your device and on the speaker by turning the volume dial on the bottom of the speaker.
              5. Enjoy your music or audio from your device through the speaker.
              -

              How to Extend or Collapse the Speaker?

              -

              The AYL Portable Mini Speaker System has a unique design that allows you to extend or collapse it by twisting it. When extended, it creates a vacuum-bass system that enhances the sound quality and bass performance. When collapsed, it reduces its size and makes it easier to carry around. To extend or collapse the speaker, follow these steps:

              -
              -
              1. Hold the top and bottom pieces of the speaker firmly with your hands.
              2. Twist them in opposite directions until they lock into place.
              3. To extend the speaker, twist clockwise. To collapse the speaker, twist counterclockwise.
              -

              How to Connect Two Speakers for Stereo Sound?

              -

              The AYL Portable Mini Speaker System can be connected to another AYL speaker for stereo sound. This will create a wider soundstage and more immersive listening experience. To connect two speakers for stereo sound, follow these steps:

              -
              -
              1. Make sure both speakers are fully charged and turned on.
              2. Connect one speaker to your device using the 3.5mm audio jack as described above.
              3. Connect another speaker to this one using the 15-inch extension stereo audio cable to create stereo sound.

                7196e7f11a
                -
                -
                \ No newline at end of file diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/Hd Video Songs 1080p Blu The Train.md b/spaces/tioseFevbu/cartoon-converter/scripts/Hd Video Songs 1080p Blu The Train.md deleted file mode 100644 index c23950b8be884108bd8e572cb96fdf43d51117d3..0000000000000000000000000000000000000000 --- a/spaces/tioseFevbu/cartoon-converter/scripts/Hd Video Songs 1080p Blu The Train.md +++ /dev/null @@ -1,27 +0,0 @@ -
                -

                Hd Video Songs 1080p Blu The Train: A Guide for Music Lovers

                -

                If you are a fan of high-definition video songs, you might be interested in Hd Video Songs 1080p Blu The Train, a keyword that refers to a collection of songs from various Tamil movies that are available in Blu-ray quality. Here are some details about this keyword and how you can enjoy these songs.

                -

                Hd Video Songs 1080p Blu The Train


                Downloadhttps://urlcod.com/2uHwGy



                -

                What is Hd Video Songs 1080p Blu The Train?

                -

                Hd Video Songs 1080p Blu The Train is a keyword used to search for video songs from Tamil movies that have been released in Blu-ray format. Blu-ray is a digital optical disc format that can store high-definition video and audio data. The resolution of Blu-ray video is 1080p, which means it has 1920 x 1080 pixels, making it sharper and clearer than standard-definition video. The term "The Train" refers to a 2007 Hindi thriller movie that features some popular songs such as "Woh Ajnabee" and "Beete Lamhe". However, the keyword covers not only songs from this movie but also songs from other Tamil movies that have been dubbed or remade in Hindi.

                -

                How to find Hd Video Songs 1080p Blu The Train?

                -

                One of the easiest ways to find Hd Video Songs 1080p Blu The Train is to use a search engine such as Bing or Google and type in the keyword. You will get several results that link to websites or platforms that offer these songs for streaming or downloading. Some of the examples are:

                -
                -
                • Tamil HD Songs - BluRay 1080p / 720p - YouTube: This is a playlist of 429 videos that features Tamil songs from various movies in HD quality. Some of the songs are from The Train, such as "Woh Ajnabee" and "Beete Lamhe". You can watch these videos for free on YouTube, but you might encounter some ads or restrictions depending on your location and device.
                • Suno Na Sangemarmar Arijit singh 1080p HD blu-ray - YouTube: This is a single video that shows the song "Suno Na Sangemarmar" from the movie Youngistaan, which is a remake of the Tamil movie Leader. The song is sung by Arijit Singh, a popular Bollywood singer. You can watch this video for free on YouTube, but you might encounter some ads or restrictions depending on your location and device.
                • Hd Video Songs 1080p Blu The Train - SoundCloud: This is an audio file that plays a mix of songs from various Tamil movies that have been dubbed or remade in Hindi. Some of the songs are from The Train, such as "Woh Ajnabee" and "Beete Lamhe". You can listen to this file for free on SoundCloud, but you might encounter some ads or limitations depending on your account and device.
                -

                Why should you listen to Hd Video Songs 1080p Blu The Train?

                -

                There are many reasons why you might enjoy listening to Hd Video Songs 1080p Blu The Train, such as:

                -
                -
                • You like Tamil music and want to experience it in high-definition quality.
                • You are curious about the Hindi versions of Tamil songs and want to compare them with the original ones.
                • You are a fan of Bollywood singers such as Arijit Singh and want to hear their voices in different languages.
                • You want to discover new songs and movies from different cultures and genres.
                • You want to relax and have fun with some catchy tunes and visuals.
                -

                Conclusion

                -

                Hd Video Songs 1080p Blu The Train is a keyword that can help you find and enjoy some amazing video songs from Tamil movies that have been released in Blu-ray format. You can use a search engine such as Bing or Google to find these songs on platforms like YouTube or SoundCloud and stream or download them on your device.

                -

                7b8c122e87
                -
                -
                \ No newline at end of file diff --git a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/requests/api.py b/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/requests/api.py deleted file mode 100644 index 2f71aaed1afc2f43ae5a58d951896b91e0327abc..0000000000000000000000000000000000000000 --- a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/requests/api.py +++ /dev/null @@ -1,157 +0,0 @@ -""" -requests.api -~~~~~~~~~~~~ - -This module implements the Requests API. - -:copyright: (c) 2012 by Kenneth Reitz. -:license: Apache2, see LICENSE for more details. -""" - -from . import sessions - - -def request(method, url, **kwargs): - """Constructs and sends a :class:`Request `. - - :param method: method for the new :class:`Request` object: ``GET``, ``OPTIONS``, ``HEAD``, ``POST``, ``PUT``, ``PATCH``, or ``DELETE``. - :param url: URL for the new :class:`Request` object. - :param params: (optional) Dictionary, list of tuples or bytes to send - in the query string for the :class:`Request`. - :param data: (optional) Dictionary, list of tuples, bytes, or file-like - object to send in the body of the :class:`Request`. - :param json: (optional) A JSON serializable Python object to send in the body of the :class:`Request`. - :param headers: (optional) Dictionary of HTTP Headers to send with the :class:`Request`. - :param cookies: (optional) Dict or CookieJar object to send with the :class:`Request`. - :param files: (optional) Dictionary of ``'name': file-like-objects`` (or ``{'name': file-tuple}``) for multipart encoding upload. - ``file-tuple`` can be a 2-tuple ``('filename', fileobj)``, 3-tuple ``('filename', fileobj, 'content_type')`` - or a 4-tuple ``('filename', fileobj, 'content_type', custom_headers)``, where ``'content-type'`` is a string - defining the content type of the given file and ``custom_headers`` a dict-like object containing additional headers - to add for the file. - :param auth: (optional) Auth tuple to enable Basic/Digest/Custom HTTP Auth. - :param timeout: (optional) How many seconds to wait for the server to send data - before giving up, as a float, or a :ref:`(connect timeout, read - timeout) ` tuple. - :type timeout: float or tuple - :param allow_redirects: (optional) Boolean. Enable/disable GET/OPTIONS/POST/PUT/PATCH/DELETE/HEAD redirection. Defaults to ``True``. - :type allow_redirects: bool - :param proxies: (optional) Dictionary mapping protocol to the URL of the proxy. - :param verify: (optional) Either a boolean, in which case it controls whether we verify - the server's TLS certificate, or a string, in which case it must be a path - to a CA bundle to use. Defaults to ``True``. - :param stream: (optional) if ``False``, the response content will be immediately downloaded. - :param cert: (optional) if String, path to ssl client cert file (.pem). If Tuple, ('cert', 'key') pair. - :return: :class:`Response ` object - :rtype: requests.Response - - Usage:: - - >>> import requests - >>> req = requests.request('GET', 'https://httpbin.org/get') - >>> req - - """ - - # By using the 'with' statement we are sure the session is closed, thus we - # avoid leaving sockets open which can trigger a ResourceWarning in some - # cases, and look like a memory leak in others. - with sessions.Session() as session: - return session.request(method=method, url=url, **kwargs) - - -def get(url, params=None, **kwargs): - r"""Sends a GET request. 
- - :param url: URL for the new :class:`Request` object. - :param params: (optional) Dictionary, list of tuples or bytes to send - in the query string for the :class:`Request`. - :param \*\*kwargs: Optional arguments that ``request`` takes. - :return: :class:`Response ` object - :rtype: requests.Response - """ - - return request("get", url, params=params, **kwargs) - - -def options(url, **kwargs): - r"""Sends an OPTIONS request. - - :param url: URL for the new :class:`Request` object. - :param \*\*kwargs: Optional arguments that ``request`` takes. - :return: :class:`Response ` object - :rtype: requests.Response - """ - - return request("options", url, **kwargs) - - -def head(url, **kwargs): - r"""Sends a HEAD request. - - :param url: URL for the new :class:`Request` object. - :param \*\*kwargs: Optional arguments that ``request`` takes. If - `allow_redirects` is not provided, it will be set to `False` (as - opposed to the default :meth:`request` behavior). - :return: :class:`Response ` object - :rtype: requests.Response - """ - - kwargs.setdefault("allow_redirects", False) - return request("head", url, **kwargs) - - -def post(url, data=None, json=None, **kwargs): - r"""Sends a POST request. - - :param url: URL for the new :class:`Request` object. - :param data: (optional) Dictionary, list of tuples, bytes, or file-like - object to send in the body of the :class:`Request`. - :param json: (optional) json data to send in the body of the :class:`Request`. - :param \*\*kwargs: Optional arguments that ``request`` takes. - :return: :class:`Response ` object - :rtype: requests.Response - """ - - return request("post", url, data=data, json=json, **kwargs) - - -def put(url, data=None, **kwargs): - r"""Sends a PUT request. - - :param url: URL for the new :class:`Request` object. - :param data: (optional) Dictionary, list of tuples, bytes, or file-like - object to send in the body of the :class:`Request`. - :param json: (optional) json data to send in the body of the :class:`Request`. - :param \*\*kwargs: Optional arguments that ``request`` takes. - :return: :class:`Response ` object - :rtype: requests.Response - """ - - return request("put", url, data=data, **kwargs) - - -def patch(url, data=None, **kwargs): - r"""Sends a PATCH request. - - :param url: URL for the new :class:`Request` object. - :param data: (optional) Dictionary, list of tuples, bytes, or file-like - object to send in the body of the :class:`Request`. - :param json: (optional) json data to send in the body of the :class:`Request`. - :param \*\*kwargs: Optional arguments that ``request`` takes. - :return: :class:`Response ` object - :rtype: requests.Response - """ - - return request("patch", url, data=data, **kwargs) - - -def delete(url, **kwargs): - r"""Sends a DELETE request. - - :param url: URL for the new :class:`Request` object. - :param \*\*kwargs: Optional arguments that ``request`` takes. 
- :return: :class:`Response ` object - :rtype: requests.Response - """ - - return request("delete", url, **kwargs) diff --git a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/rich/control.py b/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/rich/control.py deleted file mode 100644 index 88fcb9295164f4e18827ef61fff6723e94ef7381..0000000000000000000000000000000000000000 --- a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/rich/control.py +++ /dev/null @@ -1,225 +0,0 @@ -import sys -import time -from typing import TYPE_CHECKING, Callable, Dict, Iterable, List, Union - -if sys.version_info >= (3, 8): - from typing import Final -else: - from pip._vendor.typing_extensions import Final # pragma: no cover - -from .segment import ControlCode, ControlType, Segment - -if TYPE_CHECKING: - from .console import Console, ConsoleOptions, RenderResult - -STRIP_CONTROL_CODES: Final = [ - 7, # Bell - 8, # Backspace - 11, # Vertical tab - 12, # Form feed - 13, # Carriage return -] -_CONTROL_STRIP_TRANSLATE: Final = { - _codepoint: None for _codepoint in STRIP_CONTROL_CODES -} - -CONTROL_ESCAPE: Final = { - 7: "\\a", - 8: "\\b", - 11: "\\v", - 12: "\\f", - 13: "\\r", -} - -CONTROL_CODES_FORMAT: Dict[int, Callable[..., str]] = { - ControlType.BELL: lambda: "\x07", - ControlType.CARRIAGE_RETURN: lambda: "\r", - ControlType.HOME: lambda: "\x1b[H", - ControlType.CLEAR: lambda: "\x1b[2J", - ControlType.ENABLE_ALT_SCREEN: lambda: "\x1b[?1049h", - ControlType.DISABLE_ALT_SCREEN: lambda: "\x1b[?1049l", - ControlType.SHOW_CURSOR: lambda: "\x1b[?25h", - ControlType.HIDE_CURSOR: lambda: "\x1b[?25l", - ControlType.CURSOR_UP: lambda param: f"\x1b[{param}A", - ControlType.CURSOR_DOWN: lambda param: f"\x1b[{param}B", - ControlType.CURSOR_FORWARD: lambda param: f"\x1b[{param}C", - ControlType.CURSOR_BACKWARD: lambda param: f"\x1b[{param}D", - ControlType.CURSOR_MOVE_TO_COLUMN: lambda param: f"\x1b[{param+1}G", - ControlType.ERASE_IN_LINE: lambda param: f"\x1b[{param}K", - ControlType.CURSOR_MOVE_TO: lambda x, y: f"\x1b[{y+1};{x+1}H", - ControlType.SET_WINDOW_TITLE: lambda title: f"\x1b]0;{title}\x07", -} - - -class Control: - """A renderable that inserts a control code (non printable but may move cursor). - - Args: - *codes (str): Positional arguments are either a :class:`~rich.segment.ControlType` enum or a - tuple of ControlType and an integer parameter - """ - - __slots__ = ["segment"] - - def __init__(self, *codes: Union[ControlType, ControlCode]) -> None: - control_codes: List[ControlCode] = [ - (code,) if isinstance(code, ControlType) else code for code in codes - ] - _format_map = CONTROL_CODES_FORMAT - rendered_codes = "".join( - _format_map[code](*parameters) for code, *parameters in control_codes - ) - self.segment = Segment(rendered_codes, None, control_codes) - - @classmethod - def bell(cls) -> "Control": - """Ring the 'bell'.""" - return cls(ControlType.BELL) - - @classmethod - def home(cls) -> "Control": - """Move cursor to 'home' position.""" - return cls(ControlType.HOME) - - @classmethod - def move(cls, x: int = 0, y: int = 0) -> "Control": - """Move cursor relative to current position. - - Args: - x (int): X offset. - y (int): Y offset. - - Returns: - ~Control: Control object. 
- - """ - - def get_codes() -> Iterable[ControlCode]: - control = ControlType - if x: - yield ( - control.CURSOR_FORWARD if x > 0 else control.CURSOR_BACKWARD, - abs(x), - ) - if y: - yield ( - control.CURSOR_DOWN if y > 0 else control.CURSOR_UP, - abs(y), - ) - - control = cls(*get_codes()) - return control - - @classmethod - def move_to_column(cls, x: int, y: int = 0) -> "Control": - """Move to the given column, optionally add offset to row. - - Returns: - x (int): absolute x (column) - y (int): optional y offset (row) - - Returns: - ~Control: Control object. - """ - - return ( - cls( - (ControlType.CURSOR_MOVE_TO_COLUMN, x), - ( - ControlType.CURSOR_DOWN if y > 0 else ControlType.CURSOR_UP, - abs(y), - ), - ) - if y - else cls((ControlType.CURSOR_MOVE_TO_COLUMN, x)) - ) - - @classmethod - def move_to(cls, x: int, y: int) -> "Control": - """Move cursor to absolute position. - - Args: - x (int): x offset (column) - y (int): y offset (row) - - Returns: - ~Control: Control object. - """ - return cls((ControlType.CURSOR_MOVE_TO, x, y)) - - @classmethod - def clear(cls) -> "Control": - """Clear the screen.""" - return cls(ControlType.CLEAR) - - @classmethod - def show_cursor(cls, show: bool) -> "Control": - """Show or hide the cursor.""" - return cls(ControlType.SHOW_CURSOR if show else ControlType.HIDE_CURSOR) - - @classmethod - def alt_screen(cls, enable: bool) -> "Control": - """Enable or disable alt screen.""" - if enable: - return cls(ControlType.ENABLE_ALT_SCREEN, ControlType.HOME) - else: - return cls(ControlType.DISABLE_ALT_SCREEN) - - @classmethod - def title(cls, title: str) -> "Control": - """Set the terminal window title - - Args: - title (str): The new terminal window title - """ - return cls((ControlType.SET_WINDOW_TITLE, title)) - - def __str__(self) -> str: - return self.segment.text - - def __rich_console__( - self, console: "Console", options: "ConsoleOptions" - ) -> "RenderResult": - if self.segment.text: - yield self.segment - - -def strip_control_codes( - text: str, _translate_table: Dict[int, None] = _CONTROL_STRIP_TRANSLATE -) -> str: - """Remove control codes from text. - - Args: - text (str): A string possibly contain control codes. - - Returns: - str: String with control codes removed. - """ - return text.translate(_translate_table) - - -def escape_control_codes( - text: str, - _translate_table: Dict[int, str] = CONTROL_ESCAPE, -) -> str: - """Replace control codes with their "escaped" equivalent in the given text. - (e.g. "\b" becomes "\\b") - - Args: - text (str): A string possibly containing control codes. - - Returns: - str: String with control codes replaced with their escaped version. - """ - return text.translate(_translate_table) - - -if __name__ == "__main__": # pragma: no cover - from pip._vendor.rich.console import Console - - console = Console() - console.print("Look at the title of your terminal window ^") - # console.print(Control((ControlType.SET_WINDOW_TITLE, "Hello, world!"))) - for i in range(10): - console.set_window_title("🚀 Loading" + "." 
* i) - time.sleep(0.5) diff --git a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/cascade_rcnn/cascade_mask_rcnn_x101_64x4d_fpn_1x_coco.py b/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/cascade_rcnn/cascade_mask_rcnn_x101_64x4d_fpn_1x_coco.py deleted file mode 100644 index 33629ee6cc2b903407372d68c6d7ab599fe6598e..0000000000000000000000000000000000000000 --- a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/cascade_rcnn/cascade_mask_rcnn_x101_64x4d_fpn_1x_coco.py +++ /dev/null @@ -1,13 +0,0 @@ -_base_ = './cascade_mask_rcnn_r50_fpn_1x_coco.py' -model = dict( - pretrained='open-mmlab://resnext101_64x4d', - backbone=dict( - type='ResNeXt', - depth=101, - groups=64, - base_width=4, - num_stages=4, - out_indices=(0, 1, 2, 3), - frozen_stages=1, - norm_cfg=dict(type='BN', requires_grad=True), - style='pytorch')) diff --git a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/fpg/faster_rcnn_r50_fpn_crop640_50e_coco.py b/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/fpg/faster_rcnn_r50_fpn_crop640_50e_coco.py deleted file mode 100644 index 95f4e91f203bad8367942fc24b838da9fbf62947..0000000000000000000000000000000000000000 --- a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/fpg/faster_rcnn_r50_fpn_crop640_50e_coco.py +++ /dev/null @@ -1,68 +0,0 @@ -_base_ = [ - '../_base_/models/faster_rcnn_r50_fpn.py', - '../_base_/datasets/coco_detection.py', - '../_base_/schedules/schedule_1x.py', '../_base_/default_runtime.py' -] -norm_cfg = dict(type='BN', requires_grad=True) -model = dict( - backbone=dict(norm_cfg=norm_cfg, norm_eval=False), - neck=dict(norm_cfg=norm_cfg), - roi_head=dict(bbox_head=dict(norm_cfg=norm_cfg))) -dataset_type = 'CocoDataset' -data_root = 'data/coco/' -img_norm_cfg = dict( - mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True) -train_pipeline = [ - dict(type='LoadImageFromFile'), - dict(type='LoadAnnotations', with_bbox=True, with_mask=True), - dict( - type='Resize', - img_scale=(640, 640), - ratio_range=(0.8, 1.2), - keep_ratio=True), - dict(type='RandomCrop', crop_size=(640, 640)), - dict(type='RandomFlip', flip_ratio=0.5), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size=(640, 640)), - dict(type='DefaultFormatBundle'), - dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels']), -] -test_pipeline = [ - dict(type='LoadImageFromFile'), - dict( - type='MultiScaleFlipAug', - img_scale=(640, 640), - flip=False, - transforms=[ - dict(type='Resize', keep_ratio=True), - dict(type='RandomFlip'), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=64), - dict(type='ImageToTensor', keys=['img']), - dict(type='Collect', keys=['img']), - ]) -] -data = dict( - samples_per_gpu=8, - workers_per_gpu=4, - train=dict(pipeline=train_pipeline), - val=dict(pipeline=test_pipeline), - test=dict(pipeline=test_pipeline)) -# learning policy -optimizer = dict( - type='SGD', - lr=0.08, - momentum=0.9, - weight_decay=0.0001, - paramwise_cfg=dict(norm_decay_mult=0, bypass_duplicate=True)) -optimizer_config = dict(grad_clip=None) -# learning policy -lr_config = dict( - policy='step', - warmup='linear', - warmup_iters=1000, - warmup_ratio=0.1, - step=[30, 40]) -# runtime settings -runner = dict(max_epochs=50) -evaluation = dict(interval=2) diff --git a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/mask_rcnn/mask_rcnn_r101_fpn_2x_coco.py b/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/mask_rcnn/mask_rcnn_r101_fpn_2x_coco.py deleted file mode 100644 index 
c8cb2d87eedae2777ac8727dff5f398e1c477ab1..0000000000000000000000000000000000000000 --- a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/mask_rcnn/mask_rcnn_r101_fpn_2x_coco.py +++ /dev/null @@ -1,2 +0,0 @@ -_base_ = './mask_rcnn_r50_fpn_2x_coco.py' -model = dict(pretrained='torchvision://resnet101', backbone=dict(depth=101)) diff --git a/spaces/trysem/coloria/README.md b/spaces/trysem/coloria/README.md deleted file mode 100644 index f5f5cdc51385c1bb90c812a6cd07912e578a23fd..0000000000000000000000000000000000000000 --- a/spaces/trysem/coloria/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: Colorizer Models -emoji: 🌈🎨 -colorFrom: red -colorTo: orange -sdk: gradio -sdk_version: 3.5 -app_file: app.py -pinned: false -license: bsd-2-clause -duplicated_from: trysem/Colorizer_Models ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/user238921933/stable-diffusion-webui/modules/upscaler.py b/spaces/user238921933/stable-diffusion-webui/modules/upscaler.py deleted file mode 100644 index e2eaa7308af0091b6e8f407e889b2e446679e149..0000000000000000000000000000000000000000 --- a/spaces/user238921933/stable-diffusion-webui/modules/upscaler.py +++ /dev/null @@ -1,145 +0,0 @@ -import os -from abc import abstractmethod - -import PIL -import numpy as np -import torch -from PIL import Image - -import modules.shared -from modules import modelloader, shared - -LANCZOS = (Image.Resampling.LANCZOS if hasattr(Image, 'Resampling') else Image.LANCZOS) -NEAREST = (Image.Resampling.NEAREST if hasattr(Image, 'Resampling') else Image.NEAREST) - - -class Upscaler: - name = None - model_path = None - model_name = None - model_url = None - enable = True - filter = None - model = None - user_path = None - scalers: [] - tile = True - - def __init__(self, create_dirs=False): - self.mod_pad_h = None - self.tile_size = modules.shared.opts.ESRGAN_tile - self.tile_pad = modules.shared.opts.ESRGAN_tile_overlap - self.device = modules.shared.device - self.img = None - self.output = None - self.scale = 1 - self.half = not modules.shared.cmd_opts.no_half - self.pre_pad = 0 - self.mod_scale = None - - if self.model_path is None and self.name: - self.model_path = os.path.join(shared.models_path, self.name) - if self.model_path and create_dirs: - os.makedirs(self.model_path, exist_ok=True) - - try: - import cv2 - self.can_tile = True - except: - pass - - @abstractmethod - def do_upscale(self, img: PIL.Image, selected_model: str): - return img - - def upscale(self, img: PIL.Image, scale, selected_model: str = None): - self.scale = scale - dest_w = int(img.width * scale) - dest_h = int(img.height * scale) - - for i in range(3): - shape = (img.width, img.height) - - img = self.do_upscale(img, selected_model) - - if shape == (img.width, img.height): - break - - if img.width >= dest_w and img.height >= dest_h: - break - - if img.width != dest_w or img.height != dest_h: - img = img.resize((int(dest_w), int(dest_h)), resample=LANCZOS) - - return img - - @abstractmethod - def load_model(self, path: str): - pass - - def find_models(self, ext_filter=None) -> list: - return modelloader.load_models(model_path=self.model_path, model_url=self.model_url, command_path=self.user_path) - - def update_status(self, prompt): - print(f"\nextras: {prompt}", file=shared.progress_print_out) - - -class UpscalerData: - name = None - data_path = None - scale: int = 4 - scaler: Upscaler = None - model: None - - def __init__(self, name: str, path: str, upscaler: Upscaler = None, 
scale: int = 4, model=None): - self.name = name - self.data_path = path - self.local_data_path = path - self.scaler = upscaler - self.scale = scale - self.model = model - - -class UpscalerNone(Upscaler): - name = "None" - scalers = [] - - def load_model(self, path): - pass - - def do_upscale(self, img, selected_model=None): - return img - - def __init__(self, dirname=None): - super().__init__(False) - self.scalers = [UpscalerData("None", None, self)] - - -class UpscalerLanczos(Upscaler): - scalers = [] - - def do_upscale(self, img, selected_model=None): - return img.resize((int(img.width * self.scale), int(img.height * self.scale)), resample=LANCZOS) - - def load_model(self, _): - pass - - def __init__(self, dirname=None): - super().__init__(False) - self.name = "Lanczos" - self.scalers = [UpscalerData("Lanczos", None, self)] - - -class UpscalerNearest(Upscaler): - scalers = [] - - def do_upscale(self, img, selected_model=None): - return img.resize((int(img.width * self.scale), int(img.height * self.scale)), resample=NEAREST) - - def load_model(self, _): - pass - - def __init__(self, dirname=None): - super().__init__(False) - self.name = "Nearest" - self.scalers = [UpscalerData("Nearest", None, self)] diff --git a/spaces/vinthony/SadTalker/README.md b/spaces/vinthony/SadTalker/README.md deleted file mode 100644 index feebb93a9a459bc0a6625c4976ff492879016889..0000000000000000000000000000000000000000 --- a/spaces/vinthony/SadTalker/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: SadTalker -emoji: 😭 -colorFrom: purple -colorTo: green -sdk: gradio -sdk_version: 3.23.0 -app_file: app.py -pinned: false -license: mit ---- - - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/vumichien/canvas_controlnet/annotator/uniformer/mmseg/utils/__init__.py b/spaces/vumichien/canvas_controlnet/annotator/uniformer/mmseg/utils/__init__.py deleted file mode 100644 index ac489e2dbbc0e6fa87f5088b4edcc20f8cadc1a6..0000000000000000000000000000000000000000 --- a/spaces/vumichien/canvas_controlnet/annotator/uniformer/mmseg/utils/__init__.py +++ /dev/null @@ -1,4 +0,0 @@ -from .collect_env import collect_env -from .logger import get_root_logger - -__all__ = ['get_root_logger', 'collect_env'] diff --git a/spaces/wallezen/so-vits-svc/modules/attentions.py b/spaces/wallezen/so-vits-svc/modules/attentions.py deleted file mode 100644 index f9c11ca4a3acb86bf1abc04d9dcfa82a4ed4061f..0000000000000000000000000000000000000000 --- a/spaces/wallezen/so-vits-svc/modules/attentions.py +++ /dev/null @@ -1,349 +0,0 @@ -import copy -import math -import numpy as np -import torch -from torch import nn -from torch.nn import functional as F - -import modules.commons as commons -import modules.modules as modules -from modules.modules import LayerNorm - - -class FFT(nn.Module): - def __init__(self, hidden_channels, filter_channels, n_heads, n_layers=1, kernel_size=1, p_dropout=0., - proximal_bias=False, proximal_init=True, **kwargs): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - - self.drop = nn.Dropout(p_dropout) - self.self_attn_layers = nn.ModuleList() - self.norm_layers_0 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - for i in 
range(self.n_layers): - self.self_attn_layers.append( - MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, proximal_bias=proximal_bias, - proximal_init=proximal_init)) - self.norm_layers_0.append(LayerNorm(hidden_channels)) - self.ffn_layers.append( - FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout, causal=True)) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask): - """ - x: decoder input - h: encoder output - """ - self_attn_mask = commons.subsequent_mask(x_mask.size(2)).to(device=x.device, dtype=x.dtype) - x = x * x_mask - for i in range(self.n_layers): - y = self.self_attn_layers[i](x, x, self_attn_mask) - y = self.drop(y) - x = self.norm_layers_0[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - x = x * x_mask - return x - - -class Encoder(nn.Module): - def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., window_size=4, **kwargs): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.window_size = window_size - - self.drop = nn.Dropout(p_dropout) - self.attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, window_size=window_size)) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout)) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask): - attn_mask = x_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - y = self.attn_layers[i](x, x, attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class Decoder(nn.Module): - def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., proximal_bias=False, proximal_init=True, **kwargs): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - - self.drop = nn.Dropout(p_dropout) - self.self_attn_layers = nn.ModuleList() - self.norm_layers_0 = nn.ModuleList() - self.encdec_attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.self_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, proximal_bias=proximal_bias, proximal_init=proximal_init)) - self.norm_layers_0.append(LayerNorm(hidden_channels)) - self.encdec_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout)) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout, 
causal=True)) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask, h, h_mask): - """ - x: decoder input - h: encoder output - """ - self_attn_mask = commons.subsequent_mask(x_mask.size(2)).to(device=x.device, dtype=x.dtype) - encdec_attn_mask = h_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - y = self.self_attn_layers[i](x, x, self_attn_mask) - y = self.drop(y) - x = self.norm_layers_0[i](x + y) - - y = self.encdec_attn_layers[i](x, h, encdec_attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class MultiHeadAttention(nn.Module): - def __init__(self, channels, out_channels, n_heads, p_dropout=0., window_size=None, heads_share=True, block_length=None, proximal_bias=False, proximal_init=False): - super().__init__() - assert channels % n_heads == 0 - - self.channels = channels - self.out_channels = out_channels - self.n_heads = n_heads - self.p_dropout = p_dropout - self.window_size = window_size - self.heads_share = heads_share - self.block_length = block_length - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - self.attn = None - - self.k_channels = channels // n_heads - self.conv_q = nn.Conv1d(channels, channels, 1) - self.conv_k = nn.Conv1d(channels, channels, 1) - self.conv_v = nn.Conv1d(channels, channels, 1) - self.conv_o = nn.Conv1d(channels, out_channels, 1) - self.drop = nn.Dropout(p_dropout) - - if window_size is not None: - n_heads_rel = 1 if heads_share else n_heads - rel_stddev = self.k_channels**-0.5 - self.emb_rel_k = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev) - self.emb_rel_v = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev) - - nn.init.xavier_uniform_(self.conv_q.weight) - nn.init.xavier_uniform_(self.conv_k.weight) - nn.init.xavier_uniform_(self.conv_v.weight) - if proximal_init: - with torch.no_grad(): - self.conv_k.weight.copy_(self.conv_q.weight) - self.conv_k.bias.copy_(self.conv_q.bias) - - def forward(self, x, c, attn_mask=None): - q = self.conv_q(x) - k = self.conv_k(c) - v = self.conv_v(c) - - x, self.attn = self.attention(q, k, v, mask=attn_mask) - - x = self.conv_o(x) - return x - - def attention(self, query, key, value, mask=None): - # reshape [b, d, t] -> [b, n_h, t, d_k] - b, d, t_s, t_t = (*key.size(), query.size(2)) - query = query.view(b, self.n_heads, self.k_channels, t_t).transpose(2, 3) - key = key.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - value = value.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - - scores = torch.matmul(query / math.sqrt(self.k_channels), key.transpose(-2, -1)) - if self.window_size is not None: - assert t_s == t_t, "Relative attention is only available for self-attention." - key_relative_embeddings = self._get_relative_embeddings(self.emb_rel_k, t_s) - rel_logits = self._matmul_with_relative_keys(query /math.sqrt(self.k_channels), key_relative_embeddings) - scores_local = self._relative_position_to_absolute_position(rel_logits) - scores = scores + scores_local - if self.proximal_bias: - assert t_s == t_t, "Proximal bias is only available for self-attention." 
- scores = scores + self._attention_bias_proximal(t_s).to(device=scores.device, dtype=scores.dtype) - if mask is not None: - scores = scores.masked_fill(mask == 0, -1e4) - if self.block_length is not None: - assert t_s == t_t, "Local attention is only available for self-attention." - block_mask = torch.ones_like(scores).triu(-self.block_length).tril(self.block_length) - scores = scores.masked_fill(block_mask == 0, -1e4) - p_attn = F.softmax(scores, dim=-1) # [b, n_h, t_t, t_s] - p_attn = self.drop(p_attn) - output = torch.matmul(p_attn, value) - if self.window_size is not None: - relative_weights = self._absolute_position_to_relative_position(p_attn) - value_relative_embeddings = self._get_relative_embeddings(self.emb_rel_v, t_s) - output = output + self._matmul_with_relative_values(relative_weights, value_relative_embeddings) - output = output.transpose(2, 3).contiguous().view(b, d, t_t) # [b, n_h, t_t, d_k] -> [b, d, t_t] - return output, p_attn - - def _matmul_with_relative_values(self, x, y): - """ - x: [b, h, l, m] - y: [h or 1, m, d] - ret: [b, h, l, d] - """ - ret = torch.matmul(x, y.unsqueeze(0)) - return ret - - def _matmul_with_relative_keys(self, x, y): - """ - x: [b, h, l, d] - y: [h or 1, m, d] - ret: [b, h, l, m] - """ - ret = torch.matmul(x, y.unsqueeze(0).transpose(-2, -1)) - return ret - - def _get_relative_embeddings(self, relative_embeddings, length): - max_relative_position = 2 * self.window_size + 1 - # Pad first before slice to avoid using cond ops. - pad_length = max(length - (self.window_size + 1), 0) - slice_start_position = max((self.window_size + 1) - length, 0) - slice_end_position = slice_start_position + 2 * length - 1 - if pad_length > 0: - padded_relative_embeddings = F.pad( - relative_embeddings, - commons.convert_pad_shape([[0, 0], [pad_length, pad_length], [0, 0]])) - else: - padded_relative_embeddings = relative_embeddings - used_relative_embeddings = padded_relative_embeddings[:,slice_start_position:slice_end_position] - return used_relative_embeddings - - def _relative_position_to_absolute_position(self, x): - """ - x: [b, h, l, 2*l-1] - ret: [b, h, l, l] - """ - batch, heads, length, _ = x.size() - # Concat columns of pad to shift from relative to absolute indexing. - x = F.pad(x, commons.convert_pad_shape([[0,0],[0,0],[0,0],[0,1]])) - - # Concat extra elements so to add up to shape (len+1, 2*len-1). - x_flat = x.view([batch, heads, length * 2 * length]) - x_flat = F.pad(x_flat, commons.convert_pad_shape([[0,0],[0,0],[0,length-1]])) - - # Reshape and slice out the padded elements. - x_final = x_flat.view([batch, heads, length+1, 2*length-1])[:, :, :length, length-1:] - return x_final - - def _absolute_position_to_relative_position(self, x): - """ - x: [b, h, l, l] - ret: [b, h, l, 2*l-1] - """ - batch, heads, length, _ = x.size() - # padd along column - x = F.pad(x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, length-1]])) - x_flat = x.view([batch, heads, length**2 + length*(length -1)]) - # add 0's in the beginning that will skew the elements after reshape - x_flat = F.pad(x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [length, 0]])) - x_final = x_flat.view([batch, heads, length, 2*length])[:,:,:,1:] - return x_final - - def _attention_bias_proximal(self, length): - """Bias for self-attention to encourage attention to close positions. - Args: - length: an integer scalar. 
- Returns: - a Tensor with shape [1, 1, length, length] - """ - r = torch.arange(length, dtype=torch.float32) - diff = torch.unsqueeze(r, 0) - torch.unsqueeze(r, 1) - return torch.unsqueeze(torch.unsqueeze(-torch.log1p(torch.abs(diff)), 0), 0) - - -class FFN(nn.Module): - def __init__(self, in_channels, out_channels, filter_channels, kernel_size, p_dropout=0., activation=None, causal=False): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.activation = activation - self.causal = causal - - if causal: - self.padding = self._causal_padding - else: - self.padding = self._same_padding - - self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size) - self.conv_2 = nn.Conv1d(filter_channels, out_channels, kernel_size) - self.drop = nn.Dropout(p_dropout) - - def forward(self, x, x_mask): - x = self.conv_1(self.padding(x * x_mask)) - if self.activation == "gelu": - x = x * torch.sigmoid(1.702 * x) - else: - x = torch.relu(x) - x = self.drop(x) - x = self.conv_2(self.padding(x * x_mask)) - return x * x_mask - - def _causal_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = self.kernel_size - 1 - pad_r = 0 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x - - def _same_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = (self.kernel_size - 1) // 2 - pad_r = self.kernel_size // 2 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x diff --git a/spaces/wffcyrus/MetaGPT-v1/docs/README_CN.md b/spaces/wffcyrus/MetaGPT-v1/docs/README_CN.md deleted file mode 100644 index 2180eb51874ceaa3215331ade42fe8139f130f28..0000000000000000000000000000000000000000 --- a/spaces/wffcyrus/MetaGPT-v1/docs/README_CN.md +++ /dev/null @@ -1,201 +0,0 @@ -# MetaGPT: 多智能体框架 - -

                -MetaGPT logo: 使 GPT 以软件公司的形式工作,协作处理更复杂的任务 -

                - -

                -使 GPTs 组成软件公司,协作处理更复杂的任务 -

                - -

                -CN doc -EN doc -JA doc -Discord Follow -License: MIT -roadmap -roadmap -Twitter Follow -

                - -1. MetaGPT输入**一句话的老板需求**,输出**用户故事 / 竞品分析 / 需求 / 数据结构 / APIs / 文件等** -2. MetaGPT内部包括**产品经理 / 架构师 / 项目经理 / 工程师**,它提供了一个**软件公司**的全过程与精心调配的SOP - 1. `Code = SOP(Team)` 是核心哲学。我们将SOP具象化,并且用于LLM构成的团队 - -![一个完全由大语言模型角色构成的软件公司](resources/software_company_cd.jpeg) - -

                软件公司多角色示意图(正在逐步实现)

                - -## 示例(均由 GPT-4 生成) - -例如,键入`python startup.py "写个类似今日头条的推荐系统"`并回车,你会获得一系列输出,其一是数据结构与API设计 - -![今日头条 Recsys 数据 & API 设计](resources/workspace/content_rec_sys/resources/data_api_design.png) - -这需要大约**0.2美元**(GPT-4 API的费用)来生成一个带有分析和设计的示例,大约2.0美元用于一个完整的项目 - -## 安装 - -### 传统安装 - -```bash -# 第 1 步:确保您的系统上安装了 NPM。并使用npm安装mermaid-js -npm --version -sudo npm install -g @mermaid-js/mermaid-cli - -# 第 2 步:确保您的系统上安装了 Python 3.9+。您可以使用以下命令进行检查: -python --version - -# 第 3 步:克隆仓库到您的本地机器,并进行安装。 -git clone https://github.com/geekan/metagpt -cd metagpt -python setup.py install -``` - -### Docker安装 - -```bash -# 步骤1: 下载metagpt官方镜像并准备好config.yaml -docker pull metagpt/metagpt:v0.3 -mkdir -p /opt/metagpt/{config,workspace} -docker run --rm metagpt/metagpt:v0.3 cat /app/metagpt/config/config.yaml > /opt/metagpt/config/config.yaml -vim /opt/metagpt/config/config.yaml # 修改config - -# 步骤2: 使用容器运行metagpt演示 -docker run --rm \ - --privileged \ - -v /opt/metagpt/config:/app/metagpt/config \ - -v /opt/metagpt/workspace:/app/metagpt/workspace \ - metagpt/metagpt:v0.3 \ - python startup.py "Write a cli snake game" - -# 您也可以启动一个容器并在其中执行命令 -docker run --name metagpt -d \ - --privileged \ - -v /opt/metagpt/config:/app/metagpt/config \ - -v /opt/metagpt/workspace:/app/metagpt/workspace \ - metagpt/metagpt:v0.3 - -docker exec -it metagpt /bin/bash -$ python startup.py "Write a cli snake game" -``` - -`docker run ...`做了以下事情: - -- 以特权模式运行,有权限运行浏览器 -- 将主机目录 `/opt/metagpt/config` 映射到容器目录`/app/metagpt/config` -- 将主机目录 `/opt/metagpt/workspace` 映射到容器目录 `/app/metagpt/workspace` -- 执行演示命令 `python startup.py "Write a cli snake game"` - -### 自己构建镜像 - -```bash -# 您也可以自己构建metagpt镜像 -git clone https://github.com/geekan/MetaGPT.git -cd MetaGPT && docker build -t metagpt:v0.3 . -``` - -## 配置 - -- 在 `config/key.yaml / config/config.yaml / env` 中配置您的 `OPENAI_API_KEY` -- 优先级顺序:`config/key.yaml > config/config.yaml > env` - -```bash -# 复制配置文件并进行必要的修改 -cp config/config.yaml config/key.yaml -``` - -| 变量名 | config/key.yaml | env | -|--------------------------------------------|-------------------------------------------|--------------------------------| -| OPENAI_API_KEY # 用您自己的密钥替换 | OPENAI_API_KEY: "sk-..." | export OPENAI_API_KEY="sk-..." 
| -| OPENAI_API_BASE # 可选 | OPENAI_API_BASE: "https:///v1" | export OPENAI_API_BASE="https:///v1" | - -## 示例:启动一个创业公司 - -```shell -python startup.py "写一个命令行贪吃蛇" -# 开启code review模式会会花费更多的money, 但是会提升代码质量和成功率 -python startup.py "写一个命令行贪吃蛇" --code_review True -``` - -运行脚本后,您可以在 `workspace/` 目录中找到您的新项目。 -### 平台或工具的倾向性 -可以在阐述需求时说明想要使用的平台或工具。 -例如: - -```shell -python startup.py "写一个基于pygame的命令行贪吃蛇" -``` - -### 使用 - -``` -名称 - startup.py - 我们是一家AI软件创业公司。通过投资我们,您将赋能一个充满无限可能的未来。 - -概要 - startup.py IDEA - -描述 - 我们是一家AI软件创业公司。通过投资我们,您将赋能一个充满无限可能的未来。 - -位置参数 - IDEA - 类型: str - 您的创新想法,例如"写一个命令行贪吃蛇。" - -标志 - --investment=INVESTMENT - 类型: float - 默认值: 3.0 - 作为投资者,您有机会向这家AI公司投入一定的美元金额。 - --n_round=N_ROUND - 类型: int - 默认值: 5 - -备注 - 您也可以用`标志`的语法,来处理`位置参数` -``` - -### 代码实现 - -```python -from metagpt.software_company import SoftwareCompany -from metagpt.roles import ProjectManager, ProductManager, Architect, Engineer - -async def startup(idea: str, investment: float = 3.0, n_round: int = 5): - """运行一个创业公司。做一个老板""" - company = SoftwareCompany() - company.hire([ProductManager(), Architect(), ProjectManager(), Engineer()]) - company.invest(investment) - company.start_project(idea) - await company.run(n_round=n_round) -``` - -你可以查看`examples`,其中有单角色(带知识库)的使用例子与仅LLM的使用例子。 - -## 快速体验 -对一些用户来说,安装配置本地环境是有困难的,下面这些教程能够让你快速体验到MetaGPT的魅力。 - -- [MetaGPT快速体验](https://deepwisdom.feishu.cn/wiki/Q8ycw6J9tiNXdHk66MRcIN8Pnlg) - -## 联系信息 - -如果您对这个项目有任何问题或反馈,欢迎联系我们。我们非常欢迎您的建议! - -- **邮箱:** alexanderwu@fuzhi.ai -- **GitHub 问题:** 对于更技术性的问题,您也可以在我们的 [GitHub 仓库](https://github.com/geekan/metagpt/issues) 中创建一个新的问题。 - -我们会在2-3个工作日内回复所有问题。 - -## 演示 - -https://github.com/geekan/MetaGPT/assets/2707039/5e8c1062-8c35-440f-bb20-2b0320f8d27d - -## 加入我们 - -📢 加入我们的Discord频道! -https://discord.gg/ZRHeExS6xv - -期待在那里与您相见!🎉 diff --git a/spaces/willgibs/ControlNet-v1-1/app_softedge.py b/spaces/willgibs/ControlNet-v1-1/app_softedge.py deleted file mode 100644 index 4913abc12f6fddd25889d7d5d8de02fc431511d3..0000000000000000000000000000000000000000 --- a/spaces/willgibs/ControlNet-v1-1/app_softedge.py +++ /dev/null @@ -1,112 +0,0 @@ -#!/usr/bin/env python - -import gradio as gr - -from utils import randomize_seed_fn - - -def create_demo(process, max_images=12, default_num_images=3): - with gr.Blocks() as demo: - with gr.Row(): - with gr.Column(): - image = gr.Image() - prompt = gr.Textbox(label='Prompt') - run_button = gr.Button('Run') - with gr.Accordion('Advanced options', open=False): - preprocessor_name = gr.Radio(label='Preprocessor', - choices=[ - 'HED', - 'PidiNet', - 'HED safe', - 'PidiNet safe', - 'None', - ], - type='value', - value='PidiNet') - num_samples = gr.Slider(label='Number of images', - minimum=1, - maximum=max_images, - value=default_num_images, - step=1) - image_resolution = gr.Slider(label='Image resolution', - minimum=256, - maximum=512, - value=512, - step=256) - preprocess_resolution = gr.Slider( - label='Preprocess resolution', - minimum=128, - maximum=512, - value=512, - step=1) - num_steps = gr.Slider(label='Number of steps', - minimum=1, - maximum=100, - value=20, - step=1) - guidance_scale = gr.Slider(label='Guidance scale', - minimum=0.1, - maximum=30.0, - value=9.0, - step=0.1) - seed = gr.Slider(label='Seed', - minimum=0, - maximum=1000000, - step=1, - value=0, - randomize=True) - randomize_seed = gr.Checkbox(label='Randomize seed', - value=True) - a_prompt = gr.Textbox( - label='Additional prompt', - value='best quality, extremely detailed') - n_prompt = gr.Textbox( - label='Negative prompt', - value= - 'longbody, 
lowres, bad anatomy, bad hands, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality' - ) - with gr.Column(): - result = gr.Gallery(label='Output', show_label=False).style( - columns=2, object_fit='scale-down') - inputs = [ - image, - prompt, - a_prompt, - n_prompt, - num_samples, - image_resolution, - preprocess_resolution, - num_steps, - guidance_scale, - seed, - preprocessor_name, - ] - prompt.submit( - fn=randomize_seed_fn, - inputs=[seed, randomize_seed], - outputs=seed, - queue=False, - ).then( - fn=process, - inputs=inputs, - outputs=result, - ) - run_button.click( - fn=randomize_seed_fn, - inputs=[seed, randomize_seed], - outputs=seed, - queue=False, - ).then( - fn=process, - inputs=inputs, - outputs=result, - api_name='softedge', - ) - return demo - - -if __name__ == '__main__': - from model import Model - model = Model(task_name='softedge') - demo = create_demo(model.process_softedge) - demo.queue().launch() diff --git a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/persimmon/__init__.py b/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/persimmon/__init__.py deleted file mode 100644 index 4c88459362eb725b3c13b4b7a028a429c8000227..0000000000000000000000000000000000000000 --- a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/persimmon/__init__.py +++ /dev/null @@ -1,62 +0,0 @@ -# Copyright 2023 AdeptAI and The HuggingFace Inc. team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
-from typing import TYPE_CHECKING - -from ...utils import ( - OptionalDependencyNotAvailable, - _LazyModule, - is_torch_available, -) - - -_import_structure = { - "configuration_persimmon": ["PERSIMMON_PRETRAINED_CONFIG_ARCHIVE_MAP", "PersimmonConfig"], -} - - -try: - if not is_torch_available(): - raise OptionalDependencyNotAvailable() -except OptionalDependencyNotAvailable: - pass -else: - _import_structure["modeling_persimmon"] = [ - "PersimmonForCausalLM", - "PersimmonModel", - "PersimmonPreTrainedModel", - "PersimmonForSequenceClassification", - ] - - -if TYPE_CHECKING: - from .configuration_persimmon import PERSIMMON_PRETRAINED_CONFIG_ARCHIVE_MAP, PersimmonConfig - - try: - if not is_torch_available(): - raise OptionalDependencyNotAvailable() - except OptionalDependencyNotAvailable: - pass - else: - from .modeling_persimmon import ( - PersimmonForCausalLM, - PersimmonForSequenceClassification, - PersimmonModel, - PersimmonPreTrainedModel, - ) - - -else: - import sys - - sys.modules[__name__] = _LazyModule(__name__, globals()["__file__"], _import_structure, module_spec=__spec__) diff --git a/spaces/ynhe/AskAnything/models/grit_src/third_party/CenterNet2/projects/CenterNet2/centernet/modeling/dense_heads/utils.py b/spaces/ynhe/AskAnything/models/grit_src/third_party/CenterNet2/projects/CenterNet2/centernet/modeling/dense_heads/utils.py deleted file mode 100644 index c9efa287fc71315f633347023b390fe4ce57913a..0000000000000000000000000000000000000000 --- a/spaces/ynhe/AskAnything/models/grit_src/third_party/CenterNet2/projects/CenterNet2/centernet/modeling/dense_heads/utils.py +++ /dev/null @@ -1,38 +0,0 @@ -import cv2 -import torch -from torch import nn -from detectron2.utils.comm import get_world_size -from detectron2.structures import pairwise_iou, Boxes -# from .data import CenterNetCrop -import torch.nn.functional as F -import numpy as np -from detectron2.structures import Boxes, ImageList, Instances - -__all__ = ['reduce_sum', '_transpose'] - -INF = 1000000000 - -def _transpose(training_targets, num_loc_list): - ''' - This function is used to transpose image first training targets to - level first ones - :return: level first training targets - ''' - for im_i in range(len(training_targets)): - training_targets[im_i] = torch.split( - training_targets[im_i], num_loc_list, dim=0) - - targets_level_first = [] - for targets_per_level in zip(*training_targets): - targets_level_first.append( - torch.cat(targets_per_level, dim=0)) - return targets_level_first - - -def reduce_sum(tensor): - world_size = get_world_size() - if world_size < 2: - return tensor - tensor = tensor.clone() - torch.distributed.all_reduce(tensor, op=torch.distributed.ReduceOp.SUM) - return tensor \ No newline at end of file diff --git a/spaces/youngtsai/Mandarin-TTS/vits_pinyin.py b/spaces/youngtsai/Mandarin-TTS/vits_pinyin.py deleted file mode 100644 index 329052479d236f8b01f65db4365df2d2561433e3..0000000000000000000000000000000000000000 --- a/spaces/youngtsai/Mandarin-TTS/vits_pinyin.py +++ /dev/null @@ -1,88 +0,0 @@ -import re - -from pypinyin import Style -from pypinyin.contrib.neutral_tone import NeutralToneWith5Mixin -from pypinyin.converter import DefaultConverter -from pypinyin.core import Pinyin - -from text import pinyin_dict -from bert import TTSProsody - - -class MyConverter(NeutralToneWith5Mixin, DefaultConverter): - pass - - -def is_chinese(uchar): - if uchar >= u'\u4e00' and uchar <= u'\u9fa5': - return True - else: - return False - - -def clean_chinese(text: str): - text = text.strip() - text_clean = [] 
- for char in text: - if (is_chinese(char)): - text_clean.append(char) - else: - if len(text_clean) > 1 and is_chinese(text_clean[-1]): - text_clean.append(',') - text_clean = ''.join(text_clean).strip(',') - return text_clean - - -class VITS_PinYin: - def __init__(self, bert_path, device): - self.pinyin_parser = Pinyin(MyConverter()) - self.prosody = TTSProsody(bert_path, device) - - def get_phoneme4pinyin(self, pinyins): - result = [] - count_phone = [] - for pinyin in pinyins: - if pinyin[:-1] in pinyin_dict: - tone = pinyin[-1] - a = pinyin[:-1] - a1, a2 = pinyin_dict[a] - result += [a1, a2 + tone] - count_phone.append(2) - return result, count_phone - - def chinese_to_phonemes(self, text): - text = clean_chinese(text) - phonemes = ["sil"] - chars = ['[PAD]'] - count_phone = [] - count_phone.append(1) - for subtext in text.split(","): - if (len(subtext) == 0): - continue - pinyins = self.correct_pinyin_tone3(subtext) - sub_p, sub_c = self.get_phoneme4pinyin(pinyins) - phonemes.extend(sub_p) - phonemes.append("sp") - count_phone.extend(sub_c) - count_phone.append(1) - chars.append(subtext) - chars.append(',') - phonemes.append("sil") - count_phone.append(1) - chars.append('[PAD]') - chars = "".join(chars) - char_embeds = self.prosody.get_char_embeds(chars) - char_embeds = self.prosody.expand_for_phone(char_embeds, count_phone) - return " ".join(phonemes), char_embeds - - def correct_pinyin_tone3(self, text): - pinyin_list = [p[0] for p in self.pinyin_parser.pinyin( - text, style=Style.TONE3, strict=False, neutral_tone_with_five=True)] - if len(pinyin_list) >= 2: - for i in range(1, len(pinyin_list)): - try: - if re.findall(r'\d', pinyin_list[i-1])[0] == '3' and re.findall(r'\d', pinyin_list[i])[0] == '3': - pinyin_list[i-1] = pinyin_list[i-1].replace('3', '2') - except IndexError: - pass - return pinyin_list diff --git a/spaces/younker/chatgpt-turbo/client/node_modules/autoprefixer/lib/resolution.js b/spaces/younker/chatgpt-turbo/client/node_modules/autoprefixer/lib/resolution.js deleted file mode 100644 index ee04d6d994942f60ef777067f12b716bfc7edfda..0000000000000000000000000000000000000000 --- a/spaces/younker/chatgpt-turbo/client/node_modules/autoprefixer/lib/resolution.js +++ /dev/null @@ -1,97 +0,0 @@ -let FractionJs = require('fraction.js') - -let Prefixer = require('./prefixer') -let utils = require('./utils') - -const REGEXP = /(min|max)-resolution\s*:\s*\d*\.?\d+(dppx|dpcm|dpi|x)/gi -const SPLIT = /(min|max)-resolution(\s*:\s*)(\d*\.?\d+)(dppx|dpcm|dpi|x)/i - -class Resolution extends Prefixer { - /** - * Return prefixed query name - */ - prefixName(prefix, name) { - if (prefix === '-moz-') { - return name + '--moz-device-pixel-ratio' - } else { - return prefix + name + '-device-pixel-ratio' - } - } - - /** - * Return prefixed query - */ - prefixQuery(prefix, name, colon, value, units) { - value = new FractionJs(value) - - // 1dpcm = 2.54dpi - // 1dppx = 96dpi - if (units === 'dpi') { - value = value.div(96) - } else if (units === 'dpcm') { - value = value.mul(2.54).div(96) - } - value = value.simplify() - - if (prefix === '-o-') { - value = value.n + '/' + value.d - } - return this.prefixName(prefix, name) + colon + value - } - - /** - * Remove prefixed queries - */ - clean(rule) { - if (!this.bad) { - this.bad = [] - for (let prefix of this.prefixes) { - this.bad.push(this.prefixName(prefix, 'min')) - this.bad.push(this.prefixName(prefix, 'max')) - } - } - - rule.params = utils.editList(rule.params, queries => { - return queries.filter(query => this.bad.every(i => 
!query.includes(i))) - }) - } - - /** - * Add prefixed queries - */ - process(rule) { - let parent = this.parentPrefix(rule) - let prefixes = parent ? [parent] : this.prefixes - - rule.params = utils.editList(rule.params, (origin, prefixed) => { - for (let query of origin) { - if ( - !query.includes('min-resolution') && - !query.includes('max-resolution') - ) { - prefixed.push(query) - continue - } - - for (let prefix of prefixes) { - let processed = query.replace(REGEXP, str => { - let parts = str.match(SPLIT) - return this.prefixQuery( - prefix, - parts[1], - parts[2], - parts[3], - parts[4] - ) - }) - prefixed.push(processed) - } - prefixed.push(query) - } - - return utils.uniq(prefixed) - }) - } -} - -module.exports = Resolution diff --git a/spaces/ysharma/ChatinterfaceTests/README.md b/spaces/ysharma/ChatinterfaceTests/README.md deleted file mode 100644 index 2bf1fc063e6fa81f96c4089086090db1941d94ba..0000000000000000000000000000000000000000 --- a/spaces/ysharma/ChatinterfaceTests/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: ChatinterfaceTests -emoji: 🌍 -colorFrom: red -colorTo: gray -sdk: gradio -sdk_version: 3.36.1 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/ysharma/LLaVA_v1/llava/train/llama_flash_attn_monkey_patch.py b/spaces/ysharma/LLaVA_v1/llava/train/llama_flash_attn_monkey_patch.py deleted file mode 100644 index 31db2eff8d1c4b3ae645583dfc5e156e818b6f1c..0000000000000000000000000000000000000000 --- a/spaces/ysharma/LLaVA_v1/llava/train/llama_flash_attn_monkey_patch.py +++ /dev/null @@ -1,115 +0,0 @@ -from typing import Optional, Tuple -import warnings - -import torch - -import transformers -from transformers.models.llama.modeling_llama import apply_rotary_pos_emb, repeat_kv - -try: - from flash_attn.flash_attn_interface import flash_attn_unpadded_qkvpacked_func -except ImportError: - from flash_attn.flash_attn_interface import flash_attn_varlen_qkvpacked_func as flash_attn_unpadded_qkvpacked_func -from flash_attn.bert_padding import unpad_input, pad_input - - -def forward( - self, - hidden_states: torch.Tensor, - attention_mask: Optional[torch.Tensor] = None, - position_ids: Optional[torch.Tensor] = None, - past_key_value: Optional[Tuple[torch.Tensor]] = None, - output_attentions: bool = False, - use_cache: bool = False, -) -> Tuple[torch.Tensor, Optional[torch.Tensor], Optional[Tuple[torch.Tensor]]]: - if output_attentions: - warnings.warn( - "Output attentions is not supported for patched `LlamaAttention`, returning `None` instead." 
- ) - - bsz, q_len, _ = hidden_states.size() - - query_states = ( - self.q_proj(hidden_states) - .view(bsz, q_len, self.num_heads, self.head_dim) - .transpose(1, 2) - ) - key_states = ( - self.k_proj(hidden_states) - .view(bsz, q_len, self.num_key_value_heads, self.head_dim) - .transpose(1, 2) - ) - value_states = ( - self.v_proj(hidden_states) - .view(bsz, q_len, self.num_key_value_heads, self.head_dim) - .transpose(1, 2) - ) # shape: (b, num_heads, s, head_dim) - - kv_seq_len = key_states.shape[-2] - if past_key_value is not None: - kv_seq_len += past_key_value[0].shape[-2] - - cos, sin = self.rotary_emb(value_states, seq_len=kv_seq_len) - query_states, key_states = apply_rotary_pos_emb( - query_states, key_states, cos, sin, position_ids - ) - - if past_key_value is not None: - # reuse k, v - key_states = torch.cat([past_key_value[0], key_states], dim=2) - value_states = torch.cat([past_key_value[1], value_states], dim=2) - - past_key_value = (key_states, value_states) if use_cache else None - - # repeat k/v heads if n_kv_heads < n_heads - key_states = repeat_kv(key_states, self.num_key_value_groups) - value_states = repeat_kv(value_states, self.num_key_value_groups) - - # Transform the data into the format required by flash attention - qkv = torch.stack([query_states, key_states, value_states], dim=2) - qkv = qkv.transpose(1, 3) # shape: [b, s, 3, num_heads, head_dim] - key_padding_mask = attention_mask - - if key_padding_mask is None: - qkv = qkv.reshape(-1, 3, self.num_heads, self.head_dim) - cu_q_lens = torch.arange( - 0, (bsz + 1) * q_len, step=q_len, dtype=torch.int32, device=qkv.device - ) - max_s = q_len - output = flash_attn_unpadded_qkvpacked_func( - qkv, cu_q_lens, max_s, 0.0, softmax_scale=None, causal=True - ) - output = output.view(bsz, q_len, -1) - else: - qkv = qkv.reshape(bsz, q_len, -1) - qkv, indices, cu_q_lens, max_s = unpad_input(qkv, key_padding_mask) - qkv = qkv.view(-1, 3, self.num_heads, self.head_dim) - output_unpad = flash_attn_unpadded_qkvpacked_func( - qkv, cu_q_lens, max_s, 0.0, softmax_scale=None, causal=True - ) - output_unpad = output_unpad.reshape(-1, self.num_heads * self.head_dim) - output = pad_input(output_unpad, indices, bsz, q_len) - - return self.o_proj(output), None, past_key_value - - -# Disable the transformation of the attention mask in LlamaModel as the flash attention -# requires the attention mask to be the same as the key_padding_mask -def _prepare_decoder_attention_mask( - self, attention_mask, input_shape, inputs_embeds, past_key_values_length -): - # [bsz, seq_len] - return attention_mask - - -def replace_llama_attn_with_flash_attn(): - cuda_major, cuda_minor = torch.cuda.get_device_capability() - if cuda_major < 8: - warnings.warn( - "Flash attention is only supported on A100 or H100 GPU during training due to head dim > 64 backward." - "ref: https://github.com/HazyResearch/flash-attention/issues/190#issuecomment-1523359593" - ) - transformers.models.llama.modeling_llama.LlamaModel._prepare_decoder_attention_mask = ( - _prepare_decoder_attention_mask - ) - transformers.models.llama.modeling_llama.LlamaAttention.forward = forward diff --git a/spaces/yufiofficial/MusicGenQ/tests/common_utils/__init__.py b/spaces/yufiofficial/MusicGenQ/tests/common_utils/__init__.py deleted file mode 100644 index 74ffcfef96fec35c99b2a1a053a61f44f7a8bbe9..0000000000000000000000000000000000000000 --- a/spaces/yufiofficial/MusicGenQ/tests/common_utils/__init__.py +++ /dev/null @@ -1,9 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. 
-# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -# flake8: noqa -from .temp_utils import TempDirMixin -from .wav_utils import get_batch_white_noise, get_white_noise, save_wav diff --git a/spaces/ywqisok/ysyy/models.py b/spaces/ywqisok/ysyy/models.py deleted file mode 100644 index 8353b867f441de7e4d05aef980e672899c3a8889..0000000000000000000000000000000000000000 --- a/spaces/ywqisok/ysyy/models.py +++ /dev/null @@ -1,533 +0,0 @@ -import math -import torch -from torch import nn -from torch.nn import functional as F - -import commons -import modules -import attentions -import monotonic_align - -from torch.nn import Conv1d, ConvTranspose1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm -from commons import init_weights, get_padding - - -class StochasticDurationPredictor(nn.Module): - def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, n_flows=4, gin_channels=0): - super().__init__() - filter_channels = in_channels # it needs to be removed from future version. - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.log_flow = modules.Log() - self.flows = nn.ModuleList() - self.flows.append(modules.ElementwiseAffine(2)) - for i in range(n_flows): - self.flows.append(modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3)) - self.flows.append(modules.Flip()) - - self.post_pre = nn.Conv1d(1, filter_channels, 1) - self.post_proj = nn.Conv1d(filter_channels, filter_channels, 1) - self.post_convs = modules.DDSConv(filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout) - self.post_flows = nn.ModuleList() - self.post_flows.append(modules.ElementwiseAffine(2)) - for i in range(4): - self.post_flows.append(modules.ConvFlow(2, filter_channels, kernel_size, n_layers=3)) - self.post_flows.append(modules.Flip()) - - self.pre = nn.Conv1d(in_channels, filter_channels, 1) - self.proj = nn.Conv1d(filter_channels, filter_channels, 1) - self.convs = modules.DDSConv(filter_channels, kernel_size, n_layers=3, p_dropout=p_dropout) - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, filter_channels, 1) - - def forward(self, x, x_mask, w=None, g=None, reverse=False, noise_scale=1.0): - x = torch.detach(x) - x = self.pre(x) - if g is not None: - g = torch.detach(g) - x = x + self.cond(g) - x = self.convs(x, x_mask) - x = self.proj(x) * x_mask - - if not reverse: - flows = self.flows - assert w is not None - - logdet_tot_q = 0 - h_w = self.post_pre(w) - h_w = self.post_convs(h_w, x_mask) - h_w = self.post_proj(h_w) * x_mask - e_q = torch.randn(w.size(0), 2, w.size(2)).to(device=x.device, dtype=x.dtype) * x_mask - z_q = e_q - for flow in self.post_flows: - z_q, logdet_q = flow(z_q, x_mask, g=(x + h_w)) - logdet_tot_q += logdet_q - z_u, z1 = torch.split(z_q, [1, 1], 1) - u = torch.sigmoid(z_u) * x_mask - z0 = (w - u) * x_mask - logdet_tot_q += torch.sum((F.logsigmoid(z_u) + F.logsigmoid(-z_u)) * x_mask, [1,2]) - logq = torch.sum(-0.5 * (math.log(2*math.pi) + (e_q**2)) * x_mask, [1,2]) - logdet_tot_q - - logdet_tot = 0 - z0, logdet = self.log_flow(z0, x_mask) - logdet_tot += logdet - z = torch.cat([z0, z1], 1) - for flow in flows: - z, logdet = flow(z, x_mask, g=x, reverse=reverse) - logdet_tot = logdet_tot + logdet - nll = torch.sum(0.5 * (math.log(2*math.pi) + (z**2)) * x_mask, [1,2]) - 
logdet_tot - return nll + logq # [b] - else: - flows = list(reversed(self.flows)) - flows = flows[:-2] + [flows[-1]] # remove a useless vflow - z = torch.randn(x.size(0), 2, x.size(2)).to(device=x.device, dtype=x.dtype) * noise_scale - for flow in flows: - z = flow(z, x_mask, g=x, reverse=reverse) - z0, z1 = torch.split(z, [1, 1], 1) - logw = z0 - return logw - - -class DurationPredictor(nn.Module): - def __init__(self, in_channels, filter_channels, kernel_size, p_dropout, gin_channels=0): - super().__init__() - - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.gin_channels = gin_channels - - self.drop = nn.Dropout(p_dropout) - self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size, padding=kernel_size//2) - self.norm_1 = modules.LayerNorm(filter_channels) - self.conv_2 = nn.Conv1d(filter_channels, filter_channels, kernel_size, padding=kernel_size//2) - self.norm_2 = modules.LayerNorm(filter_channels) - self.proj = nn.Conv1d(filter_channels, 1, 1) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, in_channels, 1) - - def forward(self, x, x_mask, g=None): - x = torch.detach(x) - if g is not None: - g = torch.detach(g) - x = x + self.cond(g) - x = self.conv_1(x * x_mask) - x = torch.relu(x) - x = self.norm_1(x) - x = self.drop(x) - x = self.conv_2(x * x_mask) - x = torch.relu(x) - x = self.norm_2(x) - x = self.drop(x) - x = self.proj(x * x_mask) - return x * x_mask - - -class TextEncoder(nn.Module): - def __init__(self, - n_vocab, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout): - super().__init__() - self.n_vocab = n_vocab - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - - self.emb = nn.Embedding(n_vocab, hidden_channels) - nn.init.normal_(self.emb.weight, 0.0, hidden_channels**-0.5) - - self.encoder = attentions.Encoder( - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout) - self.proj= nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths): - x = self.emb(x) * math.sqrt(self.hidden_channels) # [b, t, h] - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype) - - x = self.encoder(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - - m, logs = torch.split(stats, self.out_channels, dim=1) - return x, m, logs, x_mask - - -class ResidualCouplingBlock(nn.Module): - def __init__(self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - n_flows=4, - gin_channels=0): - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.flows = nn.ModuleList() - for i in range(n_flows): - self.flows.append(modules.ResidualCouplingLayer(channels, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels, mean_only=True)) - self.flows.append(modules.Flip()) - - def forward(self, x, x_mask, g=None, reverse=False): - if not reverse: - for flow in self.flows: - x, _ = flow(x, x_mask, g=g, reverse=reverse) - else: - for flow in reversed(self.flows): - x = flow(x, x_mask, g=g, 
reverse=reverse) - return x - - -class PosteriorEncoder(nn.Module): - def __init__(self, - in_channels, - out_channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - - self.pre = nn.Conv1d(in_channels, hidden_channels, 1) - self.enc = modules.WN(hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=gin_channels) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths, g=None): - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to(x.dtype) - x = self.pre(x) * x_mask - x = self.enc(x, x_mask, g=g) - stats = self.proj(x) * x_mask - m, logs = torch.split(stats, self.out_channels, dim=1) - z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask - return z, m, logs, x_mask - - -class Generator(torch.nn.Module): - def __init__(self, initial_channel, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates, upsample_initial_channel, upsample_kernel_sizes, gin_channels=0): - super(Generator, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - self.conv_pre = Conv1d(initial_channel, upsample_initial_channel, 7, 1, padding=3) - resblock = modules.ResBlock1 if resblock == '1' else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - self.ups.append(weight_norm( - ConvTranspose1d(upsample_initial_channel//(2**i), upsample_initial_channel//(2**(i+1)), - k, u, padding=(k-u)//2))) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel//(2**(i+1)) - for j, (k, d) in enumerate(zip(resblock_kernel_sizes, resblock_dilation_sizes)): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - def forward(self, x, g=None): - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i*self.num_kernels+j](x) - else: - xs += self.resblocks[i*self.num_kernels+j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - - return x - - def remove_weight_norm(self): - print('Removing weight norm...') - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -class DiscriminatorP(torch.nn.Module): - def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False): - super(DiscriminatorP, self).__init__() - self.period = period - self.use_spectral_norm = use_spectral_norm - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv2d(1, 32, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(32, 128, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(128, 512, (kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(512, 1024, 
(kernel_size, 1), (stride, 1), padding=(get_padding(kernel_size, 1), 0))), - norm_f(Conv2d(1024, 1024, (kernel_size, 1), 1, padding=(get_padding(kernel_size, 1), 0))), - ]) - self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0))) - - def forward(self, x): - fmap = [] - - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), "reflect") - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class DiscriminatorS(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(DiscriminatorS, self).__init__() - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList([ - norm_f(Conv1d(1, 16, 15, 1, padding=7)), - norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)), - norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)), - norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)), - norm_f(Conv1d(1024, 1024, 5, 1, padding=2)), - ]) - self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1)) - - def forward(self, x): - fmap = [] - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class MultiPeriodDiscriminator(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminator, self).__init__() - periods = [2,3,5,7,11] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - - -class SynthesizerTrn(nn.Module): - """ - Synthesizer for Training - """ - - def __init__(self, - n_vocab, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - n_speakers=0, - gin_channels=0, - use_sdp=True, - **kwargs): - - super().__init__() - self.n_vocab = n_vocab - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.n_speakers = n_speakers - self.gin_channels = gin_channels - - self.use_sdp = use_sdp - - self.enc_p = TextEncoder(n_vocab, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - 
kernel_size, - p_dropout) - self.dec = Generator(inter_channels, resblock, resblock_kernel_sizes, resblock_dilation_sizes, upsample_rates, upsample_initial_channel, upsample_kernel_sizes, gin_channels=gin_channels) - self.enc_q = PosteriorEncoder(spec_channels, inter_channels, hidden_channels, 5, 1, 16, gin_channels=gin_channels) - self.flow = ResidualCouplingBlock(inter_channels, hidden_channels, 5, 1, 4, gin_channels=gin_channels) - - if use_sdp: - self.dp = StochasticDurationPredictor(hidden_channels, 192, 3, 0.5, 4, gin_channels=gin_channels) - else: - self.dp = DurationPredictor(hidden_channels, 256, 3, 0.5, gin_channels=gin_channels) - - if n_speakers > 1: - self.emb_g = nn.Embedding(n_speakers, gin_channels) - - def forward(self, x, x_lengths, y, y_lengths, sid=None): - - x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths) - if self.n_speakers > 0: - g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1] - else: - g = None - - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, g=g) - - with torch.no_grad(): - # negative cross-entropy - s_p_sq_r = torch.exp(-2 * logs_p) # [b, d, t] - neg_cent1 = torch.sum(-0.5 * math.log(2 * math.pi) - logs_p, [1], keepdim=True) # [b, 1, t_s] - neg_cent2 = torch.matmul(-0.5 * (z_p ** 2).transpose(1, 2), s_p_sq_r) # [b, t_t, d] x [b, d, t_s] = [b, t_t, t_s] - neg_cent3 = torch.matmul(z_p.transpose(1, 2), (m_p * s_p_sq_r)) # [b, t_t, d] x [b, d, t_s] = [b, t_t, t_s] - neg_cent4 = torch.sum(-0.5 * (m_p ** 2) * s_p_sq_r, [1], keepdim=True) # [b, 1, t_s] - neg_cent = neg_cent1 + neg_cent2 + neg_cent3 + neg_cent4 - - attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1) - attn = monotonic_align.maximum_path(neg_cent, attn_mask.squeeze(1)).unsqueeze(1).detach() - - w = attn.sum(2) - if self.use_sdp: - l_length = self.dp(x, x_mask, w, g=g) - l_length = l_length / torch.sum(x_mask) - else: - logw_ = torch.log(w + 1e-6) * x_mask - logw = self.dp(x, x_mask, g=g) - l_length = torch.sum((logw - logw_)**2, [1,2]) / torch.sum(x_mask) # for averaging - - # expand prior - m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(1, 2) - logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1, 2) - - z_slice, ids_slice = commons.rand_slice_segments(z, y_lengths, self.segment_size) - o = self.dec(z_slice, g=g) - return o, l_length, attn, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q) - - def infer(self, x, x_lengths, sid=None, noise_scale=1, length_scale=1, noise_scale_w=1., max_len=None): - x, m_p, logs_p, x_mask = self.enc_p(x, x_lengths) - if self.n_speakers > 0: - g = self.emb_g(sid).unsqueeze(-1) # [b, h, 1] - else: - g = None - - if self.use_sdp: - logw = self.dp(x, x_mask, g=g, reverse=True, noise_scale=noise_scale_w) - else: - logw = self.dp(x, x_mask, g=g) - w = torch.exp(logw) * x_mask * length_scale - w_ceil = torch.ceil(w) - y_lengths = torch.clamp_min(torch.sum(w_ceil, [1, 2]), 1).long() - y_mask = torch.unsqueeze(commons.sequence_mask(y_lengths, None), 1).to(x_mask.dtype) - attn_mask = torch.unsqueeze(x_mask, 2) * torch.unsqueeze(y_mask, -1) - attn = commons.generate_path(w_ceil, attn_mask) - - m_p = torch.matmul(attn.squeeze(1), m_p.transpose(1, 2)).transpose(1, 2) # [b, t', t], [b, t, d] -> [b, d, t'] - logs_p = torch.matmul(attn.squeeze(1), logs_p.transpose(1, 2)).transpose(1, 2) # [b, t', t], [b, t, d] -> [b, d, t'] - - z_p = m_p + torch.randn_like(m_p) * torch.exp(logs_p) * noise_scale - z = self.flow(z_p, y_mask, g=g, reverse=True) - o = self.dec((z * 
y_mask)[:,:,:max_len], g=g) - return o, attn, y_mask, (z, z_p, m_p, logs_p) - - def voice_conversion(self, y, y_lengths, sid_src, sid_tgt): - assert self.n_speakers > 0, "n_speakers have to be larger than 0." - g_src = self.emb_g(sid_src).unsqueeze(-1) - g_tgt = self.emb_g(sid_tgt).unsqueeze(-1) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g_src) - z_p = self.flow(z, y_mask, g=g_src) - z_hat = self.flow(z_p, y_mask, g=g_tgt, reverse=True) - o_hat = self.dec(z_hat * y_mask, g=g_tgt) - return o_hat, y_mask, (z, z_p, z_hat) - diff --git a/spaces/zhang-wei-jian/docker/node_modules/braces/README.md b/spaces/zhang-wei-jian/docker/node_modules/braces/README.md deleted file mode 100644 index cba2f600d2e6efad9cb14279a04d7e3ac6cd6cce..0000000000000000000000000000000000000000 --- a/spaces/zhang-wei-jian/docker/node_modules/braces/README.md +++ /dev/null @@ -1,593 +0,0 @@ -# braces [![Donate](https://img.shields.io/badge/Donate-PayPal-green.svg)](https://www.paypal.com/cgi-bin/webscr?cmd=_s-xclick&hosted_button_id=W8YFZ425KND68) [![NPM version](https://img.shields.io/npm/v/braces.svg?style=flat)](https://www.npmjs.com/package/braces) [![NPM monthly downloads](https://img.shields.io/npm/dm/braces.svg?style=flat)](https://npmjs.org/package/braces) [![NPM total downloads](https://img.shields.io/npm/dt/braces.svg?style=flat)](https://npmjs.org/package/braces) [![Linux Build Status](https://img.shields.io/travis/micromatch/braces.svg?style=flat&label=Travis)](https://travis-ci.org/micromatch/braces) - -> Bash-like brace expansion, implemented in JavaScript. Safer than other brace expansion libs, with complete support for the Bash 4.3 braces specification, without sacrificing speed. - -Please consider following this project's author, [Jon Schlinkert](https://github.com/jonschlinkert), and consider starring the project to show your :heart: and support. - -## Install - -Install with [npm](https://www.npmjs.com/): - -```sh -$ npm install --save braces -``` - -## v3.0.0 Released!! - -See the [changelog](CHANGELOG.md) for details. - -## Why use braces? - -Brace patterns make globs more powerful by adding the ability to match specific ranges and sequences of characters. - -* **Accurate** - complete support for the [Bash 4.3 Brace Expansion](www.gnu.org/software/bash/) specification (passes all of the Bash braces tests) -* **[fast and performant](#benchmarks)** - Starts fast, runs fast and [scales well](#performance) as patterns increase in complexity. -* **Organized code base** - The parser and compiler are easy to maintain and update when edge cases crop up. -* **Well-tested** - Thousands of test assertions, and passes all of the Bash, minimatch, and [brace-expansion](https://github.com/juliangruber/brace-expansion) unit tests (as of the date this was written). -* **Safer** - You shouldn't have to worry about users defining aggressive or malicious brace patterns that can break your application. Braces takes measures to prevent malicious regex that can be used for DDoS attacks (see [catastrophic backtracking](https://www.regular-expressions.info/catastrophic.html)). -* [Supports lists](#lists) - (aka "sets") `a/{b,c}/d` => `['a/b/d', 'a/c/d']` -* [Supports sequences](#sequences) - (aka "ranges") `{01..03}` => `['01', '02', '03']` -* [Supports steps](#steps) - (aka "increments") `{2..10..2}` => `['2', '4', '6', '8', '10']` -* [Supports escaping](#escaping) - To prevent evaluation of special characters. - -## Usage - -The main export is a function that takes one or more brace `patterns` and `options`. 
-
-```js
-const braces = require('braces');
-// braces(patterns[, options]);
-
-console.log(braces(['{01..05}', '{a..e}']));
-//=> ['(0[1-5])', '([a-e])']
-
-console.log(braces(['{01..05}', '{a..e}'], { expand: true }));
-//=> ['01', '02', '03', '04', '05', 'a', 'b', 'c', 'd', 'e']
-```
-
-### Brace Expansion vs. Compilation
-
-By default, brace patterns are compiled into strings that are optimized for creating regular expressions and matching.
-
-**Compiled**
-
-```js
-console.log(braces('a/{x,y,z}/b'));
-//=> ['a/(x|y|z)/b']
-console.log(braces(['a/{01..20}/b', 'a/{1..5}/b']));
-//=> [ 'a/(0[1-9]|1[0-9]|20)/b', 'a/([1-5])/b' ]
-```
-
-**Expanded**
-
-Enable brace expansion by setting the `expand` option to true, or by using [braces.expand()](#expand) (returns an array similar to what you'd expect from Bash, or `echo {1..5}`, or [minimatch](https://github.com/isaacs/minimatch)):
-
-```js
-console.log(braces('a/{x,y,z}/b', { expand: true }));
-//=> ['a/x/b', 'a/y/b', 'a/z/b']
-
-console.log(braces.expand('{01..10}'));
-//=> ['01','02','03','04','05','06','07','08','09','10']
-```
-
-### Lists
-
-Expand lists (like Bash "sets"):
-
-```js
-console.log(braces('a/{foo,bar,baz}/*.js'));
-//=> ['a/(foo|bar|baz)/*.js']
-
-console.log(braces.expand('a/{foo,bar,baz}/*.js'));
-//=> ['a/foo/*.js', 'a/bar/*.js', 'a/baz/*.js']
-```
-
-### Sequences
-
-Expand ranges of characters (like Bash "sequences"):
-
-```js
-console.log(braces.expand('{1..3}')); // ['1', '2', '3']
-console.log(braces.expand('a/{1..3}/b')); // ['a/1/b', 'a/2/b', 'a/3/b']
-console.log(braces('{a..c}', { expand: true })); // ['a', 'b', 'c']
-console.log(braces('foo/{a..c}', { expand: true })); // ['foo/a', 'foo/b', 'foo/c']
-
-// supports zero-padded ranges
-console.log(braces('a/{01..03}/b')); //=> ['a/(0[1-3])/b']
-console.log(braces('a/{001..300}/b')); //=> ['a/(0{2}[1-9]|0[1-9][0-9]|[12][0-9]{2}|300)/b']
-```
-
-See [fill-range](https://github.com/jonschlinkert/fill-range) for all available range-expansion options.
-
-### Stepped ranges
-
-Steps, or increments, may be used with ranges:
-
-```js
-console.log(braces.expand('{2..10..2}'));
-//=> ['2', '4', '6', '8', '10']
-
-console.log(braces('{2..10..2}'));
-//=> ['(2|4|6|8|10)']
-```
-
-When the [.optimize](#optimize) method is used, or [options.optimize](#optionsoptimize) is set to true, sequences are passed to [to-regex-range](https://github.com/jonschlinkert/to-regex-range) for expansion.
-
-### Nesting
-
-Brace patterns may be nested. The results of each expanded string are not sorted, and left to right order is preserved.
- -**"Expanded" braces** - -```js -console.log(braces.expand('a{b,c,/{x,y}}/e')); -//=> ['ab/e', 'ac/e', 'a/x/e', 'a/y/e'] - -console.log(braces.expand('a/{x,{1..5},y}/c')); -//=> ['a/x/c', 'a/1/c', 'a/2/c', 'a/3/c', 'a/4/c', 'a/5/c', 'a/y/c'] -``` - -**"Optimized" braces** - -```js -console.log(braces('a{b,c,/{x,y}}/e')); -//=> ['a(b|c|/(x|y))/e'] - -console.log(braces('a/{x,{1..5},y}/c')); -//=> ['a/(x|([1-5])|y)/c'] -``` - -### Escaping - -**Escaping braces** - -A brace pattern will not be expanded or evaluted if _either the opening or closing brace is escaped_: - -```js -console.log(braces.expand('a\\{d,c,b}e')); -//=> ['a{d,c,b}e'] - -console.log(braces.expand('a{d,c,b\\}e')); -//=> ['a{d,c,b}e'] -``` - -**Escaping commas** - -Commas inside braces may also be escaped: - -```js -console.log(braces.expand('a{b\\,c}d')); -//=> ['a{b,c}d'] - -console.log(braces.expand('a{d\\,c,b}e')); -//=> ['ad,ce', 'abe'] -``` - -**Single items** - -Following bash conventions, a brace pattern is also not expanded when it contains a single character: - -```js -console.log(braces.expand('a{b}c')); -//=> ['a{b}c'] -``` - -## Options - -### options.maxLength - -**Type**: `Number` - -**Default**: `65,536` - -**Description**: Limit the length of the input string. Useful when the input string is generated or your application allows users to pass a string, et cetera. - -```js -console.log(braces('a/{b,c}/d', { maxLength: 3 })); //=> throws an error -``` - -### options.expand - -**Type**: `Boolean` - -**Default**: `undefined` - -**Description**: Generate an "expanded" brace pattern (alternatively you can use the `braces.expand()` method, which does the same thing). - -```js -console.log(braces('a/{b,c}/d', { expand: true })); -//=> [ 'a/b/d', 'a/c/d' ] -``` - -### options.nodupes - -**Type**: `Boolean` - -**Default**: `undefined` - -**Description**: Remove duplicates from the returned array. - -### options.rangeLimit - -**Type**: `Number` - -**Default**: `1000` - -**Description**: To prevent malicious patterns from being passed by users, an error is thrown when `braces.expand()` is used or `options.expand` is true and the generated range will exceed the `rangeLimit`. - -You can customize `options.rangeLimit` or set it to `Inifinity` to disable this altogether. - -**Examples** - -```js -// pattern exceeds the "rangeLimit", so it's optimized automatically -console.log(braces.expand('{1..1000}')); -//=> ['([1-9]|[1-9][0-9]{1,2}|1000)'] - -// pattern does not exceed "rangeLimit", so it's NOT optimized -console.log(braces.expand('{1..100}')); -//=> ['1', '2', '3', '4', '5', '6', '7', '8', '9', '10', '11', '12', '13', '14', '15', '16', '17', '18', '19', '20', '21', '22', '23', '24', '25', '26', '27', '28', '29', '30', '31', '32', '33', '34', '35', '36', '37', '38', '39', '40', '41', '42', '43', '44', '45', '46', '47', '48', '49', '50', '51', '52', '53', '54', '55', '56', '57', '58', '59', '60', '61', '62', '63', '64', '65', '66', '67', '68', '69', '70', '71', '72', '73', '74', '75', '76', '77', '78', '79', '80', '81', '82', '83', '84', '85', '86', '87', '88', '89', '90', '91', '92', '93', '94', '95', '96', '97', '98', '99', '100'] -``` - -### options.transform - -**Type**: `Function` - -**Default**: `undefined` - -**Description**: Customize range expansion. - -**Example: Transforming non-numeric values** - -```js -const alpha = braces.expand('x/{a..e}/y', { - transform(value, index) { - // When non-numeric values are passed, "value" is a character code. 
- return 'foo/' + String.fromCharCode(value) + '-' + index; - } -}); -console.log(alpha); -//=> [ 'x/foo/a-0/y', 'x/foo/b-1/y', 'x/foo/c-2/y', 'x/foo/d-3/y', 'x/foo/e-4/y' ] -``` - -**Example: Transforming numeric values** - -```js -const numeric = braces.expand('{1..5}', { - transform(value) { - // when numeric values are passed, "value" is a number - return 'foo/' + value * 2; - } -}); -console.log(numeric); -//=> [ 'foo/2', 'foo/4', 'foo/6', 'foo/8', 'foo/10' ] -``` - -### options.quantifiers - -**Type**: `Boolean` - -**Default**: `undefined` - -**Description**: In regular expressions, quanitifiers can be used to specify how many times a token can be repeated. For example, `a{1,3}` will match the letter `a` one to three times. - -Unfortunately, regex quantifiers happen to share the same syntax as [Bash lists](#lists) - -The `quantifiers` option tells braces to detect when [regex quantifiers](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/RegExp#quantifiers) are defined in the given pattern, and not to try to expand them as lists. - -**Examples** - -```js -const braces = require('braces'); -console.log(braces('a/b{1,3}/{x,y,z}')); -//=> [ 'a/b(1|3)/(x|y|z)' ] -console.log(braces('a/b{1,3}/{x,y,z}', {quantifiers: true})); -//=> [ 'a/b{1,3}/(x|y|z)' ] -console.log(braces('a/b{1,3}/{x,y,z}', {quantifiers: true, expand: true})); -//=> [ 'a/b{1,3}/x', 'a/b{1,3}/y', 'a/b{1,3}/z' ] -``` - -### options.unescape - -**Type**: `Boolean` - -**Default**: `undefined` - -**Description**: Strip backslashes that were used for escaping from the result. - -## What is "brace expansion"? - -Brace expansion is a type of parameter expansion that was made popular by unix shells for generating lists of strings, as well as regex-like matching when used alongside wildcards (globs). - -In addition to "expansion", braces are also used for matching. In other words: - -* [brace expansion](#brace-expansion) is for generating new lists -* [brace matching](#brace-matching) is for filtering existing lists - -
-More about brace expansion (click to expand)
-
-There are two main types of brace expansion:
-
-1. **lists**: which are defined using comma-separated values inside curly braces: `{a,b,c}`
-2. **sequences**: which are defined using a starting value and an ending value, separated by two dots: `a{1..3}b`. Optionally, a third argument may be passed to define a "step" or increment to use: `a{1..100..10}b`. These are also sometimes referred to as "ranges".
-
-Here are some example brace patterns to illustrate how they work:
-
-**Sets**
-
-```
-{a,b,c} => a b c
-{a,b,c}{1,2} => a1 a2 b1 b2 c1 c2
-```
-
-**Sequences**
-
-```
-{1..9} => 1 2 3 4 5 6 7 8 9
-{4..-4} => 4 3 2 1 0 -1 -2 -3 -4
-{1..20..3} => 1 4 7 10 13 16 19
-{a..j} => a b c d e f g h i j
-{j..a} => j i h g f e d c b a
-{a..z..3} => a d g j m p s v y
-```
-
-**Combination**
-
-Sets and sequences can be mixed together or used along with any other strings.
-
-```
-{a,b,c}{1..3} => a1 a2 a3 b1 b2 b3 c1 c2 c3
-foo/{a,b,c}/bar => foo/a/bar foo/b/bar foo/c/bar
-```
-
-The fact that braces can be "expanded" from relatively simple patterns makes them ideal for quickly generating test fixtures, file paths, and similar use cases.
-
-## Brace matching
-
-In addition to _expansion_, brace patterns are also useful for performing regular-expression-like matching.
-
-For example, the pattern `foo/{1..3}/bar` would match any of the following strings:
-
-```
-foo/1/bar
-foo/2/bar
-foo/3/bar
-```
-
-But not:
-
-```
-baz/1/qux
-baz/2/qux
-baz/3/qux
-```
-
-Braces can also be combined with [glob patterns](https://github.com/jonschlinkert/micromatch) to perform more advanced wildcard matching. For example, the pattern `*/{1..3}/*` would match any of the following strings:
-
-```
-foo/1/bar
-foo/2/bar
-foo/3/bar
-baz/1/qux
-baz/2/qux
-baz/3/qux
-```
-
-(A short sketch showing how a compiled brace pattern can be used for this kind of matching appears right after this expandable section.)
-
-## Brace matching pitfalls
-
-Although brace patterns offer a user-friendly way of matching ranges or sets of strings, there are also some major disadvantages and potential risks you should be aware of.
-
-### tldr
-
-**"brace bombs"**
-
-* brace expansion can eat up a huge amount of processing resources
-* as brace patterns increase _linearly in size_, the system resources required to expand the pattern increase exponentially
-* users can accidentally (or intentionally) exhaust your system's resources resulting in the equivalent of a DoS attack (bonus: no programming knowledge is required!)
-
-For a more detailed explanation with examples, see the [geometric complexity](#geometric-complexity) section.
-
-### The solution
-
-Jump to the [performance section](#performance) to see how Braces solves this problem in comparison to other libraries.
-
-### Geometric complexity
-
-At minimum, brace patterns with sets limited to two elements have quadratic or `O(n^2)` complexity. But the complexity of the algorithm increases exponentially as the number of sets, _and elements per set_, increases, which is `O(n^c)`.
- -For example, the following sets demonstrate quadratic (`O(n^2)`) complexity: - -``` -{1,2}{3,4} => (2X2) => 13 14 23 24 -{1,2}{3,4}{5,6} => (2X2X2) => 135 136 145 146 235 236 245 246 -``` - -But add an element to a set, and we get a n-fold Cartesian product with `O(n^c)` complexity: - -``` -{1,2,3}{4,5,6}{7,8,9} => (3X3X3) => 147 148 149 157 158 159 167 168 169 247 248 - 249 257 258 259 267 268 269 347 348 349 357 - 358 359 367 368 369 -``` - -Now, imagine how this complexity grows given that each element is a n-tuple: - -``` -{1..100}{1..100} => (100X100) => 10,000 elements (38.4 kB) -{1..100}{1..100}{1..100} => (100X100X100) => 1,000,000 elements (5.76 MB) -``` - -Although these examples are clearly contrived, they demonstrate how brace patterns can quickly grow out of control. - -**More information** - -Interested in learning more about brace expansion? - -* [linuxjournal/bash-brace-expansion](http://www.linuxjournal.com/content/bash-brace-expansion) -* [rosettacode/Brace_expansion](https://rosettacode.org/wiki/Brace_expansion) -* [cartesian product](https://en.wikipedia.org/wiki/Cartesian_product) - -
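The "Brace matching" section above lists which strings a pattern such as `foo/{1..3}/bar` should and should not match, but it stops short of showing the matching step itself. A minimal sketch, assuming the compiled (non-expanded) output of `braces()` can be used directly as a regular-expression source, as the compiled examples earlier suggest:

```js
const braces = require('braces');

// Compile (rather than expand) the pattern. For this input, braces returns
// an array with a single regex-ready source such as 'foo/([1-3])/bar'.
const [source] = braces('foo/{1..3}/bar');

// Anchor the source so the entire string has to match.
const re = new RegExp(`^${source}$`);

console.log(re.test('foo/2/bar')); // true
console.log(re.test('foo/4/bar')); // false
console.log(re.test('baz/2/qux')); // false
```

In practice a glob matcher such as [micromatch](https://github.com/jonschlinkert/micromatch) handles this wrapping (plus wildcard handling) for you; the anchored `RegExp` here only makes the matching behaviour described above concrete.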
                - -## Performance - -Braces is not only screaming fast, it's also more accurate the other brace expansion libraries. - -### Better algorithms - -Fortunately there is a solution to the ["brace bomb" problem](#brace-matching-pitfalls): _don't expand brace patterns into an array when they're used for matching_. - -Instead, convert the pattern into an optimized regular expression. This is easier said than done, and braces is the only library that does this currently. - -**The proof is in the numbers** - -Minimatch gets exponentially slower as patterns increase in complexity, braces does not. The following results were generated using `braces()` and `minimatch.braceExpand()`, respectively. - -| **Pattern** | **braces** | **[minimatch][]** | -| --- | --- | --- | -| `{1..9007199254740991}`[^1] | `298 B` (5ms 459μs)| N/A (freezes) | -| `{1..1000000000000000}` | `41 B` (1ms 15μs) | N/A (freezes) | -| `{1..100000000000000}` | `40 B` (890μs) | N/A (freezes) | -| `{1..10000000000000}` | `39 B` (2ms 49μs) | N/A (freezes) | -| `{1..1000000000000}` | `38 B` (608μs) | N/A (freezes) | -| `{1..100000000000}` | `37 B` (397μs) | N/A (freezes) | -| `{1..10000000000}` | `35 B` (983μs) | N/A (freezes) | -| `{1..1000000000}` | `34 B` (798μs) | N/A (freezes) | -| `{1..100000000}` | `33 B` (733μs) | N/A (freezes) | -| `{1..10000000}` | `32 B` (5ms 632μs) | `78.89 MB` (16s 388ms 569μs) | -| `{1..1000000}` | `31 B` (1ms 381μs) | `6.89 MB` (1s 496ms 887μs) | -| `{1..100000}` | `30 B` (950μs) | `588.89 kB` (146ms 921μs) | -| `{1..10000}` | `29 B` (1ms 114μs) | `48.89 kB` (14ms 187μs) | -| `{1..1000}` | `28 B` (760μs) | `3.89 kB` (1ms 453μs) | -| `{1..100}` | `22 B` (345μs) | `291 B` (196μs) | -| `{1..10}` | `10 B` (533μs) | `20 B` (37μs) | -| `{1..3}` | `7 B` (190μs) | `5 B` (27μs) | - -### Faster algorithms - -When you need expansion, braces is still much faster. - -_(the following results were generated using `braces.expand()` and `minimatch.braceExpand()`, respectively)_ - -| **Pattern** | **braces** | **[minimatch][]** | -| --- | --- | --- | -| `{1..10000000}` | `78.89 MB` (2s 698ms 642μs) | `78.89 MB` (18s 601ms 974μs) | -| `{1..1000000}` | `6.89 MB` (458ms 576μs) | `6.89 MB` (1s 491ms 621μs) | -| `{1..100000}` | `588.89 kB` (20ms 728μs) | `588.89 kB` (156ms 919μs) | -| `{1..10000}` | `48.89 kB` (2ms 202μs) | `48.89 kB` (13ms 641μs) | -| `{1..1000}` | `3.89 kB` (1ms 796μs) | `3.89 kB` (1ms 958μs) | -| `{1..100}` | `291 B` (424μs) | `291 B` (211μs) | -| `{1..10}` | `20 B` (487μs) | `20 B` (72μs) | -| `{1..3}` | `5 B` (166μs) | `5 B` (27μs) | - -If you'd like to run these comparisons yourself, see [test/support/generate.js](test/support/generate.js). - -## Benchmarks - -### Running benchmarks - -Install dev dependencies: - -```bash -npm i -d && npm benchmark -``` - -### Latest results - -Braces is more accurate, without sacrificing performance. 
- -```bash -# range (expanded) - braces x 29,040 ops/sec ±3.69% (91 runs sampled)) - minimatch x 4,735 ops/sec ±1.28% (90 runs sampled) - -# range (optimized for regex) - braces x 382,878 ops/sec ±0.56% (94 runs sampled) - minimatch x 1,040 ops/sec ±0.44% (93 runs sampled) - -# nested ranges (expanded) - braces x 19,744 ops/sec ±2.27% (92 runs sampled)) - minimatch x 4,579 ops/sec ±0.50% (93 runs sampled) - -# nested ranges (optimized for regex) - braces x 246,019 ops/sec ±2.02% (93 runs sampled) - minimatch x 1,028 ops/sec ±0.39% (94 runs sampled) - -# set (expanded) - braces x 138,641 ops/sec ±0.53% (95 runs sampled) - minimatch x 219,582 ops/sec ±0.98% (94 runs sampled) - -# set (optimized for regex) - braces x 388,408 ops/sec ±0.41% (95 runs sampled) - minimatch x 44,724 ops/sec ±0.91% (89 runs sampled) - -# nested sets (expanded) - braces x 84,966 ops/sec ±0.48% (94 runs sampled) - minimatch x 140,720 ops/sec ±0.37% (95 runs sampled) - -# nested sets (optimized for regex) - braces x 263,340 ops/sec ±2.06% (92 runs sampled) - minimatch x 28,714 ops/sec ±0.40% (90 runs sampled) -``` - -## About - -
                -Contributing - -Pull requests and stars are always welcome. For bugs and feature requests, [please create an issue](../../issues/new). - -
                - -
                -Running Tests - -Running and reviewing unit tests is a great way to get familiarized with a library and its API. You can install dependencies and run tests with the following command: - -```sh -$ npm install && npm test -``` - -
                - -
                -Building docs - -_(This project's readme.md is generated by [verb](https://github.com/verbose/verb-generate-readme), please don't edit the readme directly. Any changes to the readme must be made in the [.verb.md](.verb.md) readme template.)_ - -To generate the readme, run the following command: - -```sh -$ npm install -g verbose/verb#dev verb-generate-readme && verb -``` - -
                - -### Contributors - -| **Commits** | **Contributor** | -| --- | --- | -| 197 | [jonschlinkert](https://github.com/jonschlinkert) | -| 4 | [doowb](https://github.com/doowb) | -| 1 | [es128](https://github.com/es128) | -| 1 | [eush77](https://github.com/eush77) | -| 1 | [hemanth](https://github.com/hemanth) | -| 1 | [wtgtybhertgeghgtwtg](https://github.com/wtgtybhertgeghgtwtg) | - -### Author - -**Jon Schlinkert** - -* [GitHub Profile](https://github.com/jonschlinkert) -* [Twitter Profile](https://twitter.com/jonschlinkert) -* [LinkedIn Profile](https://linkedin.com/in/jonschlinkert) - -### License - -Copyright © 2019, [Jon Schlinkert](https://github.com/jonschlinkert). -Released under the [MIT License](LICENSE). - -*** - -_This file was generated by [verb-generate-readme](https://github.com/verbose/verb-generate-readme), v0.8.0, on April 08, 2019._ \ No newline at end of file diff --git a/spaces/zhang-wei-jian/docker/node_modules/simple-update-notifier/node_modules/semver/ranges/gtr.js b/spaces/zhang-wei-jian/docker/node_modules/simple-update-notifier/node_modules/semver/ranges/gtr.js deleted file mode 100644 index db7e35599dd56651565ca7b85d752bb8f5dfe4a6..0000000000000000000000000000000000000000 --- a/spaces/zhang-wei-jian/docker/node_modules/simple-update-notifier/node_modules/semver/ranges/gtr.js +++ /dev/null @@ -1,4 +0,0 @@ -// Determine if version is greater than all the versions possible in the range. -const outside = require('./outside') -const gtr = (version, range, options) => outside(version, range, '>', options) -module.exports = gtr diff --git "a/spaces/zhanghaohui/szu-gpt-academic/crazy_functions/\346\200\273\347\273\223word\346\226\207\346\241\243.py" "b/spaces/zhanghaohui/szu-gpt-academic/crazy_functions/\346\200\273\347\273\223word\346\226\207\346\241\243.py" deleted file mode 100644 index eada69dc65587782125c0809381260a6bbdce225..0000000000000000000000000000000000000000 --- "a/spaces/zhanghaohui/szu-gpt-academic/crazy_functions/\346\200\273\347\273\223word\346\226\207\346\241\243.py" +++ /dev/null @@ -1,127 +0,0 @@ -from toolbox import update_ui -from toolbox import CatchException, report_execption, write_results_to_file -from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive -fast_debug = False - - -def 解析docx(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt): - import time, os - # pip install python-docx 用于docx格式,跨平台 - # pip install pywin32 用于doc格式,仅支持Win平台 - for index, fp in enumerate(file_manifest): - if fp.split(".")[-1] == "docx": - from docx import Document - doc = Document(fp) - file_content = "\n".join([para.text for para in doc.paragraphs]) - else: - import win32com.client - word = win32com.client.Dispatch("Word.Application") - word.visible = False - # 打开文件 - print('fp', os.getcwd()) - doc = word.Documents.Open(os.getcwd() + '/' + fp) - # file_content = doc.Content.Text - doc = word.ActiveDocument - file_content = doc.Range().Text - doc.Close() - word.Quit() - - print(file_content) - # private_upload里面的文件名在解压zip后容易出现乱码(rar和7z格式正常),故可以只分析文章内容,不输入文件名 - from .crazy_utils import breakdown_txt_to_satisfy_token_limit_for_pdf - from request_llm.bridge_all import model_info - max_token = model_info[llm_kwargs['llm_model']]['max_token'] - TOKEN_LIMIT_PER_FRAGMENT = max_token * 3 // 4 - paper_fragments = breakdown_txt_to_satisfy_token_limit_for_pdf( - txt=file_content, - get_token_fn=model_info[llm_kwargs['llm_model']]['token_cnt'], - limit=TOKEN_LIMIT_PER_FRAGMENT - ) - this_paper_history = [] - 
for i, paper_frag in enumerate(paper_fragments): - i_say = f'请对下面的文章片段用中文做概述,文件名是{os.path.relpath(fp, project_folder)},文章内容是 ```{paper_frag}```' - i_say_show_user = f'请对下面的文章片段做概述: {os.path.abspath(fp)}的第{i+1}/{len(paper_fragments)}个片段。' - gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive( - inputs=i_say, - inputs_show_user=i_say_show_user, - llm_kwargs=llm_kwargs, - chatbot=chatbot, - history=[], - sys_prompt="总结文章。" - ) - - chatbot[-1] = (i_say_show_user, gpt_say) - history.extend([i_say_show_user,gpt_say]) - this_paper_history.extend([i_say_show_user,gpt_say]) - - # 已经对该文章的所有片段总结完毕,如果文章被切分了, - if len(paper_fragments) > 1: - i_say = f"根据以上的对话,总结文章{os.path.abspath(fp)}的主要内容。" - gpt_say = yield from request_gpt_model_in_new_thread_with_ui_alive( - inputs=i_say, - inputs_show_user=i_say, - llm_kwargs=llm_kwargs, - chatbot=chatbot, - history=this_paper_history, - sys_prompt="总结文章。" - ) - - history.extend([i_say,gpt_say]) - this_paper_history.extend([i_say,gpt_say]) - - res = write_results_to_file(history) - chatbot.append(("完成了吗?", res)) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - - res = write_results_to_file(history) - chatbot.append(("所有文件都总结完成了吗?", res)) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - - -@CatchException -def 总结word文档(txt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port): - import glob, os - - # 基本信息:功能、贡献者 - chatbot.append([ - "函数插件功能?", - "批量总结Word文档。函数插件贡献者: JasonGuo1。注意, 如果是.doc文件, 请先转化为.docx格式。"]) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - - # 尝试导入依赖,如果缺少依赖,则给出安装建议 - try: - from docx import Document - except: - report_execption(chatbot, history, - a=f"解析项目: {txt}", - b=f"导入软件依赖失败。使用该模块需要额外依赖,安装方法```pip install --upgrade python-docx pywin32```。") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - - # 清空历史,以免输入溢出 - history = [] - - # 检测输入参数,如没有给定输入参数,直接退出 - if os.path.exists(txt): - project_folder = txt - else: - if txt == "": txt = '空空如也的输入栏' - report_execption(chatbot, history, a=f"解析项目: {txt}", b=f"找不到本地项目或无权访问: {txt}") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - - # 搜索需要处理的文件清单 - if txt.endswith('.docx') or txt.endswith('.doc'): - file_manifest = [txt] - else: - file_manifest = [f for f in glob.glob(f'{project_folder}/**/*.docx', recursive=True)] + \ - [f for f in glob.glob(f'{project_folder}/**/*.doc', recursive=True)] - - # 如果没找到任何文件 - if len(file_manifest) == 0: - report_execption(chatbot, history, a=f"解析项目: {txt}", b=f"找不到任何.docx或doc文件: {txt}") - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 - return - - # 开始正式执行任务 - yield from 解析docx(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt)