diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Artensoft Photo Collage Maker Pro 2.0.135 Key How to Make Stunning Photo Collages in Minutes.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Artensoft Photo Collage Maker Pro 2.0.135 Key How to Make Stunning Photo Collages in Minutes.md deleted file mode 100644 index 0413ae46f57e1c94301da63903b0b8037d94f2ec..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Artensoft Photo Collage Maker Pro 2.0.135 Key How to Make Stunning Photo Collages in Minutes.md +++ /dev/null @@ -1,157 +0,0 @@ - -
- Features: What are the main features of Artensoft Photo Collage Maker Pro?
- Benefits: How can Artensoft Photo Collage Maker Pro help you create amazing photo collages?
- How to use: How can you download, install and use Artensoft Photo Collage Maker Pro?
- Pros and cons: What are the advantages and disadvantages of Artensoft Photo Collage Maker Pro?
- Conclusion: A summary of the main points and a call to action. | | H2: Introduction | - Explain what Artensoft Photo Collage Maker Pro is and what it does.
- Mention that it is a software that allows you to create photo collages from your own photos.
- Give some examples of photo collages that you can create with Artensoft Photo Collage Maker Pro. | | H2: Features | - List the main features of Artensoft Photo Collage Maker Pro, such as:
- It can create photo collages from any number of photos.
- It can automatically adjust the size, orientation and color of the photos to create a seamless collage.
- It can use any photo as a source for the collage, such as a portrait, a landscape or a logo.
- It can save the collage as a high-resolution image or print it directly from the software.
- It can edit the collage by adding, removing or moving photos, changing the background color or applying filters. | | H2: Benefits | - Explain how Artensoft Photo Collage Maker Pro can help you create amazing photo collages, such as:
- It can help you preserve your memories in a creative way.
- It can help you showcase your photos in a unique way.
- It can help you express your personality and style.
- It can help you make personalized gifts for your friends and family. | | H2: How to use | - Provide a step-by-step guide on how to download, install and use Artensoft Photo Collage Maker Pro, such as:
- Visit the official website of Artensoft Photo Collage Maker Pro and click on the download button.
- Run the installer and follow the instructions to complete the installation process.
- Launch the software and select the photos that you want to use for your collage.
- Choose a source photo for your collage and adjust the settings according to your preferences.
- Preview the collage and make any changes if needed.
- Save or print your collage and enjoy your masterpiece. | | H2: Pros and cons | - Compare the advantages and disadvantages of Artensoft Photo Collage Maker Pro, such as:
- Pros:
- It is easy to use and has a user-friendly interface.
- It has a lot of options and features to customize your collage.
- It can create high-quality and realistic collages from any photos.
- It is compatible with Windows XP, Vista, 7, 8 and 10.
- It has a free trial version that you can try before buying.
- Cons:
- It is not available for Mac or Linux users.
- It requires a lot of disk space and memory to run smoothly.
- It may take some time to process large numbers of photos or complex collages. | | H2: Conclusion | - Summarize the main points of the article and provide a call to action, such as:
- Artensoft Photo Collage Maker Pro is a powerful and versatile software that allows you to create stunning photo collages from your own photos.
- It has many features and benefits that make it stand out from other photo collage makers.
- It is easy to use and has a free trial version that you can download from their website.
- If you want to unleash your creativity and turn your photos into amazing artworks, you should try Artensoft Photo Collage Maker Pro today! | # Article with HTML formatting

Artensoft Photo Collage Maker Pro 2.0.135 Key: A Review

-

If you are looking for software that can help you create stunning photo collages from your own photos, you might want to check out Artensoft Photo Collage Maker Pro 2.0.135 Key.

-

Artensoft Photo Collage Maker Pro 2.0.135 Key


Download ✸✸✸ https://byltly.com/2uKzXK



-

This software lets you create photo collages from any number of photos, using any photo you choose as the source image for the collage.

-

You can create photo collages that look like portraits, landscapes, logos or anything else that you can imagine.

-

In this article, we will review Artensoft Photo Collage Maker Pro 2.0.135 Key and see what it can do for you.

-

Features

-

Artensoft Photo Collage Maker Pro 2.0.135 Key has many features that make it one of the best photo collage makers on the market.

-

Some of these features are:

- It can create photo collages from any number of photos.
- It can automatically adjust the size, orientation and color of the photos to create a seamless collage.
- It can use any photo as a source for the collage, such as a portrait, a landscape or a logo.
- It can save the collage as a high-resolution image or print it directly from the software.
- It can edit the collage by adding, removing or moving photos, changing the background color or applying filters.

Benefits

-

Besides having many features, Artensoft Photo Collage Maker Pro 2.0.135 Key also has many benefits that make it worth trying.

-

Some of these benefits are:

- It can help you preserve your memories in a creative way.
- It can help you showcase your photos in a unique way.
- It can help you express your personality and style.
- It can help you make personalized gifts for your friends and family.

How to use

Here is a step-by-step guide on how you can download, install and use it:

-
    -
  1. Visit the official website of Artensoft Photo Collage Maker Pro and click on the download button.
  2. -

    You can download the software for free and try it for 30 days without any limitations.

    -
  3. Run the installer and follow the instructions to complete the installation process.
  4. -

    You can install the software on any Windows PC that meets the minimum system requirements.

    -
  5. Launch the software and select the photos that you want to use for your collage.
  6. -

    You can browse your computer or drag and drop your photos into the software.

    -

    You can also use the built-in photo browser to find photos from your folders, albums or online sources.

    -
  7. Choose a source photo for your collage and adjust the settings according to your preferences.
  8. -

    You can choose any photo that you like as the base for your collage, such as a portrait, a landscape or a logo.

    -

    You can also adjust the settings such as the number of photos, the size of the cells, the color correction and the rotation angle.

    -
  9. Preview the collage and make any changes if needed.
  10. -

You can see what your collage looks like before saving or printing it.

    -

    You can also edit the collage by adding, removing or moving photos, changing the background color or applying filters.

    -
  11. Save or print your collage and enjoy your masterpiece.
  12. -

    You can save your collage as a JPEG, BMP, TIFF or PNG file with up to 300 dpi resolution.

    -

    You can also print your collage directly from the software using any printer that supports Windows printing.

    -
-

Pros and cons

-

Like any software, Artensoft Photo Collage Maker Pro 2.0.135 Key has its pros and cons that you should consider before buying it.

-

Here are some of them:

-

Pros

- It is easy to use and has a user-friendly interface.
- It has a lot of options and features to customize your collage.
- It can create high-quality and realistic collages from any photos.
- It is compatible with Windows XP, Vista, 7, 8 and 10.
- It has a free trial version that you can try before buying.

Cons

- It is not available for Mac or Linux users.
- It requires a lot of disk space and memory to run smoothly.
- It may take some time to process large numbers of photos or complex collages.

Conclusion

-

In conclusion, Artensoft Photo Collage Maker Pro 2.0.135 Key is a powerful and versatile program that allows you to create stunning photo collages from your own photos.

-

It has many features and benefits that make it stand out from other photo collage makers. It is easy to use and has a free trial version that you can download from their website.

-

If you want to unleash your creativity and turn your photos into amazing artworks, you should try Artensoft Photo Collage Maker Pro 2.0.135 Key today!

-

Frequently Asked Questions

-
    -
  1. How much does Artensoft Photo Collage Maker Pro 2.0.135 Key cost?
  2. -

    The software costs $79.95 for a single-user license. You can also buy a family license for $149.95 or a business license for $299.95. You can pay with PayPal or credit card on their website.

    -
  3. What are the minimum system requirements for Artensoft Photo Collage Maker Pro 2.0.135 Key?
  4. -

    The minimum system requirements are:
    - Windows XP/Vista/7/8/10
    - Pentium IV processor or higher
    - 1 GB of RAM or more
    - 100 MB of free disk space or more
    - A monitor with at least 1024x768 resolution

    -
  5. Can I use Artensoft Photo Collage Maker Pro 2.0.135 Key on multiple computers?
  6. -

    If you buy a single-user license, you can only use it on one computer at a time. If you buy a family license, you can use it on up to five computers in your household. If you buy a business license, you can use it on up to ten computers in your company.

    -
  7. Can I use Artensoft Photo Collage Maker Pro 2.0.135 Key offline?
  8. -

    Yes, you can use it offline once you have downloaded and installed it on your computer. You don't need an internet connection to create collages with this software.

    -
  9. Can I get technical support for Artensoft Photo Collage Maker Pro 2.0.135 Key?
  10. -

    Yes, you can get technical support by contacting their customer service via email at support@artensoft.com. They will reply within 24 hours on weekdays and within 48 hours on weekends. You can also visit their website for more information and tutorials on how to use their software.

    -

    0a6ba089eb
    -
    -
    \ No newline at end of file diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Atlas Ti Coding ((TOP)).md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Atlas Ti Coding ((TOP)).md deleted file mode 100644 index 03d8ecf2db5b7787bab210e768ed65059362d170..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Atlas Ti Coding ((TOP)).md +++ /dev/null @@ -1,15 +0,0 @@ -
    -

    How to Use Atlas TI for Qualitative Data Analysis

    -

    Atlas TI is a software program that allows you to perform qualitative data analysis (QDA) on various types of data, such as text, audio, video, images, and geospatial data. Atlas TI helps you to organize, explore, and interpret your data using a method called coding. Coding is the process of assigning labels or categories to segments of data that represent themes, concepts, patterns, or relationships. Coding helps you to make sense of your data and to discover new insights and meanings.

    -

    atlas ti coding


    Download File ····· https://byltly.com/2uKvvA



    -

    But how do you use Atlas TI for coding your data? In this article, we will guide you through the basic steps of using Atlas TI for QDA. We will assume that you have already installed Atlas TI on your computer and that you have some data ready to analyze. Here are the steps:

    -
      -
    1. Create a project. A project is a file that contains all your data and codes. To create a project, open Atlas TI and click on File > New Project. Give your project a name and a location and click OK.
    2. -
    3. Add documents. Documents are the files that contain your data. To add documents to your project, click on Project > Add Documents. You can add documents from your computer or from online sources, such as Dropbox or Google Drive. You can also drag and drop files into the project window. Atlas TI supports various formats, such as PDF, DOCX, TXT, MP3, MP4, JPG, PNG, and KML.
    4. -
    5. Create codes. Codes are the labels or categories that you assign to segments of data. To create codes, click on Codes > New Code. Give your code a name and a description and click OK. You can also create codes by selecting a segment of data and pressing Ctrl+K.
    6. -
    7. Assign codes. To assign codes to segments of data, select a segment of data and drag and drop it onto a code in the code list. You can also right-click on a segment of data and choose Assign Codes. You can assign multiple codes to the same segment of data or assign the same code to multiple segments of data.
    8. -
    9. Analyze codes. To analyze your codes, you can use various tools and features in Atlas TI, such as queries, networks, maps, memos, comments, and reports. These tools help you to explore the relationships between codes, visualize your data, document your analysis process, and generate outputs for presentation or publication.
    10. -
    -

Atlas TI is a powerful and user-friendly tool for QDA. By using Atlas TI to code your data, you can deepen your understanding of it and discover new insights and meanings. To learn more about Atlas TI and its features, visit https://atlasti.com/.

    ddb901b051
    -
    -
    \ No newline at end of file diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Vidstream Videos to Your Device with Example Downloader.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Vidstream Videos to Your Device with Example Downloader.md deleted file mode 100644 index 9f18a645928480be594c13f2e04d13c07f7c7188..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download Vidstream Videos to Your Device with Example Downloader.md +++ /dev/null @@ -1,29 +0,0 @@ - -

    How to Download Vidstream Videos Easily and Quickly

    -

    Vidstream is a popular online platform that allows you to watch and stream videos of various genres and categories. Whether you are a fan of movies, TV shows, anime, sports, or documentaries, you can find something to enjoy on Vidstream. But what if you want to download Vidstream videos to your device for offline viewing or sharing? In this article, we will show you how to do that in a few simple steps.

    -

First of all, you need a reliable and powerful video downloader tool that can handle Vidstream videos. There are many options available on the internet, but we recommend using Example Downloader, a free and easy-to-use program that can download any video from any website in high quality and at fast speeds. You can download it from the official website or by clicking the link below.

    -

    download vidstream


    Download Filehttps://byltly.com/2uKvLg



    -

    Download Example Downloader

    -

    Once you have installed Example Downloader on your device, you can follow these steps to download Vidstream videos:

    -
      -
    1. Open your browser and go to the Vidstream website. Find the video you want to download and copy its URL from the address bar.
    2. -
    3. Launch Example Downloader and paste the URL into the input box. Click the "Analyze" button and wait for a few seconds.
    4. -
    5. The software will display the available video formats and resolutions for the Vidstream video. Choose the one you prefer and click the "Download" button.
    6. -
    7. The software will start downloading the Vidstream video to your device. You can check the progress and manage the downloaded files in the "Downloaded" tab.
    8. -
    -

    That's it! You have successfully downloaded a Vidstream video to your device. You can now watch it offline or share it with your friends. Example Downloader also supports batch downloading, so you can download multiple Vidstream videos at once. You can also use it to download videos from other websites, such as YouTube, Facebook, Instagram, Vimeo, Dailymotion, and more.

    -

    If you have any questions or problems with downloading Vidstream videos using Example Downloader, please feel free to contact us at support@example.com. We will be happy to help you out.

    -

    Thank you for choosing Example Downloader as your video downloader tool. We hope you enjoy watching your favorite Vidstream videos anytime and anywhere.

    -

    - -

    Why Download Vidstream Videos?

    -

    You might be wondering why you would want to download Vidstream videos in the first place. After all, you can watch them online anytime you want. Well, there are several reasons why downloading Vidstream videos can be beneficial for you. Here are some of them:

    - -

    As you can see, downloading Vidstream videos can enhance your viewing experience and give you more options and flexibility. With Example Downloader, you can do that easily and quickly.

    ddb901b051
    -
    -
    \ No newline at end of file diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Fix 4ormulator DLL Missing or Not Found Error on Windows.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Fix 4ormulator DLL Missing or Not Found Error on Windows.md deleted file mode 100644 index 5abbf234befdc66a557ce474407afeb9603e8a06..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Fix 4ormulator DLL Missing or Not Found Error on Windows.md +++ /dev/null @@ -1,44 +0,0 @@ -
    -

    How to Download and Install 4ormulator DLL for Windows

    -

    If you are looking for a way to download and install 4ormulator DLL for Windows, you have come to the right place. 4ormulator DLL is a dynamic link library that allows you to use the 4ormulator vocal effects processor in your audio applications. 4ormulator DLL can create various vocal effects such as pitch shifting, harmonizing, vocoding, robotizing, and more.

    -

    4ormulator dll download


    Download Zip ::: https://byltly.com/2uKwe3



    -

    In this article, we will show you how to download and install 4ormulator DLL for Windows in a few simple steps. We will also provide you with some tips on how to troubleshoot common errors that may occur when using 4ormulator DLL.

    -

    Step 1: Download 4ormulator DLL

    -

    The first step is to download 4ormulator DLL from a reliable source. You can use the link below to download 4ormulator DLL for free:

    -https://www.dll-files.com/4ormulator.dll.html -

    On this website, you will see two versions of 4ormulator DLL: one for 32-bit systems and one for 64-bit systems. Make sure you download the version that matches your system type. You can check your system type by following these steps:

1. Right-click on This PC (or My Computer) on your desktop or in File Explorer and select Properties.
2. Look at the System type entry, which tells you whether you are running a 32-bit or a 64-bit operating system.

    Once you have downloaded the correct version of 4ormulator DLL, save it to a folder where you can easily find it later.

    -

    Step 2: Install 4ormulator DLL

    -

    The next step is to install 4ormulator DLL on your computer. There are two ways to do this: manually or automatically.

    -

    Manual Installation

    -

    To install 4ormulator DLL manually, you need to copy and paste it into the appropriate folder on your computer. The folder depends on the version of Windows you are using and the application that requires 4ormulator DLL. Here are some common folders where you can place 4ormulator DLL:

    -

- C:\Windows\System32 (the main system folder, used for 64-bit DLLs on 64-bit Windows).
- C:\Windows\SysWOW64 (used for 32-bit DLLs on 64-bit Windows).
- The installation folder of the audio application that will load the DLL, which is often the simplest and safest choice.

    You can also check the installation instructions of the application that requires 4ormulator DLL to see where it expects to find the DLL file.

    -

    After copying and pasting 4ormulator DLL into the appropriate folder, you need to register it in the Windows registry. To do this, follow these steps:

1. Open the Start menu, type cmd, then right-click Command Prompt and choose Run as administrator.
2. In the elevated prompt, run regsvr32 followed by the full path to the DLL, for example regsvr32 "C:\Windows\System32\4ormulator.dll".
3. If Windows reports that the DllRegisterServer entry point was not found, the DLL does not support self-registration, and simply copying it into the right folder is enough.

If you prefer to script this step, see the short sketch below.
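As an optional extra, here is a minimal Python sketch of the same procedure. It is only an illustration, not part of any official installer: it assumes you have Python installed, that you copied the DLL to C:\Windows\System32 (change the path if you used another folder), and that 4ormulator.dll actually exposes COM self-registration, which regsvr32 requires. Run it from an elevated (administrator) prompt.

```python
# Minimal sketch: report the Windows architecture and register a DLL with regsvr32.
# Assumptions: the DLL path below is a placeholder, and the DLL supports COM
# self-registration (regsvr32 fails otherwise). Run from an elevated prompt.
import platform
import subprocess

DLL_PATH = r"C:\Windows\System32\4ormulator.dll"  # adjust to where you saved the DLL

def windows_is_64bit() -> bool:
    # platform.machine() reports the OS architecture, e.g. 'AMD64' on 64-bit Windows
    return platform.machine().endswith("64")

def register_dll(dll_path: str) -> None:
    # /s runs regsvr32 silently; check=True raises an error if registration fails
    subprocess.run(["regsvr32", "/s", dll_path], check=True)

if __name__ == "__main__":
    print("Detected a", "64-bit" if windows_is_64bit() else "32-bit", "version of Windows")
    register_dll(DLL_PATH)
```

If regsvr32 reports an error here, fall back to the manual steps above or use the automatic method described next.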

    Automatic Installation

    -

    To install 4ormulator DLL automatically, you can use a software tool that will scan your system and fix any missing or corrupted DLL files. One such tool is DLL-files.com Client, which you can download from here:

    -https://www.dll-files.com/client/landing/ -

    DLL-files.com Client is a paid software that offers a free trial for one DLL file fix. To use it, follow these steps:

    - ddb901b051
    -
    -
    \ No newline at end of file diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Antamedia Internet Caffe V7 Crack !LINK! Full Rar.md b/spaces/1gistliPinn/ChatGPT4/Examples/Antamedia Internet Caffe V7 Crack !LINK! Full Rar.md deleted file mode 100644 index 72513d475ac9f1bae6afb12ed580db1431c0cf38..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/Antamedia Internet Caffe V7 Crack !LINK! Full Rar.md +++ /dev/null @@ -1,28 +0,0 @@ -
    -

    How to Download and Install Antamedia Internet Caffe V7 Crack Full Rar

    -

    If you are looking for a software that can help you manage your internet cafe, gaming center, or public computers, you might want to check out Antamedia Internet Caffe V7. This software is designed to control and secure your network, collect payment or allow free access, control time and bandwidth, manage WiFi connections, and more. It also includes a point of sale solution and a printer control feature.

    -

    Antamedia Internet Caffe V7 Crack Full Rar


    Download Zip »»» https://imgfil.com/2uxZ1c



    -

    However, the software is not free and you need to purchase a license to use it. If you don't want to spend money on it, you can try to download and install Antamedia Internet Caffe V7 Crack Full Rar. This is a cracked version of the software that can bypass the activation process and let you use it for free.

    -

    Where to Download Antamedia Internet Caffe V7 Crack Full Rar

    -

    There are many websites that offer Antamedia Internet Caffe V7 Crack Full Rar for download. However, not all of them are reliable and safe. Some of them may contain viruses, malware, or spyware that can harm your computer or steal your personal information. Therefore, you need to be careful when choosing where to download the file.

    -

    One of the websites that you can trust is Rapidshare.com. This is a file hosting service that allows you to upload and download files easily and quickly. You can find Antamedia Internet Caffe V7 Crack Full Rar on this website by following these steps:

    -

    - -

    How to Install Antamedia Internet Caffe V7 Crack Full Rar

    -

    After downloading Antamedia Internet Caffe V7 Crack Full Rar, you need to install it on your computer. To do this, follow these steps:

    - -

    Conclusion

    -

    Antamedia Internet Caffe V7 is a powerful software that can help you run your internet cafe business smoothly and efficiently. However, if you don't want to pay for it, you can download and install Antamedia Internet Caffe V7 Crack Full Rar from Rapidshare.com. This is a cracked version of the software that can let you use it without activation. However, be aware that using cracked software may be illegal and risky. Therefore, use it at your own discretion and responsibility.

    3cee63e6c2
    -
    -
    \ No newline at end of file diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Create Your Own Metropolis with SimCity BuildIt APK - Free Download from apkyukleme.com.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Create Your Own Metropolis with SimCity BuildIt APK - Free Download from apkyukleme.com.md deleted file mode 100644 index 354f5c3a83c7bb2a8fd0803fcafab5aa52015d4e..0000000000000000000000000000000000000000 --- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Create Your Own Metropolis with SimCity BuildIt APK - Free Download from apkyukleme.com.md +++ /dev/null @@ -1,103 +0,0 @@ -
    -

    SimCity BuildIt APK: How to Download and Play the Best City Building Game

    -

    If you love city building games, you must have heard of SimCity BuildIt, one of the most popular and addictive games in the genre. SimCity BuildIt is a mobile version of the classic SimCity game, where you can create your own city from scratch, manage its resources, services, and citizens, and watch it grow and thrive.

    -

    simcity buildit apk apkyukleme.com


    Download Ziphttps://urlin.us/2uT1Qs



    -

    SimCity BuildIt is available for free on the Google Play Store and the App Store, but if you want to enjoy some extra features and advantages, you can download the SimCity BuildIt APK from apkyukleme.com. This is a website that offers safe and reliable APK files for various Android apps and games. In this article, we will show you how to download and install SimCity BuildIt APK from apkyukleme.com, what are the features and benefits of playing SimCity BuildIt, and some tips and tricks for building a successful city in the game.

    -

    How to Download and Install SimCity BuildIt APK from apkyukleme.com

    -

    Downloading and installing SimCity BuildIt APK from apkyukleme.com is very easy and fast. Here are the steps you need to follow:

    -
      -
1. Go to apkyukleme.com on your Android device's browser.
    2. -
    3. Search for SimCity BuildIt in the search bar or browse through the categories.
    4. -
    5. Tap on the SimCity BuildIt icon and then tap on the Download button.
    6. -
    7. Wait for the APK file to download on your device.
    8. -
    9. Once the download is complete, go to your device's settings and enable the installation of apps from unknown sources.
    10. -
    11. Locate the downloaded APK file in your device's file manager and tap on it to install it.
    12. -
    13. Wait for the installation to finish and then launch the game from your app drawer or home screen.
    14. -
    -

    Congratulations! You have successfully downloaded and installed SimCity BuildIt APK from apkyukleme.com. Now you can enjoy playing the game with all its features and benefits.

    -

    What are the Features and Benefits of Playing SimCity BuildIt

    -

    SimCity BuildIt is a game that offers a lot of features and benefits for its players. Here are some of them:

- You can build hundreds of buildings and watch your city grow and thrive.
- You can customize your city style and layout to make it your own.
- You can manage your city's resources and services to keep your citizens happy.
- You can trade with other players in the Global Trade HQ.
- You can compete in challenges and events such as Club Wars and the Contest of Mayors.
- You can unlock new regions as your population grows.
- You can play offline or online, anytime and anywhere.

    SimCity BuildIt is a game that will keep you entertained for hours with its endless possibilities and fun gameplay. You will never get bored of creating your own city and watching it come to life.

    -

    Tips and Tricks for Building a Successful City in SimCity BuildIt

    -

    If you want to build a successful city in SimCity BuildIt, you need to follow some tips and tricks that will help you optimize your performance and progress. Here are some of them:

- Plan ahead before placing your buildings.
- Upgrade your residential buildings and keep your citizens happy.
- Boost your population and income by adding specializations.
- Balance your production and consumption of resources.
- Sell your excess resources or items in the Global Trade HQ or to other players in your Mayor's Club.
- Use SimCash wisely and sparingly.
- Complete tasks and achievements to earn rewards, collect free gifts from bubbles or from visiting other cities, and watch ads or videos to get extra bonuses.
- Be prepared for disasters and emergencies that may strike your city.

    Conclusion: Summary and Recommendation

    -

    SimCity BuildIt is a game that lets you create your own city and manage it as a mayor. You can download the SimCity BuildIt APK from apkyukleme.com to enjoy some extra features and advantages that are not available in the official version. SimCity BuildIt is a game that offers a lot of features and benefits for its players, such as building hundreds of buildings, customizing your city style, managing your city resources and services, trading with other players, competing in various challenges and events, unlocking new regions, and playing offline or online anytime and anywhere. SimCity BuildIt is a game that requires some tips and tricks to build a successful city, such as planning ahead before placing your buildings, upgrading your residential buildings, keeping your citizens happy, boosting your population and income by adding specializations, balancing your production and consumption of resources, selling your excess resources or items in the Global Trade HQ or to other players in your Mayor's Club, using SimCash wisely and sparingly, completing tasks and achievements to earn rewards, collecting free gifts from bubbles or from visiting other cities, watching ads or videos to get extra rewards or bonuses, and being prepared for disasters and emergencies that may strike your city.

    -

    If you are looking for a fun and engaging city building game that will keep you entertained for hours with its endless possibilities and fun gameplay, we highly recommend you to download and play SimCity BuildIt APK from apkyukleme.com. You will not regret it!

    -

    simcity buildit apk download free
    -simcity buildit apk mod unlimited money
    -simcity buildit apk latest version
    -simcity buildit apk offline
    -simcity buildit apk hack
    -simcity buildit apk obb
    -simcity buildit apk android
    -simcity buildit apk data
    -simcity buildit apk revdl
    -simcity buildit apk pure
    -simcity buildit apk mirror
    -simcity buildit apk update
    -simcity buildit apk old version
    -simcity buildit apk rexdl
    -simcity buildit apk no root
    -simcity buildit apk cheat
    -simcity buildit apk full
    -simcity buildit apk for pc
    -simcity buildit apk ios
    -simcity buildit apk 2023
    -simcity buildit apk andropalace
    -simcity buildit apk bluestacks
    -simcity buildit apk club wars
    -simcity buildit apk cracked
    -simcity buildit apk everything unlocked
    -simcity buildit apk file download
    -simcity buildit apk game guardian
    -simcity buildit apk highly compressed
    -simcity buildit apk indir
    -simcity buildit apk install
    -simcity buildit apk key generator
    -simcity buildit apk latest mod
    -simcity buildit apk mega mod
    -simcity buildit apk new update
    -simcity buildit apk online play
    -simcity buildit apk pro version
    -simcity buildit apk qooapp
    -simcity buildit apk reddit
    -simcity buildit apk size
    -simcity buildit apk unlimited everything 2023
    -simcity buildit apk vip mod
    -simcity buildit apk with unlimited money and gold coins download free for android 2023 latest version offline modded hack cheats no root needed no survey no human verification no password required no ads no in-app purchases no lucky patcher needed no internet connection required no malware no virus no bugs no errors no glitches no problems no issues no worries no troubles no difficulties no hassles no fusses no messes no complications no difficulties no troubles no worries no fusses no messes no complications.

    -

    FAQs: Five Common Questions and Answers about SimCity BuildIt

    -

    Here are some of the most common questions and answers about SimCity BuildIt:

    -

    Q: How can I get more SimCash in SimCity BuildIt?

    -

    A: There are several ways to get more SimCash in SimCity BuildIt. You can earn SimCash by completing tasks and achievements, watching ads or videos, collecting free gifts from bubbles or from visiting other cities, or buying it with real money.

    -

    Q: How can I get more Golden Keys or Platinum Keys in SimCity BuildIt?

    -

    A: You can get more Golden Keys or Platinum Keys by completing disaster challenges or event tracks. You can also buy them with SimCash.

    -

    Q: How can I unlock new regions in SimCity BuildIt?

    -

    A: You can unlock new regions in SimCity BuildIt by reaching certain population milestones in your main city. You can choose from four regions: Green Valley (coast), Limestone Cliffs (mountain), Cactus Canyon (desert), or Frosty Fjords (forest).

    -

    Q: How can I join a Mayor's Club in SimCity BuildIt?

    -

    A: You can join a Mayor's Club in SimCity BuildIt by reaching level 18 in the game. You can then search for a club that suits your preferences or create your own club. You can chat and cooperate with other mayors in your club and participate in Club Wars or Contest of Mayors.

    -

    Q: How can I backup or restore my progress in SimCity BuildIt?

    -

    A: You can backup or restore your progress in SimCity BuildIt by connecting your game to Facebook or Google Play Games. This way you can also play on multiple devices or switch devices without losing your progress.

    -


    197e85843d
    -
    -
    \ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Dragon Ball Z Game APK - Enjoy the Ultimate Action Experience on Android.md b/spaces/1phancelerku/anime-remove-background/Dragon Ball Z Game APK - Enjoy the Ultimate Action Experience on Android.md deleted file mode 100644 index 2e902115ba1fc112c1db73fcf643dbf12705b5e1..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Dragon Ball Z Game APK - Enjoy the Ultimate Action Experience on Android.md +++ /dev/null @@ -1,134 +0,0 @@ -
    -

    Download Game Dragon Ball Z APK: How to Enjoy the Epic Anime Action on Your Android Device

    -

    If you are a fan of anime, manga, or video games, you have probably heard of Dragon Ball Z, one of the most popular and influential franchises in the world. Dragon Ball Z is a series that follows the adventures of Goku and his friends as they fight against various enemies who threaten the peace of the universe. Whether you grew up watching the anime, reading the manga, or playing the video games, you might be wondering how you can relive the epic battles and stories of Dragon Ball Z on your Android device. Well, wonder no more, because in this article, we will show you how to download game dragon ball z apk, a free and easy way to enjoy the anime action on your smartphone or tablet. Read on to find out more!

    -

    download game dragon ball z apk


    Download Zip >> https://jinyurl.com/2uNOoh



    -

    What is Dragon Ball Z?

    -

    Before we dive into the details of how to download game dragon ball z apk, let's first take a look at what Dragon Ball Z is and why it is so popular.

    -

    The story and characters of Dragon Ball Z

    -

    Dragon Ball Z is a sequel to the original Dragon Ball series, which was created by Akira Toriyama in 1984. The story follows Goku, a martial artist who belongs to a race of powerful beings called Saiyans. Goku and his friends travel across the world and beyond, searching for the seven mystical orbs known as Dragon Balls, which can grant any wish when gathered together. Along the way, they encounter various foes, such as the evil emperor Frieza, the androids created by Dr. Gero, and the bio-android Cell. Goku also learns about his Saiyan heritage and faces off against his brother Raditz, his rival Vegeta, and his nemesis Majin Buu.

    -

    The characters of Dragon Ball Z are diverse and memorable, each with their own personality, abilities, and backstory. Some of the main characters include:

- Goku, the cheerful Saiyan protagonist who constantly trains to become stronger.
- Gohan, Goku's son, who hides enormous untapped power.
- Vegeta, the proud prince of the Saiyans and Goku's greatest rival.
- Piccolo, a former enemy who becomes Gohan's mentor.
- Krillin, Goku's best friend and one of the strongest human fighters.
- Villains such as Frieza, Cell, and Majin Buu, who push the heroes to new heights.

    The popularity and influence of Dragon Ball Z

    -

Dragon Ball Z is one of the most successful anime and manga series of all time. It has sold over 300 million copies worldwide and has been adapted into various media forms, such as movies, video games, merchandise, and spin-offs. It has also been broadcast in over 80 countries and dubbed in many languages. Dragon Ball Z has influenced many other anime and manga series, such as Naruto, One Piece, Bleach, and many more. Dragon Ball Z has also inspired many celebrities, athletes, artists, and fans around the world, who have expressed their admiration and appreciation for the series.
    -

    What is Dragon Ball Z APK?

    -

    Now that you have a brief overview of what Dragon Ball Z is and why it is so popular, you might be wondering what Dragon Ball Z APK is and how it can help you enjoy the anime action on your Android device.

    -

    download game dragon ball z dokkan battle apk
    -download game dragon ball z kakarot apk
    -download game dragon ball z legends apk
    -download game dragon ball z shin budokai apk
    -download game dragon ball z tenkaichi tag team apk
    -download game dragon ball z budokai 3 apk
    -download game dragon ball z xenoverse 2 apk
    -download game dragon ball z fighterz apk
    -download game dragon ball z ultimate tenkaichi apk
    -download game dragon ball z super saiyan apk
    -download game dragon ball z budokai tenkaichi 3 apk
    -download game dragon ball z fusion reborn apk
    -download game dragon ball z raging blast 2 apk
    -download game dragon ball z burst limit apk
    -download game dragon ball z infinite world apk
    -download game dragon ball z sagas apk
    -download game dragon ball z the legacy of goku apk
    -download game dragon ball z hyper dimension apk
    -download game dragon ball z final bout apk
    -download game dragon ball z supersonic warriors apk
    -download game dragon ball z battle of gods apk
    -download game dragon ball z resurrection f apk
    -download game dragon ball z budokai hd collection apk
    -download game dragon ball z budokai af apk
    -download game dragon ball z gt transformation apk
    -download game dragon ball z taiketsu apk
    -download game dragon ball z attack of the saiyans apk
    -download game dragon ball z ultimate butouden apk
    -download game dragon ball z extreme butoden apk
    -download game dragon ball z heroes united apk
    -download game dragon ball z tap battle apk
    -download game dragon ball z online mmorpg apk
    -download game dragon ball z devolution apk
    -download game dragon ball z mugen edition 2012 apk
    -download game dragon ball z mugen edition 2016 apk
    -download game dragon ball z mugen edition 2018 apk
    -download game dragon ball z mugen edition 2020 apk
    -download game dragon ball z mugen edition 2021 apk
    -download game dragon ball z mod naruto shippuden ultimate ninja storm 4 road to boruto ppsspp android offline new update 2020/2021 full characters english version no lag 60fps hd graphics free for android devices and tablets best settings (iso/cso) (apk+obb) (psp emulator)

    -

    The features and benefits of Dragon Ball Z APK

    -

    Dragon Ball Z APK is a free and unofficial app that allows you to watch all the episodes of Dragon Ball Z on your Android device. You can stream or download the episodes in high quality and with English subtitles. You can also choose from different servers and sources to find the best one for your connection and preference. Dragon Ball Z APK also has a user-friendly interface and a simple design that makes it easy to navigate and use. You can search for your favorite episodes, bookmark them, or add them to your watchlist. You can also adjust the playback speed, brightness, volume, and screen orientation according to your liking.

    -

    Some of the benefits of using Dragon Ball Z APK are:

    - -

    The requirements and compatibility of Dragon Ball Z APK

    -

    Dragon Ball Z APK is compatible with most Android devices that run on Android 4.1 or higher. However, some devices may not be able to play some episodes due to technical issues or regional restrictions. To use Dragon Ball Z APK, you need to have a stable internet connection, enough storage space, and a compatible video player. You also need to enable unknown sources on your device settings to install the app from a third-party source. You can find more information about how to do this in the next section.

    -

    How to download and install Dragon Ball Z APK?

    -

    If you are ready to download game dragon ball z apk and start watching the anime on your Android device, here are the steps you need to follow:

    -

    The steps to download and install Dragon Ball Z APK

    -
      -
    1. Go to [this link] to download the latest version of Dragon Ball Z APK.
    2. -
    3. Once the download is complete, locate the file on your device and tap on it to open it.
    4. -
    5. If you see a warning message that says "Install blocked", go to your device settings and enable unknown sources. This will allow you to install apps from sources other than Google Play Store.
    6. -
    7. After enabling unknown sources, go back to the file and tap on it again to start the installation process.
    8. -
    9. Follow the instructions on the screen and wait for the installation to finish.
    10. -
    11. Once the installation is done, you will see an icon of Dragon Ball Z APK on your home screen or app drawer. Tap on it to launch the app and enjoy watching the anime!
    12. -
    -

    The tips and tricks to optimize your gaming experience

    -

    To make the most out of your gaming experience with Dragon Ball Z APK, here are some tips and tricks you can try:

    - -

    Conclusion

    -

    In conclusion, Dragon Ball Z APK is a free and easy way to watch all the episodes of Dragon Ball Z on your Android device. You can stream or download the episodes in high quality and with English subtitles. You can also choose from different servers and sources to find the best one for your connection and preference. You can customize your viewing experience with various settings and options. You can support the original creators and distributors of Dragon Ball Z by watching the official links provided by the app. To download game dragon ball z apk , you just need to follow the steps we have outlined in this article and enable unknown sources on your device settings. You can then enjoy the epic anime action on your smartphone or tablet anytime and anywhere you want.

    -

    We hope you found this article helpful and informative. If you did, please share it with your friends and fellow Dragon Ball Z fans. Also, feel free to leave a comment below and let us know what you think about Dragon Ball Z APK and the anime series in general. We would love to hear from you!

    -

    FAQs

    -

    Here are some of the frequently asked questions about Dragon Ball Z APK and their answers:

    -
      -
    1. Is Dragon Ball Z APK safe and legal to use?
    2. -

      Dragon Ball Z APK is safe and legal to use as long as you download it from a trusted source and use it for personal and non-commercial purposes. The app does not contain any viruses, malware, or spyware that can harm your device or compromise your privacy. The app also does not host any content on its own servers, but rather provides links to the official sources where you can watch the episodes legally and support the original creators and distributors of Dragon Ball Z.

      -
    3. What are the other features of Dragon Ball Z APK?
    4. -

      Dragon Ball Z APK has many other features that make it a great app for watching the anime series. Some of these features are:

      -
        -
      • You can watch other Dragon Ball series, such as Dragon Ball, Dragon Ball GT, Dragon Ball Super, and Dragon Ball Heroes.
      • -
      • You can watch movies, specials, and OVAs related to Dragon Ball Z.
      • -
      • You can watch the episodes in different languages, such as Japanese, English, Spanish, French, German, and more.
      • -
      • You can watch the episodes with different subtitles, such as English, Spanish, French, German, and more.
      • -
      • You can watch the episodes in different qualities, such as 360p, 480p, 720p, and 1080p.
      • -
      -
    5. How can I contact the developer of Dragon Ball Z APK?
    6. -

      If you have any questions, problems, suggestions, or feedback regarding Dragon Ball Z APK, you can contact the developer by sending an email to [this address]. You can also visit [this website] or [this Facebook page] to get more information and updates about the app.

      -
    7. How can I support the developer of Dragon Ball Z APK?
    8. -

      If you like Dragon Ball Z APK and want to support the developer, you can do so by:

      -
        -
      • Giving a positive rating and review on Google Play Store or other platforms where you downloaded the app.
      • -
      • Sharing the app with your friends and family who are also fans of Dragon Ball Z.
      • -
      • Donating to the developer via [this link] or [this method].
      • -
      -
    9. How can I uninstall Dragon Ball Z APK?
    10. -

      If you want to uninstall Dragon Ball Z APK from your device, you can do so by following these steps:

      -
        -
      1. Go to your device settings and tap on Apps or Applications.
      2. -
      3. Find and tap on Dragon Ball Z APK from the list of apps.
      4. -
      5. Tap on Uninstall and confirm your action.
      6. -
      7. Wait for the app to be removed from your device.
      8. -
      -

    197e85843d
    -
    -
    \ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Football League 2023 APK - The Best Soccer Game of the Year.md b/spaces/1phancelerku/anime-remove-background/Football League 2023 APK - The Best Soccer Game of the Year.md deleted file mode 100644 index 1fc35e4fe6202d596246d0fcdf81430aadd7cc69..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Football League 2023 APK - The Best Soccer Game of the Year.md +++ /dev/null @@ -1,109 +0,0 @@ -
    -

    Football League 2023 Game Download APK: Everything You Need to Know

    -

    If you are a fan of soccer games, you might want to check out Football League 2023 Game, a new mobile game that lets you experience the thrill of playing in a world cup tournament. This game is developed by MOBILE SOCCER, a studio that specializes in creating realistic and fun soccer games for Android devices.

    -

    In this article, we will tell you everything you need to know about Football League 2023 Game, including its features, how to download it for Android, how to play it on PC with an emulator, tips and tricks for playing better, and some frequently asked questions. Let's get started!

    -

    football league 2023 game download apk


    Download File »»» https://jinyurl.com/2uNKqz



    -

    Features of Football League 2023 Game

    -

    Football League 2023 Game is not just another soccer game. It has many features that make it stand out from other games in the genre. Here are some of them:

- Realistic graphics and animations that capture the atmosphere of a world cup tournament.
- Various game modes and challenges to keep you busy.
- Customizable teams and players.
- Online multiplayer with leaderboards, so you can compete against other players.
- Offline mode and data saving, so you can play anywhere without an internet connection.

    How to Download Football League 2023 Game APK for Android

    -

    If you want to play Football League 2023 Game on your Android device, you will need to download the APK file from a reliable source. Here are the steps to do so:

    -
      -
1. Go to the Football League 2023 APK website, which is a trusted site that provides the latest version of the game APK.
    2. -
    3. Click on the download button and wait for the file to be downloaded on your device.
    4. -
    5. Allow unknown sources in your device settings by going to Settings > Security > Unknown Sources and toggling it on.
    6. -
    7. Install the APK file by tapping on it and following the instructions on the screen.
    8. -
    9. Enjoy playing Football League 2023 Game on your Android device!
    10. -
    -

    How to Play Football League 2023 Game on PC with BlueStacks Emulator

    -

    If you prefer playing Football League 2023 Game on a bigger screen, you can use an emulator to run it on your PC. An emulator is a software that mimics the Android operating system on your computer, allowing you to play Android games and apps on it. One of the best emulators for playing Football League 2023 Game is BlueStacks, which is fast, stable, and easy to use. Here are the steps to play Football League 2023 Game on PC with BlueStacks emulator:

    -
      -
    1. Download and install BlueStacks on your PC from its official website.
    2. -
    3. Launch BlueStacks and sign in with your Google account. If you don't have one, you can create one for free.
    4. -
    5. Search for Football League 2023 Game in the search bar of BlueStacks.
    6. -
    7. Click on the install button and wait for it to finish.
    8. -
    9. Start playing Football League 2023 Game on your PC with BlueStacks!
    10. -
    -

    Tips and Tricks for Football League 2023 Game

    -

    To play better and win more matches in Football League 2023 Game, you will need some tips and tricks. Here are some of them:

    - -

    Conclusion

    -

    Football League 2023 Game is a great soccer game that you can play on your Android device or PC with an emulator. It has realistic graphics and animations, various game modes and challenges, customizable teams and players, online multiplayer and leaderboards, offline mode and data saving, and more. It is easy to download and install, and it is free to play. If you love soccer games, you should definitely give Football League 2023 Game a try. You won't regret it!

    -

    football league 2023 apk free download
    -download football league 2023 game for android
    -football league 2023 mobile soccer apk
    -football league 2023 latest version apk
    -football league 2023 game android tv apk
    -football league 2023 game pc windows apk
    -football league 2023 game tablet apk
    -football league 2023 soccer game apk
    -football league 2023 game offline apk
    -football league 2023 game online apk
    -football league 2023 game mod apk
    -football league 2023 game hack apk
    -football league 2023 game cheats apk
    -football league 2023 game unlimited coins apk
    -football league 2023 game premium apk
    -football league 2023 game pro apk
    -football league 2023 game full version apk
    -football league 2023 game beta apk
    -football league 2023 game update apk
    -football league 2023 game new features apk
    -football league 2023 game review apk
    -football league 2023 game rating apk
    -football league 2023 game best teams apk
    -football league 2023 game players apk
    -football league 2023 game stats apk
    -football league 2023 game tips apk
    -football league 2023 game tricks apk
    -football league 2023 game guide apk
    -football league 2023 game tutorial apk
    -football league 2023 game walkthrough apk
    -football league 2023 game gameplay apk
    -football league 2023 game graphics apk
    -football league 2023 game sound apk
    -football league 2023 game music apk
    -football league 2023 game controls apk
    -football league 2023 game settings apk
    -football league 2023 game customization apk
    -football league 2023 game modes apk
    -football league 2023 game levels apk
    -football league 2023 game difficulty apk
    -football league 2023 game challenges apk
    -football league 2023 game achievements apk
    -football league 2023 game rewards apk
    -football league 2023 game leaderboards apk
    -football league 2023 game multiplayer apk
    -football league 2023 game co-op apk
    -football league 2023 game social media apk
    -football league 2023 game support apk
    -football league 2023 game feedback apk

    -

    So what are you waiting for? Download Football League 2023 Game APK now and start playing!

    -

    FAQs

    -

    Here are some frequently asked questions about Football League 2023 Game:

    -
      -
    1. Q1: What are the minimum requirements for Football League 2023 Game?
    2. -
    3. A1: The minimum requirements for Football League 2023 Game are Android 4.4 or higher, 2 GB of RAM, and 500 MB of free storage space.
    4. -
    5. Q2: Is Football League 2023 Game free to play?
    6. -
    7. A2: Yes, Football League 2023 Game is free to play. However, it contains in-app purchases that allow you to buy coins, gems, power-ups, boosters, and other items.
    8. -
    9. Q3: How can I get more coins and gems in Football League 2023 Game?
    10. -
    11. A3: You can get more coins and gems in Football League 2023 Game by completing challenges and missions, winning matches and tournaments, watching ads, inviting friends, and buying them with real money.
    12. -
    13. Q4: How can I contact the developers of Football League 2023 Game?
    14. -
    15. A4: You can contact the developers of Football League 2023 Game by sending an email to mobilesoccer@gmail.com or by visiting their Facebook page at https://www.facebook.com/mobilesoccer/.
    16. -
    17. Q5: What are some alternative games to Football League 2023 Game?
    18. -
    19. A5: Some alternative games to Football League 2023 Game are FIFA Mobile Soccer, Dream League Soccer 2021, PES 2021 Mobile, Score! Hero, and Soccer Stars.
    20. -

    401be4b1e0
    -
    -
    \ No newline at end of file diff --git a/spaces/AIConsultant/MusicGen/audiocraft/grids/musicgen/musicgen_base_32khz.py b/spaces/AIConsultant/MusicGen/audiocraft/grids/musicgen/musicgen_base_32khz.py deleted file mode 100644 index 4e364614537e426f21c18a2c2a9d94b3babce051..0000000000000000000000000000000000000000 --- a/spaces/AIConsultant/MusicGen/audiocraft/grids/musicgen/musicgen_base_32khz.py +++ /dev/null @@ -1,43 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -from ._explorers import LMExplorer -from ...environment import AudioCraftEnvironment - - -@LMExplorer -def explorer(launcher): - partitions = AudioCraftEnvironment.get_slurm_partitions(['team', 'global']) - launcher.slurm_(gpus=32, partition=partitions) - launcher.bind_(solver='musicgen/musicgen_base_32khz') - # replace this by the desired music dataset - launcher.bind_(dset='internal/music_400k_32khz') - - fsdp = {'autocast': False, 'fsdp.use': True} - medium = {'model/lm/model_scale': 'medium'} - large = {'model/lm/model_scale': 'large'} - - cfg_low = {'classifier_free_guidance.training_dropout': 0.2} - wd_low = {'conditioners.description.t5.word_dropout': 0.2} - - adam = {'optim.optimizer': 'adamw', 'optim.lr': 1e-4} - - launcher.bind_(fsdp) - - launcher.slurm_(gpus=32).bind_(label='32gpus') - with launcher.job_array(): - sub = launcher.bind() - sub() - - launcher.slurm_(gpus=64).bind_(label='64gpus') - with launcher.job_array(): - sub = launcher.bind() - sub(medium, adam) - - launcher.slurm_(gpus=96).bind_(label='96gpus') - with launcher.job_array(): - sub = launcher.bind() - sub(large, cfg_low, wd_low, adam, {'optim.max_norm': 3}) diff --git a/spaces/AIConsultant/MusicGen/scripts/resample_dataset.py b/spaces/AIConsultant/MusicGen/scripts/resample_dataset.py deleted file mode 100644 index af5288712b8d2cde2d9814c747275e69f6e970c8..0000000000000000000000000000000000000000 --- a/spaces/AIConsultant/MusicGen/scripts/resample_dataset.py +++ /dev/null @@ -1,207 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. -"""Resampling script. -""" -import argparse -from pathlib import Path -import shutil -import typing as tp - -import submitit -import tqdm - -from audiocraft.data.audio import audio_read, audio_write -from audiocraft.data.audio_dataset import load_audio_meta, find_audio_files -from audiocraft.data.audio_utils import convert_audio -from audiocraft.environment import AudioCraftEnvironment - - -def read_txt_files(path: tp.Union[str, Path]): - with open(args.files_path) as f: - lines = [line.rstrip() for line in f] - print(f"Read {len(lines)} in .txt") - lines = [line for line in lines if Path(line).suffix not in ['.json', '.txt', '.csv']] - print(f"Filtered and keep {len(lines)} from .txt") - return lines - - -def read_egs_files(path: tp.Union[str, Path]): - path = Path(path) - if path.is_dir(): - if (path / 'data.jsonl').exists(): - path = path / 'data.jsonl' - elif (path / 'data.jsonl.gz').exists(): - path = path / 'data.jsonl.gz' - else: - raise ValueError("Don't know where to read metadata from in the dir. 
" - "Expecting either a data.jsonl or data.jsonl.gz file but none found.") - meta = load_audio_meta(path) - return [m.path for m in meta] - - -def process_dataset(args, n_shards: int, node_index: int, task_index: tp.Optional[int] = None): - if task_index is None: - env = submitit.JobEnvironment() - task_index = env.global_rank - shard_index = node_index * args.tasks_per_node + task_index - - if args.files_path is None: - lines = [m.path for m in find_audio_files(args.root_path, resolve=False, progress=True, workers=8)] - else: - files_path = Path(args.files_path) - if files_path.suffix == '.txt': - print(f"Reading file list from .txt file: {args.files_path}") - lines = read_txt_files(args.files_path) - else: - print(f"Reading file list from egs: {args.files_path}") - lines = read_egs_files(args.files_path) - - total_files = len(lines) - print( - f"Total of {total_files} processed with {n_shards} shards. " + - f"Current idx = {shard_index} -> {total_files // n_shards} files to process" - ) - for idx, line in tqdm.tqdm(enumerate(lines)): - - # skip if not part of this shard - if idx % n_shards != shard_index: - continue - - path = str(AudioCraftEnvironment.apply_dataset_mappers(line)) - root_path = str(args.root_path) - if not root_path.endswith('/'): - root_path += '/' - assert path.startswith(str(root_path)), \ - f"Mismatch between path and provided root: {path} VS {root_path}" - - try: - metadata_path = Path(path).with_suffix('.json') - out_path = args.out_path / path[len(root_path):] - out_metadata_path = out_path.with_suffix('.json') - out_done_token = out_path.with_suffix('.done') - - # don't reprocess existing files - if out_done_token.exists(): - continue - - print(idx, out_path, path) - mix, sr = audio_read(path) - mix_channels = args.channels if args.channels is not None and args.channels > 0 else mix.size(0) - # enforce simple stereo - out_channels = mix_channels - if out_channels > 2: - print(f"Mix has more than two channels: {out_channels}, enforcing 2 channels") - out_channels = 2 - out_sr = args.sample_rate if args.sample_rate is not None else sr - out_wav = convert_audio(mix, sr, out_sr, out_channels) - audio_write(out_path.with_suffix(''), out_wav, sample_rate=out_sr, - format=args.format, normalize=False, strategy='clip') - if metadata_path.exists(): - shutil.copy(metadata_path, out_metadata_path) - else: - print(f"No metadata found at {str(metadata_path)}") - out_done_token.touch() - except Exception as e: - print(f"Error processing file line: {line}, {e}") - - -if __name__ == '__main__': - parser = argparse.ArgumentParser(description="Resample dataset with SLURM.") - parser.add_argument( - "--log_root", - type=Path, - default=Path.home() / 'tmp' / 'resample_logs', - ) - parser.add_argument( - "--files_path", - type=Path, - help="List of files to process, either .txt (one file per line) or a jsonl[.gz].", - ) - parser.add_argument( - "--root_path", - type=Path, - required=True, - help="When rewriting paths, this will be the prefix to remove.", - ) - parser.add_argument( - "--out_path", - type=Path, - required=True, - help="When rewriting paths, `root_path` will be replaced by this.", - ) - parser.add_argument("--xp_name", type=str, default="shutterstock") - parser.add_argument( - "--nodes", - type=int, - default=4, - ) - parser.add_argument( - "--tasks_per_node", - type=int, - default=20, - ) - parser.add_argument( - "--cpus_per_task", - type=int, - default=4, - ) - parser.add_argument( - "--memory_gb", - type=int, - help="Memory in GB." 
- ) - parser.add_argument( - "--format", - type=str, - default="wav", - ) - parser.add_argument( - "--sample_rate", - type=int, - default=32000, - ) - parser.add_argument( - "--channels", - type=int, - ) - parser.add_argument( - "--partition", - default='learnfair', - ) - parser.add_argument("--qos") - parser.add_argument("--account") - parser.add_argument("--timeout", type=int, default=4320) - parser.add_argument('--debug', action='store_true', help='debug mode (local run)') - args = parser.parse_args() - n_shards = args.tasks_per_node * args.nodes - if args.files_path is None: - print("Warning: --files_path not provided, not recommended when processing more than 10k files.") - if args.debug: - print("Debugging mode") - process_dataset(args, n_shards=n_shards, node_index=0, task_index=0) - else: - - log_folder = Path(args.log_root) / args.xp_name / '%j' - print(f"Logging to: {log_folder}") - log_folder.parent.mkdir(parents=True, exist_ok=True) - executor = submitit.AutoExecutor(folder=str(log_folder)) - if args.qos: - executor.update_parameters(slurm_partition=args.partition, slurm_qos=args.qos, slurm_account=args.account) - else: - executor.update_parameters(slurm_partition=args.partition) - executor.update_parameters( - slurm_job_name=args.xp_name, timeout_min=args.timeout, - cpus_per_task=args.cpus_per_task, tasks_per_node=args.tasks_per_node, nodes=1) - if args.memory_gb: - executor.update_parameters(mem=f'{args.memory_gb}GB') - jobs = [] - with executor.batch(): - for node_index in range(args.nodes): - job = executor.submit(process_dataset, args, n_shards=n_shards, node_index=node_index) - jobs.append(job) - for job in jobs: - print(f"Waiting on job {job.job_id}") - job.results() diff --git a/spaces/AIGC-Audio/AudioGPT/text_to_speech/data_gen/tts/base_preprocess.py b/spaces/AIGC-Audio/AudioGPT/text_to_speech/data_gen/tts/base_preprocess.py deleted file mode 100644 index db5e3ab88861c044e2c33247d818d5e418b6cddb..0000000000000000000000000000000000000000 --- a/spaces/AIGC-Audio/AudioGPT/text_to_speech/data_gen/tts/base_preprocess.py +++ /dev/null @@ -1,252 +0,0 @@ -import json -import os -import random -import re -import traceback -from collections import Counter -from functools import partial - -import librosa -from tqdm import tqdm -from text_to_speech.data_gen.tts.txt_processors.base_text_processor import get_txt_processor_cls -from text_to_speech.data_gen.tts.wav_processors.base_processor import get_wav_processor_cls -from text_to_speech.utils.commons.hparams import hparams -from text_to_speech.utils.commons.multiprocess_utils import multiprocess_run_tqdm -from text_to_speech.utils.os_utils import link_file, move_file, remove_file -from text_to_speech.utils.text.text_encoder import is_sil_phoneme, build_token_encoder - - -class BasePreprocessor: - def __init__(self): - self.preprocess_args = hparams['preprocess_args'] - txt_processor = self.preprocess_args['txt_processor'] - self.txt_processor = get_txt_processor_cls(txt_processor) - self.raw_data_dir = hparams['raw_data_dir'] - self.processed_dir = hparams['processed_data_dir'] - self.spk_map_fn = f"{self.processed_dir}/spk_map.json" - - def meta_data(self): - """ - - :return: {'item_name': Str, 'wav_fn': Str, 'txt': Str, 'spk_name': Str, 'txt_loader': None or Func} - """ - raise NotImplementedError - - def process(self): - processed_dir = self.processed_dir - wav_processed_tmp_dir = f'{processed_dir}/processed_tmp' - remove_file(wav_processed_tmp_dir) - os.makedirs(wav_processed_tmp_dir, exist_ok=True) - wav_processed_dir = 
f'{processed_dir}/{self.wav_processed_dirname}' - remove_file(wav_processed_dir) - os.makedirs(wav_processed_dir, exist_ok=True) - - meta_data = list(tqdm(self.meta_data(), desc='Load meta data')) - item_names = [d['item_name'] for d in meta_data] - assert len(item_names) == len(set(item_names)), 'Key `item_name` should be Unique.' - - # preprocess data - phone_list = [] - word_list = [] - spk_names = set() - process_item = partial(self.preprocess_first_pass, - txt_processor=self.txt_processor, - wav_processed_dir=wav_processed_dir, - wav_processed_tmp=wav_processed_tmp_dir, - preprocess_args=self.preprocess_args) - items = [] - args = [{ - 'item_name': item_raw['item_name'], - 'txt_raw': item_raw['txt'], - 'wav_fn': item_raw['wav_fn'], - 'txt_loader': item_raw.get('txt_loader'), - 'others': item_raw.get('others', None) - } for item_raw in meta_data] - for item_, (item_id, item) in zip(meta_data, multiprocess_run_tqdm(process_item, args, desc='Preprocess')): - if item is not None: - item_.update(item) - item = item_ - if 'txt_loader' in item: - del item['txt_loader'] - item['id'] = item_id - item['spk_name'] = item.get('spk_name', '') - item['others'] = item.get('others', None) - phone_list += item['ph'].split(" ") - word_list += item['word'].split(" ") - spk_names.add(item['spk_name']) - items.append(item) - - # add encoded tokens - ph_encoder, word_encoder = self._phone_encoder(phone_list), self._word_encoder(word_list) - spk_map = self.build_spk_map(spk_names) - args = [{ - 'ph': item['ph'], 'word': item['word'], 'spk_name': item['spk_name'], - 'word_encoder': word_encoder, 'ph_encoder': ph_encoder, 'spk_map': spk_map - } for item in items] - for idx, item_new_kv in multiprocess_run_tqdm(self.preprocess_second_pass, args, desc='Add encoded tokens'): - items[idx].update(item_new_kv) - - # build mfa data - if self.preprocess_args['use_mfa']: - mfa_dict = set() - mfa_input_dir = f'{processed_dir}/mfa_inputs' - remove_file(mfa_input_dir) - # group MFA inputs for better parallelism - mfa_groups = [i // self.preprocess_args['nsample_per_mfa_group'] for i in range(len(items))] - if self.preprocess_args['mfa_group_shuffle']: - random.seed(hparams['seed']) - random.shuffle(mfa_groups) - args = [{ - 'item': item, 'mfa_input_dir': mfa_input_dir, - 'mfa_group': mfa_group, 'wav_processed_tmp': wav_processed_tmp_dir, - 'preprocess_args': self.preprocess_args - } for item, mfa_group in zip(items, mfa_groups)] - for i, (ph_gb_word_nosil, new_wav_align_fn) in multiprocess_run_tqdm( - self.build_mfa_inputs, args, desc='Build MFA data'): - items[i]['wav_align_fn'] = new_wav_align_fn - for w in ph_gb_word_nosil.split(" "): - mfa_dict.add(f"{w} {w.replace('_', ' ')}") - mfa_dict = sorted(mfa_dict) - with open(f'{processed_dir}/mfa_dict.txt', 'w') as f: - f.writelines([f'{l}\n' for l in mfa_dict]) - with open(f"{processed_dir}/{self.meta_csv_filename}.json", 'w') as f: - f.write(re.sub(r'\n\s+([\d+\]])', r'\1', json.dumps(items, ensure_ascii=False, sort_keys=False, indent=1))) - remove_file(wav_processed_tmp_dir) - - @classmethod - def preprocess_first_pass(cls, item_name, txt_raw, txt_processor, - wav_fn, wav_processed_dir, wav_processed_tmp, - preprocess_args, txt_loader=None, others=None): - try: - if txt_loader is not None: - txt_raw = txt_loader(txt_raw) - ph, txt, word, ph2word, ph_gb_word = cls.txt_to_ph(txt_processor, txt_raw, preprocess_args) - - wav_fn, wav_align_fn = cls.process_wav( - item_name, wav_fn, - hparams['processed_data_dir'], - wav_processed_tmp, preprocess_args) - - # wav for 
binarization - ext = os.path.splitext(wav_fn)[1] - os.makedirs(wav_processed_dir, exist_ok=True) - new_wav_fn = f"{wav_processed_dir}/{item_name}{ext}" - move_link_func = move_file if os.path.dirname(wav_fn) == wav_processed_tmp else link_file - move_link_func(wav_fn, new_wav_fn) - return { - 'txt': txt, 'txt_raw': txt_raw, 'ph': ph, - 'word': word, 'ph2word': ph2word, 'ph_gb_word': ph_gb_word, - 'wav_fn': new_wav_fn, 'wav_align_fn': wav_align_fn, - 'others': others - } - except: - traceback.print_exc() - print(f"| Error is caught. item_name: {item_name}.") - return None - - @staticmethod - def txt_to_ph(txt_processor, txt_raw, preprocess_args): - txt_struct, txt = txt_processor.process(txt_raw, preprocess_args) - ph = [p for w in txt_struct for p in w[1]] - ph_gb_word = ["_".join(w[1]) for w in txt_struct] - words = [w[0] for w in txt_struct] - # word_id=0 is reserved for padding - ph2word = [w_id + 1 for w_id, w in enumerate(txt_struct) for _ in range(len(w[1]))] - return " ".join(ph), txt, " ".join(words), ph2word, " ".join(ph_gb_word) - - @staticmethod - def process_wav(item_name, wav_fn, processed_dir, wav_processed_tmp, preprocess_args): - processors = [get_wav_processor_cls(v) for v in preprocess_args['wav_processors']] - processors = [k() for k in processors if k is not None] - if len(processors) >= 1: - sr_file = librosa.core.get_samplerate(wav_fn) - output_fn_for_align = None - ext = os.path.splitext(wav_fn)[1] - input_fn = f"{wav_processed_tmp}/{item_name}{ext}" - link_file(wav_fn, input_fn) - for p in processors: - outputs = p.process(input_fn, sr_file, wav_processed_tmp, processed_dir, item_name, preprocess_args) - if len(outputs) == 3: - input_fn, sr, output_fn_for_align = outputs - else: - input_fn, sr = outputs - return input_fn, output_fn_for_align - else: - return wav_fn, wav_fn - - def _phone_encoder(self, ph_set): - ph_set_fn = f"{self.processed_dir}/phone_set.json" - if self.preprocess_args['reset_phone_dict'] or not os.path.exists(ph_set_fn): - ph_set = sorted(set(ph_set)) - json.dump(ph_set, open(ph_set_fn, 'w'), ensure_ascii=False) - print("| Build phone set: ", ph_set) - else: - ph_set = json.load(open(ph_set_fn, 'r')) - print("| Load phone set: ", ph_set) - return build_token_encoder(ph_set_fn) - - def _word_encoder(self, word_set): - word_set_fn = f"{self.processed_dir}/word_set.json" - if self.preprocess_args['reset_word_dict']: - word_set = Counter(word_set) - total_words = sum(word_set.values()) - word_set = word_set.most_common(hparams['word_dict_size']) - num_unk_words = total_words - sum([x[1] for x in word_set]) - word_set = ['', ''] + [x[0] for x in word_set] - word_set = sorted(set(word_set)) - json.dump(word_set, open(word_set_fn, 'w'), ensure_ascii=False) - print(f"| Build word set. Size: {len(word_set)}, #total words: {total_words}," - f" #unk_words: {num_unk_words}, word_set[:10]:, {word_set[:10]}.") - else: - word_set = json.load(open(word_set_fn, 'r')) - print("| Load word set. 
Size: ", len(word_set), word_set[:10]) - return build_token_encoder(word_set_fn) - - @classmethod - def preprocess_second_pass(cls, word, ph, spk_name, word_encoder, ph_encoder, spk_map): - word_token = word_encoder.encode(word) - ph_token = ph_encoder.encode(ph) - spk_id = spk_map[spk_name] - return {'word_token': word_token, 'ph_token': ph_token, 'spk_id': spk_id} - - def build_spk_map(self, spk_names): - spk_map = {x: i for i, x in enumerate(sorted(list(spk_names)))} - assert len(spk_map) == 0 or len(spk_map) <= hparams['num_spk'], len(spk_map) - print(f"| Number of spks: {len(spk_map)}, spk_map: {spk_map}") - json.dump(spk_map, open(self.spk_map_fn, 'w'), ensure_ascii=False) - return spk_map - - @classmethod - def build_mfa_inputs(cls, item, mfa_input_dir, mfa_group, wav_processed_tmp, preprocess_args): - item_name = item['item_name'] - wav_align_fn = item['wav_align_fn'] - ph_gb_word = item['ph_gb_word'] - ext = os.path.splitext(wav_align_fn)[1] - mfa_input_group_dir = f'{mfa_input_dir}/{mfa_group}' - os.makedirs(mfa_input_group_dir, exist_ok=True) - new_wav_align_fn = f"{mfa_input_group_dir}/{item_name}{ext}" - move_link_func = move_file if os.path.dirname(wav_align_fn) == wav_processed_tmp else link_file - move_link_func(wav_align_fn, new_wav_align_fn) - ph_gb_word_nosil = " ".join(["_".join([p for p in w.split("_") if not is_sil_phoneme(p)]) - for w in ph_gb_word.split(" ") if not is_sil_phoneme(w)]) - with open(f'{mfa_input_group_dir}/{item_name}.lab', 'w') as f_txt: - f_txt.write(ph_gb_word_nosil) - return ph_gb_word_nosil, new_wav_align_fn - - def load_spk_map(self, base_dir): - spk_map_fn = f"{base_dir}/spk_map.json" - spk_map = json.load(open(spk_map_fn, 'r')) - return spk_map - - def load_dict(self, base_dir): - ph_encoder = build_token_encoder(f'{base_dir}/phone_set.json') - word_encoder = build_token_encoder(f'{base_dir}/word_set.json') - return ph_encoder, word_encoder - - @property - def meta_csv_filename(self): - return 'metadata' - - @property - def wav_processed_dirname(self): - return 'wav_processed' diff --git a/spaces/AIML-TUDA/does-clip-know-my-face/download_example_images.py b/spaces/AIML-TUDA/does-clip-know-my-face/download_example_images.py deleted file mode 100644 index 4d996c9f7c46f3eba0d6c4aab4203ecf8311571b..0000000000000000000000000000000000000000 --- a/spaces/AIML-TUDA/does-clip-know-my-face/download_example_images.py +++ /dev/null @@ -1,42 +0,0 @@ -import os -import urllib.request -from tqdm import tqdm -from PIL import Image - - -def read_actor_files(folder_path): - urls = {} - for file in os.listdir(folder_path): - if not file.endswith('.txt'): - continue - - file_name_without_ext = os.path.splitext(file)[0] - with open(os.path.join(folder_path, file)) as text_file: - lines = text_file.readlines() - lines = [line.rstrip() for line in lines] - - urls[file_name_without_ext] = lines - - return urls - - -def save_images_to_folder(folder_path, url_dict): - url_opener = urllib.request.URLopener() - url_opener.addheader('User-Agent', - 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/50.0.2661.102 Safari/537.36') - - for name, url_list in tqdm(url_dict.items()): - base_folder = os.path.join(folder_path, name) - if os.path.exists(base_folder): - print(f'The image folder {base_folder} already exists. 
Skipping folder.') - continue - os.makedirs(base_folder) - for i, url in tqdm(enumerate(url_list), desc=name, leave=False): - url = urllib.parse.quote(url, safe='://?=&(),%+') - img_file_path = os.path.join(base_folder, f'{name}_{i}.jpg') - url_opener.retrieve(url, img_file_path) - - # open the image and resize it - img = Image.open(img_file_path) - img.thumbnail((1024, 1024)) - img.save(img_file_path) diff --git a/spaces/AchyuthGamer/Free-Accounts-Generator/minecraft/js/d140ouchebag.js b/spaces/AchyuthGamer/Free-Accounts-Generator/minecraft/js/d140ouchebag.js deleted file mode 100644 index 8315862ea3cea7c11f0103cf4af54f22696a4eaf..0000000000000000000000000000000000000000 --- a/spaces/AchyuthGamer/Free-Accounts-Generator/minecraft/js/d140ouchebag.js +++ /dev/null @@ -1,37 +0,0 @@ -var NumberOfWords = 13 -var words = new BuildArray(NumberOfWords) - -// Use the following variables to -// define your random words: -words[1] = "https://tii.ai/NordvpnAccount" -words[2] = "https://tii.ai/NordvpnAccount1" -words[3] = "https://tii.ai/NordvpnAccount2" -words[4] = "https://tii.ai/NordvpnAccount3" -words[5] = "https://tii.ai/NordvpnAccount4" -words[6] = "https://tii.ai/NordvpnAccount5" -words[7] = "https://tii.ai/NordvpnAccount6" -words[8] = "https://tii.ai/NordvpnAccount7" -words[9] = "https://tii.ai/NordvpnAccount8" -words[10] = "https://tii.ai/NordvpnAccount9" -words[11] = "https://tii.ai/NordvpnAccount10" -words[12] = "https://tii.ai/NordvpnAccount11" -words[13] = "https://tii.ai/NordvpnAccount12" - - -== - - -function BuildArray(size){ -this.length = size -for (var i = 1; i <= size; i++){ -this[i] = null} -return this -} - -function PickRandomWord(frm) { -// Generate a random number between 1 and NumberOfWords -var rnd = Math.ceil(Math.random() * NumberOfWords) - -// Display the word inside the text box -frm.WordBox.value = words[rnd] -} \ No newline at end of file diff --git a/spaces/AchyuthGamer/OpenGPT-Chat-UI/src/routes/conversation/[id]/+server.ts b/spaces/AchyuthGamer/OpenGPT-Chat-UI/src/routes/conversation/[id]/+server.ts deleted file mode 100644 index 8ad2ba28156aeb80188ae64d1bb77105c49429c5..0000000000000000000000000000000000000000 --- a/spaces/AchyuthGamer/OpenGPT-Chat-UI/src/routes/conversation/[id]/+server.ts +++ /dev/null @@ -1,276 +0,0 @@ -import { MESSAGES_BEFORE_LOGIN, RATE_LIMIT } from "$env/static/private"; -import { buildPrompt } from "$lib/buildPrompt"; -import { PUBLIC_SEP_TOKEN } from "$lib/constants/publicSepToken"; -import { abortedGenerations } from "$lib/server/abortedGenerations"; -import { authCondition, requiresUser } from "$lib/server/auth"; -import { collections } from "$lib/server/database"; -import { modelEndpoint } from "$lib/server/modelEndpoint"; -import { models } from "$lib/server/models"; -import { ERROR_MESSAGES } from "$lib/stores/errors.js"; -import type { Message } from "$lib/types/Message"; -import { concatUint8Arrays } from "$lib/utils/concatUint8Arrays"; -import { streamToAsyncIterable } from "$lib/utils/streamToAsyncIterable"; -import { trimPrefix } from "$lib/utils/trimPrefix"; -import { trimSuffix } from "$lib/utils/trimSuffix"; -import type { TextGenerationStreamOutput } from "@huggingface/inference"; -import { error } from "@sveltejs/kit"; -import { z } from "zod"; -import { AwsClient } from "aws4fetch"; -import { pipeline } from "@xenova/transformers"; - -export async function POST({ request, fetch, locals, params }) { - /*const id = z.string().parse(params.id); - const date = new Date(); - let generated_text = ""; - - const userId = 
locals.user?._id ?? locals.sessionId; - - if (!userId) { - throw error(401, "Unauthorized"); - } - - const conv = await collections.conversations.findOne({ - _id: convId, - ...authCondition(locals), - }); - - if (!conv) { - throw error(404, "Conversation not found"); - } - - if ( - !locals.user?._id && - requiresUser && - conv.messages.length > (MESSAGES_BEFORE_LOGIN ? parseInt(MESSAGES_BEFORE_LOGIN) : 0) - ) { - throw error(429, "Exceeded number of messages before login"); - } - - const nEvents = await collections.messageEvents.countDocuments({ userId }); - - if (RATE_LIMIT != "" && nEvents > parseInt(RATE_LIMIT)) { - throw error(429, ERROR_MESSAGES.rateLimited); - } - - const model = models.find((m) => m.id === conv.model); - const settings = await collections.settings.findOne(authCondition(locals)); - - if (!model) { - throw error(410, "Model not available anymore"); - } - - const json = await request.json(); - const { - inputs: newPrompt, - options: { id: messageId, is_retry, web_search_id, response_id: responseId }, - } = z - .object({ - inputs: z.string().trim().min(1), - options: z.object({ - id: z.optional(z.string().uuid()), - response_id: z.optional(z.string().uuid()), - is_retry: z.optional(z.boolean()), - web_search_id: z.ostring(), - }), - }) - .parse(json); - - const messages = (() => { - if (is_retry && messageId) { - let retryMessageIdx = conv.messages.findIndex((message) => message.id === messageId); - if (retryMessageIdx === -1) { - retryMessageIdx = conv.messages.length; - } - return [ - ...conv.messages.slice(0, retryMessageIdx), - { content: newPrompt, from: "user", id: messageId as Message["id"], updatedAt: new Date() }, - ]; - } - return [ - ...conv.messages, - { - content: newPrompt, - from: "user", - id: (messageId as Message["id"]) || crypto.randomUUID(), - createdAt: new Date(), - updatedAt: new Date(), - }, - ]; - })() satisfies Message[]; - - const prompt = await buildPrompt({ - messages, - model, - webSearchId: web_search_id, - preprompt: settings?.customPrompts?.[model.id] ?? model.preprompt, - locals: locals, - }); - - const randomEndpoint = modelEndpoint(model); - console.log(randomEndpoint); - - const abortController = new AbortController(); - - let stream1 = new ReadableStream(); - let stream2 = new ReadableStream(); - - async function saveMessage() { - // We could also check if PUBLIC_ASSISTANT_MESSAGE_TOKEN is present and use it to slice the text - if (generated_text.startsWith(prompt)) { - generated_text = generated_text.slice(prompt.length); - } - - generated_text = trimSuffix( - trimPrefix(generated_text, "<|startoftext|>"), - PUBLIC_SEP_TOKEN - ).trimEnd(); - - for (const stop of [...(model?.parameters?.stop ?? 
[]), "<|endoftext|>"]) { - if (generated_text.endsWith(stop)) { - generated_text = generated_text.slice(0, -stop.length).trimEnd(); - } - } - - messages.push({ - from: "assistant", - content: generated_text, - webSearchId: web_search_id, - id: (responseId as Message["id"]) || crypto.randomUUID(), - createdAt: new Date(), - updatedAt: new Date(), - }); - - await collections.messageEvents.insertOne({ - userId: userId, - createdAt: new Date(), - }); - - await collections.conversations.updateOne( - { - _id: convId, - }, - { - $set: { - messages, - updatedAt: new Date(), - }, - } - ); - } - - saveMessage().catch(console.error);*/ - // Todo: maybe we should wait for the message to be saved before ending the response - in case of errors - return new Response(undefined, { - headers: undefined, - status: 200, - statusText: "", - }); -} - -export async function DELETE({ locals, params }) { - /*const conv = await collections.conversations.findOne({ - _id: convId, - ...authCondition(locals), - }); - - await collections.conversations.deleteOne({ _id: conv._id });*/ - - return new Response(); -} - -async function parseGeneratedText( - stream: ReadableStream, - conversationId: ObjectId, - promptedAt: Date, - abortController: AbortController -): Promise { - const inputs: Uint8Array[] = []; - for await (const input of streamToAsyncIterable(stream)) { - inputs.push(input); - - const date = abortedGenerations.get(conversationId.toString()); - - if (date && date > promptedAt) { - abortController.abort("Cancelled by user"); - const completeInput = concatUint8Arrays(inputs); - - const lines = new TextDecoder() - .decode(completeInput) - .split("\n") - .filter((line) => line.startsWith("data:")); - - const tokens = lines.map((line) => { - try { - const json: TextGenerationStreamOutput = JSON.parse(line.slice("data:".length)); - return json.token.text; - } catch { - return ""; - } - }); - return tokens.join(""); - } - } - // Merge inputs into a single Uint8Array - const completeInput = concatUint8Arrays(inputs); - - // Get last line starting with "data:" and parse it as JSON to get the generated text - const message = new TextDecoder().decode(completeInput); - - let lastIndex = message.lastIndexOf("\ndata:"); - if (lastIndex === -1) { - lastIndex = message.indexOf("data"); - } - - if (lastIndex === -1) { - console.error("Could not parse last message", message); - } - - let lastMessage = message.slice(lastIndex).trim().slice("data:".length); - if (lastMessage.includes("\n")) { - lastMessage = lastMessage.slice(0, lastMessage.indexOf("\n")); - } - - const lastMessageJSON = JSON.parse(lastMessage); - - if (lastMessageJSON.error) { - throw new Error(lastMessageJSON.error); - } - - const res = lastMessageJSON.generated_text; - - if (typeof res !== "string") { - throw new Error("Could not parse generated text"); - } - - return res; -} - -export async function PATCH({ request, locals, params }) { - /*const { title } = z - .object({ title: z.string().trim().min(1).max(100) }) - .parse(await request.json()); - - const convId = new ObjectId(params.id); - - const conv = await collections.conversations.findOne({ - _id: convId, - ...authCondition(locals), - }); - - if (!conv) { - throw error(404, "Conversation not found"); - } - - await collections.conversations.updateOne( - { - _id: convId, - }, - { - $set: { - title, - }, - } - );*/ - - return new Response(); -} diff --git a/spaces/Adapter/T2I-Adapter/ldm/modules/extra_condition/midas/midas/base_model.py 
b/spaces/Adapter/T2I-Adapter/ldm/modules/extra_condition/midas/midas/base_model.py deleted file mode 100644 index 5cf430239b47ec5ec07531263f26f5c24a2311cd..0000000000000000000000000000000000000000 --- a/spaces/Adapter/T2I-Adapter/ldm/modules/extra_condition/midas/midas/base_model.py +++ /dev/null @@ -1,16 +0,0 @@ -import torch - - -class BaseModel(torch.nn.Module): - def load(self, path): - """Load model from file. - - Args: - path (str): file path - """ - parameters = torch.load(path, map_location=torch.device('cpu')) - - if "optimizer" in parameters: - parameters = parameters["model"] - - self.load_state_dict(parameters) diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/bejeweled/board/match/AnyMatch.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/bejeweled/board/match/AnyMatch.js deleted file mode 100644 index d329e65ab67050497de9260cdb0cfb764233cf0f..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/bejeweled/board/match/AnyMatch.js +++ /dev/null @@ -1,5 +0,0 @@ -var AnyMatch = function (n) { - return this.match.anyMatch(n); -} - -export default AnyMatch; \ No newline at end of file diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/basesizer/GetElement.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/basesizer/GetElement.js deleted file mode 100644 index 9e840eb4de4e1cf04a6c7bb6f2027092e58c3a78..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/basesizer/GetElement.js +++ /dev/null @@ -1,41 +0,0 @@ -var GetElement = function (mapNameList, recursive) { - if (typeof (mapNameList) === 'string') { - mapNameList = mapNameList.split('.'); - } - if (mapNameList.length === 0) { - return undefined; - } - - var name = mapNameList.shift(), - element = null; - if (name.charAt(0) === '#') { // Get element by name - name = name.substring(1); - element = this.getByName(name, recursive); - } else if (name.indexOf('[') === (-1)) { // Get element by key - if (this.childrenMap) { - element = this.childrenMap[name]; - } - } else { // Get element by key[] - var innerMatch = name.match(RE_OBJ); - if (innerMatch != null) { - if (this.childrenMap) { - var elements = this.childrenMap[innerMatch[1]]; - if (elements) { - element = elements[innerMatch[2]]; - } - } - } - } - - if (mapNameList.length === 0) { - return element; - } else if (element && element.childrenMap) { - return element.getElement(mapNameList); - } else { - return null; - } -}; - -const RE_OBJ = /(\S+)\[(\d+)\]/i; - -export default GetElement; \ No newline at end of file diff --git a/spaces/AlexWelcing/MusicLM/app.py b/spaces/AlexWelcing/MusicLM/app.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/AlexZou/Deploy_Restoration/net/Transformer.py b/spaces/AlexZou/Deploy_Restoration/net/Transformer.py deleted file mode 100644 index 73a6aeb8f021d6850809b588d2777c8604efca04..0000000000000000000000000000000000000000 --- a/spaces/AlexZou/Deploy_Restoration/net/Transformer.py +++ /dev/null @@ -1,126 +0,0 @@ -# -*- coding: utf-8 -*- -# @Author : Lintao Peng -# @File : SGFMT.py -# coding=utf-8 -# Design based on the Vit - -import torch.nn as nn -from net.IntmdSequential import IntermediateSequential - - -#实现了自注意力机制,相当于unet的bottleneck层 -class SelfAttention(nn.Module): - def __init__( - self, dim, heads=8, qkv_bias=False, qk_scale=None, 
dropout_rate=0.0 - ): - super().__init__() - self.num_heads = heads - head_dim = dim // heads - self.scale = qk_scale or head_dim ** -0.5 - - self.qkv = nn.Linear(dim, dim * 3, bias=qkv_bias) - self.attn_drop = nn.Dropout(dropout_rate) - self.proj = nn.Linear(dim, dim) - self.proj_drop = nn.Dropout(dropout_rate) - - def forward(self, x): - B, N, C = x.shape - qkv = ( - self.qkv(x) - .reshape(B, N, 3, self.num_heads, C // self.num_heads) - .permute(2, 0, 3, 1, 4) - ) - q, k, v = ( - qkv[0], - qkv[1], - qkv[2], - ) # make torchscript happy (cannot use tensor as tuple) - - attn = (q @ k.transpose(-2, -1)) * self.scale - attn = attn.softmax(dim=-1) - attn = self.attn_drop(attn) - - x = (attn @ v).transpose(1, 2).reshape(B, N, C) - x = self.proj(x) - x = self.proj_drop(x) - return x - - -class Residual(nn.Module): - def __init__(self, fn): - super().__init__() - self.fn = fn - - def forward(self, x): - return self.fn(x) + x - - -class PreNorm(nn.Module): - def __init__(self, dim, fn): - super().__init__() - self.norm = nn.LayerNorm(dim) - self.fn = fn - - def forward(self, x): - return self.fn(self.norm(x)) - - -class PreNormDrop(nn.Module): - def __init__(self, dim, dropout_rate, fn): - super().__init__() - self.norm = nn.LayerNorm(dim) - self.dropout = nn.Dropout(p=dropout_rate) - self.fn = fn - - def forward(self, x): - return self.dropout(self.fn(self.norm(x))) - - -class FeedForward(nn.Module): - def __init__(self, dim, hidden_dim, dropout_rate): - super().__init__() - self.net = nn.Sequential( - nn.Linear(dim, hidden_dim), - nn.GELU(), - nn.Dropout(p=dropout_rate), - nn.Linear(hidden_dim, dim), - nn.Dropout(p=dropout_rate), - ) - - def forward(self, x): - return self.net(x) - - -class TransformerModel(nn.Module): - def __init__( - self, - dim, #512 - depth, #4 - heads, #8 - mlp_dim, #4096 - dropout_rate=0.1, - attn_dropout_rate=0.1, - ): - super().__init__() - layers = [] - for _ in range(depth): - layers.extend( - [ - Residual( - PreNormDrop( - dim, - dropout_rate, - SelfAttention(dim, heads=heads, dropout_rate=attn_dropout_rate), - ) - ), - Residual( - PreNorm(dim, FeedForward(dim, mlp_dim, dropout_rate)) - ), - ] - ) - # dim = dim / 2 - self.net = IntermediateSequential(*layers) - - - def forward(self, x): - return self.net(x) diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/pipelines/attend_and_excite.md b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/pipelines/attend_and_excite.md deleted file mode 100644 index ee205b8b283f99e5ef07cf931f31d25cc0b74fb3..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/api/pipelines/attend_and_excite.md +++ /dev/null @@ -1,37 +0,0 @@ - - -# Attend-and-Excite - -Attend-and-Excite for Stable Diffusion was proposed in [Attend-and-Excite: Attention-Based Semantic Guidance for Text-to-Image Diffusion Models](https://attendandexcite.github.io/Attend-and-Excite/) and provides textual attention control over image generation. - -The abstract from the paper is: - -*Text-to-image diffusion models have recently received a lot of interest for their astonishing ability to produce high-fidelity images from text only. However, achieving one-shot generation that aligns with the user's intent is nearly impossible, yet small changes to the input prompt often result in very different images. This leaves the user with little semantic control. 
To put the user in control, we show how to interact with the diffusion process to flexibly steer it along semantic directions. This semantic guidance (SEGA) allows for subtle and extensive edits, changes in composition and style, as well as optimizing the overall artistic conception. We demonstrate SEGA's effectiveness on a variety of tasks and provide evidence for its versatility and flexibility.* - -You can find additional information about Attend-and-Excite on the [project page](https://attendandexcite.github.io/Attend-and-Excite/), the [original codebase](https://github.com/AttendAndExcite/Attend-and-Excite), or try it out in a [demo](https://huggingface.co/spaces/AttendAndExcite/Attend-and-Excite). - - - -Make sure to check out the Schedulers [guide](/using-diffusers/schedulers) to learn how to explore the tradeoff between scheduler speed and quality, and see the [reuse components across pipelines](/using-diffusers/loading#reuse-components-across-pipelines) section to learn how to efficiently load the same components into multiple pipelines. - - - -## StableDiffusionAttendAndExcitePipeline - -[[autodoc]] StableDiffusionAttendAndExcitePipeline - - all - - __call__ - -## StableDiffusionPipelineOutput - -[[autodoc]] pipelines.stable_diffusion.StableDiffusionPipelineOutput \ No newline at end of file diff --git a/spaces/Andy1621/uniformer_image_detection/configs/point_rend/point_rend_r50_caffe_fpn_mstrain_1x_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/point_rend/point_rend_r50_caffe_fpn_mstrain_1x_coco.py deleted file mode 100644 index 0c0e563d6fe307d05fbd3862cd28b6dc2a3e52b2..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/configs/point_rend/point_rend_r50_caffe_fpn_mstrain_1x_coco.py +++ /dev/null @@ -1,44 +0,0 @@ -_base_ = '../mask_rcnn/mask_rcnn_r50_caffe_fpn_mstrain_1x_coco.py' -# model settings -model = dict( - type='PointRend', - roi_head=dict( - type='PointRendRoIHead', - mask_roi_extractor=dict( - type='GenericRoIExtractor', - aggregation='concat', - roi_layer=dict( - _delete_=True, type='SimpleRoIAlign', output_size=14), - out_channels=256, - featmap_strides=[4]), - mask_head=dict( - _delete_=True, - type='CoarseMaskHead', - num_fcs=2, - in_channels=256, - conv_out_channels=256, - fc_out_channels=1024, - num_classes=80, - loss_mask=dict( - type='CrossEntropyLoss', use_mask=True, loss_weight=1.0)), - point_head=dict( - type='MaskPointHead', - num_fcs=3, - in_channels=256, - fc_channels=256, - num_classes=80, - coarse_pred_each_layer=True, - loss_point=dict( - type='CrossEntropyLoss', use_mask=True, loss_weight=1.0))), - # model training and testing settings - train_cfg=dict( - rcnn=dict( - mask_size=7, - num_points=14 * 14, - oversample_ratio=3, - importance_sample_ratio=0.75)), - test_cfg=dict( - rcnn=dict( - subdivision_steps=5, - subdivision_num_points=28 * 28, - scale_factor=2))) diff --git a/spaces/Andy1621/uniformer_image_detection/mmdet/models/necks/channel_mapper.py b/spaces/Andy1621/uniformer_image_detection/mmdet/models/necks/channel_mapper.py deleted file mode 100644 index a4f5ed44caefb1612df67785b1f4f0d9ec46ee93..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/mmdet/models/necks/channel_mapper.py +++ /dev/null @@ -1,74 +0,0 @@ -import torch.nn as nn -from mmcv.cnn import ConvModule, xavier_init - -from ..builder import NECKS - - -@NECKS.register_module() -class ChannelMapper(nn.Module): - r"""Channel Mapper to reduce/increase channels of backbone features. 
- - This is used to reduce/increase channels of backbone features. - - Args: - in_channels (List[int]): Number of input channels per scale. - out_channels (int): Number of output channels (used at each scale). - kernel_size (int, optional): kernel_size for reducing channels (used - at each scale). Default: 3. - conv_cfg (dict, optional): Config dict for convolution layer. - Default: None. - norm_cfg (dict, optional): Config dict for normalization layer. - Default: None. - act_cfg (dict, optional): Config dict for activation layer in - ConvModule. Default: dict(type='ReLU'). - - Example: - >>> import torch - >>> in_channels = [2, 3, 5, 7] - >>> scales = [340, 170, 84, 43] - >>> inputs = [torch.rand(1, c, s, s) - ... for c, s in zip(in_channels, scales)] - >>> self = ChannelMapper(in_channels, 11, 3).eval() - >>> outputs = self.forward(inputs) - >>> for i in range(len(outputs)): - ... print(f'outputs[{i}].shape = {outputs[i].shape}') - outputs[0].shape = torch.Size([1, 11, 340, 340]) - outputs[1].shape = torch.Size([1, 11, 170, 170]) - outputs[2].shape = torch.Size([1, 11, 84, 84]) - outputs[3].shape = torch.Size([1, 11, 43, 43]) - """ - - def __init__(self, - in_channels, - out_channels, - kernel_size=3, - conv_cfg=None, - norm_cfg=None, - act_cfg=dict(type='ReLU')): - super(ChannelMapper, self).__init__() - assert isinstance(in_channels, list) - - self.convs = nn.ModuleList() - for in_channel in in_channels: - self.convs.append( - ConvModule( - in_channel, - out_channels, - kernel_size, - padding=(kernel_size - 1) // 2, - conv_cfg=conv_cfg, - norm_cfg=norm_cfg, - act_cfg=act_cfg)) - - # default init_weights for conv(msra) and norm in ConvModule - def init_weights(self): - """Initialize the weights of ChannelMapper module.""" - for m in self.modules(): - if isinstance(m, nn.Conv2d): - xavier_init(m, distribution='uniform') - - def forward(self, inputs): - """Forward function.""" - assert len(inputs) == len(self.convs) - outs = [self.convs[i](inputs[i]) for i in range(len(inputs))] - return tuple(outs) diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/ccnet/ccnet_r101-d8_512x512_80k_ade20k.py b/spaces/Andy1621/uniformer_image_segmentation/configs/ccnet/ccnet_r101-d8_512x512_80k_ade20k.py deleted file mode 100644 index 029c1d525b809b61dc8e548ebe4fb26e5c68a8be..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_segmentation/configs/ccnet/ccnet_r101-d8_512x512_80k_ade20k.py +++ /dev/null @@ -1,2 +0,0 @@ -_base_ = './ccnet_r50-d8_512x512_80k_ade20k.py' -model = dict(pretrained='open-mmlab://resnet101_v1c', backbone=dict(depth=101)) diff --git a/spaces/Anonymous-123/ImageNet-Editing/editing_diffusion/guided_diffusion/guided_diffusion/__init__.py b/spaces/Anonymous-123/ImageNet-Editing/editing_diffusion/guided_diffusion/guided_diffusion/__init__.py deleted file mode 100644 index 9665a0d63f695eab303318d824dad14041c7cde9..0000000000000000000000000000000000000000 --- a/spaces/Anonymous-123/ImageNet-Editing/editing_diffusion/guided_diffusion/guided_diffusion/__init__.py +++ /dev/null @@ -1,3 +0,0 @@ -""" -Codebase for "Improved Denoising Diffusion Probabilistic Models". -""" diff --git a/spaces/AquaSuisei/ChatGPTXE/chatgpt - windows.bat b/spaces/AquaSuisei/ChatGPTXE/chatgpt - windows.bat deleted file mode 100644 index 0b78fdc3a559abd692e3a9e9af5e482124d13a99..0000000000000000000000000000000000000000 --- a/spaces/AquaSuisei/ChatGPTXE/chatgpt - windows.bat +++ /dev/null @@ -1,14 +0,0 @@ -@echo off -echo Opening ChuanhuChatGPT... 
- -REM Open powershell via bat -start powershell.exe -NoExit -Command "python ./ChuanhuChatbot.py" - -REM The web page can be accessed with delayed start http://127.0.0.1:7860/ -ping -n 5 127.0.0.1>nul - -REM access chargpt via your default browser -start "" "http://127.0.0.1:7860/" - - -echo Finished opening ChuanhuChatGPT (http://127.0.0.1:7860/). \ No newline at end of file diff --git a/spaces/ArtGAN/Diffusion-API/diffusion_webui/utils/model_list.py b/spaces/ArtGAN/Diffusion-API/diffusion_webui/utils/model_list.py deleted file mode 100644 index 9429ffef11c29aa727764e138af7ac1c4b6db33f..0000000000000000000000000000000000000000 --- a/spaces/ArtGAN/Diffusion-API/diffusion_webui/utils/model_list.py +++ /dev/null @@ -1,25 +0,0 @@ -stable_model_list = [ - "runwayml/stable-diffusion-v1-5", - "SG161222/Realistic_Vision_V2.0", - "stablediffusionapi/cyberrealistic", - "SG161222/Realistic_Vision_V5.1_noVAE", -] - -stable_inpiant_model_list = [ - "kadirnar/Realistic51-Inpaint", - "stabilityai/stable-diffusion-2-inpainting", - "runwayml/stable-diffusion-inpainting", -] - -controlnet_model_list = [ - "lllyasviel/control_v11p_sd15_canny", - "lllyasviel/control_v11f1p_sd15_depth", - "lllyasviel/control_v11p_sd15_openpose", - "lllyasviel/control_v11p_sd15_scribble", - "lllyasviel/control_v11p_sd15_mlsd", - "lllyasviel/control_v11e_sd15_shuffle", - "lllyasviel/control_v11e_sd15_ip2p", - "lllyasviel/control_v11p_sd15_lineart", - "lllyasviel/control_v11p_sd15s2_lineart_anime", - "lllyasviel/control_v11p_sd15_softedge", -] diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/urllib3/contrib/appengine.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/urllib3/contrib/appengine.py deleted file mode 100644 index 1717ee22cdf77849e2e273566c877f95311e691b..0000000000000000000000000000000000000000 --- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/urllib3/contrib/appengine.py +++ /dev/null @@ -1,314 +0,0 @@ -""" -This module provides a pool manager that uses Google App Engine's -`URLFetch Service `_. - -Example usage:: - - from pip._vendor.urllib3 import PoolManager - from pip._vendor.urllib3.contrib.appengine import AppEngineManager, is_appengine_sandbox - - if is_appengine_sandbox(): - # AppEngineManager uses AppEngine's URLFetch API behind the scenes - http = AppEngineManager() - else: - # PoolManager uses a socket-level API behind the scenes - http = PoolManager() - - r = http.request('GET', 'https://google.com/') - -There are `limitations `_ to the URLFetch service and it may not be -the best choice for your application. There are three options for using -urllib3 on Google App Engine: - -1. You can use :class:`AppEngineManager` with URLFetch. URLFetch is - cost-effective in many circumstances as long as your usage is within the - limitations. -2. You can use a normal :class:`~urllib3.PoolManager` by enabling sockets. - Sockets also have `limitations and restrictions - `_ and have a lower free quota than URLFetch. - To use sockets, be sure to specify the following in your ``app.yaml``:: - - env_variables: - GAE_USE_SOCKETS_HTTPLIB : 'true' - -3. If you are using `App Engine Flexible -`_, you can use the standard -:class:`PoolManager` without any configuration or special environment variables. 
-""" - -from __future__ import absolute_import - -import io -import logging -import warnings - -from ..exceptions import ( - HTTPError, - HTTPWarning, - MaxRetryError, - ProtocolError, - SSLError, - TimeoutError, -) -from ..packages.six.moves.urllib.parse import urljoin -from ..request import RequestMethods -from ..response import HTTPResponse -from ..util.retry import Retry -from ..util.timeout import Timeout -from . import _appengine_environ - -try: - from google.appengine.api import urlfetch -except ImportError: - urlfetch = None - - -log = logging.getLogger(__name__) - - -class AppEnginePlatformWarning(HTTPWarning): - pass - - -class AppEnginePlatformError(HTTPError): - pass - - -class AppEngineManager(RequestMethods): - """ - Connection manager for Google App Engine sandbox applications. - - This manager uses the URLFetch service directly instead of using the - emulated httplib, and is subject to URLFetch limitations as described in - the App Engine documentation `here - `_. - - Notably it will raise an :class:`AppEnginePlatformError` if: - * URLFetch is not available. - * If you attempt to use this on App Engine Flexible, as full socket - support is available. - * If a request size is more than 10 megabytes. - * If a response size is more than 32 megabytes. - * If you use an unsupported request method such as OPTIONS. - - Beyond those cases, it will raise normal urllib3 errors. - """ - - def __init__( - self, - headers=None, - retries=None, - validate_certificate=True, - urlfetch_retries=True, - ): - if not urlfetch: - raise AppEnginePlatformError( - "URLFetch is not available in this environment." - ) - - warnings.warn( - "urllib3 is using URLFetch on Google App Engine sandbox instead " - "of sockets. To use sockets directly instead of URLFetch see " - "https://urllib3.readthedocs.io/en/1.26.x/reference/urllib3.contrib.html.", - AppEnginePlatformWarning, - ) - - RequestMethods.__init__(self, headers) - self.validate_certificate = validate_certificate - self.urlfetch_retries = urlfetch_retries - - self.retries = retries or Retry.DEFAULT - - def __enter__(self): - return self - - def __exit__(self, exc_type, exc_val, exc_tb): - # Return False to re-raise any potential exceptions - return False - - def urlopen( - self, - method, - url, - body=None, - headers=None, - retries=None, - redirect=True, - timeout=Timeout.DEFAULT_TIMEOUT, - **response_kw - ): - - retries = self._get_retries(retries, redirect) - - try: - follow_redirects = redirect and retries.redirect != 0 and retries.total - response = urlfetch.fetch( - url, - payload=body, - method=method, - headers=headers or {}, - allow_truncated=False, - follow_redirects=self.urlfetch_retries and follow_redirects, - deadline=self._get_absolute_timeout(timeout), - validate_certificate=self.validate_certificate, - ) - except urlfetch.DeadlineExceededError as e: - raise TimeoutError(self, e) - - except urlfetch.InvalidURLError as e: - if "too large" in str(e): - raise AppEnginePlatformError( - "URLFetch request too large, URLFetch only " - "supports requests up to 10mb in size.", - e, - ) - raise ProtocolError(e) - - except urlfetch.DownloadError as e: - if "Too many redirects" in str(e): - raise MaxRetryError(self, url, reason=e) - raise ProtocolError(e) - - except urlfetch.ResponseTooLargeError as e: - raise AppEnginePlatformError( - "URLFetch response too large, URLFetch only supports" - "responses up to 32mb in size.", - e, - ) - - except urlfetch.SSLCertificateError as e: - raise SSLError(e) - - except urlfetch.InvalidMethodError as e: 
- raise AppEnginePlatformError( - "URLFetch does not support method: %s" % method, e - ) - - http_response = self._urlfetch_response_to_http_response( - response, retries=retries, **response_kw - ) - - # Handle redirect? - redirect_location = redirect and http_response.get_redirect_location() - if redirect_location: - # Check for redirect response - if self.urlfetch_retries and retries.raise_on_redirect: - raise MaxRetryError(self, url, "too many redirects") - else: - if http_response.status == 303: - method = "GET" - - try: - retries = retries.increment( - method, url, response=http_response, _pool=self - ) - except MaxRetryError: - if retries.raise_on_redirect: - raise MaxRetryError(self, url, "too many redirects") - return http_response - - retries.sleep_for_retry(http_response) - log.debug("Redirecting %s -> %s", url, redirect_location) - redirect_url = urljoin(url, redirect_location) - return self.urlopen( - method, - redirect_url, - body, - headers, - retries=retries, - redirect=redirect, - timeout=timeout, - **response_kw - ) - - # Check if we should retry the HTTP response. - has_retry_after = bool(http_response.headers.get("Retry-After")) - if retries.is_retry(method, http_response.status, has_retry_after): - retries = retries.increment(method, url, response=http_response, _pool=self) - log.debug("Retry: %s", url) - retries.sleep(http_response) - return self.urlopen( - method, - url, - body=body, - headers=headers, - retries=retries, - redirect=redirect, - timeout=timeout, - **response_kw - ) - - return http_response - - def _urlfetch_response_to_http_response(self, urlfetch_resp, **response_kw): - - if is_prod_appengine(): - # Production GAE handles deflate encoding automatically, but does - # not remove the encoding header. - content_encoding = urlfetch_resp.headers.get("content-encoding") - - if content_encoding == "deflate": - del urlfetch_resp.headers["content-encoding"] - - transfer_encoding = urlfetch_resp.headers.get("transfer-encoding") - # We have a full response's content, - # so let's make sure we don't report ourselves as chunked data. - if transfer_encoding == "chunked": - encodings = transfer_encoding.split(",") - encodings.remove("chunked") - urlfetch_resp.headers["transfer-encoding"] = ",".join(encodings) - - original_response = HTTPResponse( - # In order for decoding to work, we must present the content as - # a file-like object. - body=io.BytesIO(urlfetch_resp.content), - msg=urlfetch_resp.header_msg, - headers=urlfetch_resp.headers, - status=urlfetch_resp.status_code, - **response_kw - ) - - return HTTPResponse( - body=io.BytesIO(urlfetch_resp.content), - headers=urlfetch_resp.headers, - status=urlfetch_resp.status_code, - original_response=original_response, - **response_kw - ) - - def _get_absolute_timeout(self, timeout): - if timeout is Timeout.DEFAULT_TIMEOUT: - return None # Defer to URLFetch's default. 
- if isinstance(timeout, Timeout): - if timeout._read is not None or timeout._connect is not None: - warnings.warn( - "URLFetch does not support granular timeout settings, " - "reverting to total or default URLFetch timeout.", - AppEnginePlatformWarning, - ) - return timeout.total - return timeout - - def _get_retries(self, retries, redirect): - if not isinstance(retries, Retry): - retries = Retry.from_int(retries, redirect=redirect, default=self.retries) - - if retries.connect or retries.read or retries.redirect: - warnings.warn( - "URLFetch only supports total retries and does not " - "recognize connect, read, or redirect retry parameters.", - AppEnginePlatformWarning, - ) - - return retries - - -# Alias methods from _appengine_environ to maintain public API interface. - -is_appengine = _appengine_environ.is_appengine -is_appengine_sandbox = _appengine_environ.is_appengine_sandbox -is_local_appengine = _appengine_environ.is_local_appengine -is_prod_appengine = _appengine_environ.is_prod_appengine -is_prod_appengine_mvms = _appengine_environ.is_prod_appengine_mvms diff --git a/spaces/B-patents/patent-bert/README.md b/spaces/B-patents/patent-bert/README.md deleted file mode 100644 index f1e4d97c44fa092e35c0f36aad378740593e61ae..0000000000000000000000000000000000000000 --- a/spaces/B-patents/patent-bert/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Patent Bert -emoji: 🔥 -colorFrom: red -colorTo: red -sdk: gradio -sdk_version: 3.19.1 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Banbri/zcvzcv/src/app/interface/panel/bubble.tsx b/spaces/Banbri/zcvzcv/src/app/interface/panel/bubble.tsx deleted file mode 100644 index dad1498e68f6ba79b2fec29fe528657b80b09098..0000000000000000000000000000000000000000 --- a/spaces/Banbri/zcvzcv/src/app/interface/panel/bubble.tsx +++ /dev/null @@ -1,45 +0,0 @@ -import { ReactNode } from "react" - -import { cn } from "@/lib/utils" - -export function Bubble({ - children, - className -}: { - children?: ReactNode - className?: string -}) { - - if (!children) { - return null - } - - return ( -
    -
    -
    - {children} -
    -
    -
    - ) -} \ No newline at end of file diff --git a/spaces/Benson/text-generation/Examples/Brawl Stars Corea Descargar.md b/spaces/Benson/text-generation/Examples/Brawl Stars Corea Descargar.md deleted file mode 100644 index 7546193995a1f617ee2bb0ba47eacc2eee72042e..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/Brawl Stars Corea Descargar.md +++ /dev/null @@ -1,85 +0,0 @@ - -

Play Together Now GG Download: How to Play Online Games for Free on Any Device

    -

Do you love playing online games but hate downloading or installing them? Do you wish you could play your favorite games on any device without compromising quality or performance? Do you want to discover new games and genres that match your preferences and tastes? If you answered yes to any of these questions, you should check out Now GG, a mobile cloud platform that lets you play online games for free with a single click.

    -

What is Now GG?

    -

Now GG is a mobile cloud platform that lets users play online games for free with no downloads or installations. You can access thousands of games across many genres and categories in your web browser from any device with an Internet connection. You can enjoy lag-free performance, cross-device compatibility, and a seamless gaming experience on Now GG. Whether you are a casual or a hardcore gamer, you will find something that suits your style and mood on Now GG.

    -

brawl stars korea download


    Download https://bltlly.com/2v6M15



    -

What is Play Together?

    -

One of the most popular games you can play on Now GG is Play Together, a social simulation game that lets you create and customize your own avatar, explore a virtual world with friends from around the globe, take part in various activities and mini-games, and join clubs and communities. Play Together is a fun, relaxing game that lets you express yourself, make new friends, and have a good time.

    -

How to Play Play Together on Now GG?

    -

Playing Play Together on Now GG is quick and simple. All you need is an Internet connection and a web browser. Here are the steps to follow:

    -
      -
1. Go to the official Now GG website at https://now.gg/.
2. Search for Play Together in the search bar, or browse the categories to find it.
3. Click the Play button to launch the game.
4. Enjoy playing Play Together on Now GG.
    -

Here are some tips and tricks to improve your gaming experience on Now GG:

    -
      -
• You can adjust the game settings, such as graphics quality, sound volume, and language, by clicking the gear icon in the top-right corner of the screen.
• You can use keyboard shortcuts to control the game, such as the WASD keys to move, the spacebar to jump, and the mouse to interact.
• You can save your progress by creating an account on Now GG or by linking your Facebook or Google account.
• You can invite your friends to play with you by sharing the game link or using the QR code feature.
    -

Why Should You Play Play Together on Now GG?

    -

There are many advantages and reasons to play Play Together on Now GG instead of on other platforms or devices. Here are some of them:

    -
      -
• You can play Play Together for free with no downloads or installations. This saves you time, storage space, and money.
• You can play Play Together on any device, such as a PC, laptop, tablet, or smartphone. This gives you flexibility and convenience.
• You can play Play Together with high-quality graphics and smooth performance. This improves your immersion and enjoyment.
• You can play Play Together with other players from all over the world. This expands your social network and interaction.
    -

Of course, playing Play Together on Now GG is not exactly the same as playing it on other platforms or devices. There are some differences and similarities you should keep in mind. Here are some of them:

| Now GG | Other platforms or devices |
| --- | --- |
| No downloads or installations required | Downloads or installations required |
| No in-app purchases or ads | In-app purchases or ads |
| No device limitations or restrictions | Device limitations or restrictions |
| No risk of data loss or corruption | Risk of data loss or corruption |
| No offline mode available | Offline mode available |
| No controller support available | Controller support available |
| No chat feature available | Chat feature available |

What Are Some Other Games You Can Play on Now GG?

    -

Play Together is not the only game you can play on Now GG. There are many other games from different genres and categories that you can enjoy on the platform. Whether you like action, adventure, puzzle, strategy, simulation, or casual games, you will find something that matches your interests and mood on Now GG. Here are some examples of games you can play on Now GG:

| Genre | Category | Game |
| --- | --- | --- |
| Action | Shooter | Call of Duty: Mobile |
| Action | Fighting | Mortal Kombat X |
| Action | Racing | Asphalt 9: Legends |
| Adventure | Role-playing | Genshin Impact |
| Adventure | Sandbox | Minecraft |
| Adventure | Survival | PUBG Mobile |
| Puzzle | Logic | Sudoku Master |
| Puzzle | Word | Wordscapes |
| Puzzle | Match-3 | Candy Crush Saga |
| Strategy | Tower defense | Bloons TD 6 |
| Strategy | Card game | Hearthstone |
| Simulation | Life simulation | The Sims Mobile |
| Simulation | City building | SimCity BuildIt |
| Simulation | Farming | Hay Day |
| Casual | Idle game | Cookie Clicker |
| Casual | Trivia game | Trivia Crack 2 |
| Casual | Coloring game | Happy Color - Color by Number |

Conclusion

    - -

Frequently Asked Questions

    -

Here are some frequently asked questions and answers about Play Together on Now GG:

    -

    -
      -
    1. Q: ¿Cómo puedo jugar a Play Together on Now GG con mis amigos?
    2. -
    3. A: Puedes invitar a tus amigos a jugar contigo compartiendo el enlace del juego o usando la función de código QR. También puede unirse al mismo servidor que sus amigos seleccionándolo de la lista de servidores. También puedes añadir a tus amigos como contactos en el juego y chatear con ellos.
    4. -
    5. Q: ¿Cómo puedo personalizar mi avatar en Play Together on Now GG?
    6. -
    7. A: Puedes personalizar tu avatar haciendo clic en el icono del armario en la esquina inferior izquierda de la pantalla. Puedes cambiar el cabello, la cara, la piel, la ropa, los accesorios y más de tu avatar. También puedes comprar nuevos artículos en la tienda usando monedas o gemas.
    8. -
    9. Q: ¿Cómo puedo ganar monedas y gemas en Play Together on Now GG?
    10. -
    11. A: Puedes ganar monedas y gemas completando misiones, participando en minijuegos, uniéndote a eventos, viendo anuncios o comprándolos con dinero real.
    12. -
    13. Q: ¿Cómo puedo unirme a clubes y comunidades en Play Together on Now GG?
    14. -
    15. A: Puede unirse a clubes y comunidades haciendo clic en el icono del club en la esquina inferior derecha de la pantalla. Puede buscar clubes y comunidades existentes por nombre o categoría, o crear su propio club o comunidad. También puede chatear con otros miembros, compartir fotos y unirse a las actividades del club.
    16. -
    17. Q: ¿Cómo puedo reportar un error o un problema en Play Together on Now GG?
    18. -
    19. A: Puede reportar un error o un problema haciendo clic en el icono de configuración en la esquina superior derecha de la pantalla y seleccionando la opción de informe. También puede ponerse en contacto con el servicio al cliente de Now GG o Play Together a través de sus sitios web oficiales o canales de redes sociales.
    20. -

    -
    -
    \ No newline at end of file diff --git a/spaces/Benson/text-generation/Examples/Crear El Mundo Android Apk Descargar.md b/spaces/Benson/text-generation/Examples/Crear El Mundo Android Apk Descargar.md deleted file mode 100644 index 00ec99b2129ec2ad602f792b3c7bc69a0f20e5f7..0000000000000000000000000000000000000000 --- a/spaces/Benson/text-generation/Examples/Crear El Mundo Android Apk Descargar.md +++ /dev/null @@ -1,115 +0,0 @@ -
    -

Craft The World: A Unique Sandbox Strategy Game for Android Devices

    -

If you are looking for a fun and challenging game that combines elements of the sandbox, strategy, crafting, and simulation genres, then you may want to try Craft The World. This game lets you control a tribe of dwarves in a randomly generated world full of dangers and treasures. You can explore, craft, build, and fight your way through different biomes and levels while unlocking new technologies and items. You can also play with your friends and other players online in multiplayer modes, or create your own custom worlds and share them with others. In this article, we will give you an overview of Craft The World, its main features, gameplay, multiplayer, how it compares with other similar games, and some tips and tricks for beginners.

    -

What is Craft The World?

    -

Craft The World is a unique sandbox strategy game developed by Dekovir Entertainment and published by Black Maple Games. It was released for PC in 2014 and later ported to iOS and Android devices. The game is inspired by titles such as Dungeon Keeper, Terraria, and Dwarf Fortress. It has a pixelated art style and a cheerful soundtrack that contrast with the dark and dangerous world you have to survive in.

    -

craft the world android apk download


    Download ••• https://bltlly.com/2v6Laf



    -

What are the main features of Craft The World?

    -

Some of the main features of Craft The World are:

    -
- GOD SIMULATION: You control a tribe of dwarves by giving them orders to dig, attack, build, and more. You have to provide them with food, clothing, and magic while fighting off other creatures. You start with one dwarf and gain more as you level up.
- SANDBOX GAME: Each game level has many layers of earth to explore, from the sky down to the lava. The level is randomly generated as an island with natural boundaries. Worlds differ in size, humidity, temperature, terrain, flora, and fauna. There are also hidden halls and rooms with treasure.
- RTS: You have to defend your base from waves of enemies that attack at night or during special events. You can use traps, turrets, walls, doors, spells, and your dwarves' abilities to fend them off. You can also raid enemy bases for loot.
    -

How to download and install Craft The World on Android devices?

    -

To download and install Craft The World on your Android device, follow these steps:

    -
1. Go to [1](https://apkpure.com/craft-the-world/com.dekovir.CraftTheWorld) or [2](https://play.google.com/store/apps/details?id=com.dekovir.CraftTheWorld) in your device's browser.
2. Tap the "Download APK" or "Install" button.
3. Wait for the download to finish.
4. Open the downloaded file, or go to your device's Settings > Security > Unknown sources > Allow installation of apps from unknown sources.
5. Tap "Install" and wait for the installation to finish.
6. Launch the game and enjoy!

If you prefer to sideload the APK from a computer instead, see the optional command-line sketch after this list.
    -
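As an optional alternative to steps 4 and 5, the sideload can be scripted from a computer. The sketch below is a minimal example, not part of the official instructions: it assumes the Android platform tools (`adb`) are installed and on your PATH, USB debugging is enabled on the device, and the APK from step 2 has already been downloaded; the local file name is a placeholder, not an official release name.

```python
import subprocess

# Placeholder path to the APK downloaded in step 2 above.
APK_PATH = "craft-the-world.apk"

# Sideload the APK onto the connected device. The -r flag reinstalls the app
# while keeping its data if it is already present. Requires adb on PATH and
# USB debugging enabled on the phone.
subprocess.run(["adb", "install", "-r", APK_PATH], check=True)
```

If more than one device is connected, pass `-s <serial>` to adb (before `install`) to pick the target device.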

How do you play Craft The World?

    -

Craft The World combines different genres and mechanics, so it may take some time to get used to. Here are some basic tips on how to play the game:

    -

How to control a tribe of dwarves in a sandbox world?

    -

You can control your dwarves by tapping on them and selecting an action from the menu, such as move, dig, build, or attack. You can also drag and drop items from your inventory onto your dwarves or the environment. The buttons at the bottom of the screen let you select all dwarves, pause the game, speed up the game, or access the menu. You can zoom in and out, and rotate the camera by swiping the screen.

    -

How to explore, craft, build, and fight?

    - -

How to progress through the tech tree and unlock new items?

    -

You progress through the tech tree by crafting items related to a technology. For example, if you want to unlock the farming technology, you need to craft a wooden hoe, a wooden bucket, a wooden fence, and so on. Each technology has a progress bar that shows how much you have crafted; when you fill the bar, you unlock new recipes and items. You can also find books in chests or shops that give you instant access to a technology.

    -

    -

How to play multiplayer in Craft The World?

    -

Craft The World also has a multiplayer mode that lets you play with your friends and other players online. Here are some things you need to know about multiplayer:

    -

How to play with friends and other players online?

    -

You can play with friends and other players online using the multiplayer menu on the main screen. You can choose between survival mode and creative mode. In survival mode, you have to survive against enemies and hunger with limited resources; in creative mode, you have unlimited resources and no enemies. You can also choose between cooperative mode and competitive mode. In cooperative mode, you work together with other players toward a common goal; in competitive mode, you compete against other players for resources and territory.

    -

What are the differences between survival and creative modes?

    -

The differences between survival and creative modes are:

| Survival mode | Creative mode |
| --- | --- |
| You have limited resources and inventory space. | You have unlimited resources and inventory space. |
| You have to eat and drink water to survive. | You do not have to eat or drink water. |
| You have to deal with enemies and environmental hazards. | You have no enemies or environmental hazards. |
| You have a level system that determines your number of dwarves and spells. | You have no level system and can spawn as many dwarves and spells as you want. |

How to customize your own worlds and share them with others?

    -

You can customize your own worlds and share them with others using the world editor mode. You can access this mode by pressing the "Create World" button in the multiplayer menu. You can choose your world's size, biome, terrain, flora, fauna, resources, structures, enemies, events, and scenarios, and you can place blocks, objects, creatures, traps, portals, and more wherever you want. You can save your world as a file and share it with others via email or social media. You can also download worlds from other players from [3](https://craft-the-world.com/worlds) or [4](https://steamcommunity.com/app/248390/workshop/) and play them on your device.

    -

How does Craft The World compare to other similar games?

    -

Craft The World has many similarities with other games in the sandbox, strategy, crafting, and simulation genres. However, it also has some unique features and aspects that make it stand out from the rest. Here are some comparisons between Craft The World and other similar games:

    -

How does Craft The World compare to Terraria?

    -

Both are 2D sandbox games that let you explore, craft, build, and fight in a randomly generated world. However, there are some differences between them:

    -
- Terraria focuses more on combat and exploration, while Craft The World focuses more on strategy and simulation.
- Terraria has more variety and depth in terms of items, enemies, biomes, bosses, events, and so on, while Craft The World has more simplicity and accessibility in terms of gameplay and interface.
- Terraria has a more dynamic and interactive world, while Craft The World has a more static, grid-based world.
    -

How does Craft The World compare to Minecraft?

    -

Both are sandbox games that let you create and modify the world with blocks. However, there are some differences between them:

    -
- Minecraft is more open-ended and creative, while Craft The World is more structured and goal-oriented.
- Minecraft offers more freedom and flexibility in building and crafting, while Craft The World has more limitations and restrictions in terms of resources and recipes.
- Minecraft has a more realistic and minimalist graphic style, while Craft The World has a more cartoonish and detailed graphic style.
- Minecraft has a more immersive, first-person perspective, while Craft The World has a more detached, third-person perspective.
    -

How does Craft The World compare to Dwarf Fortress?

    -

Both are complex and challenging simulation games that let you manage a colony of dwarves in a procedurally generated world. However, there are some differences between them:

    -
- Dwarf Fortress is more hardcore and realistic, while Craft The World is more casual and fantasy-based.
- Dwarf Fortress has more depth and detail in terms of mechanics, systems, and features, while Craft The World has more simplicity and clarity in terms of gameplay and interface.
- Dwarf Fortress has a more abstract, ASCII-based graphic style, while Craft The World has a more concrete, pixel-based graphic style.
- Dwarf Fortress has more emergent and unpredictable gameplay, while Craft The World plays more predictably.
    -

Conclusion

    - -

In my opinion, Craft The World is a fun and challenging game that offers a lot of replay value and variety. I like the mix of genres and mechanics that makes the game interesting and engaging, and I also like that the game is constantly updated with new content and features. I think anyone who enjoys sandbox, strategy, crafting, or simulation games would enjoy playing Craft The World.

    -

If you are interested in playing Craft The World on your Android device, here are some tips and tricks for beginners:

    -
- Start with the tutorial mode to learn the basics of the game.
- Use the help button in the top-right corner of the screen to access the wiki, the forum, and the guide.
- Plan ahead and prioritize your tasks and goals.
- Keep your dwarves happy and healthy by providing them with food, water, beds, light, and so on.
- Use the pause and fast-forward buttons to manage your time and resources.
- Save your game frequently and use multiple slots.
- Experiment with different items, blocks, spells, and strategies.
- Have fun and be creative!
    -

Frequently Asked Questions

    -

Here are some frequently asked questions about Craft The World:

    -

What are the system requirements for Craft The World on Android devices?

    -

The system requirements for Craft The World on Android devices are:

    -
- Android 4.4 or higher
- 1 GB of RAM or more
- 300 MB of free storage space or more
- A stable internet connection for multiplayer mode
    -

How much does Craft The World cost on Android devices?

    -

Craft The World costs $4.99 on Android devices. You can also buy additional content and features as in-app purchases, such as DLC, skins, coins, and so on.

    -

Is Craft The World a free game?

No. As noted above, Craft The World is a paid game that costs $4.99 on Android devices, with optional in-app purchases for additional content.

Is Craft The World updated regularly with new content and features?

    -

Yes, Craft The World is updated regularly with new content and features. The developers are constantly working to improve the game and add new biomes, items, enemies, modes, and so on. You can check the update history and roadmap at [5](https://steamcommunity.com/app/248390/announcements/) or [6](https://craft-the-world.com/news).

    -

Where can I find more information and guides about Craft The World?

    -

You can find more information and guides about Craft The World on these websites:

    -
- [7](https://crafttheworld.gamepedia.com/Craft_The_World_Wiki) - The game's official wiki.
- [8](https://steamcommunity.com/app/248390/guides/) - The game's Steam community guides.
- [9](https://www.youtube.com/results?search_query=craftǐthe,) - YouTube videos about the game.
    -

I hope you enjoyed this article about Craft The World. If you have any questions or comments, please leave a comment below. Thanks for reading!

    -
    -
    \ No newline at end of file diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/boto3/docs/action.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/boto3/docs/action.py deleted file mode 100644 index 09213554705d913a1e8b68e860d7876a2d04b76a..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/boto3/docs/action.py +++ /dev/null @@ -1,197 +0,0 @@ -# Copyright 2015 Amazon.com, Inc. or its affiliates. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"). You -# may not use this file except in compliance with the License. A copy of -# the License is located at -# -# https://aws.amazon.com/apache2.0/ -# -# or in the "license" file accompanying this file. This file is -# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF -# ANY KIND, either express or implied. See the License for the specific -# language governing permissions and limitations under the License. -import os - -from botocore import xform_name -from botocore.docs.bcdoc.restdoc import DocumentStructure -from botocore.docs.method import ( - document_custom_method, - document_model_driven_method, -) -from botocore.model import OperationModel -from botocore.utils import get_service_module_name - -from boto3.docs.base import NestedDocumenter -from boto3.docs.method import document_model_driven_resource_method -from boto3.docs.utils import ( - add_resource_type_overview, - get_resource_ignore_params, - get_resource_public_actions, -) - - -class ActionDocumenter(NestedDocumenter): - def document_actions(self, section): - modeled_actions_list = self._resource_model.actions - modeled_actions = {} - for modeled_action in modeled_actions_list: - modeled_actions[modeled_action.name] = modeled_action - resource_actions = get_resource_public_actions( - self._resource.__class__ - ) - self.member_map['actions'] = sorted(resource_actions) - add_resource_type_overview( - section=section, - resource_type='Actions', - description=( - 'Actions call operations on resources. They may ' - 'automatically handle the passing in of arguments set ' - 'from identifiers and some attributes.' - ), - intro_link='actions_intro', - ) - - for action_name in sorted(resource_actions): - # Create a new DocumentStructure for each action and add contents. - action_doc = DocumentStructure(action_name, target='html') - breadcrumb_section = action_doc.add_new_section('breadcrumb') - breadcrumb_section.style.ref(self._resource_class_name, 'index') - breadcrumb_section.write(f' / Action / {action_name}') - action_doc.add_title_section(action_name) - action_section = action_doc.add_new_section( - action_name, - context={'qualifier': f'{self.class_name}.'}, - ) - if action_name in ['load', 'reload'] and self._resource_model.load: - document_load_reload_action( - section=action_section, - action_name=action_name, - resource_name=self._resource_name, - event_emitter=self._resource.meta.client.meta.events, - load_model=self._resource_model.load, - service_model=self._service_model, - ) - elif action_name in modeled_actions: - document_action( - section=action_section, - resource_name=self._resource_name, - event_emitter=self._resource.meta.client.meta.events, - action_model=modeled_actions[action_name], - service_model=self._service_model, - ) - else: - document_custom_method( - action_section, action_name, resource_actions[action_name] - ) - # Write actions in individual/nested files. 
- # Path: /reference/services///.rst - actions_dir_path = os.path.join( - self._root_docs_path, - f'{self._service_name}', - f'{self._resource_sub_path}', - ) - action_doc.write_to_file(actions_dir_path, action_name) - - -def document_action( - section, - resource_name, - event_emitter, - action_model, - service_model, - include_signature=True, -): - """Documents a resource action - - :param section: The section to write to - - :param resource_name: The name of the resource - - :param event_emitter: The event emitter to use to emit events - - :param action_model: The model of the action - - :param service_model: The model of the service - - :param include_signature: Whether or not to include the signature. - It is useful for generating docstrings. - """ - operation_model = service_model.operation_model( - action_model.request.operation - ) - ignore_params = get_resource_ignore_params(action_model.request.params) - - example_return_value = 'response' - if action_model.resource: - example_return_value = xform_name(action_model.resource.type) - example_resource_name = xform_name(resource_name) - if service_model.service_name == resource_name: - example_resource_name = resource_name - example_prefix = '{} = {}.{}'.format( - example_return_value, example_resource_name, action_model.name - ) - full_action_name = ( - f"{section.context.get('qualifier', '')}{action_model.name}" - ) - document_model_driven_resource_method( - section=section, - method_name=full_action_name, - operation_model=operation_model, - event_emitter=event_emitter, - method_description=operation_model.documentation, - example_prefix=example_prefix, - exclude_input=ignore_params, - resource_action_model=action_model, - include_signature=include_signature, - ) - - -def document_load_reload_action( - section, - action_name, - resource_name, - event_emitter, - load_model, - service_model, - include_signature=True, -): - """Documents the resource load action - - :param section: The section to write to - - :param action_name: The name of the loading action should be load or reload - - :param resource_name: The name of the resource - - :param event_emitter: The event emitter to use to emit events - - :param load_model: The model of the load action - - :param service_model: The model of the service - - :param include_signature: Whether or not to include the signature. - It is useful for generating docstrings. - """ - description = ( - 'Calls :py:meth:`{}.Client.{}` to update the attributes of the ' - '{} resource. 
Note that the load and reload methods are ' - 'the same method and can be used interchangeably.'.format( - get_service_module_name(service_model), - xform_name(load_model.request.operation), - resource_name, - ) - ) - example_resource_name = xform_name(resource_name) - if service_model.service_name == resource_name: - example_resource_name = resource_name - example_prefix = f'{example_resource_name}.{action_name}' - full_action_name = f"{section.context.get('qualifier', '')}{action_name}" - document_model_driven_method( - section=section, - method_name=full_action_name, - operation_model=OperationModel({}, service_model), - event_emitter=event_emitter, - method_description=description, - example_prefix=example_prefix, - include_signature=include_signature, - ) diff --git a/spaces/Bravefe/Artist_Classification/app.py b/spaces/Bravefe/Artist_Classification/app.py deleted file mode 100644 index a8aba829a45ab1eacc9aa9f86eb0bfcd030b2fc8..0000000000000000000000000000000000000000 --- a/spaces/Bravefe/Artist_Classification/app.py +++ /dev/null @@ -1,17 +0,0 @@ -import gradio as gr -import pickle -from fastai.learner import load_learner - -learn = load_learner('/home/user/app/ai_builder1.1.pkl') -learn1 = load_learner('/home/user/app/export.pkl') - -def greet(image): - pred, pred_idx, probs = learn.predict(image) - pred2, pred_idx2, probs2 = learn1.predict(image) - float = probs[pred_idx]*100 - float2 = probs2[pred_idx2]*100 - txt = f'({pred2} {float2:.02f}%) Artist: {pred} Probability: {float:.02f}%' - return txt - -iface = gr.Interface(fn=greet, inputs="image", outputs="label") -iface.launch() \ No newline at end of file diff --git a/spaces/CALM/Dashboard/streamlit_observable/frontend/build/static/js/runtime-main.11ec9aca.js b/spaces/CALM/Dashboard/streamlit_observable/frontend/build/static/js/runtime-main.11ec9aca.js deleted file mode 100644 index 5e161e38aff1f83dc74722eb103c32f930808ffe..0000000000000000000000000000000000000000 --- a/spaces/CALM/Dashboard/streamlit_observable/frontend/build/static/js/runtime-main.11ec9aca.js +++ /dev/null @@ -1,2 +0,0 @@ -!function(e){function t(t){for(var n,l,a=t[0],p=t[1],i=t[2],c=0,s=[];c -#include - -PYBIND11_NAMESPACE_BEGIN(PYBIND11_NAMESPACE) -PYBIND11_NAMESPACE_BEGIN(detail) - -/* SFINAE helper class used by 'is_comparable */ -template struct container_traits { - template static std::true_type test_comparable(decltype(std::declval() == std::declval())*); - template static std::false_type test_comparable(...); - template static std::true_type test_value(typename T2::value_type *); - template static std::false_type test_value(...); - template static std::true_type test_pair(typename T2::first_type *, typename T2::second_type *); - template static std::false_type test_pair(...); - - static constexpr const bool is_comparable = std::is_same(nullptr))>::value; - static constexpr const bool is_pair = std::is_same(nullptr, nullptr))>::value; - static constexpr const bool is_vector = std::is_same(nullptr))>::value; - static constexpr const bool is_element = !is_pair && !is_vector; -}; - -/* Default: is_comparable -> std::false_type */ -template -struct is_comparable : std::false_type { }; - -/* For non-map data structures, check whether operator== can be instantiated */ -template -struct is_comparable< - T, enable_if_t::is_element && - container_traits::is_comparable>> - : std::true_type { }; - -/* For a vector/map data structure, recursively check the value type (which is std::pair for maps) */ -template -struct is_comparable::is_vector>> { - static constexpr 
const bool value = - is_comparable::value; -}; - -/* For pairs, recursively check the two data types */ -template -struct is_comparable::is_pair>> { - static constexpr const bool value = - is_comparable::value && - is_comparable::value; -}; - -/* Fallback functions */ -template void vector_if_copy_constructible(const Args &...) { } -template void vector_if_equal_operator(const Args &...) { } -template void vector_if_insertion_operator(const Args &...) { } -template void vector_modifiers(const Args &...) { } - -template -void vector_if_copy_constructible(enable_if_t::value, Class_> &cl) { - cl.def(init(), "Copy constructor"); -} - -template -void vector_if_equal_operator(enable_if_t::value, Class_> &cl) { - using T = typename Vector::value_type; - - cl.def(self == self); - cl.def(self != self); - - cl.def("count", - [](const Vector &v, const T &x) { - return std::count(v.begin(), v.end(), x); - }, - arg("x"), - "Return the number of times ``x`` appears in the list" - ); - - cl.def("remove", [](Vector &v, const T &x) { - auto p = std::find(v.begin(), v.end(), x); - if (p != v.end()) - v.erase(p); - else - throw value_error(); - }, - arg("x"), - "Remove the first item from the list whose value is x. " - "It is an error if there is no such item." - ); - - cl.def("__contains__", - [](const Vector &v, const T &x) { - return std::find(v.begin(), v.end(), x) != v.end(); - }, - arg("x"), - "Return true the container contains ``x``" - ); -} - -// Vector modifiers -- requires a copyable vector_type: -// (Technically, some of these (pop and __delitem__) don't actually require copyability, but it seems -// silly to allow deletion but not insertion, so include them here too.) -template -void vector_modifiers(enable_if_t::value, Class_> &cl) { - using T = typename Vector::value_type; - using SizeType = typename Vector::size_type; - using DiffType = typename Vector::difference_type; - - auto wrap_i = [](DiffType i, SizeType n) { - if (i < 0) - i += n; - if (i < 0 || (SizeType)i >= n) - throw index_error(); - return i; - }; - - cl.def("append", - [](Vector &v, const T &value) { v.push_back(value); }, - arg("x"), - "Add an item to the end of the list"); - - cl.def(init([](iterable it) { - auto v = std::unique_ptr(new Vector()); - v->reserve(len_hint(it)); - for (handle h : it) - v->push_back(h.cast()); - return v.release(); - })); - - cl.def("clear", - [](Vector &v) { - v.clear(); - }, - "Clear the contents" - ); - - cl.def("extend", - [](Vector &v, const Vector &src) { - v.insert(v.end(), src.begin(), src.end()); - }, - arg("L"), - "Extend the list by appending all the items in the given list" - ); - - cl.def("extend", - [](Vector &v, iterable it) { - const size_t old_size = v.size(); - v.reserve(old_size + len_hint(it)); - try { - for (handle h : it) { - v.push_back(h.cast()); - } - } catch (const cast_error &) { - v.erase(v.begin() + static_cast(old_size), v.end()); - try { - v.shrink_to_fit(); - } catch (const std::exception &) { - // Do nothing - } - throw; - } - }, - arg("L"), - "Extend the list by appending all the items in the given list" - ); - - cl.def("insert", - [](Vector &v, DiffType i, const T &x) { - // Can't use wrap_i; i == v.size() is OK - if (i < 0) - i += v.size(); - if (i < 0 || (SizeType)i > v.size()) - throw index_error(); - v.insert(v.begin() + i, x); - }, - arg("i") , arg("x"), - "Insert an item at a given position." 
- ); - - cl.def("pop", - [](Vector &v) { - if (v.empty()) - throw index_error(); - T t = v.back(); - v.pop_back(); - return t; - }, - "Remove and return the last item" - ); - - cl.def("pop", - [wrap_i](Vector &v, DiffType i) { - i = wrap_i(i, v.size()); - T t = v[(SizeType) i]; - v.erase(v.begin() + i); - return t; - }, - arg("i"), - "Remove and return the item at index ``i``" - ); - - cl.def("__setitem__", - [wrap_i](Vector &v, DiffType i, const T &t) { - i = wrap_i(i, v.size()); - v[(SizeType)i] = t; - } - ); - - /// Slicing protocol - cl.def("__getitem__", - [](const Vector &v, slice slice) -> Vector * { - size_t start, stop, step, slicelength; - - if (!slice.compute(v.size(), &start, &stop, &step, &slicelength)) - throw error_already_set(); - - Vector *seq = new Vector(); - seq->reserve((size_t) slicelength); - - for (size_t i=0; ipush_back(v[start]); - start += step; - } - return seq; - }, - arg("s"), - "Retrieve list elements using a slice object" - ); - - cl.def("__setitem__", - [](Vector &v, slice slice, const Vector &value) { - size_t start, stop, step, slicelength; - if (!slice.compute(v.size(), &start, &stop, &step, &slicelength)) - throw error_already_set(); - - if (slicelength != value.size()) - throw std::runtime_error("Left and right hand size of slice assignment have different sizes!"); - - for (size_t i=0; i), -// we have to access by copying; otherwise we return by reference. -template using vector_needs_copy = negation< - std::is_same()[typename Vector::size_type()]), typename Vector::value_type &>>; - -// The usual case: access and iterate by reference -template -void vector_accessor(enable_if_t::value, Class_> &cl) { - using T = typename Vector::value_type; - using SizeType = typename Vector::size_type; - using DiffType = typename Vector::difference_type; - using ItType = typename Vector::iterator; - - auto wrap_i = [](DiffType i, SizeType n) { - if (i < 0) - i += n; - if (i < 0 || (SizeType)i >= n) - throw index_error(); - return i; - }; - - cl.def("__getitem__", - [wrap_i](Vector &v, DiffType i) -> T & { - i = wrap_i(i, v.size()); - return v[(SizeType)i]; - }, - return_value_policy::reference_internal // ref + keepalive - ); - - cl.def("__iter__", - [](Vector &v) { - return make_iterator< - return_value_policy::reference_internal, ItType, ItType, T&>( - v.begin(), v.end()); - }, - keep_alive<0, 1>() /* Essential: keep list alive while iterator exists */ - ); -} - -// The case for special objects, like std::vector, that have to be returned-by-copy: -template -void vector_accessor(enable_if_t::value, Class_> &cl) { - using T = typename Vector::value_type; - using SizeType = typename Vector::size_type; - using DiffType = typename Vector::difference_type; - using ItType = typename Vector::iterator; - cl.def("__getitem__", - [](const Vector &v, DiffType i) -> T { - if (i < 0 && (i += v.size()) < 0) - throw index_error(); - if ((SizeType)i >= v.size()) - throw index_error(); - return v[(SizeType)i]; - } - ); - - cl.def("__iter__", - [](Vector &v) { - return make_iterator< - return_value_policy::copy, ItType, ItType, T>( - v.begin(), v.end()); - }, - keep_alive<0, 1>() /* Essential: keep list alive while iterator exists */ - ); -} - -template auto vector_if_insertion_operator(Class_ &cl, std::string const &name) - -> decltype(std::declval() << std::declval(), void()) { - using size_type = typename Vector::size_type; - - cl.def("__repr__", - [name](Vector &v) { - std::ostringstream s; - s << name << '['; - for (size_type i=0; i < v.size(); ++i) { - s << v[i]; - if (i != 
v.size() - 1) - s << ", "; - } - s << ']'; - return s.str(); - }, - "Return the canonical string representation of this list." - ); -} - -// Provide the buffer interface for vectors if we have data() and we have a format for it -// GCC seems to have "void std::vector::data()" - doing SFINAE on the existence of data() is insufficient, we need to check it returns an appropriate pointer -template -struct vector_has_data_and_format : std::false_type {}; -template -struct vector_has_data_and_format::format(), std::declval().data()), typename Vector::value_type*>::value>> : std::true_type {}; - -// Add the buffer interface to a vector -template -enable_if_t...>::value> -vector_buffer(Class_& cl) { - using T = typename Vector::value_type; - - static_assert(vector_has_data_and_format::value, "There is not an appropriate format descriptor for this vector"); - - // numpy.h declares this for arbitrary types, but it may raise an exception and crash hard at runtime if PYBIND11_NUMPY_DTYPE hasn't been called, so check here - format_descriptor::format(); - - cl.def_buffer([](Vector& v) -> buffer_info { - return buffer_info(v.data(), static_cast(sizeof(T)), format_descriptor::format(), 1, {v.size()}, {sizeof(T)}); - }); - - cl.def(init([](buffer buf) { - auto info = buf.request(); - if (info.ndim != 1 || info.strides[0] % static_cast(sizeof(T))) - throw type_error("Only valid 1D buffers can be copied to a vector"); - if (!detail::compare_buffer_info::compare(info) || (ssize_t) sizeof(T) != info.itemsize) - throw type_error("Format mismatch (Python: " + info.format + " C++: " + format_descriptor::format() + ")"); - - T *p = static_cast(info.ptr); - ssize_t step = info.strides[0] / static_cast(sizeof(T)); - T *end = p + info.shape[0] * step; - if (step == 1) { - return Vector(p, end); - } - else { - Vector vec; - vec.reserve((size_t) info.shape[0]); - for (; p != end; p += step) - vec.push_back(*p); - return vec; - } - })); - - return; -} - -template -enable_if_t...>::value> vector_buffer(Class_&) {} - -PYBIND11_NAMESPACE_END(detail) - -// -// std::vector -// -template , typename... Args> -class_ bind_vector(handle scope, std::string const &name, Args&&... args) { - using Class_ = class_; - - // If the value_type is unregistered (e.g. 
a converting type) or is itself registered - // module-local then make the vector binding module-local as well: - using vtype = typename Vector::value_type; - auto vtype_info = detail::get_type_info(typeid(vtype)); - bool local = !vtype_info || vtype_info->module_local; - - Class_ cl(scope, name.c_str(), pybind11::module_local(local), std::forward(args)...); - - // Declare the buffer interface if a buffer_protocol() is passed in - detail::vector_buffer(cl); - - cl.def(init<>()); - - // Register copy constructor (if possible) - detail::vector_if_copy_constructible(cl); - - // Register comparison-related operators and functions (if possible) - detail::vector_if_equal_operator(cl); - - // Register stream insertion operator (if possible) - detail::vector_if_insertion_operator(cl, name); - - // Modifiers require copyable vector value type - detail::vector_modifiers(cl); - - // Accessor and iterator; return by value if copyable, otherwise we return by ref + keep-alive - detail::vector_accessor(cl); - - cl.def("__bool__", - [](const Vector &v) -> bool { - return !v.empty(); - }, - "Check whether the list is nonempty" - ); - - cl.def("__len__", &Vector::size); - - - - -#if 0 - // C++ style functions deprecated, leaving it here as an example - cl.def(init()); - - cl.def("resize", - (void (Vector::*) (size_type count)) & Vector::resize, - "changes the number of elements stored"); - - cl.def("erase", - [](Vector &v, SizeType i) { - if (i >= v.size()) - throw index_error(); - v.erase(v.begin() + i); - }, "erases element at index ``i``"); - - cl.def("empty", &Vector::empty, "checks whether the container is empty"); - cl.def("size", &Vector::size, "returns the number of elements"); - cl.def("push_back", (void (Vector::*)(const T&)) &Vector::push_back, "adds an element to the end"); - cl.def("pop_back", &Vector::pop_back, "removes the last element"); - - cl.def("max_size", &Vector::max_size, "returns the maximum possible number of elements"); - cl.def("reserve", &Vector::reserve, "reserves storage"); - cl.def("capacity", &Vector::capacity, "returns the number of elements that can be held in currently allocated storage"); - cl.def("shrink_to_fit", &Vector::shrink_to_fit, "reduces memory usage by freeing unused memory"); - - cl.def("clear", &Vector::clear, "clears the contents"); - cl.def("swap", &Vector::swap, "swaps the contents"); - - cl.def("front", [](Vector &v) { - if (v.size()) return v.front(); - else throw index_error(); - }, "access the first element"); - - cl.def("back", [](Vector &v) { - if (v.size()) return v.back(); - else throw index_error(); - }, "access the last element "); - -#endif - - return cl; -} - - - -// -// std::map, std::unordered_map -// - -PYBIND11_NAMESPACE_BEGIN(detail) - -/* Fallback functions */ -template void map_if_insertion_operator(const Args &...) { } -template void map_assignment(const Args &...) 
{ } - -// Map assignment when copy-assignable: just copy the value -template -void map_assignment(enable_if_t::value, Class_> &cl) { - using KeyType = typename Map::key_type; - using MappedType = typename Map::mapped_type; - - cl.def("__setitem__", - [](Map &m, const KeyType &k, const MappedType &v) { - auto it = m.find(k); - if (it != m.end()) it->second = v; - else m.emplace(k, v); - } - ); -} - -// Not copy-assignable, but still copy-constructible: we can update the value by erasing and reinserting -template -void map_assignment(enable_if_t< - !is_copy_assignable::value && - is_copy_constructible::value, - Class_> &cl) { - using KeyType = typename Map::key_type; - using MappedType = typename Map::mapped_type; - - cl.def("__setitem__", - [](Map &m, const KeyType &k, const MappedType &v) { - // We can't use m[k] = v; because value type might not be default constructable - auto r = m.emplace(k, v); - if (!r.second) { - // value type is not copy assignable so the only way to insert it is to erase it first... - m.erase(r.first); - m.emplace(k, v); - } - } - ); -} - - -template auto map_if_insertion_operator(Class_ &cl, std::string const &name) --> decltype(std::declval() << std::declval() << std::declval(), void()) { - - cl.def("__repr__", - [name](Map &m) { - std::ostringstream s; - s << name << '{'; - bool f = false; - for (auto const &kv : m) { - if (f) - s << ", "; - s << kv.first << ": " << kv.second; - f = true; - } - s << '}'; - return s.str(); - }, - "Return the canonical string representation of this map." - ); -} - - -PYBIND11_NAMESPACE_END(detail) - -template , typename... Args> -class_ bind_map(handle scope, const std::string &name, Args&&... args) { - using KeyType = typename Map::key_type; - using MappedType = typename Map::mapped_type; - using Class_ = class_; - - // If either type is a non-module-local bound type then make the map binding non-local as well; - // otherwise (e.g. both types are either module-local or converting) the map will be - // module-local. 
- auto tinfo = detail::get_type_info(typeid(MappedType)); - bool local = !tinfo || tinfo->module_local; - if (local) { - tinfo = detail::get_type_info(typeid(KeyType)); - local = !tinfo || tinfo->module_local; - } - - Class_ cl(scope, name.c_str(), pybind11::module_local(local), std::forward(args)...); - - cl.def(init<>()); - - // Register stream insertion operator (if possible) - detail::map_if_insertion_operator(cl, name); - - cl.def("__bool__", - [](const Map &m) -> bool { return !m.empty(); }, - "Check whether the map is nonempty" - ); - - cl.def("__iter__", - [](Map &m) { return make_key_iterator(m.begin(), m.end()); }, - keep_alive<0, 1>() /* Essential: keep list alive while iterator exists */ - ); - - cl.def("items", - [](Map &m) { return make_iterator(m.begin(), m.end()); }, - keep_alive<0, 1>() /* Essential: keep list alive while iterator exists */ - ); - - cl.def("__getitem__", - [](Map &m, const KeyType &k) -> MappedType & { - auto it = m.find(k); - if (it == m.end()) - throw key_error(); - return it->second; - }, - return_value_policy::reference_internal // ref + keepalive - ); - - cl.def("__contains__", - [](Map &m, const KeyType &k) -> bool { - auto it = m.find(k); - if (it == m.end()) - return false; - return true; - } - ); - - // Assignment provided only if the type is copyable - detail::map_assignment(cl); - - cl.def("__delitem__", - [](Map &m, const KeyType &k) { - auto it = m.find(k); - if (it == m.end()) - throw key_error(); - m.erase(it); - } - ); - - cl.def("__len__", &Map::size); - - return cl; -} - -PYBIND11_NAMESPACE_END(PYBIND11_NAMESPACE) diff --git a/spaces/CVPR/LIVE/pydiffvg/save_svg.py b/spaces/CVPR/LIVE/pydiffvg/save_svg.py deleted file mode 100644 index 7f5641a63849cfec25fa2f560d50e92dc78576c3..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/pydiffvg/save_svg.py +++ /dev/null @@ -1,167 +0,0 @@ -import torch -import pydiffvg -import xml.etree.ElementTree as etree -from xml.dom import minidom -def prettify(elem): - """Return a pretty-printed XML string for the Element. 
- """ - rough_string = etree.tostring(elem, 'utf-8') - reparsed = minidom.parseString(rough_string) - return reparsed.toprettyxml(indent=" ") -def save_svg(filename, width, height, shapes, shape_groups, use_gamma = False, background=None): - root = etree.Element('svg') - root.set('version', '1.1') - root.set('xmlns', 'http://www.w3.org/2000/svg') - root.set('width', str(width)) - root.set('height', str(height)) - if background is not None: - print(f"setting background to {background}") - root.set('style', str(background)) - defs = etree.SubElement(root, 'defs') - g = etree.SubElement(root, 'g') - if use_gamma: - f = etree.SubElement(defs, 'filter') - f.set('id', 'gamma') - f.set('x', '0') - f.set('y', '0') - f.set('width', '100%') - f.set('height', '100%') - gamma = etree.SubElement(f, 'feComponentTransfer') - gamma.set('color-interpolation-filters', 'sRGB') - feFuncR = etree.SubElement(gamma, 'feFuncR') - feFuncR.set('type', 'gamma') - feFuncR.set('amplitude', str(1)) - feFuncR.set('exponent', str(1/2.2)) - feFuncG = etree.SubElement(gamma, 'feFuncG') - feFuncG.set('type', 'gamma') - feFuncG.set('amplitude', str(1)) - feFuncG.set('exponent', str(1/2.2)) - feFuncB = etree.SubElement(gamma, 'feFuncB') - feFuncB.set('type', 'gamma') - feFuncB.set('amplitude', str(1)) - feFuncB.set('exponent', str(1/2.2)) - feFuncA = etree.SubElement(gamma, 'feFuncA') - feFuncA.set('type', 'gamma') - feFuncA.set('amplitude', str(1)) - feFuncA.set('exponent', str(1/2.2)) - g.set('style', 'filter:url(#gamma)') - # Store color - for i, shape_group in enumerate(shape_groups): - def add_color(shape_color, name): - if isinstance(shape_color, pydiffvg.LinearGradient): - lg = shape_color - color = etree.SubElement(defs, 'linearGradient') - color.set('id', name) - color.set('x1', str(lg.begin[0].item()/width)) - color.set('y1', str(lg.begin[1].item()/height)) - color.set('x2', str(lg.end[0].item()/width)) - color.set('y2', str(lg.end[1].item()/height)) - offsets = lg.offsets.data.cpu().numpy() - stop_colors = lg.stop_colors.data.cpu().numpy() - for j in range(offsets.shape[0]): - stop = etree.SubElement(color, 'stop') - stop.set('offset', str(offsets[j])) - c = lg.stop_colors[j, :] - stop.set('stop-color', 'rgb({}, {}, {})'.format(\ - int(255 * c[0]), int(255 * c[1]), int(255 * c[2]))) - stop.set('stop-opacity', '{}'.format(c[3])) - if isinstance(shape_color, pydiffvg.RadialGradient): - lg = shape_color - color = etree.SubElement(defs, 'radialGradient') - color.set('id', name) - color.set('cx', str(lg.center[0].item()/width)) - color.set('cy', str(lg.center[1].item()/height)) - # this only support width=height - color.set('r', str(lg.radius[0].item()/width)) - offsets = lg.offsets.data.cpu().numpy() - stop_colors = lg.stop_colors.data.cpu().numpy() - for j in range(offsets.shape[0]): - stop = etree.SubElement(color, 'stop') - stop.set('offset', str(offsets[j])) - c = lg.stop_colors[j, :] - stop.set('stop-color', 'rgb({}, {}, {})'.format(\ - int(255 * c[0]), int(255 * c[1]), int(255 * c[2]))) - stop.set('stop-opacity', '{}'.format(c[3])) - if shape_group.fill_color is not None: - add_color(shape_group.fill_color, 'shape_{}_fill'.format(i)) - if shape_group.stroke_color is not None: - add_color(shape_group.stroke_color, 'shape_{}_stroke'.format(i)) - for i, shape_group in enumerate(shape_groups): - shape = shapes[shape_group.shape_ids[0]] - if isinstance(shape, pydiffvg.Circle): - shape_node = etree.SubElement(g, 'circle') - shape_node.set('r', str(shape.radius.item())) - shape_node.set('cx', 
str(shape.center[0].item())) - shape_node.set('cy', str(shape.center[1].item())) - elif isinstance(shape, pydiffvg.Polygon): - shape_node = etree.SubElement(g, 'polygon') - points = shape.points.data.cpu().numpy() - path_str = '' - for j in range(0, shape.points.shape[0]): - path_str += '{} {}'.format(points[j, 0], points[j, 1]) - if j != shape.points.shape[0] - 1: - path_str += ' ' - shape_node.set('points', path_str) - elif isinstance(shape, pydiffvg.Path): - shape_node = etree.SubElement(g, 'path') - num_segments = shape.num_control_points.shape[0] - num_control_points = shape.num_control_points.data.cpu().numpy() - points = shape.points.data.cpu().numpy() - num_points = shape.points.shape[0] - path_str = 'M {} {}'.format(points[0, 0], points[0, 1]) - point_id = 1 - for j in range(0, num_segments): - if num_control_points[j] == 0: - p = point_id % num_points - path_str += ' L {} {}'.format(\ - points[p, 0], points[p, 1]) - point_id += 1 - elif num_control_points[j] == 1: - p1 = (point_id + 1) % num_points - path_str += ' Q {} {} {} {}'.format(\ - points[point_id, 0], points[point_id, 1], - points[p1, 0], points[p1, 1]) - point_id += 2 - elif num_control_points[j] == 2: - p2 = (point_id + 2) % num_points - path_str += ' C {} {} {} {} {} {}'.format(\ - points[point_id, 0], points[point_id, 1], - points[point_id + 1, 0], points[point_id + 1, 1], - points[p2, 0], points[p2, 1]) - point_id += 3 - shape_node.set('d', path_str) - elif isinstance(shape, pydiffvg.Rect): - shape_node = etree.SubElement(g, 'rect') - shape_node.set('x', str(shape.p_min[0].item())) - shape_node.set('y', str(shape.p_min[1].item())) - shape_node.set('width', str(shape.p_max[0].item() - shape.p_min[0].item())) - shape_node.set('height', str(shape.p_max[1].item() - shape.p_min[1].item())) - else: - assert(False) - shape_node.set('stroke-width', str(2 * shape.stroke_width.data.cpu().item())) - if shape_group.fill_color is not None: - if isinstance(shape_group.fill_color, pydiffvg.LinearGradient): - shape_node.set('fill', 'url(#shape_{}_fill)'.format(i)) - elif isinstance(shape_group.fill_color, pydiffvg.RadialGradient): - shape_node.set('fill', 'url(#shape_{}_fill)'.format(i)) - else: - c = shape_group.fill_color.data.cpu().numpy() - shape_node.set('fill', 'rgb({}, {}, {})'.format(\ - int(255 * c[0]), int(255 * c[1]), int(255 * c[2]))) - shape_node.set('opacity', str(c[3])) - else: - shape_node.set('fill', 'none') - if shape_group.stroke_color is not None: - if isinstance(shape_group.stroke_color, pydiffvg.LinearGradient): - shape_node.set('stroke', 'url(#shape_{}_stroke)'.format(i)) - elif isinstance(shape_group.stroke_color, pydiffvg.LinearGradient): - shape_node.set('stroke', 'url(#shape_{}_stroke)'.format(i)) - else: - c = shape_group.stroke_color.data.cpu().numpy() - shape_node.set('stroke', 'rgb({}, {}, {})'.format(\ - int(255 * c[0]), int(255 * c[1]), int(255 * c[2]))) - shape_node.set('stroke-opacity', str(c[3])) - shape_node.set('stroke-linecap', 'round') - shape_node.set('stroke-linejoin', 'round') - with open(filename, "w") as f: - f.write(prettify(root)) diff --git a/spaces/CVPR/WALT/mmdet/models/roi_heads/grid_roi_head.py b/spaces/CVPR/WALT/mmdet/models/roi_heads/grid_roi_head.py deleted file mode 100644 index 4c52c79863ebaf17bd023382c7e5d4c237b4da77..0000000000000000000000000000000000000000 --- a/spaces/CVPR/WALT/mmdet/models/roi_heads/grid_roi_head.py +++ /dev/null @@ -1,176 +0,0 @@ -import torch - -from mmdet.core import bbox2result, bbox2roi -from ..builder import HEADS, build_head, build_roi_extractor 
-from .standard_roi_head import StandardRoIHead - - -@HEADS.register_module() -class GridRoIHead(StandardRoIHead): - """Grid roi head for Grid R-CNN. - - https://arxiv.org/abs/1811.12030 - """ - - def __init__(self, grid_roi_extractor, grid_head, **kwargs): - assert grid_head is not None - super(GridRoIHead, self).__init__(**kwargs) - if grid_roi_extractor is not None: - self.grid_roi_extractor = build_roi_extractor(grid_roi_extractor) - self.share_roi_extractor = False - else: - self.share_roi_extractor = True - self.grid_roi_extractor = self.bbox_roi_extractor - self.grid_head = build_head(grid_head) - - def init_weights(self, pretrained): - """Initialize the weights in head. - - Args: - pretrained (str, optional): Path to pre-trained weights. - Defaults to None. - """ - super(GridRoIHead, self).init_weights(pretrained) - self.grid_head.init_weights() - if not self.share_roi_extractor: - self.grid_roi_extractor.init_weights() - - def _random_jitter(self, sampling_results, img_metas, amplitude=0.15): - """Ramdom jitter positive proposals for training.""" - for sampling_result, img_meta in zip(sampling_results, img_metas): - bboxes = sampling_result.pos_bboxes - random_offsets = bboxes.new_empty(bboxes.shape[0], 4).uniform_( - -amplitude, amplitude) - # before jittering - cxcy = (bboxes[:, 2:4] + bboxes[:, :2]) / 2 - wh = (bboxes[:, 2:4] - bboxes[:, :2]).abs() - # after jittering - new_cxcy = cxcy + wh * random_offsets[:, :2] - new_wh = wh * (1 + random_offsets[:, 2:]) - # xywh to xyxy - new_x1y1 = (new_cxcy - new_wh / 2) - new_x2y2 = (new_cxcy + new_wh / 2) - new_bboxes = torch.cat([new_x1y1, new_x2y2], dim=1) - # clip bboxes - max_shape = img_meta['img_shape'] - if max_shape is not None: - new_bboxes[:, 0::2].clamp_(min=0, max=max_shape[1] - 1) - new_bboxes[:, 1::2].clamp_(min=0, max=max_shape[0] - 1) - - sampling_result.pos_bboxes = new_bboxes - return sampling_results - - def forward_dummy(self, x, proposals): - """Dummy forward function.""" - # bbox head - outs = () - rois = bbox2roi([proposals]) - if self.with_bbox: - bbox_results = self._bbox_forward(x, rois) - outs = outs + (bbox_results['cls_score'], - bbox_results['bbox_pred']) - - # grid head - grid_rois = rois[:100] - grid_feats = self.grid_roi_extractor( - x[:self.grid_roi_extractor.num_inputs], grid_rois) - if self.with_shared_head: - grid_feats = self.shared_head(grid_feats) - grid_pred = self.grid_head(grid_feats) - outs = outs + (grid_pred, ) - - # mask head - if self.with_mask: - mask_rois = rois[:100] - mask_results = self._mask_forward(x, mask_rois) - outs = outs + (mask_results['mask_pred'], ) - return outs - - def _bbox_forward_train(self, x, sampling_results, gt_bboxes, gt_labels, - img_metas): - """Run forward function and calculate loss for box head in training.""" - bbox_results = super(GridRoIHead, - self)._bbox_forward_train(x, sampling_results, - gt_bboxes, gt_labels, - img_metas) - - # Grid head forward and loss - sampling_results = self._random_jitter(sampling_results, img_metas) - pos_rois = bbox2roi([res.pos_bboxes for res in sampling_results]) - - # GN in head does not support zero shape input - if pos_rois.shape[0] == 0: - return bbox_results - - grid_feats = self.grid_roi_extractor( - x[:self.grid_roi_extractor.num_inputs], pos_rois) - if self.with_shared_head: - grid_feats = self.shared_head(grid_feats) - # Accelerate training - max_sample_num_grid = self.train_cfg.get('max_num_grid', 192) - sample_idx = torch.randperm( - grid_feats.shape[0])[:min(grid_feats.shape[0], max_sample_num_grid - )] - 
grid_feats = grid_feats[sample_idx] - - grid_pred = self.grid_head(grid_feats) - - grid_targets = self.grid_head.get_targets(sampling_results, - self.train_cfg) - grid_targets = grid_targets[sample_idx] - - loss_grid = self.grid_head.loss(grid_pred, grid_targets) - - bbox_results['loss_bbox'].update(loss_grid) - return bbox_results - - def simple_test(self, - x, - proposal_list, - img_metas, - proposals=None, - rescale=False): - """Test without augmentation.""" - assert self.with_bbox, 'Bbox head must be implemented.' - - det_bboxes, det_labels = self.simple_test_bboxes( - x, img_metas, proposal_list, self.test_cfg, rescale=False) - # pack rois into bboxes - grid_rois = bbox2roi([det_bbox[:, :4] for det_bbox in det_bboxes]) - if grid_rois.shape[0] != 0: - grid_feats = self.grid_roi_extractor( - x[:len(self.grid_roi_extractor.featmap_strides)], grid_rois) - self.grid_head.test_mode = True - grid_pred = self.grid_head(grid_feats) - # split batch grid head prediction back to each image - num_roi_per_img = tuple(len(det_bbox) for det_bbox in det_bboxes) - grid_pred = { - k: v.split(num_roi_per_img, 0) - for k, v in grid_pred.items() - } - - # apply bbox post-processing to each image individually - bbox_results = [] - num_imgs = len(det_bboxes) - for i in range(num_imgs): - if det_bboxes[i].shape[0] == 0: - bbox_results.append(grid_rois.new_tensor([])) - else: - det_bbox = self.grid_head.get_bboxes( - det_bboxes[i], grid_pred['fused'][i], [img_metas[i]]) - if rescale: - det_bbox[:, :4] /= img_metas[i]['scale_factor'] - bbox_results.append( - bbox2result(det_bbox, det_labels[i], - self.bbox_head.num_classes)) - else: - bbox_results = [ - grid_rois.new_tensor([]) for _ in range(len(det_bboxes)) - ] - - if not self.with_mask: - return bbox_results - else: - segm_results = self.simple_test_mask( - x, img_metas, det_bboxes, det_labels, rescale=rescale) - return list(zip(bbox_results, segm_results)) diff --git a/spaces/CVPR/ml-talking-face/docs/article.md b/spaces/CVPR/ml-talking-face/docs/article.md deleted file mode 100644 index 3585b186e063476bf2de466177c8e560fcf1175d..0000000000000000000000000000000000000000 --- a/spaces/CVPR/ml-talking-face/docs/article.md +++ /dev/null @@ -1,23 +0,0 @@ - -## Why learn a new language, when your model can learn it for you? - -
    -
    - -
    -
    - -### Abstract - -Recent studies in talking face generation have focused on building a train-once-use-everywhere model i.e. a model that will generalize from any source speech to any target identity. A number of works have already claimed this functionality and have added that their models will also generalize to any language. However, we show, using languages from different language families, that these models do not translate well when the training language and the testing language are sufficiently different. We reduce the scope of the problem to building a language-robust talking face generation system on seen identities i.e. the target identity is the same as the training identity. In this work, we introduce a talking face generation system that will generalize to different languages. We evaluate the efficacy of our system using a multilingual text-to-speech system. We also discuss the usage of joint text-to-speech system and the talking face generation system as a neural dubber system. - -[CVPR Open Access](https://openaccess.thecvf.com/content/CVPR2022/html/Song_Talking_Face_Generation_With_Multilingual_TTS_CVPR_2022_paper.html) [arXiv](https://arxiv.org/abs/2205.06421) - -### News - -(2022.08.18.) We got the CVPR Hugging Face prize! Thank you all and special thanks to AK([@akhaliq](https://huggingface.co/akhaliq)). - -
    -we-got-huggingface-prize -
    \ No newline at end of file diff --git a/spaces/CVPR/unicl-zero-shot-img-recog/model/text_encoder/transformer.py b/spaces/CVPR/unicl-zero-shot-img-recog/model/text_encoder/transformer.py deleted file mode 100644 index 25e00ca7a0b5899869857d1e1d4e8afc91665f8c..0000000000000000000000000000000000000000 --- a/spaces/CVPR/unicl-zero-shot-img-recog/model/text_encoder/transformer.py +++ /dev/null @@ -1,194 +0,0 @@ -from collections import OrderedDict -from typing import Tuple, Union -import logging -import os - -import numpy as np -import torch -import torch.nn.functional as F -from torch import nn - -from timm.models.layers import DropPath, trunc_normal_ - -from .registry import register_lang_encoder - -logger = logging.getLogger(__name__) - -class LayerNorm(nn.Module): - def __init__(self, hidden_size, eps=1e-12): - """Construct a layernorm module in the TF style (epsilon inside the square root). - """ - super(LayerNorm, self).__init__() - self.weight = nn.Parameter(torch.ones(hidden_size)) - self.bias = nn.Parameter(torch.zeros(hidden_size)) - self.variance_epsilon = eps - - def forward(self, x): - pdtype = x.dtype - x = x.float() - u = x.mean(-1, keepdim=True) - s = (x - u).pow(2).mean(-1, keepdim=True) - x = (x - u) / torch.sqrt(s + self.variance_epsilon) - return self.weight * x.to(pdtype) + self.bias - - -class QuickGELU(nn.Module): - def forward(self, x: torch.Tensor): - return x * torch.sigmoid(1.702 * x) - - -class ResidualAttentionBlock(nn.Module): - def __init__(self, - d_model: int, - n_head: int, - attn_mask: torch.Tensor = None, - drop_path: float = 0.0): - super().__init__() - - self.attn = nn.MultiheadAttention(d_model, n_head) - self.ln_1 = LayerNorm(d_model) - self.mlp = nn.Sequential(OrderedDict([ - ("c_fc", nn.Linear(d_model, d_model * 4)), - ("gelu", QuickGELU()), - ("c_proj", nn.Linear(d_model * 4, d_model)) - ])) - self.ln_2 = LayerNorm(d_model) - self.attn_mask = attn_mask - self.drop_path = DropPath(drop_path) if drop_path > 0. 
else nn.Identity() - - def attention(self, x: torch.Tensor, key_padding_mask: torch.Tensor = None): - self.attn_mask = self.attn_mask.to(dtype=x.dtype, device=x.device) \ - if self.attn_mask is not None else None - - - return self.attn( - x, x, x, - key_padding_mask=key_padding_mask, - need_weights=False, - attn_mask=self.attn_mask - )[0] - - def forward(self, x: torch.Tensor, key_padding_mask: torch.Tensor = None): - x = x + self.drop_path(self.attention(self.ln_1(x), key_padding_mask=key_padding_mask)) - x = x + self.drop_path(self.mlp(self.ln_2(x))) - return x - - -class Transformer(nn.Module): - def __init__(self, - context_length: int, - vocab_size: int, - width: int, - layers: int, - heads: int, - drop_path: float = 0.0, - autogressive: bool =True): - super().__init__() - - self.token_embedding = nn.Embedding(vocab_size, width) - - self.context_length = context_length - self.positional_embedding = nn.Parameter( - torch.empty(self.context_length, width) - ) - - self.width = width - self.layers = layers - self.autogressive = autogressive - attn_mask = self.build_attention_mask() if autogressive else None - dpr = [x.item() for x in torch.linspace(0, drop_path, layers)] # stochastic depth decay rule - self.resblocks = nn.ModuleList( - [ - ResidualAttentionBlock(width, heads, attn_mask, dpr[i]) - for i in range(layers) - ] - ) - - self.ln_final = LayerNorm(width) - - trunc_normal_(self.positional_embedding, std=.02) - # nn.init.normal_(self.token_embedding, std=.02) - trunc_normal_(self.token_embedding.weight, std=.02) - self.apply(self._init_weights) - - @property - def dim_out(self): - return self.width - - def build_attention_mask(self): - # lazily create causal attention mask, with full attention between the vision tokens - # pytorch uses additive attention mask; fill with -inf - mask = torch.empty(self.context_length, self.context_length) - mask.fill_(float("-inf")) - mask.triu_(1) # zero out the lower diagonal - return mask - - def _init_weights(self, m): - if isinstance(m, (nn.Linear, nn.Conv2d)): - logger.info('=> init weight of Linear/Conv2d from trunc norm') - trunc_normal_(m.weight, std=0.02) - if m.bias is not None: - logger.info('=> init bias of Linear/Conv2d to zeros') - nn.init.constant_(m.bias, 0) - elif isinstance(m, (nn.LayerNorm, nn.BatchNorm2d)): - nn.init.constant_(m.bias, 0) - - def load_pretrained(self, pretrained='', pretrained_layers=[], verbose=True): - if os.path.isfile(pretrained): - pretrained_dict = torch.load(pretrained, map_location='cpu') - logging.info(f'=> loading pretrained model {pretrained}') - model_dict = self.state_dict() - pretrained_dict = { - k: v for k, v in pretrained_dict.items() - if k in model_dict.keys() - } - need_init_state_dict = {} - for k, v in pretrained_dict.items(): - need_init = ( - k.split('.')[0] in pretrained_layers - or pretrained_layers[0] == '*' - ) - if need_init: - if verbose: - logging.info(f'=> init {k} from {pretrained}') - - need_init_state_dict[k] = v - self.load_state_dict(need_init_state_dict, strict=False) - - - @torch.jit.ignore - def no_weight_decay(self): - return { - 'positional_embedding', - 'token_embedding', - } - - def forward(self, input_ids, attention_mask=None): - key_padding_mask = (input_ids == 0) if not self.autogressive else None - x = self.token_embedding(input_ids) # [batch_size, n_ctx, d_model] - x = x + self.positional_embedding - x = x.permute(1, 0, 2) # NLD -> LND - for block in self.resblocks: - x = block(x, key_padding_mask) - x = x.permute(1, 0, 2) # LND -> NLD - - x = self.ln_final(x) - - 
return {'last_hidden_state': x} - - -@register_lang_encoder -def lang_encoder(config_encoder, tokenizer, verbose, **kwargs): - transformer = Transformer( - context_length=config_encoder['CONTEXT_LENGTH'], - vocab_size=tokenizer.vocab_size, - width=config_encoder['WIDTH'], - layers=config_encoder['LAYERS'], - heads=config_encoder['HEADS'], - autogressive=config_encoder.get('AUTOGRESSIVE', True) - ) - - if config_encoder['LOAD_PRETRAINED']: - transformer.load_pretrained() - - return transformer diff --git a/spaces/CarlDennis/HYTTS/text/english.py b/spaces/CarlDennis/HYTTS/text/english.py deleted file mode 100644 index a5528f5988864b36ffc2ebdd80622ca4f6992be2..0000000000000000000000000000000000000000 --- a/spaces/CarlDennis/HYTTS/text/english.py +++ /dev/null @@ -1,191 +0,0 @@ -""" from https://github.com/keithito/tacotron """ - -''' -Cleaners are transformations that run over the input text at both training and eval time. - -Cleaners can be selected by passing a comma-delimited list of cleaner names as the "cleaners" -hyperparameter. Some cleaners are English-specific. You'll typically want to use: - 1. "english_cleaners" for English text - 2. "transliteration_cleaners" for non-English text that can be transliterated to ASCII using - the Unidecode library (https://pypi.python.org/pypi/Unidecode) - 3. "basic_cleaners" if you do not want to transliterate (in this case, you should also update - the symbols in symbols.py to match your data). -''' - - -# Regular expression matching whitespace: - - -import re -import inflect -from unidecode import unidecode -import eng_to_ipa as ipa -_inflect = inflect.engine() -_comma_number_re = re.compile(r'([0-9][0-9\,]+[0-9])') -_decimal_number_re = re.compile(r'([0-9]+\.[0-9]+)') -_pounds_re = re.compile(r'£([0-9\,]*[0-9]+)') -_dollars_re = re.compile(r'\$([0-9\.\,]*[0-9]+)') -_ordinal_re = re.compile(r'[0-9]+(st|nd|rd|th)') -_number_re = re.compile(r'[0-9]+') - -# List of (regular expression, replacement) pairs for abbreviations: -_abbreviations = [(re.compile('\\b%s\\.' 
% x[0], re.IGNORECASE), x[1]) for x in [ - ('mrs', 'misess'), - ('mr', 'mister'), - ('dr', 'doctor'), - ('st', 'saint'), - ('co', 'company'), - ('jr', 'junior'), - ('maj', 'major'), - ('gen', 'general'), - ('drs', 'doctors'), - ('rev', 'reverend'), - ('lt', 'lieutenant'), - ('hon', 'honorable'), - ('sgt', 'sergeant'), - ('capt', 'captain'), - ('esq', 'esquire'), - ('ltd', 'limited'), - ('col', 'colonel'), - ('ft', 'fort'), -]] - - -# List of (ipa, lazy ipa) pairs: -_lazy_ipa = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('r', 'ɹ'), - ('æ', 'e'), - ('ɑ', 'a'), - ('ɔ', 'o'), - ('ð', 'z'), - ('θ', 's'), - ('ɛ', 'e'), - ('ɪ', 'i'), - ('ʊ', 'u'), - ('ʒ', 'ʥ'), - ('ʤ', 'ʥ'), - ('ˈ', '↓'), -]] - -# List of (ipa, lazy ipa2) pairs: -_lazy_ipa2 = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('r', 'ɹ'), - ('ð', 'z'), - ('θ', 's'), - ('ʒ', 'ʑ'), - ('ʤ', 'dʑ'), - ('ˈ', '↓'), - ('ɑ', 'a'), -]] - -# List of (ipa, ipa2) pairs -_ipa_to_ipa2 = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('r', 'ɹ'), - ('ʤ', 'dʒ'), - ('ʧ', 'tʃ'), - ('ɑ', 'a'), -]] - - -def expand_abbreviations(text): - for regex, replacement in _abbreviations: - text = re.sub(regex, replacement, text) - return text - - -def collapse_whitespace(text): - return re.sub(r'\s+', ' ', text) - - -def _remove_commas(m): - return m.group(1).replace(',', '') - - -def _expand_decimal_point(m): - return m.group(1).replace('.', ' point ') - - -def _expand_dollars(m): - match = m.group(1) - parts = match.split('.') - if len(parts) > 2: - return match + ' dollars' # Unexpected format - dollars = int(parts[0]) if parts[0] else 0 - cents = int(parts[1]) if len(parts) > 1 and parts[1] else 0 - if dollars and cents: - dollar_unit = 'dollar' if dollars == 1 else 'dollars' - cent_unit = 'cent' if cents == 1 else 'cents' - return '%s %s, %s %s' % (dollars, dollar_unit, cents, cent_unit) - elif dollars: - dollar_unit = 'dollar' if dollars == 1 else 'dollars' - return '%s %s' % (dollars, dollar_unit) - elif cents: - cent_unit = 'cent' if cents == 1 else 'cents' - return '%s %s' % (cents, cent_unit) - else: - return 'zero dollars' - - -def _expand_ordinal(m): - return _inflect.number_to_words(m.group(0)) - - -def _expand_number(m): - num = int(m.group(0)) - if num > 1000 and num < 3000: - if num == 2000: - return 'two thousand' - elif num > 2000 and num < 2010: - return 'two thousand ' + _inflect.number_to_words(num % 100) - elif num % 100 == 0: - return _inflect.number_to_words(num // 100) + ' hundred' - else: - return _inflect.number_to_words(num, andword='', zero='oh', group=2).replace(', ', ' ') - else: - return _inflect.number_to_words(num, andword='') - - -def normalize_numbers(text): - text = re.sub(_comma_number_re, _remove_commas, text) - text = re.sub(_pounds_re, r'\1 pounds', text) - text = re.sub(_dollars_re, _expand_dollars, text) - text = re.sub(_decimal_number_re, _expand_decimal_point, text) - text = re.sub(_ordinal_re, _expand_ordinal, text) - text = re.sub(_number_re, _expand_number, text) - return text - - -def mark_dark_l(text): - return re.sub(r'l([^aeiouæɑɔəɛɪʊ ]*(?: |$))', lambda x: 'ɫ'+x.group(1), text) - - -def english_to_ipa(text): - text = unidecode(text).lower() - text = expand_abbreviations(text) - text = normalize_numbers(text) - phonemes = ipa.convert(text) - phonemes = collapse_whitespace(phonemes) - return phonemes - - -def english_to_lazy_ipa(text): - text = english_to_ipa(text) - for regex, replacement in _lazy_ipa: - text = re.sub(regex, replacement, text) - return text - - -def english_to_ipa2(text): - text = 
english_to_ipa(text) - text = mark_dark_l(text) - for regex, replacement in _ipa_to_ipa2: - text = re.sub(regex, replacement, text) - return text.replace('...', '…') - - -def english_to_lazy_ipa2(text): - text = english_to_ipa(text) - for regex, replacement in _lazy_ipa2: - text = re.sub(regex, replacement, text) - return text - diff --git a/spaces/ChallengeHub/Chinese-LangChain/corpus/zh_wikipedia/chinese_t2s.py b/spaces/ChallengeHub/Chinese-LangChain/corpus/zh_wikipedia/chinese_t2s.py deleted file mode 100644 index ea2541538912d3bb145a640d43a53de6fa2e8320..0000000000000000000000000000000000000000 --- a/spaces/ChallengeHub/Chinese-LangChain/corpus/zh_wikipedia/chinese_t2s.py +++ /dev/null @@ -1,82 +0,0 @@ -#!/usr/bin/env python -# -*- coding:utf-8 _*- -""" -@author:quincy qiang -@license: Apache Licence -@file: chinese_t2s.py.py -@time: 2023/04/19 -@contact: yanqiangmiffy@gamil.com -@software: PyCharm -@description: coding.. -""" -import sys -import os -import opencc -from optparse import OptionParser - - -class T2S(object): - def __init__(self, infile, outfile): - self.infile = infile - self.outfile = outfile - self.cc = opencc.OpenCC('t2s') - self.t_corpus = [] - self.s_corpus = [] - self.read(self.infile) - self.t2s() - self.write(self.s_corpus, self.outfile) - - def read(self, path): - print(path) - if os.path.isfile(path) is False: - print("path is not a file") - exit() - now_line = 0 - with open(path, encoding="UTF-8") as f: - for line in f: - now_line += 1 - line = line.replace("\n", "").replace("\t", "") - self.t_corpus.append(line) - print("read finished") - - def t2s(self): - now_line = 0 - all_line = len(self.t_corpus) - for line in self.t_corpus: - now_line += 1 - if now_line % 1000 == 0: - sys.stdout.write("\rhandling with the {} line, all {} lines.".format(now_line, all_line)) - self.s_corpus.append(self.cc.convert(line)) - sys.stdout.write("\rhandling with the {} line, all {} lines.".format(now_line, all_line)) - print("\nhandling finished") - - def write(self, list, path): - print("writing now......") - if os.path.exists(path): - os.remove(path) - file = open(path, encoding="UTF-8", mode="w") - for line in list: - file.writelines(line + "\n") - file.close() - print("writing finished.") - - -if __name__ == "__main__": - print("Traditional Chinese to Simplified Chinese") - # input = "./wiki_zh_10.txt" - # output = "wiki_zh_10_sim.txt" - # T2S(infile=input, outfile=output) - - parser = OptionParser() - parser.add_option("--input", dest="input", default="", help="traditional file") - parser.add_option("--output", dest="output", default="", help="simplified file") - (options, args) = parser.parse_args() - - input = options.input - output = options.output - - try: - T2S(infile=input, outfile=output) - print("All Finished.") - except Exception as err: - print(err) \ No newline at end of file diff --git a/spaces/ChatGPT-GAIA/GAIA-GPT/backupapp.py b/spaces/ChatGPT-GAIA/GAIA-GPT/backupapp.py deleted file mode 100644 index bf9810297b55a225da3f7275c1a0bc72dcc075e4..0000000000000000000000000000000000000000 --- a/spaces/ChatGPT-GAIA/GAIA-GPT/backupapp.py +++ /dev/null @@ -1,209 +0,0 @@ -import gradio as gr -import os -import json -import requests - -#Streaming endpoint -API_URL = "https://api.openai.com/v1/chat/completions" #os.getenv("API_URL") + "/generate_stream" -OPENAI_API_KEY= os.environ["HF_TOKEN"] # Add a token to this space . Then copy it to the repository secret in this spaces settings panel. os.environ reads from there. 
-# Keys for Open AI ChatGPT API usage are created from here: https://platform.openai.com/account/api-keys - -def predict(inputs, top_p, temperature, chat_counter, chatbot=[], history=[]): #repetition_penalty, top_k - - # 1. Set up a payload - payload = { - "model": "gpt-3.5-turbo", - "messages": [{"role": "user", "content": f"{inputs}"}], - "temperature" : 1.0, - "top_p":1.0, - "n" : 1, - "stream": True, - "presence_penalty":0, - "frequency_penalty":0, - } - - # 2. Define your headers and add a key from https://platform.openai.com/account/api-keys - headers = { - "Content-Type": "application/json", - "Authorization": f"Bearer {OPENAI_API_KEY}" - } - - # 3. Create a chat counter loop that feeds [Predict next best anything based on last input and attention with memory defined by introspective attention over time] - print(f"chat_counter - {chat_counter}") - if chat_counter != 0 : - messages=[] - for data in chatbot: - temp1 = {} - temp1["role"] = "user" - temp1["content"] = data[0] - temp2 = {} - temp2["role"] = "assistant" - temp2["content"] = data[1] - messages.append(temp1) - messages.append(temp2) - temp3 = {} - temp3["role"] = "user" - temp3["content"] = inputs - messages.append(temp3) - payload = { - "model": "gpt-3.5-turbo", - "messages": messages, #[{"role": "user", "content": f"{inputs}"}], - "temperature" : temperature, #1.0, - "top_p": top_p, #1.0, - "n" : 1, - "stream": True, - "presence_penalty":0, - "frequency_penalty":0, - } - chat_counter+=1 - - # 4. POST it to OPENAI API - history.append(inputs) - print(f"payload is - {payload}") - response = requests.post(API_URL, headers=headers, json=payload, stream=True) - token_counter = 0 - partial_words = "" - - # 5. Iterate through response lines and structure readable response - counter=0 - for chunk in response.iter_lines(): - if counter == 0: - counter+=1 - continue - if chunk.decode() : - chunk = chunk.decode() - if len(chunk) > 12 and "content" in json.loads(chunk[6:])['choices'][0]['delta']: - partial_words = partial_words + json.loads(chunk[6:])['choices'][0]["delta"]["content"] - if token_counter == 0: - history.append(" " + partial_words) - else: - history[-1] = partial_words - chat = [(history[i], history[i + 1]) for i in range(0, len(history) - 1, 2) ] # convert to tuples of list - token_counter+=1 - yield chat, history, chat_counter - - -def reset_textbox(): - return gr.update(value='') - - - - -# Episodic and Semantic IO -def list_files(file_path): - import os - icon_csv = "📄 " - icon_txt = "📑 " - current_directory = os.getcwd() - file_list = [] - for filename in os.listdir(current_directory): - if filename.endswith(".csv"): - file_list.append(icon_csv + filename) - elif filename.endswith(".txt"): - file_list.append(icon_txt + filename) - if file_list: - return "\n".join(file_list) - else: - return "No .csv or .txt files found in the current directory." - -# Function to read a file -def read_file(file_path): - try: - with open(file_path, "r") as file: - contents = file.read() - return f"{contents}" - #return f"Contents of {file_path}:\n{contents}" - except FileNotFoundError: - return "File not found." - -# Function to delete a file -def delete_file(file_path): - try: - import os - os.remove(file_path) - return f"{file_path} has been deleted." - except FileNotFoundError: - return "File not found." - -# Function to write to a file -def write_file(file_path, content): - try: - with open(file_path, "w") as file: - file.write(content) - return f"Successfully written to {file_path}." 
- except: - return "Error occurred while writing to file." - -# Function to append to a file -def append_file(file_path, content): - try: - with open(file_path, "a") as file: - file.write(content) - return f"Successfully appended to {file_path}." - except: - return "Error occurred while appending to file." - - -title = """

    Memory Chat Story Generator ChatGPT

    """ -description = """ -## ChatGPT Datasets 📚 -- WebText -- Common Crawl -- BooksCorpus -- English Wikipedia -- Toronto Books Corpus -- OpenWebText -## ChatGPT Datasets - Details 📚 -- **WebText:** A dataset of web pages crawled from domains on the Alexa top 5,000 list. This dataset was used to pretrain GPT-2. - - [WebText: A Large-Scale Unsupervised Text Corpus by Radford et al.](https://paperswithcode.com/dataset/webtext) -- **Common Crawl:** A dataset of web pages from a variety of domains, which is updated regularly. This dataset was used to pretrain GPT-3. - - [Language Models are Few-Shot Learners](https://paperswithcode.com/dataset/common-crawl) by Brown et al. -- **BooksCorpus:** A dataset of over 11,000 books from a variety of genres. - - [Scalable Methods for 8 Billion Token Language Modeling](https://paperswithcode.com/dataset/bookcorpus) by Zhu et al. -- **English Wikipedia:** A dump of the English-language Wikipedia as of 2018, with articles from 2001-2017. - - [Improving Language Understanding by Generative Pre-Training](https://huggingface.co/spaces/awacke1/WikipediaUltimateAISearch?logs=build) Space for Wikipedia Search -- **Toronto Books Corpus:** A dataset of over 7,000 books from a variety of genres, collected by the University of Toronto. - - [Massively Multilingual Sentence Embeddings for Zero-Shot Cross-Lingual Transfer and Beyond](https://paperswithcode.com/dataset/bookcorpus) by Schwenk and Douze. -- **OpenWebText:** A dataset of web pages that were filtered to remove content that was likely to be low-quality or spammy. This dataset was used to pretrain GPT-3. - - [Language Models are Few-Shot Learners](https://paperswithcode.com/dataset/openwebtext) by Brown et al. - """ - -# 6. Use Gradio to pull it all together -with gr.Blocks(css = """#col_container {width: 1400px; margin-left: auto; margin-right: auto;} #chatbot {height: 600px; overflow: auto;}""") as demo: - gr.HTML(title) - with gr.Column(elem_id = "col_container"): - inputs = gr.Textbox(placeholder= "Hi there!", label= "Type an input and press Enter") - chatbot = gr.Chatbot(elem_id='chatbot') - state = gr.State([]) - b1 = gr.Button() - with gr.Accordion("Parameters", open=False): - top_p = gr.Slider( minimum=-0, maximum=1.0, value=1.0, step=0.05, interactive=True, label="Top-p (nucleus sampling)",) - temperature = gr.Slider( minimum=-0, maximum=5.0, value=1.0, step=0.1, interactive=True, label="Temperature",) - chat_counter = gr.Number(value=0, visible=True, precision=0) - - - # Episodic/Semantic IO - fileName = gr.Textbox(label="Filename") - fileContent = gr.TextArea(label="File Content") - completedMessage = gr.Textbox(label="Completed") - label = gr.Label() - with gr.Row(): - listFiles = gr.Button("📄 List File(s)") - readFile = gr.Button("📖 Read File") - saveFile = gr.Button("💾 Save File") - deleteFile = gr.Button("🗑️ Delete File") - appendFile = gr.Button("➕ Append File") - listFiles.click(list_files, inputs=fileName, outputs=fileContent) - readFile.click(read_file, inputs=fileName, outputs=fileContent) - saveFile.click(write_file, inputs=[fileName, fileContent], outputs=completedMessage) - deleteFile.click(delete_file, inputs=fileName, outputs=completedMessage) - appendFile.click(append_file, inputs=[fileName, fileContent], outputs=completedMessage ) - - - inputs.submit(predict, [inputs, top_p, temperature,chat_counter, chatbot, state], [chatbot, state, chat_counter]) - b1.click(predict, [inputs, top_p, temperature, chat_counter, chatbot, state], [chatbot, state, chat_counter]) - 
b1.click(reset_textbox, [], [inputs]) - inputs.submit(reset_textbox, [], [inputs]) - gr.Markdown(description) - - demo.queue().launch(debug=True) \ No newline at end of file diff --git a/spaces/CikeyQI/meme-api/meme_generator/memes/5000choyen/__init__.py b/spaces/CikeyQI/meme-api/meme_generator/memes/5000choyen/__init__.py deleted file mode 100644 index 2f25662279cef91e9c425fbd9f488b0cc3bf3939..0000000000000000000000000000000000000000 --- a/spaces/CikeyQI/meme-api/meme_generator/memes/5000choyen/__init__.py +++ /dev/null @@ -1,198 +0,0 @@ -from typing import List, Tuple - -from PIL.Image import Image as IMG -from PIL.Image import Resampling, Transform -from pil_utils import BuildImage, Text2Image -from pil_utils.gradient import ColorStop, LinearGradient - -from meme_generator import add_meme - - -def fivethousand_choyen(images, texts: List[str], args): - fontsize = 200 - fontname = "Noto Sans SC" - text = texts[0] - pos_x = 40 - pos_y = 220 - imgs: List[Tuple[IMG, Tuple[int, int]]] = [] - - def transform(img: IMG) -> IMG: - skew = 0.45 - dw = round(img.height * skew) - return img.transform( - (img.width + dw, img.height), - Transform.AFFINE, - (1, skew, -dw, 0, 1, 0), - Resampling.BILINEAR, - ) - - def shift(t2m: Text2Image) -> Tuple[int, int]: - return ( - pos_x - - t2m.lines[0].chars[0].stroke_width - - max(char.stroke_width for char in t2m.lines[0].chars), - pos_y - t2m.lines[0].ascent, - ) - - def add_color_text(stroke_width: int, fill: str, pos: Tuple[int, int]): - t2m = Text2Image.from_text( - text, fontsize, fontname=fontname, stroke_width=stroke_width, fill=fill - ) - dx, dy = shift(t2m) - imgs.append((transform(t2m.to_image()), (dx + pos[0], dy + pos[1]))) - - def add_gradient_text( - stroke_width: int, - dir: Tuple[int, int, int, int], - color_stops: List[Tuple[float, Tuple[int, int, int]]], - pos: Tuple[int, int], - ): - t2m = Text2Image.from_text( - text, fontsize, fontname=fontname, stroke_width=stroke_width, fill="white" - ) - mask = transform(t2m.to_image()).convert("L") - dx, dy = shift(t2m) - gradient = LinearGradient( - (dir[0] - dx, dir[1] - dy, dir[2] - dx, dir[3] - dy), - [ColorStop(*color_stop) for color_stop in color_stops], - ) - bg = gradient.create_image(mask.size) - bg.putalpha(mask) - imgs.append((bg, (dx + pos[0], dy + pos[1]))) - - # 黑 - add_color_text(22, "black", (8, 8)) - # 银 - add_gradient_text( - 20, - (0, 38, 0, 234), - [ - (0.0, (0, 15, 36)), - (0.1, (255, 255, 255)), - (0.18, (55, 58, 59)), - (0.25, (55, 58, 59)), - (0.5, (200, 200, 200)), - (0.75, (55, 58, 59)), - (0.85, (25, 20, 31)), - (0.91, (240, 240, 240)), - (0.95, (166, 175, 194)), - (1, (50, 50, 50)), - ], - (8, 8), - ) - # 黑 - add_color_text(16, "black", (0, 0)) - # 金 - add_gradient_text( - 10, - (0, 40, 0, 200), - [ - (0, (253, 241, 0)), - (0.25, (245, 253, 187)), - (0.4, (255, 255, 255)), - (0.75, (253, 219, 9)), - (0.9, (127, 53, 0)), - (1, (243, 196, 11)), - ], - (0, 0), - ) - # 黑 - add_color_text(6, "black", (4, -6)) - # 白 - add_color_text(6, "white", (0, -6)) - # 红 - add_gradient_text( - 4, - (0, 50, 0, 200), - [ - (0, (255, 100, 0)), - (0.5, (123, 0, 0)), - (0.51, (240, 0, 0)), - (1, (5, 0, 0)), - ], - (0, -6), - ) - # 红 - add_gradient_text( - 0, - (0, 50, 0, 200), - [ - (0, (230, 0, 0)), - (0.5, (123, 0, 0)), - (0.51, (240, 0, 0)), - (1, (5, 0, 0)), - ], - (0, -6), - ) - - text = texts[1] - fontname = "Noto Serif SC" - pos_x = 300 - pos_y = 480 - # 黑 - add_color_text(22, "black", (10, 4)) - # 银 - add_gradient_text( - 19, - (0, 320, 0, 506), - [ - (0, (0, 15, 36)), - (0.25, (250, 250, 
250)), - (0.5, (150, 150, 150)), - (0.75, (55, 58, 59)), - (0.85, (25, 20, 31)), - (0.91, (240, 240, 240)), - (0.95, (166, 175, 194)), - (1, (50, 50, 50)), - ], - (10, 4), - ) - # 黑 - add_color_text(17, "#10193A", (0, 0)) - # 白 - add_color_text(8, "#D0D0D0", (0, 0)) - # 绀 - add_gradient_text( - 7, - (0, 320, 0, 480), - [ - (0, (16, 25, 58)), - (0.03, (255, 255, 255)), - (0.08, (16, 25, 58)), - (0.2, (16, 25, 58)), - (1, (16, 25, 58)), - ], - (0, 0), - ) - # 银 - add_gradient_text( - 0, - (0, 320, 0, 480), - [ - (0, (245, 246, 248)), - (0.15, (255, 255, 255)), - (0.35, (195, 213, 220)), - (0.5, (160, 190, 201)), - (0.51, (160, 190, 201)), - (0.52, (196, 215, 222)), - (1.0, (255, 255, 255)), - ], - (0, -6), - ) - - img_h = 580 - img_w = max([img.width + pos[0] for img, pos in imgs]) - frame = BuildImage.new("RGBA", (img_w, img_h), "white") - for img, pos in imgs: - frame.paste(img, pos, alpha=True) - return frame.save_jpg() - - -add_meme( - "5000choyen", - fivethousand_choyen, - min_texts=2, - max_texts=2, - default_texts=["我去", "洛天依"], - keywords=["5000兆"], -) diff --git a/spaces/CikeyQI/meme-api/meme_generator/memes/confuse/__init__.py b/spaces/CikeyQI/meme-api/meme_generator/memes/confuse/__init__.py deleted file mode 100644 index 8bed6616b7c63a19eda84c1e429ea5486031b322..0000000000000000000000000000000000000000 --- a/spaces/CikeyQI/meme-api/meme_generator/memes/confuse/__init__.py +++ /dev/null @@ -1,32 +0,0 @@ -from pathlib import Path -from typing import List - -from pil_utils import BuildImage - -from meme_generator import add_meme -from meme_generator.utils import FrameAlignPolicy, Maker, make_gif_or_combined_gif - -img_dir = Path(__file__).parent / "images" - - -def confuse(images: List[BuildImage], texts, args): - img_w = min(images[0].width, 500) - - def maker(i: int) -> Maker: - def make(img: BuildImage) -> BuildImage: - img = img.convert("RGBA").resize_width(img_w) - frame = BuildImage.open(img_dir / f"{i}.png").resize( - img.size, keep_ratio=True - ) - bg = BuildImage.new("RGB", img.size, "white") - bg.paste(img, alpha=True).paste(frame, alpha=True) - return bg - - return make - - return make_gif_or_combined_gif( - images[0], maker, 100, 0.02, FrameAlignPolicy.extend_loop, input_based=True - ) - - -add_meme("confuse", confuse, min_images=1, max_images=1, keywords=["迷惑"]) diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/charset_normalizer/api.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/charset_normalizer/api.py deleted file mode 100644 index 0ba08e3a50ba6d61e75f3f31772eb4dfdd3f8f05..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/charset_normalizer/api.py +++ /dev/null @@ -1,626 +0,0 @@ -import logging -from os import PathLike -from typing import BinaryIO, List, Optional, Set, Union - -from .cd import ( - coherence_ratio, - encoding_languages, - mb_encoding_languages, - merge_coherence_ratios, -) -from .constant import IANA_SUPPORTED, TOO_BIG_SEQUENCE, TOO_SMALL_SEQUENCE, TRACE -from .md import mess_ratio -from .models import CharsetMatch, CharsetMatches -from .utils import ( - any_specified_encoding, - cut_sequence_chunks, - iana_name, - identify_sig_or_bom, - is_cp_similar, - is_multi_byte_encoding, - should_strip_sig_or_bom, -) - -# Will most likely be controversial -# logging.addLevelName(TRACE, "TRACE") -logger = logging.getLogger("charset_normalizer") -explain_handler = logging.StreamHandler() -explain_handler.setFormatter( - logging.Formatter("%(asctime)s | %(levelname)s | 
%(message)s") -) - - -def from_bytes( - sequences: Union[bytes, bytearray], - steps: int = 5, - chunk_size: int = 512, - threshold: float = 0.2, - cp_isolation: Optional[List[str]] = None, - cp_exclusion: Optional[List[str]] = None, - preemptive_behaviour: bool = True, - explain: bool = False, - language_threshold: float = 0.1, - enable_fallback: bool = True, -) -> CharsetMatches: - """ - Given a raw bytes sequence, return the best possibles charset usable to render str objects. - If there is no results, it is a strong indicator that the source is binary/not text. - By default, the process will extract 5 blocks of 512o each to assess the mess and coherence of a given sequence. - And will give up a particular code page after 20% of measured mess. Those criteria are customizable at will. - - The preemptive behavior DOES NOT replace the traditional detection workflow, it prioritize a particular code page - but never take it for granted. Can improve the performance. - - You may want to focus your attention to some code page or/and not others, use cp_isolation and cp_exclusion for that - purpose. - - This function will strip the SIG in the payload/sequence every time except on UTF-16, UTF-32. - By default the library does not setup any handler other than the NullHandler, if you choose to set the 'explain' - toggle to True it will alter the logger configuration to add a StreamHandler that is suitable for debugging. - Custom logging format and handler can be set manually. - """ - - if not isinstance(sequences, (bytearray, bytes)): - raise TypeError( - "Expected object of type bytes or bytearray, got: {0}".format( - type(sequences) - ) - ) - - if explain: - previous_logger_level: int = logger.level - logger.addHandler(explain_handler) - logger.setLevel(TRACE) - - length: int = len(sequences) - - if length == 0: - logger.debug("Encoding detection on empty bytes, assuming utf_8 intention.") - if explain: - logger.removeHandler(explain_handler) - logger.setLevel(previous_logger_level or logging.WARNING) - return CharsetMatches([CharsetMatch(sequences, "utf_8", 0.0, False, [], "")]) - - if cp_isolation is not None: - logger.log( - TRACE, - "cp_isolation is set. use this flag for debugging purpose. " - "limited list of encoding allowed : %s.", - ", ".join(cp_isolation), - ) - cp_isolation = [iana_name(cp, False) for cp in cp_isolation] - else: - cp_isolation = [] - - if cp_exclusion is not None: - logger.log( - TRACE, - "cp_exclusion is set. use this flag for debugging purpose. 
" - "limited list of encoding excluded : %s.", - ", ".join(cp_exclusion), - ) - cp_exclusion = [iana_name(cp, False) for cp in cp_exclusion] - else: - cp_exclusion = [] - - if length <= (chunk_size * steps): - logger.log( - TRACE, - "override steps (%i) and chunk_size (%i) as content does not fit (%i byte(s) given) parameters.", - steps, - chunk_size, - length, - ) - steps = 1 - chunk_size = length - - if steps > 1 and length / steps < chunk_size: - chunk_size = int(length / steps) - - is_too_small_sequence: bool = len(sequences) < TOO_SMALL_SEQUENCE - is_too_large_sequence: bool = len(sequences) >= TOO_BIG_SEQUENCE - - if is_too_small_sequence: - logger.log( - TRACE, - "Trying to detect encoding from a tiny portion of ({}) byte(s).".format( - length - ), - ) - elif is_too_large_sequence: - logger.log( - TRACE, - "Using lazy str decoding because the payload is quite large, ({}) byte(s).".format( - length - ), - ) - - prioritized_encodings: List[str] = [] - - specified_encoding: Optional[str] = ( - any_specified_encoding(sequences) if preemptive_behaviour else None - ) - - if specified_encoding is not None: - prioritized_encodings.append(specified_encoding) - logger.log( - TRACE, - "Detected declarative mark in sequence. Priority +1 given for %s.", - specified_encoding, - ) - - tested: Set[str] = set() - tested_but_hard_failure: List[str] = [] - tested_but_soft_failure: List[str] = [] - - fallback_ascii: Optional[CharsetMatch] = None - fallback_u8: Optional[CharsetMatch] = None - fallback_specified: Optional[CharsetMatch] = None - - results: CharsetMatches = CharsetMatches() - - sig_encoding, sig_payload = identify_sig_or_bom(sequences) - - if sig_encoding is not None: - prioritized_encodings.append(sig_encoding) - logger.log( - TRACE, - "Detected a SIG or BOM mark on first %i byte(s). Priority +1 given for %s.", - len(sig_payload), - sig_encoding, - ) - - prioritized_encodings.append("ascii") - - if "utf_8" not in prioritized_encodings: - prioritized_encodings.append("utf_8") - - for encoding_iana in prioritized_encodings + IANA_SUPPORTED: - if cp_isolation and encoding_iana not in cp_isolation: - continue - - if cp_exclusion and encoding_iana in cp_exclusion: - continue - - if encoding_iana in tested: - continue - - tested.add(encoding_iana) - - decoded_payload: Optional[str] = None - bom_or_sig_available: bool = sig_encoding == encoding_iana - strip_sig_or_bom: bool = bom_or_sig_available and should_strip_sig_or_bom( - encoding_iana - ) - - if encoding_iana in {"utf_16", "utf_32"} and not bom_or_sig_available: - logger.log( - TRACE, - "Encoding %s won't be tested as-is because it require a BOM. 
Will try some sub-encoder LE/BE.", - encoding_iana, - ) - continue - if encoding_iana in {"utf_7"} and not bom_or_sig_available: - logger.log( - TRACE, - "Encoding %s won't be tested as-is because detection is unreliable without BOM/SIG.", - encoding_iana, - ) - continue - - try: - is_multi_byte_decoder: bool = is_multi_byte_encoding(encoding_iana) - except (ModuleNotFoundError, ImportError): - logger.log( - TRACE, - "Encoding %s does not provide an IncrementalDecoder", - encoding_iana, - ) - continue - - try: - if is_too_large_sequence and is_multi_byte_decoder is False: - str( - sequences[: int(50e4)] - if strip_sig_or_bom is False - else sequences[len(sig_payload) : int(50e4)], - encoding=encoding_iana, - ) - else: - decoded_payload = str( - sequences - if strip_sig_or_bom is False - else sequences[len(sig_payload) :], - encoding=encoding_iana, - ) - except (UnicodeDecodeError, LookupError) as e: - if not isinstance(e, LookupError): - logger.log( - TRACE, - "Code page %s does not fit given bytes sequence at ALL. %s", - encoding_iana, - str(e), - ) - tested_but_hard_failure.append(encoding_iana) - continue - - similar_soft_failure_test: bool = False - - for encoding_soft_failed in tested_but_soft_failure: - if is_cp_similar(encoding_iana, encoding_soft_failed): - similar_soft_failure_test = True - break - - if similar_soft_failure_test: - logger.log( - TRACE, - "%s is deemed too similar to code page %s and was consider unsuited already. Continuing!", - encoding_iana, - encoding_soft_failed, - ) - continue - - r_ = range( - 0 if not bom_or_sig_available else len(sig_payload), - length, - int(length / steps), - ) - - multi_byte_bonus: bool = ( - is_multi_byte_decoder - and decoded_payload is not None - and len(decoded_payload) < length - ) - - if multi_byte_bonus: - logger.log( - TRACE, - "Code page %s is a multi byte encoding table and it appear that at least one character " - "was encoded using n-bytes.", - encoding_iana, - ) - - max_chunk_gave_up: int = int(len(r_) / 4) - - max_chunk_gave_up = max(max_chunk_gave_up, 2) - early_stop_count: int = 0 - lazy_str_hard_failure = False - - md_chunks: List[str] = [] - md_ratios = [] - - try: - for chunk in cut_sequence_chunks( - sequences, - encoding_iana, - r_, - chunk_size, - bom_or_sig_available, - strip_sig_or_bom, - sig_payload, - is_multi_byte_decoder, - decoded_payload, - ): - md_chunks.append(chunk) - - md_ratios.append( - mess_ratio( - chunk, - threshold, - explain is True and 1 <= len(cp_isolation) <= 2, - ) - ) - - if md_ratios[-1] >= threshold: - early_stop_count += 1 - - if (early_stop_count >= max_chunk_gave_up) or ( - bom_or_sig_available and strip_sig_or_bom is False - ): - break - except ( - UnicodeDecodeError - ) as e: # Lazy str loading may have missed something there - logger.log( - TRACE, - "LazyStr Loading: After MD chunk decode, code page %s does not fit given bytes sequence at ALL. %s", - encoding_iana, - str(e), - ) - early_stop_count = max_chunk_gave_up - lazy_str_hard_failure = True - - # We might want to check the sequence again with the whole content - # Only if initial MD tests passes - if ( - not lazy_str_hard_failure - and is_too_large_sequence - and not is_multi_byte_decoder - ): - try: - sequences[int(50e3) :].decode(encoding_iana, errors="strict") - except UnicodeDecodeError as e: - logger.log( - TRACE, - "LazyStr Loading: After final lookup, code page %s does not fit given bytes sequence at ALL. 
%s", - encoding_iana, - str(e), - ) - tested_but_hard_failure.append(encoding_iana) - continue - - mean_mess_ratio: float = sum(md_ratios) / len(md_ratios) if md_ratios else 0.0 - if mean_mess_ratio >= threshold or early_stop_count >= max_chunk_gave_up: - tested_but_soft_failure.append(encoding_iana) - logger.log( - TRACE, - "%s was excluded because of initial chaos probing. Gave up %i time(s). " - "Computed mean chaos is %f %%.", - encoding_iana, - early_stop_count, - round(mean_mess_ratio * 100, ndigits=3), - ) - # Preparing those fallbacks in case we got nothing. - if ( - enable_fallback - and encoding_iana in ["ascii", "utf_8", specified_encoding] - and not lazy_str_hard_failure - ): - fallback_entry = CharsetMatch( - sequences, encoding_iana, threshold, False, [], decoded_payload - ) - if encoding_iana == specified_encoding: - fallback_specified = fallback_entry - elif encoding_iana == "ascii": - fallback_ascii = fallback_entry - else: - fallback_u8 = fallback_entry - continue - - logger.log( - TRACE, - "%s passed initial chaos probing. Mean measured chaos is %f %%", - encoding_iana, - round(mean_mess_ratio * 100, ndigits=3), - ) - - if not is_multi_byte_decoder: - target_languages: List[str] = encoding_languages(encoding_iana) - else: - target_languages = mb_encoding_languages(encoding_iana) - - if target_languages: - logger.log( - TRACE, - "{} should target any language(s) of {}".format( - encoding_iana, str(target_languages) - ), - ) - - cd_ratios = [] - - # We shall skip the CD when its about ASCII - # Most of the time its not relevant to run "language-detection" on it. - if encoding_iana != "ascii": - for chunk in md_chunks: - chunk_languages = coherence_ratio( - chunk, - language_threshold, - ",".join(target_languages) if target_languages else None, - ) - - cd_ratios.append(chunk_languages) - - cd_ratios_merged = merge_coherence_ratios(cd_ratios) - - if cd_ratios_merged: - logger.log( - TRACE, - "We detected language {} using {}".format( - cd_ratios_merged, encoding_iana - ), - ) - - results.append( - CharsetMatch( - sequences, - encoding_iana, - mean_mess_ratio, - bom_or_sig_available, - cd_ratios_merged, - decoded_payload, - ) - ) - - if ( - encoding_iana in [specified_encoding, "ascii", "utf_8"] - and mean_mess_ratio < 0.1 - ): - logger.debug( - "Encoding detection: %s is most likely the one.", encoding_iana - ) - if explain: - logger.removeHandler(explain_handler) - logger.setLevel(previous_logger_level) - return CharsetMatches([results[encoding_iana]]) - - if encoding_iana == sig_encoding: - logger.debug( - "Encoding detection: %s is most likely the one as we detected a BOM or SIG within " - "the beginning of the sequence.", - encoding_iana, - ) - if explain: - logger.removeHandler(explain_handler) - logger.setLevel(previous_logger_level) - return CharsetMatches([results[encoding_iana]]) - - if len(results) == 0: - if fallback_u8 or fallback_ascii or fallback_specified: - logger.log( - TRACE, - "Nothing got out of the detection process. 
Using ASCII/UTF-8/Specified fallback.", - ) - - if fallback_specified: - logger.debug( - "Encoding detection: %s will be used as a fallback match", - fallback_specified.encoding, - ) - results.append(fallback_specified) - elif ( - (fallback_u8 and fallback_ascii is None) - or ( - fallback_u8 - and fallback_ascii - and fallback_u8.fingerprint != fallback_ascii.fingerprint - ) - or (fallback_u8 is not None) - ): - logger.debug("Encoding detection: utf_8 will be used as a fallback match") - results.append(fallback_u8) - elif fallback_ascii: - logger.debug("Encoding detection: ascii will be used as a fallback match") - results.append(fallback_ascii) - - if results: - logger.debug( - "Encoding detection: Found %s as plausible (best-candidate) for content. With %i alternatives.", - results.best().encoding, # type: ignore - len(results) - 1, - ) - else: - logger.debug("Encoding detection: Unable to determine any suitable charset.") - - if explain: - logger.removeHandler(explain_handler) - logger.setLevel(previous_logger_level) - - return results - - -def from_fp( - fp: BinaryIO, - steps: int = 5, - chunk_size: int = 512, - threshold: float = 0.20, - cp_isolation: Optional[List[str]] = None, - cp_exclusion: Optional[List[str]] = None, - preemptive_behaviour: bool = True, - explain: bool = False, - language_threshold: float = 0.1, - enable_fallback: bool = True, -) -> CharsetMatches: - """ - Same thing than the function from_bytes but using a file pointer that is already ready. - Will not close the file pointer. - """ - return from_bytes( - fp.read(), - steps, - chunk_size, - threshold, - cp_isolation, - cp_exclusion, - preemptive_behaviour, - explain, - language_threshold, - enable_fallback, - ) - - -def from_path( - path: Union[str, bytes, PathLike], # type: ignore[type-arg] - steps: int = 5, - chunk_size: int = 512, - threshold: float = 0.20, - cp_isolation: Optional[List[str]] = None, - cp_exclusion: Optional[List[str]] = None, - preemptive_behaviour: bool = True, - explain: bool = False, - language_threshold: float = 0.1, - enable_fallback: bool = True, -) -> CharsetMatches: - """ - Same thing than the function from_bytes but with one extra step. Opening and reading given file path in binary mode. - Can raise IOError. - """ - with open(path, "rb") as fp: - return from_fp( - fp, - steps, - chunk_size, - threshold, - cp_isolation, - cp_exclusion, - preemptive_behaviour, - explain, - language_threshold, - enable_fallback, - ) - - -def is_binary( - fp_or_path_or_payload: Union[PathLike, str, BinaryIO, bytes], # type: ignore[type-arg] - steps: int = 5, - chunk_size: int = 512, - threshold: float = 0.20, - cp_isolation: Optional[List[str]] = None, - cp_exclusion: Optional[List[str]] = None, - preemptive_behaviour: bool = True, - explain: bool = False, - language_threshold: float = 0.1, - enable_fallback: bool = False, -) -> bool: - """ - Detect if the given input (file, bytes, or path) points to a binary file. aka. not a string. - Based on the same main heuristic algorithms and default kwargs at the sole exception that fallbacks match - are disabled to be stricter around ASCII-compatible but unlikely to be a string. 
- """ - if isinstance(fp_or_path_or_payload, (str, PathLike)): - guesses = from_path( - fp_or_path_or_payload, - steps=steps, - chunk_size=chunk_size, - threshold=threshold, - cp_isolation=cp_isolation, - cp_exclusion=cp_exclusion, - preemptive_behaviour=preemptive_behaviour, - explain=explain, - language_threshold=language_threshold, - enable_fallback=enable_fallback, - ) - elif isinstance( - fp_or_path_or_payload, - ( - bytes, - bytearray, - ), - ): - guesses = from_bytes( - fp_or_path_or_payload, - steps=steps, - chunk_size=chunk_size, - threshold=threshold, - cp_isolation=cp_isolation, - cp_exclusion=cp_exclusion, - preemptive_behaviour=preemptive_behaviour, - explain=explain, - language_threshold=language_threshold, - enable_fallback=enable_fallback, - ) - else: - guesses = from_fp( - fp_or_path_or_payload, - steps=steps, - chunk_size=chunk_size, - threshold=threshold, - cp_isolation=cp_isolation, - cp_exclusion=cp_exclusion, - preemptive_behaviour=preemptive_behaviour, - explain=explain, - language_threshold=language_threshold, - enable_fallback=enable_fallback, - ) - - return not guesses diff --git a/spaces/Datasculptor/DescriptionGPT/tools/create_imagenetlvis_json.py b/spaces/Datasculptor/DescriptionGPT/tools/create_imagenetlvis_json.py deleted file mode 100644 index 4d5a0b3712b5a2fb94737b8dfe5d70202305926b..0000000000000000000000000000000000000000 --- a/spaces/Datasculptor/DescriptionGPT/tools/create_imagenetlvis_json.py +++ /dev/null @@ -1,52 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -import argparse -import json -import os -import cv2 -from nltk.corpus import wordnet - -if __name__ == '__main__': - parser = argparse.ArgumentParser() - parser.add_argument('--imagenet_path', default='datasets/imagenet/ImageNet-LVIS') - parser.add_argument('--lvis_meta_path', default='datasets/lvis/lvis_v1_val.json') - parser.add_argument('--out_path', default='datasets/imagenet/annotations/imagenet_lvis_image_info.json') - args = parser.parse_args() - - print('Loading LVIS meta') - data = json.load(open(args.lvis_meta_path, 'r')) - print('Done') - synset2cat = {x['synset']: x for x in data['categories']} - count = 0 - images = [] - image_counts = {} - folders = sorted(os.listdir(args.imagenet_path)) - for i, folder in enumerate(folders): - class_path = args.imagenet_path + folder - files = sorted(os.listdir(class_path)) - synset = wordnet.synset_from_pos_and_offset('n', int(folder[1:])).name() - cat = synset2cat[synset] - cat_id = cat['id'] - cat_name = cat['name'] - cat_images = [] - for file in files: - count = count + 1 - file_name = '{}/{}'.format(folder, file) - img = cv2.imread('{}/{}'.format(args.imagenet_path, file_name)) - h, w = img.shape[:2] - image = { - 'id': count, - 'file_name': file_name, - 'pos_category_ids': [cat_id], - 'width': w, - 'height': h - } - cat_images.append(image) - images.extend(cat_images) - image_counts[cat_id] = len(cat_images) - print(i, cat_name, len(cat_images)) - print('# Images', len(images)) - for x in data['categories']: - x['image_count'] = image_counts[x['id']] if x['id'] in image_counts else 0 - out = {'categories': data['categories'], 'images': images, 'annotations': []} - print('Writing to', args.out_path) - json.dump(out, open(args.out_path, 'w')) diff --git a/spaces/Detomo/AI-Galary/app.py b/spaces/Detomo/AI-Galary/app.py deleted file mode 100644 index 9eb1024cd7ebcf34c52f10d1d466ae24ba49f02f..0000000000000000000000000000000000000000 --- a/spaces/Detomo/AI-Galary/app.py +++ /dev/null @@ -1,28 +0,0 @@ -import gradio as gr -import 
pandas as pd - - -def make_clickable_model(model_name, link=None): - name = model_name.replace("https://huggingface.co/spaces/","") - return f'{name.split("/")[-1].replace("_", " ")}' - -def read_df(): - df = pd.read_excel("demo_df.xlsx") - links = [] - for i in range(df.shape[0]): - links.append(make_clickable_model(df.iloc[i, 2])) - df.drop(columns="Link", inplace=True) - df.insert(2, "Link", links) - df.insert(0, "ID", list(range(1, len(df) + 1))) - return df - -with gr.Blocks(theme=gr.themes.Soft()) as demo: - gr.Markdown( - """# Detomo AI Galary 🧙‍♀️ 🧛‍♀️ 🤖 """ - ) - galary = gr.Dataframe( - type="pandas", datatype=["number", "markdown", "markdown", "markdown"] - ) - demo.load(read_df, inputs=None, outputs=galary) - -demo.launch() \ No newline at end of file diff --git a/spaces/DrewKarn/CarperAI-stable-vicuna-13b-delta/app.py b/spaces/DrewKarn/CarperAI-stable-vicuna-13b-delta/app.py deleted file mode 100644 index 6ad7dfa9562e17b33892102e308343b0fc9845c3..0000000000000000000000000000000000000000 --- a/spaces/DrewKarn/CarperAI-stable-vicuna-13b-delta/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/CarperAI/stable-vicuna-13b-delta").launch() \ No newline at end of file diff --git a/spaces/EronSamez/RVC_HFmeu/Applio-RVC-Fork/utils/dependency.py b/spaces/EronSamez/RVC_HFmeu/Applio-RVC-Fork/utils/dependency.py deleted file mode 100644 index b70338b02d31b1ef455fbac817d418d328db518d..0000000000000000000000000000000000000000 --- a/spaces/EronSamez/RVC_HFmeu/Applio-RVC-Fork/utils/dependency.py +++ /dev/null @@ -1,170 +0,0 @@ -import os -import csv -import shutil -import tarfile -import subprocess -from pathlib import Path -from datetime import datetime - -def install_packages_but_jank_af(): - packages = ['build-essential', 'python3-dev', 'ffmpeg', 'aria2'] - pip_packages = ['pip', 'setuptools', 'wheel', 'httpx==0.23.0', 'faiss-gpu', 'fairseq', 'gradio==3.34.0', - 'ffmpeg', 'ffmpeg-python', 'praat-parselmouth', 'pyworld', 'numpy==1.23.5', - 'numba==0.56.4', 'librosa==0.9.2', 'mega.py', 'gdown', 'onnxruntime', 'pyngrok==4.1.12', - 'gTTS', 'elevenlabs', 'wget', 'tensorboardX', 'unidecode', 'huggingface-hub', 'stftpitchshift==1.5.1', - 'yt-dlp', 'pedalboard', 'pathvalidate', 'nltk', 'edge-tts', 'git+https://github.com/suno-ai/bark.git', 'python-dotenv' , 'av'] - - print("Updating and installing system packages...") - for package in packages: - print(f"Installing {package}...") - subprocess.check_call(['apt-get', 'install', '-qq', '-y', package]) - - print("Updating and installing pip packages...") - subprocess.check_call(['pip', 'install', '--upgrade'] + pip_packages) - - print('Packages up to date.') - - -def setup_environment(ForceUpdateDependencies, ForceTemporaryStorage): - # Mounting Google Drive - if not ForceTemporaryStorage: - from google.colab import drive - - if not os.path.exists('/content/drive'): - drive.mount('/content/drive') - else: - print('Drive is already mounted. 
Proceeding...') - - # Function to install dependencies with progress - def install_packages(): - packages = ['build-essential', 'python3-dev', 'ffmpeg', 'aria2'] - pip_packages = ['pip', 'setuptools', 'wheel', 'httpx==0.23.0', 'faiss-gpu', 'fairseq', 'gradio==3.34.0', - 'ffmpeg', 'ffmpeg-python', 'praat-parselmouth', 'pyworld', 'numpy==1.23.5', - 'numba==0.56.4', 'librosa==0.9.2', 'mega.py', 'gdown', 'onnxruntime', 'pyngrok==4.1.12', - 'gTTS', 'elevenlabs', 'wget', 'tensorboardX', 'unidecode', 'huggingface-hub', 'stftpitchshift==1.5.1', - 'yt-dlp', 'pedalboard', 'pathvalidate', 'nltk', 'edge-tts', 'git+https://github.com/suno-ai/bark.git', 'python-dotenv' , 'av'] - - print("Updating and installing system packages...") - for package in packages: - print(f"Installing {package}...") - subprocess.check_call(['apt-get', 'install', '-qq', '-y', package]) - - print("Updating and installing pip packages...") - subprocess.check_call(['pip', 'install', '--upgrade'] + pip_packages) - - - print('Packages up to date.') - - # Function to scan a directory and writes filenames and timestamps - def scan_and_write(base_path, output_file): - with open(output_file, 'w', newline='') as f: - writer = csv.writer(f) - for dirpath, dirs, files in os.walk(base_path): - for filename in files: - fname = os.path.join(dirpath, filename) - try: - mtime = os.path.getmtime(fname) - writer.writerow([fname, mtime]) - except Exception as e: - print(f'Skipping irrelevant nonexistent file {fname}: {str(e)}') - print(f'Finished recording filesystem timestamps to {output_file}.') - - # Function to compare files - def compare_files(old_file, new_file): - old_files = {} - new_files = {} - - with open(old_file, 'r') as f: - reader = csv.reader(f) - old_files = {rows[0]:rows[1] for rows in reader} - - with open(new_file, 'r') as f: - reader = csv.reader(f) - new_files = {rows[0]:rows[1] for rows in reader} - - removed_files = old_files.keys() - new_files.keys() - added_files = new_files.keys() - old_files.keys() - unchanged_files = old_files.keys() & new_files.keys() - - changed_files = {f for f in unchanged_files if old_files[f] != new_files[f]} - - for file in removed_files: - print(f'File has been removed: {file}') - - for file in changed_files: - print(f'File has been updated: {file}') - - return list(added_files) + list(changed_files) - - # Check if CachedRVC.tar.gz exists - if ForceTemporaryStorage: - file_path = '/content/CachedRVC.tar.gz' - else: - file_path = '/content/drive/MyDrive/RVC_Cached/CachedRVC.tar.gz' - - content_file_path = '/content/CachedRVC.tar.gz' - extract_path = '/' - - if not os.path.exists(file_path): - folder_path = os.path.dirname(file_path) - os.makedirs(folder_path, exist_ok=True) - print('No cached dependency install found. Attempting to download GitHub backup..') - - try: - download_url = "https://github.com/kalomaze/QuickMangioFixes/releases/download/release3/CachedRVC.tar.gz" - subprocess.run(["wget", "-O", file_path, download_url]) - print('Download completed successfully!') - except Exception as e: - print('Download failed:', str(e)) - - # Delete the failed download file - if os.path.exists(file_path): - os.remove(file_path) - print('Failed download file deleted. Continuing manual backup..') - - if Path(file_path).exists(): - if ForceTemporaryStorage: - print('Finished downloading CachedRVC.tar.gz.') - else: - print('CachedRVC.tar.gz found on Google Drive. 
Proceeding to copy and extract...') - - # Check if ForceTemporaryStorage is True and skip copying if it is - if ForceTemporaryStorage: - pass - else: - shutil.copy(file_path, content_file_path) - - print('Beginning backup copy operation...') - - with tarfile.open(content_file_path, 'r:gz') as tar: - for member in tar.getmembers(): - target_path = os.path.join(extract_path, member.name) - try: - tar.extract(member, extract_path) - except Exception as e: - print('Failed to extract a file (this isn\'t normal)... forcing an update to compensate') - ForceUpdateDependencies = True - print(f'Extraction of {content_file_path} to {extract_path} completed.') - - if ForceUpdateDependencies: - install_packages() - ForceUpdateDependencies = False - else: - print('CachedRVC.tar.gz not found. Proceeding to create an index of all current files...') - scan_and_write('/usr/', '/content/usr_files.csv') - - install_packages() - - scan_and_write('/usr/', '/content/usr_files_new.csv') - changed_files = compare_files('/content/usr_files.csv', '/content/usr_files_new.csv') - - with tarfile.open('/content/CachedRVC.tar.gz', 'w:gz') as new_tar: - for file in changed_files: - new_tar.add(file) - print(f'Added to tar: {file}') - - os.makedirs('/content/drive/MyDrive/RVC_Cached', exist_ok=True) - shutil.copy('/content/CachedRVC.tar.gz', '/content/drive/MyDrive/RVC_Cached/CachedRVC.tar.gz') - print('Updated CachedRVC.tar.gz copied to Google Drive.') - print('Dependencies fully up to date; future runs should be faster.') - diff --git a/spaces/EronSamez/RVC_HFmeu/train/mel_processing.py b/spaces/EronSamez/RVC_HFmeu/train/mel_processing.py deleted file mode 100644 index 1c871ab6b838b174407d163c201df899cc3e2b14..0000000000000000000000000000000000000000 --- a/spaces/EronSamez/RVC_HFmeu/train/mel_processing.py +++ /dev/null @@ -1,130 +0,0 @@ -import torch -import torch.utils.data -from librosa.filters import mel as librosa_mel_fn - - -MAX_WAV_VALUE = 32768.0 - - -def dynamic_range_compression_torch(x, C=1, clip_val=1e-5): - """ - PARAMS - ------ - C: compression factor - """ - return torch.log(torch.clamp(x, min=clip_val) * C) - - -def dynamic_range_decompression_torch(x, C=1): - """ - PARAMS - ------ - C: compression factor used to compress - """ - return torch.exp(x) / C - - -def spectral_normalize_torch(magnitudes): - return dynamic_range_compression_torch(magnitudes) - - -def spectral_de_normalize_torch(magnitudes): - return dynamic_range_decompression_torch(magnitudes) - - -# Reusable banks -mel_basis = {} -hann_window = {} - - -def spectrogram_torch(y, n_fft, sampling_rate, hop_size, win_size, center=False): - """Convert waveform into Linear-frequency Linear-amplitude spectrogram. 
- - Args: - y :: (B, T) - Audio waveforms - n_fft - sampling_rate - hop_size - win_size - center - Returns: - :: (B, Freq, Frame) - Linear-frequency Linear-amplitude spectrogram - """ - # Validation - if torch.min(y) < -1.07: - print("min value is ", torch.min(y)) - if torch.max(y) > 1.07: - print("max value is ", torch.max(y)) - - # Window - Cache if needed - global hann_window - dtype_device = str(y.dtype) + "_" + str(y.device) - wnsize_dtype_device = str(win_size) + "_" + dtype_device - if wnsize_dtype_device not in hann_window: - hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to( - dtype=y.dtype, device=y.device - ) - - # Padding - y = torch.nn.functional.pad( - y.unsqueeze(1), - (int((n_fft - hop_size) / 2), int((n_fft - hop_size) / 2)), - mode="reflect", - ) - y = y.squeeze(1) - - # Complex Spectrogram :: (B, T) -> (B, Freq, Frame, RealComplex=2) - spec = torch.stft( - y, - n_fft, - hop_length=hop_size, - win_length=win_size, - window=hann_window[wnsize_dtype_device], - center=center, - pad_mode="reflect", - normalized=False, - onesided=True, - return_complex=False, - ) - - # Linear-frequency Linear-amplitude spectrogram :: (B, Freq, Frame, RealComplex=2) -> (B, Freq, Frame) - spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6) - return spec - - -def spec_to_mel_torch(spec, n_fft, num_mels, sampling_rate, fmin, fmax): - # MelBasis - Cache if needed - global mel_basis - dtype_device = str(spec.dtype) + "_" + str(spec.device) - fmax_dtype_device = str(fmax) + "_" + dtype_device - if fmax_dtype_device not in mel_basis: - mel = librosa_mel_fn( - sr=sampling_rate, n_fft=n_fft, n_mels=num_mels, fmin=fmin, fmax=fmax - ) - mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to( - dtype=spec.dtype, device=spec.device - ) - - # Mel-frequency Log-amplitude spectrogram :: (B, Freq=num_mels, Frame) - melspec = torch.matmul(mel_basis[fmax_dtype_device], spec) - melspec = spectral_normalize_torch(melspec) - return melspec - - -def mel_spectrogram_torch( - y, n_fft, num_mels, sampling_rate, hop_size, win_size, fmin, fmax, center=False -): - """Convert waveform into Mel-frequency Log-amplitude spectrogram. - - Args: - y :: (B, T) - Waveforms - Returns: - melspec :: (B, Freq, Frame) - Mel-frequency Log-amplitude spectrogram - """ - # Linear-frequency Linear-amplitude spectrogram :: (B, T) -> (B, Freq, Frame) - spec = spectrogram_torch(y, n_fft, sampling_rate, hop_size, win_size, center) - - # Mel-frequency Log-amplitude spectrogram :: (B, Freq, Frame) -> (B, Freq=num_mels, Frame) - melspec = spec_to_mel_torch(spec, n_fft, num_mels, sampling_rate, fmin, fmax) - - return melspec diff --git a/spaces/EsoCode/text-generation-webui/docs/Extensions.md b/spaces/EsoCode/text-generation-webui/docs/Extensions.md deleted file mode 100644 index b9c155fe4869280bce498c95dbc54628aabb721d..0000000000000000000000000000000000000000 --- a/spaces/EsoCode/text-generation-webui/docs/Extensions.md +++ /dev/null @@ -1,161 +0,0 @@ -Extensions are defined by files named `script.py` inside subfolders of `text-generation-webui/extensions`. They are loaded at startup if specified with the `--extensions` flag. - -For instance, `extensions/silero_tts/script.py` gets loaded with `python server.py --extensions silero_tts`. - -## [text-generation-webui-extensions](https://github.com/oobabooga/text-generation-webui-extensions) - -The link above contains a directory of user extensions for text-generation-webui. - -If you create an extension, you are welcome to host it in a GitHub repository and submit it to the list above. 
- -## Built-in extensions - -Most of these have been created by the extremely talented contributors that you can find here: [contributors](https://github.com/oobabooga/text-generation-webui/graphs/contributors?from=2022-12-18&to=&type=a). - -|Extension|Description| -|---------|-----------| -|[api](https://github.com/oobabooga/text-generation-webui/tree/main/extensions/api)| Creates an API with two endpoints, one for streaming at `/api/v1/stream` port 5005 and another for blocking at `/api/v1/generate` port 5000. This is the main API for this web UI. | -|[google_translate](https://github.com/oobabooga/text-generation-webui/tree/main/extensions/google_translate)| Automatically translates inputs and outputs using Google Translate.| -|[character_bias](https://github.com/oobabooga/text-generation-webui/tree/main/extensions/character_bias)| Just a very simple example that biases the bot's responses in chat mode.| -|[gallery](https://github.com/oobabooga/text-generation-webui/blob/main/extensions/gallery/)| Creates a gallery with the chat characters and their pictures. | -|[silero_tts](https://github.com/oobabooga/text-generation-webui/tree/main/extensions/silero_tts)| Text-to-speech extension using [Silero](https://github.com/snakers4/silero-models). When used in chat mode, it replaces the responses with an audio widget. | -|[elevenlabs_tts](https://github.com/oobabooga/text-generation-webui/tree/main/extensions/elevenlabs_tts)| Text-to-speech extension using the [ElevenLabs](https://beta.elevenlabs.io/) API. You need an API key to use it. | -|[send_pictures](https://github.com/oobabooga/text-generation-webui/blob/main/extensions/send_pictures/)| Creates an image upload field that can be used to send images to the bot in chat mode. Captions are automatically generated using BLIP. | -|[whisper_stt](https://github.com/oobabooga/text-generation-webui/tree/main/extensions/whisper_stt)| Allows you to enter your inputs in chat mode using your microphone. | -|[sd_api_pictures](https://github.com/oobabooga/text-generation-webui/tree/main/extensions/sd_api_pictures)| Allows you to request pictures from the bot in chat mode, which will be generated using the AUTOMATIC1111 Stable Diffusion API. See examples [here](https://github.com/oobabooga/text-generation-webui/pull/309). | -|[multimodal](https://github.com/oobabooga/text-generation-webui/tree/main/extensions/multimodal) | Adds multimodality support (text+images). For a detailed description see [README.md](https://github.com/oobabooga/text-generation-webui/tree/main/extensions/multimodal/README.md) in the extension directory. | -|[openai](https://github.com/oobabooga/text-generation-webui/tree/main/extensions/openai)| Creates an API that mimics the OpenAI API and can be used as a drop-in replacement. | -|[superbooga](https://github.com/oobabooga/text-generation-webui/tree/main/extensions/superbooga)| An extension that uses ChromaDB to create an arbitrarily large pseudocontext, taking as input text files, URLs, or pasted text. Based on https://github.com/kaiokendev/superbig. | - -## How to write an extension - -script.py may define the special functions and variables below. - -#### Predefined functions - -| Function | Description | -|-------------|-------------| -| `def ui()` | Creates custom gradio elements when the UI is launched. | -| `def custom_css()` | Returns custom CSS as a string. It is applied whenever the web UI is loaded. | -| `def custom_js()` | Same as above but for javascript. 
| -| `def input_modifier(string)` | Modifies the input string before it enters the model. In chat mode, it is applied to the user message. Otherwise, it is applied to the entire prompt. | -| `def output_modifier(string)` | Modifies the output string before it is presented in the UI. In chat mode, it is applied to the bot's reply. Otherwise, it is applied to the entire output. | -| `def state_modifier(state)` | Modifies the dictionary containing the UI input parameters before it is used by the text generation functions. | -| `def history_modifier(history)` | Modifies the chat history before the text generation in chat mode begins. | -| `def bot_prefix_modifier(string)` | Applied in chat mode to the prefix for the bot's reply. | -| `def custom_generate_reply(...)` | Overrides the main text generation function. | -| `def custom_generate_chat_prompt(...)` | Overrides the prompt generator in chat mode. | -| `def tokenizer_modifier(state, prompt, input_ids, input_embeds)` | Modifies the `input_ids`/`input_embeds` fed to the model. Should return `prompt`, `input_ids`, `input_embeds`. See the `multimodal` extension for an example. | -| `def custom_tokenized_length(prompt)` | Used in conjunction with `tokenizer_modifier`, returns the length in tokens of `prompt`. See the `multimodal` extension for an example. | - -#### `params` dictionary - -In this dictionary, `display_name` is used to define the displayed name of the extension in the UI, and `is_tab` is used to define whether the extension should appear in a new tab. By default, extensions appear at the bottom of the "Text generation" tab. - -Example: - -```python -params = { - "display_name": "Google Translate", - "is_tab": True, -} -``` - -Additionally, `params` may contain variables that you want to be customizable through a `settings.json` file. For instance, assuming the extension is in `extensions/google_translate`, the variable `language string` in - -```python -params = { - "display_name": "Google Translate", - "is_tab": True, - "language string": "jp" -} -``` - -can be customized by adding a key called `google_translate-language string` to `settings.json`: - -```python -"google_translate-language string": "fr", -``` - -That is, the syntax is `extension_name-variable_name`. - -#### `input_hijack` dictionary - -```python -input_hijack = { - 'state': False, - 'value': ["", ""] -} -``` -This is only used in chat mode. If your extension sets `input_hijack['state'] = True` at any moment, the next call to `modules.chat.chatbot_wrapper` will use the values inside `input_hijack['value']` as the user input for text generation. See the `send_pictures` extension above for an example. - -Additionally, your extension can set the value to be a callback in the form of `def cb(text: str, visible_text: str) -> [str, str]`. See the `multimodal` extension above for an example. - -## Using multiple extensions at the same time - -In order to use your extension, you must start the web UI with the `--extensions` flag followed by the name of your extension (the folder under `text-generation-webui/extension` where `script.py` resides). - -You can activate more than one extension at a time by providing their names separated by spaces. The input, output, and bot prefix modifiers will be applied in the specified order. 
- - -``` -python server.py --extensions enthusiasm translate # First apply enthusiasm, then translate -python server.py --extensions translate enthusiasm # First apply translate, then enthusiasm -``` - -Do note, that for: -- `custom_generate_chat_prompt` -- `custom_generate_reply` -- `tokenizer_modifier` -- `custom_tokenized_length` - -only the first declaration encountered will be used and the rest will be ignored. - -## The `bot_prefix_modifier` - -In chat mode, this function modifies the prefix for a new bot message. For instance, if your bot is named `Marie Antoinette`, the default prefix for a new message will be - -``` -Marie Antoinette: -``` - -Using `bot_prefix_modifier`, you can change it to: - -``` -Marie Antoinette: *I am very enthusiastic* -``` - -Marie Antoinette will become very enthusiastic in all her messages. - -## `custom_generate_reply` example - -Once defined in a `script.py`, this function is executed in place of the main generation functions. You can use it to connect the web UI to an external API, or to load a custom model that is not supported yet. - -Note that in chat mode, this function must only return the new text, whereas in other modes it must return the original prompt + the new text. - -```python -import datetime - -def custom_generate_reply(question, original_question, seed, state, stopping_strings): - cumulative = '' - for i in range(10): - cumulative += f"Counting: {i}...\n" - yield cumulative - - cumulative += f"Done! {str(datetime.datetime.now())}" - yield cumulative -``` - -## `custom_generate_chat_prompt` example - -Below is an extension that just reproduces the default prompt generator in `modules/chat.py`. You can modify it freely to come up with your own prompts in chat mode. - -```python -from modules import chat - -def custom_generate_chat_prompt(user_input, state, **kwargs): - - # Do something with kwargs['history'] or state - - return chat.generate_chat_prompt(user_input, state, **kwargs) -``` diff --git a/spaces/EsoCode/text-generation-webui/modules/shared.py b/spaces/EsoCode/text-generation-webui/modules/shared.py deleted file mode 100644 index dfa9cd3822fb9662836e576061de663be8ea1058..0000000000000000000000000000000000000000 --- a/spaces/EsoCode/text-generation-webui/modules/shared.py +++ /dev/null @@ -1,266 +0,0 @@ -import argparse -from collections import OrderedDict -from pathlib import Path - -import yaml - -from modules.logging_colors import logger - -generation_lock = None -model = None -tokenizer = None -is_seq2seq = False -model_name = "None" -lora_names = [] - -# Chat variables -history = {'internal': [], 'visible': []} -character = 'None' -stop_everything = False -processing_message = '*Is typing...*' - -# UI elements (buttons, sliders, HTML, etc) -gradio = {} - -# For keeping the values of UI elements on page reload -persistent_interface_state = {} - -input_params = [] # Generation input parameters -reload_inputs = [] # Parameters for reloading the chat interface - -# For restarting the interface -need_restart = False - -settings = { - 'dark_theme': False, - 'autoload_model': True, - 'max_new_tokens': 200, - 'max_new_tokens_min': 1, - 'max_new_tokens_max': 2000, - 'seed': -1, - 'character': 'None', - 'name1': 'You', - 'name2': 'Assistant', - 'context': 'This is a conversation with your Assistant. It is a computer program designed to help you with various tasks such as answering questions, providing recommendations, and helping with decision making. 
You can ask it anything you want and it will do its best to give you accurate and relevant information.', - 'greeting': '', - 'turn_template': '', - 'custom_stopping_strings': '', - 'stop_at_newline': False, - 'add_bos_token': True, - 'ban_eos_token': False, - 'skip_special_tokens': True, - 'truncation_length': 2048, - 'truncation_length_min': 0, - 'truncation_length_max': 16384, - 'mode': 'chat', - 'start_with': '', - 'chat_style': 'cai-chat', - 'instruction_template': 'None', - 'chat-instruct_command': 'Continue the chat dialogue below. Write a single reply for the character "<|character|>".\n\n<|prompt|>', - 'chat_generation_attempts': 1, - 'chat_generation_attempts_min': 1, - 'chat_generation_attempts_max': 10, - 'default_extensions': [], - 'chat_default_extensions': ['gallery'], - 'preset': 'simple-1', - 'prompt': 'QA', -} - - -def str2bool(v): - if isinstance(v, bool): - return v - if v.lower() in ('yes', 'true', 't', 'y', '1'): - return True - elif v.lower() in ('no', 'false', 'f', 'n', '0'): - return False - else: - raise argparse.ArgumentTypeError('Boolean value expected.') - - -parser = argparse.ArgumentParser(formatter_class=lambda prog: argparse.HelpFormatter(prog, max_help_position=54)) - -# Basic settings -parser.add_argument('--notebook', action='store_true', help='Launch the web UI in notebook mode, where the output is written to the same text box as the input.') -parser.add_argument('--chat', action='store_true', help='Launch the web UI in chat mode with a style similar to the Character.AI website.') -parser.add_argument('--character', type=str, help='The name of the character to load in chat mode by default.') -parser.add_argument('--model', type=str, help='Name of the model to load by default.') -parser.add_argument('--lora', type=str, nargs="+", help='The list of LoRAs to load. If you want to load more than one LoRA, write the names separated by spaces.') -parser.add_argument("--model-dir", type=str, default='models/', help="Path to directory with all the models") -parser.add_argument("--lora-dir", type=str, default='loras/', help="Path to directory with all the loras") -parser.add_argument('--model-menu', action='store_true', help='Show a model menu in the terminal when the web UI is first launched.') -parser.add_argument('--no-stream', action='store_true', help='Don\'t stream the text output in real time.') -parser.add_argument('--settings', type=str, help='Load the default interface settings from this yaml file. See settings-template.yaml for an example. If you create a file called settings.yaml, this file will be loaded by default without the need to use the --settings flag.') -parser.add_argument('--extensions', type=str, nargs="+", help='The list of extensions to load. If you want to load more than one extension, write the names separated by spaces.') -parser.add_argument('--verbose', action='store_true', help='Print the prompts to the terminal.') - -# Model loader -parser.add_argument('--loader', type=str, help='Choose the model loader manually, otherwise, it will get autodetected. Valid options: transformers, autogptq, gptq-for-llama, exllama, exllama_hf, llamacpp, rwkv, flexgen') - -# Accelerate/transformers -parser.add_argument('--cpu', action='store_true', help='Use the CPU to generate text. 
Warning: Training on CPU is extremely slow.') -parser.add_argument('--auto-devices', action='store_true', help='Automatically split the model across the available GPU(s) and CPU.') -parser.add_argument('--gpu-memory', type=str, nargs="+", help='Maximum GPU memory in GiB to be allocated per GPU. Example: --gpu-memory 10 for a single GPU, --gpu-memory 10 5 for two GPUs. You can also set values in MiB like --gpu-memory 3500MiB.') -parser.add_argument('--cpu-memory', type=str, help='Maximum CPU memory in GiB to allocate for offloaded weights. Same as above.') -parser.add_argument('--disk', action='store_true', help='If the model is too large for your GPU(s) and CPU combined, send the remaining layers to the disk.') -parser.add_argument('--disk-cache-dir', type=str, default="cache", help='Directory to save the disk cache to. Defaults to "cache".') -parser.add_argument('--load-in-8bit', action='store_true', help='Load the model with 8-bit precision (using bitsandbytes).') -parser.add_argument('--bf16', action='store_true', help='Load the model with bfloat16 precision. Requires NVIDIA Ampere GPU.') -parser.add_argument('--no-cache', action='store_true', help='Set use_cache to False while generating text. This reduces the VRAM usage a bit at a performance cost.') -parser.add_argument('--xformers', action='store_true', help="Use xformer's memory efficient attention. This should increase your tokens/s.") -parser.add_argument('--sdp-attention', action='store_true', help="Use torch 2.0's sdp attention.") -parser.add_argument('--trust-remote-code', action='store_true', help="Set trust_remote_code=True while loading a model. Necessary for ChatGLM and Falcon.") - -# Accelerate 4-bit -parser.add_argument('--load-in-4bit', action='store_true', help='Load the model with 4-bit precision (using bitsandbytes).') -parser.add_argument('--compute_dtype', type=str, default="float16", help="compute dtype for 4-bit. Valid options: bfloat16, float16, float32.") -parser.add_argument('--quant_type', type=str, default="nf4", help='quant_type for 4-bit. Valid options: nf4, fp4.') -parser.add_argument('--use_double_quant', action='store_true', help='use_double_quant for 4-bit.') - -# llama.cpp -parser.add_argument('--threads', type=int, default=0, help='Number of threads to use.') -parser.add_argument('--n_batch', type=int, default=512, help='Maximum number of prompt tokens to batch together when calling llama_eval.') -parser.add_argument('--no-mmap', action='store_true', help='Prevent mmap from being used.') -parser.add_argument('--mlock', action='store_true', help='Force the system to keep the model in RAM.') -parser.add_argument('--cache-capacity', type=str, help='Maximum cache capacity. Examples: 2000MiB, 2GiB. When provided without units, bytes will be assumed.') -parser.add_argument('--n-gpu-layers', type=int, default=0, help='Number of layers to offload to the GPU.') -parser.add_argument('--n_ctx', type=int, default=2048, help='Size of the prompt context.') -parser.add_argument('--llama_cpp_seed', type=int, default=0, help='Seed for llama-cpp models. Default 0 (random)') - -# GPTQ -parser.add_argument('--wbits', type=int, default=0, help='Load a pre-quantized model with specified precision in bits. 2, 3, 4 and 8 are supported.') -parser.add_argument('--model_type', type=str, help='Model type of pre-quantized model. 
Currently LLaMA, OPT, and GPT-J are supported.') -parser.add_argument('--groupsize', type=int, default=-1, help='Group size.') -parser.add_argument('--pre_layer', type=int, nargs="+", help='The number of layers to allocate to the GPU. Setting this parameter enables CPU offloading for 4-bit models. For multi-gpu, write the numbers separated by spaces, eg --pre_layer 30 60.') -parser.add_argument('--checkpoint', type=str, help='The path to the quantized checkpoint file. If not specified, it will be automatically detected.') -parser.add_argument('--monkey-patch', action='store_true', help='Apply the monkey patch for using LoRAs with quantized models.') -parser.add_argument('--quant_attn', action='store_true', help='(triton) Enable quant attention.') -parser.add_argument('--warmup_autotune', action='store_true', help='(triton) Enable warmup autotune.') -parser.add_argument('--fused_mlp', action='store_true', help='(triton) Enable fused mlp.') - -# AutoGPTQ -parser.add_argument('--gptq-for-llama', action='store_true', help='DEPRECATED') -parser.add_argument('--autogptq', action='store_true', help='DEPRECATED') -parser.add_argument('--triton', action='store_true', help='Use triton.') -parser.add_argument('--no_inject_fused_attention', action='store_true', help='Do not use fused attention (lowers VRAM requirements).') -parser.add_argument('--no_inject_fused_mlp', action='store_true', help='Triton mode only: Do not use fused MLP (lowers VRAM requirements).') -parser.add_argument('--no_use_cuda_fp16', action='store_true', help='This can make models faster on some systems.') -parser.add_argument('--desc_act', action='store_true', help='For models that don\'t have a quantize_config.json, this parameter is used to define whether to set desc_act or not in BaseQuantizeConfig.') - -# ExLlama -parser.add_argument('--gpu-split', type=str, help="Comma-separated list of VRAM (in GB) to use per GPU device for model layers, e.g. 20,7,7") -parser.add_argument('--max_seq_len', type=int, default=2048, help="Maximum sequence length.") -parser.add_argument('--compress_pos_emb', type=int, default=1, help="Positional embeddings compression factor. Should typically be set to max_seq_len / 2048.") - -# FlexGen -parser.add_argument('--flexgen', action='store_true', help='DEPRECATED') -parser.add_argument('--percent', type=int, nargs="+", default=[0, 100, 100, 0, 100, 0], help='FlexGen: allocation percentages. Must be 6 numbers separated by spaces (default: 0, 100, 100, 0, 100, 0).') -parser.add_argument("--compress-weight", action="store_true", help="FlexGen: activate weight compression.") -parser.add_argument("--pin-weight", type=str2bool, nargs="?", const=True, default=True, help="FlexGen: whether to pin weights (setting this to False reduces CPU memory by 20%%).") - -# DeepSpeed -parser.add_argument('--deepspeed', action='store_true', help='Enable the use of DeepSpeed ZeRO-3 for inference via the Transformers integration.') -parser.add_argument('--nvme-offload-dir', type=str, help='DeepSpeed: Directory to use for ZeRO-3 NVME offloading.') -parser.add_argument('--local_rank', type=int, default=0, help='DeepSpeed: Optional argument for distributed setups.') - -# RWKV -parser.add_argument('--rwkv-strategy', type=str, default=None, help='RWKV: The strategy to use while loading the model. 
Examples: "cpu fp32", "cuda fp16", "cuda fp16i8".') -parser.add_argument('--rwkv-cuda-on', action='store_true', help='RWKV: Compile the CUDA kernel for better performance.') - -# Gradio -parser.add_argument('--listen', action='store_true', help='Make the web UI reachable from your local network.') -parser.add_argument('--listen-host', type=str, help='The hostname that the server will use.') -parser.add_argument('--listen-port', type=int, help='The listening port that the server will use.') -parser.add_argument('--share', action='store_true', help='Create a public URL. This is useful for running the web UI on Google Colab or similar.') -parser.add_argument('--auto-launch', action='store_true', default=False, help='Open the web UI in the default browser upon launch.') -parser.add_argument("--gradio-auth", type=str, help='set gradio authentication like "username:password"; or comma-delimit multiple like "u1:p1,u2:p2,u3:p3"', default=None) -parser.add_argument("--gradio-auth-path", type=str, help='Set the gradio authentication file path. The file should contain one or more user:password pairs in this format: "u1:p1,u2:p2,u3:p3"', default=None) - -# API -parser.add_argument('--api', action='store_true', help='Enable the API extension.') -parser.add_argument('--api-blocking-port', type=int, default=5000, help='The listening port for the blocking API.') -parser.add_argument('--api-streaming-port', type=int, default=5005, help='The listening port for the streaming API.') -parser.add_argument('--public-api', action='store_true', help='Create a public URL for the API using Cloudfare.') - -# Multimodal -parser.add_argument('--multimodal-pipeline', type=str, default=None, help='The multimodal pipeline to use. Examples: llava-7b, llava-13b.') - -args = parser.parse_args() -args_defaults = parser.parse_args([]) - -# Deprecation warnings -if args.autogptq: - logger.warning('--autogptq has been deprecated and will be removed soon. Use --loader autogptq instead.') - args.loader = 'autogptq' -if args.gptq_for_llama: - logger.warning('--gptq-for-llama has been deprecated and will be removed soon. Use --loader gptq-for-llama instead.') - args.loader = 'gptq-for-llama' -if args.flexgen: - logger.warning('--flexgen has been deprecated and will be removed soon. Use --loader flexgen instead.') - args.loader = 'FlexGen' - -# Security warnings -if args.trust_remote_code: - logger.warning("trust_remote_code is enabled. This is dangerous.") -if args.share: - logger.warning("The gradio \"share link\" feature uses a proprietary executable to create a reverse tunnel. 
Use it with care.") - - -def fix_loader_name(name): - name = name.lower() - if name in ['llamacpp', 'llama.cpp', 'llama-cpp', 'llama cpp']: - return 'llama.cpp' - elif name in ['transformers', 'huggingface', 'hf', 'hugging_face', 'hugging face']: - return 'Transformers' - elif name in ['autogptq', 'auto-gptq', 'auto_gptq', 'auto gptq']: - return 'AutoGPTQ' - elif name in ['gptq-for-llama', 'gptqforllama', 'gptqllama', 'gptq for llama', 'gptq_for_llama']: - return 'GPTQ-for-LLaMa' - elif name in ['exllama', 'ex-llama', 'ex_llama', 'exlama']: - return 'ExLlama' - elif name in ['exllama-hf', 'exllama_hf', 'exllama hf', 'ex-llama-hf', 'ex_llama_hf']: - return 'ExLlama_HF' - - -if args.loader is not None: - args.loader = fix_loader_name(args.loader) - - -def add_extension(name): - if args.extensions is None: - args.extensions = [name] - elif 'api' not in args.extensions: - args.extensions.append(name) - - -# Activating the API extension -if args.api or args.public_api: - add_extension('api') - -# Activating the multimodal extension -if args.multimodal_pipeline is not None: - add_extension('multimodal') - - -def is_chat(): - return args.chat - - -# Loading model-specific settings -with Path(f'{args.model_dir}/config.yaml') as p: - if p.exists(): - model_config = yaml.safe_load(open(p, 'r').read()) - else: - model_config = {} - -# Applying user-defined model settings -with Path(f'{args.model_dir}/config-user.yaml') as p: - if p.exists(): - user_config = yaml.safe_load(open(p, 'r').read()) - for k in user_config: - if k in model_config: - model_config[k].update(user_config[k]) - else: - model_config[k] = user_config[k] - -model_config = OrderedDict(model_config) diff --git a/spaces/FFZG-cleopatra/latvian-twitter-sentiment-classifier/engine.py b/spaces/FFZG-cleopatra/latvian-twitter-sentiment-classifier/engine.py deleted file mode 100644 index b7d3c1ccc7807014595e50cdd616ef6939d91674..0000000000000000000000000000000000000000 --- a/spaces/FFZG-cleopatra/latvian-twitter-sentiment-classifier/engine.py +++ /dev/null @@ -1,119 +0,0 @@ -import torch -import torch.nn as nn -from tqdm import tqdm -from utils import categorical_accuracy - - -def loss_fn(outputs, targets): - return nn.CrossEntropyLoss()(outputs, targets) - - -def train_fn(data_loader, model, optimizer, device, scheduler): - model.train() - train_loss, train_acc = 0.0, 0.0 - - for bi, d in tqdm(enumerate(data_loader), total=len(data_loader)): - ids = d["ids"] - token_type_ids = d["token_type_ids"] - mask = d["mask"] - targets = d["targets"] - - ids = ids.to(device, dtype=torch.long) - token_type_ids = token_type_ids.to(device, dtype=torch.long) - mask = mask.to(device, dtype=torch.long) - targets = targets.to(device, dtype=torch.long) - - optimizer.zero_grad() - outputs = model( - ids=ids, - mask=mask, - token_type_ids=token_type_ids - ) - - loss = loss_fn(outputs, targets) - loss.backward() - - optimizer.step() - scheduler.step() - train_loss += loss.item() - pred_labels = torch.argmax(outputs, dim=1) - # (pred_labels == targets).sum().item() - train_acc += categorical_accuracy(outputs, targets).item() - - train_loss /= len(data_loader) - train_acc /= len(data_loader) - return train_loss, train_acc - - -def eval_fn(data_loader, model, device): - model.eval() - eval_loss, eval_acc = 0.0, 0.0 - fin_targets = [] - fin_outputs = [] - with torch.no_grad(): - for bi, d in tqdm(enumerate(data_loader), total=len(data_loader)): - ids = d["ids"] - token_type_ids = d["token_type_ids"] - mask = d["mask"] - targets = d["targets"] - - ids = ids.to(device, 
dtype=torch.long) - token_type_ids = token_type_ids.to(device, dtype=torch.long) - mask = mask.to(device, dtype=torch.long) - targets = targets.to(device, dtype=torch.long) - - outputs = model( - ids=ids, - mask=mask, - token_type_ids=token_type_ids - ) - loss = loss_fn(outputs, targets) - eval_loss += loss.item() - pred_labels = torch.argmax(outputs, axis=1) - # (pred_labels == targets).sum().item() - eval_acc += categorical_accuracy(outputs, targets).item() - fin_targets.extend(targets.cpu().detach().numpy().tolist()) - fin_outputs.extend(torch.argmax( - outputs, dim=1).cpu().detach().numpy().tolist()) - eval_loss /= len(data_loader) - eval_acc /= len(data_loader) - return fin_outputs, fin_targets, eval_loss, eval_acc - - - -def predict_fn(data_loader, model, device, extract_features=False): - model.eval() - - fin_outputs = [] - extracted_features =[] - with torch.no_grad(): - for bi, d in tqdm(enumerate(data_loader), total=len(data_loader)): - ids = d["ids"] - token_type_ids = d["token_type_ids"] - mask = d["mask"] - # targets = d["targets"] - - ids = ids.to(device, dtype=torch.long) - token_type_ids = token_type_ids.to(device, dtype=torch.long) - mask = mask.to(device, dtype=torch.long) - - outputs = model( - ids=ids, - mask=mask, - token_type_ids=token_type_ids - ) - if extract_features: - extracted_features.extend( model.extract_features( - ids=ids, - mask=mask, - token_type_ids=token_type_ids - ).cpu().detach().numpy().tolist()) - print("0",outputs) - print("1",torch.argmax(outputs, dim=1)) - print("2",torch.argmax(outputs, dim=1).cpu()) - print("3",torch.argmax(outputs, dim=1).cpu().numpy()) - fin_outputs.extend(torch.argmax( - outputs, dim=1).cpu().detach().numpy().tolist()) - - return fin_outputs, extracted_features - diff --git a/spaces/FL33TW00D/whisper-turbo/_next/static/chunks/main-8a5bc47a3ab4c4a6.js b/spaces/FL33TW00D/whisper-turbo/_next/static/chunks/main-8a5bc47a3ab4c4a6.js deleted file mode 100644 index 8ab96c4d0cc7d6e5b8e3955e2ef77628797009b9..0000000000000000000000000000000000000000 --- a/spaces/FL33TW00D/whisper-turbo/_next/static/chunks/main-8a5bc47a3ab4c4a6.js +++ /dev/null @@ -1 +0,0 @@ -(self.webpackChunk_N_E=self.webpackChunk_N_E||[]).push([[179],{9219:function(e,t){"use strict";function r(e,t,r,n,a,o,i){try{var l=e[o](i),s=l.value}catch(u){r(u);return}l.done?t(s):Promise.resolve(s).then(n,a)}function n(e){return function(){var t=this,n=arguments;return new Promise(function(a,o){var i=e.apply(t,n);function l(e){r(i,a,o,l,s,"next",e)}function s(e){r(i,a,o,l,s,"throw",e)}l(void 0)})}}Object.defineProperty(t,"Z",{enumerable:!0,get:function(){return n}})},5321:function(e,t){"use strict";function r(){return(r=Object.assign||function(e){for(var t=1;t=0||(a[r]=e[r]);return a}Object.defineProperty(t,"Z",{enumerable:!0,get:function(){return r}})},4922:function(){"trimStart"in String.prototype||(String.prototype.trimStart=String.prototype.trimLeft),"trimEnd"in String.prototype||(String.prototype.trimEnd=String.prototype.trimRight),"description"in Symbol.prototype||Object.defineProperty(Symbol.prototype,"description",{configurable:!0,get:function(){var e=/\((.*)\)/.exec(this.toString());return e?e[1]:void 0}}),Array.prototype.flat||(Array.prototype.flat=function(e,t){return t=this.concat.apply([],this),e>1&&t.some(Array.isArray)?t.flat(e-1):t},Array.prototype.flatMap=function(e,t){return this.map(e,t).flat()}),Promise.prototype.finally||(Promise.prototype.finally=function(e){if("function"!=typeof e)return this.then(e,e);var t=this.constructor||Promise;return 
this.then(function(r){return t.resolve(e()).then(function(){return r})},function(r){return t.resolve(e()).then(function(){throw r})})}),Object.fromEntries||(Object.fromEntries=function(e){return Array.from(e).reduce(function(e,t){return e[t[0]]=t[1],e},{})})},194:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),t.addBasePath=function(e,t){return a.normalizePathTrailingSlash(n.addPathPrefix(e,""))};var n=r(3618),a=r(1422);("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},6391:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),t.addLocale=void 0,r(1422),t.addLocale=function(e){for(var t=arguments.length,r=Array(t>1?t-1:0),n=1;n{let t={};e.forEach(e=>{if("link"===e.type&&e.props["data-optimized-fonts"]){if(document.querySelector('style[data-href="'.concat(e.props["data-href"],'"]')))return;e.props.href=e.props["data-href"],e.props["data-href"]=void 0}let r=t[e.type]||[];r.push(e),t[e.type]=r});let r=t.title?t.title[0]:null,o="";if(r){let{children:i}=r.props;o="string"==typeof i?i:Array.isArray(i)?i.join(""):""}o!==document.title&&(document.title=o),["meta","base","link","style","script"].forEach(e=>{(function(e,t){let r=document.getElementsByTagName("head")[0],o=r.querySelector("meta[name=next-head-count]"),i=Number(o.content),l=[];for(let s=0,u=o.previousElementSibling;s{for(let t=0,r=l.length;t{var t;return null==(t=e.parentNode)?void 0:t.removeChild(e)}),d.forEach(e=>r.insertBefore(e,o)),o.content=(i-l.length+d.length).toString()})(e,t[e]||[])})}}},t.isEqualNode=a,t.DOMAttributeNames=void 0;let r={acceptCharset:"accept-charset",className:"class",htmlFor:"for",httpEquiv:"http-equiv",noModule:"noModule"};function n(e){let{type:t,props:n}=e,a=document.createElement(t);for(let o in n){if(!n.hasOwnProperty(o)||"children"===o||"dangerouslySetInnerHTML"===o||void 0===n[o])continue;let i=r[o]||o.toLowerCase();"script"===t&&("async"===i||"defer"===i||"noModule"===i)?a[i]=!!n[o]:a.setAttribute(i,n[o])}let{children:l,dangerouslySetInnerHTML:s}=n;return s?a.innerHTML=s.__html||"":l&&(a.textContent="string"==typeof l?l:Array.isArray(l)?l.join(""):""),a}function a(e,t){if(e instanceof HTMLElement&&t instanceof HTMLElement){let r=t.getAttribute("nonce");if(r&&!e.getAttribute("nonce")){let n=t.cloneNode(!0);return n.setAttribute("nonce",""),n.nonce=r,r===e.nonce&&e.isEqualNode(n)}}return e.isEqualNode(t)}t.DOMAttributeNames=r,("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},9454:function(e,t,r){"use strict";let n,a,o,i,l,s,u,c,d,f,p,h;Object.defineProperty(t,"__esModule",{value:!0});let m=r(6687).Z;Object.defineProperty(t,"__esModule",{value:!0}),t.initialize=function(){return G.apply(this,arguments)},t.hydrate=function(e){return el.apply(this,arguments)},t.emitter=t.router=t.version=void 0;var g=r(9219).Z,y=r(5321).Z,v=r(1322).Z;r(6687).Z,r(4922);var _=v(r(959)),P=v(r(4478)),b=r(5083),S=v(r(3158)),w=r(1558),E=r(4635),x=r(753),C=r(1569),j=r(2921),O=r(2643),R=v(r(6661)),M=v(r(988)),A=v(r(9409)),L=r(2893),T=r(1071),I=r(7743),N=r(9406),k=r(9445),D=r(5745),B=r(1453),H=r(4210),U=r(5770),F=v(r(3050));t.version="13.1.0",t.router=n;let q=S.default();t.emitter=q;let W=e=>[].slice.call(e),Z=!1;self.__next_require__=r;class z extends 
_.default.Component{componentDidCatch(e,t){this.props.fn(e,t)}componentDidMount(){this.scrollToHash(),n.isSsr&&(a.isFallback||a.nextExport&&(E.isDynamicRoute(n.pathname)||location.search||Z)||a.props&&a.props.__N_SSG&&(location.search||Z))&&n.replace(n.pathname+"?"+String(x.assign(x.urlQueryToSearchParams(n.query),new URLSearchParams(location.search))),o,{_h:1,shallow:!a.isFallback&&!Z}).catch(e=>{if(!e.cancelled)throw e})}componentDidUpdate(){this.scrollToHash()}scrollToHash(){let{hash:e}=location;if(!(e=e&&e.substring(1)))return;let t=document.getElementById(e);t&&setTimeout(()=>t.scrollIntoView(),0)}render(){return this.props.children}}function G(){return(G=g(function*(){arguments.length>0&&void 0!==arguments[0]&&arguments[0],a=JSON.parse(document.getElementById("__NEXT_DATA__").textContent),window.__NEXT_DATA__=a,h=a.defaultLocale;let e=a.assetPrefix||"";if(r.p="".concat(e,"/_next/"),C.setConfig({serverRuntimeConfig:{},publicRuntimeConfig:a.runtimeConfig||{}}),o=j.getURL(),D.hasBasePath(o)&&(o=k.removeBasePath(o)),a.scriptLoader){let{initScriptLoader:t}=r(2340);t(a.scriptLoader)}i=new M.default(a.buildId,e);let u=e=>{let[t,r]=e;return i.routeLoader.onEntrypoint(t,r)};return window.__NEXT_P&&window.__NEXT_P.map(e=>setTimeout(()=>u(e),0)),window.__NEXT_P=[],window.__NEXT_P.push=u,(s=R.default()).getIsSsr=()=>n.isSsr,l=document.getElementById("__next"),{assetPrefix:e}})).apply(this,arguments)}function V(e,t){return _.default.createElement(e,Object.assign({},t))}function X(e){var t;let{children:r}=e;return _.default.createElement(z,{fn:e=>$({App:d,err:e}).catch(e=>console.error("Error rendering page: ",e))},_.default.createElement(B.AppRouterContext.Provider,{value:H.adaptForAppRouterInstance(n)},_.default.createElement(U.SearchParamsContext.Provider,{value:H.adaptForSearchParams(n)},_.default.createElement(H.PathnameContextProviderAdapter,{router:n,isAutoExport:null!=(t=self.__NEXT_DATA__.autoExport)&&t},_.default.createElement(w.RouterContext.Provider,{value:T.makePublicRouterInstance(n)},_.default.createElement(b.HeadManagerContext.Provider,{value:s},_.default.createElement(N.ImageConfigContext.Provider,{value:{deviceSizes:[640,750,828,1080,1200,1920,2048,3840],imageSizes:[16,32,48,64,96,128,256,384],path:"/_next/image",loader:"default",dangerouslyAllowSVG:!1,unoptimized:!1}},r)))))))}let Y=e=>t=>{let r=y({},t,{Component:p,err:a.err,router:n});return _.default.createElement(X,null,V(e,r))};function $(e){let{App:t,err:l}=e;return console.error(l),console.error("A client-side exception has occurred, see here for more info: https://nextjs.org/docs/messages/client-side-exception-occurred"),i.loadPage("/_error").then(n=>{let{page:a,styleSheets:o}=n;return(null==u?void 0:u.Component)===a?Promise.resolve().then(()=>m(r(9549))).then(n=>Promise.resolve().then(()=>m(r(2758))).then(r=>(t=r.default,e.App=t,n))).then(e=>({ErrorComponent:e.default,styleSheets:[]})):{ErrorComponent:a,styleSheets:o}}).then(r=>{var i;let{ErrorComponent:s,styleSheets:u}=r,c=Y(t),d={Component:s,AppTree:c,router:n,ctx:{err:l,pathname:a.page,query:a.query,asPath:o,AppTree:c}};return Promise.resolve((null==(i=e.props)?void 0:i.err)?e.props:j.loadGetInitialProps(t,d)).then(t=>ea(y({},e,{err:l,Component:s,styleSheets:u,props:t})))})}function K(e){let{callback:t}=e;return _.default.useLayoutEffect(()=>t(),[t]),null}let J=null,Q=!0;function ee(){["beforeRender","afterHydrate","afterRender","routeChange"].forEach(e=>performance.clearMarks(e))}function 
et(){j.ST&&(performance.mark("afterHydrate"),performance.measure("Next.js-before-hydration","navigationStart","beforeRender"),performance.measure("Next.js-hydration","beforeRender","afterHydrate"),f&&performance.getEntriesByName("Next.js-hydration").forEach(f),ee())}function er(){if(!j.ST)return;performance.mark("afterRender");let e=performance.getEntriesByName("routeChange","mark");e.length&&(performance.measure("Next.js-route-change-to-render",e[0].name,"beforeRender"),performance.measure("Next.js-render","beforeRender","afterRender"),f&&(performance.getEntriesByName("Next.js-render").forEach(f),performance.getEntriesByName("Next.js-route-change-to-render").forEach(f)),ee(),["Next.js-route-change-to-render","Next.js-render"].forEach(e=>performance.clearMeasures(e)))}function en(e){let{callbacks:t,children:r}=e;return _.default.useLayoutEffect(()=>t.forEach(e=>e()),[t]),_.default.useEffect(()=>{A.default(f)},[]),r}function ea(e){let t,{App:r,Component:a,props:o,err:i}=e,s="initial"in e?void 0:e.styleSheets;a=a||u.Component,o=o||u.props;let d=y({},o,{Component:a,err:i,router:n});u=d;let f=!1,p=new Promise((e,r)=>{c&&c(),t=()=>{c=null,e()},c=()=>{f=!0,c=null;let e=Error("Cancel rendering route");e.cancelled=!0,r(e)}});function h(){t()}!function(){if(!s)return;let e=W(document.querySelectorAll("style[data-n-href]")),t=new Set(e.map(e=>e.getAttribute("data-n-href"))),r=document.querySelector("noscript[data-n-css]"),n=null==r?void 0:r.getAttribute("data-n-css");s.forEach(e=>{let{href:r,text:a}=e;if(!t.has(r)){let o=document.createElement("style");o.setAttribute("data-n-href",r),o.setAttribute("media","x"),n&&o.setAttribute("nonce",n),document.head.appendChild(o),o.appendChild(document.createTextNode(a))}})}();let m=_.default.createElement(_.default.Fragment,null,_.default.createElement(K,{callback:function(){if(s&&!f){let t=new Set(s.map(e=>e.href)),r=W(document.querySelectorAll("style[data-n-href]")),n=r.map(e=>e.getAttribute("data-n-href"));for(let a=0;a{let{href:t}=e,r=document.querySelector('style[data-n-href="'.concat(t,'"]'));r&&(o.parentNode.insertBefore(r,o.nextSibling),o=r)}),W(document.querySelectorAll("link[data-n-p]")).forEach(e=>{e.parentNode.removeChild(e)})}if(e.scroll){let i=document.documentElement,l=i.style.scrollBehavior;i.style.scrollBehavior="auto",i.getClientRects(),window.scrollTo(e.scroll.x,e.scroll.y),i.style.scrollBehavior=l}}}),_.default.createElement(X,null,V(r,d),_.default.createElement(O.Portal,{type:"next-route-announcer"},_.default.createElement(L.RouteAnnouncer,null))));return!function(e,t){j.ST&&performance.mark("beforeRender");let r=t(Q?et:er);if(J){let n=_.default.startTransition;n(()=>{J.render(r)})}else J=P.default.hydrateRoot(e,r,{onRecoverableError:F.default}),Q=!1}(l,e=>_.default.createElement(en,{callbacks:[e,h]},_.default.createElement(_.default.StrictMode,null,m))),p}function eo(e){return ei.apply(this,arguments)}function ei(){return(ei=g(function*(e){if(e.err){yield $(e);return}try{yield ea(e)}catch(r){let t=I.getProperError(r);if(t.cancelled)throw t;yield $(y({},e,{err:t}))}})).apply(this,arguments)}function el(){return(el=g(function*(e){let r=a.err;try{let l=yield i.routeLoader.whenEntrypoint("/_app");if("error"in l)throw l.error;let{component:s,exports:u}=l;d=s,u&&u.reportWebVitals&&(f=e=>{let t,{id:r,name:n,startTime:a,value:o,duration:i,entryType:l,entries:s,attribution:c}=e,d="".concat(Date.now(),"-").concat(Math.floor(Math.random()*(9e12-1))+1e12);s&&s.length&&(t=s[0].startTime);let 
f={id:r||d,name:n,startTime:a||t,value:null==o?i:o,label:"mark"===l||"measure"===l?"custom":"web-vital"};c&&(f.attribution=c),u.reportWebVitals(f)});let c=yield i.routeLoader.whenEntrypoint(a.page);if("error"in c)throw c.error;p=c.component}catch(m){r=I.getProperError(m)}window.__NEXT_PRELOADREADY&&(yield window.__NEXT_PRELOADREADY(a.dynamicIds)),t.router=n=T.createRouter(a.page,a.query,o,{initialProps:a.props,pageLoader:i,App:d,Component:p,wrapApp:Y,err:r,isFallback:Boolean(a.isFallback),subscription:(e,t,r)=>eo(Object.assign({},e,{App:t,scroll:r})),locale:a.locale,locales:a.locales,defaultLocale:h,domainLocales:a.domainLocales,isPreview:a.isPreview}),Z=yield n._initialMatchesMiddlewarePromise;let g={App:d,initial:!0,Component:p,props:a.props,err:r};(null==e?void 0:e.beforeRender)&&(yield e.beforeRender()),eo(g)})).apply(this,arguments)}("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},2339:function(e,t,r){"use strict";var n=r(9454);window.next={version:n.version,get router(){return n.router},emitter:n.emitter},n.initialize({}).then(()=>n.hydrate()).catch(console.error),("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},1422:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),t.normalizePathTrailingSlash=void 0;var n=r(807),a=r(9380);let o=e=>{if(!e.startsWith("/"))return e;let{pathname:t,query:r,hash:o}=a.parsePath(e);return"".concat(n.removeTrailingSlash(t)).concat(r).concat(o)};t.normalizePathTrailingSlash=o,("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},3050:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),t.default=function(e,t){let r=e.digest||t.digest,a="function"==typeof reportError?reportError:e=>{window.console.error(e)};r!==n.NEXT_DYNAMIC_NO_SSR_CODE&&a(e)};var n=r(6585);("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},988:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),t.default=void 0;var n=r(1322).Z,a=r(194),o=r(3652),i=n(r(431)),l=r(6391),s=r(4635),u=r(4265),c=r(807),d=r(3986);t.default=class{getPageList(){return d.getClientBuildManifest().then(e=>e.sortedPages)}getMiddleware(){return window.__MIDDLEWARE_MATCHERS=[],window.__MIDDLEWARE_MATCHERS}getDataHref(e){let{asPath:t,href:r,locale:n}=e,{pathname:d,query:f,search:p}=u.parseRelativeUrl(r),{pathname:h}=u.parseRelativeUrl(t),m=c.removeTrailingSlash(d);if("/"!==m[0])throw Error('Route name should start with a "/", got "'.concat(m,'"'));return(e=>{let t=i.default(c.removeTrailingSlash(l.addLocale(e,n)),".json");return a.addBasePath("/_next/data/".concat(this.buildId).concat(t).concat(p),!0)})(e.skipInterpolation?h:s.isDynamicRoute(m)?o.interpolateAs(d,h,f).result:m)}_isSsg(e){return this.promisedSsgManifest.then(t=>t.has(e))}loadPage(e){return this.routeLoader.loadRoute(e).then(e=>{if("component"in 
e)return{page:e.component,mod:e.exports,styleSheets:e.styles.map(e=>({href:e.href,text:e.content}))};throw e.error})}prefetch(e){return this.routeLoader.prefetch(e)}constructor(e,t){this.routeLoader=d.createRouteLoader(t),this.buildId=e,this.assetPrefix=t,this.promisedSsgManifest=new Promise(e=>{window.__SSG_MANIFEST?e(window.__SSG_MANIFEST):window.__SSG_MANIFEST_CB=()=>{e(window.__SSG_MANIFEST)}})}},("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},9409:function(e,t,r){"use strict";let n;Object.defineProperty(t,"__esModule",{value:!0}),t.default=void 0;let a=["CLS","FCP","FID","INP","LCP","TTFB"];location.href;let o=!1;function i(e){n&&n(e)}var l=e=>{if(n=e,!o)for(let t of(o=!0,a))try{let l;l||(l=r(3982)),l["on".concat(t)](i)}catch(s){console.warn("Failed to track ".concat(t," web-vital"),s)}};t.default=l,("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},2643:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),t.Portal=void 0;var n=r(959),a=r(422);let o=e=>{let{children:t,type:r}=e,[o,i]=n.useState(null);return n.useEffect(()=>{let e=document.createElement(r);return document.body.appendChild(e),i(e),()=>{document.body.removeChild(e)}},[r]),o?a.createPortal(t,o):null};t.Portal=o,("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},9445:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),t.removeBasePath=function(e){return(e=e.slice(0)).startsWith("/")||(e="/".concat(e)),e},r(5745),("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},2908:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),t.removeLocale=function(e,t){return e},r(9380),("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},9122:function(e,t){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),t.cancelIdleCallback=t.requestIdleCallback=void 0;let r="undefined"!=typeof self&&self.requestIdleCallback&&self.requestIdleCallback.bind(window)||function(e){let t=Date.now();return self.setTimeout(function(){e({didTimeout:!1,timeRemaining:function(){return Math.max(0,50-(Date.now()-t))}})},1)};t.requestIdleCallback=r;let n="undefined"!=typeof self&&self.cancelIdleCallback&&self.cancelIdleCallback.bind(window)||function(e){return clearTimeout(e)};t.cancelIdleCallback=n,("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},2893:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),t.default=t.RouteAnnouncer=void 0;var n=(0,r(1322).Z)(r(959)),a=r(1071);let o={border:0,clip:"rect(0 0 0 
0)",height:"1px",margin:"-1px",overflow:"hidden",padding:0,position:"absolute",width:"1px",whiteSpace:"nowrap",wordWrap:"normal"},i=()=>{let{asPath:e}=a.useRouter(),[t,r]=n.default.useState(""),i=n.default.useRef(e);return n.default.useEffect(()=>{if(i.current!==e){if(i.current=e,document.title)r(document.title);else{var t;let n=document.querySelector("h1"),a=null!=(t=null==n?void 0:n.innerText)?t:null==n?void 0:n.textContent;r(a||e)}}},[e]),n.default.createElement("p",{"aria-live":"assertive",id:"__next-route-announcer__",role:"alert",style:o},t)};t.RouteAnnouncer=i,t.default=i,("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},3986:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),t.markAssetError=l,t.isAssetError=function(e){return e&&i in e},t.getClientBuildManifest=c,t.createRouteLoader=function(e){let t=new Map,r=new Map,n=new Map,i=new Map;function c(e){{var t;let n=r.get(e.toString());return n||(document.querySelector('script[src^="'.concat(e,'"]'))?Promise.resolve():(r.set(e.toString(),n=new Promise((r,n)=>{(t=document.createElement("script")).onload=r,t.onerror=()=>n(l(Error("Failed to load script: ".concat(e)))),t.crossOrigin=void 0,t.src=e,document.body.appendChild(t)})),n))}}function f(e){let t=n.get(e);return t||n.set(e,t=fetch(e).then(t=>{if(!t.ok)throw Error("Failed to load stylesheet: ".concat(e));return t.text().then(t=>({href:e,content:t}))}).catch(e=>{throw l(e)})),t}return{whenEntrypoint:e=>o(e,t),onEntrypoint(e,r){(r?Promise.resolve().then(()=>r()).then(e=>({component:e&&e.default||e,exports:e}),e=>({error:e})):Promise.resolve(void 0)).then(r=>{let n=t.get(e);n&&"resolve"in n?r&&(t.set(e,r),n.resolve(r)):(r?t.set(e,r):t.delete(e),i.delete(e))})},loadRoute(r,n){return o(r,i,()=>{let a;return u(d(e,r).then(e=>{let{scripts:n,css:a}=e;return Promise.all([t.has(r)?[]:Promise.all(n.map(c)),Promise.all(a.map(f))])}).then(e=>this.whenEntrypoint(r).then(t=>({entrypoint:t,styles:e[1]}))),3800,l(Error("Route did not complete loading: ".concat(r)))).then(e=>{let{entrypoint:t,styles:r}=e,n=Object.assign({styles:r},t);return"error"in t?t:n}).catch(e=>{if(n)throw e;return{error:e}}).finally(()=>null==a?void 0:a())})},prefetch(t){let r;return(r=navigator.connection)&&(r.saveData||/2g/.test(r.effectiveType))?Promise.resolve():d(e,t).then(e=>Promise.all(s?e.scripts.map(e=>{var t,r,n;return t=e.toString(),r="script",new Promise((e,a)=>{let o='\n link[rel="prefetch"][href^="'.concat(t,'"],\n link[rel="preload"][href^="').concat(t,'"],\n script[src^="').concat(t,'"]');if(document.querySelector(o))return e();n=document.createElement("link"),r&&(n.as=r),n.rel="prefetch",n.crossOrigin=void 0,n.onload=e,n.onerror=()=>a(l(Error("Failed to prefetch: ".concat(t)))),n.href=t,document.head.appendChild(n)})}):[])).then(()=>{a.requestIdleCallback(()=>this.loadRoute(t,!0).catch(()=>{}))}).catch(()=>{})}}},(0,r(1322).Z)(r(431));var n=r(474),a=r(9122);function o(e,t,r){let n,a=t.get(e);if(a)return"future"in a?a.future:Promise.resolve(a);let o=new Promise(e=>{n=e});return t.set(e,a={resolve:n,future:o}),r?r().then(e=>(n(e),e)).catch(r=>{throw t.delete(e),r}):o}let i=Symbol("ASSET_LOAD_ERROR");function l(e){return Object.defineProperty(e,i,{})}let s=function(e){try{return 
e=document.createElement("link"),!!window.MSInputMethodContext&&!!document.documentMode||e.relList.supports("prefetch")}catch(t){return!1}}();function u(e,t,r){return new Promise((n,o)=>{let i=!1;e.then(e=>{i=!0,n(e)}).catch(o),a.requestIdleCallback(()=>setTimeout(()=>{i||o(r)},t))})}function c(){if(self.__BUILD_MANIFEST)return Promise.resolve(self.__BUILD_MANIFEST);let e=new Promise(e=>{let t=self.__BUILD_MANIFEST_CB;self.__BUILD_MANIFEST_CB=()=>{e(self.__BUILD_MANIFEST),t&&t()}});return u(e,3800,l(Error("Failed to load client build manifest")))}function d(e,t){return c().then(r=>{if(!(t in r))throw l(Error("Failed to lookup route: ".concat(t)));let a=r[t].map(t=>e+"/_next/"+encodeURI(t));return{scripts:a.filter(e=>e.endsWith(".js")).map(e=>n.__unsafeCreateTrustedScriptURL(e)),css:a.filter(e=>e.endsWith(".css"))}})}("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},1071:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"Router",{enumerable:!0,get:function(){return o.default}}),Object.defineProperty(t,"withRouter",{enumerable:!0,get:function(){return s.default}}),t.useRouter=function(){let e=a.default.useContext(i.RouterContext);if(!e)throw Error("Error: NextRouter was not mounted. https://nextjs.org/docs/messages/next-router-not-mounted");return e},t.createRouter=function(){for(var e=arguments.length,t=Array(e),r=0;re()),u.readyCallbacks=[],u.router},t.makePublicRouterInstance=function(e){let t={};for(let r of c){if("object"==typeof e[r]){t[r]=Object.assign(Array.isArray(e[r])?[]:{},e[r]);continue}t[r]=e[r]}return t.events=o.default.events,d.forEach(r=>{t[r]=function(){for(var t=arguments.length,n=Array(t),a=0;ao.default.events}),c.forEach(e=>{Object.defineProperty(u,e,{get(){let t=f();return t[e]}})}),d.forEach(e=>{u[e]=function(){for(var t=arguments.length,r=Array(t),n=0;n{u.ready(()=>{o.default.events.on(e,function(){for(var t=arguments.length,r=Array(t),n=0;n{let t=e.id||e.getAttribute("src");p.add(t)})}()},t.default=void 0;var n=r(5321).Z,a=r(1322).Z,o=r(6687).Z,i=r(6239).Z,l=a(r(422)),s=o(r(959)),u=r(5083),c=r(6661),d=r(9122);let f=new Map,p=new Set,h=["onLoad","onReady","dangerouslySetInnerHTML","children","onError","strategy"],m=e=>{let{src:t,id:r,onLoad:n=()=>{},onReady:a=null,dangerouslySetInnerHTML:o,children:i="",strategy:l="afterInteractive",onError:s}=e,u=r||t;if(u&&p.has(u))return;if(f.has(t)){p.add(u),f.get(t).then(n,s);return}let d=()=>{a&&a(),p.add(u)},m=document.createElement("script"),g=new Promise((e,t)=>{m.addEventListener("load",function(t){e(),n&&n.call(this,t),d()}),m.addEventListener("error",function(e){t(e)})}).catch(function(e){s&&s(e)});for(let[y,v]of(o?(m.innerHTML=o.__html||"",d()):i?(m.textContent="string"==typeof i?i:Array.isArray(i)?i.join(""):"",d()):t&&(m.src=t,f.set(t,g)),Object.entries(e))){if(void 0===v||h.includes(y))continue;let _=c.DOMAttributeNames[y]||y.toLowerCase();m.setAttribute(_,v)}"worker"===l&&m.setAttribute("type","text/partytown"),m.setAttribute("data-nscript",l),document.body.appendChild(m)};function g(e){let{strategy:t="afterInteractive"}=e;"lazyOnload"===t?window.addEventListener("load",()=>{d.requestIdleCallback(()=>m(e))}):m(e)}function 
y(e){let{id:t,src:r="",onLoad:a=()=>{},onReady:o=null,strategy:c="afterInteractive",onError:f}=e,h=i(e,["id","src","onLoad","onReady","strategy","onError"]),{updateScripts:g,scripts:y,getIsSsr:v,appDir:_,nonce:P}=s.useContext(u.HeadManagerContext),b=s.useRef(!1);s.useEffect(()=>{let e=t||r;b.current||(o&&e&&p.has(e)&&o(),b.current=!0)},[o,t,r]);let S=s.useRef(!1);if(s.useEffect(()=>{!S.current&&("afterInteractive"===c?m(e):"lazyOnload"===c&&("complete"===document.readyState?d.requestIdleCallback(()=>m(e)):window.addEventListener("load",()=>{d.requestIdleCallback(()=>m(e))})),S.current=!0)},[e,c]),("beforeInteractive"===c||"worker"===c)&&(g?(y[c]=(y[c]||[]).concat([n({id:t,src:r,onLoad:a,onReady:o,onError:f},h)]),g(y)):v&&v()?p.add(t||r):v&&!v()&&m(e)),_){if("beforeInteractive"===c)return r?(l.default.preload(r,h.integrity?{as:"script",integrity:h.integrity}:{as:"script"}),s.default.createElement("script",{nonce:P,dangerouslySetInnerHTML:{__html:"(self.__next_s=self.__next_s||[]).push(".concat(JSON.stringify([r]),")")}})):(h.dangerouslySetInnerHTML&&(h.children=h.dangerouslySetInnerHTML.__html,delete h.dangerouslySetInnerHTML),s.default.createElement("script",{nonce:P,dangerouslySetInnerHTML:{__html:"(self.__next_s=self.__next_s||[]).push(".concat(JSON.stringify([0,n({},h)]),")")}}));"afterInteractive"===c&&r&&l.default.preload(r,h.integrity?{as:"script",integrity:h.integrity}:{as:"script"})}return null}Object.defineProperty(y,"__nextScript",{value:!0}),t.default=y,("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},474:function(e,t){"use strict";let r;Object.defineProperty(t,"__esModule",{value:!0}),t.__unsafeCreateTrustedScriptURL=function(e){var t;return(null==(t=function(){if(void 0===r){var e;r=(null==(e=window.trustedTypes)?void 0:e.createPolicy("nextjs",{createHTML:e=>e,createScript:e=>e,createScriptURL:e=>e}))||null}return r}())?void 0:t.createScriptURL(e))||e},("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},1678:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),t.default=function(e){function t(t){return n.default.createElement(e,Object.assign({router:a.useRouter()},t))}return t.getInitialProps=e.getInitialProps,t.origGetInitialProps=e.origGetInitialProps,t};var n=(0,r(1322).Z)(r(959)),a=r(1071);("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},2758:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),t.default=void 0;var n,a=r(9219).Z,o=(0,r(1322).Z)(r(959)),i=r(2921);function l(e){return s.apply(this,arguments)}function s(){return(s=a(function*(e){let{Component:t,ctx:r}=e,n=yield i.loadGetInitialProps(t,r);return{pageProps:n}})).apply(this,arguments)}class u extends(n=o.default.Component){render(){let{Component:e,pageProps:t}=this.props;return o.default.createElement(e,Object.assign({},t))}}u.origGetInitialProps=l,u.getInitialProps=l,t.default=u,("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 
0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},9549:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),t.default=void 0;var n,a=r(1322).Z,o=a(r(959)),i=a(r(493));let l={400:"Bad Request",404:"This page could not be found",405:"Method Not Allowed",500:"Internal Server Error"};function s(e){let{res:t,err:r}=e,n=t&&t.statusCode?t.statusCode:r?r.statusCode:404;return{statusCode:n}}let u={error:{fontFamily:'-apple-system, BlinkMacSystemFont, Roboto, "Segoe UI", "Fira Sans", Avenir, "Helvetica Neue", "Lucida Grande", sans-serif',height:"100vh",textAlign:"center",display:"flex",flexDirection:"column",alignItems:"center",justifyContent:"center"},desc:{display:"inline-block",textAlign:"left",lineHeight:"49px",height:"49px",verticalAlign:"middle"},h1:{display:"inline-block",margin:0,marginRight:"20px",padding:"0 23px 0 0",fontSize:"24px",fontWeight:500,verticalAlign:"top",lineHeight:"49px"},h2:{fontSize:"14px",fontWeight:"normal",lineHeight:"49px",margin:0,padding:0}};class c extends(n=o.default.Component){render(){let{statusCode:e,withDarkMode:t=!0}=this.props,r=this.props.title||l[e]||"An unexpected error has occurred";return o.default.createElement("div",{style:u.error},o.default.createElement(i.default,null,o.default.createElement("title",null,e?"".concat(e,": ").concat(r):"Application error: a client-side exception has occurred")),o.default.createElement("div",null,o.default.createElement("style",{dangerouslySetInnerHTML:{__html:"\n body { margin: 0; color: #000; background: #fff; }\n .next-error-h1 {\n border-right: 1px solid rgba(0, 0, 0, .3);\n }\n\n ".concat(t?"@media (prefers-color-scheme: dark) {\n body { color: #fff; background: #000; }\n .next-error-h1 {\n border-right: 1px solid rgba(255, 255, 255, .3);\n }\n }":"")}}),e?o.default.createElement("h1",{className:"next-error-h1",style:u.h1},e):null,o.default.createElement("div",{style:u.desc},o.default.createElement("h2",{style:u.h2},this.props.title||e?r:o.default.createElement(o.default.Fragment,null,"Application error: a client-side exception has occurred (see the browser console for more information)"),"."))))}}c.displayName="ErrorPage",c.getInitialProps=s,c.origGetInitialProps=s,t.default=c,("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},9708:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),t.AmpStateContext=void 0;var n=(0,r(1322).Z)(r(959));let a=n.default.createContext({});t.AmpStateContext=a},8638:function(e,t){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),t.isInAmpMode=function(){let{ampFirst:e=!1,hybrid:t=!1,hasQuery:r=!1}=arguments.length>0&&void 0!==arguments[0]?arguments[0]:{};return e||t&&r}},1453:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),t.TemplateContext=t.GlobalLayoutRouterContext=t.LayoutRouterContext=t.AppRouterContext=t.CacheStates=void 0;var n,a,o=(0,r(1322).Z)(r(959));t.CacheStates=n,(a=n||(t.CacheStates=n={})).LAZY_INITIALIZED="LAZYINITIALIZED",a.DATA_FETCH="DATAFETCH",a.READY="READY";let i=o.default.createContext(null);t.AppRouterContext=i;let l=o.default.createContext(null);t.LayoutRouterContext=l;let s=o.default.createContext(null);t.GlobalLayoutRouterContext=s;let u=o.default.createContext(null);t.TemplateContext=u},7350:function(e,t){"use 
strict";Object.defineProperty(t,"__esModule",{value:!0}),t.escapeStringRegexp=function(e){return r.test(e)?e.replace(n,"\\$&"):e};let r=/[|\\{}()[\]^$+*?.-]/,n=/[|\\{}()[\]^$+*?.-]/g},5083:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),t.HeadManagerContext=void 0;var n=(0,r(1322).Z)(r(959));let a=n.default.createContext({});t.HeadManagerContext=a},493:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),t.defaultHead=c,t.default=void 0;var n=r(5321).Z,a=r(1322).Z,o=(0,r(6687).Z)(r(959)),i=a(r(9299)),l=r(9708),s=r(5083),u=r(8638);function c(){let e=arguments.length>0&&void 0!==arguments[0]&&arguments[0],t=[o.default.createElement("meta",{charSet:"utf-8"})];return e||t.push(o.default.createElement("meta",{name:"viewport",content:"width=device-width"})),t}function d(e,t){return"string"==typeof t||"number"==typeof t?e:t.type===o.default.Fragment?e.concat(o.default.Children.toArray(t.props.children).reduce((e,t)=>"string"==typeof t||"number"==typeof t?e:e.concat(t),[])):e.concat(t)}r(1185);let f=["name","httpEquiv","charSet","itemProp"];function p(e,t){let{inAmpMode:r}=t;return e.reduce(d,[]).reverse().concat(c(r).reverse()).filter(function(){let e=new Set,t=new Set,r=new Set,n={};return a=>{let o=!0,i=!1;if(a.key&&"number"!=typeof a.key&&a.key.indexOf("$")>0){i=!0;let l=a.key.slice(a.key.indexOf("$")+1);e.has(l)?o=!1:e.add(l)}switch(a.type){case"title":case"base":t.has(a.type)?o=!1:t.add(a.type);break;case"meta":for(let s=0,u=f.length;s{let a=e.key||t;if(!r&&"link"===e.type&&e.props.href&&["https://fonts.googleapis.com/css","https://use.typekit.net/"].some(t=>e.props.href.startsWith(t))){let i=n({},e.props||{});return i["data-href"]=i.href,i.href=void 0,i["data-optimized-fonts"]=!0,o.default.cloneElement(e,i)}return o.default.cloneElement(e,{key:a})})}t.default=function(e){let{children:t}=e,r=o.useContext(l.AmpStateContext),n=o.useContext(s.HeadManagerContext);return o.default.createElement(i.default,{reduceComponentsToState:p,headManager:n,inAmpMode:u.isInAmpMode(r)},t)},("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},5770:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),t.LayoutSegmentsContext=t.ParamsContext=t.PathnameContext=t.SearchParamsContext=void 0;var n=r(959);let a=n.createContext(null);t.SearchParamsContext=a;let o=n.createContext(null);t.PathnameContext=o;let i=n.createContext(null);t.ParamsContext=i;let l=n.createContext(null);t.LayoutSegmentsContext=l},6126:function(e,t){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),t.normalizeLocalePath=function(e,t){let r;let n=e.split("/");return(t||[]).some(t=>!!n[1]&&n[1].toLowerCase()===t.toLowerCase()&&(r=t,n.splice(1,1),e=n.join("/")||"/",!0)),{pathname:e,detectedLocale:r}}},9406:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),t.ImageConfigContext=void 0;var n=(0,r(1322).Z)(r(959)),a=r(4840);let o=n.default.createContext(a.imageConfigDefault);t.ImageConfigContext=o},4840:function(e,t){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),t.imageConfigDefault=t.VALID_LOADERS=void 
0,t.VALID_LOADERS=["default","imgix","cloudinary","akamai","custom"],t.imageConfigDefault={deviceSizes:[640,750,828,1080,1200,1920,2048,3840],imageSizes:[16,32,48,64,96,128,256,384],path:"/_next/image",loader:"default",loaderFile:"",domains:[],disableStaticImages:!1,minimumCacheTTL:60,formats:["image/webp"],dangerouslyAllowSVG:!1,contentSecurityPolicy:"script-src 'none'; frame-src 'none'; sandbox;",remotePatterns:[],unoptimized:!1}},9344:function(e,t){"use strict";function r(e){return Object.prototype.toString.call(e)}Object.defineProperty(t,"__esModule",{value:!0}),t.getObjectClassLabel=r,t.isPlainObject=function(e){if("[object Object]"!==r(e))return!1;let t=Object.getPrototypeOf(e);return null===t||t.hasOwnProperty("isPrototypeOf")}},3158:function(e,t){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),t.default=function(){let e=Object.create(null);return{on(t,r){(e[t]||(e[t]=[])).push(r)},off(t,r){e[t]&&e[t].splice(e[t].indexOf(r)>>>0,1)},emit(t){for(var r=arguments.length,n=Array(r>1?r-1:0),a=1;a{e(...n)})}}}},6585:function(e,t){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),t.NEXT_DYNAMIC_NO_SSR_CODE=void 0,t.NEXT_DYNAMIC_NO_SSR_CODE="DYNAMIC_SERVER_USAGE"},1201:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),t.denormalizePagePath=function(e){let t=a.normalizePathSep(e);return t.startsWith("/index/")&&!n.isDynamicRoute(t)?t.slice(6):"/index"!==t?t:"/"};var n=r(7105),a=r(564)},564:function(e,t){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),t.normalizePathSep=function(e){return e.replace(/\\/g,"/")}},1558:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),t.RouterContext=void 0;var n=(0,r(1322).Z)(r(959));let a=n.default.createContext(null);t.RouterContext=a},4210:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),t.adaptForAppRouterInstance=function(e){return{back(){e.back()},forward(){e.forward()},refresh(){e.reload()},push(t){e.push(t)},replace(t){e.replace(t)},prefetch(t){e.prefetch(t)}}},t.adaptForSearchParams=function(e){return e.isReady&&e.query?function(e){let t=new URLSearchParams;for(let[r,n]of Object.entries(e))if(Array.isArray(n))for(let a of n)t.append(r,a);else void 0!==n&&t.append(r,n);return t}(e.query):new URLSearchParams},t.PathnameContextProviderAdapter=function(e){var{children:t,router:r}=e,n=a(e,["children","router"]);let s=o.useRef(n.isAutoExport),u=o.useMemo(()=>{let e;let t=s.current;if(t&&(s.current=!1),l.isDynamicRoute(r.pathname)&&(r.isFallback||t&&!r.isReady))return null;try{e=new URL(r.asPath,"http://f")}catch(n){return"/"}return e.pathname},[r.asPath,r.isFallback,r.isReady,r.pathname]);return o.default.createElement(i.PathnameContext.Provider,{value:u},t)};var n=r(6687).Z,a=r(6239).Z,o=n(r(959)),i=r(5770),l=r(7105)},3652:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),t.matchesMiddleware=I,t.isLocalURL=B,t.interpolateAs=H,t.resolveHref=U,t.createKey=Y,t.default=void 0;var n=r(9219).Z,a=r(5321).Z,o=r(1322).Z,i=r(6687).Z,l=r(1422),s=r(807),u=r(3986),c=r(2340),d=i(r(7743)),f=r(1201),p=r(6126),h=o(r(3158)),m=r(2921),g=r(4635),y=r(4265),v=r(753);o(r(5459));var _=r(2753),P=r(8628),b=r(8346);r(9345);var S=r(9380),w=r(6391),E=r(2908),x=r(9445),C=r(194),j=r(5745),O=r(6595),R=r(4980),M=r(2378),A=r(9602),L=r(1600);function T(){return Object.assign(Error("Route Cancelled"),{cancelled:!0})}function I(e){return N.apply(this,arguments)}function N(){return(N=n(function*(e){let t=yield 
Promise.resolve(e.router.pageLoader.getMiddleware());if(!t)return!1;let{pathname:r}=S.parsePath(e.asPath),n=j.hasBasePath(r)?x.removeBasePath(r):r,a=C.addBasePath(w.addLocale(n,e.locale));return t.some(e=>RegExp(e.regexp).test(a))})).apply(this,arguments)}function k(e){let t=m.getLocationOrigin();return e.startsWith(t)?e.substring(t.length):e}function D(e,t){let r={};return Object.keys(e).forEach(n=>{t.includes(n)||(r[n]=e[n])}),r}function B(e){if(!m.isAbsoluteUrl(e))return!0;try{let t=m.getLocationOrigin(),r=new URL(e,t);return r.origin===t&&j.hasBasePath(r.pathname)}catch(n){return!1}}function H(e,t,r){let n="",a=P.getRouteRegex(e),o=a.groups,i=(t!==e?_.getRouteMatcher(a)(t):"")||r;n=e;let l=Object.keys(o);return l.every(e=>{let t=i[e]||"",{repeat:r,optional:a}=o[e],l="[".concat(r?"...":"").concat(e,"]");return a&&(l="".concat(t?"":"/","[").concat(l,"]")),r&&!Array.isArray(t)&&(t=[t]),(a||e in i)&&(n=n.replace(l,r?t.map(e=>encodeURIComponent(e)).join("/"):encodeURIComponent(t))||"/")})||(n=""),{params:l,result:n}}function U(e,t,r){let n;let a="string"==typeof t?t:b.formatWithValidation(t),o=a.match(/^[a-zA-Z]{1,}:\/\//),i=o?a.slice(o[0].length):a,s=i.split("?");if((s[0]||"").match(/(\/\/|\\)/)){console.error("Invalid href passed to next/router: ".concat(a,", repeated forward-slashes (//) or backslashes \\ are not valid in the href"));let u=m.normalizeRepeatedSlashes(i);a=(o?o[0]:"")+u}if(!B(a))return r?[a]:a;try{n=new URL(a.startsWith("#")?e.asPath:e.pathname,"http://n")}catch(c){n=new URL("/","http://n")}try{let d=new URL(a,n);d.pathname=l.normalizePathTrailingSlash(d.pathname);let f="";if(g.isDynamicRoute(d.pathname)&&d.searchParams&&r){let p=v.searchParamsToUrlQuery(d.searchParams),{result:h,params:y}=H(d.pathname,d.pathname,p);h&&(f=b.formatWithValidation({pathname:h,hash:d.hash,query:D(p,y)}))}let _=d.origin===n.origin?d.href.slice(d.origin.length):d.href;return r?[_,f||_]:_}catch(P){return r?[a]:a}}function F(e,t,r){let[n,a]=U(e,t,!0),o=m.getLocationOrigin(),i=n.startsWith(o),l=a&&a.startsWith(o);n=k(n),a=a?k(a):a;let s=i?n:C.addBasePath(n),u=r?k(U(e,r)):a||n;return{url:s,as:l?u:C.addBasePath(u)}}function q(e,t){let r=s.removeTrailingSlash(f.denormalizePagePath(e));return"/404"===r||"/_error"===r?e:(t.includes(r)||t.some(t=>{if(g.isDynamicRoute(t)&&P.getRouteRegex(t).re.test(r))return e=t,!0}),s.removeTrailingSlash(e))}function W(e){return Z.apply(this,arguments)}function Z(){return(Z=n(function*(e){let t=yield I(e);if(!t||!e.fetchData)return null;try{let r=yield e.fetchData(),n=yield function(e,t,r){let n={basePath:r.router.basePath,i18n:{locales:r.router.locales},trailingSlash:Boolean(!1)},o=t.headers.get("x-nextjs-rewrite"),i=o||t.headers.get("x-nextjs-matched-path"),l=t.headers.get("x-matched-path");if(!l||i||l.includes("__next_data_catchall")||l.includes("/_error")||l.includes("/404")||(i=l),i){if(i.startsWith("/")){let c=y.parseRelativeUrl(i),d=R.getNextPathnameInfo(c.pathname,{nextConfig:n,parseData:!0}),f=s.removeTrailingSlash(d.pathname);return Promise.all([r.router.pageLoader.getPageList(),u.getClientBuildManifest()]).then(t=>{let[n,{__rewrites:a}]=t,i=w.addLocale(d.pathname,d.locale);if(g.isDynamicRoute(i)||!o&&n.includes(p.normalizeLocalePath(x.removeBasePath(i),r.router.locales).pathname)){let l=R.getNextPathnameInfo(y.parseRelativeUrl(e).pathname,{parseData:!0});i=C.addBasePath(l.pathname),c.pathname=i}if(!n.includes(f)){let s=q(f,n);s!==f&&(f=s)}let 
u=n.includes(f)?f:q(p.normalizeLocalePath(x.removeBasePath(c.pathname),r.router.locales).pathname,n);if(g.isDynamicRoute(u)){let h=_.getRouteMatcher(P.getRouteRegex(u))(i);Object.assign(c.query,h||{})}return{type:"rewrite",parsedAs:c,resolvedHref:u}})}let h=S.parsePath(e),m=M.formatNextPathnameInfo(a({},R.getNextPathnameInfo(h.pathname,{nextConfig:n,parseData:!0}),{defaultLocale:r.router.defaultLocale,buildId:""}));return Promise.resolve({type:"redirect-external",destination:"".concat(m).concat(h.query).concat(h.hash)})}let v=t.headers.get("x-nextjs-redirect");if(v){if(v.startsWith("/")){let b=S.parsePath(v),E=M.formatNextPathnameInfo(a({},R.getNextPathnameInfo(b.pathname,{nextConfig:n,parseData:!0}),{defaultLocale:r.router.defaultLocale,buildId:""}));return Promise.resolve({type:"redirect-internal",newAs:"".concat(E).concat(b.query).concat(b.hash),newUrl:"".concat(E).concat(b.query).concat(b.hash)})}return Promise.resolve({type:"redirect-external",destination:v})}return Promise.resolve({type:"next"})}(r.dataHref,r.response,e);return{dataHref:r.dataHref,json:r.json,response:r.response,text:r.text,cacheKey:r.cacheKey,effect:n}}catch(o){return null}})).apply(this,arguments)}let z=Symbol("SSG_DATA_NOT_FOUND");function G(e){let t=document.documentElement,r=t.style.scrollBehavior;t.style.scrollBehavior="auto",t.getClientRects(),e(),t.style.scrollBehavior=r}function V(e){try{return JSON.parse(e)}catch(t){return null}}function X(e){var t;let{dataHref:r,inflightCache:n,isPrefetch:a,hasMiddleware:o,isServerRender:i,parseJSON:l,persistCache:s,isBackground:c,unstable_skipClientCache:d}=e,{href:f}=new URL(r,window.location.href),p=e=>(function e(t,r,n){return fetch(t,{credentials:"same-origin",method:n.method||"GET",headers:Object.assign({},n.headers,{"x-nextjs-data":"1"})}).then(a=>!a.ok&&r>1&&a.status>=500?e(t,r-1,n):a)})(r,i?3:1,{headers:Object.assign({},a?{purpose:"prefetch"}:{},a&&o?{"x-middleware-prefetch":"1"}:{}),method:null!=(t=null==e?void 0:e.method)?t:"GET"}).then(t=>t.ok&&(null==e?void 0:e.method)==="HEAD"?{dataHref:r,response:t,text:"",json:{},cacheKey:f}:t.text().then(e=>{if(!t.ok){if(o&&[301,302,307,308].includes(t.status))return{dataHref:r,response:t,text:e,json:{},cacheKey:f};if(!o&&404===t.status){var n;if(null==(n=V(e))?void 0:n.notFound)return{dataHref:r,json:{notFound:z},response:t,text:e,cacheKey:f}}let a=Error("Failed to load static props");throw i||u.markAssetError(a),a}return{dataHref:r,json:l?V(e):null,response:t,text:e,cacheKey:f}})).then(e=>(s&&"no-cache"!==e.response.headers.get("x-middleware-cache")||delete n[f],e)).catch(e=>{throw d||delete n[f],"Failed to fetch"===e.message&&u.markAssetError(e),e});return d&&s?p({}).then(e=>(n[f]=Promise.resolve(e),e)):void 0!==n[f]?n[f]:n[f]=p(c?{method:"HEAD"}:{})}function Y(){return Math.random().toString(36).slice(2,10)}function $(e){let{url:t,router:r}=e;if(t===C.addBasePath(w.addLocale(r.asPath,r.locale)))throw Error("Invariant: attempted to hard navigate to the same URL ".concat(t," ").concat(location.href));window.location.href=t}let K=e=>{let{route:t,router:r}=e,n=!1,a=r.clc=()=>{n=!0},o=()=>{if(n){let e=Error('Abort fetching component for route: "'.concat(t,'"'));throw e.cancelled=!0,e}a===r.clc&&(r.clc=null)};return o};class J{reload(){window.location.reload()}back(){window.history.back()}forward(){window.history.forward()}push(e,t){let r=arguments.length>2&&void 0!==arguments[2]?arguments[2]:{};return{url:e,as:t}=F(this,e,t),this.change("pushState",e,t,r)}replace(e,t){let r=arguments.length>2&&void 
0!==arguments[2]?arguments[2]:{};return{url:e,as:t}=F(this,e,t),this.change("replaceState",e,t,r)}change(e,t,r,o,i){var l=this;return n(function*(){let n,f;if(!B(t))return $({url:t,router:l}),!1;let p=1===o._h,h=p||o._shouldResolveHref||S.parsePath(t).pathname===S.parsePath(r).pathname,v=a({},l.state),O=!0!==l.isReady;l.isReady=!0;let R=l.isSsr;if(p||(l.isSsr=!1),p&&l.clc)return!1;let M=v.locale;m.ST&&performance.mark("routeChange");let{shallow:L=!1,scroll:N=!0}=o,k={shallow:L};l._inFlightRoute&&l.clc&&(R||J.events.emit("routeChangeError",T(),l._inFlightRoute,k),l.clc(),l.clc=null),r=C.addBasePath(w.addLocale(j.hasBasePath(r)?x.removeBasePath(r):r,o.locale,l.defaultLocale));let U=E.removeLocale(j.hasBasePath(r)?x.removeBasePath(r):r,v.locale);l._inFlightRoute=r;let W=M!==v.locale;if(!p&&l.onlyAHashChange(U)&&!W){v.asPath=U,J.events.emit("hashChangeStart",r,k),l.changeState(e,t,r,a({},o,{scroll:!1})),N&&l.scrollToHash(U);try{yield l.set(v,l.components[v.route],null)}catch(Z){throw d.default(Z)&&Z.cancelled&&J.events.emit("routeChangeError",Z,U,k),Z}return J.events.emit("hashChangeComplete",r,k),!0}let G=y.parseRelativeUrl(t),{pathname:V,query:X}=G;try{[n,{__rewrites:f}]=yield Promise.all([l.pageLoader.getPageList(),u.getClientBuildManifest(),l.pageLoader.getMiddleware()])}catch(Y){return $({url:r,router:l}),!1}l.urlIsNew(U)||W||(e="replaceState");let K=r;V=V?s.removeTrailingSlash(x.removeBasePath(V)):V;let Q=s.removeTrailingSlash(V),ee=r.startsWith("/")&&y.parseRelativeUrl(r).pathname,et=!!(ee&&Q!==ee&&(!g.isDynamicRoute(Q)||!_.getRouteMatcher(P.getRouteRegex(Q))(ee))),er=!o.shallow&&(yield I({asPath:r,locale:v.locale,router:l}));if(p&&er&&(h=!1),h&&"/_error"!==V&&(o._shouldResolveHref=!0,G.pathname=q(V,n),G.pathname===V||(V=G.pathname,G.pathname=C.addBasePath(V),er||(t=b.formatWithValidation(G)))),!B(r))return $({url:r,router:l}),!1;K=E.removeLocale(x.removeBasePath(K),v.locale),Q=s.removeTrailingSlash(V);let en=!1;if(g.isDynamicRoute(Q)){let ea=y.parseRelativeUrl(K),eo=ea.pathname,ei=P.getRouteRegex(Q);en=_.getRouteMatcher(ei)(eo);let el=Q===eo,es=el?H(Q,eo,X):{};if(en&&(!el||es.result))el?r=b.formatWithValidation(Object.assign({},ea,{pathname:es.result,query:D(X,es.params)})):Object.assign(X,en);else{let eu=Object.keys(ei.groups).filter(e=>!X[e]&&!ei.groups[e].optional);if(eu.length>0&&!er)throw Error((el?"The provided `href` (".concat(t,") value is missing query values (").concat(eu.join(", "),") to be interpolated properly. "):"The provided `as` value (".concat(eo,") is incompatible with the `href` value (").concat(Q,"). 
"))+"Read more: https://nextjs.org/docs/messages/".concat(el?"href-interpolation-failed":"incompatible-href-as"))}}p||J.events.emit("routeChangeStart",r,k);try{var ec,ed,ef,ep,eh,em,eg,ey;let ev=yield l.getRouteInfo({route:Q,pathname:V,query:X,as:r,resolvedAs:K,routeProps:k,locale:v.locale,isPreview:v.isPreview,hasMiddleware:er,unstable_skipClientCache:o.unstable_skipClientCache,isQueryUpdating:p&&!l.isFallback,isMiddlewareRewrite:et});if("route"in ev&&er){Q=V=ev.route||Q,k.shallow||(X=Object.assign({},ev.query||{},X));let e_=j.hasBasePath(G.pathname)?x.removeBasePath(G.pathname):G.pathname;if(en&&V!==e_&&Object.keys(en).forEach(e=>{en&&X[e]===en[e]&&delete X[e]}),g.isDynamicRoute(V)){let eP=!k.shallow&&ev.resolvedAs?ev.resolvedAs:C.addBasePath(w.addLocale(new URL(r,location.href).pathname,v.locale),!0),eb=eP;j.hasBasePath(eb)&&(eb=x.removeBasePath(eb));let eS=P.getRouteRegex(V),ew=_.getRouteMatcher(eS)(new URL(eb,location.href).pathname);ew&&Object.assign(X,ew)}}if("type"in ev){if("redirect-internal"===ev.type)return l.change(e,ev.newUrl,ev.newAs,o);return $({url:ev.destination,router:l}),new Promise(()=>{})}let eE=ev.Component;if(eE&&eE.unstable_scriptLoader){let ex=[].concat(eE.unstable_scriptLoader());ex.forEach(e=>{c.handleClientScriptLoad(e.props)})}if((ev.__N_SSG||ev.__N_SSP)&&ev.props){if(ev.props.pageProps&&ev.props.pageProps.__N_REDIRECT){o.locale=!1;let eC=ev.props.pageProps.__N_REDIRECT;if(eC.startsWith("/")&&!1!==ev.props.pageProps.__N_REDIRECT_BASE_PATH){let ej=y.parseRelativeUrl(eC);ej.pathname=q(ej.pathname,n);let{url:eO,as:eR}=F(l,eC,eC);return l.change(e,eO,eR,o)}return $({url:eC,router:l}),new Promise(()=>{})}if(v.isPreview=!!ev.props.__N_PREVIEW,ev.props.notFound===z){let eM;try{yield l.fetchComponent("/404"),eM="/404"}catch(eA){eM="/_error"}if(ev=yield l.getRouteInfo({route:eM,pathname:eM,query:X,as:r,resolvedAs:K,routeProps:{shallow:!1},locale:v.locale,isPreview:v.isPreview}),"type"in ev)throw Error("Unexpected middleware effect on /404")}}p&&"/_error"===l.pathname&&(null==(ec=self.__NEXT_DATA__.props)?void 0:null==(ed=ec.pageProps)?void 0:ed.statusCode)===500&&(null==(ef=ev.props)?void 0:ef.pageProps)&&(ev.props.pageProps.statusCode=500);let eL=o.shallow&&v.route===(null!=(ep=ev.route)?ep:Q),eT=null!=(eh=o.scroll)?eh:!p&&!eL,eI=null!=i?i:eT?{x:0,y:0}:null,eN=a({},v,{route:Q,pathname:V,query:X,asPath:U,isFallback:!1});if(p&&("/404"===l.pathname||"/_error"===l.pathname)){if(ev=yield l.getRouteInfo({route:l.pathname,pathname:l.pathname,query:X,as:r,resolvedAs:K,routeProps:{shallow:!1},locale:v.locale,isPreview:v.isPreview}),"type"in ev)throw Error("Unexpected middleware effect on ".concat(l.pathname));"/_error"===l.pathname&&(null==(em=self.__NEXT_DATA__.props)?void 0:null==(eg=em.pageProps)?void 0:eg.statusCode)===500&&(null==(ey=ev.props)?void 0:ey.pageProps)&&(ev.props.pageProps.statusCode=500);try{yield l.set(eN,ev,eI)}catch(ek){throw d.default(ek)&&ek.cancelled&&J.events.emit("routeChangeError",ek,U,k),ek}return!0}J.events.emit("beforeHistoryChange",r,k),l.changeState(e,t,r,o);let eD=p&&!eI&&!O&&!W&&A.compareRouterStates(eN,l.state);if(!eD){try{yield l.set(eN,ev,eI)}catch(eB){if(eB.cancelled)ev.error=ev.error||eB;else throw eB}if(ev.error)throw p||J.events.emit("routeChangeError",ev.error,U,k),ev.error;p||J.events.emit("routeChangeComplete",r,k),eT&&/#.+$/.test(r)&&l.scrollToHash(r)}return!0}catch(eH){if(d.default(eH)&&eH.cancelled)return!1;throw eH}})()}changeState(e,t,r){let n=arguments.length>3&&void 
0!==arguments[3]?arguments[3]:{};("pushState"!==e||m.getURL()!==r)&&(this._shallow=n.shallow,window.history[e]({url:t,as:r,options:n,__N:!0,key:this._key="pushState"!==e?this._key:Y()},"",r))}handleRouteInfoError(e,t,r,a,o,i){var l=this;return n(function*(){if(console.error(e),e.cancelled)throw e;if(u.isAssetError(e)||i)throw J.events.emit("routeChangeError",e,a,o),$({url:a,router:l}),T();try{let n;let{page:s,styleSheets:c}=yield l.fetchComponent("/_error"),f={props:n,Component:s,styleSheets:c,err:e,error:e};if(!f.props)try{f.props=yield l.getInitialProps(s,{err:e,pathname:t,query:r})}catch(p){console.error("Error in error page `getInitialProps`: ",p),f.props={}}return f}catch(h){return l.handleRouteInfoError(d.default(h)?h:Error(h+""),t,r,a,o,!0)}})()}getRouteInfo(e){let{route:t,pathname:r,query:o,as:i,resolvedAs:l,routeProps:u,locale:c,hasMiddleware:f,isPreview:h,unstable_skipClientCache:m,isQueryUpdating:g,isMiddlewareRewrite:y}=e;var v=this;return n(function*(){let e=t;try{var _,P,S,w;let E=K({route:e,router:v}),C=v.components[e];if(u.shallow&&C&&v.route===e)return C;f&&(C=void 0);let j=!C||"initial"in C?void 0:C,R={dataHref:v.pageLoader.getDataHref({href:b.formatWithValidation({pathname:r,query:o}),skipInterpolation:!0,asPath:l,locale:c}),hasMiddleware:!0,isServerRender:v.isSsr,parseJSON:!0,inflightCache:g?v.sbc:v.sdc,persistCache:!h,isPrefetch:!1,unstable_skipClientCache:m,isBackground:g},M=g&&!y?null:yield W({fetchData:()=>X(R),asPath:l,locale:c,router:v}).catch(e=>{if(g)return null;throw e});if(g&&(M?M.json=self.__NEXT_DATA__.props:M={json:self.__NEXT_DATA__.props}),E(),(null==M?void 0:null==(_=M.effect)?void 0:_.type)==="redirect-internal"||(null==M?void 0:null==(P=M.effect)?void 0:P.type)==="redirect-external")return M.effect;if((null==M?void 0:null==(S=M.effect)?void 0:S.type)==="rewrite"){let A=s.removeTrailingSlash(M.effect.resolvedHref),L=yield v.pageLoader.getPageList();if((!g||L.includes(A))&&(e=A,r=M.effect.resolvedHref,o=a({},o,M.effect.parsedAs.query),l=x.removeBasePath(p.normalizeLocalePath(M.effect.parsedAs.pathname,v.locales).pathname),C=v.components[e],u.shallow&&C&&v.route===e&&!f))return a({},C,{route:e})}if(O.isAPIRoute(e))return $({url:i,router:v}),new Promise(()=>{});let T=j||(yield v.fetchComponent(e).then(e=>({Component:e.page,styleSheets:e.styleSheets,__N_SSG:e.mod.__N_SSG,__N_SSP:e.mod.__N_SSP}))),I=null==M?void 0:null==(w=M.response)?void 0:w.headers.get("x-middleware-skip"),N=T.__N_SSG||T.__N_SSP;I&&(null==M?void 0:M.dataHref)&&delete v.sdc[M.dataHref];let{props:k,cacheKey:D}=yield v._getData(n(function*(){if(N){if((null==M?void 0:M.json)&&!I)return{cacheKey:M.cacheKey,props:M.json};let e=(null==M?void 0:M.dataHref)?M.dataHref:v.pageLoader.getDataHref({href:b.formatWithValidation({pathname:r,query:o}),asPath:l,locale:c}),t=yield X({dataHref:e,isServerRender:v.isSsr,parseJSON:!0,inflightCache:I?{}:v.sdc,persistCache:!h,isPrefetch:!1,unstable_skipClientCache:m});return{cacheKey:t.cacheKey,props:t.json||{}}}return{headers:{},props:yield v.getInitialProps(T.Component,{pathname:r,query:o,asPath:i,locale:c,locales:v.locales,defaultLocale:v.defaultLocale})}}));return T.__N_SSP&&R.dataHref&&D&&delete v.sdc[D],v.isPreview||!T.__N_SSG||g||X(Object.assign({},R,{isBackground:!0,persistCache:!1,inflightCache:v.sbc})).catch(()=>{}),k.pageProps=Object.assign({},k.pageProps),T.props=k,T.route=e,T.query=o,T.resolvedAs=l,v.components[e]=T,T}catch(B){return v.handleRouteInfoError(d.getProperError(B),r,o,i,u)}})()}set(e,t,r){return 
this.state=e,this.sub(t,this.components["/_app"].Component,r)}beforePopState(e){this._bps=e}onlyAHashChange(e){if(!this.asPath)return!1;let[t,r]=this.asPath.split("#"),[n,a]=e.split("#");return!!a&&t===n&&r===a||t===n&&r!==a}scrollToHash(e){let[,t=""]=e.split("#");if(""===t||"top"===t){G(()=>window.scrollTo(0,0));return}let r=decodeURIComponent(t),n=document.getElementById(r);if(n){G(()=>n.scrollIntoView());return}let a=document.getElementsByName(r)[0];a&&G(()=>a.scrollIntoView())}urlIsNew(e){return this.asPath!==e}prefetch(e){let t=arguments.length>1&&void 0!==arguments[1]?arguments[1]:e,r=arguments.length>2&&void 0!==arguments[2]?arguments[2]:{};var o=this;return n(function*(){if(L.isBot(window.navigator.userAgent))return;let n=y.parseRelativeUrl(e),{pathname:i,query:l}=n,u=i,c=yield o.pageLoader.getPageList(),d=t,f=void 0!==r.locale?r.locale||void 0:o.locale,p=yield I({asPath:t,locale:f,router:o});n.pathname=q(n.pathname,c),g.isDynamicRoute(n.pathname)&&(i=n.pathname,n.pathname=i,Object.assign(l,_.getRouteMatcher(P.getRouteRegex(n.pathname))(S.parsePath(t).pathname)||{}),p||(e=b.formatWithValidation(n)));let h=yield W({fetchData:()=>X({dataHref:o.pageLoader.getDataHref({href:b.formatWithValidation({pathname:u,query:l}),skipInterpolation:!0,asPath:d,locale:f}),hasMiddleware:!0,isServerRender:o.isSsr,parseJSON:!0,inflightCache:o.sdc,persistCache:!o.isPreview,isPrefetch:!0}),asPath:t,locale:f,router:o});if((null==h?void 0:h.effect.type)==="rewrite"&&(n.pathname=h.effect.resolvedHref,i=h.effect.resolvedHref,l=a({},l,h.effect.parsedAs.query),d=h.effect.parsedAs.pathname,e=b.formatWithValidation(n)),(null==h?void 0:h.effect.type)==="redirect-external")return;let m=s.removeTrailingSlash(i);yield Promise.all([o.pageLoader._isSsg(m).then(t=>!!t&&X({dataHref:(null==h?void 0:h.json)?null==h?void 0:h.dataHref:o.pageLoader.getDataHref({href:e,asPath:d,locale:f}),isServerRender:!1,parseJSON:!0,inflightCache:o.sdc,persistCache:!o.isPreview,isPrefetch:!0,unstable_skipClientCache:r.unstable_skipClientCache||r.priority&&!0}).then(()=>!1)),o.pageLoader[r.priority?"loadPage":"prefetch"](m)])})()}fetchComponent(e){var t=this;return n(function*(){let r=K({route:e,router:t});try{let n=yield t.pageLoader.loadPage(e);return r(),n}catch(a){throw r(),a}})()}_getData(e){let t=!1,r=()=>{t=!0};return this.clc=r,e().then(e=>{if(r===this.clc&&(this.clc=null),t){let n=Error("Loading initial props cancelled");throw n.cancelled=!0,n}return e})}_getFlightData(e){return X({dataHref:e,isServerRender:!0,parseJSON:!1,inflightCache:this.sdc,persistCache:!1,isPrefetch:!1}).then(e=>{let{text:t}=e;return{data:t}})}getInitialProps(e,t){let{Component:r}=this.components["/_app"],n=this._wrapApp(r);return t.AppTree=n,m.loadGetInitialProps(r,{AppTree:n,Component:e,router:this,ctx:t})}get route(){return this.state.route}get pathname(){return this.state.pathname}get query(){return this.state.query}get asPath(){return this.state.asPath}get locale(){return this.state.locale}get isFallback(){return this.state.isFallback}get isPreview(){return this.state.isPreview}constructor(e,t,r,{initialProps:n,pageLoader:a,App:o,wrapApp:i,Component:l,err:u,subscription:c,isFallback:d,locale:f,locales:p,defaultLocale:h,domainLocales:v,isPreview:_}){this.sdc={},this.sbc={},this.isFirstPopStateEvent=!0,this._key=Y(),this.onPopState=e=>{let t;let{isFirstPopStateEvent:r}=this;this.isFirstPopStateEvent=!1;let 
n=e.state;if(!n){let{pathname:a,query:o}=this;this.changeState("replaceState",b.formatWithValidation({pathname:C.addBasePath(a),query:o}),m.getURL());return}if(n.__NA){window.location.reload();return}if(!n.__N||r&&this.locale===n.options.locale&&n.as===this.asPath)return;let{url:i,as:l,options:s,key:u}=n;this._key=u;let{pathname:c}=y.parseRelativeUrl(i);(!this.isSsr||l!==C.addBasePath(this.asPath)||c!==C.addBasePath(this.pathname))&&(!this._bps||this._bps(n))&&this.change("replaceState",i,l,Object.assign({},s,{shallow:s.shallow&&this._shallow,locale:s.locale||this.defaultLocale,_h:0}),t)};let P=s.removeTrailingSlash(e);this.components={},"/_error"!==e&&(this.components[P]={Component:l,initial:!0,props:n,err:u,__N_SSG:n&&n.__N_SSG,__N_SSP:n&&n.__N_SSP}),this.components["/_app"]={Component:o,styleSheets:[]},this.events=J.events,this.pageLoader=a;let S=g.isDynamicRoute(e)&&self.__NEXT_DATA__.autoExport;if(this.basePath="",this.sub=c,this.clc=null,this._wrapApp=i,this.isSsr=!0,this.isLocaleDomain=!1,this.isReady=!!(self.__NEXT_DATA__.gssp||self.__NEXT_DATA__.gip||self.__NEXT_DATA__.appGip&&!self.__NEXT_DATA__.gsp||!S&&!self.location.search),this.state={route:P,pathname:e,query:t,asPath:S?e:r,isPreview:!!_,locale:void 0,isFallback:d},this._initialMatchesMiddlewarePromise=Promise.resolve(!1),!r.startsWith("//")){let w={locale:f},E=m.getURL();this._initialMatchesMiddlewarePromise=I({router:this,locale:f,asPath:E}).then(n=>(w._shouldResolveHref=r!==e,this.changeState("replaceState",n?E:b.formatWithValidation({pathname:C.addBasePath(e),query:t}),E,w),n))}window.addEventListener("popstate",this.onPopState)}}J.events=h.default(),t.default=J},7181:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),t.addLocale=function(e,t,r,o){return t&&t!==r&&(o||!a.pathHasPrefix(e.toLowerCase(),"/".concat(t.toLowerCase()))&&!a.pathHasPrefix(e.toLowerCase(),"/api"))?n.addPathPrefix(e,"/".concat(t)):e};var n=r(3618),a=r(109)},3618:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),t.addPathPrefix=function(e,t){if(!e.startsWith("/")||!t)return e;let{pathname:r,query:a,hash:o}=n.parsePath(e);return"".concat(t).concat(r).concat(a).concat(o)};var n=r(9380)},7788:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),t.addPathSuffix=function(e,t){if(!e.startsWith("/")||!t)return e;let{pathname:r,query:a,hash:o}=n.parsePath(e);return"".concat(r).concat(t).concat(a).concat(o)};var n=r(9380)},9602:function(e,t){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),t.compareRouterStates=function(e,t){let r=Object.keys(e);if(r.length!==Object.keys(t).length)return!1;for(let n=r.length;n--;){let a=r[n];if("query"===a){let o=Object.keys(e.query);if(o.length!==Object.keys(t.query).length)return!1;for(let i=o.length;i--;){let l=o[i];if(!t.query.hasOwnProperty(l)||e.query[l]!==t.query[l])return!1}}else if(!t.hasOwnProperty(a)||e[a]!==t[a])return!1}return!0}},2378:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),t.formatNextPathnameInfo=function(e){let t=i.addLocale(e.pathname,e.locale,e.buildId?void 0:e.defaultLocale,e.ignorePrefix);return(e.buildId||!e.trailingSlash)&&(t=n.removeTrailingSlash(t)),e.buildId&&(t=o.addPathSuffix(a.addPathPrefix(t,"/_next/data/".concat(e.buildId)),"/"===e.pathname?"index.json":".json")),t=a.addPathPrefix(t,e.basePath),!e.buildId&&e.trailingSlash?t.endsWith("/")?t:o.addPathSuffix(t,"/"):n.removeTrailingSlash(t)};var n=r(807),a=r(3618),o=r(7788),i=r(7181)},8346:function(e,t,r){"use 
strict";Object.defineProperty(t,"__esModule",{value:!0}),t.formatUrl=o,t.formatWithValidation=function(e){return o(e)},t.urlObjectKeys=void 0;var n=(0,r(6687).Z)(r(753));let a=/https?|ftp|gopher|file/;function o(e){let{auth:t,hostname:r}=e,o=e.protocol||"",i=e.pathname||"",l=e.hash||"",s=e.query||"",u=!1;t=t?encodeURIComponent(t).replace(/%3A/i,":")+"@":"",e.host?u=t+e.host:r&&(u=t+(~r.indexOf(":")?"[".concat(r,"]"):r),e.port&&(u+=":"+e.port)),s&&"object"==typeof s&&(s=String(n.urlQueryToSearchParams(s)));let c=e.search||s&&"?".concat(s)||"";return o&&!o.endsWith(":")&&(o+=":"),e.slashes||(!o||a.test(o))&&!1!==u?(u="//"+(u||""),i&&"/"!==i[0]&&(i="/"+i)):u||(u=""),l&&"#"!==l[0]&&(l="#"+l),c&&"?"!==c[0]&&(c="?"+c),i=i.replace(/[?#]/g,encodeURIComponent),c=c.replace("#","%23"),"".concat(o).concat(u).concat(i).concat(c).concat(l)}t.urlObjectKeys=["auth","hash","host","hostname","href","path","pathname","port","protocol","query","search","slashes"]},431:function(e,t){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),t.default=function(e){let t=arguments.length>1&&void 0!==arguments[1]?arguments[1]:"",r="/"===e?"/index":/^\/index(\/|$)/.test(e)?"/index".concat(e):"".concat(e);return r+t}},4980:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),t.getNextPathnameInfo=function(e,t){var r;let{basePath:i,i18n:l,trailingSlash:s}=null!=(r=t.nextConfig)?r:{},u={pathname:e,trailingSlash:"/"!==e?e.endsWith("/"):s};if(i&&o.pathHasPrefix(u.pathname,i)&&(u.pathname=a.removePathPrefix(u.pathname,i),u.basePath=i),!0===t.parseData&&u.pathname.startsWith("/_next/data/")&&u.pathname.endsWith(".json")){let c=u.pathname.replace(/^\/_next\/data\//,"").replace(/\.json$/,"").split("/"),d=c[0];u.pathname="index"!==c[1]?"/".concat(c.slice(1).join("/")):"/",u.buildId=d}if(l){let f=n.normalizeLocalePath(u.pathname,l.locales);u.locale=null==f?void 0:f.detectedLocale,u.pathname=(null==f?void 0:f.pathname)||u.pathname}return u};var n=r(6126),a=r(8689),o=r(109)},7105:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),Object.defineProperty(t,"getSortedRoutes",{enumerable:!0,get:function(){return n.getSortedRoutes}}),Object.defineProperty(t,"isDynamicRoute",{enumerable:!0,get:function(){return a.isDynamicRoute}});var n=r(3702),a=r(4635)},1600:function(e,t){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),t.isBot=function(e){return/Googlebot|Mediapartners-Google|AdsBot-Google|googleweblight|Storebot-Google|Google-PageRenderer|Bingbot|BingPreview|Slurp|DuckDuckBot|baiduspider|yandex|sogou|LinkedInBot|bitlybot|tumblr|vkShare|quora link preview|facebookexternalhit|facebookcatalog|Twitterbot|applebot|redditbot|Slackbot|Discordbot|WhatsApp|SkypeUriPreview|ia_archiver/i.test(e)}},4635:function(e,t){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),t.isDynamicRoute=function(e){return r.test(e)};let r=/\/\[[^/]+?\](?=\/|$)/},9380:function(e,t){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),t.parsePath=function(e){let t=e.indexOf("#"),r=e.indexOf("?"),n=r>-1&&(t<0||r-1?{pathname:e.substring(0,n?r:t),query:n?e.substring(r,t>-1?t:void 0):"",hash:t>-1?e.slice(t):""}:{pathname:e,query:"",hash:""}}},4265:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),t.parseRelativeUrl=function(e,t){let r=new URL(n.getLocationOrigin()),o=t?new URL(t,r):e.startsWith(".")?new URL(window.location.href):r,{pathname:i,searchParams:l,search:s,hash:u,href:c,origin:d}=new URL(e,o);if(d!==r.origin)throw Error("invariant: invalid 
relative URL, router received ".concat(e));return{pathname:i,query:a.searchParamsToUrlQuery(l),search:s,hash:u,href:c.slice(r.origin.length)}};var n=r(2921),a=r(753)},109:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),t.pathHasPrefix=function(e,t){if("string"!=typeof e)return!1;let{pathname:r}=n.parsePath(e);return r===t||r.startsWith(t+"/")};var n=r(9380)},753:function(e,t){"use strict";function r(e){return"string"!=typeof e&&("number"!=typeof e||isNaN(e))&&"boolean"!=typeof e?"":String(e)}Object.defineProperty(t,"__esModule",{value:!0}),t.searchParamsToUrlQuery=function(e){let t={};return e.forEach((e,r)=>{void 0===t[r]?t[r]=e:Array.isArray(t[r])?t[r].push(e):t[r]=[t[r],e]}),t},t.urlQueryToSearchParams=function(e){let t=new URLSearchParams;return Object.entries(e).forEach(e=>{let[n,a]=e;Array.isArray(a)?a.forEach(e=>t.append(n,r(e))):t.set(n,r(a))}),t},t.assign=function(e){for(var t=arguments.length,r=Array(t>1?t-1:0),n=1;n{Array.from(t.keys()).forEach(t=>e.delete(t)),t.forEach((t,r)=>e.append(r,t))}),e}},8689:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),t.removePathPrefix=function(e,t){if(n.pathHasPrefix(e,t)){let r=e.slice(t.length);return r.startsWith("/")?r:"/".concat(r)}return e};var n=r(109)},807:function(e,t){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),t.removeTrailingSlash=function(e){return e.replace(/\/$/,"")||"/"}},2753:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),t.getRouteMatcher=function(e){let{re:t,groups:r}=e;return e=>{let a=t.exec(e);if(!a)return!1;let o=e=>{try{return decodeURIComponent(e)}catch(t){throw new n.DecodeError("failed to decode param")}},i={};return Object.keys(r).forEach(e=>{let t=r[e],n=a[t.pos];void 0!==n&&(i[e]=~n.indexOf("/")?n.split("/").map(e=>o(e)):t.repeat?[o(n)]:o(n))}),i}};var n=r(2921)},8628:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),t.getRouteRegex=s,t.getNamedRouteRegex=function(e){let t=u(e);return n({},s(e),{namedRegex:"^".concat(t.namedParameterizedRoute,"(?:/)?$"),routeKeys:t.routeKeys})},t.getNamedMiddlewareRegex=function(e,t){let{parameterizedRoute:r}=l(e),{catchAll:n=!0}=t;if("/"===r)return{namedRegex:"^/".concat(n?".*":"","$")};let{namedParameterizedRoute:a}=u(e);return{namedRegex:"^".concat(a).concat(n?"(?:(/.*)?)":"","$")}};var n=r(5321).Z,a=r(7350),o=r(807);function i(e){let t=e.startsWith("[")&&e.endsWith("]");t&&(e=e.slice(1,-1));let r=e.startsWith("...");return r&&(e=e.slice(3)),{key:e,repeat:r,optional:t}}function l(e){let t=o.removeTrailingSlash(e).slice(1).split("/"),r={},n=1;return{parameterizedRoute:t.map(e=>{if(!(e.startsWith("[")&&e.endsWith("]")))return"/".concat(a.escapeStringRegexp(e));{let{key:t,optional:o,repeat:l}=i(e.slice(1,-1));return r[t]={pos:n++,repeat:l,optional:o},l?o?"(?:/(.+?))?":"/(.+?)":"/([^/]+?)"}}).join(""),groups:r}}function s(e){let{parameterizedRoute:t,groups:r}=l(e);return{re:RegExp("^".concat(t,"(?:/)?$")),groups:r}}function u(e){let t,r;let n=o.removeTrailingSlash(e).slice(1).split("/"),l=(t=97,r=1,()=>{let e="";for(let n=0;n122&&(r++,t=97);return 
e}),s={};return{namedParameterizedRoute:n.map(e=>{if(!(e.startsWith("[")&&e.endsWith("]")))return"/".concat(a.escapeStringRegexp(e));{let{key:t,optional:r,repeat:n}=i(e.slice(1,-1)),o=t.replace(/\W/g,""),u=!1;return(0===o.length||o.length>30)&&(u=!0),isNaN(parseInt(o.slice(0,1)))||(u=!0),u&&(o=l()),s[o]=t,n?r?"(?:/(?<".concat(o,">.+?))?"):"/(?<".concat(o,">.+?)"):"/(?<".concat(o,">[^/]+?)")}}).join(""),routeKeys:s}}},3702:function(e,t){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),t.getSortedRoutes=function(e){let t=new r;return e.forEach(e=>t.insert(e)),t.smoosh()};class r{insert(e){this._insert(e.split("/").filter(Boolean),[],!1)}smoosh(){return this._smoosh()}_smoosh(){let e=arguments.length>0&&void 0!==arguments[0]?arguments[0]:"/",t=[...this.children.keys()].sort();null!==this.slugName&&t.splice(t.indexOf("[]"),1),null!==this.restSlugName&&t.splice(t.indexOf("[...]"),1),null!==this.optionalRestSlugName&&t.splice(t.indexOf("[[...]]"),1);let r=t.map(t=>this.children.get(t)._smoosh("".concat(e).concat(t,"/"))).reduce((e,t)=>[...e,...t],[]);if(null!==this.slugName&&r.push(...this.children.get("[]")._smoosh("".concat(e,"[").concat(this.slugName,"]/"))),!this.placeholder){let n="/"===e?"/":e.slice(0,-1);if(null!=this.optionalRestSlugName)throw Error('You cannot define a route with the same specificity as a optional catch-all route ("'.concat(n,'" and "').concat(n,"[[...").concat(this.optionalRestSlugName,']]").'));r.unshift(n)}return null!==this.restSlugName&&r.push(...this.children.get("[...]")._smoosh("".concat(e,"[...").concat(this.restSlugName,"]/"))),null!==this.optionalRestSlugName&&r.push(...this.children.get("[[...]]")._smoosh("".concat(e,"[[...").concat(this.optionalRestSlugName,"]]/"))),r}_insert(e,t,n){if(0===e.length){this.placeholder=!1;return}if(n)throw Error("Catch-all must be the last part of the URL.");let a=e[0];if(a.startsWith("[")&&a.endsWith("]")){let o=a.slice(1,-1),i=!1;if(o.startsWith("[")&&o.endsWith("]")&&(o=o.slice(1,-1),i=!0),o.startsWith("...")&&(o=o.substring(3),n=!0),o.startsWith("[")||o.endsWith("]"))throw Error("Segment names may not start or end with extra brackets ('".concat(o,"')."));if(o.startsWith("."))throw Error("Segment names may not start with erroneous periods ('".concat(o,"')."));function l(e,r){if(null!==e&&e!==r)throw Error("You cannot use different slug names for the same dynamic path ('".concat(e,"' !== '").concat(r,"')."));t.forEach(e=>{if(e===r)throw Error('You cannot have the same slug name "'.concat(r,'" repeat within a single dynamic path'));if(e.replace(/\W/g,"")===a.replace(/\W/g,""))throw Error('You cannot have the slug names "'.concat(e,'" and "').concat(r,'" differ only by non-word symbols within a single dynamic path'))}),t.push(r)}if(n){if(i){if(null!=this.restSlugName)throw Error('You cannot use both an required and optional catch-all route at the same level ("[...'.concat(this.restSlugName,']" and "').concat(e[0],'" ).'));l(this.optionalRestSlugName,o),this.optionalRestSlugName=o,a="[[...]]"}else{if(null!=this.optionalRestSlugName)throw Error('You cannot use both an optional and required catch-all route at the same level ("[[...'.concat(this.optionalRestSlugName,']]" and "').concat(e[0],'").'));l(this.restSlugName,o),this.restSlugName=o,a="[...]"}}else{if(i)throw Error('Optional route parameters are not yet supported ("'.concat(e[0],'").'));l(this.slugName,o),this.slugName=o,a="[]"}}this.children.has(a)||this.children.set(a,new 
r),this.children.get(a)._insert(e.slice(1),t,n)}constructor(){this.placeholder=!0,this.children=new Map,this.slugName=null,this.restSlugName=null,this.optionalRestSlugName=null}}},1569:function(e,t){"use strict";let r;Object.defineProperty(t,"__esModule",{value:!0}),t.setConfig=function(e){r=e},t.default=void 0,t.default=()=>r,("function"==typeof t.default||"object"==typeof t.default&&null!==t.default)&&void 0===t.default.__esModule&&(Object.defineProperty(t.default,"__esModule",{value:!0}),Object.assign(t.default,t),e.exports=t.default)},9299:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),t.default=function(e){let{headManager:t,reduceComponentsToState:r}=e;function l(){if(t&&t.mountedInstances){let a=n.Children.toArray(Array.from(t.mountedInstances).filter(Boolean));t.updateHead(r(a,e))}}if(a){var s;null==t||null==(s=t.mountedInstances)||s.add(e.children),l()}return o(()=>{var r;return null==t||null==(r=t.mountedInstances)||r.add(e.children),()=>{var r;null==t||null==(r=t.mountedInstances)||r.delete(e.children)}}),o(()=>(t&&(t._pendingUpdate=l),()=>{t&&(t._pendingUpdate=l)})),i(()=>(t&&t._pendingUpdate&&(t._pendingUpdate(),t._pendingUpdate=null),()=>{t&&t._pendingUpdate&&(t._pendingUpdate(),t._pendingUpdate=null)})),null};var n=(0,r(6687).Z)(r(959));let a=!1,o=a?()=>{}:n.useLayoutEffect,i=a?()=>{}:n.useEffect},2921:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),t.execOnce=function(e){let t,r=!1;return function(){for(var n=arguments.length,a=Array(n),o=0;oa.test(e);function i(){let{protocol:e,hostname:t,port:r}=window.location;return"".concat(e,"//").concat(t).concat(r?":"+r:"")}function l(e){return"string"==typeof e?e:e.displayName||e.name||"Unknown"}function s(e){return e.finished||e.headersSent}function u(e,t){return c.apply(this,arguments)}function c(){return(c=n(function*(e,t){let r=t.res||t.ctx&&t.ctx.res;if(!e.getInitialProps)return t.ctx&&t.Component?{pageProps:yield u(t.Component,t.ctx)}:{};let n=yield e.getInitialProps(t);if(r&&s(r))return n;if(!n){let a='"'.concat(l(e),'.getInitialProps()" should resolve to an object. 
But found "').concat(n,'" instead.');throw Error(a)}return n})).apply(this,arguments)}t.isAbsoluteUrl=o;let d="undefined"!=typeof performance;t.SP=d;let f=d&&["mark","measure","getEntriesByName"].every(e=>"function"==typeof performance[e]);t.ST=f,t.DecodeError=class extends Error{},t.NormalizeError=class extends Error{},t.PageNotFoundError=class extends Error{constructor(e){super(),this.code="ENOENT",this.message="Cannot find module for page: ".concat(e)}},t.MissingStaticPage=class extends Error{constructor(e,t){super(),this.message="Failed to load static file for page: ".concat(e," ").concat(t)}},t.MiddlewareNotFoundError=class extends Error{constructor(){super(),this.code="ENOENT",this.message="Cannot find the middleware module"}}},1185:function(e,t){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),t.warnOnce=void 0;let r=e=>{};t.warnOnce=r},3982:function(e){var t,r,n,a,o,i,l,s,u,c,d,f,p,h,m,g,y,v,_,P,b,S,w,E,x,C,j,O,R,M,A,L,T,I,N,k,D,B,H,U,F,q,W,Z,z,G;(t={}).d=function(e,r){for(var n in r)t.o(r,n)&&!t.o(e,n)&&Object.defineProperty(e,n,{enumerable:!0,get:r[n]})},t.o=function(e,t){return Object.prototype.hasOwnProperty.call(e,t)},t.r=function(e){"undefined"!=typeof Symbol&&Symbol.toStringTag&&Object.defineProperty(e,Symbol.toStringTag,{value:"Module"}),Object.defineProperty(e,"__esModule",{value:!0})},void 0!==t&&(t.ab="//"),r={},t.r(r),t.d(r,{getCLS:function(){return w},getFCP:function(){return P},getFID:function(){return M},getINP:function(){return q},getLCP:function(){return Z},getTTFB:function(){return G},onCLS:function(){return w},onFCP:function(){return P},onFID:function(){return M},onINP:function(){return q},onLCP:function(){return Z},onTTFB:function(){return G}}),s=-1,u=function(e){addEventListener("pageshow",function(t){t.persisted&&(s=t.timeStamp,e(t))},!0)},c=function(){return window.performance&&performance.getEntriesByType&&performance.getEntriesByType("navigation")[0]},d=function(){var e=c();return e&&e.activationStart||0},f=function(e,t){var r=c(),n="navigate";return s>=0?n="back-forward-cache":r&&(n=document.prerendering||d()>0?"prerender":r.type.replace(/_/g,"-")),{name:e,value:void 0===t?-1:t,rating:"good",delta:0,entries:[],id:"v3-".concat(Date.now(),"-").concat(Math.floor(8999999999999*Math.random())+1e12),navigationType:n}},p=function(e,t,r){try{if(PerformanceObserver.supportedEntryTypes.includes(e)){var n=new PerformanceObserver(function(e){t(e.getEntries())});return n.observe(Object.assign({type:e,buffered:!0},r||{})),n}}catch(a){}},h=function(e,t){var r=function r(n){"pagehide"!==n.type&&"hidden"!==document.visibilityState||(e(n),t&&(removeEventListener("visibilitychange",r,!0),removeEventListener("pagehide",r,!0)))};addEventListener("visibilitychange",r,!0),addEventListener("pagehide",r,!0)},m=function(e,t,r,n){var a,o;return function(i){var l;t.value>=0&&(i||n)&&((o=t.value-(a||0))||void 0===a)&&(a=t.value,t.delta=o,t.rating=(l=t.value)>r[1]?"poor":l>r[0]?"needs-improvement":"good",e(t))}},g=-1,y=function(){return"hidden"!==document.visibilityState||document.prerendering?1/0:0},v=function(){h(function(e){g=e.timeStamp},!0)},_=function(){return g<0&&(g=y(),v(),u(function(){setTimeout(function(){g=y(),v()},0)})),{get firstHiddenTime(){return g}}},P=function(e,t){t=t||{};var r,n=[1800,3e3],a=_(),o=f("FCP"),i=function(e){e.forEach(function(e){"first-contentful-paint"===e.name&&(s&&s.disconnect(),e.startTime-1&&e(t)},o=f("CLS",0),i=0,l=[],s=function(e){e.forEach(function(e){if(!e.hadRecentInput){var 
t=l[0],r=l[l.length-1];i&&e.startTime-r.startTime<1e3&&e.startTime-t.startTime<5e3?(i+=e.value,l.push(e)):(i=e.value,l=[e]),i>o.value&&(o.value=i,o.entries=l,n())}})},c=p("layout-shift",s);c&&(n=m(a,o,r,t.reportAllChanges),h(function(){s(c.takeRecords()),n(!0)}),u(function(){i=0,S=-1,n=m(a,o=f("CLS",0),r,t.reportAllChanges)}))},E={passive:!0,capture:!0},x=new Date,C=function(e,t){n||(n=t,a=e,o=new Date,R(removeEventListener),j())},j=function(){if(a>=0&&a1e12?new Date:performance.now())-e.timeStamp;"pointerdown"==e.type?(t=function(){C(a,e),n()},r=function(){n()},n=function(){removeEventListener("pointerup",t,E),removeEventListener("pointercancel",r,E)},addEventListener("pointerup",t,E),addEventListener("pointercancel",r,E)):C(a,e)}},R=function(e){["mousedown","keydown","touchstart","pointerdown"].forEach(function(t){return e(t,O,E)})},M=function(e,t){t=t||{};var r,o=[100,300],l=_(),s=f("FID"),c=function(e){e.startTimet.latency){if(r)r.entries.push(e),r.latency=Math.max(r.latency,e.duration);else{var n={id:e.interactionId,latency:e.duration,entries:[e]};U[n.id]=n,H.push(n)}H.sort(function(e,t){return t.latency-e.latency}),H.splice(10).forEach(function(e){delete U[e.id]})}},q=function(e,t){t=t||{};var r=[200,500];k();var n,a=f("INP"),o=function(e){e.forEach(function(e){e.interactionId&&F(e),"first-input"!==e.entryType||H.some(function(t){return t.entries.some(function(t){return e.duration===t.duration&&e.startTime===t.startTime})})||F(e)});var t,r=(t=Math.min(H.length-1,Math.floor(B()/50)),H[t]);r&&r.latency!==a.value&&(a.value=r.latency,a.entries=r.entries,n())},i=p("event",o,{durationThreshold:t.durationThreshold||40});n=m(e,a,r,t.reportAllChanges),i&&(i.observe({type:"first-input",buffered:!0}),h(function(){o(i.takeRecords()),a.value<0&&B()>0&&(a.value=0,a.entries=[]),n(!0)}),u(function(){H=[],D=N(),n=m(e,a=f("INP"),r,t.reportAllChanges)}))},W={},Z=function(e,t){t=t||{};var r,n=[2500,4e3],a=_(),o=f("LCP"),i=function(e){var t=e[e.length-1];if(t){var n=t.startTime-d();nperformance.now())return;n.entries=[o],a(!0),u(function(){(a=m(e,n=f("TTFB",0),r,t.reportAllChanges))(!0)})}})},e.exports=r},6595:function(e,t){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),t.isAPIRoute=function(e){return"/api"===e||Boolean(null==e?void 0:e.startsWith("/api/"))}},7743:function(e,t,r){"use strict";Object.defineProperty(t,"__esModule",{value:!0}),t.default=a,t.getProperError=function(e){return a(e)?e:Error(n.isPlainObject(e)?JSON.stringify(e):e+"")};var n=r(9344);function a(e){return"object"==typeof e&&null!==e&&"name"in e&&"message"in e}},5459:function(){}},function(e){e.O(0,[774],function(){return e(e.s=2339)}),_N_E=e.O()}]); \ No newline at end of file diff --git a/spaces/Feifei315/Joeythemonster-anything-midjourney-v-4-1/README.md b/spaces/Feifei315/Joeythemonster-anything-midjourney-v-4-1/README.md deleted file mode 100644 index 283ddce1a316c2d6a4faac7a4499eff3f04db5ec..0000000000000000000000000000000000000000 --- a/spaces/Feifei315/Joeythemonster-anything-midjourney-v-4-1/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Joeythemonster Anything Midjourney V 4 1 -emoji: 🏃 -colorFrom: indigo -colorTo: pink -sdk: gradio -sdk_version: 3.27.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Fengbinbin/gpt-academic/request_llm/edge_gpt.py b/spaces/Fengbinbin/gpt-academic/request_llm/edge_gpt.py deleted file mode 100644 index 
bbf84000d84a42de80d3c051a24f06336af76aaf..0000000000000000000000000000000000000000 --- a/spaces/Fengbinbin/gpt-academic/request_llm/edge_gpt.py +++ /dev/null @@ -1,409 +0,0 @@ -""" -======================================================================== -第一部分:来自EdgeGPT.py -https://github.com/acheong08/EdgeGPT -======================================================================== -""" - -import argparse -import asyncio -import json -import os -import random -import re -import ssl -import sys -import uuid -from enum import Enum -from typing import Generator -from typing import Literal -from typing import Optional -from typing import Union -import websockets.client as websockets - -DELIMITER = "\x1e" - - -# Generate random IP between range 13.104.0.0/14 -FORWARDED_IP = ( - f"13.{random.randint(104, 107)}.{random.randint(0, 255)}.{random.randint(0, 255)}" -) - -HEADERS = { - "accept": "application/json", - "accept-language": "en-US,en;q=0.9", - "content-type": "application/json", - "sec-ch-ua": '"Not_A Brand";v="99", "Microsoft Edge";v="110", "Chromium";v="110"', - "sec-ch-ua-arch": '"x86"', - "sec-ch-ua-bitness": '"64"', - "sec-ch-ua-full-version": '"109.0.1518.78"', - "sec-ch-ua-full-version-list": '"Chromium";v="110.0.5481.192", "Not A(Brand";v="24.0.0.0", "Microsoft Edge";v="110.0.1587.69"', - "sec-ch-ua-mobile": "?0", - "sec-ch-ua-model": "", - "sec-ch-ua-platform": '"Windows"', - "sec-ch-ua-platform-version": '"15.0.0"', - "sec-fetch-dest": "empty", - "sec-fetch-mode": "cors", - "sec-fetch-site": "same-origin", - "x-ms-client-request-id": str(uuid.uuid4()), - "x-ms-useragent": "azsdk-js-api-client-factory/1.0.0-beta.1 core-rest-pipeline/1.10.0 OS/Win32", - "Referer": "https://www.bing.com/search?q=Bing+AI&showconv=1&FORM=hpcodx", - "Referrer-Policy": "origin-when-cross-origin", - "x-forwarded-for": FORWARDED_IP, -} - -HEADERS_INIT_CONVER = { - "authority": "edgeservices.bing.com", - "accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.7", - "accept-language": "en-US,en;q=0.9", - "cache-control": "max-age=0", - "sec-ch-ua": '"Chromium";v="110", "Not A(Brand";v="24", "Microsoft Edge";v="110"', - "sec-ch-ua-arch": '"x86"', - "sec-ch-ua-bitness": '"64"', - "sec-ch-ua-full-version": '"110.0.1587.69"', - "sec-ch-ua-full-version-list": '"Chromium";v="110.0.5481.192", "Not A(Brand";v="24.0.0.0", "Microsoft Edge";v="110.0.1587.69"', - "sec-ch-ua-mobile": "?0", - "sec-ch-ua-model": '""', - "sec-ch-ua-platform": '"Windows"', - "sec-ch-ua-platform-version": '"15.0.0"', - "sec-fetch-dest": "document", - "sec-fetch-mode": "navigate", - "sec-fetch-site": "none", - "sec-fetch-user": "?1", - "upgrade-insecure-requests": "1", - "user-agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36 Edg/110.0.1587.69", - "x-edge-shopping-flag": "1", - "x-forwarded-for": FORWARDED_IP, -} - -def get_ssl_context(): - import certifi - ssl_context = ssl.create_default_context() - ssl_context.load_verify_locations(certifi.where()) - return ssl_context - - - -class NotAllowedToAccess(Exception): - pass - - -class ConversationStyle(Enum): - creative = "h3imaginative,clgalileo,gencontentv3" - balanced = "galileo" - precise = "h3precise,clgalileo" - - -CONVERSATION_STYLE_TYPE = Optional[ - Union[ConversationStyle, Literal["creative", "balanced", "precise"]] -] - - -def _append_identifier(msg: dict) -> str: - """ - Appends special character to end of message to identify end of 
message - """ - # Convert dict to json string - return json.dumps(msg) + DELIMITER - - -def _get_ran_hex(length: int = 32) -> str: - """ - Returns random hex string - """ - return "".join(random.choice("0123456789abcdef") for _ in range(length)) - - -class _ChatHubRequest: - """ - Request object for ChatHub - """ - - def __init__( - self, - conversation_signature: str, - client_id: str, - conversation_id: str, - invocation_id: int = 0, - ) -> None: - self.struct: dict = {} - - self.client_id: str = client_id - self.conversation_id: str = conversation_id - self.conversation_signature: str = conversation_signature - self.invocation_id: int = invocation_id - - def update( - self, - prompt, - conversation_style, - options, - ) -> None: - """ - Updates request object - """ - if options is None: - options = [ - "deepleo", - "enable_debug_commands", - "disable_emoji_spoken_text", - "enablemm", - ] - if conversation_style: - if not isinstance(conversation_style, ConversationStyle): - conversation_style = getattr(ConversationStyle, conversation_style) - options = [ - "nlu_direct_response_filter", - "deepleo", - "disable_emoji_spoken_text", - "responsible_ai_policy_235", - "enablemm", - conversation_style.value, - "dtappid", - "cricinfo", - "cricinfov2", - "dv3sugg", - ] - self.struct = { - "arguments": [ - { - "source": "cib", - "optionsSets": options, - "sliceIds": [ - "222dtappid", - "225cricinfo", - "224locals0", - ], - "traceId": _get_ran_hex(32), - "isStartOfSession": self.invocation_id == 0, - "message": { - "author": "user", - "inputMethod": "Keyboard", - "text": prompt, - "messageType": "Chat", - }, - "conversationSignature": self.conversation_signature, - "participant": { - "id": self.client_id, - }, - "conversationId": self.conversation_id, - }, - ], - "invocationId": str(self.invocation_id), - "target": "chat", - "type": 4, - } - self.invocation_id += 1 - - -class _Conversation: - """ - Conversation API - """ - - def __init__( - self, - cookies, - proxy, - ) -> None: - self.struct: dict = { - "conversationId": None, - "clientId": None, - "conversationSignature": None, - "result": {"value": "Success", "message": None}, - } - import httpx - self.proxy = proxy - proxy = ( - proxy - or os.environ.get("all_proxy") - or os.environ.get("ALL_PROXY") - or os.environ.get("https_proxy") - or os.environ.get("HTTPS_PROXY") - or None - ) - if proxy is not None and proxy.startswith("socks5h://"): - proxy = "socks5://" + proxy[len("socks5h://") :] - self.session = httpx.Client( - proxies=proxy, - timeout=30, - headers=HEADERS_INIT_CONVER, - ) - for cookie in cookies: - self.session.cookies.set(cookie["name"], cookie["value"]) - - # Send GET request - response = self.session.get( - url=os.environ.get("BING_PROXY_URL") - or "https://edgeservices.bing.com/edgesvc/turing/conversation/create", - ) - if response.status_code != 200: - response = self.session.get( - "https://edge.churchless.tech/edgesvc/turing/conversation/create", - ) - if response.status_code != 200: - print(f"Status code: {response.status_code}") - print(response.text) - print(response.url) - raise Exception("Authentication failed") - try: - self.struct = response.json() - except (json.decoder.JSONDecodeError, NotAllowedToAccess) as exc: - raise Exception( - "Authentication failed. 
You have not been accepted into the beta.", - ) from exc - if self.struct["result"]["value"] == "UnauthorizedRequest": - raise NotAllowedToAccess(self.struct["result"]["message"]) - - -class _ChatHub: - """ - Chat API - """ - - def __init__(self, conversation) -> None: - self.wss = None - self.request: _ChatHubRequest - self.loop: bool - self.task: asyncio.Task - print(conversation.struct) - self.request = _ChatHubRequest( - conversation_signature=conversation.struct["conversationSignature"], - client_id=conversation.struct["clientId"], - conversation_id=conversation.struct["conversationId"], - ) - - async def ask_stream( - self, - prompt: str, - wss_link: str, - conversation_style: CONVERSATION_STYLE_TYPE = None, - raw: bool = False, - options: dict = None, - ) -> Generator[str, None, None]: - """ - Ask a question to the bot - """ - if self.wss and not self.wss.closed: - await self.wss.close() - # Check if websocket is closed - self.wss = await websockets.connect( - wss_link, - extra_headers=HEADERS, - max_size=None, - ssl=get_ssl_context() - ) - await self._initial_handshake() - # Construct a ChatHub request - self.request.update( - prompt=prompt, - conversation_style=conversation_style, - options=options, - ) - # Send request - await self.wss.send(_append_identifier(self.request.struct)) - final = False - while not final: - objects = str(await self.wss.recv()).split(DELIMITER) - for obj in objects: - if obj is None or not obj: - continue - response = json.loads(obj) - if response.get("type") != 2 and raw: - yield False, response - elif response.get("type") == 1 and response["arguments"][0].get( - "messages", - ): - resp_txt = response["arguments"][0]["messages"][0]["adaptiveCards"][ - 0 - ]["body"][0].get("text") - yield False, resp_txt - elif response.get("type") == 2: - final = True - yield True, response - - async def _initial_handshake(self) -> None: - await self.wss.send(_append_identifier({"protocol": "json", "version": 1})) - await self.wss.recv() - - async def close(self) -> None: - """ - Close the connection - """ - if self.wss and not self.wss.closed: - await self.wss.close() - - -class NewbingChatbot: - """ - Combines everything to make it seamless - """ - - def __init__( - self, - cookies, - proxy - ) -> None: - if cookies is None: - cookies = {} - self.cookies = cookies - self.proxy = proxy - self.chat_hub: _ChatHub = _ChatHub( - _Conversation(self.cookies, self.proxy), - ) - - async def ask( - self, - prompt: str, - wss_link: str, - conversation_style: CONVERSATION_STYLE_TYPE = None, - options: dict = None, - ) -> dict: - """ - Ask a question to the bot - """ - async for final, response in self.chat_hub.ask_stream( - prompt=prompt, - conversation_style=conversation_style, - wss_link=wss_link, - options=options, - ): - if final: - return response - await self.chat_hub.wss.close() - return None - - async def ask_stream( - self, - prompt: str, - wss_link: str, - conversation_style: CONVERSATION_STYLE_TYPE = None, - raw: bool = False, - options: dict = None, - ) -> Generator[str, None, None]: - """ - Ask a question to the bot - """ - async for response in self.chat_hub.ask_stream( - prompt=prompt, - conversation_style=conversation_style, - wss_link=wss_link, - raw=raw, - options=options, - ): - yield response - - async def close(self) -> None: - """ - Close the connection - """ - await self.chat_hub.close() - - async def reset(self) -> None: - """ - Reset the conversation - """ - await self.close() - self.chat_hub = _ChatHub(_Conversation(self.cookies, self.proxy)) - - diff 
--git a/spaces/FrankZxShen/so-vits-svc-models-ba/vencoder/whisper/decoding.py b/spaces/FrankZxShen/so-vits-svc-models-ba/vencoder/whisper/decoding.py deleted file mode 100644 index 603546d4c9ff67514d2567576935b974fe373bef..0000000000000000000000000000000000000000 --- a/spaces/FrankZxShen/so-vits-svc-models-ba/vencoder/whisper/decoding.py +++ /dev/null @@ -1,712 +0,0 @@ -from dataclasses import dataclass, field -from typing import Dict, List, Tuple, Iterable, Optional, Sequence, Union, TYPE_CHECKING - -import numpy as np -import torch -import torch.nn.functional as F -from torch import Tensor -from torch.distributions import Categorical - -from .audio import CHUNK_LENGTH -from .tokenizer import Tokenizer, get_tokenizer -from .utils import compression_ratio - -if TYPE_CHECKING: - from .model import Whisper - - -@torch.no_grad() -def detect_language(model: "Whisper", mel: Tensor, tokenizer: Tokenizer = None) -> Tuple[Tensor, List[dict]]: - """ - Detect the spoken language in the audio, and return them as list of strings, along with the ids - of the most probable language tokens and the probability distribution over all language tokens. - This is performed outside the main decode loop in order to not interfere with kv-caching. - - Returns - ------- - language_tokens : Tensor, shape = (n_audio,) - ids of the most probable language tokens, which appears after the startoftranscript token. - language_probs : List[Dict[str, float]], length = n_audio - list of dictionaries containing the probability distribution over all languages. - """ - if tokenizer is None: - tokenizer = get_tokenizer(model.is_multilingual) - if tokenizer.language is None or tokenizer.language_token not in tokenizer.sot_sequence: - raise ValueError(f"This model doesn't have language tokens so it can't perform lang id") - - single = mel.ndim == 2 - if single: - mel = mel.unsqueeze(0) - - # skip encoder forward pass if already-encoded audio features were given - if mel.shape[-2:] != (model.dims.n_audio_ctx, model.dims.n_audio_state): - mel = model.encoder(mel) - - # forward pass using a single token, startoftranscript - n_audio = mel.shape[0] - x = torch.tensor([[tokenizer.sot]] * n_audio).to(mel.device) # [n_audio, 1] - logits = model.logits(x, mel)[:, 0] - - # collect detected languages; suppress all non-language tokens - mask = torch.ones(logits.shape[-1], dtype=torch.bool) - mask[list(tokenizer.all_language_tokens)] = False - logits[:, mask] = -np.inf - language_tokens = logits.argmax(dim=-1) - language_token_probs = logits.softmax(dim=-1).cpu() - language_probs = [ - { - c: language_token_probs[i, j].item() - for j, c in zip(tokenizer.all_language_tokens, tokenizer.all_language_codes) - } - for i in range(n_audio) - ] - - if single: - language_tokens = language_tokens[0] - language_probs = language_probs[0] - - return language_tokens, language_probs - - -@dataclass(frozen=True) -class DecodingOptions: - task: str = "transcribe" # whether to perform X->X "transcribe" or X->English "translate" - language: Optional[str] = None # language that the audio is in; uses detected language if None - - # sampling-related options - temperature: float = 0.0 - sample_len: Optional[int] = None # maximum number of tokens to sample - best_of: Optional[int] = None # number of independent samples to collect, when t > 0 - beam_size: Optional[int] = None # number of beams in beam search, when t == 0 - patience: Optional[float] = None # patience in beam search (https://arxiv.org/abs/2204.05424) - - # options for ranking generations (either beams or 
best-of-N samples) - length_penalty: Optional[float] = None # "alpha" in Google NMT, None defaults to length norm - - # prompt, prefix, and token suppression - prompt: Optional[Union[str, List[int]]] = None # text or tokens for the previous context - prefix: Optional[Union[str, List[int]]] = None # text or tokens to prefix the current context - suppress_blank: bool = True # this will suppress blank outputs - - # list of tokens ids (or comma-separated token ids) to suppress - # "-1" will suppress a set of symbols as defined in `tokenizer.non_speech_tokens()` - suppress_tokens: Optional[Union[str, Iterable[int]]] = "-1" - - # timestamp sampling options - without_timestamps: bool = False # use <|notimestamps|> to sample text tokens only - max_initial_timestamp: Optional[float] = 1.0 # the initial timestamp cannot be later than this - - # implementation details - fp16: bool = True # use fp16 for most of the calculation - - -@dataclass(frozen=True) -class DecodingResult: - audio_features: Tensor - language: str - language_probs: Optional[Dict[str, float]] = None - tokens: List[int] = field(default_factory=list) - text: str = "" - avg_logprob: float = np.nan - no_speech_prob: float = np.nan - temperature: float = np.nan - compression_ratio: float = np.nan - - -class Inference: - def logits(self, tokens: Tensor, audio_features: Tensor) -> Tensor: - """Perform a forward pass on the decoder and return per-token logits""" - raise NotImplementedError - - def rearrange_kv_cache(self, source_indices) -> None: - """Update the key-value cache according to the updated beams""" - raise NotImplementedError - - def cleanup_caching(self) -> None: - """Clean up any resources or hooks after decoding is finished""" - pass - - -class PyTorchInference(Inference): - def __init__(self, model: "Whisper", initial_token_length: int): - self.model: "Whisper" = model - self.initial_token_length = initial_token_length - self.kv_cache = {} - self.hooks = [] - - def logits(self, tokens: Tensor, audio_features: Tensor) -> Tensor: - if not self.kv_cache: - self.kv_cache, self.hooks = self.model.install_kv_cache_hooks() - - if tokens.shape[-1] > self.initial_token_length: - # only need to use the last token except in the first forward pass - tokens = tokens[:, -1:] - - return self.model.decoder(tokens, audio_features, kv_cache=self.kv_cache) - - def cleanup_caching(self): - for hook in self.hooks: - hook.remove() - - self.kv_cache = {} - self.hooks = [] - - def rearrange_kv_cache(self, source_indices): - for module, tensor in self.kv_cache.items(): - # update the key/value cache to contain the selected sequences - self.kv_cache[module] = tensor[source_indices].detach() - - -class SequenceRanker: - def rank(self, tokens: List[List[Tensor]], sum_logprobs: List[List[float]]) -> List[int]: - """ - Given a list of groups of samples and their cumulative log probabilities, - return the indices of the samples in each group to select as the final result - """ - raise NotImplementedError - - -class MaximumLikelihoodRanker(SequenceRanker): - """ - Select the sample with the highest log probabilities, penalized using either - a simple length normalization or Google NMT paper's length penalty - """ - - def __init__(self, length_penalty: Optional[float]): - self.length_penalty = length_penalty - - def rank(self, tokens: List[List[Tensor]], sum_logprobs: List[List[float]]): - def scores(logprobs, lengths): - result = [] - for logprob, length in zip(logprobs, lengths): - if self.length_penalty is None: - penalty = length - else: - # from the 
Google NMT paper - penalty = ((5 + length) / 6) ** self.length_penalty - result.append(logprob / penalty) - return result - - # get the sequence with the highest score - lengths = [[len(t) for t in s] for s in tokens] - return [np.argmax(scores(p, l)) for p, l in zip(sum_logprobs, lengths)] - - -class TokenDecoder: - def reset(self): - """Initialize any stateful variables for decoding a new sequence""" - - def update(self, tokens: Tensor, logits: Tensor, sum_logprobs: Tensor) -> Tuple[Tensor, bool]: - """Specify how to select the next token, based on the current trace and logits - - Parameters - ---------- - tokens : Tensor, shape = (n_batch, current_sequence_length) - all tokens in the context so far, including the prefix and sot_sequence tokens - - logits : Tensor, shape = (n_batch, vocab_size) - per-token logits of the probability distribution at the current step - - sum_logprobs : Tensor, shape = (n_batch) - cumulative log probabilities for each sequence - - Returns - ------- - tokens : Tensor, shape = (n_batch, current_sequence_length + 1) - the tokens, appended with the selected next token - - completed : bool - True if all sequences has reached the end of text - - """ - raise NotImplementedError - - def finalize( - self, tokens: Tensor, sum_logprobs: Tensor - ) -> Tuple[Sequence[Sequence[Tensor]], List[List[float]]]: - """Finalize search and return the final candidate sequences - - Parameters - ---------- - tokens : Tensor, shape = (n_audio, n_group, current_sequence_length) - all tokens in the context so far, including the prefix and sot_sequence - - sum_logprobs : Tensor, shape = (n_audio, n_group) - cumulative log probabilities for each sequence - - Returns - ------- - tokens : Sequence[Sequence[Tensor]], length = n_audio - sequence of Tensors containing candidate token sequences, for each audio input - - sum_logprobs : List[List[float]], length = n_audio - sequence of cumulative log probabilities corresponding to the above - - """ - raise NotImplementedError - - -class GreedyDecoder(TokenDecoder): - def __init__(self, temperature: float, eot: int): - self.temperature = temperature - self.eot = eot - - def update(self, tokens: Tensor, logits: Tensor, sum_logprobs: Tensor) -> Tuple[Tensor, bool]: - temperature = self.temperature - if temperature == 0: - next_tokens = logits.argmax(dim=-1) - else: - next_tokens = Categorical(logits=logits / temperature).sample() - - logprobs = F.log_softmax(logits.float(), dim=-1) - current_logprobs = logprobs[torch.arange(logprobs.shape[0]), next_tokens] - sum_logprobs += current_logprobs * (tokens[:, -1] != self.eot) - - next_tokens[tokens[:, -1] == self.eot] = self.eot - tokens = torch.cat([tokens, next_tokens[:, None]], dim=-1) - - completed = (tokens[:, -1] == self.eot).all() - return tokens, completed - - def finalize(self, tokens: Tensor, sum_logprobs: Tensor): - # make sure each sequence has at least one EOT token at the end - tokens = F.pad(tokens, (0, 1), value=self.eot) - return tokens, sum_logprobs.tolist() - - -class BeamSearchDecoder(TokenDecoder): - def __init__(self, beam_size: int, eot: int, inference: Inference, patience: Optional[float] = None): - self.beam_size = beam_size - self.eot = eot - self.inference = inference - self.patience = patience or 1.0 - self.max_candidates: int = round(beam_size * self.patience) - self.finished_sequences = None - - assert self.max_candidates > 0, f"Invalid beam size ({beam_size}) or patience ({patience})" - - def reset(self): - self.finished_sequences = None - - def update(self, tokens: Tensor, 
logits: Tensor, sum_logprobs: Tensor) -> Tuple[Tensor, bool]: - if tokens.shape[0] % self.beam_size != 0: - raise ValueError(f"{tokens.shape}[0] % {self.beam_size} != 0") - - n_audio = tokens.shape[0] // self.beam_size - if self.finished_sequences is None: # for the first update - self.finished_sequences = [{} for _ in range(n_audio)] - - logprobs = F.log_softmax(logits.float(), dim=-1) - next_tokens, source_indices, finished_sequences = [], [], [] - for i in range(n_audio): - scores, sources, finished = {}, {}, {} - - # STEP 1: calculate the cumulative log probabilities for possible candidates - for j in range(self.beam_size): - idx = i * self.beam_size + j - prefix = tokens[idx].tolist() - for logprob, token in zip(*logprobs[idx].topk(self.beam_size + 1)): - new_logprob = (sum_logprobs[idx] + logprob).item() - sequence = tuple(prefix + [token.item()]) - scores[sequence] = new_logprob - sources[sequence] = idx - - # STEP 2: rank the candidates and keep the top beam_size sequences for each audio - saved = 0 - for sequence in sorted(scores, key=scores.get, reverse=True): - if sequence[-1] == self.eot: - finished[sequence] = scores[sequence] - else: - sum_logprobs[len(next_tokens)] = scores[sequence] - next_tokens.append(sequence) - source_indices.append(sources[sequence]) - - saved += 1 - if saved == self.beam_size: - break - - finished_sequences.append(finished) - - tokens = torch.tensor(next_tokens, device=tokens.device) - self.inference.rearrange_kv_cache(source_indices) - - # add newly finished sequences to self.finished_sequences - assert len(self.finished_sequences) == len(finished_sequences) - for previously_finished, newly_finished in zip(self.finished_sequences, finished_sequences): - for seq in sorted(newly_finished, key=newly_finished.get, reverse=True): - if len(previously_finished) >= self.max_candidates: - break # the candidate list is full - previously_finished[seq] = newly_finished[seq] - - # mark as completed if all audio has enough number of samples - completed = all( - len(sequences) >= self.max_candidates for sequences in self.finished_sequences - ) - return tokens, completed - - def finalize(self, preceding_tokens: Tensor, sum_logprobs: Tensor): - # collect all finished sequences, including patience, and add unfinished ones if not enough - sum_logprobs = sum_logprobs.cpu() - for i, sequences in enumerate(self.finished_sequences): - if len(sequences) < self.beam_size: # when not enough sequences are finished - for j in list(np.argsort(sum_logprobs[i]))[::-1]: - sequence = preceding_tokens[i, j].tolist() + [self.eot] - sequences[tuple(sequence)] = sum_logprobs[i][j].item() - if len(sequences) >= self.beam_size: - break - - tokens: List[List[Tensor]] = [ - [torch.tensor(seq) for seq in sequences.keys()] for sequences in self.finished_sequences - ] - sum_logprobs: List[List[float]] = [ - list(sequences.values()) for sequences in self.finished_sequences - ] - return tokens, sum_logprobs - - -class LogitFilter: - def apply(self, logits: Tensor, tokens: Tensor) -> None: - """Apply any filtering or masking to logits in-place - - Parameters - ---------- - logits : Tensor, shape = (n_batch, vocab_size) - per-token logits of the probability distribution at the current step - - tokens : Tensor, shape = (n_batch, current_sequence_length) - all tokens in the context so far, including the prefix and sot_sequence tokens - - """ - raise NotImplementedError - - -class SuppressBlank(LogitFilter): - def __init__(self, tokenizer: Tokenizer, sample_begin: int): - self.tokenizer = tokenizer - 
self.sample_begin = sample_begin - - def apply(self, logits: Tensor, tokens: Tensor): - if tokens.shape[1] == self.sample_begin: - logits[:, self.tokenizer.encode(" ") + [self.tokenizer.eot]] = -np.inf - - -class SuppressTokens(LogitFilter): - def __init__(self, suppress_tokens: Sequence[int]): - self.suppress_tokens = list(suppress_tokens) - - def apply(self, logits: Tensor, tokens: Tensor): - logits[:, self.suppress_tokens] = -np.inf - - -class ApplyTimestampRules(LogitFilter): - def __init__( - self, tokenizer: Tokenizer, sample_begin: int, max_initial_timestamp_index: Optional[int] - ): - self.tokenizer = tokenizer - self.sample_begin = sample_begin - self.max_initial_timestamp_index = max_initial_timestamp_index - - def apply(self, logits: Tensor, tokens: Tensor): - # suppress <|notimestamps|> which is handled by without_timestamps - if self.tokenizer.no_timestamps is not None: - logits[:, self.tokenizer.no_timestamps] = -np.inf - - # timestamps have to appear in pairs, except directly before EOT; mask logits accordingly - for k in range(tokens.shape[0]): - seq = [t for t in tokens[k, self.sample_begin :].tolist()] - last_was_timestamp = len(seq) >= 1 and seq[-1] >= self.tokenizer.timestamp_begin - penultimate_was_timestamp = len(seq) < 2 or seq[-2] >= self.tokenizer.timestamp_begin - - if last_was_timestamp: - if penultimate_was_timestamp: # has to be non-timestamp - logits[k, self.tokenizer.timestamp_begin :] = -np.inf - else: # cannot be normal text tokens - logits[k, : self.tokenizer.eot] = -np.inf - - if tokens.shape[1] == self.sample_begin: - # suppress generating non-timestamp tokens at the beginning - logits[:, : self.tokenizer.timestamp_begin] = -np.inf - - # apply the `max_initial_timestamp` option - if self.max_initial_timestamp_index is not None: - last_allowed = self.tokenizer.timestamp_begin + self.max_initial_timestamp_index - logits[:, last_allowed + 1 :] = -np.inf - - # if sum of probability over timestamps is above any other token, sample timestamp - logprobs = F.log_softmax(logits.float(), dim=-1) - for k in range(tokens.shape[0]): - timestamp_logprob = logprobs[k, self.tokenizer.timestamp_begin :].logsumexp(dim=-1) - max_text_token_logprob = logprobs[k, : self.tokenizer.timestamp_begin].max() - if timestamp_logprob > max_text_token_logprob: - logits[k, : self.tokenizer.timestamp_begin] = -np.inf - - -class DecodingTask: - inference: Inference - sequence_ranker: SequenceRanker - decoder: TokenDecoder - logit_filters: List[LogitFilter] - - def __init__(self, model: "Whisper", options: DecodingOptions): - self.model = model - - language = options.language or "en" - tokenizer = get_tokenizer(model.is_multilingual, language=language, task=options.task) - self.tokenizer: Tokenizer = tokenizer - self.options: DecodingOptions = self._verify_options(options) - - self.n_group: int = options.beam_size or options.best_of or 1 - self.n_ctx: int = model.dims.n_text_ctx - self.sample_len: int = options.sample_len or model.dims.n_text_ctx // 2 - - self.sot_sequence: Tuple[int] = tokenizer.sot_sequence - if self.options.without_timestamps: - self.sot_sequence = tokenizer.sot_sequence_including_notimestamps - - self.initial_tokens: Tuple[int] = self._get_initial_tokens() - self.sample_begin: int = len(self.initial_tokens) - self.sot_index: int = self.initial_tokens.index(tokenizer.sot) - - # inference: implements the forward pass through the decoder, including kv caching - self.inference = PyTorchInference(model, len(self.initial_tokens)) - - # sequence ranker: implements how to 
rank a group of sampled sequences - self.sequence_ranker = MaximumLikelihoodRanker(options.length_penalty) - - # decoder: implements how to select the next tokens, given the autoregressive distribution - if options.beam_size is not None: - self.decoder = BeamSearchDecoder( - options.beam_size, tokenizer.eot, self.inference, options.patience - ) - else: - self.decoder = GreedyDecoder(options.temperature, tokenizer.eot) - - # logit filters: applies various rules to suppress or penalize certain tokens - self.logit_filters = [] - if self.options.suppress_blank: - self.logit_filters.append(SuppressBlank(self.tokenizer, self.sample_begin)) - if self.options.suppress_tokens: - self.logit_filters.append(SuppressTokens(self._get_suppress_tokens())) - if not options.without_timestamps: - precision = CHUNK_LENGTH / model.dims.n_audio_ctx # usually 0.02 seconds - max_initial_timestamp_index = None - if options.max_initial_timestamp: - max_initial_timestamp_index = round(self.options.max_initial_timestamp / precision) - self.logit_filters.append( - ApplyTimestampRules(tokenizer, self.sample_begin, max_initial_timestamp_index) - ) - - def _verify_options(self, options: DecodingOptions) -> DecodingOptions: - if options.beam_size is not None and options.best_of is not None: - raise ValueError("beam_size and best_of can't be given together") - if options.temperature == 0: - if options.best_of is not None: - raise ValueError("best_of with greedy sampling (T=0) is not compatible") - if options.patience is not None and options.beam_size is None: - raise ValueError("patience requires beam_size to be given") - if options.length_penalty is not None and not (0 <= options.length_penalty <= 1): - raise ValueError("length_penalty (alpha) should be a value between 0 and 1") - - return options - - def _get_initial_tokens(self) -> Tuple[int]: - tokens = list(self.sot_sequence) - prefix = self.options.prefix - prompt = self.options.prompt - - if prefix: - prefix_tokens = ( - self.tokenizer.encode(" " + prefix.strip()) if isinstance(prefix, str) else prefix - ) - if self.sample_len is not None: - max_prefix_len = self.n_ctx // 2 - self.sample_len - prefix_tokens = prefix_tokens[-max_prefix_len:] - tokens = tokens + prefix_tokens - - if prompt: - prompt_tokens = ( - self.tokenizer.encode(" " + prompt.strip()) if isinstance(prompt, str) else prompt - ) - tokens = [self.tokenizer.sot_prev] + prompt_tokens[-(self.n_ctx // 2 - 1) :] + tokens - - return tuple(tokens) - - def _get_suppress_tokens(self) -> Tuple[int]: - suppress_tokens = self.options.suppress_tokens - - if isinstance(suppress_tokens, str): - suppress_tokens = [int(t) for t in suppress_tokens.split(",")] - - if -1 in suppress_tokens: - suppress_tokens = [t for t in suppress_tokens if t >= 0] - suppress_tokens.extend(self.tokenizer.non_speech_tokens) - elif suppress_tokens is None or len(suppress_tokens) == 0: - suppress_tokens = [] # interpret empty string as an empty list - else: - assert isinstance(suppress_tokens, list), "suppress_tokens must be a list" - - suppress_tokens.extend( - [self.tokenizer.sot, self.tokenizer.sot_prev, self.tokenizer.sot_lm] - ) - if self.tokenizer.no_speech is not None: - # no-speech probability is collected separately - suppress_tokens.append(self.tokenizer.no_speech) - - return tuple(sorted(set(suppress_tokens))) - - def _get_audio_features(self, mel: Tensor): - if self.options.fp16: - mel = mel.half() - - if mel.shape[-2:] == (self.model.dims.n_audio_ctx, self.model.dims.n_audio_state): - # encoded audio features are given; skip 
audio encoding - print("encoded audio features are given; skip audio encoding") - audio_features = mel - else: - print(mel.shape) - print("===============================") - audio_features = self.model.encoder(mel) - - if audio_features.dtype != (torch.float16 if self.options.fp16 else torch.float32): - return TypeError(f"audio_features has an incorrect dtype: {audio_features.dtype}") - - return audio_features - - def _detect_language(self, audio_features: Tensor, tokens: Tensor): - languages = [self.options.language] * audio_features.shape[0] - lang_probs = None - - if self.options.language is None or self.options.task == "lang_id": - lang_tokens, lang_probs = self.model.detect_language(audio_features, self.tokenizer) - languages = [max(probs, key=probs.get) for probs in lang_probs] - if self.options.language is None: - tokens[:, self.sot_index + 1] = lang_tokens # write language tokens - - return languages, lang_probs - - def _main_loop(self, audio_features: Tensor, tokens: Tensor): - assert audio_features.shape[0] == tokens.shape[0] - n_batch = tokens.shape[0] - sum_logprobs: Tensor = torch.zeros(n_batch, device=audio_features.device) - no_speech_probs = [np.nan] * n_batch - - try: - for i in range(self.sample_len): - logits = self.inference.logits(tokens, audio_features) - - if i == 0 and self.tokenizer.no_speech is not None: # save no_speech_probs - probs_at_sot = logits[:, self.sot_index].float().softmax(dim=-1) - no_speech_probs = probs_at_sot[:, self.tokenizer.no_speech].tolist() - - # now we need to consider the logits at the last token only - logits = logits[:, -1] - - # apply the logit filters, e.g. for suppressing or applying penalty to - for logit_filter in self.logit_filters: - logit_filter.apply(logits, tokens) - - # expand the tokens tensor with the selected next tokens - tokens, completed = self.decoder.update(tokens, logits, sum_logprobs) - - if completed or tokens.shape[-1] > self.n_ctx: - break - finally: - self.inference.cleanup_caching() - - return tokens, sum_logprobs, no_speech_probs - - @torch.no_grad() - def run(self, mel: Tensor) -> List[DecodingResult]: - self.decoder.reset() - tokenizer: Tokenizer = self.tokenizer - n_audio: int = mel.shape[0] - - audio_features: Tensor = self._get_audio_features(mel) # encoder forward pass - tokens: Tensor = torch.tensor([self.initial_tokens]).repeat(n_audio, 1) - - # detect language if requested, overwriting the language token - languages, language_probs = self._detect_language(audio_features, tokens) - if self.options.task == "lang_id": - return [ - DecodingResult(audio_features=features, language=language, language_probs=probs) - for features, language, probs in zip(audio_features, languages, language_probs) - ] - - # repeat the audio & text tensors by the group size, for beam search or best-of-n sampling - audio_features = audio_features.repeat_interleave(self.n_group, dim=0) - tokens = tokens.repeat_interleave(self.n_group, dim=0).to(audio_features.device) - - # call the main sampling loop - tokens, sum_logprobs, no_speech_probs = self._main_loop(audio_features, tokens) - - # reshape the tensors to have (n_audio, n_group) as the first two dimensions - audio_features = audio_features[:: self.n_group] - no_speech_probs = no_speech_probs[:: self.n_group] - assert audio_features.shape[0] == len(no_speech_probs) == n_audio - - tokens = tokens.reshape(n_audio, self.n_group, -1) - sum_logprobs = sum_logprobs.reshape(n_audio, self.n_group) - - # get the final candidates for each group, and slice between the first sampled token 
and EOT - tokens, sum_logprobs = self.decoder.finalize(tokens, sum_logprobs) - tokens: List[List[Tensor]] = [ - [t[self.sample_begin : (t == tokenizer.eot).nonzero()[0, 0]] for t in s] for s in tokens - ] - - # select the top-ranked sample in each group - selected = self.sequence_ranker.rank(tokens, sum_logprobs) - tokens: List[List[int]] = [t[i].tolist() for i, t in zip(selected, tokens)] - texts: List[str] = [tokenizer.decode(t).strip() for t in tokens] - - sum_logprobs: List[float] = [lp[i] for i, lp in zip(selected, sum_logprobs)] - avg_logprobs: List[float] = [lp / (len(t) + 1) for t, lp in zip(tokens, sum_logprobs)] - - fields = (texts, languages, tokens, audio_features, avg_logprobs, no_speech_probs) - if len(set(map(len, fields))) != 1: - raise RuntimeError(f"inconsistent result lengths: {list(map(len, fields))}") - - return [ - DecodingResult( - audio_features=features, - language=language, - tokens=tokens, - text=text, - avg_logprob=avg_logprob, - no_speech_prob=no_speech_prob, - temperature=self.options.temperature, - compression_ratio=compression_ratio(text), - ) - for text, language, tokens, features, avg_logprob, no_speech_prob in zip(*fields) - ] - - -@torch.no_grad() -def decode(model: "Whisper", mel: Tensor, options: DecodingOptions = DecodingOptions()) -> Union[DecodingResult, List[DecodingResult]]: - """ - Performs decoding of 30-second audio segment(s), provided as Mel spectrogram(s). - - Parameters - ---------- - model: Whisper - the Whisper model instance - - mel: torch.Tensor, shape = (80, 3000) or (*, 80, 3000) - A tensor containing the Mel spectrogram(s) - - options: DecodingOptions - A dataclass that contains all necessary options for decoding 30-second segments - - Returns - ------- - result: Union[DecodingResult, List[DecodingResult]] - The result(s) of decoding contained in `DecodingResult` dataclass instance(s) - """ - single = mel.ndim == 2 - if single: - mel = mel.unsqueeze(0) - result = DecodingTask(model, options).run(mel) - - if single: - result = result[0] - - return result diff --git a/spaces/Gen-Sim/Gen-Sim/notebooks/real_affordance.py b/spaces/Gen-Sim/Gen-Sim/notebooks/real_affordance.py deleted file mode 100644 index 6a02cd5300dc6a194f96efedf423aa33a35a9f7e..0000000000000000000000000000000000000000 --- a/spaces/Gen-Sim/Gen-Sim/notebooks/real_affordance.py +++ /dev/null @@ -1,257 +0,0 @@ -import os -import sys -import json - -import numpy as np -from cliport import tasks -from cliport import agents -from cliport.utils import utils - -import torch -import cv2 -from cliport.dataset import RavensDataset -from cliport.environments.environment import Environment -from torch.utils.data import DataLoader -import IPython - -import matplotlib -import numpy as np -import matplotlib.pyplot as plt - - -train_demos = 50 # number training demonstrations used to train agent -n_eval = 1 # number of evaluation instances -mode = 'test' # val or test - -agent_name = 'cliport' -model_task = 'place-red-in-green' # multi-task agent conditioned with language goals -task_type = 'cliport3_task_indomain' # cliport3_task_indomain, gpt5_mixcliport2 -# model_folder = f'exps/exp-{task_type}_demo{train_demos}_2023-07-27_13-30-52-small' # path to pre-trained checkpoint - -# Lirui -model_folder = f'exps-singletask/debug_checkpoints' # path to pre-trained checkpoint -ckpt_name = 'last.ckpt' # name of checkpoint to load - -draw_grasp_lines = True -affordance_heatmap_scale = 30 - -### Uncomment the task you want to evaluate on ### -# eval_task = 'align-rope' -# eval_task = 
'assembling-kits-seq-seen-colors' -# eval_task = 'assembling-kits-seq-unseen-colors' -# eval_task = 'packing-shapes' -# eval_task = 'packing-boxes-pairs-seen-colors' -# eval_task = 'packing-boxes-pairs-unseen-colors' -# eval_task = 'packing-seen-google-objects-seq' -# eval_task = 'packing-unseen-google-objects-seq' -# eval_task = 'packing-seen-google-objects-group' -# eval_task = 'packing-unseen-google-objects-group' -# eval_task = 'put-block-in-bowl-seen-colors' -# eval_task = 'put-block-in-bowl-unseen-colors' -eval_task = 'place-red-in-green' -# eval_task = 'stack-block-pyramid-seq-unseen-colors' -# eval_task = 'separating-piles-seen-colors' -# eval_task = 'separating-piles-unseen-colors' -# eval_task = 'towers-of-hanoi-seq-seen-colors' -# eval_task = 'towers-of-hanoi-seq-unseen-colors' - - - -def crop_img(img, height_range=[200, 340], width_range=[180, 460]): - img = img[height_range[0]:height_range[1], width_range[0]:width_range[1], :] - return img - -def read_rgb_image(path): - img = cv2.imread(path) - img = crop_img(img) - img = cv2.resize(img, (320, 160)) - img = img.transpose(1, 0, 2) - img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB) - return img - -def read_depth_image(path): - # TODO: why the depth image has 4 channels ? - img = plt.imread(path, cv2.IMREAD_UNCHANGED) # TODO: need correct - img = crop_img(img) - img = cv2.resize(img, (320, 160))[:, :, 0][:, :, None] - img = img.transpose(1, 0, 2) - return img - -def process_real_sample(cmap, dmap, info, aug_theta_sigma=60, augment=False): - """Process the sample like the dataset method.""" - print(cmap.shape, dmap.shape) - img = np.concatenate((cmap, dmap, dmap, dmap), axis=2) - p0, p1 = np.zeros(1), np.zeros(1) - p0_theta, p1_theta = np.zeros(1), np.zeros(1) - perturb_params = np.zeros(5) - if augment: - img, _, (p0, p1), perturb_params = utils.perturb(img, [p0, p1], theta_sigma=aug_theta_sigma) - - sample = { - 'img': img.copy(), - 'p0': np.array(p0).copy(), 'p0_theta': np.array(p0_theta).copy(), - 'p1': np.array(p1).copy(), 'p1_theta': np.array(p1_theta).copy() , - 'perturb_params': np.array(perturb_params).copy() - } - - if info and 'lang_goal' in info: - sample['lang_goal'] = info['lang_goal'] - - return sample - - -def plot_affordance(batch, obs, agent, info, draw_grasp_lines=True, affordance_heatmap_scale=30): - - fig, axs = plt.subplots(2, 2, figsize=(13, 7)) - - # Get color and depth inputs - img = batch['img'] # (320, 160, 6) - img = torch.from_numpy(img) - color = np.uint8(img.detach().cpu().numpy())[:,:,:3] - color = color.transpose(1,0,2) - depth = np.array(img.detach().cpu().numpy())[:,:,3] - depth = depth.transpose(1,0) - - # Display input color - axs[0,0].imshow(color) - axs[0,0].axes.xaxis.set_visible(False) - axs[0,0].axes.yaxis.set_visible(False) - axs[0,0].set_title('Input RGB') - - # Display input depth - axs[0,1].imshow(depth) - axs[0,1].axes.xaxis.set_visible(False) - axs[0,1].axes.yaxis.set_visible(False) - axs[0,1].set_title('Input Depth') - - # Display predicted pick affordance - axs[1,0].imshow(color) - axs[1,0].axes.xaxis.set_visible(False) - axs[1,0].axes.yaxis.set_visible(False) - axs[1,0].set_title('Pick Affordance') - - # Display predicted place affordance - axs[1,1].imshow(color) - axs[1,1].axes.xaxis.set_visible(False) - axs[1,1].axes.yaxis.set_visible(False) - axs[1,1].set_title('Place Affordance') - - # Get action predictions - l = str(info['lang_goal']) - act = agent.real_act(obs, info, goal=None) - pick, place = act['pick'], act['place'] - - # Visualize pick affordance - pick_inp = {'inp_img': 
batch['img'], 'lang_goal': l} - pick_conf = agent.attn_forward(pick_inp)[0] - print("pick_conf:", pick_conf.shape, pick, place) - # IPython.embed() - logits = pick_conf.detach().cpu().numpy() - - pick_conf = pick_conf.detach().cpu().numpy() - argmax = np.argmax(pick_conf) - argmax = np.unravel_index(argmax, shape=pick_conf.shape) - p0 = argmax[:2] - - p0_theta = (argmax[2] * (2 * np.pi / pick_conf.shape[2])) * -1.0 - - line_len = 30 - pick0 = (pick[0] + line_len/2.0 * np.sin(p0_theta), pick[1] + line_len/2.0 * np.cos(p0_theta)) - pick1 = (pick[0] - line_len/2.0 * np.sin(p0_theta), pick[1] - line_len/2.0 * np.cos(p0_theta)) - - if draw_grasp_lines: - axs[1,0].plot((pick1[0], pick0[0]), (pick1[1], pick0[1]), color='r', linewidth=1) - - # Visualize place affordance - place_inp = {'inp_img': batch['img'], 'p0': pick, 'lang_goal': l} - place_conf = agent.trans_forward(place_inp)[0] - - place_conf = place_conf.permute(1, 2, 0) - place_conf = place_conf.detach().cpu().numpy() - argmax = np.argmax(place_conf) - argmax = np.unravel_index(argmax, shape=place_conf.shape) - p1_pix = argmax[:2] - p1_theta = (argmax[2] * (2 * np.pi / place_conf.shape[2]) + p0_theta) * -1.0 - - line_len = 30 - place0 = (place[0] + line_len/2.0 * np.sin(p1_theta), place[1] + line_len/2.0 * np.cos(p1_theta)) - place1 = (place[0] - line_len/2.0 * np.sin(p1_theta), place[1] - line_len/2.0 * np.cos(p1_theta)) - - if draw_grasp_lines: - axs[1,1].plot((place1[0], place0[0]), (place1[1], place0[1]), color='g', linewidth=1) - - # Overlay affordances on RGB input - pick_logits_disp = np.uint8(logits * 255 * affordance_heatmap_scale).transpose(2,1,0) - place_logits_disp = np.uint8(np.sum(place_conf, axis=2)[:,:,None] * 255 * affordance_heatmap_scale).transpose(1,0,2)# .transpose(1,2,0) - - pick_logits_disp_masked = np.ma.masked_where(pick_logits_disp < 0, pick_logits_disp) - place_logits_disp_masked = np.ma.masked_where(place_logits_disp < 0, place_logits_disp) - # IPython.embed() - - axs[1][0].imshow(pick_logits_disp_masked, alpha=0.75) - axs[1][1].imshow(place_logits_disp_masked, cmap='viridis', alpha=0.75) - - print(f"Lang Goal: {str(info['lang_goal'])}") - plt.savefig(f'{root_dir}/data/real_output/test_real_affordance2.png') - - -if __name__ == '__main__': - os.environ['GENSIM_ROOT'] = f'{os.path.abspath(__file__)}/../..' 
- root_dir = os.environ['GENSIM_ROOT'] - print("root_dir:", root_dir) - assets_root = os.path.join(root_dir, 'cliport/environments/assets/') - config_file = 'eval.yaml' - - vcfg = utils.load_hydra_config(os.path.join(root_dir, f'cliport/cfg/{config_file}')) - vcfg['data_dir'] = os.path.join(root_dir, 'data') - vcfg['mode'] = mode - - vcfg['model_task'] = model_task - vcfg['eval_task'] = eval_task - vcfg['agent'] = agent_name - - # Model and training config paths - model_path = os.path.join(root_dir, model_folder) - if model_folder[-7:] == 'smaller': - vcfg['train_config'] = f"{model_path}/{model_folder[9:-8]}-{vcfg['agent']}-n{train_demos}-train/.hydra/config.yaml" - vcfg['model_path'] = f"{model_path}/{model_folder[9:-8]}-{vcfg['agent']}-n{train_demos}-train/checkpoints/" - else: - vcfg['train_config'] = f"{model_path}/{model_folder[9:-6]}-{vcfg['agent']}-n{train_demos}-train/.hydra/config.yaml" - vcfg['model_path'] = f"{model_path}/{model_folder[9:-6]}-{vcfg['agent']}-n{train_demos}-train/checkpoints/" - tcfg = utils.load_hydra_config(vcfg['train_config']) - - # Load dataset - ds = RavensDataset(os.path.join(vcfg['data_dir'], f'{vcfg["eval_task"]}-{vcfg["mode"]}'), - tcfg, - n_demos=n_eval, - augment=False) - - eval_run = 0 - name = '{}-{}-{}-{}'.format(vcfg['eval_task'], vcfg['agent'], n_eval, eval_run) - print(f'\nEval ID: {name}\n') - - # Initialize agent - utils.set_seed(eval_run, torch=True) - agent = agents.names[vcfg['agent']](name, tcfg, DataLoader(ds), DataLoader(ds)) - - # Load checkpoint - ckpt_path = os.path.join(vcfg['model_path'], ckpt_name) - print(f'\nLoading checkpoint: {ckpt_path}') - agent.load(ckpt_path) - - os.makedirs(f'{root_dir}/data/real_output', exist_ok=True) - real_rgb_img = read_rgb_image(f'{root_dir}/data/real_imgs/rgb0.png') - plt.imshow(real_rgb_img[:, :, :3]) - plt.axis('off') - plt.savefig(f'{root_dir}/data/real_output/real_show.png') - real_depth_img = read_depth_image(f'{root_dir}/data/real_imgs/depth0.png') - print(real_depth_img.shape, real_rgb_img.shape) - plt.imshow(real_depth_img, cmap='gray') - plt.savefig(f'{root_dir}/data/real_output/real_depth.png') - info = {} - info['lang_goal'] = 'place red block in green bowl' - - batch = process_real_sample(real_rgb_img, real_depth_img, info, augment=False) - - obs = batch['img'] - plot_affordance(batch, obs, agent, info, draw_grasp_lines=draw_grasp_lines, affordance_heatmap_scale=affordance_heatmap_scale) \ No newline at end of file diff --git a/spaces/Gradio-Blocks/Ask_Questions_To_YouTube_Videos/app.py b/spaces/Gradio-Blocks/Ask_Questions_To_YouTube_Videos/app.py deleted file mode 100644 index 8359508bfe193647c0f535fb18f109c14fa50a56..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/Ask_Questions_To_YouTube_Videos/app.py +++ /dev/null @@ -1,174 +0,0 @@ -import gradio as gr -from youtube_transcript_api import YouTubeTranscriptApi -from transformers import AutoTokenizer -from transformers import pipeline -from transformers import AutoModelForQuestionAnswering -import pandas as pd -from sentence_transformers import SentenceTransformer, util -import torch - -model_ckpt = "deepset/minilm-uncased-squad2" -tokenizer = AutoTokenizer.from_pretrained(model_ckpt) -model = AutoModelForQuestionAnswering.from_pretrained(model_ckpt) -modelST = SentenceTransformer('sentence-transformers/all-MiniLM-L6-v2') - -#input - video link, output - full transcript -def get_transcript(link): - print("******** Inside get_transcript ********") - print(f"link to be extracted is : {link}") - video_id = 
link.split("=")[1] - # Handle additional query parameters such as timestamp, ... - video_id = video_id.split("&")[0] - print(f"video id extracted is : {video_id}") - transcript = YouTubeTranscriptApi.get_transcript(video_id) - FinalTranscript = ' '.join([i['text'] for i in transcript]) - return FinalTranscript,transcript, video_id - - -#input - question and transcript, output - answer timestamp -def get_answers_timestamp(question, final_transcript, transcript): - print("******** Inside get_answers_timestamp ********") - - context = final_transcript - print(f"Input Question is : {question}") - print(f"Type of trancript is : {type(context)}, Length of transcript is : {len(context)}") - inputs = tokenizer(question, context, return_overflowing_tokens=True, max_length=512, stride = 25) - - #getting a list of contexts available after striding - contx=[] - for window in inputs["input_ids"]: - #print(f"{tokenizer.decode(window)} \n") - contx.append(tokenizer.decode(window).split('[SEP]')[1].strip()) - #print(ques) - #print(contx) - - lst=[] - pipe = pipeline("question-answering", model=model, tokenizer=tokenizer) - for contexts in contx: - lst.append(pipe(question=question, context=contexts)) - - print(f"contx list is : {contx}") - lst_scores = [dicts['score'] for dicts in lst] - print(f"lst_scores is : {lst_scores}") - #getting highest and second highest scores - idxmax = lst_scores.index(max(lst_scores)) - lst_scores.remove(max(lst_scores)) - idxmax2 = lst_scores.index(max(lst_scores)) - - sentence_for_timestamp = lst[idxmax]['answer'] - sentence_for_timestamp_secondbest = lst[idxmax2]['answer'] - - dftranscript = pd.DataFrame(transcript) - - embedding_1= modelST.encode(dftranscript.text, convert_to_tensor=True) - embedding_2 = modelST.encode(sentence_for_timestamp, convert_to_tensor=True) - embedding_3 = modelST.encode(sentence_for_timestamp_secondbest, convert_to_tensor=True) - - similarity_tensor = util.pytorch_cos_sim(embedding_1, embedding_2) - idx = torch.argmax(similarity_tensor) - start_timestamp = dftranscript.iloc[[int(idx)-3]].start.values[0] - start_timestamp = round(start_timestamp) - - similarity_tensor_secondbest = util.pytorch_cos_sim(embedding_1, embedding_3) - idx_secondbest = torch.argmax(similarity_tensor_secondbest) - start_timestamp_secondbest = dftranscript.iloc[[int(idx_secondbest)-3]].start.values[0] - start_timestamp_secondbest = round(start_timestamp_secondbest) - - return start_timestamp, start_timestamp_secondbest - - -def display_vid(url, question, sample_question=None, example_video=None): - print("******** display_vid ********") - if question == '': - question = sample_question - - #get embedding and youtube link for initial video - html_in = "" - #print(html) - - if len(example_video) !=0 : #is not None: - print(f"example_video is : {example_video}") - url = example_video[0] - #get transcript - final_transcript, transcript, video_id = get_transcript(url) - - #get answer timestamp - #input - question and transcript, output - answer timestamp - ans_timestamp, ans_timestamp_secondbest = get_answers_timestamp(question, final_transcript, transcript) - - #created embedding width='560' height='315' - html_out = "" - print(f"html output is : {html_out}") - html_out_secondbest = "" - - if question == '': - print(f"Inside display_vid(), Sample_Question coming from Radio box is BEFORE : {sample_question}") - sample_ques = set_example_question(sample_question) - print(f"Inside display_vid(), Sample Question coming from Radio box is AFTER : {sample_ques}") - else: - 
sample_ques = question - return html_out, html_out_secondbest, sample_ques, url - -def set_example_question(sample_question): - print(f"******* Inside Sample Questions ********") - print(f"Sample Question coming from Radio box is : {sample_question}") - print("What is the Return value : {gr.Radio.update(value=sample_question)}") - return gr.Radio.update(value=sample_question) #input_ques.update(example) - -demo = gr.Blocks() - -with demo: - gr.Markdown("

    Ask a Question to a YouTube Video and get the Video played from the answer timestamp

    ") - gr.Markdown( - """### How many times have you seen a long video/podcast on Youtube and wondered only if there would have been 'explanatory' timestamps it would have been so much better.. - **A Space by [Yuvraj Sharma](https://huggingface.co/ysharma). How to use this space:** You can either provide a new YouTube video link or can use the sample video link provided. Then provide a Questions that you would like about exploring the content in the given video. - The App will generate timestamps and Play the video at those timestamps for you in the space provided. You will see two video displays, corresponding to two of the best guesses by the underlying models. Chances are that both videos might start with same timestamp, which will depend on the question and the content in the video, please bear! - Also, couple small caveats - - - The App will perform as good as the available English Transcripts are for the given YouTube Video. If there are no transcripts, the App will not work. - - Please make sure the YouTube video links that you paste here don't have the trailing values like *&t=8077s* - - Lastly, once you have queried a video, you might have to refresh the page for next query (will try and fix this) - - **Motivation behind building this App:** When we see a long video without timestamps, we often wonder 'if' the content we are looking for is in there, or 'where' in the video is the content we are looking for? The Idea is that we might have questions like 'Is the xxxx thing covered in this video?', or maybe 'does the host talks about the architecture of the xxxxx model', or maybe 'Does host talk about alien doorway on Mars?' and so on. - - **So this App could help you in reaching to that timestamp in 'Record time'!** - - **Best part:** You don't even have to move away from the Space tab in your browser as the YouTube video gets played within the given View. - """ - ) - with gr.Row(): - input_url = gr.Textbox(label="Input a Youtube video link") - input_ques = gr.Textbox(label="Ask a Question") - - with gr.Row(): - output_vid = gr.HTML(label="Video from timestamp 1", show_label=True) - output_vid_secondbest = gr.HTML(label="Video from timestamp 2", show_label=True) - - with gr.Row(): - example_question = gr.Dropdown( - ["Choose a sample question", "Does video talk about different modalities", - "does the model uses perceiver architecture?", - "when does the video talk about locked image tuning or lit?", - "comparison between gpt3 and jurassic?", - "Has flamingo passed turing test yet?", - "Any funny examples in video?", - "is it possible to download the stylegan model?", - "what was very cool?", - "what is the cool library?"], label= "Choose a sample Question", value=None) - with gr.Row(): - example_video = gr.CheckboxGroup( ["https://www.youtube.com/watch?v=smUHQndcmOY"], label= "Choose a sample YouTube video") - - b1 = gr.Button("Publish Video") - - b1.click(display_vid, inputs=[input_url, input_ques, example_question, example_video], outputs=[output_vid, output_vid_secondbest, input_ques, input_url]) - - with gr.Row(): - gr.Markdown(''' - #### Model Credits - 1. [Question Answering](https://huggingface.co/deepset/minilm-uncased-squad2) - 1. 
[Sentence Transformer](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) - ''') - - with gr.Row(): - gr.Markdown("![visitor badge](https://visitor-badge.glitch.me/badge?page_id=gradio-blocks_ask_questions_to_youtube_videos)") - -demo.launch(enable_queue=True, debug=True) \ No newline at end of file diff --git a/spaces/Gradio-Blocks/pubmed-abstract-retriever/app.py b/spaces/Gradio-Blocks/pubmed-abstract-retriever/app.py deleted file mode 100644 index 0524b94ff4504372c4364850976c0bc8e199edde..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/pubmed-abstract-retriever/app.py +++ /dev/null @@ -1,360 +0,0 @@ -import nltk -import re -import nltkmodule - -from newspaper import Article -from newspaper import fulltext -import requests -import itertools -import os - - -from nltk.tokenize import word_tokenize -from sentence_transformers import SentenceTransformer -import pandas as pd -import numpy as np -from pandas import ExcelWriter -from torch.utils.data import DataLoader -import math -from sentence_transformers import models, losses -from sentence_transformers import SentencesDataset, LoggingHandler, SentenceTransformer -from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator -from sentence_transformers.readers import * -from nltk.corpus import stopwords -stop_words = stopwords.words('english') -import matplotlib.pyplot as plt -from sklearn.cluster import KMeans -from sklearn.decomposition import PCA -from sklearn.metrics.pairwise import cosine_similarity -import scipy.spatial -import networkx as nx -from nltk.tokenize import sent_tokenize -import scispacy -import spacy -import en_core_sci_lg -import string -from nltk.stem.wordnet import WordNetLemmatizer -import gradio as gr -import inflect -from sklearn.cluster import KMeans -from sklearn.cluster import AgglomerativeClustering -from sklearn.metrics import silhouette_samples, silhouette_score, davies_bouldin_score -import json -from xml.etree import ElementTree as ET -p = inflect.engine() - -nlp = en_core_sci_lg.load() -sp = en_core_sci_lg.load() -all_stopwords = sp.Defaults.stop_words - -os.environ["TOKENIZERS_PARALLELISM"] = "false" - -def remove_stopwords(sen): - sen_new = " ".join([i for i in sen if i not in stop_words]) - return sen_new - - - - -def keyphrase_generator(article_link, model_1, model_2, max_num_keywords, model_3, max_retrieved, model_4): - - word_embedding_model = models.Transformer(model_3) - pooling_model = models.Pooling(word_embedding_model.get_word_embedding_dimension(), - pooling_mode_mean_tokens=True, - pooling_mode_cls_token=False, - pooling_mode_max_tokens=False) - - embedder = SentenceTransformer(modules=[word_embedding_model, pooling_model]) - - element=[] - cluster_list_final=[] - comb_list=[] - comb=[] - title_list=[] - titles_list=[] - abstracts_list=[] - silhouette_score_list=[] - final_textrank_list=[] - document=[] - text_doc=[] - final_list=[] - score_list=[] - sum_list=[] - ############################################## Here we first extract the sentences using SBERT and Textrank ########################### - model_1 = SentenceTransformer(model_1) - model_2 = SentenceTransformer(model_2) - url = article_link - html = requests.get(url).text - article = fulltext(html) - corpus=sent_tokenize(article) - indicator_list=['concluded','concludes','in a study', 'concluding','conclude','in sum','in a recent study','therefore','thus','so','hence', - 'as a result','accordingly','consequently','in short','proves that','shows that','suggests that','demonstrates that','found 
that','observed that', - 'indicated that','suggested that','demonstrated that'] - count_dict={} - for l in corpus: - c=0 - for l2 in indicator_list: - if l.find(l2)!=-1:#then it is a substring - c=1 - break - if c:# - count_dict[l]=1 - else: - count_dict[l]=0 - for sent, score in count_dict.items(): - score_list.append(score) - clean_sentences_new = pd.Series(corpus).str.replace("[^a-zA-Z]", " ", regex = True).tolist() - corpus_embeddings = model_1.encode(clean_sentences_new) - sim_mat = np.zeros([len(clean_sentences_new), len(clean_sentences_new)]) - for i in range(len(clean_sentences_new)): - len_embeddings=(len(corpus_embeddings[i])) - for j in range(len(clean_sentences_new)): - if i != j: - if(len_embeddings == 1024): - sim_mat[i][j] = cosine_similarity(corpus_embeddings[i].reshape(1,1024), corpus_embeddings[j].reshape(1,1024))[0,0] - elif(len_embeddings == 768): - sim_mat[i][j] = cosine_similarity(corpus_embeddings[i].reshape(1,768), corpus_embeddings[j].reshape(1,768))[0,0] - nx_graph = nx.from_numpy_array(sim_mat) - scores = nx.pagerank(nx_graph, max_iter = 1500) - sentences=((scores[i],s) for i,s in enumerate(corpus)) - for elem in sentences: - element.append(elem[0]) - for sc, lst in zip(score_list, element): ########## taking the scores from both the lists - sum1=sc+lst - sum_list.append(sum1) - x=sorted(((sum_list[i],s) for i,s in enumerate(corpus)), reverse=True) - for elem in x: - final_textrank_list.append(elem[1]) - - ################################################################ Textrank ends ################################################# - - ######################################################## From here we start the keyphrase extraction process ################################################ - - a=int((10*len(final_textrank_list))/100.0) - if(a<5): - total=5 - else: - total=int(a) - for i in range(total): - document.append(final_textrank_list[i]) - doc=" ".join(document) - for i in document: - doc_1=nlp(i) - text_doc.append([X.text for X in doc_1.ents]) - entity_list = [item for sublist in text_doc for item in sublist] - entity_list = [word for word in entity_list if not word in all_stopwords] - entity_list = [word_entity for word_entity in entity_list if(p.singular_noun(word_entity) == False)] - entity_list=list(dict.fromkeys(entity_list)) - doc_embedding = model_2.encode([doc]) - candidates=entity_list - candidate_embeddings = model_2.encode(candidates) - distances = cosine_similarity(doc_embedding, candidate_embeddings) - top_n = max_num_keywords - keyword_list = [candidates[index] for index in distances.argsort()[0][-top_n:]] - keywords = '\n'.join(keyword_list) - - ############################################################## Keyphrase extraction ends ############################################# - - - ################################################################# From here we start the clustering and query generation ################################## - - c_len=(len(keyword_list)) - keyword_embeddings = embedder.encode(keyword_list) - data_embeddings = embedder.encode(keyword_list) - - for num_clusters in range(1, top_n): - clustering_model = KMeans(n_clusters=num_clusters) - clustering_model.fit(keyword_embeddings) - cluster_assignment = clustering_model.labels_ - clustered_sentences = [[] for i in range(num_clusters)] - for sentence_id, cluster_id in enumerate(cluster_assignment): - clustered_sentences[cluster_id].append(keyword_list[sentence_id]) - cl_sent_len=(len(clustered_sentences)) - list_cluster=list(clustered_sentences) - a=len(list_cluster) - 
cluster_list_final.append(list_cluster) - if (c_len==cl_sent_len and c_len>=3) or cl_sent_len==1: - silhouette_avg = 0 - silhouette_score_list.append(silhouette_avg) - elif c_len==cl_sent_len==2: - silhouette_avg = 1 - silhouette_score_list.append(silhouette_avg) - else: - silhouette_avg = silhouette_score(keyword_embeddings, cluster_assignment) - silhouette_score_list.append(silhouette_avg) - res_dict = dict(zip(silhouette_score_list, cluster_list_final)) - cluster_items=res_dict[max(res_dict)] - - for i in cluster_items: - z=' OR '.join(i) - comb.append("("+z+")") - comb_list.append(comb) - combinations = [] - for subset in itertools.combinations(comb, 2): - combinations.append(subset) - f1_list=[] - for s in combinations: - final = ' AND '.join(s) - f1_list.append("("+final+")") - f_1=' OR '.join(f1_list) - final_list.append(f_1) - -######################################################## query generation ends here ####################################### - -####################################### PubeMed abstract extraction starts here ######################################### - - ncbi_url='https://eutils.ncbi.nlm.nih.gov/entrez/eutils/' - - last_url='esearch.fcgi?db=pubmed'+'&term='+f_1 - overall_url=ncbi_url+last_url+'&rettype=json'+'&sort=relevance' - pubmed_search_request = requests.get(overall_url) - - root = ET.fromstring(pubmed_search_request.text) - levels = root.findall('.//Id') - search_id_list=[] - for level in levels: - name = level.text - search_id_list.append(name) - all_search_ids = ','.join(search_id_list) - fetch_url='efetch.fcgi?db=pubmed' - search_id='&id='+all_search_ids - return_url=ncbi_url+fetch_url+search_id+'&rettype=text'+'&retmode=xml'+'&retmax=500'+'&sort=relevance' - pubmed_abstract_request = requests.get(return_url) - root_1 = ET.fromstring(pubmed_abstract_request.text) - article_title = root_1.findall('.//ArticleTitle') - for a in article_title: - article_title_name = a.text - titles_list.append(article_title_name) - article_abstract = root_1.findall('.//AbstractText') - for b in article_abstract: - article_abstract_name = b.text - abstracts_list.append(article_abstract_name) - -################################## PubMed extraction ends here ######################################################## - -########################################## Most relevant abstracts as per news article heading starts here ########################################## - - first_article = Article(url, language='en') - first_article.download() - first_article.parse() - article_heading=(first_article.title) - article_heading=sent_tokenize(article_heading) - model_4 = SentenceTransformer(model_4) - - my_dict = dict(zip(titles_list,abstracts_list)) - title_embeddings = model_4.encode(titles_list) - heading_embedding = model_4.encode(article_heading) - similarities = cosine_similarity(heading_embedding, title_embeddings) - max_n = max_retrieved - sorted_titles = [titles_list[index] for index in similarities.argsort()[0][-max_n:]] - sorted_abstract_list=[] - for list_elem in sorted_titles: - sorted_abstract_list.append(my_dict[list_elem]) - sorted_dict = {'Title': sorted_titles, 'Abstract': sorted_abstract_list} - df_new=pd.DataFrame(dict([ (k,pd.Series(v)) for k,v in sorted_dict.items() ])) - df_final = df_new.fillna(' ') - #fp = df_final.to_csv('title_abstract.csv', index=False) - - -############################################ Ends here #################################################### - - #return df_final - #return fp - return sorted_dict - - -igen_pubmed = 
gr.Interface(keyphrase_generator, - inputs=[gr.inputs.Textbox(lines=1, placeholder="Provide article web link here (Can be chosen from examples below)",default="", label="Article web link"), - gr.inputs.Dropdown(choices=['sentence-transformers/all-mpnet-base-v2', - 'sentence-transformers/all-mpnet-base-v1', - 'sentence-transformers/all-distilroberta-v1', - 'sentence-transformers/gtr-t5-large', - 'pritamdeka/S-Bluebert-snli-multinli-stsb', - 'pritamdeka/S-Biomed-Roberta-snli-multinli-stsb', - 'pritamdeka/S-BioBert-snli-multinli-stsb', - 'sentence-transformers/stsb-mpnet-base-v2', - 'sentence-transformers/stsb-roberta-base-v2', - 'sentence-transformers/stsb-distilroberta-base-v2', - 'sentence-transformers/sentence-t5-large', - 'sentence-transformers/sentence-t5-base'], - type="value", - default='sentence-transformers/stsb-roberta-base-v2', - label="Select any SBERT model for TextRank from the list below"), - gr.inputs.Dropdown(choices=['sentence-transformers/paraphrase-mpnet-base-v2', - 'sentence-transformers/all-mpnet-base-v1', - 'sentence-transformers/paraphrase-distilroberta-base-v1', - 'sentence-transformers/paraphrase-xlm-r-multilingual-v1', - 'sentence-transformers/paraphrase-multilingual-mpnet-base-v2', - 'sentence-transformers/paraphrase-albert-small-v2', - 'sentence-transformers/paraphrase-albert-base-v2', - 'sentence-transformers/paraphrase-MiniLM-L12-v2', - 'sentence-transformers/paraphrase-MiniLM-L6-v2', - 'sentence-transformers/all-MiniLM-L12-v2', - 'sentence-transformers/all-distilroberta-v1', - 'sentence-transformers/paraphrase-TinyBERT-L6-v2', - 'sentence-transformers/paraphrase-MiniLM-L3-v2', - 'sentence-transformers/all-MiniLM-L6-v2'], - type="value", - default='sentence-transformers/all-mpnet-base-v1', - label="Select any SBERT model for keyphrases from the list below"), - gr.inputs.Slider(minimum=5, maximum=20, step=1, default=10, label="Max Keywords"), - gr.inputs.Dropdown(choices=['cambridgeltl/SapBERT-from-PubMedBERT-fulltext', - 'cambridgeltl/SapBERT-from-PubMedBERT-fulltext-mean-token'], - type="value", - default='cambridgeltl/SapBERT-from-PubMedBERT-fulltext', - label="Select any SapBERT model for clustering from the list below"), - gr.inputs.Slider(minimum=5, maximum=15, step=1, default=10, label="PubMed Max Abstracts"), - gr.inputs.Dropdown(choices=['pritamdeka/S-Bluebert-snli-multinli-stsb', - 'pritamdeka/S-BioBert-snli-multinli-stsb', - 'pritamdeka/S-Biomed-Roberta-snli-multinli-stsb', - 'sentence-transformers/all-mpnet-base-v2'], - type="value", - default='sentence-transformers/all-mpnet-base-v2', - label="Select any SBERT model for abstracts from the list below")], - #outputs=gr.outputs.Dataframe(type="auto", label="Retrieved Results from PubMed",max_cols=2, overflow_row_behaviour="paginate"), - outputs=gr.outputs.JSON(label="Title and Abstracts"), - #outputs=gr.outputs.File(label=None), - theme="peach", layout="horizontal", - title="PubMed Abstract Retriever", description="Retrieves relevant PubMed abstracts for an online article which can be used as further references. The output is in the form of JSON with Title and Abstract as the fields of the JSON output. Please note that it may take sometime for the models to load. Examples are provided below for demo purposes. Choose any one example to see the results. The models can be changed to see different results. 
", - examples=[ - ["https://www.cancer.news/2021-12-22-mrna-vaccines-weaken-immune-system-cause-cancer.html", - 'sentence-transformers/all-mpnet-base-v1', - 'sentence-transformers/paraphrase-MiniLM-L12-v2', - 10, - 'cambridgeltl/SapBERT-from-PubMedBERT-fulltext', - 10, - 'pritamdeka/S-Biomed-Roberta-snli-multinli-stsb'], - - ["https://www.cancer.news/2022-02-04-doctors-testifying-covid-vaccines-causing-cancer-aids.html#", - 'sentence-transformers/all-mpnet-base-v1', - 'sentence-transformers/all-mpnet-base-v1', - 10, - 'cambridgeltl/SapBERT-from-PubMedBERT-fulltext', - 10, - 'pritamdeka/S-Biomed-Roberta-snli-multinli-stsb'], - - ["https://www.medicalnewstoday.com/articles/alzheimers-addressing-sleep-disturbance-may-alleviate-symptoms", - 'pritamdeka/S-Biomed-Roberta-snli-multinli-stsb', - 'sentence-transformers/all-mpnet-base-v1', - 10, - 'cambridgeltl/SapBERT-from-PubMedBERT-fulltext', - 10, - 'pritamdeka/S-Biomed-Roberta-snli-multinli-stsb'], - - ["https://www.medicalnewstoday.com/articles/omicron-what-do-we-know-about-the-stealth-variant", - 'pritamdeka/S-Biomed-Roberta-snli-multinli-stsb', - 'sentence-transformers/all-mpnet-base-v1', - 10, - 'cambridgeltl/SapBERT-from-PubMedBERT-fulltext', - 10, - 'pritamdeka/S-Biomed-Roberta-snli-multinli-stsb'] - ], - article= "This work is based on the paper provided here." - "\t It uses the TextRank algorithm with SBERT to first find the top sentences and then extracts the keyphrases from those sentences using scispaCy and SBERT." - "\t The application then uses a UMLS based BERT model, SapBERT to cluster the keyphrases using K-means clustering method and finally create a boolean query. After that the top k titles and abstracts are retrieved from PubMed database and displayed according to relevancy. The SapBERT models can be changed as per the list provided. " - "\t The list of SBERT models required in the textboxes can be found in SBERT Pre-trained models hub." - "\t The model names can be changed from the list of pre-trained models provided. " - "\t The value of keyphrases can be changed. The default value is 10, minimum is 5 and a maximum value of 20. " - "\t The value of maximum abstracts to be retrieved can be changed. 
The minimum is 5, default is 10 and a maximum of 15.") - -igen_pubmed.launch(share=False,server_name='0.0.0.0',show_error=True) \ No newline at end of file diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/carafe/faster_rcnn_r50_fpn_carafe_1x_coco.py b/spaces/Gradio-Blocks/uniformer_image_detection/configs/carafe/faster_rcnn_r50_fpn_carafe_1x_coco.py deleted file mode 100644 index dedac3f46b4710d16a8bc66f00663e379b2ebdc7..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/carafe/faster_rcnn_r50_fpn_carafe_1x_coco.py +++ /dev/null @@ -1,50 +0,0 @@ -_base_ = '../faster_rcnn/faster_rcnn_r50_fpn_1x_coco.py' -model = dict( - neck=dict( - type='FPN_CARAFE', - in_channels=[256, 512, 1024, 2048], - out_channels=256, - num_outs=5, - start_level=0, - end_level=-1, - norm_cfg=None, - act_cfg=None, - order=('conv', 'norm', 'act'), - upsample_cfg=dict( - type='carafe', - up_kernel=5, - up_group=1, - encoder_kernel=3, - encoder_dilation=1, - compressed_channels=64))) -img_norm_cfg = dict( - mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True) -train_pipeline = [ - dict(type='LoadImageFromFile'), - dict(type='LoadAnnotations', with_bbox=True), - dict(type='Resize', img_scale=(1333, 800), keep_ratio=True), - dict(type='RandomFlip', flip_ratio=0.5), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=64), - dict(type='DefaultFormatBundle'), - dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels']), -] -test_pipeline = [ - dict(type='LoadImageFromFile'), - dict( - type='MultiScaleFlipAug', - img_scale=(1333, 800), - flip=False, - transforms=[ - dict(type='Resize', keep_ratio=True), - dict(type='RandomFlip'), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=64), - dict(type='ImageToTensor', keys=['img']), - dict(type='Collect', keys=['img']), - ]) -] -data = dict( - train=dict(pipeline=train_pipeline), - val=dict(pipeline=test_pipeline), - test=dict(pipeline=test_pipeline)) diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/sparse_rcnn/sparse_rcnn_r101_fpn_mstrain_480-800_3x_coco.py b/spaces/Gradio-Blocks/uniformer_image_detection/configs/sparse_rcnn/sparse_rcnn_r101_fpn_mstrain_480-800_3x_coco.py deleted file mode 100644 index 0439fc1aa28408df89d6d3b657837654bbbbbcdb..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/sparse_rcnn/sparse_rcnn_r101_fpn_mstrain_480-800_3x_coco.py +++ /dev/null @@ -1,3 +0,0 @@ -_base_ = './sparse_rcnn_r50_fpn_mstrain_480-800_3x_coco.py' - -model = dict(pretrained='torchvision://resnet101', backbone=dict(depth=101)) diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/datasets/pipelines/test_time_aug.py b/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/datasets/pipelines/test_time_aug.py deleted file mode 100644 index b6226e040499882c99f15594c66ebf3d07829168..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_detection/mmdet/datasets/pipelines/test_time_aug.py +++ /dev/null @@ -1,119 +0,0 @@ -import warnings - -import mmcv - -from ..builder import PIPELINES -from .compose import Compose - - -@PIPELINES.register_module() -class MultiScaleFlipAug(object): - """Test-time augmentation with multiple scales and flipping. - - An example configuration is as followed: - - .. 
code-block:: - - img_scale=[(1333, 400), (1333, 800)], - flip=True, - transforms=[ - dict(type='Resize', keep_ratio=True), - dict(type='RandomFlip'), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='ImageToTensor', keys=['img']), - dict(type='Collect', keys=['img']), - ] - - After MultiScaleFLipAug with above configuration, the results are wrapped - into lists of the same length as followed: - - .. code-block:: - - dict( - img=[...], - img_shape=[...], - scale=[(1333, 400), (1333, 400), (1333, 800), (1333, 800)] - flip=[False, True, False, True] - ... - ) - - Args: - transforms (list[dict]): Transforms to apply in each augmentation. - img_scale (tuple | list[tuple] | None): Images scales for resizing. - scale_factor (float | list[float] | None): Scale factors for resizing. - flip (bool): Whether apply flip augmentation. Default: False. - flip_direction (str | list[str]): Flip augmentation directions, - options are "horizontal" and "vertical". If flip_direction is list, - multiple flip augmentations will be applied. - It has no effect when flip == False. Default: "horizontal". - """ - - def __init__(self, - transforms, - img_scale=None, - scale_factor=None, - flip=False, - flip_direction='horizontal'): - self.transforms = Compose(transforms) - assert (img_scale is None) ^ (scale_factor is None), ( - 'Must have but only one variable can be setted') - if img_scale is not None: - self.img_scale = img_scale if isinstance(img_scale, - list) else [img_scale] - self.scale_key = 'scale' - assert mmcv.is_list_of(self.img_scale, tuple) - else: - self.img_scale = scale_factor if isinstance( - scale_factor, list) else [scale_factor] - self.scale_key = 'scale_factor' - - self.flip = flip - self.flip_direction = flip_direction if isinstance( - flip_direction, list) else [flip_direction] - assert mmcv.is_list_of(self.flip_direction, str) - if not self.flip and self.flip_direction != ['horizontal']: - warnings.warn( - 'flip_direction has no effect when flip is set to False') - if (self.flip - and not any([t['type'] == 'RandomFlip' for t in transforms])): - warnings.warn( - 'flip has no effect when RandomFlip is not in transforms') - - def __call__(self, results): - """Call function to apply test time augment transforms on results. - - Args: - results (dict): Result dict contains the data to transform. - - Returns: - dict[str: list]: The augmented data, where each value is wrapped - into a list. 
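To make the collation concrete, the placeholder sketch below reproduces the "list of dict to dict of list" regrouping performed at the end of `__call__` for two scales with horizontal flipping; the values are illustrative, not real pipeline outputs.

```python
# Placeholder illustration of the regrouping at the end of __call__:
# 2 scales x (no flip + horizontal flip) = 4 augmented copies.
aug_data = [
    {"scale": (1333, 400), "flip": False},
    {"scale": (1333, 400), "flip": True},
    {"scale": (1333, 800), "flip": False},
    {"scale": (1333, 800), "flip": True},
]
aug_data_dict = {key: [d[key] for d in aug_data] for key in aug_data[0]}
# {'scale': [(1333, 400), (1333, 400), (1333, 800), (1333, 800)],
#  'flip': [False, True, False, True]}
```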
- """ - - aug_data = [] - flip_args = [(False, None)] - if self.flip: - flip_args += [(True, direction) - for direction in self.flip_direction] - for scale in self.img_scale: - for flip, direction in flip_args: - _results = results.copy() - _results[self.scale_key] = scale - _results['flip'] = flip - _results['flip_direction'] = direction - data = self.transforms(_results) - aug_data.append(data) - # list of dict to dict of list - aug_data_dict = {key: [] for key in aug_data[0]} - for data in aug_data: - for key, val in data.items(): - aug_data_dict[key].append(val) - return aug_data_dict - - def __repr__(self): - repr_str = self.__class__.__name__ - repr_str += f'(transforms={self.transforms}, ' - repr_str += f'img_scale={self.img_scale}, flip={self.flip}, ' - repr_str += f'flip_direction={self.flip_direction})' - return repr_str diff --git a/spaces/GrandaddyShmax/AudioCraft_Plus/audiocraft/metrics/rvm.py b/spaces/GrandaddyShmax/AudioCraft_Plus/audiocraft/metrics/rvm.py deleted file mode 100644 index 028324529531dd7ee97210dfd890fed717447be0..0000000000000000000000000000000000000000 --- a/spaces/GrandaddyShmax/AudioCraft_Plus/audiocraft/metrics/rvm.py +++ /dev/null @@ -1,106 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import typing as tp -import torch -from torch import nn -import torchaudio - - -def db_to_scale(volume: tp.Union[float, torch.Tensor]): - return 10 ** (volume / 20) - - -def scale_to_db(scale: torch.Tensor, min_volume: float = -120): - min_scale = db_to_scale(min_volume) - return 20 * torch.log10(scale.clamp(min=min_scale)) - - -class RelativeVolumeMel(nn.Module): - """Relative volume melspectrogram measure. - - Computes a measure of distance over two mel spectrogram that is interpretable in terms - of decibels. Given `x_ref` and `x_est` two waveforms of shape `[*, T]`, it will - first renormalize both by the ground truth of `x_ref`. - - Then it computes the mel spectrogram `z_ref` and `z_est` and compute volume of the difference - relative to the volume of `z_ref` for each time-frequency bin. It further adds some limits, e.g. - clamping the values between -25 and 25 dB (controlled by `min_relative_volume` and `max_relative_volume`) - with the goal of avoiding the loss being dominated by parts where the reference is almost silent. - Indeed, volumes in dB can take unbounded values both towards -oo and +oo, which can make the final - average metric harder to interpret. Besides, anything below -30 dB of attenuation would sound extremely - good (for a neural network output, although sound engineers typically aim for much lower attenuations). - Similarly, anything above +30 dB would just be completely missing the target, and there is no point - in measuring by exactly how much it missed it. -25, 25 is a more conservative range, but also more - in line with what neural nets currently can achieve. - - For instance, a Relative Volume Mel (RVM) score of -10 dB means that on average, the delta between - the target and reference mel-spec is 10 dB lower than the reference mel-spec value. - - The metric can be aggregated over a given frequency band in order have different insights for - different region of the spectrum. `num_aggregated_bands` controls the number of bands. 
- - ..Warning:: While this function is optimized for interpretability, nothing was done to ensure it - is numerically stable when computing its gradient. We thus advise against using it as a training loss. - - Args: - sample_rate (int): Sample rate of the input audio. - n_mels (int): Number of mel bands to use. - n_fft (int): Number of frequency bins for the STFT. - hop_length (int): Hop length of the STFT and the mel-spectrogram. - min_relative_volume (float): The error `z_ref - z_est` volume is given relative to - the volume of `z_ref`. If error is smaller than -25 dB of `z_ref`, then it is clamped. - max_relative_volume (float): Same as `min_relative_volume` but clamping if the error is larger than that. - max_initial_gain (float): When rescaling the audio at the very beginning, we will limit the gain - to that amount, to avoid rescaling near silence. Given in dB. - min_activity_volume (float): When computing the reference level from `z_ref`, will clamp low volume - bins to that amount. This is effectively our "zero" level for the reference mel-spectrogram, - and anything below that will be considered equally. - num_aggregated_bands (int): Number of bands to keep when computing the average RVM value. - For instance, a value of 3 would give 3 scores, roughly for low, mid and high freqs. - """ - def __init__(self, sample_rate: int = 24000, n_mels: int = 80, n_fft: int = 512, - hop_length: int = 128, min_relative_volume: float = -25, - max_relative_volume: float = 25, max_initial_gain: float = 25, - min_activity_volume: float = -25, - num_aggregated_bands: int = 4) -> None: - super().__init__() - self.melspec = torchaudio.transforms.MelSpectrogram( - n_mels=n_mels, n_fft=n_fft, hop_length=hop_length, - normalized=True, sample_rate=sample_rate, power=2) - self.min_relative_volume = min_relative_volume - self.max_relative_volume = max_relative_volume - self.max_initial_gain = max_initial_gain - self.min_activity_volume = min_activity_volume - self.num_aggregated_bands = num_aggregated_bands - - def forward(self, estimate: torch.Tensor, ground_truth: torch.Tensor) -> tp.Dict[str, torch.Tensor]: - """Compute RVM metric between estimate and reference samples. - - Args: - estimate (torch.Tensor): Estimate sample. - ground_truth (torch.Tensor): Reference sample. - - Returns: - dict[str, torch.Tensor]: Metrics with keys `rvm` for the overall average, and `rvm_{k}` - for the RVM over the k-th band (k=0..num_aggregated_bands - 1). 
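A hedged usage sketch, not part of the original file: it assumes `RelativeVolumeMel` is in scope and `torchaudio` is installed, and the 24 kHz waveforms are random, purely illustrative inputs.

```python
# Hypothetical usage of RelativeVolumeMel (random waveforms, illustrative only).
import torch

rvm = RelativeVolumeMel(sample_rate=24000, num_aggregated_bands=4)
ground_truth = torch.randn(1, 24000)                      # 1 second of "reference" audio
estimate = ground_truth + 0.05 * torch.randn(1, 24000)    # mildly degraded copy
metrics = rvm(estimate, ground_truth)
print(metrics["rvm"])                                      # overall average, in dB
print([metrics[f"rvm_{k}"].item() for k in range(4)])      # per-band scores, low to high freqs
```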
- """ - min_scale = db_to_scale(-self.max_initial_gain) - std = ground_truth.pow(2).mean().sqrt().clamp(min=min_scale) - z_gt = self.melspec(ground_truth / std).sqrt() - z_est = self.melspec(estimate / std).sqrt() - - delta = z_gt - z_est - ref_db = scale_to_db(z_gt, self.min_activity_volume) - delta_db = scale_to_db(delta.abs(), min_volume=-120) - relative_db = (delta_db - ref_db).clamp(self.min_relative_volume, self.max_relative_volume) - dims = list(range(relative_db.dim())) - dims.remove(dims[-2]) - losses_per_band = relative_db.mean(dim=dims) - aggregated = [chunk.mean() for chunk in losses_per_band.chunk(self.num_aggregated_bands, dim=0)] - metrics = {f'rvm_{index}': value for index, value in enumerate(aggregated)} - metrics['rvm'] = losses_per_band.mean() - return metrics diff --git a/spaces/GrandaddyShmax/AudioCraft_Plus/audiocraft/modules/activations.py b/spaces/GrandaddyShmax/AudioCraft_Plus/audiocraft/modules/activations.py deleted file mode 100644 index 2d83d7c4c2dc84c64b724eadbe06157507d4f20d..0000000000000000000000000000000000000000 --- a/spaces/GrandaddyShmax/AudioCraft_Plus/audiocraft/modules/activations.py +++ /dev/null @@ -1,96 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import torch -import torch.nn as nn -from torch import Tensor -from typing import Union, Callable - - -class CustomGLU(nn.Module): - """Custom Gated Linear Unit activation. - Applies a modified gated linear unit :math:`a * f(b)` where :math:`a` is the first half - of the input matrices, :math:`b` is the second half, and :math:`f` is a provided activation - function (i.e. sigmoid, swish, etc.). - - Args: - activation (nn.Module): The custom activation to apply in the Gated Linear Unit - dim (int): the dimension on which to split the input. Default: -1 - - Shape: - - Input: :math:`(\ast_1, N, \ast_2)` where `*` means, any number of additional - dimensions - - Output: :math:`(\ast_1, M, \ast_2)` where :math:`M=N/2` - - Examples:: - >>> m = CustomGLU(nn.Sigmoid()) - >>> input = torch.randn(4, 2) - >>> output = m(input) - """ - def __init__(self, activation: nn.Module, dim: int = -1): - super(CustomGLU, self).__init__() - self.dim = dim - self.activation = activation - - def forward(self, x: Tensor): - assert x.shape[self.dim] % 2 == 0 # M = N / 2 - a, b = torch.chunk(x, 2, dim=self.dim) - return a * self.activation(b) - - -class SwiGLU(CustomGLU): - """SiLU Gated Linear Unit activation. - Applies SiLU Gated Linear Unit :math:`a * SiLU(b)` where :math:`a` is - the first half of the input matrices, :math:`b` is the second half. - - Args: - dim (int): the dimension on which to split the input. Default: -1 - """ - def __init__(self, dim: int = -1): - super(SwiGLU, self).__init__(nn.SiLU(), dim) - - -class GeGLU(CustomGLU): - """GeLU Gated Linear Unit activation. - Applies GeLU Gated Linear Unit :math:`a * GELU(b)` where :math:`a` is - the first half of the input matrices, :math:`b` is the second half. - - Args: - dim (int): the dimension on which to split the input. Default: -1 - """ - def __init__(self, dim: int = -1): - super(GeGLU, self).__init__(nn.GELU(), dim) - - -class ReGLU(CustomGLU): - """ReLU Gated Linear Unit activation. - Applies ReLU Gated Linear Unit :math:`a * ReLU(b)` where :math:`a` is - the first half of the input matrices, :math:`b` is the second half. - - Args: - dim (int): the dimension on which to split the input. 
Default: -1 - """ - def __init__(self, dim: int = -1): - super(ReGLU, self).__init__(nn.ReLU(), dim) - - -def get_activation_fn( - activation: Union[str, Callable[[Tensor], Tensor]] -) -> Union[str, Callable[[Tensor], Tensor]]: - """Helper function to map an activation string to the activation class. - If the supplied activation is not a string that is recognized, the activation is passed back. - - Args: - activation (str, or Callable[[Tensor], Tensor]): Activation to check - """ - if isinstance(activation, str): - if activation == "reglu": - return ReGLU() - elif activation == "geglu": - return GeGLU() - elif activation == "swiglu": - return SwiGLU() - return activation diff --git a/spaces/GrandaddyShmax/MusicGen_Plus/tests/__init__.py b/spaces/GrandaddyShmax/MusicGen_Plus/tests/__init__.py deleted file mode 100644 index 0952fcc3f57e34b3747962e9ebd6fc57aeea63fa..0000000000000000000000000000000000000000 --- a/spaces/GrandaddyShmax/MusicGen_Plus/tests/__init__.py +++ /dev/null @@ -1,5 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. diff --git a/spaces/GroveStreet/GTA_SOVITS/vdecoder/hifiganwithsnake/alias/act.py b/spaces/GroveStreet/GTA_SOVITS/vdecoder/hifiganwithsnake/alias/act.py deleted file mode 100644 index 308344fb6ccbc39317c584a3ee1fb2f29084678e..0000000000000000000000000000000000000000 --- a/spaces/GroveStreet/GTA_SOVITS/vdecoder/hifiganwithsnake/alias/act.py +++ /dev/null @@ -1,129 +0,0 @@ -# Adapted from https://github.com/junjun3518/alias-free-torch under the Apache License 2.0 -# LICENSE is in incl_licenses directory. - -import torch -import torch.nn as nn -import torch.nn.functional as F - -from torch import sin, pow -from torch.nn import Parameter -from .resample import UpSample1d, DownSample1d - - -class Activation1d(nn.Module): - def __init__(self, - activation, - up_ratio: int = 2, - down_ratio: int = 2, - up_kernel_size: int = 12, - down_kernel_size: int = 12): - super().__init__() - self.up_ratio = up_ratio - self.down_ratio = down_ratio - self.act = activation - self.upsample = UpSample1d(up_ratio, up_kernel_size) - self.downsample = DownSample1d(down_ratio, down_kernel_size) - - # x: [B,C,T] - def forward(self, x): - x = self.upsample(x) - x = self.act(x) - x = self.downsample(x) - - return x - - -class SnakeBeta(nn.Module): - ''' - A modified Snake function which uses separate parameters for the magnitude of the periodic components - Shape: - - Input: (B, C, T) - - Output: (B, C, T), same shape as the input - Parameters: - - alpha - trainable parameter that controls frequency - - beta - trainable parameter that controls magnitude - References: - - This activation function is a modified version based on this paper by Liu Ziyin, Tilman Hartwig, Masahito Ueda: - https://arxiv.org/abs/2006.08195 - Examples: - >>> a1 = snakebeta(256) - >>> x = torch.randn(256) - >>> x = a1(x) - ''' - - def __init__(self, in_features, alpha=1.0, alpha_trainable=True, alpha_logscale=False): - ''' - Initialization. - INPUT: - - in_features: shape of the input - - alpha - trainable parameter that controls frequency - - beta - trainable parameter that controls magnitude - alpha is initialized to 1 by default, higher values = higher-frequency. - beta is initialized to 1 by default, higher values = higher-magnitude. - alpha will be trained along with the rest of your model. 
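A minimal usage sketch, assuming the `SnakeBeta` class defined here is in scope: the activation acts elementwise on `[B, C, T]` inputs as `x + (1/beta) * sin^2(alpha * x)`, with one trainable alpha/beta pair per channel, so the output shape matches the input.

```python
# Hypothetical usage of SnakeBeta on a [batch, channels, time] tensor.
import torch

act = SnakeBeta(in_features=64, alpha_logscale=True)
x = torch.randn(2, 64, 100)
y = act(x)
print(y.shape)            # torch.Size([2, 64, 100]) -- shape is preserved
```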
- ''' - super(SnakeBeta, self).__init__() - self.in_features = in_features - # initialize alpha - self.alpha_logscale = alpha_logscale - if self.alpha_logscale: # log scale alphas initialized to zeros - self.alpha = Parameter(torch.zeros(in_features) * alpha) - self.beta = Parameter(torch.zeros(in_features) * alpha) - else: # linear scale alphas initialized to ones - self.alpha = Parameter(torch.ones(in_features) * alpha) - self.beta = Parameter(torch.ones(in_features) * alpha) - self.alpha.requires_grad = alpha_trainable - self.beta.requires_grad = alpha_trainable - self.no_div_by_zero = 0.000000001 - - def forward(self, x): - ''' - Forward pass of the function. - Applies the function to the input elementwise. - SnakeBeta = x + 1/b * sin^2 (xa) - ''' - alpha = self.alpha.unsqueeze( - 0).unsqueeze(-1) # line up with x to [B, C, T] - beta = self.beta.unsqueeze(0).unsqueeze(-1) - if self.alpha_logscale: - alpha = torch.exp(alpha) - beta = torch.exp(beta) - x = x + (1.0 / (beta + self.no_div_by_zero)) * pow(sin(x * alpha), 2) - return x - - -class Mish(nn.Module): - """ - Mish activation function is proposed in "Mish: A Self - Regularized Non-Monotonic Neural Activation Function" - paper, https://arxiv.org/abs/1908.08681. - """ - - def __init__(self): - super().__init__() - - def forward(self, x): - return x * torch.tanh(F.softplus(x)) - - -class SnakeAlias(nn.Module): - def __init__(self, - channels, - up_ratio: int = 2, - down_ratio: int = 2, - up_kernel_size: int = 12, - down_kernel_size: int = 12): - super().__init__() - self.up_ratio = up_ratio - self.down_ratio = down_ratio - self.act = SnakeBeta(channels, alpha_logscale=True) - self.upsample = UpSample1d(up_ratio, up_kernel_size) - self.downsample = DownSample1d(down_ratio, down_kernel_size) - - # x: [B,C,T] - def forward(self, x): - x = self.upsample(x) - x = self.act(x) - x = self.downsample(x) - - return x \ No newline at end of file diff --git a/spaces/HaMerL/ChaosinChat/modules/models/tokenization_moss.py b/spaces/HaMerL/ChaosinChat/modules/models/tokenization_moss.py deleted file mode 100644 index 626315eb9e429ada99a15b04b9736c05e6743ffe..0000000000000000000000000000000000000000 --- a/spaces/HaMerL/ChaosinChat/modules/models/tokenization_moss.py +++ /dev/null @@ -1,368 +0,0 @@ -"""Tokenization classes for Moss""" - -import json -import os -import numpy as np -import regex as re - -from functools import lru_cache -from typing import TYPE_CHECKING, List, Optional, Tuple, Union - -from transformers.utils import is_tf_available, is_torch_available, logging -from transformers.tokenization_utils import AddedToken, PreTrainedTokenizer - - -if TYPE_CHECKING: - if is_torch_available(): - import torch - if is_tf_available(): - import tensorflow as tf - - -logger = logging.get_logger(__name__) - -VOCAB_FILES_NAMES = { - "vocab_file": "vocab.json", - "merges_file": "merges.txt", -} - -PRETRAINED_VOCAB_FILES_MAP = { - "vocab_file": { - "fnlp/moss-moon-003-base": "https://huggingface.co/fnlp/moss-moon-003-base/resolve/main/vocab.json", - "fnlp/moss-moon-003-sft": "https://huggingface.co/fnlp/moss-moon-003-sft/resolve/main/vocab.json", - "fnlp/moss-moon-003-sft-plugin": "https://huggingface.co/fnlp/moss-moon-003-sft-plugin/resolve/main/vocab.json", - }, - "merges_file": { - "fnlp/moss-moon-003-base": "https://huggingface.co/fnlp/moss-moon-003-base/resolve/main/merges.txt", - "fnlp/moss-moon-003-sft": "https://huggingface.co/fnlp/moss-moon-003-sft/resolve/main/merges.txt", - "fnlp/moss-moon-003-sft-plugin": 
"https://huggingface.co/fnlp/moss-moon-003-sft-plugin/resolve/main/merges.txt", - }, -} - -PRETRAINED_POSITIONAL_EMBEDDINGS_SIZES = { - "fnlp/moss-moon-003-base": 2048, - "fnlp/moss-moon-003-sft": 2048, - "fnlp/moss-moon-003-sft-plugin": 2048, -} - - -@lru_cache() -def bytes_to_unicode(): - """ - Returns list of utf-8 byte and a mapping to unicode strings. We specifically avoids mapping to whitespace/control - characters the bpe code barfs on. - - The reversible bpe codes work on unicode strings. This means you need a large # of unicode characters in your vocab - if you want to avoid UNKs. When you're at something like a 10B token dataset you end up needing around 5K for - decent coverage. This is a significant percentage of your normal, say, 32K bpe vocab. To avoid that, we want lookup - tables between utf-8 bytes and unicode strings. - """ - bs = ( - list(range(ord("!"), ord("~") + 1)) + list(range(ord("¡"), ord("¬") + 1)) + list(range(ord("®"), ord("ÿ") + 1)) - ) - cs = bs[:] - n = 0 - for b in range(2**8): - if b not in bs: - bs.append(b) - cs.append(2**8 + n) - n += 1 - cs = [chr(n) for n in cs] - return dict(zip(bs, cs)) - - -def get_pairs(word): - """ - Return set of symbol pairs in a word. - - Word is represented as tuple of symbols (symbols being variable-length strings). - """ - pairs = set() - prev_char = word[0] - for char in word[1:]: - pairs.add((prev_char, char)) - prev_char = char - return pairs - - -class MossTokenizer(PreTrainedTokenizer): - """ - Construct a Moss tokenizer. Based on byte-level Byte-Pair-Encoding. - - This tokenizer has been trained to treat spaces like parts of the tokens (a bit like sentencepiece) so a word will - be encoded differently whether it is at the beginning of the sentence (without space) or not: - - You can get around that behavior by passing `add_prefix_space=True` when instantiating this tokenizer or when you - call it on some text, but since the model was not pretrained this way, it might yield a decrease in performance. - - - - When used with `is_split_into_words=True`, this tokenizer will add a space before each word (even the first one). - - - - This tokenizer inherits from [`PreTrainedTokenizer`] which contains most of the main methods. Users should refer to - this superclass for more information regarding those methods. - - Args: - vocab_file (`str`): - Path to the vocabulary file. - merges_file (`str`): - Path to the merges file. - errors (`str`, *optional*, defaults to `"replace"`): - Paradigm to follow when decoding bytes to UTF-8. See - [bytes.decode](https://docs.python.org/3/library/stdtypes.html#bytes.decode) for more information. - unk_token (`str`, *optional*, defaults to `<|endoftext|>`): - The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this - token instead. - bos_token (`str`, *optional*, defaults to `<|endoftext|>`): - The beginning of sequence token. - eos_token (`str`, *optional*, defaults to `<|endoftext|>`): - The end of sequence token. - add_prefix_space (`bool`, *optional*, defaults to `False`): - Whether or not to add an initial space to the input. This allows to treat the leading word just as any - other word. (Moss tokenizer detect beginning of words by the preceding space). 
- """ - - vocab_files_names = VOCAB_FILES_NAMES - pretrained_vocab_files_map = PRETRAINED_VOCAB_FILES_MAP - max_model_input_sizes = PRETRAINED_POSITIONAL_EMBEDDINGS_SIZES - model_input_names = ["input_ids", "attention_mask"] - - def __init__( - self, - vocab_file, - merges_file, - errors="replace", - unk_token="<|endoftext|>", - bos_token="<|endoftext|>", - eos_token="", - pad_token=None, - add_prefix_space=False, - add_bos_token=False, - **kwargs, - ): - bos_token = AddedToken(bos_token, lstrip=False, rstrip=False) if isinstance(bos_token, str) else bos_token - eos_token = AddedToken(eos_token, lstrip=False, rstrip=False) if isinstance(eos_token, str) else eos_token - unk_token = AddedToken(unk_token, lstrip=False, rstrip=False) if isinstance(unk_token, str) else unk_token - pad_token = AddedToken(pad_token, lstrip=False, rstrip=False) if isinstance(pad_token, str) else pad_token - super().__init__( - errors=errors, - unk_token=unk_token, - bos_token=bos_token, - eos_token=eos_token, - pad_token=pad_token, - add_prefix_space=add_prefix_space, - add_bos_token=add_bos_token, - **kwargs, - ) - self.add_bos_token = add_bos_token - - with open(vocab_file, encoding="utf-8") as vocab_handle: - self.encoder = json.load(vocab_handle) - self.decoder = {v: k for k, v in self.encoder.items()} - self.errors = errors # how to handle errors in decoding - self.byte_encoder = bytes_to_unicode() - self.byte_decoder = {v: k for k, v in self.byte_encoder.items()} - with open(merges_file, encoding="utf-8") as merges_handle: - bpe_merges = merges_handle.read().split("\n")[1:-1] - bpe_merges = [tuple(merge.split()) for merge in bpe_merges] - self.bpe_ranks = dict(zip(bpe_merges, range(len(bpe_merges)))) - self.cache = {} - self.add_prefix_space = add_prefix_space - - # Should have added re.IGNORECASE so BPE merges can happen for capitalized versions of contractions - self.pat = re.compile(r"""'s|'t|'re|'ve|'m|'ll|'d| ?\p{L}+| ?\p{N}+| ?[^\s\p{L}\p{N}]+|\s+(?!\S)|\s+""") - - @property - def vocab_size(self): - return len(self.encoder) - - def get_vocab(self): - return dict(self.encoder, **self.added_tokens_encoder) - - def bpe(self, token): - if token in self.cache: - return self.cache[token] - word = tuple(token) - pairs = get_pairs(word) - - if not pairs: - return token - - while True: - bigram = min(pairs, key=lambda pair: self.bpe_ranks.get(pair, float("inf"))) - if bigram not in self.bpe_ranks: - break - first, second = bigram - new_word = [] - i = 0 - while i < len(word): - try: - j = word.index(first, i) - except ValueError: - new_word.extend(word[i:]) - break - else: - new_word.extend(word[i:j]) - i = j - - if word[i] == first and i < len(word) - 1 and word[i + 1] == second: - new_word.append(first + second) - i += 2 - else: - new_word.append(word[i]) - i += 1 - new_word = tuple(new_word) - word = new_word - if len(word) == 1: - break - else: - pairs = get_pairs(word) - word = " ".join(word) - self.cache[token] = word - return word - - def build_inputs_with_special_tokens(self, token_ids_0, token_ids_1=None): - if self.add_bos_token: - bos_token_ids = [self.bos_token_id] - else: - bos_token_ids = [] - - output = bos_token_ids + token_ids_0 - - if token_ids_1 is None: - return output - - return output + bos_token_ids + token_ids_1 - - def _tokenize(self, text): - """Tokenize a string.""" - bpe_tokens = [] - for token in re.findall(self.pat, text): - token = "".join( - self.byte_encoder[b] for b in token.encode("utf-8") - ) # Maps all our bytes to unicode strings, avoiding control tokens of the BPE (spaces 
in our case) - bpe_tokens.extend(bpe_token for bpe_token in self.bpe(token).split(" ")) - return bpe_tokens - - def _convert_token_to_id(self, token): - """Converts a token (str) in an id using the vocab.""" - return self.encoder.get(token, self.encoder.get(self.unk_token)) - - def _convert_id_to_token(self, index): - """Converts an index (integer) in a token (str) using the vocab.""" - return self.decoder.get(index) - - def convert_tokens_to_string(self, tokens): - """Converts a sequence of tokens (string) in a single string.""" - text = "".join(tokens) - text = bytearray([self.byte_decoder[c] for c in text]).decode("utf-8", errors=self.errors) - return text - - def save_vocabulary(self, save_directory: str, filename_prefix: Optional[str] = None) -> Tuple[str]: - if not os.path.isdir(save_directory): - logger.error(f"Vocabulary path ({save_directory}) should be a directory") - return - vocab_file = os.path.join( - save_directory, (filename_prefix + "-" if filename_prefix else "") + VOCAB_FILES_NAMES["vocab_file"] - ) - merge_file = os.path.join( - save_directory, (filename_prefix + "-" if filename_prefix else "") + VOCAB_FILES_NAMES["merges_file"] - ) - - with open(vocab_file, "w", encoding="utf-8") as f: - f.write(json.dumps(self.encoder, indent=2, sort_keys=True, ensure_ascii=False) + "\n") - - index = 0 - with open(merge_file, "w", encoding="utf-8") as writer: - writer.write("#version: 0.2\n") - for bpe_tokens, token_index in sorted(self.bpe_ranks.items(), key=lambda kv: kv[1]): - if index != token_index: - logger.warning( - f"Saving vocabulary to {merge_file}: BPE merge indices are not consecutive." - " Please check that the tokenizer is not corrupted!" - ) - index = token_index - writer.write(" ".join(bpe_tokens) + "\n") - index += 1 - - return vocab_file, merge_file - - def prepare_for_tokenization(self, text, is_split_into_words=False, **kwargs): - add_prefix_space = kwargs.pop("add_prefix_space", self.add_prefix_space) - if is_split_into_words or add_prefix_space: - text = " " + text - return (text, kwargs) - - def decode( - self, - token_ids: Union[int, List[int], "np.ndarray", "torch.Tensor", "tf.Tensor"], - skip_special_tokens: bool = False, - clean_up_tokenization_spaces: bool = None, - truncate_before_pattern: Optional[List[str]] = None, - **kwargs, - ) -> str: - """ - Converts a sequence of ids in a string, using the tokenizer and vocabulary with options to remove special - tokens and clean up tokenization spaces. - - Similar to doing `self.convert_tokens_to_string(self.convert_ids_to_tokens(token_ids))`. - - Args: - token_ids (`Union[int, List[int], np.ndarray, torch.Tensor, tf.Tensor]`): - List of tokenized input ids. Can be obtained using the `__call__` method. - skip_special_tokens (`bool`, *optional*, defaults to `False`): - Whether or not to remove special tokens in the decoding. - clean_up_tokenization_spaces (`bool`, *optional*): - Whether or not to clean up the tokenization spaces. If `None`, will default to - `self.clean_up_tokenization_spaces` (available in the `tokenizer_config`). - truncate_before_pattern (`List[str]`, *optional*, defaults to `None`): - A list of regular expression strings that will be used to truncate the returned string. This can be - used to remove extra pieces of code (e.g. truncate if observing a comment symbol "#" at the beginning - of a new line). An example pattern could be `["^#", re.escape("<|endoftext|>"), "^'''", "\n\n\n"]`. 
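The snippet below sketches, in simplified form, what `truncate_before_pattern` is meant to do: everything from the first match of any pattern onward is dropped (the real `truncate` additionally cuts at a second top-level `print` or `def`). The completion string and pattern are invented for illustration.

```python
# Simplified illustration of truncate_before_pattern (invented completion text).
import re

completion = "def add(a, b):\n    return a + b\n# trailing explanation\nprint(add(1, 2))\n"
terminals = [re.compile(p, re.MULTILINE) for p in ["^#", re.escape("<|endoftext|>")]]

positions = [m.start() for t in terminals if (m := t.search(completion))]
print(completion[: min(positions)] if positions else completion)
# def add(a, b):
#     return a + b
```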
- kwargs (additional keyword arguments, *optional*): - Will be passed to the underlying model specific decode method. - - Returns: - `str`: The decoded sentence. - """ - decoded_text = super()._decode( - token_ids=token_ids, - skip_special_tokens=skip_special_tokens, - clean_up_tokenization_spaces=clean_up_tokenization_spaces, - **kwargs, - ) - - if truncate_before_pattern is not None and len(truncate_before_pattern) > 0: - decoded_text = self.truncate(decoded_text, truncate_before_pattern) - - return decoded_text - - def truncate(self, completion, truncate_before_pattern): - def find_re(string, pattern, start_pos): - m = pattern.search(string, start_pos) - return m.start() if m else -1 - - terminals = [re.compile(pattern, re.MULTILINE) for pattern in truncate_before_pattern] - - prints = list(re.finditer("^print", completion, re.MULTILINE)) - - if len(prints) > 1: - completion = completion[: prints[1].start()] - - defs = list(re.finditer("^def", completion, re.MULTILINE)) - - if len(defs) > 1: - completion = completion[: defs[1].start()] - - start_pos = 0 - - terminals_pos = [ - pos for pos in [find_re(completion, terminal, start_pos) for terminal in terminals] if pos != -1 - ] - - if len(terminals_pos) > 0: - return completion[: min(terminals_pos)] - else: - return completion diff --git a/spaces/Hallucinate/demo/AdaBins-main/loss.py b/spaces/Hallucinate/demo/AdaBins-main/loss.py deleted file mode 100644 index 39335b31dc3c412c5635b09211e3f1a213e83a0d..0000000000000000000000000000000000000000 --- a/spaces/Hallucinate/demo/AdaBins-main/loss.py +++ /dev/null @@ -1,46 +0,0 @@ -import torch -import torch.nn as nn -from pytorch3d.loss import chamfer_distance -from torch.nn.utils.rnn import pad_sequence - - -class SILogLoss(nn.Module): # Main loss function used in AdaBins paper - def __init__(self): - super(SILogLoss, self).__init__() - self.name = 'SILog' - - def forward(self, input, target, mask=None, interpolate=True): - if interpolate: - input = nn.functional.interpolate(input, target.shape[-2:], mode='bilinear', align_corners=True) - - if mask is not None: - input = input[mask] - target = target[mask] - g = torch.log(input) - torch.log(target) - # n, c, h, w = g.shape - # norm = 1/(h*w) - # Dg = norm * torch.sum(g**2) - (0.85/(norm**2)) * (torch.sum(g))**2 - - Dg = torch.var(g) + 0.15 * torch.pow(torch.mean(g), 2) - return 10 * torch.sqrt(Dg) - - -class BinsChamferLoss(nn.Module): # Bin centers regularizer used in AdaBins paper - def __init__(self): - super().__init__() - self.name = "ChamferLoss" - - def forward(self, bins, target_depth_maps): - bin_centers = 0.5 * (bins[:, 1:] + bins[:, :-1]) - n, p = bin_centers.shape - input_points = bin_centers.view(n, p, 1) # .shape = n, p, 1 - # n, c, h, w = target_depth_maps.shape - - target_points = target_depth_maps.flatten(1) # n, hwc - mask = target_points.ge(1e-3) # only valid ground truth points - target_points = [p[m] for p, m in zip(target_points, mask)] - target_lengths = torch.Tensor([len(t) for t in target_points]).long().to(target_depth_maps.device) - target_points = pad_sequence(target_points, batch_first=True).unsqueeze(2) # .shape = n, T, 1 - - loss, _ = chamfer_distance(x=input_points, y=target_points, y_lengths=target_lengths) - return loss diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/tasks/translation_from_pretrained_xlm.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/tasks/translation_from_pretrained_xlm.py deleted file mode 100644 index 
a05f2891524a8b23482e206c1742c3b816b77afb..0000000000000000000000000000000000000000 --- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/tasks/translation_from_pretrained_xlm.py +++ /dev/null @@ -1,39 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from dataclasses import dataclass -from fairseq.data.legacy.masked_lm_dictionary import MaskedLMDictionary -from fairseq.tasks.translation import TranslationConfig, TranslationTask - -from . import register_task - - -@dataclass -class TranslationFromPretrainedXLMConfig(TranslationConfig): - pass - - -@register_task( - "translation_from_pretrained_xlm", dataclass=TranslationFromPretrainedXLMConfig -) -class TranslationFromPretrainedXLMTask(TranslationTask): - """ - Same as TranslationTask except use the MaskedLMDictionary class so that - we can load data that was binarized with the MaskedLMDictionary class. - - This task should be used for the entire training pipeline when we want to - train an NMT model from a pretrained XLM checkpoint: binarizing NMT data, - training NMT with the pretrained XLM checkpoint, and subsequent evaluation - of that trained model. - """ - - @classmethod - def load_dictionary(cls, filename): - """Load the masked LM dictionary from the filename - - Args: - filename (str): the filename - """ - return MaskedLMDictionary.load(filename) diff --git a/spaces/Harsimran19/SegmentationGAN/README.md b/spaces/Harsimran19/SegmentationGAN/README.md deleted file mode 100644 index f5b6a55fd1d44da3f32aa5355a290d8ef9cc4c20..0000000000000000000000000000000000000000 --- a/spaces/Harsimran19/SegmentationGAN/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: SegmentationGAN -emoji: ⚡ -colorFrom: gray -colorTo: purple -sdk: gradio -sdk_version: 3.29.0 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Harveenchadha/Vakyansh-Malayalam-TTS/ttsv/src/glow_tts/hifi/models.py b/spaces/Harveenchadha/Vakyansh-Malayalam-TTS/ttsv/src/glow_tts/hifi/models.py deleted file mode 100644 index aaf911836119d69129abe22aa4fc875f2ba3d53c..0000000000000000000000000000000000000000 --- a/spaces/Harveenchadha/Vakyansh-Malayalam-TTS/ttsv/src/glow_tts/hifi/models.py +++ /dev/null @@ -1,403 +0,0 @@ -import torch -import torch.nn.functional as F -import torch.nn as nn -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm -from .utils import init_weights, get_padding - -LRELU_SLOPE = 0.1 - - -class ResBlock1(torch.nn.Module): - def __init__(self, h, channels, kernel_size=3, dilation=(1, 3, 5)): - super(ResBlock1, self).__init__() - self.h = h - self.convs1 = nn.ModuleList( - [ - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[2], - padding=get_padding(kernel_size, dilation[2]), - ) - ), - ] - ) - self.convs1.apply(init_weights) - - self.convs2 = nn.ModuleList( - [ - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=1, - padding=get_padding(kernel_size, 
1), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=1, - padding=get_padding(kernel_size, 1), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=1, - padding=get_padding(kernel_size, 1), - ) - ), - ] - ) - self.convs2.apply(init_weights) - - def forward(self, x): - for c1, c2 in zip(self.convs1, self.convs2): - xt = F.leaky_relu(x, LRELU_SLOPE) - xt = c1(xt) - xt = F.leaky_relu(xt, LRELU_SLOPE) - xt = c2(xt) - x = xt + x - return x - - def remove_weight_norm(self): - for l in self.convs1: - remove_weight_norm(l) - for l in self.convs2: - remove_weight_norm(l) - - -class ResBlock2(torch.nn.Module): - def __init__(self, h, channels, kernel_size=3, dilation=(1, 3)): - super(ResBlock2, self).__init__() - self.h = h - self.convs = nn.ModuleList( - [ - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]), - ) - ), - ] - ) - self.convs.apply(init_weights) - - def forward(self, x): - for c in self.convs: - xt = F.leaky_relu(x, LRELU_SLOPE) - xt = c(xt) - x = xt + x - return x - - def remove_weight_norm(self): - for l in self.convs: - remove_weight_norm(l) - - -class Generator(torch.nn.Module): - def __init__(self, h): - super(Generator, self).__init__() - self.h = h - self.num_kernels = len(h.resblock_kernel_sizes) - self.num_upsamples = len(h.upsample_rates) - self.conv_pre = weight_norm( - Conv1d(80, h.upsample_initial_channel, 7, 1, padding=3) - ) - resblock = ResBlock1 if h.resblock == "1" else ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(h.upsample_rates, h.upsample_kernel_sizes)): - self.ups.append( - weight_norm( - ConvTranspose1d( - h.upsample_initial_channel // (2 ** i), - h.upsample_initial_channel // (2 ** (i + 1)), - k, - u, - padding=(k - u) // 2, - ) - ) - ) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = h.upsample_initial_channel // (2 ** (i + 1)) - for j, (k, d) in enumerate( - zip(h.resblock_kernel_sizes, h.resblock_dilation_sizes) - ): - self.resblocks.append(resblock(h, ch, k, d)) - - self.conv_post = weight_norm(Conv1d(ch, 1, 7, 1, padding=3)) - self.ups.apply(init_weights) - self.conv_post.apply(init_weights) - - def forward(self, x): - x = self.conv_pre(x) - for i in range(self.num_upsamples): - x = F.leaky_relu(x, LRELU_SLOPE) - x = self.ups[i](x) - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - - return x - - def remove_weight_norm(self): - print("Removing weight norm...") - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - remove_weight_norm(self.conv_pre) - remove_weight_norm(self.conv_post) - - -class DiscriminatorP(torch.nn.Module): - def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False): - super(DiscriminatorP, self).__init__() - self.period = period - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList( - [ - norm_f( - Conv2d( - 1, - 32, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(5, 1), 0), - ) - ), - norm_f( - Conv2d( - 32, - 128, - (kernel_size, 1), 
- (stride, 1), - padding=(get_padding(5, 1), 0), - ) - ), - norm_f( - Conv2d( - 128, - 512, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(5, 1), 0), - ) - ), - norm_f( - Conv2d( - 512, - 1024, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(5, 1), 0), - ) - ), - norm_f(Conv2d(1024, 1024, (kernel_size, 1), 1, padding=(2, 0))), - ] - ) - self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0))) - - def forward(self, x): - fmap = [] - - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), "reflect") - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class MultiPeriodDiscriminator(torch.nn.Module): - def __init__(self): - super(MultiPeriodDiscriminator, self).__init__() - self.discriminators = nn.ModuleList( - [ - DiscriminatorP(2), - DiscriminatorP(3), - DiscriminatorP(5), - DiscriminatorP(7), - DiscriminatorP(11), - ] - ) - - def forward(self, y, y_hat): - y_d_rs = [] - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - y_d_rs.append(y_d_r) - fmap_rs.append(fmap_r) - y_d_gs.append(y_d_g) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -class DiscriminatorS(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(DiscriminatorS, self).__init__() - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList( - [ - norm_f(Conv1d(1, 128, 15, 1, padding=7)), - norm_f(Conv1d(128, 128, 41, 2, groups=4, padding=20)), - norm_f(Conv1d(128, 256, 41, 2, groups=16, padding=20)), - norm_f(Conv1d(256, 512, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(512, 1024, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(1024, 1024, 41, 1, groups=16, padding=20)), - norm_f(Conv1d(1024, 1024, 5, 1, padding=2)), - ] - ) - self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1)) - - def forward(self, x): - fmap = [] - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap - - -class MultiScaleDiscriminator(torch.nn.Module): - def __init__(self): - super(MultiScaleDiscriminator, self).__init__() - self.discriminators = nn.ModuleList( - [ - DiscriminatorS(use_spectral_norm=True), - DiscriminatorS(), - DiscriminatorS(), - ] - ) - self.meanpools = nn.ModuleList( - [AvgPool1d(4, 2, padding=2), AvgPool1d(4, 2, padding=2)] - ) - - def forward(self, y, y_hat): - y_d_rs = [] - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - if i != 0: - y = self.meanpools[i - 1](y) - y_hat = self.meanpools[i - 1](y_hat) - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - y_d_rs.append(y_d_r) - fmap_rs.append(fmap_r) - y_d_gs.append(y_d_g) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -def feature_loss(fmap_r, fmap_g): - loss = 0 - for dr, dg in zip(fmap_r, fmap_g): - for rl, gl in zip(dr, dg): - loss += torch.mean(torch.abs(rl - gl)) - - return loss * 2 - - -def discriminator_loss(disc_real_outputs, disc_generated_outputs): - loss = 0 - r_losses = [] - g_losses = [] - for dr, dg in zip(disc_real_outputs, disc_generated_outputs): - r_loss = torch.mean((1 - dr) ** 2) - g_loss = 
torch.mean(dg ** 2) - loss += r_loss + g_loss - r_losses.append(r_loss.item()) - g_losses.append(g_loss.item()) - - return loss, r_losses, g_losses - - -def generator_loss(disc_outputs): - loss = 0 - gen_losses = [] - for dg in disc_outputs: - l = torch.mean((1 - dg) ** 2) - gen_losses.append(l) - loss += l - - return loss, gen_losses diff --git a/spaces/Hexamind/GDOC/src/domain/doc.py b/spaces/Hexamind/GDOC/src/domain/doc.py deleted file mode 100644 index 0cd3aaddbdbfdee62067b7be90f5ce2746a50328..0000000000000000000000000000000000000000 --- a/spaces/Hexamind/GDOC/src/domain/doc.py +++ /dev/null @@ -1,218 +0,0 @@ -from xml.dom.minidom import Element -import docx -import zipfile - -from src.tools.doc_tools import get_difference_with_template, get_positions, convert_to_png -from PIL import Image -from docxcompose.composer import Composer -from docx import Document as Document_compose -from docx.enum.table import WD_TABLE_ALIGNMENT -from src.domain.container import Container -from src.domain.paragraph import Paragraph -from src.domain.styles import Styles -import shutil -import os - - -class Doc: - - def __init__(self, path='', id_=None): - self.xdoc = docx.Document(path) - self.title = path.split('/')[-1] - self.name = self.title.split('.')[0] - self.id_ = id(self) - self.path = path - paragraphs = [Paragraph(xp, self.id_, i) for (i, xp) in enumerate(self.xdoc.paragraphs)] - self.container = Container(paragraphs, father=self) - self.styles = Styles(self.xdoc.styles) - self.tasks = [c.get_fulltask(self.container.one_liner) for c in self.container.containers if c.task] - - def copy(self, new_doc_path): - shutil.copyfile(self.path, new_doc_path) - new_doc = Doc(new_doc_path) - new_doc.save_as_docx(new_doc_path) - return new_doc - - def clear(self): - os.remove(self.path) - - def apply_template(self, template, options_list): - center_tables = False - center_images = False - add_template_before = False - justify_content = False - log = [] - i = 0 - j = 0 - if("Recentrer les tableaux" in options_list): - center_tables = True - if("Recentrer les images (sauf les flottantes)" in options_list): - center_images = True - if("Ajouter le template avant" in options_list): - add_template_before = True - if("Justifier le texte" in options_list): - justify_content = True - - if (justify_content): - log.append("Le contenu du document a été justifié") - self.justify_content() - if(center_images): - self.center_images() - i = self.number_images_in_doc() - log.append(f"{i} image{'s' if i>1 else ''} centrée{'s' if i>1 else ''}") - if(center_tables): - j = self.center_tables() - log.append(f"{j} table{'s' if j>1 else ''} centrée{'s' if j>1 else ''}") - if(add_template_before): - self.save_as_docx() - log.append(f"Le template {template.name} a été ajouté avant le document") - log = self.styles.apply_from(template.styles, log) - master = Document_compose(template.path) - composer = Composer(master) - doc = Document_compose(self.path) - composer.append(doc) - composer.save(self.path) - else: - log = self.styles.apply_from(template.styles, log) - self.save_as_docx() - return log - - def copy_one_style(self, src_style_name: str, dest_style_name: str, template): - style_dest = template.styles.get_style_from_name(dest_style_name) - src_style = self.styles.get_style_from_name(src_style_name) - if src_style: - log = self.styles.copy_one_style(src_style, style_dest) - return log - else: - return None - - def get_different_styles_with_template(self, template): - styles_used_in_doc = self.get_all_styles_of_doc() - 
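For reference, a small numeric check of the least-squares GAN losses defined above in models.py, assuming `discriminator_loss` and `generator_loss` are in scope; the discriminator outputs below are made-up tensors.

```python
# Made-up discriminator outputs to sanity-check the LSGAN losses above:
# real samples are pushed toward 1, generated samples toward 0.
import torch

disc_real = [torch.tensor([0.9, 1.1])]
disc_fake = [torch.tensor([0.2, -0.1])]

d_loss, r_losses, g_losses = discriminator_loss(disc_real, disc_fake)
print(d_loss)        # tensor(0.0350) = mean((1-dr)^2) + mean(dg^2) = 0.01 + 0.025
gen_loss, _ = generator_loss(disc_fake)
print(gen_loss)      # tensor(0.9250) = mean((1-dg)^2), pushing fakes toward 1
```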
different_styles = get_difference_with_template(styles_used_in_doc, template) - return different_styles - - def save_as_docx(self, path: str = ''): - path = path if path else self.path - self.path = path - self.xdoc.save(path) - - # def add_back_pages_from(self, src_doc): - # with open (self.path, "rb") as f: - # zip = zipfile.ZipFile(f) - # images = [image for image in zip.namelist() if image.startswith('word/media/')] - # for image in images: - # zip.extract(image) - # zip.close() - # images = convert_to_png(images) - # #copy the entire self to the end of src_doc - # for p in self.get_paragraphs(): - # p.insert_paragraphs(images,src_doc) - # return self - - def get_blocks(self): - - def from_list_to_str(index_list): - index_str = str(index_list[0]) - for el in index_list[1:]: - index_str += '.' + str(el) - return index_str - - blocks = self.container.blocks - for block in blocks: - block.doc = self.title - if block.level == 0: - blocks.remove(block) - block.index = from_list_to_str(block.index) - return blocks - - - @property - def structure(self): - - return self.container.structure - - def replace_tasks(self, resolutions: [str]): - if len(resolutions) == len(self.tasks): # exception to be handled - p_tasks = [p for p in self.get_paragraphs() if p.type == 'task'] - for p, r in zip(p_tasks, resolutions): - p.set_text(r) - else: - print(f"résolutions : {len(resolutions)} != {len(self.tasks)} tasks") - return self - - def get_paragraphs(self): - return self.container.all_paragraphs - - def get_text_from_paragraphs(self): - return [p.text for p in self.get_paragraphs()] - - def check_document(self): - picCount = 0 - tabCount = 0 - for paragraph in self.xdoc.paragraphs: - if picCount < len(self.xdoc.inline_shapes): - print('\033[1mPicture \033[0m') - picCount += 1 - elif paragraph.text: - print(paragraph.text) - elif tabCount < len(self.xdoc.tables): - table = self.xdoc.tables[tabCount] - data = [] - keys = None - for i, row in enumerate(table.rows): - text = (cell.text for cell in row.cells) - if i == 0: - keys = tuple(text) - continue - row_data = dict(zip(keys, text)) - data.append(row_data) - print('\033[1mTable:\033[0m', data) - tabCount += 1 - else: - print('\033[1mEmpty paragraph\033[0m') - - - - def center_tables(self): - j = 0 - for table in self.xdoc.tables: - j += 1 - table.alignment = WD_TABLE_ALIGNMENT.CENTER - return j - - - # def center_tables_with_template(self): - # j = 0 - # for i,table in enumerate(self.xdoc.tables): - # if(i == 0): - # continue - # j += 1 - # table.alignment = 1 - # return j - - def center_images(self): - for paragraph in self.get_paragraphs(): - paragraph.center_paragraph() - - def justify_content(self): - for paragraph in self.get_paragraphs(): - paragraph.justify_paragraph() - - - - - # def add_paragraph(self,p:Paragraph): - # self.container.paragraphs.append(p) - # self.xdoc.add_paragraph(p.text,p.xparagraph.style) - - - def number_images_in_doc(self): - picCount = 0 - for _ in self.xdoc.paragraphs: - if picCount < len(self.xdoc.inline_shapes): - print('\033[1mPicture \033[0m') - picCount += 1 - return picCount - - def get_all_styles_of_doc(self): - return self.container.get_all_styles_used_in_doc() diff --git a/spaces/Hina4867/bingo/src/lib/isomorphic/browser.ts b/spaces/Hina4867/bingo/src/lib/isomorphic/browser.ts deleted file mode 100644 index de125b1f1786d1618cb1ff47f403d76c6784f4ce..0000000000000000000000000000000000000000 --- a/spaces/Hina4867/bingo/src/lib/isomorphic/browser.ts +++ /dev/null @@ -1,11 +0,0 @@ -'use client' - -const debug = 
console.info.bind(console) - -class WebSocketAlias extends WebSocket { - constructor(address: string | URL, ...args: any) { - super(address) - } -} - -export default { fetch, WebSocket: WebSocketAlias, debug } diff --git a/spaces/Hoodady/3DFuse/my/config.py b/spaces/Hoodady/3DFuse/my/config.py deleted file mode 100644 index d0ed34421c5e2620801dac6d048f69cf6183206c..0000000000000000000000000000000000000000 --- a/spaces/Hoodady/3DFuse/my/config.py +++ /dev/null @@ -1,266 +0,0 @@ -from typing import List, Union -from copy import deepcopy -from collections import namedtuple -from pathlib import Path -import argparse -from argparse import RawDescriptionHelpFormatter -import yaml -from pydantic import BaseModel as _Base -import os - - -class BaseConf(_Base): - class Config: - validate_all = True - allow_mutation = True - extra = "ignore" - - -def SingleOrList(inner_type): - return Union[inner_type, List[inner_type]] - - -def optional_load_config(fname="config.yml"): - cfg = {} - conf_fname = Path.cwd() / fname - if conf_fname.is_file(): - with conf_fname.open("r") as f: - raw = f.read() - print("loaded config\n ") - print(raw) # yaml raw itself is well formatted - cfg = yaml.safe_load(raw) - return cfg - -# def optional_load_config(path="config.yml"): -# cfg = {} -# # conf_fname = Path.cwd() / fname -# conf_fname = Path(path) -# if conf_fname.is_file(): -# with conf_fname.open("r") as f: -# raw = f.read() -# print("loaded config\n ") -# print(raw) # yaml raw itself is well formatted -# cfg = yaml.safe_load(raw) -# return cfg - - -def write_full_config(cfg_obj, fname="full_config.yml"): - cfg = cfg_obj.dict() - cfg = _dict_to_yaml(cfg) - print(f"\n--- full config ---\n\n{cfg}\n") - with (Path.cwd() / fname).open("w") as f: - f.write(cfg) - - -def argparse_cfg_template(curr_cfgs): - parser = argparse.ArgumentParser( - description='Manual spec of configs', - epilog=f'curr cfgs:\n\n{_dict_to_yaml(curr_cfgs)}', - formatter_class=RawDescriptionHelpFormatter - ) - _, args = parser.parse_known_args() - clauses = [] - for i in range(0, len(args), 2): - assert args[i][:2] == "--", "please start args with --" - clauses.append({args[i][2:]: args[i+1]}) - - - maker = ConfigMaker(curr_cfgs) - for clu in clauses: - maker.execute_clause(clu) - - final = maker.state.copy() - return final - - -def _dict_to_yaml(arg): - return yaml.safe_dump(arg, sort_keys=False, allow_unicode=True) - - -def dispatch(module): - cfg = optional_load_config(fname="gradio_init.yml") - cfg = module(**cfg).dict() - - cfg = argparse_cfg_template(cfg) # cmdline takes priority - mod = module(**cfg) - - exp_path = os.path.join(cfg['exp_dir'],cfg['initial']) - os.makedirs(exp_path, exist_ok=True) - write_full_config(mod, os.path.join(exp_path,"full_config.yml")) - - mod.run() - -def dispatch_gradio(module, prompt, keyword, ti_step, pt_step, seed): - cfg = optional_load_config("gradio_init.yml") - - cfg['sd']['prompt'] = prompt - cfg['sd']['dir'] = os.path.join(cfg['exp_dir'],keyword,'lora/final_lora.safetensors') - cfg['ti_step'] = ti_step - cfg['pt_step'] = pt_step - cfg['initial'] = keyword - cfg['random_seed'] = seed - - cfg = module(**cfg).dict() - mod = module(**cfg) - - - return mod - - - -# below are some support tools - - -class ConfigMaker(): - CMD = namedtuple('cmd', field_names=['sub', 'verb', 'objs']) - VERBS = ('add', 'replace', 'del') - - def __init__(self, base_node): - self.state = base_node - self.clauses = [] - - def clone(self): - return deepcopy(self) - - def execute_clause(self, raw_clause): - cls = self.__class__ - 
assert isinstance(raw_clause, (str, dict)) - if isinstance(raw_clause, dict): - assert len(raw_clause) == 1, \ - "a clause can only have 1 statement: {} clauses in {}".format( - len(raw_clause), raw_clause - ) - cmd = list(raw_clause.keys())[0] - arg = raw_clause[cmd] - else: - cmd = raw_clause - arg = None - cmd = self.parse_clause_cmd(cmd) - tracer = NodeTracer(self.state) - tracer.advance_pointer(path=cmd.sub) - if cmd.verb == cls.VERBS[0]: - tracer.add(cmd.objs, arg) - elif cmd.verb == cls.VERBS[1]: - tracer.replace(cmd.objs, arg) - elif cmd.verb == cls.VERBS[2]: - assert isinstance(raw_clause, str) - tracer.delete(cmd.objs) - self.state = tracer.state - - @classmethod - def parse_clause_cmd(cls, input): - """ - Args: - input: a string to be parsed - 1. First test whether a verb is present - 2. If not present, then str is a single subject, and verb is replace - This is a syntactical sugar that makes writing config easy - 3. If a verb is found, whatever comes before is a subject, and after the - objects. - 4. Handle the edge cases properly. Below are expected parse outputs - input sub verb obj - --- No verb - '' '' replace [] - 'a.b' 'a.b' replace [] - 'add' '' add [] - 'P Q' err: 2 subjects - --- Verb present - 'T add' 'T' add [] - 'T del a b' 'T' del [a, b] - 'P Q add a' err: 2 subjects - 'P add del b' err: 2 verbs - """ - assert isinstance(input, str) - input = input.split() - objs = [] - sub = '' - verb, verb_inx = cls.scan_for_verb(input) - if verb is None: - assert len(input) <= 1, "no verb present; more than 1 subject: {}"\ - .format(input) - sub = input[0] if len(input) == 1 else '' - verb = cls.VERBS[1] - else: - assert not verb_inx > 1, 'verb {} at inx {}; more than 1 subject in: {}'\ - .format(verb, verb_inx, input) - sub = input[0] if verb_inx == 1 else '' - objs = input[verb_inx + 1:] - cmd = cls.CMD(sub=sub, verb=verb, objs=objs) - return cmd - - @classmethod - def scan_for_verb(cls, input_list): - assert isinstance(input_list, list) - counts = [ input_list.count(v) for v in cls.VERBS ] - presence = [ cnt > 0 for cnt in counts ] - if sum(presence) == 0: - return None, -1 - elif sum(presence) > 1: - raise ValueError("multiple verbs discovered in {}".format(input_list)) - - if max(counts) > 1: - raise ValueError("verbs repeated in cmd: {}".format(input_list)) - # by now, there is 1 verb that has occured exactly 1 time - verb = cls.VERBS[presence.index(1)] - inx = input_list.index(verb) - return verb, inx - - -class NodeTracer(): - def __init__(self, src_node): - """ - A src node can be either a list or dict - """ - assert isinstance(src_node, (list, dict)) - - # these are movable pointers - self.child_token = "_" # init token can be anything - self.parent = {self.child_token: src_node} - - # these are permanent pointers at the root - self.root_child_token = self.child_token - self.root = self.parent - - @property - def state(self): - return self.root[self.root_child_token] - - @property - def pointed(self): - return self.parent[self.child_token] - - def advance_pointer(self, path): - if len(path) == 0: - return - path_list = list( - map(lambda x: int(x) if str.isdigit(x) else x, path.split('.')) - ) - - for i, token in enumerate(path_list): - self.parent = self.pointed - self.child_token = token - try: - self.pointed - except (IndexError, KeyError): - raise ValueError( - "During the tracing of {}, {}-th token '{}'" - " is not present in node {}".format( - path, i, self.child_token, self.state - ) - ) - - def replace(self, objs, arg): - assert len(objs) == 0 - val_type = 
type(self.parent[self.child_token]) - # this is such an unfortunate hack - # turn everything to string, so that eval could work - # some of the clauses come from cmdline, some from yaml files for sow. - arg = str(arg) - if val_type == str: - pass - else: - arg = eval(arg) - assert type(arg) == val_type, \ - f"require {val_type.__name__}, given {type(arg).__name__}" - - self.parent[self.child_token] = arg diff --git a/spaces/HugoLaurencon/text-data-filtering-2/filtering.py b/spaces/HugoLaurencon/text-data-filtering-2/filtering.py deleted file mode 100644 index eb2f4358b0a520098dd41a678a51250b4be88176..0000000000000000000000000000000000000000 --- a/spaces/HugoLaurencon/text-data-filtering-2/filtering.py +++ /dev/null @@ -1,957 +0,0 @@ -import re - -import numpy as np - -import fasttext - -import sentencepiece -import kenlm - -import pathlib - -from languages_id import langs_id -from parameters_filtering import parameters_filtering -from normalization import normalization -from stopwords import stopwords -from flagged_words import flagged_words - - -class LoadParameters: - @staticmethod - def load_parameters(lang_dataset_id): - if lang_dataset_id in parameters_filtering: - param = parameters_filtering[lang_dataset_id] - else: - param = parameters_filtering["default"] - return param - - @staticmethod - def load_stopwords(lang_dataset_id): - stopwords_lang_id = langs_id.loc[ - langs_id["dataset_id"] == lang_dataset_id, "stopwords_id" - ].iloc[0] - if stopwords_lang_id: - stopwords_lang = set(stopwords[stopwords_lang_id]) - else: - stopwords_lang = None - return stopwords_lang - - @staticmethod - def load_flagged_words(lang_dataset_id): - flagged_words_lang_id = langs_id.loc[ - langs_id["dataset_id"] == lang_dataset_id, "flagged_words_id" - ].iloc[0] - if flagged_words_lang_id: - flagged_words_lang = set(flagged_words[flagged_words_lang_id]) - else: - flagged_words_lang = None - return flagged_words_lang - - @staticmethod - def load_model_lang_id(lang_dataset_id, path_fasttext_model): - fasttext_lang_id = langs_id.loc[ - langs_id["dataset_id"] == lang_dataset_id, "fasttext_id" - ].iloc[0] - if fasttext_lang_id: - model_lang_id = fasttext.load_model(path_fasttext_model) - else: - model_lang_id = None - return model_lang_id - - @staticmethod - def load_sentencepiece_model(lang_dataset_id, path_sentencepiece_model): - sentencepiece_lang_id = langs_id.loc[ - langs_id["dataset_id"] == lang_dataset_id, "sentencepiece_id" - ].iloc[0] - if sentencepiece_lang_id: - sentencepiece_model = sentencepiece.SentencePieceProcessor() - sentencepiece_model.load(path_sentencepiece_model) - else: - sentencepiece_model = None - return sentencepiece_model - - @staticmethod - def load_kenlm_model(lang_dataset_id, path_kenlm_model): - kenlm_lang_id = langs_id.loc[ - langs_id["dataset_id"] == lang_dataset_id, "kenlm_id" - ].iloc[0] - if kenlm_lang_id: - kenlm_model = kenlm.Model(path_kenlm_model) - else: - kenlm_model = None - return kenlm_model - - -class ModifyingDocuments: - @staticmethod - def remove_empty_el_from_list(list_): - return [el for el in list_ if el] - - @staticmethod - def remove_non_printing_characters(document, non_printing_characters_re): - return non_printing_characters_re.sub("", document) - - @staticmethod - def uniform_whitespace( - document, - whitespace=[ - " ", - " ", - " ", - " ", - " ", - " ", - " ", - " ", - " ", - " ", - "", - "„", - ], - ): - """There are different whitespace characters.""" - whitespace = set(whitespace) - document = "".join( - [char if char not in whitespace else " " for 
char in document] - ) - return document - - @staticmethod - def replace_digits_with_zeros(document, digits_re): - return digits_re.sub("0", document) - - @staticmethod - def replace_unicode_punctuation(document, unicode_punctuation): - return "".join(unicode_punctuation.get(c, c) for c in document) - - @staticmethod - def normalization( - document, - remove_non_printing_characters, - strip, - lower_case, - uniform_whitespace, - replace_digits_with_zeros, - replace_unicode_punctuation, - non_printing_characters_re=normalization["non_printing_characters_re"], - digits_re=normalization["digits_re"], - unicode_punctuation=normalization["unicode_punctuation"], - ): - if remove_non_printing_characters: - document = ModifyingDocuments.remove_non_printing_characters( - document, non_printing_characters_re - ) - if strip: - document = document.strip() - if not document: - return document - if lower_case: - document = document.lower() - if uniform_whitespace: - document = ModifyingDocuments.uniform_whitespace(document) - if replace_digits_with_zeros: - document = ModifyingDocuments.replace_digits_with_zeros(document, digits_re) - if replace_unicode_punctuation: - document = ModifyingDocuments.replace_unicode_punctuation( - document, unicode_punctuation - ) - return document - - @staticmethod - def tokenization(document, sentencepiece_model, join_on_whitespace): - document_tokenized = sentencepiece_model.encode_as_pieces(document) - if join_on_whitespace: - document_tokenized = " ".join(document_tokenized) - return document_tokenized - - @staticmethod - def split_on_whitespace( - document, - new_line=False, - tab=False, - ): - """This method also removes concatenated spaces.""" - sep = [" "] + new_line * ["\n"] + tab * ["\t"] - sep = "|".join(sep) - split_document = re.split(sep, document) - split_document = ModifyingDocuments.remove_empty_el_from_list(split_document) - return split_document - - @staticmethod - def strip(document, strip_characters): - """Way faster than document.strip(strip_characters) - since strip_characters is now a set instead of a str, - and it contains a lot of elements (all the emojis).""" - if not document: - return document - beg_ind = 0 - end_ind = len(document) - for i in range(len(document)): - if document[i] in strip_characters: - beg_ind += 1 - else: - break - for i in range(1, len(document) + 1): - if document[-i] in strip_characters: - end_ind -= 1 - else: - break - document_stripped = document[beg_ind:end_ind] - return document_stripped - - @staticmethod - def get_words_from_document( - document, sentencepiece_model_tok, lower_case, strip_characters - ): - """Get words from a document. Non reversible since the document - is split on multiple characters, words are stripped of - special characters and characters are converted to lower case. 
- Useful to compute ratios, like the stopwords ratio.""" - if sentencepiece_model_tok: - document_normalized = ModifyingDocuments.normalization( - document=document, - remove_non_printing_characters=True, - strip=True, - lower_case=True, - uniform_whitespace=True, - replace_digits_with_zeros=True, - replace_unicode_punctuation=True, - ) - words = ModifyingDocuments.tokenization( - document_normalized, sentencepiece_model_tok, join_on_whitespace=False - ) - else: - words = ModifyingDocuments.split_on_whitespace( - document, new_line=True, tab=True - ) - if lower_case: - words = [word.lower() for word in words] - if strip_characters: - words = [ModifyingDocuments.strip(word, strip_characters) for word in words] - words = ModifyingDocuments.remove_empty_el_from_list(words) - return words - - @staticmethod - def words_augmentation(words, group_size, join_char): - """Augment words, especially for Chinese (without a space between words) - and Vietnamese (with a space between syllables).""" - augmentation = [ - join_char.join(words[i : i + group_size]) - for i in range(len(words) - group_size + 1) - ] - return augmentation - - @staticmethod - def split_on_newline_tab_whitespace(document): - """First split on "\n", then on "\t", then on " ".""" - sentences = document.split("\n") - sentences = [sentence.split("\t") for sentence in sentences] - sentences = [ - [ - ModifyingDocuments.split_on_whitespace(subsentence) - for subsentence in sentence - ] - for sentence in sentences - ] - return sentences - - @staticmethod - def merge_on_whitespace_tab_newline(sentences): - """Invert the method split_on_newline_tab_whitespace. - Removes concatenated separators.""" - sentences = [ - [" ".join(subsentence) for subsentence in sentence if subsentence] - for sentence in sentences - ] - sentences = ["\t".join(sentence) for sentence in sentences if sentence] - if not sentences: - return "" - document = "\n".join(sentences) - return document - - @staticmethod - def should_keep_word_with_incorrect_substrings( - word, strip_characters, incorrect_word_substrings - ): - word = ModifyingDocuments.strip(word, strip_characters) - should_keep = all( - [(i_substr not in word) for i_substr in incorrect_word_substrings] - ) - return should_keep - - @staticmethod - def remove_words_with_incorrect_substrings( - document, - strip_characters, - incorrect_word_substrings, - ): - sentences = ModifyingDocuments.split_on_newline_tab_whitespace(document) - sentences = [ - [ - [ - word - for word in subsentence - if ModifyingDocuments.should_keep_word_with_incorrect_substrings( - word, strip_characters, incorrect_word_substrings - ) - ] - for subsentence in sentence - ] - for sentence in sentences - ] - document = ModifyingDocuments.merge_on_whitespace_tab_newline(sentences) - return document - - @staticmethod - def should_keep_long_word(word, strip_characters, length_word_max_cutoff): - """If the word is too long but it contains only one - special character, it might be a concatenation of one word, - a punctuation, and another word, with no space between them. 
- In this case, we give the word a pass.""" - if len(word) <= length_word_max_cutoff: - return True - word = ModifyingDocuments.strip(word, strip_characters) - if not word: # The word consisted only of strip characters - return False - if len(word) <= length_word_max_cutoff: - return True - return False - - def remove_long_words( - document, - strip_characters, - length_word_max_cutoff, - ): - sentences = ModifyingDocuments.split_on_newline_tab_whitespace(document) - sentences = [ - [ - [ - word - for word in subsentence - if ModifyingDocuments.should_keep_long_word( - word, - strip_characters, - length_word_max_cutoff, - ) - ] - for subsentence in sentence - ] - for sentence in sentences - ] - document = ModifyingDocuments.merge_on_whitespace_tab_newline(sentences) - return document - - @staticmethod - def modifying_documents( - document, - cond_uniform_whitespace, - cond_replace_unicode_punctuation, - cond_remove_words_with_incorrect_substrings, - strip_characters, - incorrect_word_substrings, - cond_remove_long_words, - length_word_max_cutoff, - ): - document = ModifyingDocuments.normalization( - document=document, - remove_non_printing_characters=False, - strip=True, - lower_case=False, - uniform_whitespace=cond_uniform_whitespace, - replace_digits_with_zeros=False, - replace_unicode_punctuation=cond_replace_unicode_punctuation, - ) - if cond_remove_words_with_incorrect_substrings: - document = ModifyingDocuments.remove_words_with_incorrect_substrings( - document, - strip_characters, - incorrect_word_substrings, - ) - if cond_remove_long_words: - document = ModifyingDocuments.remove_long_words( - document, - strip_characters, - length_word_max_cutoff, - ) - return document - - -class FunctionDatasetModifyingDocuments: - def __init__(self, lang_dataset_id): - self.lang_dataset_id = lang_dataset_id - self.param = LoadParameters.load_parameters(lang_dataset_id) - - def __call__(self, example): - example["text"] = ModifyingDocuments.modifying_documents( - document=example["text"], - cond_uniform_whitespace=self.param["cond_uniform_whitespace"], - cond_replace_unicode_punctuation=self.param[ - "cond_replace_unicode_punctuation" - ], - cond_remove_words_with_incorrect_substrings=self.param[ - "cond_remove_words_with_incorrect_substrings" - ], - strip_characters=self.param["strip_characters"], - incorrect_word_substrings=self.param["incorrect_word_substrings"], - cond_remove_long_words=self.param["cond_remove_long_words"], - length_word_max_cutoff=self.param["length_word_max_cutoff"], - ) - return example - - def __reduce__(self): - return (self.__class__, (self.lang_dataset_id,)) - - -class Filtering: - @staticmethod - def check_number_words( - document, - sentencepiece_model_tok, - strip_characters, - number_words_min_cutoff, - number_words_max_cutoff, - ): - words = ModifyingDocuments.get_words_from_document( - document, - sentencepiece_model_tok, - lower_case=False, - strip_characters=strip_characters, - ) - cond = (len(words) >= number_words_min_cutoff) and ( - len(words) <= number_words_max_cutoff - ) - return cond - - @staticmethod - def compute_character_repetition_ratio(document, character_repetition_length): - def get_freq_character_ngrams(document, n): - character_ngrams = [ - document[i : i + n] for i in range(len(document) - n + 1) - ] - freq_character_ngrams = {} - for character_ngram in character_ngrams: - freq_character_ngrams[character_ngram] = ( - freq_character_ngrams.get(character_ngram, 0) + 1 - ) - return freq_character_ngrams - - freq_character_ngrams = 
get_freq_character_ngrams( - document, character_repetition_length - ) - if len(freq_character_ngrams) == 0: - return 0 - freq_character_ngrams = list(freq_character_ngrams.values()) - freq_character_ngrams = sorted(freq_character_ngrams, reverse=True) - val_less_than_one = len([el for el in freq_character_ngrams if el > 1]) - num_rep_character_ngrams = min( - int(np.sqrt(len(freq_character_ngrams))), - len(freq_character_ngrams) - val_less_than_one, - ) - character_repetition_ratio = sum( - freq_character_ngrams[:num_rep_character_ngrams] - ) / sum(freq_character_ngrams) - return character_repetition_ratio - - @staticmethod - def check_character_repetition_removal( - document, - character_repetition_length, - character_repetition_max_cutoff, - ): - character_repetition_ratio = Filtering.compute_character_repetition_ratio( - document, character_repetition_length - ) - cond = character_repetition_ratio <= character_repetition_max_cutoff - return cond - - @staticmethod - def compute_word_repetition_ratio( - document, sentencepiece_model_tok, strip_characters, word_repetition_length - ): - def get_freq_word_ngrams( - document, sentencepiece_model_tok, strip_characters, n - ): - words = ModifyingDocuments.get_words_from_document( - document, - sentencepiece_model_tok, - lower_case=True, - strip_characters=strip_characters, - ) - word_ngrams = [ - " ".join(words[i : i + n]) for i in range(len(words) - n + 1) - ] - freq_word_ngrams = {} - for word_ngram in word_ngrams: - freq_word_ngrams[word_ngram] = freq_word_ngrams.get(word_ngram, 0) + 1 - return freq_word_ngrams - - freq_word_ngrams = get_freq_word_ngrams( - document, sentencepiece_model_tok, strip_characters, word_repetition_length - ) - if len(freq_word_ngrams) == 0: - return 0 - freq_word_ngrams = list(freq_word_ngrams.values()) - word_repetition_ratio = sum( - freq for freq in freq_word_ngrams if freq > 1 - ) / sum(freq_word_ngrams) - return word_repetition_ratio - - @staticmethod - def check_word_repetition_removal( - document, - sentencepiece_model_tok, - strip_characters, - word_repetition_length, - word_repetition_max_cutoff, - ): - word_repetition_ratio = Filtering.compute_word_repetition_ratio( - document, sentencepiece_model_tok, strip_characters, word_repetition_length - ) - cond = word_repetition_ratio <= word_repetition_max_cutoff - return cond - - @staticmethod - def compute_special_characters_ratio(document, special_characters): - if len(document) == 0: - return 0 - special_characters_ratio = len( - [char for char in document if char in special_characters] - ) / len(document) - return special_characters_ratio - - @staticmethod - def check_special_characters( - document, - special_characters, - special_characters_max_cutoff, - ): - special_characters_ratio = Filtering.compute_special_characters_ratio( - document, special_characters - ) - cond = special_characters_ratio <= special_characters_max_cutoff - return cond - - @staticmethod - def compute_stopwords_ratio( - document, - sentencepiece_model_tok, - strip_characters, - cond_words_augmentation, - words_augmentation_group_sizes, - words_augmentation_join_char, - stopwords, - ): - words = ModifyingDocuments.get_words_from_document( - document, - sentencepiece_model_tok, - lower_case=True, - strip_characters=strip_characters, - ) - if not words: - return 0 - augmentation = [] - if cond_words_augmentation: - augmentation = [ - ModifyingDocuments.words_augmentation( - words, group_size, words_augmentation_join_char - ) - for group_size in words_augmentation_group_sizes - ] - 
augmentation = [word for augm in augmentation for word in augm] - stopwords_ratio = len( - [word for word in words + augmentation if word in stopwords] - ) / len(words) - if stopwords_ratio > 1.0: - stopwords_ratio = 1.0 - return stopwords_ratio - - @staticmethod - def check_stopwords( - document, - sentencepiece_model_tok, - strip_characters, - cond_words_augmentation, - words_augmentation_group_sizes, - words_augmentation_join_char, - stopwords, - stopwords_min_cutoff, - ): - cond = True - if stopwords: - stopwords_ratio = Filtering.compute_stopwords_ratio( - document, - sentencepiece_model_tok, - strip_characters, - cond_words_augmentation, - words_augmentation_group_sizes, - words_augmentation_join_char, - stopwords, - ) - cond = stopwords_ratio >= stopwords_min_cutoff - return cond - - @staticmethod - def compute_flagged_words_ratio( - document, - sentencepiece_model_tok, - strip_characters, - cond_words_augmentation, - words_augmentation_group_sizes, - words_augmentation_join_char, - flagged_words, - ): - words = ModifyingDocuments.get_words_from_document( - document, - sentencepiece_model_tok, - lower_case=True, - strip_characters=strip_characters, - ) - if not words: - return 0 - augmentation = [] - if cond_words_augmentation: - augmentation = [ - ModifyingDocuments.words_augmentation( - words, group_size, words_augmentation_join_char - ) - for group_size in words_augmentation_group_sizes - ] - augmentation = [word for augm in augmentation for word in augm] - flagged_words_ratio = len( - [word for word in words + augmentation if word in flagged_words] - ) / len(words) - if flagged_words_ratio > 1.0: - flagged_words_ratio = 1.0 - return flagged_words_ratio - - @staticmethod - def check_flagged_words( - document, - sentencepiece_model_tok, - strip_characters, - cond_words_augmentation, - words_augmentation_group_sizes, - words_augmentation_join_char, - flagged_words, - flagged_words_max_cutoff, - ): - cond = True - if flagged_words: - flagged_words_ratio = Filtering.compute_flagged_words_ratio( - document, - sentencepiece_model_tok, - strip_characters, - cond_words_augmentation, - words_augmentation_group_sizes, - words_augmentation_join_char, - flagged_words, - ) - cond = flagged_words_ratio <= flagged_words_max_cutoff - return cond - - @staticmethod - def compute_lang_id_pred_score(document, model_lang_id): - document = document.lower().replace("\n", " ") - pred = model_lang_id.predict(document) - lang_pred_fasttext_id = pred[0][0].replace("__label__", "") - score_pred = pred[1][0] - lang_pred_dataset_id = langs_id.loc[ - langs_id["fasttext_id"] == lang_pred_fasttext_id, "dataset_id" - ] - if len(lang_pred_dataset_id) > 0: - lang_pred_dataset_id = lang_pred_dataset_id.iloc[0] - else: - lang_pred_dataset_id = "unknown" - return lang_pred_dataset_id, score_pred - - @staticmethod - def check_lang_id( - document, - lang_dataset_id, - model_lang_id, - lang_id_min_cutoff, - ): - cond = True - if model_lang_id: - lang_pred_dataset_id, score_pred = Filtering.compute_lang_id_pred_score( - document, model_lang_id - ) - cond = (lang_pred_dataset_id == lang_dataset_id) and ( - score_pred >= lang_id_min_cutoff - ) - return cond - - @staticmethod - def compute_perplexity_score(document, sentencepiece_model, kenlm_model): - document = ModifyingDocuments.normalization( - document=document, - remove_non_printing_characters=True, - strip=True, - lower_case=False, - uniform_whitespace=True, - replace_digits_with_zeros=True, - replace_unicode_punctuation=True, - ) - document = 
ModifyingDocuments.tokenization( - document, sentencepiece_model, join_on_whitespace=True - ) - doc_log_score, doc_length = 0, 0 - for line in document.split("\n"): - log_score = kenlm_model.score(line) - length = len(line.split()) + 1 - doc_log_score += log_score - doc_length += length - pp_score = 10.0 ** (-doc_log_score / doc_length) - pp_score = round(pp_score, 1) - return pp_score - - @staticmethod - def check_perplexity( - document, - sentencepiece_model, - kenlm_model, - perplexity_max_cutoff, - ): - cond = True - if kenlm_model: - score = Filtering.compute_perplexity_score( - document, sentencepiece_model, kenlm_model - ) - cond = score <= perplexity_max_cutoff - return cond - - @staticmethod - def filtering( - document, - cond_check_number_words, - sentencepiece_model_tok, - strip_characters, - number_words_min_cutoff, - number_words_max_cutoff, - cond_check_character_repetition_removal, - character_repetition_length, - character_repetition_max_cutoff, - cond_check_word_repetition_removal, - word_repetition_length, - word_repetition_max_cutoff, - cond_check_special_characters, - special_characters, - special_characters_max_cutoff, - cond_words_augmentation, - words_augmentation_group_sizes, - words_augmentation_join_char, - cond_check_stopwords, - stopwords, - stopwords_min_cutoff, - cond_check_flagged_words, - flagged_words, - flagged_words_max_cutoff, - cond_check_lang_id, - lang_dataset_id, - model_lang_id, - lang_id_min_cutoff, - cond_check_perplexity, - sentencepiece_model, - kenlm_model, - perplexity_max_cutoff, - ): - if cond_check_number_words: - if not Filtering.check_number_words( - document, - sentencepiece_model_tok, - strip_characters, - number_words_min_cutoff, - number_words_max_cutoff, - ): - return False - if cond_check_character_repetition_removal: - if not Filtering.check_character_repetition_removal( - document, - character_repetition_length, - character_repetition_max_cutoff, - ): - return False - if cond_check_word_repetition_removal: - if not Filtering.check_word_repetition_removal( - document, - sentencepiece_model_tok, - strip_characters, - word_repetition_length, - word_repetition_max_cutoff, - ): - return False - if cond_check_special_characters: - if not Filtering.check_special_characters( - document, - special_characters, - special_characters_max_cutoff, - ): - return False - if cond_check_stopwords: - if not Filtering.check_stopwords( - document, - sentencepiece_model_tok, - strip_characters, - cond_words_augmentation, - words_augmentation_group_sizes, - words_augmentation_join_char, - stopwords, - stopwords_min_cutoff, - ): - return False - if cond_check_flagged_words: - if not Filtering.check_flagged_words( - document, - sentencepiece_model_tok, - strip_characters, - cond_words_augmentation, - words_augmentation_group_sizes, - words_augmentation_join_char, - flagged_words, - flagged_words_max_cutoff, - ): - return False - if cond_check_lang_id: - if not Filtering.check_lang_id( - document, - lang_dataset_id, - model_lang_id, - lang_id_min_cutoff, - ): - return False - if cond_check_perplexity: - if not Filtering.check_perplexity( - document, - sentencepiece_model, - kenlm_model, - perplexity_max_cutoff, - ): - return False - return True - - -class FunctionDatasetFiltering: - def __init__( - self, - lang_dataset_id, - path_fasttext_model, - path_sentencepiece_model, - path_kenlm_model, - ): - self.lang_dataset_id = lang_dataset_id - self.path_fasttext_model = path_fasttext_model - self.path_sentencepiece_model = path_sentencepiece_model - 
self.path_kenlm_model = path_kenlm_model - - self.param = LoadParameters.load_parameters(lang_dataset_id) - self.stopwords = LoadParameters.load_stopwords(lang_dataset_id) - self.flagged_words = LoadParameters.load_flagged_words(lang_dataset_id) - self.model_lang_id = LoadParameters.load_model_lang_id( - lang_dataset_id, path_fasttext_model - ) - self.sentencepiece_model = LoadParameters.load_sentencepiece_model( - lang_dataset_id, path_sentencepiece_model - ) - self.sentencepiece_model_tok = ( - self.sentencepiece_model if self.param["tokenization"] else None - ) - self.kenlm_model = LoadParameters.load_kenlm_model( - lang_dataset_id, path_kenlm_model - ) - - def __call__(self, example): - keep_example = Filtering.filtering( - document=example["text"], - cond_check_number_words=self.param["cond_check_number_words"], - sentencepiece_model_tok=self.sentencepiece_model_tok, - strip_characters=self.param["strip_characters"], - number_words_min_cutoff=self.param["number_words_min_cutoff"], - number_words_max_cutoff=self.param["number_words_max_cutoff"], - cond_check_character_repetition_removal=self.param[ - "cond_check_character_repetition_removal" - ], - character_repetition_length=self.param["character_repetition_length"], - character_repetition_max_cutoff=self.param[ - "character_repetition_max_cutoff" - ], - cond_check_word_repetition_removal=self.param[ - "cond_check_word_repetition_removal" - ], - word_repetition_length=self.param["word_repetition_length"], - word_repetition_max_cutoff=self.param["word_repetition_max_cutoff"], - cond_check_special_characters=self.param["cond_check_special_characters"], - special_characters=self.param["special_characters"], - special_characters_max_cutoff=self.param["special_characters_max_cutoff"], - cond_words_augmentation=self.param["cond_words_augmentation"], - words_augmentation_group_sizes=self.param["words_augmentation_group_sizes"], - words_augmentation_join_char=self.param["words_augmentation_join_char"], - cond_check_stopwords=self.param["cond_check_stopwords"], - stopwords=self.stopwords, - stopwords_min_cutoff=self.param["stopwords_min_cutoff"], - cond_check_flagged_words=self.param["cond_check_flagged_words"], - flagged_words=self.flagged_words, - flagged_words_max_cutoff=self.param["flagged_words_max_cutoff"], - cond_check_lang_id=self.param["cond_check_lang_id"], - lang_dataset_id=self.lang_dataset_id, - model_lang_id=self.model_lang_id, - lang_id_min_cutoff=self.param["lang_id_min_cutoff"], - cond_check_perplexity=self.param["cond_check_perplexity"], - sentencepiece_model=self.sentencepiece_model, - kenlm_model=self.kenlm_model, - perplexity_max_cutoff=self.param["perplexity_max_cutoff"], - ) - return keep_example - - def __reduce__(self): - return ( - self.__class__, - ( - self.lang_dataset_id, - self.path_fasttext_model, - self.path_sentencepiece_model, - self.path_kenlm_model, - ), - ) - - -class DatasetFiltering: - def __init__( - self, - dataset, - lang_dataset_id, - path_fasttext_model, - path_sentencepiece_model, - path_kenlm_model, - num_proc, - path_dir_save_dataset, - ): - self.ds = dataset - self.lang_dataset_id = lang_dataset_id - self.path_fasttext_model = path_fasttext_model - self.path_sentencepiece_model = path_sentencepiece_model - self.path_kenlm_model = path_kenlm_model - self.num_proc = num_proc - self.path_dir_save_dataset = path_dir_save_dataset - - def modifying_documents(self): - func_dataset_modifying_documents = FunctionDatasetModifyingDocuments( - self.lang_dataset_id - ) - self.ds = 
self.ds.map(func_dataset_modifying_documents, num_proc=self.num_proc) - - def filtering(self): - func_dataset_filtering = FunctionDatasetFiltering( - self.lang_dataset_id, - self.path_fasttext_model, - self.path_sentencepiece_model, - self.path_kenlm_model, - ) - self.ds = self.ds.filter(func_dataset_filtering, num_proc=self.num_proc) - - def save_dataset(self): - pathlib.Path(self.path_dir_save_dataset).mkdir(parents=True, exist_ok=True) - path_dir_save_dataset = pathlib.PurePath( - self.path_dir_save_dataset, self.lang_dataset_id - ) - pathlib.Path(path_dir_save_dataset).mkdir(parents=True, exist_ok=True) - self.ds.save_to_disk(path_dir_save_dataset) diff --git a/spaces/ICML2022/OFA/fairseq/fairseq/modules/quantization/pq/modules/qemb.py b/spaces/ICML2022/OFA/fairseq/fairseq/modules/quantization/pq/modules/qemb.py deleted file mode 100644 index 3a74ad3c4c7c9d3203d26e7885864ba578951bfe..0000000000000000000000000000000000000000 --- a/spaces/ICML2022/OFA/fairseq/fairseq/modules/quantization/pq/modules/qemb.py +++ /dev/null @@ -1,107 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import torch -import torch.nn as nn -import torch.nn.functional as F - - -class PQEmbedding(nn.Module): - """ - Quantized counterpart of nn.Embedding module. Stores the centroids and - the assignments. The full weight is re-instantiated at each forward - pass. - - Args: - - centroids: centroids of size n_centroids x block_size - - assignments: assignments of the centroids to the subvectors - of size self.out_features x n_blocks - - bias: the non-quantized bias - - Remarks: - - We refer the reader to the official documentation of the nn.Embedding module - for the other arguments and the behavior of the module - - Performance tests on GPU show that this implementation is 10% slower than - the non-quantized nn.Embedding module for a standard training loop. 
- """ - - def __init__( - self, - centroids, - assignments, - num_embeddings, - embedding_dim, - padding_idx=None, - max_norm=None, - norm_type=2.0, - scale_grad_by_freq=False, - sparse=False, - _weight=None, - ): - super(PQEmbedding, self).__init__() - self.block_size = centroids.size(1) - self.n_centroids = centroids.size(0) - self.num_embeddings = num_embeddings - self.embedding_dim = embedding_dim - if padding_idx is not None: - if padding_idx > 0: - assert ( - padding_idx < self.num_embeddings - ), "Padding_idx must be within num_embeddings" - elif padding_idx < 0: - assert ( - padding_idx >= -self.num_embeddings - ), "Padding_idx must be within num_embeddings" - padding_idx = self.num_embeddings + padding_idx - self.padding_idx = padding_idx - self.max_norm = max_norm - self.norm_type = norm_type - self.scale_grad_by_freq = scale_grad_by_freq - self.sparse = sparse - # check compatibility - if self.embedding_dim % self.block_size != 0: - raise ValueError("Wrong PQ sizes") - if len(assignments) % self.num_embeddings != 0: - raise ValueError("Wrong PQ sizes") - # define parameters - self.centroids = nn.Parameter(centroids, requires_grad=True) - self.register_buffer("assignments", assignments) - self.register_buffer("counts", torch.bincount(assignments).type_as(centroids)) - - @property - def weight(self): - return ( - self.centroids[self.assignments] - .reshape(-1, self.num_embeddings, self.block_size) - .permute(1, 0, 2) - .flatten(1, 2) - ) - - def forward(self, input): - return F.embedding( - input, - self.weight, - self.padding_idx, - self.max_norm, - self.norm_type, - self.scale_grad_by_freq, - self.sparse, - ) - - def extra_repr(self): - s = "{num_embeddings}, {embedding_dim}" - if self.padding_idx is not None: - s += ", padding_idx={padding_idx}" - if self.max_norm is not None: - s += ", max_norm={max_norm}" - if self.norm_type != 2: - s += ", norm_type={norm_type}" - if self.scale_grad_by_freq is not False: - s += ", scale_grad_by_freq={scale_grad_by_freq}" - if self.sparse is not False: - s += ", sparse=True" - s += ", n_centroids={n_centroids}, block_size={block_size}" - - return s.format(**self.__dict__) diff --git a/spaces/Illumotion/Koboldcpp/include/cblas.h b/spaces/Illumotion/Koboldcpp/include/cblas.h deleted file mode 100644 index 48b47059d3b5b84448b4377761fbc25d77c12342..0000000000000000000000000000000000000000 --- a/spaces/Illumotion/Koboldcpp/include/cblas.h +++ /dev/null @@ -1,413 +0,0 @@ -#pragma once - -#ifndef CBLAS_H -#define CBLAS_H - -#include -#include - -#ifdef __cplusplus -extern "C" { - /* Assume C declarations for C++ */ -#endif /* __cplusplus */ - -/*Set the number of threads on runtime.*/ -void openblas_set_num_threads(int num_threads); -void goto_set_num_threads(int num_threads); - -/*Get the number of threads on runtime.*/ -int openblas_get_num_threads(void); - -/*Get the number of physical processors (cores).*/ -int openblas_get_num_procs(void); - -/*Get the build configure on runtime.*/ -char* openblas_get_config(void); - -/*Get the CPU corename on runtime.*/ -char* openblas_get_corename(void); - -#ifdef OPENBLAS_OS_LINUX -/* Sets thread affinity for OpenBLAS threads. `thread_idx` is in [0, openblas_get_num_threads()-1]. */ -int openblas_setaffinity(int thread_idx, size_t cpusetsize, cpu_set_t* cpu_set); -/* Queries thread affinity for OpenBLAS threads. `thread_idx` is in [0, openblas_get_num_threads()-1]. 
*/ -int openblas_getaffinity(int thread_idx, size_t cpusetsize, cpu_set_t* cpu_set); -#endif - -/* Get the parallelization type which is used by OpenBLAS */ -int openblas_get_parallel(void); -/* OpenBLAS is compiled for sequential use */ -#define OPENBLAS_SEQUENTIAL 0 -/* OpenBLAS is compiled using normal threading model */ -#define OPENBLAS_THREAD 1 -/* OpenBLAS is compiled using OpenMP threading model */ -#define OPENBLAS_OPENMP 2 - - -/* - * Since all of GotoBlas was written without const, - * we disable it at build time. - */ -#ifndef OPENBLAS_CONST -# define OPENBLAS_CONST const -#endif - - -#define CBLAS_INDEX size_t - -typedef enum CBLAS_ORDER {CblasRowMajor=101, CblasColMajor=102} CBLAS_ORDER; -typedef enum CBLAS_TRANSPOSE {CblasNoTrans=111, CblasTrans=112, CblasConjTrans=113, CblasConjNoTrans=114} CBLAS_TRANSPOSE; -typedef enum CBLAS_UPLO {CblasUpper=121, CblasLower=122} CBLAS_UPLO; -typedef enum CBLAS_DIAG {CblasNonUnit=131, CblasUnit=132} CBLAS_DIAG; -typedef enum CBLAS_SIDE {CblasLeft=141, CblasRight=142} CBLAS_SIDE; -typedef CBLAS_ORDER CBLAS_LAYOUT; - -float cblas_sdsdot(OPENBLAS_CONST blasint n, OPENBLAS_CONST float alpha, OPENBLAS_CONST float *x, OPENBLAS_CONST blasint incx, OPENBLAS_CONST float *y, OPENBLAS_CONST blasint incy); -double cblas_dsdot (OPENBLAS_CONST blasint n, OPENBLAS_CONST float *x, OPENBLAS_CONST blasint incx, OPENBLAS_CONST float *y, OPENBLAS_CONST blasint incy); -float cblas_sdot(OPENBLAS_CONST blasint n, OPENBLAS_CONST float *x, OPENBLAS_CONST blasint incx, OPENBLAS_CONST float *y, OPENBLAS_CONST blasint incy); -double cblas_ddot(OPENBLAS_CONST blasint n, OPENBLAS_CONST double *x, OPENBLAS_CONST blasint incx, OPENBLAS_CONST double *y, OPENBLAS_CONST blasint incy); - -openblas_complex_float cblas_cdotu(OPENBLAS_CONST blasint n, OPENBLAS_CONST void *x, OPENBLAS_CONST blasint incx, OPENBLAS_CONST void *y, OPENBLAS_CONST blasint incy); -openblas_complex_float cblas_cdotc(OPENBLAS_CONST blasint n, OPENBLAS_CONST void *x, OPENBLAS_CONST blasint incx, OPENBLAS_CONST void *y, OPENBLAS_CONST blasint incy); -openblas_complex_double cblas_zdotu(OPENBLAS_CONST blasint n, OPENBLAS_CONST void *x, OPENBLAS_CONST blasint incx, OPENBLAS_CONST void *y, OPENBLAS_CONST blasint incy); -openblas_complex_double cblas_zdotc(OPENBLAS_CONST blasint n, OPENBLAS_CONST void *x, OPENBLAS_CONST blasint incx, OPENBLAS_CONST void *y, OPENBLAS_CONST blasint incy); - -void cblas_cdotu_sub(OPENBLAS_CONST blasint n, OPENBLAS_CONST void *x, OPENBLAS_CONST blasint incx, OPENBLAS_CONST void *y, OPENBLAS_CONST blasint incy, void *ret); -void cblas_cdotc_sub(OPENBLAS_CONST blasint n, OPENBLAS_CONST void *x, OPENBLAS_CONST blasint incx, OPENBLAS_CONST void *y, OPENBLAS_CONST blasint incy, void *ret); -void cblas_zdotu_sub(OPENBLAS_CONST blasint n, OPENBLAS_CONST void *x, OPENBLAS_CONST blasint incx, OPENBLAS_CONST void *y, OPENBLAS_CONST blasint incy, void *ret); -void cblas_zdotc_sub(OPENBLAS_CONST blasint n, OPENBLAS_CONST void *x, OPENBLAS_CONST blasint incx, OPENBLAS_CONST void *y, OPENBLAS_CONST blasint incy, void *ret); - -float cblas_sasum (OPENBLAS_CONST blasint n, OPENBLAS_CONST float *x, OPENBLAS_CONST blasint incx); -double cblas_dasum (OPENBLAS_CONST blasint n, OPENBLAS_CONST double *x, OPENBLAS_CONST blasint incx); -float cblas_scasum(OPENBLAS_CONST blasint n, OPENBLAS_CONST void *x, OPENBLAS_CONST blasint incx); -double cblas_dzasum(OPENBLAS_CONST blasint n, OPENBLAS_CONST void *x, OPENBLAS_CONST blasint incx); - -float cblas_ssum (OPENBLAS_CONST blasint n, OPENBLAS_CONST float *x, 
OPENBLAS_CONST blasint incx); -double cblas_dsum (OPENBLAS_CONST blasint n, OPENBLAS_CONST double *x, OPENBLAS_CONST blasint incx); -float cblas_scsum(OPENBLAS_CONST blasint n, OPENBLAS_CONST void *x, OPENBLAS_CONST blasint incx); -double cblas_dzsum(OPENBLAS_CONST blasint n, OPENBLAS_CONST void *x, OPENBLAS_CONST blasint incx); - -float cblas_snrm2 (OPENBLAS_CONST blasint N, OPENBLAS_CONST float *X, OPENBLAS_CONST blasint incX); -double cblas_dnrm2 (OPENBLAS_CONST blasint N, OPENBLAS_CONST double *X, OPENBLAS_CONST blasint incX); -float cblas_scnrm2(OPENBLAS_CONST blasint N, OPENBLAS_CONST void *X, OPENBLAS_CONST blasint incX); -double cblas_dznrm2(OPENBLAS_CONST blasint N, OPENBLAS_CONST void *X, OPENBLAS_CONST blasint incX); - -CBLAS_INDEX cblas_isamax(OPENBLAS_CONST blasint n, OPENBLAS_CONST float *x, OPENBLAS_CONST blasint incx); -CBLAS_INDEX cblas_idamax(OPENBLAS_CONST blasint n, OPENBLAS_CONST double *x, OPENBLAS_CONST blasint incx); -CBLAS_INDEX cblas_icamax(OPENBLAS_CONST blasint n, OPENBLAS_CONST void *x, OPENBLAS_CONST blasint incx); -CBLAS_INDEX cblas_izamax(OPENBLAS_CONST blasint n, OPENBLAS_CONST void *x, OPENBLAS_CONST blasint incx); - -CBLAS_INDEX cblas_isamin(OPENBLAS_CONST blasint n, OPENBLAS_CONST float *x, OPENBLAS_CONST blasint incx); -CBLAS_INDEX cblas_idamin(OPENBLAS_CONST blasint n, OPENBLAS_CONST double *x, OPENBLAS_CONST blasint incx); -CBLAS_INDEX cblas_icamin(OPENBLAS_CONST blasint n, OPENBLAS_CONST void *x, OPENBLAS_CONST blasint incx); -CBLAS_INDEX cblas_izamin(OPENBLAS_CONST blasint n, OPENBLAS_CONST void *x, OPENBLAS_CONST blasint incx); - -CBLAS_INDEX cblas_ismax(OPENBLAS_CONST blasint n, OPENBLAS_CONST float *x, OPENBLAS_CONST blasint incx); -CBLAS_INDEX cblas_idmax(OPENBLAS_CONST blasint n, OPENBLAS_CONST double *x, OPENBLAS_CONST blasint incx); -CBLAS_INDEX cblas_icmax(OPENBLAS_CONST blasint n, OPENBLAS_CONST void *x, OPENBLAS_CONST blasint incx); -CBLAS_INDEX cblas_izmax(OPENBLAS_CONST blasint n, OPENBLAS_CONST void *x, OPENBLAS_CONST blasint incx); - -CBLAS_INDEX cblas_ismin(OPENBLAS_CONST blasint n, OPENBLAS_CONST float *x, OPENBLAS_CONST blasint incx); -CBLAS_INDEX cblas_idmin(OPENBLAS_CONST blasint n, OPENBLAS_CONST double *x, OPENBLAS_CONST blasint incx); -CBLAS_INDEX cblas_icmin(OPENBLAS_CONST blasint n, OPENBLAS_CONST void *x, OPENBLAS_CONST blasint incx); -CBLAS_INDEX cblas_izmin(OPENBLAS_CONST blasint n, OPENBLAS_CONST void *x, OPENBLAS_CONST blasint incx); - -void cblas_saxpy(OPENBLAS_CONST blasint n, OPENBLAS_CONST float alpha, OPENBLAS_CONST float *x, OPENBLAS_CONST blasint incx, float *y, OPENBLAS_CONST blasint incy); -void cblas_daxpy(OPENBLAS_CONST blasint n, OPENBLAS_CONST double alpha, OPENBLAS_CONST double *x, OPENBLAS_CONST blasint incx, double *y, OPENBLAS_CONST blasint incy); -void cblas_caxpy(OPENBLAS_CONST blasint n, OPENBLAS_CONST void *alpha, OPENBLAS_CONST void *x, OPENBLAS_CONST blasint incx, void *y, OPENBLAS_CONST blasint incy); -void cblas_zaxpy(OPENBLAS_CONST blasint n, OPENBLAS_CONST void *alpha, OPENBLAS_CONST void *x, OPENBLAS_CONST blasint incx, void *y, OPENBLAS_CONST blasint incy); - -void cblas_scopy(OPENBLAS_CONST blasint n, OPENBLAS_CONST float *x, OPENBLAS_CONST blasint incx, float *y, OPENBLAS_CONST blasint incy); -void cblas_dcopy(OPENBLAS_CONST blasint n, OPENBLAS_CONST double *x, OPENBLAS_CONST blasint incx, double *y, OPENBLAS_CONST blasint incy); -void cblas_ccopy(OPENBLAS_CONST blasint n, OPENBLAS_CONST void *x, OPENBLAS_CONST blasint incx, void *y, OPENBLAS_CONST blasint incy); -void 
cblas_zcopy(OPENBLAS_CONST blasint n, OPENBLAS_CONST void *x, OPENBLAS_CONST blasint incx, void *y, OPENBLAS_CONST blasint incy); - -void cblas_sswap(OPENBLAS_CONST blasint n, float *x, OPENBLAS_CONST blasint incx, float *y, OPENBLAS_CONST blasint incy); -void cblas_dswap(OPENBLAS_CONST blasint n, double *x, OPENBLAS_CONST blasint incx, double *y, OPENBLAS_CONST blasint incy); -void cblas_cswap(OPENBLAS_CONST blasint n, void *x, OPENBLAS_CONST blasint incx, void *y, OPENBLAS_CONST blasint incy); -void cblas_zswap(OPENBLAS_CONST blasint n, void *x, OPENBLAS_CONST blasint incx, void *y, OPENBLAS_CONST blasint incy); - -void cblas_srot(OPENBLAS_CONST blasint N, float *X, OPENBLAS_CONST blasint incX, float *Y, OPENBLAS_CONST blasint incY, OPENBLAS_CONST float c, OPENBLAS_CONST float s); -void cblas_drot(OPENBLAS_CONST blasint N, double *X, OPENBLAS_CONST blasint incX, double *Y, OPENBLAS_CONST blasint incY, OPENBLAS_CONST double c, OPENBLAS_CONST double s); -void cblas_csrot(OPENBLAS_CONST blasint n, OPENBLAS_CONST void *x, OPENBLAS_CONST blasint incx, void *y, OPENBLAS_CONST blasint incY, OPENBLAS_CONST float c, OPENBLAS_CONST float s); -void cblas_zdrot(OPENBLAS_CONST blasint n, OPENBLAS_CONST void *x, OPENBLAS_CONST blasint incx, void *y, OPENBLAS_CONST blasint incY, OPENBLAS_CONST double c, OPENBLAS_CONST double s); - -void cblas_srotg(float *a, float *b, float *c, float *s); -void cblas_drotg(double *a, double *b, double *c, double *s); -void cblas_crotg(void *a, void *b, float *c, void *s); -void cblas_zrotg(void *a, void *b, double *c, void *s); - - -void cblas_srotm(OPENBLAS_CONST blasint N, float *X, OPENBLAS_CONST blasint incX, float *Y, OPENBLAS_CONST blasint incY, OPENBLAS_CONST float *P); -void cblas_drotm(OPENBLAS_CONST blasint N, double *X, OPENBLAS_CONST blasint incX, double *Y, OPENBLAS_CONST blasint incY, OPENBLAS_CONST double *P); - -void cblas_srotmg(float *d1, float *d2, float *b1, OPENBLAS_CONST float b2, float *P); -void cblas_drotmg(double *d1, double *d2, double *b1, OPENBLAS_CONST double b2, double *P); - -void cblas_sscal(OPENBLAS_CONST blasint N, OPENBLAS_CONST float alpha, float *X, OPENBLAS_CONST blasint incX); -void cblas_dscal(OPENBLAS_CONST blasint N, OPENBLAS_CONST double alpha, double *X, OPENBLAS_CONST blasint incX); -void cblas_cscal(OPENBLAS_CONST blasint N, OPENBLAS_CONST void *alpha, void *X, OPENBLAS_CONST blasint incX); -void cblas_zscal(OPENBLAS_CONST blasint N, OPENBLAS_CONST void *alpha, void *X, OPENBLAS_CONST blasint incX); -void cblas_csscal(OPENBLAS_CONST blasint N, OPENBLAS_CONST float alpha, void *X, OPENBLAS_CONST blasint incX); -void cblas_zdscal(OPENBLAS_CONST blasint N, OPENBLAS_CONST double alpha, void *X, OPENBLAS_CONST blasint incX); - -void cblas_sgemv(OPENBLAS_CONST enum CBLAS_ORDER order, OPENBLAS_CONST enum CBLAS_TRANSPOSE trans, OPENBLAS_CONST blasint m, OPENBLAS_CONST blasint n, - OPENBLAS_CONST float alpha, OPENBLAS_CONST float *a, OPENBLAS_CONST blasint lda, OPENBLAS_CONST float *x, OPENBLAS_CONST blasint incx, OPENBLAS_CONST float beta, float *y, OPENBLAS_CONST blasint incy); -void cblas_dgemv(OPENBLAS_CONST enum CBLAS_ORDER order, OPENBLAS_CONST enum CBLAS_TRANSPOSE trans, OPENBLAS_CONST blasint m, OPENBLAS_CONST blasint n, - OPENBLAS_CONST double alpha, OPENBLAS_CONST double *a, OPENBLAS_CONST blasint lda, OPENBLAS_CONST double *x, OPENBLAS_CONST blasint incx, OPENBLAS_CONST double beta, double *y, OPENBLAS_CONST blasint incy); -void cblas_cgemv(OPENBLAS_CONST enum CBLAS_ORDER order, OPENBLAS_CONST enum CBLAS_TRANSPOSE trans, 
OPENBLAS_CONST blasint m, OPENBLAS_CONST blasint n, - OPENBLAS_CONST void *alpha, OPENBLAS_CONST void *a, OPENBLAS_CONST blasint lda, OPENBLAS_CONST void *x, OPENBLAS_CONST blasint incx, OPENBLAS_CONST void *beta, void *y, OPENBLAS_CONST blasint incy); -void cblas_zgemv(OPENBLAS_CONST enum CBLAS_ORDER order, OPENBLAS_CONST enum CBLAS_TRANSPOSE trans, OPENBLAS_CONST blasint m, OPENBLAS_CONST blasint n, - OPENBLAS_CONST void *alpha, OPENBLAS_CONST void *a, OPENBLAS_CONST blasint lda, OPENBLAS_CONST void *x, OPENBLAS_CONST blasint incx, OPENBLAS_CONST void *beta, void *y, OPENBLAS_CONST blasint incy); - -void cblas_sger (OPENBLAS_CONST enum CBLAS_ORDER order, OPENBLAS_CONST blasint M, OPENBLAS_CONST blasint N, OPENBLAS_CONST float alpha, OPENBLAS_CONST float *X, OPENBLAS_CONST blasint incX, OPENBLAS_CONST float *Y, OPENBLAS_CONST blasint incY, float *A, OPENBLAS_CONST blasint lda); -void cblas_dger (OPENBLAS_CONST enum CBLAS_ORDER order, OPENBLAS_CONST blasint M, OPENBLAS_CONST blasint N, OPENBLAS_CONST double alpha, OPENBLAS_CONST double *X, OPENBLAS_CONST blasint incX, OPENBLAS_CONST double *Y, OPENBLAS_CONST blasint incY, double *A, OPENBLAS_CONST blasint lda); -void cblas_cgeru(OPENBLAS_CONST enum CBLAS_ORDER order, OPENBLAS_CONST blasint M, OPENBLAS_CONST blasint N, OPENBLAS_CONST void *alpha, OPENBLAS_CONST void *X, OPENBLAS_CONST blasint incX, OPENBLAS_CONST void *Y, OPENBLAS_CONST blasint incY, void *A, OPENBLAS_CONST blasint lda); -void cblas_cgerc(OPENBLAS_CONST enum CBLAS_ORDER order, OPENBLAS_CONST blasint M, OPENBLAS_CONST blasint N, OPENBLAS_CONST void *alpha, OPENBLAS_CONST void *X, OPENBLAS_CONST blasint incX, OPENBLAS_CONST void *Y, OPENBLAS_CONST blasint incY, void *A, OPENBLAS_CONST blasint lda); -void cblas_zgeru(OPENBLAS_CONST enum CBLAS_ORDER order, OPENBLAS_CONST blasint M, OPENBLAS_CONST blasint N, OPENBLAS_CONST void *alpha, OPENBLAS_CONST void *X, OPENBLAS_CONST blasint incX, OPENBLAS_CONST void *Y, OPENBLAS_CONST blasint incY, void *A, OPENBLAS_CONST blasint lda); -void cblas_zgerc(OPENBLAS_CONST enum CBLAS_ORDER order, OPENBLAS_CONST blasint M, OPENBLAS_CONST blasint N, OPENBLAS_CONST void *alpha, OPENBLAS_CONST void *X, OPENBLAS_CONST blasint incX, OPENBLAS_CONST void *Y, OPENBLAS_CONST blasint incY, void *A, OPENBLAS_CONST blasint lda); - -void cblas_strsv(OPENBLAS_CONST enum CBLAS_ORDER order, OPENBLAS_CONST enum CBLAS_UPLO Uplo, OPENBLAS_CONST enum CBLAS_TRANSPOSE TransA, OPENBLAS_CONST enum CBLAS_DIAG Diag, OPENBLAS_CONST blasint N, OPENBLAS_CONST float *A, OPENBLAS_CONST blasint lda, float *X, OPENBLAS_CONST blasint incX); -void cblas_dtrsv(OPENBLAS_CONST enum CBLAS_ORDER order, OPENBLAS_CONST enum CBLAS_UPLO Uplo, OPENBLAS_CONST enum CBLAS_TRANSPOSE TransA, OPENBLAS_CONST enum CBLAS_DIAG Diag, OPENBLAS_CONST blasint N, OPENBLAS_CONST double *A, OPENBLAS_CONST blasint lda, double *X, OPENBLAS_CONST blasint incX); -void cblas_ctrsv(OPENBLAS_CONST enum CBLAS_ORDER order, OPENBLAS_CONST enum CBLAS_UPLO Uplo, OPENBLAS_CONST enum CBLAS_TRANSPOSE TransA, OPENBLAS_CONST enum CBLAS_DIAG Diag, OPENBLAS_CONST blasint N, OPENBLAS_CONST void *A, OPENBLAS_CONST blasint lda, void *X, OPENBLAS_CONST blasint incX); -void cblas_ztrsv(OPENBLAS_CONST enum CBLAS_ORDER order, OPENBLAS_CONST enum CBLAS_UPLO Uplo, OPENBLAS_CONST enum CBLAS_TRANSPOSE TransA, OPENBLAS_CONST enum CBLAS_DIAG Diag, OPENBLAS_CONST blasint N, OPENBLAS_CONST void *A, OPENBLAS_CONST blasint lda, void *X, OPENBLAS_CONST blasint incX); - -void cblas_strmv(OPENBLAS_CONST enum CBLAS_ORDER order, OPENBLAS_CONST 
enum CBLAS_UPLO Uplo, OPENBLAS_CONST enum CBLAS_TRANSPOSE TransA, OPENBLAS_CONST enum CBLAS_DIAG Diag, OPENBLAS_CONST blasint N, OPENBLAS_CONST float *A, OPENBLAS_CONST blasint lda, float *X, OPENBLAS_CONST blasint incX); -void cblas_dtrmv(OPENBLAS_CONST enum CBLAS_ORDER order, OPENBLAS_CONST enum CBLAS_UPLO Uplo, OPENBLAS_CONST enum CBLAS_TRANSPOSE TransA, OPENBLAS_CONST enum CBLAS_DIAG Diag, OPENBLAS_CONST blasint N, OPENBLAS_CONST double *A, OPENBLAS_CONST blasint lda, double *X, OPENBLAS_CONST blasint incX); -void cblas_ctrmv(OPENBLAS_CONST enum CBLAS_ORDER order, OPENBLAS_CONST enum CBLAS_UPLO Uplo, OPENBLAS_CONST enum CBLAS_TRANSPOSE TransA, OPENBLAS_CONST enum CBLAS_DIAG Diag, OPENBLAS_CONST blasint N, OPENBLAS_CONST void *A, OPENBLAS_CONST blasint lda, void *X, OPENBLAS_CONST blasint incX); -void cblas_ztrmv(OPENBLAS_CONST enum CBLAS_ORDER order, OPENBLAS_CONST enum CBLAS_UPLO Uplo, OPENBLAS_CONST enum CBLAS_TRANSPOSE TransA, OPENBLAS_CONST enum CBLAS_DIAG Diag, OPENBLAS_CONST blasint N, OPENBLAS_CONST void *A, OPENBLAS_CONST blasint lda, void *X, OPENBLAS_CONST blasint incX); - -void cblas_ssyr(OPENBLAS_CONST enum CBLAS_ORDER order, OPENBLAS_CONST enum CBLAS_UPLO Uplo, OPENBLAS_CONST blasint N, OPENBLAS_CONST float alpha, OPENBLAS_CONST float *X, OPENBLAS_CONST blasint incX, float *A, OPENBLAS_CONST blasint lda); -void cblas_dsyr(OPENBLAS_CONST enum CBLAS_ORDER order, OPENBLAS_CONST enum CBLAS_UPLO Uplo, OPENBLAS_CONST blasint N, OPENBLAS_CONST double alpha, OPENBLAS_CONST double *X, OPENBLAS_CONST blasint incX, double *A, OPENBLAS_CONST blasint lda); -void cblas_cher(OPENBLAS_CONST enum CBLAS_ORDER order, OPENBLAS_CONST enum CBLAS_UPLO Uplo, OPENBLAS_CONST blasint N, OPENBLAS_CONST float alpha, OPENBLAS_CONST void *X, OPENBLAS_CONST blasint incX, void *A, OPENBLAS_CONST blasint lda); -void cblas_zher(OPENBLAS_CONST enum CBLAS_ORDER order, OPENBLAS_CONST enum CBLAS_UPLO Uplo, OPENBLAS_CONST blasint N, OPENBLAS_CONST double alpha, OPENBLAS_CONST void *X, OPENBLAS_CONST blasint incX, void *A, OPENBLAS_CONST blasint lda); - -void cblas_ssyr2(OPENBLAS_CONST enum CBLAS_ORDER order, OPENBLAS_CONST enum CBLAS_UPLO Uplo,OPENBLAS_CONST blasint N, OPENBLAS_CONST float alpha, OPENBLAS_CONST float *X, - OPENBLAS_CONST blasint incX, OPENBLAS_CONST float *Y, OPENBLAS_CONST blasint incY, float *A, OPENBLAS_CONST blasint lda); -void cblas_dsyr2(OPENBLAS_CONST enum CBLAS_ORDER order, OPENBLAS_CONST enum CBLAS_UPLO Uplo, OPENBLAS_CONST blasint N, OPENBLAS_CONST double alpha, OPENBLAS_CONST double *X, - OPENBLAS_CONST blasint incX, OPENBLAS_CONST double *Y, OPENBLAS_CONST blasint incY, double *A, OPENBLAS_CONST blasint lda); -void cblas_cher2(OPENBLAS_CONST enum CBLAS_ORDER order, OPENBLAS_CONST enum CBLAS_UPLO Uplo, OPENBLAS_CONST blasint N, OPENBLAS_CONST void *alpha, OPENBLAS_CONST void *X, OPENBLAS_CONST blasint incX, - OPENBLAS_CONST void *Y, OPENBLAS_CONST blasint incY, void *A, OPENBLAS_CONST blasint lda); -void cblas_zher2(OPENBLAS_CONST enum CBLAS_ORDER order, OPENBLAS_CONST enum CBLAS_UPLO Uplo, OPENBLAS_CONST blasint N, OPENBLAS_CONST void *alpha, OPENBLAS_CONST void *X, OPENBLAS_CONST blasint incX, - OPENBLAS_CONST void *Y, OPENBLAS_CONST blasint incY, void *A, OPENBLAS_CONST blasint lda); - -void cblas_sgbmv(OPENBLAS_CONST enum CBLAS_ORDER order, OPENBLAS_CONST enum CBLAS_TRANSPOSE TransA, OPENBLAS_CONST blasint M, OPENBLAS_CONST blasint N, - OPENBLAS_CONST blasint KL, OPENBLAS_CONST blasint KU, OPENBLAS_CONST float alpha, OPENBLAS_CONST float *A, OPENBLAS_CONST blasint lda, 
OPENBLAS_CONST float *X, OPENBLAS_CONST blasint incX, OPENBLAS_CONST float beta, float *Y, OPENBLAS_CONST blasint incY); -void cblas_dgbmv(OPENBLAS_CONST enum CBLAS_ORDER order, OPENBLAS_CONST enum CBLAS_TRANSPOSE TransA, OPENBLAS_CONST blasint M, OPENBLAS_CONST blasint N, - OPENBLAS_CONST blasint KL, OPENBLAS_CONST blasint KU, OPENBLAS_CONST double alpha, OPENBLAS_CONST double *A, OPENBLAS_CONST blasint lda, OPENBLAS_CONST double *X, OPENBLAS_CONST blasint incX, OPENBLAS_CONST double beta, double *Y, OPENBLAS_CONST blasint incY); -void cblas_cgbmv(OPENBLAS_CONST enum CBLAS_ORDER order, OPENBLAS_CONST enum CBLAS_TRANSPOSE TransA, OPENBLAS_CONST blasint M, OPENBLAS_CONST blasint N, - OPENBLAS_CONST blasint KL, OPENBLAS_CONST blasint KU, OPENBLAS_CONST void *alpha, OPENBLAS_CONST void *A, OPENBLAS_CONST blasint lda, OPENBLAS_CONST void *X, OPENBLAS_CONST blasint incX, OPENBLAS_CONST void *beta, void *Y, OPENBLAS_CONST blasint incY); -void cblas_zgbmv(OPENBLAS_CONST enum CBLAS_ORDER order, OPENBLAS_CONST enum CBLAS_TRANSPOSE TransA, OPENBLAS_CONST blasint M, OPENBLAS_CONST blasint N, - OPENBLAS_CONST blasint KL, OPENBLAS_CONST blasint KU, OPENBLAS_CONST void *alpha, OPENBLAS_CONST void *A, OPENBLAS_CONST blasint lda, OPENBLAS_CONST void *X, OPENBLAS_CONST blasint incX, OPENBLAS_CONST void *beta, void *Y, OPENBLAS_CONST blasint incY); - -void cblas_ssbmv(OPENBLAS_CONST enum CBLAS_ORDER order, OPENBLAS_CONST enum CBLAS_UPLO Uplo, OPENBLAS_CONST blasint N, OPENBLAS_CONST blasint K, OPENBLAS_CONST float alpha, OPENBLAS_CONST float *A, - OPENBLAS_CONST blasint lda, OPENBLAS_CONST float *X, OPENBLAS_CONST blasint incX, OPENBLAS_CONST float beta, float *Y, OPENBLAS_CONST blasint incY); -void cblas_dsbmv(OPENBLAS_CONST enum CBLAS_ORDER order, OPENBLAS_CONST enum CBLAS_UPLO Uplo, OPENBLAS_CONST blasint N, OPENBLAS_CONST blasint K, OPENBLAS_CONST double alpha, OPENBLAS_CONST double *A, - OPENBLAS_CONST blasint lda, OPENBLAS_CONST double *X, OPENBLAS_CONST blasint incX, OPENBLAS_CONST double beta, double *Y, OPENBLAS_CONST blasint incY); - - -void cblas_stbmv(OPENBLAS_CONST enum CBLAS_ORDER order, OPENBLAS_CONST enum CBLAS_UPLO Uplo, OPENBLAS_CONST enum CBLAS_TRANSPOSE TransA, OPENBLAS_CONST enum CBLAS_DIAG Diag, - OPENBLAS_CONST blasint N, OPENBLAS_CONST blasint K, OPENBLAS_CONST float *A, OPENBLAS_CONST blasint lda, float *X, OPENBLAS_CONST blasint incX); -void cblas_dtbmv(OPENBLAS_CONST enum CBLAS_ORDER order, OPENBLAS_CONST enum CBLAS_UPLO Uplo, OPENBLAS_CONST enum CBLAS_TRANSPOSE TransA, OPENBLAS_CONST enum CBLAS_DIAG Diag, - OPENBLAS_CONST blasint N, OPENBLAS_CONST blasint K, OPENBLAS_CONST double *A, OPENBLAS_CONST blasint lda, double *X, OPENBLAS_CONST blasint incX); -void cblas_ctbmv(OPENBLAS_CONST enum CBLAS_ORDER order, OPENBLAS_CONST enum CBLAS_UPLO Uplo, OPENBLAS_CONST enum CBLAS_TRANSPOSE TransA, OPENBLAS_CONST enum CBLAS_DIAG Diag, - OPENBLAS_CONST blasint N, OPENBLAS_CONST blasint K, OPENBLAS_CONST void *A, OPENBLAS_CONST blasint lda, void *X, OPENBLAS_CONST blasint incX); -void cblas_ztbmv(OPENBLAS_CONST enum CBLAS_ORDER order, OPENBLAS_CONST enum CBLAS_UPLO Uplo, OPENBLAS_CONST enum CBLAS_TRANSPOSE TransA, OPENBLAS_CONST enum CBLAS_DIAG Diag, - OPENBLAS_CONST blasint N, OPENBLAS_CONST blasint K, OPENBLAS_CONST void *A, OPENBLAS_CONST blasint lda, void *X, OPENBLAS_CONST blasint incX); - -void cblas_stbsv(OPENBLAS_CONST enum CBLAS_ORDER order, OPENBLAS_CONST enum CBLAS_UPLO Uplo, OPENBLAS_CONST enum CBLAS_TRANSPOSE TransA, OPENBLAS_CONST enum CBLAS_DIAG Diag, - OPENBLAS_CONST blasint N, 
OPENBLAS_CONST blasint K, OPENBLAS_CONST float *A, OPENBLAS_CONST blasint lda, float *X, OPENBLAS_CONST blasint incX); -void cblas_dtbsv(OPENBLAS_CONST enum CBLAS_ORDER order, OPENBLAS_CONST enum CBLAS_UPLO Uplo, OPENBLAS_CONST enum CBLAS_TRANSPOSE TransA, OPENBLAS_CONST enum CBLAS_DIAG Diag, - OPENBLAS_CONST blasint N, OPENBLAS_CONST blasint K, OPENBLAS_CONST double *A, OPENBLAS_CONST blasint lda, double *X, OPENBLAS_CONST blasint incX); -void cblas_ctbsv(OPENBLAS_CONST enum CBLAS_ORDER order, OPENBLAS_CONST enum CBLAS_UPLO Uplo, OPENBLAS_CONST enum CBLAS_TRANSPOSE TransA, OPENBLAS_CONST enum CBLAS_DIAG Diag, - OPENBLAS_CONST blasint N, OPENBLAS_CONST blasint K, OPENBLAS_CONST void *A, OPENBLAS_CONST blasint lda, void *X, OPENBLAS_CONST blasint incX); -void cblas_ztbsv(OPENBLAS_CONST enum CBLAS_ORDER order, OPENBLAS_CONST enum CBLAS_UPLO Uplo, OPENBLAS_CONST enum CBLAS_TRANSPOSE TransA, OPENBLAS_CONST enum CBLAS_DIAG Diag, - OPENBLAS_CONST blasint N, OPENBLAS_CONST blasint K, OPENBLAS_CONST void *A, OPENBLAS_CONST blasint lda, void *X, OPENBLAS_CONST blasint incX); - -void cblas_stpmv(OPENBLAS_CONST enum CBLAS_ORDER order, OPENBLAS_CONST enum CBLAS_UPLO Uplo, OPENBLAS_CONST enum CBLAS_TRANSPOSE TransA, OPENBLAS_CONST enum CBLAS_DIAG Diag, - OPENBLAS_CONST blasint N, OPENBLAS_CONST float *Ap, float *X, OPENBLAS_CONST blasint incX); -void cblas_dtpmv(OPENBLAS_CONST enum CBLAS_ORDER order, OPENBLAS_CONST enum CBLAS_UPLO Uplo, OPENBLAS_CONST enum CBLAS_TRANSPOSE TransA, OPENBLAS_CONST enum CBLAS_DIAG Diag, - OPENBLAS_CONST blasint N, OPENBLAS_CONST double *Ap, double *X, OPENBLAS_CONST blasint incX); -void cblas_ctpmv(OPENBLAS_CONST enum CBLAS_ORDER order, OPENBLAS_CONST enum CBLAS_UPLO Uplo, OPENBLAS_CONST enum CBLAS_TRANSPOSE TransA, OPENBLAS_CONST enum CBLAS_DIAG Diag, - OPENBLAS_CONST blasint N, OPENBLAS_CONST void *Ap, void *X, OPENBLAS_CONST blasint incX); -void cblas_ztpmv(OPENBLAS_CONST enum CBLAS_ORDER order, OPENBLAS_CONST enum CBLAS_UPLO Uplo, OPENBLAS_CONST enum CBLAS_TRANSPOSE TransA, OPENBLAS_CONST enum CBLAS_DIAG Diag, - OPENBLAS_CONST blasint N, OPENBLAS_CONST void *Ap, void *X, OPENBLAS_CONST blasint incX); - -void cblas_stpsv(OPENBLAS_CONST enum CBLAS_ORDER order, OPENBLAS_CONST enum CBLAS_UPLO Uplo, OPENBLAS_CONST enum CBLAS_TRANSPOSE TransA, OPENBLAS_CONST enum CBLAS_DIAG Diag, - OPENBLAS_CONST blasint N, OPENBLAS_CONST float *Ap, float *X, OPENBLAS_CONST blasint incX); -void cblas_dtpsv(OPENBLAS_CONST enum CBLAS_ORDER order, OPENBLAS_CONST enum CBLAS_UPLO Uplo, OPENBLAS_CONST enum CBLAS_TRANSPOSE TransA, OPENBLAS_CONST enum CBLAS_DIAG Diag, - OPENBLAS_CONST blasint N, OPENBLAS_CONST double *Ap, double *X, OPENBLAS_CONST blasint incX); -void cblas_ctpsv(OPENBLAS_CONST enum CBLAS_ORDER order, OPENBLAS_CONST enum CBLAS_UPLO Uplo, OPENBLAS_CONST enum CBLAS_TRANSPOSE TransA, OPENBLAS_CONST enum CBLAS_DIAG Diag, - OPENBLAS_CONST blasint N, OPENBLAS_CONST void *Ap, void *X, OPENBLAS_CONST blasint incX); -void cblas_ztpsv(OPENBLAS_CONST enum CBLAS_ORDER order, OPENBLAS_CONST enum CBLAS_UPLO Uplo, OPENBLAS_CONST enum CBLAS_TRANSPOSE TransA, OPENBLAS_CONST enum CBLAS_DIAG Diag, - OPENBLAS_CONST blasint N, OPENBLAS_CONST void *Ap, void *X, OPENBLAS_CONST blasint incX); - -void cblas_ssymv(OPENBLAS_CONST enum CBLAS_ORDER order, OPENBLAS_CONST enum CBLAS_UPLO Uplo, OPENBLAS_CONST blasint N, OPENBLAS_CONST float alpha, OPENBLAS_CONST float *A, - OPENBLAS_CONST blasint lda, OPENBLAS_CONST float *X, OPENBLAS_CONST blasint incX, OPENBLAS_CONST float beta, float *Y, OPENBLAS_CONST blasint 
incY); -void cblas_dsymv(OPENBLAS_CONST enum CBLAS_ORDER order, OPENBLAS_CONST enum CBLAS_UPLO Uplo, OPENBLAS_CONST blasint N, OPENBLAS_CONST double alpha, OPENBLAS_CONST double *A, - OPENBLAS_CONST blasint lda, OPENBLAS_CONST double *X, OPENBLAS_CONST blasint incX, OPENBLAS_CONST double beta, double *Y, OPENBLAS_CONST blasint incY); -void cblas_chemv(OPENBLAS_CONST enum CBLAS_ORDER order, OPENBLAS_CONST enum CBLAS_UPLO Uplo, OPENBLAS_CONST blasint N, OPENBLAS_CONST void *alpha, OPENBLAS_CONST void *A, - OPENBLAS_CONST blasint lda, OPENBLAS_CONST void *X, OPENBLAS_CONST blasint incX, OPENBLAS_CONST void *beta, void *Y, OPENBLAS_CONST blasint incY); -void cblas_zhemv(OPENBLAS_CONST enum CBLAS_ORDER order, OPENBLAS_CONST enum CBLAS_UPLO Uplo, OPENBLAS_CONST blasint N, OPENBLAS_CONST void *alpha, OPENBLAS_CONST void *A, - OPENBLAS_CONST blasint lda, OPENBLAS_CONST void *X, OPENBLAS_CONST blasint incX, OPENBLAS_CONST void *beta, void *Y, OPENBLAS_CONST blasint incY); - - -void cblas_sspmv(OPENBLAS_CONST enum CBLAS_ORDER order, OPENBLAS_CONST enum CBLAS_UPLO Uplo, OPENBLAS_CONST blasint N, OPENBLAS_CONST float alpha, OPENBLAS_CONST float *Ap, - OPENBLAS_CONST float *X, OPENBLAS_CONST blasint incX, OPENBLAS_CONST float beta, float *Y, OPENBLAS_CONST blasint incY); -void cblas_dspmv(OPENBLAS_CONST enum CBLAS_ORDER order, OPENBLAS_CONST enum CBLAS_UPLO Uplo, OPENBLAS_CONST blasint N, OPENBLAS_CONST double alpha, OPENBLAS_CONST double *Ap, - OPENBLAS_CONST double *X, OPENBLAS_CONST blasint incX, OPENBLAS_CONST double beta, double *Y, OPENBLAS_CONST blasint incY); - -void cblas_sspr(OPENBLAS_CONST enum CBLAS_ORDER order, OPENBLAS_CONST enum CBLAS_UPLO Uplo, OPENBLAS_CONST blasint N, OPENBLAS_CONST float alpha, OPENBLAS_CONST float *X, OPENBLAS_CONST blasint incX, float *Ap); -void cblas_dspr(OPENBLAS_CONST enum CBLAS_ORDER order, OPENBLAS_CONST enum CBLAS_UPLO Uplo, OPENBLAS_CONST blasint N, OPENBLAS_CONST double alpha, OPENBLAS_CONST double *X, OPENBLAS_CONST blasint incX, double *Ap); - -void cblas_chpr(OPENBLAS_CONST enum CBLAS_ORDER order, OPENBLAS_CONST enum CBLAS_UPLO Uplo, OPENBLAS_CONST blasint N, OPENBLAS_CONST float alpha, OPENBLAS_CONST void *X, OPENBLAS_CONST blasint incX, void *A); -void cblas_zhpr(OPENBLAS_CONST enum CBLAS_ORDER order, OPENBLAS_CONST enum CBLAS_UPLO Uplo, OPENBLAS_CONST blasint N, OPENBLAS_CONST double alpha, OPENBLAS_CONST void *X,OPENBLAS_CONST blasint incX, void *A); - -void cblas_sspr2(OPENBLAS_CONST enum CBLAS_ORDER order, OPENBLAS_CONST enum CBLAS_UPLO Uplo, OPENBLAS_CONST blasint N, OPENBLAS_CONST float alpha, OPENBLAS_CONST float *X, OPENBLAS_CONST blasint incX, OPENBLAS_CONST float *Y, OPENBLAS_CONST blasint incY, float *A); -void cblas_dspr2(OPENBLAS_CONST enum CBLAS_ORDER order, OPENBLAS_CONST enum CBLAS_UPLO Uplo, OPENBLAS_CONST blasint N, OPENBLAS_CONST double alpha, OPENBLAS_CONST double *X, OPENBLAS_CONST blasint incX, OPENBLAS_CONST double *Y, OPENBLAS_CONST blasint incY, double *A); -void cblas_chpr2(OPENBLAS_CONST enum CBLAS_ORDER order, OPENBLAS_CONST enum CBLAS_UPLO Uplo, OPENBLAS_CONST blasint N, OPENBLAS_CONST void *alpha, OPENBLAS_CONST void *X, OPENBLAS_CONST blasint incX, OPENBLAS_CONST void *Y, OPENBLAS_CONST blasint incY, void *Ap); -void cblas_zhpr2(OPENBLAS_CONST enum CBLAS_ORDER order, OPENBLAS_CONST enum CBLAS_UPLO Uplo, OPENBLAS_CONST blasint N, OPENBLAS_CONST void *alpha, OPENBLAS_CONST void *X, OPENBLAS_CONST blasint incX, OPENBLAS_CONST void *Y, OPENBLAS_CONST blasint incY, void *Ap); - -void cblas_chbmv(OPENBLAS_CONST enum 
CBLAS_ORDER order, OPENBLAS_CONST enum CBLAS_UPLO Uplo, OPENBLAS_CONST blasint N, OPENBLAS_CONST blasint K, - OPENBLAS_CONST void *alpha, OPENBLAS_CONST void *A, OPENBLAS_CONST blasint lda, OPENBLAS_CONST void *X, OPENBLAS_CONST blasint incX, OPENBLAS_CONST void *beta, void *Y, OPENBLAS_CONST blasint incY); -void cblas_zhbmv(OPENBLAS_CONST enum CBLAS_ORDER order, OPENBLAS_CONST enum CBLAS_UPLO Uplo, OPENBLAS_CONST blasint N, OPENBLAS_CONST blasint K, - OPENBLAS_CONST void *alpha, OPENBLAS_CONST void *A, OPENBLAS_CONST blasint lda, OPENBLAS_CONST void *X, OPENBLAS_CONST blasint incX, OPENBLAS_CONST void *beta, void *Y, OPENBLAS_CONST blasint incY); - -void cblas_chpmv(OPENBLAS_CONST enum CBLAS_ORDER order, OPENBLAS_CONST enum CBLAS_UPLO Uplo, OPENBLAS_CONST blasint N, - OPENBLAS_CONST void *alpha, OPENBLAS_CONST void *Ap, OPENBLAS_CONST void *X, OPENBLAS_CONST blasint incX, OPENBLAS_CONST void *beta, void *Y, OPENBLAS_CONST blasint incY); -void cblas_zhpmv(OPENBLAS_CONST enum CBLAS_ORDER order, OPENBLAS_CONST enum CBLAS_UPLO Uplo, OPENBLAS_CONST blasint N, - OPENBLAS_CONST void *alpha, OPENBLAS_CONST void *Ap, OPENBLAS_CONST void *X, OPENBLAS_CONST blasint incX, OPENBLAS_CONST void *beta, void *Y, OPENBLAS_CONST blasint incY); - -void cblas_sgemm(OPENBLAS_CONST enum CBLAS_ORDER Order, OPENBLAS_CONST enum CBLAS_TRANSPOSE TransA, OPENBLAS_CONST enum CBLAS_TRANSPOSE TransB, OPENBLAS_CONST blasint M, OPENBLAS_CONST blasint N, OPENBLAS_CONST blasint K, - OPENBLAS_CONST float alpha, OPENBLAS_CONST float *A, OPENBLAS_CONST blasint lda, OPENBLAS_CONST float *B, OPENBLAS_CONST blasint ldb, OPENBLAS_CONST float beta, float *C, OPENBLAS_CONST blasint ldc); -void cblas_dgemm(OPENBLAS_CONST enum CBLAS_ORDER Order, OPENBLAS_CONST enum CBLAS_TRANSPOSE TransA, OPENBLAS_CONST enum CBLAS_TRANSPOSE TransB, OPENBLAS_CONST blasint M, OPENBLAS_CONST blasint N, OPENBLAS_CONST blasint K, - OPENBLAS_CONST double alpha, OPENBLAS_CONST double *A, OPENBLAS_CONST blasint lda, OPENBLAS_CONST double *B, OPENBLAS_CONST blasint ldb, OPENBLAS_CONST double beta, double *C, OPENBLAS_CONST blasint ldc); -void cblas_cgemm(OPENBLAS_CONST enum CBLAS_ORDER Order, OPENBLAS_CONST enum CBLAS_TRANSPOSE TransA, OPENBLAS_CONST enum CBLAS_TRANSPOSE TransB, OPENBLAS_CONST blasint M, OPENBLAS_CONST blasint N, OPENBLAS_CONST blasint K, - OPENBLAS_CONST void *alpha, OPENBLAS_CONST void *A, OPENBLAS_CONST blasint lda, OPENBLAS_CONST void *B, OPENBLAS_CONST blasint ldb, OPENBLAS_CONST void *beta, void *C, OPENBLAS_CONST blasint ldc); -void cblas_cgemm3m(OPENBLAS_CONST enum CBLAS_ORDER Order, OPENBLAS_CONST enum CBLAS_TRANSPOSE TransA, OPENBLAS_CONST enum CBLAS_TRANSPOSE TransB, OPENBLAS_CONST blasint M, OPENBLAS_CONST blasint N, OPENBLAS_CONST blasint K, - OPENBLAS_CONST void *alpha, OPENBLAS_CONST void *A, OPENBLAS_CONST blasint lda, OPENBLAS_CONST void *B, OPENBLAS_CONST blasint ldb, OPENBLAS_CONST void *beta, void *C, OPENBLAS_CONST blasint ldc); -void cblas_zgemm(OPENBLAS_CONST enum CBLAS_ORDER Order, OPENBLAS_CONST enum CBLAS_TRANSPOSE TransA, OPENBLAS_CONST enum CBLAS_TRANSPOSE TransB, OPENBLAS_CONST blasint M, OPENBLAS_CONST blasint N, OPENBLAS_CONST blasint K, - OPENBLAS_CONST void *alpha, OPENBLAS_CONST void *A, OPENBLAS_CONST blasint lda, OPENBLAS_CONST void *B, OPENBLAS_CONST blasint ldb, OPENBLAS_CONST void *beta, void *C, OPENBLAS_CONST blasint ldc); -void cblas_zgemm3m(OPENBLAS_CONST enum CBLAS_ORDER Order, OPENBLAS_CONST enum CBLAS_TRANSPOSE TransA, OPENBLAS_CONST enum CBLAS_TRANSPOSE TransB, OPENBLAS_CONST blasint M, 
OPENBLAS_CONST blasint N, OPENBLAS_CONST blasint K, - OPENBLAS_CONST void *alpha, OPENBLAS_CONST void *A, OPENBLAS_CONST blasint lda, OPENBLAS_CONST void *B, OPENBLAS_CONST blasint ldb, OPENBLAS_CONST void *beta, void *C, OPENBLAS_CONST blasint ldc); - - -void cblas_ssymm(OPENBLAS_CONST enum CBLAS_ORDER Order, OPENBLAS_CONST enum CBLAS_SIDE Side, OPENBLAS_CONST enum CBLAS_UPLO Uplo, OPENBLAS_CONST blasint M, OPENBLAS_CONST blasint N, - OPENBLAS_CONST float alpha, OPENBLAS_CONST float *A, OPENBLAS_CONST blasint lda, OPENBLAS_CONST float *B, OPENBLAS_CONST blasint ldb, OPENBLAS_CONST float beta, float *C, OPENBLAS_CONST blasint ldc); -void cblas_dsymm(OPENBLAS_CONST enum CBLAS_ORDER Order, OPENBLAS_CONST enum CBLAS_SIDE Side, OPENBLAS_CONST enum CBLAS_UPLO Uplo, OPENBLAS_CONST blasint M, OPENBLAS_CONST blasint N, - OPENBLAS_CONST double alpha, OPENBLAS_CONST double *A, OPENBLAS_CONST blasint lda, OPENBLAS_CONST double *B, OPENBLAS_CONST blasint ldb, OPENBLAS_CONST double beta, double *C, OPENBLAS_CONST blasint ldc); -void cblas_csymm(OPENBLAS_CONST enum CBLAS_ORDER Order, OPENBLAS_CONST enum CBLAS_SIDE Side, OPENBLAS_CONST enum CBLAS_UPLO Uplo, OPENBLAS_CONST blasint M, OPENBLAS_CONST blasint N, - OPENBLAS_CONST void *alpha, OPENBLAS_CONST void *A, OPENBLAS_CONST blasint lda, OPENBLAS_CONST void *B, OPENBLAS_CONST blasint ldb, OPENBLAS_CONST void *beta, void *C, OPENBLAS_CONST blasint ldc); -void cblas_zsymm(OPENBLAS_CONST enum CBLAS_ORDER Order, OPENBLAS_CONST enum CBLAS_SIDE Side, OPENBLAS_CONST enum CBLAS_UPLO Uplo, OPENBLAS_CONST blasint M, OPENBLAS_CONST blasint N, - OPENBLAS_CONST void *alpha, OPENBLAS_CONST void *A, OPENBLAS_CONST blasint lda, OPENBLAS_CONST void *B, OPENBLAS_CONST blasint ldb, OPENBLAS_CONST void *beta, void *C, OPENBLAS_CONST blasint ldc); - -void cblas_ssyrk(OPENBLAS_CONST enum CBLAS_ORDER Order, OPENBLAS_CONST enum CBLAS_UPLO Uplo, OPENBLAS_CONST enum CBLAS_TRANSPOSE Trans, - OPENBLAS_CONST blasint N, OPENBLAS_CONST blasint K, OPENBLAS_CONST float alpha, OPENBLAS_CONST float *A, OPENBLAS_CONST blasint lda, OPENBLAS_CONST float beta, float *C, OPENBLAS_CONST blasint ldc); -void cblas_dsyrk(OPENBLAS_CONST enum CBLAS_ORDER Order, OPENBLAS_CONST enum CBLAS_UPLO Uplo, OPENBLAS_CONST enum CBLAS_TRANSPOSE Trans, - OPENBLAS_CONST blasint N, OPENBLAS_CONST blasint K, OPENBLAS_CONST double alpha, OPENBLAS_CONST double *A, OPENBLAS_CONST blasint lda, OPENBLAS_CONST double beta, double *C, OPENBLAS_CONST blasint ldc); -void cblas_csyrk(OPENBLAS_CONST enum CBLAS_ORDER Order, OPENBLAS_CONST enum CBLAS_UPLO Uplo, OPENBLAS_CONST enum CBLAS_TRANSPOSE Trans, - OPENBLAS_CONST blasint N, OPENBLAS_CONST blasint K, OPENBLAS_CONST void *alpha, OPENBLAS_CONST void *A, OPENBLAS_CONST blasint lda, OPENBLAS_CONST void *beta, void *C, OPENBLAS_CONST blasint ldc); -void cblas_zsyrk(OPENBLAS_CONST enum CBLAS_ORDER Order, OPENBLAS_CONST enum CBLAS_UPLO Uplo, OPENBLAS_CONST enum CBLAS_TRANSPOSE Trans, - OPENBLAS_CONST blasint N, OPENBLAS_CONST blasint K, OPENBLAS_CONST void *alpha, OPENBLAS_CONST void *A, OPENBLAS_CONST blasint lda, OPENBLAS_CONST void *beta, void *C, OPENBLAS_CONST blasint ldc); - -void cblas_ssyr2k(OPENBLAS_CONST enum CBLAS_ORDER Order, OPENBLAS_CONST enum CBLAS_UPLO Uplo, OPENBLAS_CONST enum CBLAS_TRANSPOSE Trans, - OPENBLAS_CONST blasint N, OPENBLAS_CONST blasint K, OPENBLAS_CONST float alpha, OPENBLAS_CONST float *A, OPENBLAS_CONST blasint lda, OPENBLAS_CONST float *B, OPENBLAS_CONST blasint ldb, OPENBLAS_CONST float beta, float *C, OPENBLAS_CONST blasint ldc); -void 
cblas_dsyr2k(OPENBLAS_CONST enum CBLAS_ORDER Order, OPENBLAS_CONST enum CBLAS_UPLO Uplo, OPENBLAS_CONST enum CBLAS_TRANSPOSE Trans, - OPENBLAS_CONST blasint N, OPENBLAS_CONST blasint K, OPENBLAS_CONST double alpha, OPENBLAS_CONST double *A, OPENBLAS_CONST blasint lda, OPENBLAS_CONST double *B, OPENBLAS_CONST blasint ldb, OPENBLAS_CONST double beta, double *C, OPENBLAS_CONST blasint ldc); -void cblas_csyr2k(OPENBLAS_CONST enum CBLAS_ORDER Order, OPENBLAS_CONST enum CBLAS_UPLO Uplo, OPENBLAS_CONST enum CBLAS_TRANSPOSE Trans, - OPENBLAS_CONST blasint N, OPENBLAS_CONST blasint K, OPENBLAS_CONST void *alpha, OPENBLAS_CONST void *A, OPENBLAS_CONST blasint lda, OPENBLAS_CONST void *B, OPENBLAS_CONST blasint ldb, OPENBLAS_CONST void *beta, void *C, OPENBLAS_CONST blasint ldc); -void cblas_zsyr2k(OPENBLAS_CONST enum CBLAS_ORDER Order, OPENBLAS_CONST enum CBLAS_UPLO Uplo, OPENBLAS_CONST enum CBLAS_TRANSPOSE Trans, - OPENBLAS_CONST blasint N, OPENBLAS_CONST blasint K, OPENBLAS_CONST void *alpha, OPENBLAS_CONST void *A, OPENBLAS_CONST blasint lda, OPENBLAS_CONST void *B, OPENBLAS_CONST blasint ldb, OPENBLAS_CONST void *beta, void *C, OPENBLAS_CONST blasint ldc); - -void cblas_strmm(OPENBLAS_CONST enum CBLAS_ORDER Order, OPENBLAS_CONST enum CBLAS_SIDE Side, OPENBLAS_CONST enum CBLAS_UPLO Uplo, OPENBLAS_CONST enum CBLAS_TRANSPOSE TransA, - OPENBLAS_CONST enum CBLAS_DIAG Diag, OPENBLAS_CONST blasint M, OPENBLAS_CONST blasint N, OPENBLAS_CONST float alpha, OPENBLAS_CONST float *A, OPENBLAS_CONST blasint lda, float *B, OPENBLAS_CONST blasint ldb); -void cblas_dtrmm(OPENBLAS_CONST enum CBLAS_ORDER Order, OPENBLAS_CONST enum CBLAS_SIDE Side, OPENBLAS_CONST enum CBLAS_UPLO Uplo, OPENBLAS_CONST enum CBLAS_TRANSPOSE TransA, - OPENBLAS_CONST enum CBLAS_DIAG Diag, OPENBLAS_CONST blasint M, OPENBLAS_CONST blasint N, OPENBLAS_CONST double alpha, OPENBLAS_CONST double *A, OPENBLAS_CONST blasint lda, double *B, OPENBLAS_CONST blasint ldb); -void cblas_ctrmm(OPENBLAS_CONST enum CBLAS_ORDER Order, OPENBLAS_CONST enum CBLAS_SIDE Side, OPENBLAS_CONST enum CBLAS_UPLO Uplo, OPENBLAS_CONST enum CBLAS_TRANSPOSE TransA, - OPENBLAS_CONST enum CBLAS_DIAG Diag, OPENBLAS_CONST blasint M, OPENBLAS_CONST blasint N, OPENBLAS_CONST void *alpha, OPENBLAS_CONST void *A, OPENBLAS_CONST blasint lda, void *B, OPENBLAS_CONST blasint ldb); -void cblas_ztrmm(OPENBLAS_CONST enum CBLAS_ORDER Order, OPENBLAS_CONST enum CBLAS_SIDE Side, OPENBLAS_CONST enum CBLAS_UPLO Uplo, OPENBLAS_CONST enum CBLAS_TRANSPOSE TransA, - OPENBLAS_CONST enum CBLAS_DIAG Diag, OPENBLAS_CONST blasint M, OPENBLAS_CONST blasint N, OPENBLAS_CONST void *alpha, OPENBLAS_CONST void *A, OPENBLAS_CONST blasint lda, void *B, OPENBLAS_CONST blasint ldb); - -void cblas_strsm(OPENBLAS_CONST enum CBLAS_ORDER Order, OPENBLAS_CONST enum CBLAS_SIDE Side, OPENBLAS_CONST enum CBLAS_UPLO Uplo, OPENBLAS_CONST enum CBLAS_TRANSPOSE TransA, - OPENBLAS_CONST enum CBLAS_DIAG Diag, OPENBLAS_CONST blasint M, OPENBLAS_CONST blasint N, OPENBLAS_CONST float alpha, OPENBLAS_CONST float *A, OPENBLAS_CONST blasint lda, float *B, OPENBLAS_CONST blasint ldb); -void cblas_dtrsm(OPENBLAS_CONST enum CBLAS_ORDER Order, OPENBLAS_CONST enum CBLAS_SIDE Side, OPENBLAS_CONST enum CBLAS_UPLO Uplo, OPENBLAS_CONST enum CBLAS_TRANSPOSE TransA, - OPENBLAS_CONST enum CBLAS_DIAG Diag, OPENBLAS_CONST blasint M, OPENBLAS_CONST blasint N, OPENBLAS_CONST double alpha, OPENBLAS_CONST double *A, OPENBLAS_CONST blasint lda, double *B, OPENBLAS_CONST blasint ldb); -void cblas_ctrsm(OPENBLAS_CONST enum CBLAS_ORDER Order, 
OPENBLAS_CONST enum CBLAS_SIDE Side, OPENBLAS_CONST enum CBLAS_UPLO Uplo, OPENBLAS_CONST enum CBLAS_TRANSPOSE TransA, - OPENBLAS_CONST enum CBLAS_DIAG Diag, OPENBLAS_CONST blasint M, OPENBLAS_CONST blasint N, OPENBLAS_CONST void *alpha, OPENBLAS_CONST void *A, OPENBLAS_CONST blasint lda, void *B, OPENBLAS_CONST blasint ldb); -void cblas_ztrsm(OPENBLAS_CONST enum CBLAS_ORDER Order, OPENBLAS_CONST enum CBLAS_SIDE Side, OPENBLAS_CONST enum CBLAS_UPLO Uplo, OPENBLAS_CONST enum CBLAS_TRANSPOSE TransA, - OPENBLAS_CONST enum CBLAS_DIAG Diag, OPENBLAS_CONST blasint M, OPENBLAS_CONST blasint N, OPENBLAS_CONST void *alpha, OPENBLAS_CONST void *A, OPENBLAS_CONST blasint lda, void *B, OPENBLAS_CONST blasint ldb); - -void cblas_chemm(OPENBLAS_CONST enum CBLAS_ORDER Order, OPENBLAS_CONST enum CBLAS_SIDE Side, OPENBLAS_CONST enum CBLAS_UPLO Uplo, OPENBLAS_CONST blasint M, OPENBLAS_CONST blasint N, - OPENBLAS_CONST void *alpha, OPENBLAS_CONST void *A, OPENBLAS_CONST blasint lda, OPENBLAS_CONST void *B, OPENBLAS_CONST blasint ldb, OPENBLAS_CONST void *beta, void *C, OPENBLAS_CONST blasint ldc); -void cblas_zhemm(OPENBLAS_CONST enum CBLAS_ORDER Order, OPENBLAS_CONST enum CBLAS_SIDE Side, OPENBLAS_CONST enum CBLAS_UPLO Uplo, OPENBLAS_CONST blasint M, OPENBLAS_CONST blasint N, - OPENBLAS_CONST void *alpha, OPENBLAS_CONST void *A, OPENBLAS_CONST blasint lda, OPENBLAS_CONST void *B, OPENBLAS_CONST blasint ldb, OPENBLAS_CONST void *beta, void *C, OPENBLAS_CONST blasint ldc); - -void cblas_cherk(OPENBLAS_CONST enum CBLAS_ORDER Order, OPENBLAS_CONST enum CBLAS_UPLO Uplo, OPENBLAS_CONST enum CBLAS_TRANSPOSE Trans, OPENBLAS_CONST blasint N, OPENBLAS_CONST blasint K, - OPENBLAS_CONST float alpha, OPENBLAS_CONST void *A, OPENBLAS_CONST blasint lda, OPENBLAS_CONST float beta, void *C, OPENBLAS_CONST blasint ldc); -void cblas_zherk(OPENBLAS_CONST enum CBLAS_ORDER Order, OPENBLAS_CONST enum CBLAS_UPLO Uplo, OPENBLAS_CONST enum CBLAS_TRANSPOSE Trans, OPENBLAS_CONST blasint N, OPENBLAS_CONST blasint K, - OPENBLAS_CONST double alpha, OPENBLAS_CONST void *A, OPENBLAS_CONST blasint lda, OPENBLAS_CONST double beta, void *C, OPENBLAS_CONST blasint ldc); - -void cblas_cher2k(OPENBLAS_CONST enum CBLAS_ORDER Order, OPENBLAS_CONST enum CBLAS_UPLO Uplo, OPENBLAS_CONST enum CBLAS_TRANSPOSE Trans, OPENBLAS_CONST blasint N, OPENBLAS_CONST blasint K, - OPENBLAS_CONST void *alpha, OPENBLAS_CONST void *A, OPENBLAS_CONST blasint lda, OPENBLAS_CONST void *B, OPENBLAS_CONST blasint ldb, OPENBLAS_CONST float beta, void *C, OPENBLAS_CONST blasint ldc); -void cblas_zher2k(OPENBLAS_CONST enum CBLAS_ORDER Order, OPENBLAS_CONST enum CBLAS_UPLO Uplo, OPENBLAS_CONST enum CBLAS_TRANSPOSE Trans, OPENBLAS_CONST blasint N, OPENBLAS_CONST blasint K, - OPENBLAS_CONST void *alpha, OPENBLAS_CONST void *A, OPENBLAS_CONST blasint lda, OPENBLAS_CONST void *B, OPENBLAS_CONST blasint ldb, OPENBLAS_CONST double beta, void *C, OPENBLAS_CONST blasint ldc); - -void cblas_xerbla(blasint p, char *rout, char *form, ...); - -/*** BLAS extensions ***/ - -void cblas_saxpby(OPENBLAS_CONST blasint n, OPENBLAS_CONST float alpha, OPENBLAS_CONST float *x, OPENBLAS_CONST blasint incx,OPENBLAS_CONST float beta, float *y, OPENBLAS_CONST blasint incy); - -void cblas_daxpby(OPENBLAS_CONST blasint n, OPENBLAS_CONST double alpha, OPENBLAS_CONST double *x, OPENBLAS_CONST blasint incx,OPENBLAS_CONST double beta, double *y, OPENBLAS_CONST blasint incy); - -void cblas_caxpby(OPENBLAS_CONST blasint n, OPENBLAS_CONST void *alpha, OPENBLAS_CONST void *x, OPENBLAS_CONST blasint 
incx,OPENBLAS_CONST void *beta, void *y, OPENBLAS_CONST blasint incy); - -void cblas_zaxpby(OPENBLAS_CONST blasint n, OPENBLAS_CONST void *alpha, OPENBLAS_CONST void *x, OPENBLAS_CONST blasint incx,OPENBLAS_CONST void *beta, void *y, OPENBLAS_CONST blasint incy); - -void cblas_somatcopy(OPENBLAS_CONST enum CBLAS_ORDER CORDER, OPENBLAS_CONST enum CBLAS_TRANSPOSE CTRANS, OPENBLAS_CONST blasint crows, OPENBLAS_CONST blasint ccols, OPENBLAS_CONST float calpha, OPENBLAS_CONST float *a, - OPENBLAS_CONST blasint clda, float *b, OPENBLAS_CONST blasint cldb); -void cblas_domatcopy(OPENBLAS_CONST enum CBLAS_ORDER CORDER, OPENBLAS_CONST enum CBLAS_TRANSPOSE CTRANS, OPENBLAS_CONST blasint crows, OPENBLAS_CONST blasint ccols, OPENBLAS_CONST double calpha, OPENBLAS_CONST double *a, - OPENBLAS_CONST blasint clda, double *b, OPENBLAS_CONST blasint cldb); -void cblas_comatcopy(OPENBLAS_CONST enum CBLAS_ORDER CORDER, OPENBLAS_CONST enum CBLAS_TRANSPOSE CTRANS, OPENBLAS_CONST blasint crows, OPENBLAS_CONST blasint ccols, OPENBLAS_CONST float* calpha, OPENBLAS_CONST float* a, - OPENBLAS_CONST blasint clda, float*b, OPENBLAS_CONST blasint cldb); -void cblas_zomatcopy(OPENBLAS_CONST enum CBLAS_ORDER CORDER, OPENBLAS_CONST enum CBLAS_TRANSPOSE CTRANS, OPENBLAS_CONST blasint crows, OPENBLAS_CONST blasint ccols, OPENBLAS_CONST double* calpha, OPENBLAS_CONST double* a, - OPENBLAS_CONST blasint clda, double *b, OPENBLAS_CONST blasint cldb); - -void cblas_simatcopy(OPENBLAS_CONST enum CBLAS_ORDER CORDER, OPENBLAS_CONST enum CBLAS_TRANSPOSE CTRANS, OPENBLAS_CONST blasint crows, OPENBLAS_CONST blasint ccols, OPENBLAS_CONST float calpha, float *a, - OPENBLAS_CONST blasint clda, OPENBLAS_CONST blasint cldb); -void cblas_dimatcopy(OPENBLAS_CONST enum CBLAS_ORDER CORDER, OPENBLAS_CONST enum CBLAS_TRANSPOSE CTRANS, OPENBLAS_CONST blasint crows, OPENBLAS_CONST blasint ccols, OPENBLAS_CONST double calpha, double *a, - OPENBLAS_CONST blasint clda, OPENBLAS_CONST blasint cldb); -void cblas_cimatcopy(OPENBLAS_CONST enum CBLAS_ORDER CORDER, OPENBLAS_CONST enum CBLAS_TRANSPOSE CTRANS, OPENBLAS_CONST blasint crows, OPENBLAS_CONST blasint ccols, OPENBLAS_CONST float* calpha, float* a, - OPENBLAS_CONST blasint clda, OPENBLAS_CONST blasint cldb); -void cblas_zimatcopy(OPENBLAS_CONST enum CBLAS_ORDER CORDER, OPENBLAS_CONST enum CBLAS_TRANSPOSE CTRANS, OPENBLAS_CONST blasint crows, OPENBLAS_CONST blasint ccols, OPENBLAS_CONST double* calpha, double* a, - OPENBLAS_CONST blasint clda, OPENBLAS_CONST blasint cldb); - -void cblas_sgeadd(OPENBLAS_CONST enum CBLAS_ORDER CORDER,OPENBLAS_CONST blasint crows, OPENBLAS_CONST blasint ccols, OPENBLAS_CONST float calpha, float *a, OPENBLAS_CONST blasint clda, OPENBLAS_CONST float cbeta, - float *c, OPENBLAS_CONST blasint cldc); -void cblas_dgeadd(OPENBLAS_CONST enum CBLAS_ORDER CORDER,OPENBLAS_CONST blasint crows, OPENBLAS_CONST blasint ccols, OPENBLAS_CONST double calpha, double *a, OPENBLAS_CONST blasint clda, OPENBLAS_CONST double cbeta, - double *c, OPENBLAS_CONST blasint cldc); -void cblas_cgeadd(OPENBLAS_CONST enum CBLAS_ORDER CORDER,OPENBLAS_CONST blasint crows, OPENBLAS_CONST blasint ccols, OPENBLAS_CONST float *calpha, float *a, OPENBLAS_CONST blasint clda, OPENBLAS_CONST float *cbeta, - float *c, OPENBLAS_CONST blasint cldc); -void cblas_zgeadd(OPENBLAS_CONST enum CBLAS_ORDER CORDER,OPENBLAS_CONST blasint crows, OPENBLAS_CONST blasint ccols, OPENBLAS_CONST double *calpha, double *a, OPENBLAS_CONST blasint clda, OPENBLAS_CONST double *cbeta, - double *c, OPENBLAS_CONST blasint cldc); - -/*** 
BFLOAT16 and INT8 extensions ***/ -/* convert float array to BFLOAT16 array by rounding */ -void cblas_sbstobf16(OPENBLAS_CONST blasint n, OPENBLAS_CONST float *in, OPENBLAS_CONST blasint incin, bfloat16 *out, OPENBLAS_CONST blasint incout); -/* convert double array to BFLOAT16 array by rounding */ -void cblas_sbdtobf16(OPENBLAS_CONST blasint n, OPENBLAS_CONST double *in, OPENBLAS_CONST blasint incin, bfloat16 *out, OPENBLAS_CONST blasint incout); -/* convert BFLOAT16 array to float array */ -void cblas_sbf16tos(OPENBLAS_CONST blasint n, OPENBLAS_CONST bfloat16 *in, OPENBLAS_CONST blasint incin, float *out, OPENBLAS_CONST blasint incout); -/* convert BFLOAT16 array to double array */ -void cblas_dbf16tod(OPENBLAS_CONST blasint n, OPENBLAS_CONST bfloat16 *in, OPENBLAS_CONST blasint incin, double *out, OPENBLAS_CONST blasint incout); -/* dot production of BFLOAT16 input arrays, and output as float */ -float cblas_sbdot(OPENBLAS_CONST blasint n, OPENBLAS_CONST bfloat16 *x, OPENBLAS_CONST blasint incx, OPENBLAS_CONST bfloat16 *y, OPENBLAS_CONST blasint incy); -void cblas_sbgemv(OPENBLAS_CONST enum CBLAS_ORDER order, OPENBLAS_CONST enum CBLAS_TRANSPOSE trans, OPENBLAS_CONST blasint m, OPENBLAS_CONST blasint n, OPENBLAS_CONST float alpha, OPENBLAS_CONST bfloat16 *a, OPENBLAS_CONST blasint lda, OPENBLAS_CONST bfloat16 *x, OPENBLAS_CONST blasint incx, OPENBLAS_CONST float beta, float *y, OPENBLAS_CONST blasint incy); - -void cblas_sbgemm(OPENBLAS_CONST enum CBLAS_ORDER Order, OPENBLAS_CONST enum CBLAS_TRANSPOSE TransA, OPENBLAS_CONST enum CBLAS_TRANSPOSE TransB, OPENBLAS_CONST blasint M, OPENBLAS_CONST blasint N, OPENBLAS_CONST blasint K, - OPENBLAS_CONST float alpha, OPENBLAS_CONST bfloat16 *A, OPENBLAS_CONST blasint lda, OPENBLAS_CONST bfloat16 *B, OPENBLAS_CONST blasint ldb, OPENBLAS_CONST float beta, float *C, OPENBLAS_CONST blasint ldc); -#ifdef __cplusplus -} -#endif /* __cplusplus */ - -#endif diff --git a/spaces/InpaintAI/Inpaint-Anything/third_party/lama/saicinpainting/evaluation/masks/countless/README.md b/spaces/InpaintAI/Inpaint-Anything/third_party/lama/saicinpainting/evaluation/masks/countless/README.md deleted file mode 100644 index 67335464d794776140fd0308f408608f2231309b..0000000000000000000000000000000000000000 --- a/spaces/InpaintAI/Inpaint-Anything/third_party/lama/saicinpainting/evaluation/masks/countless/README.md +++ /dev/null @@ -1,25 +0,0 @@ -[![Build Status](https://travis-ci.org/william-silversmith/countless.svg?branch=master)](https://travis-ci.org/william-silversmith/countless) - -Python COUNTLESS Downsampling -============================= - -To install: - -`pip install -r requirements.txt` - -To test: - -`python test.py` - -To benchmark countless2d: - -`python python/countless2d.py python/images/gray_segmentation.png` - -To benchmark countless3d: - -`python python/countless3d.py` - -Adjust N and the list of algorithms inside each script to modify the run parameters. - - -Python3 is slightly faster than Python2. \ No newline at end of file diff --git a/spaces/InpaintAI/Inpaint-Anything/third_party/segment-anything/demo/src/components/helpers/maskUtils.tsx b/spaces/InpaintAI/Inpaint-Anything/third_party/segment-anything/demo/src/components/helpers/maskUtils.tsx deleted file mode 100644 index 709c77e28d2f3fbe457742dcfd2dccf28923e4a5..0000000000000000000000000000000000000000 --- a/spaces/InpaintAI/Inpaint-Anything/third_party/segment-anything/demo/src/components/helpers/maskUtils.tsx +++ /dev/null @@ -1,47 +0,0 @@ -// Copyright (c) Meta Platforms, Inc. and affiliates. 
-// All rights reserved. - -// This source code is licensed under the license found in the -// LICENSE file in the root directory of this source tree. - -// Convert the onnx model mask prediction to ImageData -function arrayToImageData(input: any, width: number, height: number) { - const [r, g, b, a] = [0, 114, 189, 255]; // the masks's blue color - const arr = new Uint8ClampedArray(4 * width * height).fill(0); - for (let i = 0; i < input.length; i++) { - - // Threshold the onnx model mask prediction at 0.0 - // This is equivalent to thresholding the mask using predictor.model.mask_threshold - // in python - if (input[i] > 0.0) { - arr[4 * i + 0] = r; - arr[4 * i + 1] = g; - arr[4 * i + 2] = b; - arr[4 * i + 3] = a; - } - } - return new ImageData(arr, height, width); -} - -// Use a Canvas element to produce an image from ImageData -function imageDataToImage(imageData: ImageData) { - const canvas = imageDataToCanvas(imageData); - const image = new Image(); - image.src = canvas.toDataURL(); - return image; -} - -// Canvas elements can be created from ImageData -function imageDataToCanvas(imageData: ImageData) { - const canvas = document.createElement("canvas"); - const ctx = canvas.getContext("2d"); - canvas.width = imageData.width; - canvas.height = imageData.height; - ctx?.putImageData(imageData, 0, 0); - return canvas; -} - -// Convert the onnx model mask output to an HTMLImageElement -export function onnxMaskToImage(input: any, width: number, height: number) { - return imageDataToImage(arrayToImageData(input, width, height)); -} diff --git a/spaces/Jackflack09/diffuse-custom/diffusers/pipelines/versatile_diffusion/pipeline_versatile_diffusion_dual_guided.py b/spaces/Jackflack09/diffuse-custom/diffusers/pipelines/versatile_diffusion/pipeline_versatile_diffusion_dual_guided.py deleted file mode 100644 index 3a90ae2c7620aae03d32604eefae7b8f1f2b028f..0000000000000000000000000000000000000000 --- a/spaces/Jackflack09/diffuse-custom/diffusers/pipelines/versatile_diffusion/pipeline_versatile_diffusion_dual_guided.py +++ /dev/null @@ -1,621 +0,0 @@ -# Copyright 2022 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
- -import inspect -from typing import Callable, List, Optional, Tuple, Union - -import numpy as np -import torch -import torch.utils.checkpoint - -import PIL -from transformers import ( - CLIPFeatureExtractor, - CLIPTextModelWithProjection, - CLIPTokenizer, - CLIPVisionModelWithProjection, -) - -from ...models import AutoencoderKL, UNet2DConditionModel -from ...models.attention import DualTransformer2DModel, Transformer2DModel -from ...pipeline_utils import DiffusionPipeline, ImagePipelineOutput -from ...schedulers import DDIMScheduler, LMSDiscreteScheduler, PNDMScheduler -from ...utils import is_accelerate_available, logging -from .modeling_text_unet import UNetFlatConditionModel - - -logger = logging.get_logger(__name__) # pylint: disable=invalid-name - - -class VersatileDiffusionDualGuidedPipeline(DiffusionPipeline): - r""" - This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the - library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) - - Parameters: - vqvae ([`VQModel`]): - Vector-quantized (VQ) Model to encode and decode images to and from latent representations. - bert ([`LDMBertModel`]): - Text-encoder model based on [BERT](https://huggingface.co/docs/transformers/model_doc/bert) architecture. - tokenizer (`transformers.BertTokenizer`): - Tokenizer of class - [BertTokenizer](https://huggingface.co/docs/transformers/model_doc/bert#transformers.BertTokenizer). - unet ([`UNet2DConditionModel`]): Conditional U-Net architecture to denoise the encoded image latents. - scheduler ([`SchedulerMixin`]): - A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of - [`DDIMScheduler`], [`LMSDiscreteScheduler`], or [`PNDMScheduler`]. 
- """ - tokenizer: CLIPTokenizer - image_feature_extractor: CLIPFeatureExtractor - text_encoder: CLIPTextModelWithProjection - image_encoder: CLIPVisionModelWithProjection - image_unet: UNet2DConditionModel - text_unet: UNetFlatConditionModel - vae: AutoencoderKL - scheduler: Union[DDIMScheduler, PNDMScheduler, LMSDiscreteScheduler] - - _optional_components = ["text_unet"] - - def __init__( - self, - tokenizer: CLIPTokenizer, - image_feature_extractor: CLIPFeatureExtractor, - text_encoder: CLIPTextModelWithProjection, - image_encoder: CLIPVisionModelWithProjection, - image_unet: UNet2DConditionModel, - text_unet: UNetFlatConditionModel, - vae: AutoencoderKL, - scheduler: Union[DDIMScheduler, PNDMScheduler, LMSDiscreteScheduler], - ): - super().__init__() - self.register_modules( - tokenizer=tokenizer, - image_feature_extractor=image_feature_extractor, - text_encoder=text_encoder, - image_encoder=image_encoder, - image_unet=image_unet, - text_unet=text_unet, - vae=vae, - scheduler=scheduler, - ) - self.vae_scale_factor = 2 ** (len(self.vae.config.block_out_channels) - 1) - - if self.text_unet is not None and ( - "dual_cross_attention" not in self.image_unet.config or not self.image_unet.config.dual_cross_attention - ): - # if loading from a universal checkpoint rather than a saved dual-guided pipeline - self._convert_to_dual_attention() - - def remove_unused_weights(self): - self.register_modules(text_unet=None) - - def _convert_to_dual_attention(self): - """ - Replace image_unet's `Transformer2DModel` blocks with `DualTransformer2DModel` that contains transformer blocks - from both `image_unet` and `text_unet` - """ - for name, module in self.image_unet.named_modules(): - if isinstance(module, Transformer2DModel): - parent_name, index = name.rsplit(".", 1) - index = int(index) - - image_transformer = self.image_unet.get_submodule(parent_name)[index] - text_transformer = self.text_unet.get_submodule(parent_name)[index] - - config = image_transformer.config - dual_transformer = DualTransformer2DModel( - num_attention_heads=config.num_attention_heads, - attention_head_dim=config.attention_head_dim, - in_channels=config.in_channels, - num_layers=config.num_layers, - dropout=config.dropout, - norm_num_groups=config.norm_num_groups, - cross_attention_dim=config.cross_attention_dim, - attention_bias=config.attention_bias, - sample_size=config.sample_size, - num_vector_embeds=config.num_vector_embeds, - activation_fn=config.activation_fn, - num_embeds_ada_norm=config.num_embeds_ada_norm, - ) - dual_transformer.transformers[0] = image_transformer - dual_transformer.transformers[1] = text_transformer - - self.image_unet.get_submodule(parent_name)[index] = dual_transformer - self.image_unet.register_to_config(dual_cross_attention=True) - - def _revert_dual_attention(self): - """ - Revert the image_unet `DualTransformer2DModel` blocks back to `Transformer2DModel` with image_unet weights Call - this function if you reuse `image_unet` in another pipeline, e.g. 
`VersatileDiffusionPipeline` - """ - for name, module in self.image_unet.named_modules(): - if isinstance(module, DualTransformer2DModel): - parent_name, index = name.rsplit(".", 1) - index = int(index) - self.image_unet.get_submodule(parent_name)[index] = module.transformers[0] - - self.image_unet.register_to_config(dual_cross_attention=False) - - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.enable_attention_slicing with unet->image_unet - def enable_attention_slicing(self, slice_size: Optional[Union[str, int]] = "auto"): - r""" - Enable sliced attention computation. - - When this option is enabled, the attention module will split the input tensor in slices, to compute attention - in several steps. This is useful to save some memory in exchange for a small speed decrease. - - Args: - slice_size (`str` or `int`, *optional*, defaults to `"auto"`): - When `"auto"`, halves the input to the attention heads, so attention will be computed in two steps. If - a number is provided, uses as many slices as `attention_head_dim // slice_size`. In this case, - `attention_head_dim` must be a multiple of `slice_size`. - """ - if slice_size == "auto": - if isinstance(self.image_unet.config.attention_head_dim, int): - # half the attention head size is usually a good trade-off between - # speed and memory - slice_size = self.image_unet.config.attention_head_dim // 2 - else: - # if `attention_head_dim` is a list, take the smallest head size - slice_size = min(self.image_unet.config.attention_head_dim) - - self.image_unet.set_attention_slice(slice_size) - - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.disable_attention_slicing - def disable_attention_slicing(self): - r""" - Disable sliced attention computation. If `enable_attention_slicing` was previously invoked, this method will go - back to computing attention in one step. - """ - # set slice_size = `None` to disable `attention slicing` - self.enable_attention_slicing(None) - - def enable_sequential_cpu_offload(self, gpu_id=0): - r""" - Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, unet, - text_encoder, vae and safety checker have their state dicts saved to CPU and then are moved to a - `torch.device('meta') and loaded to GPU only when their specific submodule has its `forward` method called. - """ - if is_accelerate_available(): - from accelerate import cpu_offload - else: - raise ImportError("Please install accelerate via `pip install accelerate`") - - device = torch.device(f"cuda:{gpu_id}") - - for cpu_offloaded_model in [self.image_unet, self.text_unet, self.text_encoder, self.vae]: - if cpu_offloaded_model is not None: - cpu_offload(cpu_offloaded_model, device) - - @property - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline._execution_device with unet->image_unet - def _execution_device(self): - r""" - Returns the device on which the pipeline's models will be executed. After calling - `pipeline.enable_sequential_cpu_offload()` the execution device can only be inferred from Accelerate's module - hooks. 
- """ - if self.device != torch.device("meta") or not hasattr(self.image_unet, "_hf_hook"): - return self.device - for module in self.image_unet.modules(): - if ( - hasattr(module, "_hf_hook") - and hasattr(module._hf_hook, "execution_device") - and module._hf_hook.execution_device is not None - ): - return torch.device(module._hf_hook.execution_device) - return self.device - - def _encode_text_prompt(self, prompt, device, num_images_per_prompt, do_classifier_free_guidance): - r""" - Encodes the prompt into text encoder hidden states. - - Args: - prompt (`str` or `list(int)`): - prompt to be encoded - device: (`torch.device`): - torch device - num_images_per_prompt (`int`): - number of images that should be generated per prompt - do_classifier_free_guidance (`bool`): - whether to use classifier free guidance or not - """ - - def normalize_embeddings(encoder_output): - embeds = self.text_encoder.text_projection(encoder_output.last_hidden_state) - embeds_pooled = encoder_output.text_embeds - embeds = embeds / torch.norm(embeds_pooled.unsqueeze(1), dim=-1, keepdim=True) - return embeds - - batch_size = len(prompt) - - text_inputs = self.tokenizer( - prompt, - padding="max_length", - max_length=self.tokenizer.model_max_length, - truncation=True, - return_tensors="pt", - ) - text_input_ids = text_inputs.input_ids - untruncated_ids = self.tokenizer(prompt, padding="max_length", return_tensors="pt").input_ids - - if not torch.equal(text_input_ids, untruncated_ids): - removed_text = self.tokenizer.batch_decode(untruncated_ids[:, self.tokenizer.model_max_length - 1 : -1]) - logger.warning( - "The following part of your input was truncated because CLIP can only handle sequences up to" - f" {self.tokenizer.model_max_length} tokens: {removed_text}" - ) - - if hasattr(self.text_encoder.config, "use_attention_mask") and self.text_encoder.config.use_attention_mask: - attention_mask = text_inputs.attention_mask.to(device) - else: - attention_mask = None - - text_embeddings = self.text_encoder( - text_input_ids.to(device), - attention_mask=attention_mask, - ) - text_embeddings = normalize_embeddings(text_embeddings) - - # duplicate text embeddings for each generation per prompt, using mps friendly method - bs_embed, seq_len, _ = text_embeddings.shape - text_embeddings = text_embeddings.repeat(1, num_images_per_prompt, 1) - text_embeddings = text_embeddings.view(bs_embed * num_images_per_prompt, seq_len, -1) - - # get unconditional embeddings for classifier free guidance - if do_classifier_free_guidance: - uncond_tokens = [""] * batch_size - max_length = text_input_ids.shape[-1] - uncond_input = self.tokenizer( - uncond_tokens, - padding="max_length", - max_length=max_length, - truncation=True, - return_tensors="pt", - ) - - if hasattr(self.text_encoder.config, "use_attention_mask") and self.text_encoder.config.use_attention_mask: - attention_mask = uncond_input.attention_mask.to(device) - else: - attention_mask = None - - uncond_embeddings = self.text_encoder( - uncond_input.input_ids.to(device), - attention_mask=attention_mask, - ) - uncond_embeddings = normalize_embeddings(uncond_embeddings) - - # duplicate unconditional embeddings for each generation per prompt, using mps friendly method - seq_len = uncond_embeddings.shape[1] - uncond_embeddings = uncond_embeddings.repeat(1, num_images_per_prompt, 1) - uncond_embeddings = uncond_embeddings.view(batch_size * num_images_per_prompt, seq_len, -1) - - # For classifier free guidance, we need to do two forward passes. 
- # Here we concatenate the unconditional and text embeddings into a single batch - # to avoid doing two forward passes - text_embeddings = torch.cat([uncond_embeddings, text_embeddings]) - - return text_embeddings - - def _encode_image_prompt(self, prompt, device, num_images_per_prompt, do_classifier_free_guidance): - r""" - Encodes the prompt into text encoder hidden states. - - Args: - prompt (`str` or `list(int)`): - prompt to be encoded - device: (`torch.device`): - torch device - num_images_per_prompt (`int`): - number of images that should be generated per prompt - do_classifier_free_guidance (`bool`): - whether to use classifier free guidance or not - """ - - def normalize_embeddings(encoder_output): - embeds = self.image_encoder.vision_model.post_layernorm(encoder_output.last_hidden_state) - embeds = self.image_encoder.visual_projection(embeds) - embeds_pooled = embeds[:, 0:1] - embeds = embeds / torch.norm(embeds_pooled, dim=-1, keepdim=True) - return embeds - - batch_size = len(prompt) if isinstance(prompt, list) else 1 - - # get prompt text embeddings - image_input = self.image_feature_extractor(images=prompt, return_tensors="pt") - pixel_values = image_input.pixel_values.to(device).to(self.image_encoder.dtype) - image_embeddings = self.image_encoder(pixel_values) - image_embeddings = normalize_embeddings(image_embeddings) - - # duplicate image embeddings for each generation per prompt, using mps friendly method - bs_embed, seq_len, _ = image_embeddings.shape - image_embeddings = image_embeddings.repeat(1, num_images_per_prompt, 1) - image_embeddings = image_embeddings.view(bs_embed * num_images_per_prompt, seq_len, -1) - - # get unconditional embeddings for classifier free guidance - if do_classifier_free_guidance: - uncond_images = [np.zeros((512, 512, 3)) + 0.5] * batch_size - uncond_images = self.image_feature_extractor(images=uncond_images, return_tensors="pt") - pixel_values = uncond_images.pixel_values.to(device).to(self.image_encoder.dtype) - uncond_embeddings = self.image_encoder(pixel_values) - uncond_embeddings = normalize_embeddings(uncond_embeddings) - - # duplicate unconditional embeddings for each generation per prompt, using mps friendly method - seq_len = uncond_embeddings.shape[1] - uncond_embeddings = uncond_embeddings.repeat(1, num_images_per_prompt, 1) - uncond_embeddings = uncond_embeddings.view(batch_size * num_images_per_prompt, seq_len, -1) - - # For classifier free guidance, we need to do two forward passes. - # Here we concatenate the unconditional and conditional embeddings into a single batch - # to avoid doing two forward passes - image_embeddings = torch.cat([uncond_embeddings, image_embeddings]) - - return image_embeddings - - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.decode_latents - def decode_latents(self, latents): - latents = 1 / 0.18215 * latents - image = self.vae.decode(latents).sample - image = (image / 2 + 0.5).clamp(0, 1) - # we always cast to float32 as this does not cause significant overhead and is compatible with bfloa16 - image = image.cpu().permute(0, 2, 3, 1).float().numpy() - return image - - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.prepare_extra_step_kwargs - def prepare_extra_step_kwargs(self, generator, eta): - # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature - # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers. 
- # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502 - # and should be between [0, 1] - - accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys()) - extra_step_kwargs = {} - if accepts_eta: - extra_step_kwargs["eta"] = eta - - # check if the scheduler accepts generator - accepts_generator = "generator" in set(inspect.signature(self.scheduler.step).parameters.keys()) - if accepts_generator: - extra_step_kwargs["generator"] = generator - return extra_step_kwargs - - def check_inputs(self, prompt, image, height, width, callback_steps): - if not isinstance(prompt, str) and not isinstance(prompt, PIL.Image.Image) and not isinstance(prompt, list): - raise ValueError(f"`prompt` has to be of type `str` `PIL.Image` or `list` but is {type(prompt)}") - if not isinstance(image, str) and not isinstance(image, PIL.Image.Image) and not isinstance(image, list): - raise ValueError(f"`image` has to be of type `str` `PIL.Image` or `list` but is {type(image)}") - - if height % 8 != 0 or width % 8 != 0: - raise ValueError(f"`height` and `width` have to be divisible by 8 but are {height} and {width}.") - - if (callback_steps is None) or ( - callback_steps is not None and (not isinstance(callback_steps, int) or callback_steps <= 0) - ): - raise ValueError( - f"`callback_steps` has to be a positive integer but is {callback_steps} of type" - f" {type(callback_steps)}." - ) - - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.prepare_latents - def prepare_latents(self, batch_size, num_channels_latents, height, width, dtype, device, generator, latents=None): - shape = (batch_size, num_channels_latents, height // self.vae_scale_factor, width // self.vae_scale_factor) - if latents is None: - if device.type == "mps": - # randn does not work reproducibly on mps - latents = torch.randn(shape, generator=generator, device="cpu", dtype=dtype).to(device) - else: - latents = torch.randn(shape, generator=generator, device=device, dtype=dtype) - else: - if latents.shape != shape: - raise ValueError(f"Unexpected latents shape, got {latents.shape}, expected {shape}") - latents = latents.to(device) - - # scale the initial noise by the standard deviation required by the scheduler - latents = latents * self.scheduler.init_noise_sigma - return latents - - def set_transformer_params(self, mix_ratio: float = 0.5, condition_types: Tuple = ("text", "image")): - for name, module in self.image_unet.named_modules(): - if isinstance(module, DualTransformer2DModel): - module.mix_ratio = mix_ratio - - for i, type in enumerate(condition_types): - if type == "text": - module.condition_lengths[i] = self.text_encoder.config.max_position_embeddings - module.transformer_index_for_condition[i] = 1 # use the second (text) transformer - else: - module.condition_lengths[i] = 257 - module.transformer_index_for_condition[i] = 0 # use the first (image) transformer - - @torch.no_grad() - def __call__( - self, - prompt: Union[PIL.Image.Image, List[PIL.Image.Image]], - image: Union[str, List[str]], - text_to_image_strength: float = 0.5, - height: Optional[int] = None, - width: Optional[int] = None, - num_inference_steps: int = 50, - guidance_scale: float = 7.5, - num_images_per_prompt: Optional[int] = 1, - eta: float = 0.0, - generator: Optional[torch.Generator] = None, - latents: Optional[torch.FloatTensor] = None, - output_type: Optional[str] = "pil", - return_dict: bool = True, - callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None, - 
callback_steps: Optional[int] = 1, - **kwargs, - ): - r""" - Function invoked when calling the pipeline for generation. - - Args: - prompt (`str` or `List[str]`): - The prompt or prompts to guide the image generation. - height (`int`, *optional*, defaults to self.image_unet.config.sample_size * self.vae_scale_factor): - The height in pixels of the generated image. - width (`int`, *optional*, defaults to self.image_unet.config.sample_size * self.vae_scale_factor): - The width in pixels of the generated image. - num_inference_steps (`int`, *optional*, defaults to 50): - The number of denoising steps. More denoising steps usually lead to a higher quality image at the - expense of slower inference. - guidance_scale (`float`, *optional*, defaults to 7.5): - Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598). - `guidance_scale` is defined as `w` of equation 2. of [Imagen - Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale > - 1`. Higher guidance scale encourages to generate images that are closely linked to the text `prompt`, - usually at the expense of lower image quality. - negative_prompt (`str` or `List[str]`, *optional*): - The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored - if `guidance_scale` is less than `1`). - num_images_per_prompt (`int`, *optional*, defaults to 1): - The number of images to generate per prompt. - eta (`float`, *optional*, defaults to 0.0): - Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to - [`schedulers.DDIMScheduler`], will be ignored for others. - generator (`torch.Generator`, *optional*): - A [torch generator](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make generation - deterministic. - latents (`torch.FloatTensor`, *optional*): - Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image - generation. Can be used to tweak the same generation with different prompts. If not provided, a latents - tensor will ge generated by sampling using the supplied random `generator`. - output_type (`str`, *optional*, defaults to `"pil"`): - The output format of the generate image. Choose between - [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`. - return_dict (`bool`, *optional*, defaults to `True`): - Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a - plain tuple. - callback (`Callable`, *optional*): - A function that will be called every `callback_steps` steps during inference. The function will be - called with the following arguments: `callback(step: int, timestep: int, latents: torch.FloatTensor)`. - callback_steps (`int`, *optional*, defaults to 1): - The frequency at which the `callback` function will be called. If not specified, the callback will be - called at every step. - - Examples: - - ```py - >>> from diffusers import VersatileDiffusionDualGuidedPipeline - >>> import torch - >>> import requests - >>> from io import BytesIO - >>> from PIL import Image - - >>> # let's download an initial image - >>> url = "https://huggingface.co/datasets/diffusers/images/resolve/main/benz.jpg" - - >>> response = requests.get(url) - >>> image = Image.open(BytesIO(response.content)).convert("RGB") - >>> text = "a red car in the sun" - - >>> pipe = VersatileDiffusionDualGuidedPipeline.from_pretrained( - ... 
"shi-labs/versatile-diffusion", torch_dtype=torch.float16 - ... ) - >>> pipe.remove_unused_weights() - >>> pipe = pipe.to("cuda") - - >>> generator = torch.Generator(device="cuda").manual_seed(0) - >>> text_to_image_strength = 0.75 - - >>> image = pipe( - ... prompt=text, image=image, text_to_image_strength=text_to_image_strength, generator=generator - ... ).images[0] - >>> image.save("./car_variation.png") - ``` - - Returns: - [`~pipelines.stable_diffusion.ImagePipelineOutput`] or `tuple`: - [`~pipelines.stable_diffusion.ImagePipelineOutput`] if `return_dict` is True, otherwise a `tuple. When - returning a tuple, the first element is a list with the generated images. - """ - # 0. Default height and width to unet - height = height or self.image_unet.config.sample_size * self.vae_scale_factor - width = width or self.image_unet.config.sample_size * self.vae_scale_factor - - # 1. Check inputs. Raise error if not correct - self.check_inputs(prompt, image, height, width, callback_steps) - - # 2. Define call parameters - prompt = [prompt] if not isinstance(prompt, list) else prompt - image = [image] if not isinstance(image, list) else image - batch_size = len(prompt) - device = self._execution_device - # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2) - # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1` - # corresponds to doing no classifier free guidance. - do_classifier_free_guidance = guidance_scale > 1.0 - - # 3. Encode input prompts - text_embeddings = self._encode_text_prompt(prompt, device, num_images_per_prompt, do_classifier_free_guidance) - image_embeddings = self._encode_image_prompt(image, device, num_images_per_prompt, do_classifier_free_guidance) - dual_prompt_embeddings = torch.cat([text_embeddings, image_embeddings], dim=1) - prompt_types = ("text", "image") - - # 4. Prepare timesteps - self.scheduler.set_timesteps(num_inference_steps, device=device) - timesteps = self.scheduler.timesteps - - # 5. Prepare latent variables - num_channels_latents = self.image_unet.in_channels - latents = self.prepare_latents( - batch_size * num_images_per_prompt, - num_channels_latents, - height, - width, - dual_prompt_embeddings.dtype, - device, - generator, - latents, - ) - - # 6. Prepare extra step kwargs. - extra_step_kwargs = self.prepare_extra_step_kwargs(generator, eta) - - # 7. Combine the attention blocks of the image and text UNets - self.set_transformer_params(text_to_image_strength, prompt_types) - - # 8. Denoising loop - for i, t in enumerate(self.progress_bar(timesteps)): - # expand the latents if we are doing classifier free guidance - latent_model_input = torch.cat([latents] * 2) if do_classifier_free_guidance else latents - latent_model_input = self.scheduler.scale_model_input(latent_model_input, t) - - # predict the noise residual - noise_pred = self.image_unet(latent_model_input, t, encoder_hidden_states=dual_prompt_embeddings).sample - - # perform guidance - if do_classifier_free_guidance: - noise_pred_uncond, noise_pred_text = noise_pred.chunk(2) - noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond) - - # compute the previous noisy sample x_t -> x_t-1 - latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs).prev_sample - - # call the callback, if provided - if callback is not None and i % callback_steps == 0: - callback(i, t, latents) - - # 9. Post-processing - image = self.decode_latents(latents) - - # 10. 
Convert to PIL - if output_type == "pil": - image = self.numpy_to_pil(image) - - if not return_dict: - return (image,) - - return ImagePipelineOutput(images=image) diff --git a/spaces/Junity/TokaiTeio-SVC/inference/slicer.py b/spaces/Junity/TokaiTeio-SVC/inference/slicer.py deleted file mode 100644 index b05840bcf6bdced0b6e2adbecb1a1dd5b3dee462..0000000000000000000000000000000000000000 --- a/spaces/Junity/TokaiTeio-SVC/inference/slicer.py +++ /dev/null @@ -1,142 +0,0 @@ -import librosa -import torch -import torchaudio - - -class Slicer: - def __init__(self, - sr: int, - threshold: float = -40., - min_length: int = 5000, - min_interval: int = 300, - hop_size: int = 20, - max_sil_kept: int = 5000): - if not min_length >= min_interval >= hop_size: - raise ValueError('The following condition must be satisfied: min_length >= min_interval >= hop_size') - if not max_sil_kept >= hop_size: - raise ValueError('The following condition must be satisfied: max_sil_kept >= hop_size') - min_interval = sr * min_interval / 1000 - self.threshold = 10 ** (threshold / 20.) - self.hop_size = round(sr * hop_size / 1000) - self.win_size = min(round(min_interval), 4 * self.hop_size) - self.min_length = round(sr * min_length / 1000 / self.hop_size) - self.min_interval = round(min_interval / self.hop_size) - self.max_sil_kept = round(sr * max_sil_kept / 1000 / self.hop_size) - - def _apply_slice(self, waveform, begin, end): - if len(waveform.shape) > 1: - return waveform[:, begin * self.hop_size: min(waveform.shape[1], end * self.hop_size)] - else: - return waveform[begin * self.hop_size: min(waveform.shape[0], end * self.hop_size)] - - # @timeit - def slice(self, waveform): - if len(waveform.shape) > 1: - samples = librosa.to_mono(waveform) - else: - samples = waveform - if samples.shape[0] <= self.min_length: - return {"0": {"slice": False, "split_time": f"0,{len(waveform)}"}} - rms_list = librosa.feature.rms(y=samples, frame_length=self.win_size, hop_length=self.hop_size).squeeze(0) - sil_tags = [] - silence_start = None - clip_start = 0 - for i, rms in enumerate(rms_list): - # Keep looping while frame is silent. - if rms < self.threshold: - # Record start of silent frames. - if silence_start is None: - silence_start = i - continue - # Keep looping while frame is not silent and silence start has not been recorded. - if silence_start is None: - continue - # Clear recorded silence start if interval is not enough or clip is too short - is_leading_silence = silence_start == 0 and i > self.max_sil_kept - need_slice_middle = i - silence_start >= self.min_interval and i - clip_start >= self.min_length - if not is_leading_silence and not need_slice_middle: - silence_start = None - continue - # Need slicing. Record the range of silent frames to be removed. 
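- # Three cases, depending on how long the silent run turned out to be:
- # * at most max_sil_kept frames: keep the silence and split at the single quietest frame;
- # * at most 2 * max_sil_kept frames: split near both ends so that no more than max_sil_kept silent frames remain on either side of the cut;
- # * longer: keep at most max_sil_kept silent frames at each end and mark everything in between for removal.
- # (With the defaults of hop_size=20 ms and max_sil_kept=5000 ms, max_sil_kept works out to roughly 250 RMS frames.)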
- if i - silence_start <= self.max_sil_kept: - pos = rms_list[silence_start: i + 1].argmin() + silence_start - if silence_start == 0: - sil_tags.append((0, pos)) - else: - sil_tags.append((pos, pos)) - clip_start = pos - elif i - silence_start <= self.max_sil_kept * 2: - pos = rms_list[i - self.max_sil_kept: silence_start + self.max_sil_kept + 1].argmin() - pos += i - self.max_sil_kept - pos_l = rms_list[silence_start: silence_start + self.max_sil_kept + 1].argmin() + silence_start - pos_r = rms_list[i - self.max_sil_kept: i + 1].argmin() + i - self.max_sil_kept - if silence_start == 0: - sil_tags.append((0, pos_r)) - clip_start = pos_r - else: - sil_tags.append((min(pos_l, pos), max(pos_r, pos))) - clip_start = max(pos_r, pos) - else: - pos_l = rms_list[silence_start: silence_start + self.max_sil_kept + 1].argmin() + silence_start - pos_r = rms_list[i - self.max_sil_kept: i + 1].argmin() + i - self.max_sil_kept - if silence_start == 0: - sil_tags.append((0, pos_r)) - else: - sil_tags.append((pos_l, pos_r)) - clip_start = pos_r - silence_start = None - # Deal with trailing silence. - total_frames = rms_list.shape[0] - if silence_start is not None and total_frames - silence_start >= self.min_interval: - silence_end = min(total_frames, silence_start + self.max_sil_kept) - pos = rms_list[silence_start: silence_end + 1].argmin() + silence_start - sil_tags.append((pos, total_frames + 1)) - # Apply and return slices. - if len(sil_tags) == 0: - return {"0": {"slice": False, "split_time": f"0,{len(waveform)}"}} - else: - chunks = [] - # 第一段静音并非从头开始,补上有声片段 - if sil_tags[0][0]: - chunks.append( - {"slice": False, "split_time": f"0,{min(waveform.shape[0], sil_tags[0][0] * self.hop_size)}"}) - for i in range(0, len(sil_tags)): - # 标识有声片段(跳过第一段) - if i: - chunks.append({"slice": False, - "split_time": f"{sil_tags[i - 1][1] * self.hop_size},{min(waveform.shape[0], sil_tags[i][0] * self.hop_size)}"}) - # 标识所有静音片段 - chunks.append({"slice": True, - "split_time": f"{sil_tags[i][0] * self.hop_size},{min(waveform.shape[0], sil_tags[i][1] * self.hop_size)}"}) - # 最后一段静音并非结尾,补上结尾片段 - if sil_tags[-1][1] * self.hop_size < len(waveform): - chunks.append({"slice": False, "split_time": f"{sil_tags[-1][1] * self.hop_size},{len(waveform)}"}) - chunk_dict = {} - for i in range(len(chunks)): - chunk_dict[str(i)] = chunks[i] - return chunk_dict - - -def cut(audio_path, db_thresh=-30, min_len=5000): - audio, sr = librosa.load(audio_path, sr=None) - slicer = Slicer( - sr=sr, - threshold=db_thresh, - min_length=min_len - ) - chunks = slicer.slice(audio) - return chunks - - -def chunks2audio(audio_path, chunks): - chunks = dict(chunks) - audio, sr = torchaudio.load(audio_path) - if len(audio.shape) == 2 and audio.shape[1] >= 2: - audio = torch.mean(audio, dim=0).unsqueeze(0) - audio = audio.cpu().numpy()[0] - result = [] - for k, v in chunks.items(): - tag = v["split_time"].split(",") - if tag[0] != tag[1]: - result.append((v["slice"], audio[int(tag[0]):int(tag[1])])) - return result, sr diff --git a/spaces/Kichkinya/reverseproxynya/Dockerfile b/spaces/Kichkinya/reverseproxynya/Dockerfile deleted file mode 100644 index e6158e4b2d67eeea6e30ad3c1bb6043ec09b7b9b..0000000000000000000000000000000000000000 --- a/spaces/Kichkinya/reverseproxynya/Dockerfile +++ /dev/null @@ -1,11 +0,0 @@ -FROM node:18-bullseye-slim -RUN apt-get update && \ -apt-get install -y git -RUN git clone https://gitgud.io/khanon/oai-reverse-proxy.git /app -WORKDIR /app -RUN npm install -COPY Dockerfile greeting.md* .env* ./ -RUN npm run build -EXPOSE 7860 
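-# Hugging Face Spaces expects the app to listen on port 7860; the compiled proxy is started in production mode below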
-ENV NODE_ENV=production -CMD [ "npm", "start" ] \ No newline at end of file diff --git a/spaces/KyanChen/RSPrompter/mmdet/models/backbones/detectors_resnext.py b/spaces/KyanChen/RSPrompter/mmdet/models/backbones/detectors_resnext.py deleted file mode 100644 index 4bbd63154bb47910e27cf6a75e4b359e050063e1..0000000000000000000000000000000000000000 --- a/spaces/KyanChen/RSPrompter/mmdet/models/backbones/detectors_resnext.py +++ /dev/null @@ -1,123 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import math - -from mmcv.cnn import build_conv_layer, build_norm_layer - -from mmdet.registry import MODELS -from .detectors_resnet import Bottleneck as _Bottleneck -from .detectors_resnet import DetectoRS_ResNet - - -class Bottleneck(_Bottleneck): - expansion = 4 - - def __init__(self, - inplanes, - planes, - groups=1, - base_width=4, - base_channels=64, - **kwargs): - """Bottleneck block for ResNeXt. - - If style is "pytorch", the stride-two layer is the 3x3 conv layer, if - it is "caffe", the stride-two layer is the first 1x1 conv layer. - """ - super(Bottleneck, self).__init__(inplanes, planes, **kwargs) - - if groups == 1: - width = self.planes - else: - width = math.floor(self.planes * - (base_width / base_channels)) * groups - - self.norm1_name, norm1 = build_norm_layer( - self.norm_cfg, width, postfix=1) - self.norm2_name, norm2 = build_norm_layer( - self.norm_cfg, width, postfix=2) - self.norm3_name, norm3 = build_norm_layer( - self.norm_cfg, self.planes * self.expansion, postfix=3) - - self.conv1 = build_conv_layer( - self.conv_cfg, - self.inplanes, - width, - kernel_size=1, - stride=self.conv1_stride, - bias=False) - self.add_module(self.norm1_name, norm1) - fallback_on_stride = False - self.with_modulated_dcn = False - if self.with_dcn: - fallback_on_stride = self.dcn.pop('fallback_on_stride', False) - if self.with_sac: - self.conv2 = build_conv_layer( - self.sac, - width, - width, - kernel_size=3, - stride=self.conv2_stride, - padding=self.dilation, - dilation=self.dilation, - groups=groups, - bias=False) - elif not self.with_dcn or fallback_on_stride: - self.conv2 = build_conv_layer( - self.conv_cfg, - width, - width, - kernel_size=3, - stride=self.conv2_stride, - padding=self.dilation, - dilation=self.dilation, - groups=groups, - bias=False) - else: - assert self.conv_cfg is None, 'conv_cfg must be None for DCN' - self.conv2 = build_conv_layer( - self.dcn, - width, - width, - kernel_size=3, - stride=self.conv2_stride, - padding=self.dilation, - dilation=self.dilation, - groups=groups, - bias=False) - - self.add_module(self.norm2_name, norm2) - self.conv3 = build_conv_layer( - self.conv_cfg, - width, - self.planes * self.expansion, - kernel_size=1, - bias=False) - self.add_module(self.norm3_name, norm3) - - -@MODELS.register_module() -class DetectoRS_ResNeXt(DetectoRS_ResNet): - """ResNeXt backbone for DetectoRS. - - Args: - groups (int): The number of groups in ResNeXt. - base_width (int): The base width of ResNeXt. 
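- With ``groups > 1``, every bottleneck computes its hidden channel width as ``floor(planes * base_width / base_channels) * groups``, which is the width used by the grouped 3x3 convolution.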
- """ - - arch_settings = { - 50: (Bottleneck, (3, 4, 6, 3)), - 101: (Bottleneck, (3, 4, 23, 3)), - 152: (Bottleneck, (3, 8, 36, 3)) - } - - def __init__(self, groups=1, base_width=4, **kwargs): - self.groups = groups - self.base_width = base_width - super(DetectoRS_ResNeXt, self).__init__(**kwargs) - - def make_res_layer(self, **kwargs): - return super().make_res_layer( - groups=self.groups, - base_width=self.base_width, - base_channels=self.base_channels, - **kwargs) diff --git a/spaces/KyanChen/RSPrompter/mmpl/models/heads/cls_head.py b/spaces/KyanChen/RSPrompter/mmpl/models/heads/cls_head.py deleted file mode 100644 index 26c01ac3c61170bdc8dab2377795b1a75e3fd881..0000000000000000000000000000000000000000 --- a/spaces/KyanChen/RSPrompter/mmpl/models/heads/cls_head.py +++ /dev/null @@ -1,157 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from typing import List, Optional, Tuple, Union - -import torch -import torch.nn as nn -import torch.nn.functional as F - -from mmcls.evaluation.metrics import Accuracy -from mmcls.registry import MODELS -from mmcls.structures import ClsDataSample -from .base_head import BaseHead - - -@MODELS.register_module() -class ClsHead(BaseHead): - """Classification head. - - Args: - loss (dict): Config of classification loss. Defaults to - ``dict(type='CrossEntropyLoss', loss_weight=1.0)``. - topk (int | Tuple[int]): Top-k accuracy. Defaults to ``(1, )``. - cal_acc (bool): Whether to calculate accuracy during training. - If you use batch augmentations like Mixup and CutMix during - training, it is pointless to calculate accuracy. - Defaults to False. - init_cfg (dict, optional): the config to control the initialization. - Defaults to None. - """ - - def __init__(self, - loss: dict = dict(type='CrossEntropyLoss', loss_weight=1.0), - topk: Union[int, Tuple[int]] = (1, ), - cal_acc: bool = False, - init_cfg: Optional[dict] = None): - super(ClsHead, self).__init__(init_cfg=init_cfg) - - self.topk = topk - if not isinstance(loss, nn.Module): - loss = MODELS.build(loss) - self.loss_module = loss - self.cal_acc = cal_acc - - def pre_logits(self, feats: Tuple[torch.Tensor]) -> torch.Tensor: - """The process before the final classification head. - - The input ``feats`` is a tuple of tensor, and each tensor is the - feature of a backbone stage. In ``ClsHead``, we just obtain the feature - of the last stage. - """ - # The ClsHead doesn't have other module, just return after unpacking. - return feats[-1] - - def forward(self, feats: Tuple[torch.Tensor]) -> torch.Tensor: - """The forward process.""" - pre_logits = self.pre_logits(feats) - # The ClsHead doesn't have the final classification head, - # just return the unpacked inputs. - return pre_logits - - def loss(self, feats: Tuple[torch.Tensor], - data_samples: List[ClsDataSample], **kwargs) -> dict: - """Calculate losses from the classification score. - - Args: - feats (tuple[Tensor]): The features extracted from the backbone. - Multiple stage inputs are acceptable but only the last stage - will be used to classify. The shape of every item should be - ``(num_samples, num_classes)``. - data_samples (List[ClsDataSample]): The annotation data of - every samples. - **kwargs: Other keyword arguments to forward the loss module. 
- - Returns: - dict[str, Tensor]: a dictionary of loss components - """ - # The part can be traced by torch.fx - cls_score = self(feats) - # import pdb - # pdb.set_trace() - # The part can not be traced by torch.fx - losses = self._get_loss(cls_score, data_samples, **kwargs) - return losses - - def _get_loss(self, cls_score: torch.Tensor, - data_samples: List[ClsDataSample], **kwargs): - """Unpack data samples and compute loss.""" - # Unpack data samples and pack targets - if 'score' in data_samples[0].gt_label: - # Batch augmentation may convert labels to one-hot format scores. - target = torch.stack([i.gt_label.score for i in data_samples]) - else: - target = torch.cat([i.gt_label.label for i in data_samples]) - - # compute loss - losses = dict() - loss = self.loss_module( - cls_score, target, avg_factor=cls_score.size(0), **kwargs) - losses['loss'] = loss - - # compute accuracy - if self.cal_acc: - assert target.ndim == 1, 'If you enable batch augmentation ' \ - 'like mixup during training, `cal_acc` is pointless.' - acc = Accuracy.calculate(cls_score, target, topk=self.topk) - losses.update( - {f'accuracy_top-{k}': a - for k, a in zip(self.topk, acc)}) - - return losses - - def predict( - self, - feats: Tuple[torch.Tensor], - data_samples: List[Union[ClsDataSample, None]] = None - ) -> List[ClsDataSample]: - """Inference without augmentation. - - Args: - feats (tuple[Tensor]): The features extracted from the backbone. - Multiple stage inputs are acceptable but only the last stage - will be used to classify. The shape of every item should be - ``(num_samples, num_classes)``. - data_samples (List[ClsDataSample | None], optional): The annotation - data of every samples. If not None, set ``pred_label`` of - the input data samples. Defaults to None. - - Returns: - List[ClsDataSample]: A list of data samples which contains the - predicted results. - """ - # The part can be traced by torch.fx - cls_score = self(feats) - - # The part can not be traced by torch.fx - predictions = self._get_predictions(cls_score, data_samples) - return predictions - - def _get_predictions(self, cls_score, data_samples): - """Post-process the output of head. - - Including softmax and set ``pred_label`` of data samples. - """ - pred_scores = F.softmax(cls_score, dim=1) - pred_labels = pred_scores.argmax(dim=1, keepdim=True).detach() - - out_data_samples = [] - if data_samples is None: - data_samples = [None for _ in range(pred_scores.size(0))] - - for data_sample, score, label in zip(data_samples, pred_scores, - pred_labels): - if data_sample is None: - data_sample = ClsDataSample() - - data_sample.set_pred_score(score).set_pred_label(label) - out_data_samples.append(data_sample) - return out_data_samples diff --git a/spaces/Lamai/LAMAIGPT/autogpt/logs.py b/spaces/Lamai/LAMAIGPT/autogpt/logs.py deleted file mode 100644 index 35037404a98f7be9b7d577b625cc190ca27f4566..0000000000000000000000000000000000000000 --- a/spaces/Lamai/LAMAIGPT/autogpt/logs.py +++ /dev/null @@ -1,332 +0,0 @@ -"""Logging module for Auto-GPT.""" -import json -import logging -import os -import random -import re -import time -import traceback -from logging import LogRecord - -from colorama import Fore, Style - -from autogpt.config import Config, Singleton -from autogpt.speech import say_text - -CFG = Config() - - -class Logger(metaclass=Singleton): - """ - Logger that handle titles in different colors. 
- Outputs logs in console, activity.log, and errors.log - For console handler: simulates typing - """ - - def __init__(self): - # create log directory if it doesn't exist - this_files_dir_path = os.path.dirname(__file__) - log_dir = os.path.join(this_files_dir_path, "../logs") - if not os.path.exists(log_dir): - os.makedirs(log_dir) - - log_file = "activity.log" - error_file = "error.log" - - console_formatter = AutoGptFormatter("%(title_color)s %(message)s") - - # Create a handler for console which simulate typing - self.typing_console_handler = TypingConsoleHandler() - self.typing_console_handler.setLevel(logging.INFO) - self.typing_console_handler.setFormatter(console_formatter) - - # Create a handler for console without typing simulation - self.console_handler = ConsoleHandler() - self.console_handler.setLevel(logging.DEBUG) - self.console_handler.setFormatter(console_formatter) - - # Info handler in activity.log - self.file_handler = logging.FileHandler( - os.path.join(log_dir, log_file), "a", "utf-8" - ) - self.file_handler.setLevel(logging.DEBUG) - info_formatter = AutoGptFormatter( - "%(asctime)s %(levelname)s %(title)s %(message_no_color)s" - ) - self.file_handler.setFormatter(info_formatter) - - # Error handler error.log - error_handler = logging.FileHandler( - os.path.join(log_dir, error_file), "a", "utf-8" - ) - error_handler.setLevel(logging.ERROR) - error_formatter = AutoGptFormatter( - "%(asctime)s %(levelname)s %(module)s:%(funcName)s:%(lineno)d %(title)s" - " %(message_no_color)s" - ) - error_handler.setFormatter(error_formatter) - - self.typing_logger = logging.getLogger("TYPER") - self.typing_logger.addHandler(self.typing_console_handler) - self.typing_logger.addHandler(self.file_handler) - self.typing_logger.addHandler(error_handler) - self.typing_logger.setLevel(logging.DEBUG) - - self.logger = logging.getLogger("LOGGER") - self.logger.addHandler(self.console_handler) - self.logger.addHandler(self.file_handler) - self.logger.addHandler(error_handler) - self.logger.setLevel(logging.DEBUG) - - def typewriter_log( - self, title="", title_color="", content="", speak_text=False, level=logging.INFO - ): - if speak_text and CFG.speak_mode: - say_text(f"{title}. {content}") - - if content: - if isinstance(content, list): - content = " ".join(content) - else: - content = "" - - self.typing_logger.log( - level, content, extra={"title": title, "color": title_color} - ) - - def debug( - self, - message, - title="", - title_color="", - ): - self._log(title, title_color, message, logging.DEBUG) - - def warn( - self, - message, - title="", - title_color="", - ): - self._log(title, title_color, message, logging.WARN) - - def error(self, title, message=""): - self._log(title, Fore.RED, message, logging.ERROR) - - def _log(self, title="", title_color="", message="", level=logging.INFO): - if message: - if isinstance(message, list): - message = " ".join(message) - self.logger.log(level, message, extra={"title": title, "color": title_color}) - - def set_level(self, level): - self.logger.setLevel(level) - self.typing_logger.setLevel(level) - - def double_check(self, additionalText=None): - if not additionalText: - additionalText = ( - "Please ensure you've setup and configured everything" - " correctly. Read https://github.com/Torantulino/Auto-GPT#readme to " - "double check. You can also create a github issue or join the discord" - " and ask there!" 
- ) - - self.typewriter_log("DOUBLE CHECK CONFIGURATION", Fore.YELLOW, additionalText) - - -""" -Output stream to console using simulated typing -""" - - -class TypingConsoleHandler(logging.StreamHandler): - def emit(self, record): - min_typing_speed = 0.05 - max_typing_speed = 0.01 - - msg = self.format(record) - try: - words = msg.split() - for i, word in enumerate(words): - print(word, end="", flush=True) - if i < len(words) - 1: - print(" ", end="", flush=True) - typing_speed = random.uniform(min_typing_speed, max_typing_speed) - time.sleep(typing_speed) - # type faster after each word - min_typing_speed = min_typing_speed * 0.95 - max_typing_speed = max_typing_speed * 0.95 - print() - except Exception: - self.handleError(record) - - -class ConsoleHandler(logging.StreamHandler): - def emit(self, record) -> None: - msg = self.format(record) - try: - print(msg) - except Exception: - self.handleError(record) - - -class AutoGptFormatter(logging.Formatter): - """ - Allows to handle custom placeholders 'title_color' and 'message_no_color'. - To use this formatter, make sure to pass 'color', 'title' as log extras. - """ - - def format(self, record: LogRecord) -> str: - if hasattr(record, "color"): - record.title_color = ( - getattr(record, "color") - + getattr(record, "title") - + " " - + Style.RESET_ALL - ) - else: - record.title_color = getattr(record, "title") - if hasattr(record, "msg"): - record.message_no_color = remove_color_codes(getattr(record, "msg")) - else: - record.message_no_color = "" - return super().format(record) - - -def remove_color_codes(s: str) -> str: - ansi_escape = re.compile(r"\x1B(?:[@-Z\\-_]|\[[0-?]*[ -/]*[@-~])") - return ansi_escape.sub("", s) - - -logger = Logger() - - -def print_assistant_thoughts(ai_name, assistant_reply): - """Prints the assistant's thoughts to the console""" - from autogpt.json_utils.json_fix_llm import ( - attempt_to_fix_json_by_finding_outermost_brackets, - fix_and_parse_json, - ) - - try: - try: - # Parse and print Assistant response - assistant_reply_json = fix_and_parse_json(assistant_reply) - except json.JSONDecodeError: - logger.error("Error: Invalid JSON in assistant thoughts\n", assistant_reply) - assistant_reply_json = attempt_to_fix_json_by_finding_outermost_brackets( - assistant_reply - ) - if isinstance(assistant_reply_json, str): - assistant_reply_json = fix_and_parse_json(assistant_reply_json) - - # Check if assistant_reply_json is a string and attempt to parse - # it into a JSON object - if isinstance(assistant_reply_json, str): - try: - assistant_reply_json = json.loads(assistant_reply_json) - except json.JSONDecodeError: - logger.error("Error: Invalid JSON\n", assistant_reply) - assistant_reply_json = ( - attempt_to_fix_json_by_finding_outermost_brackets( - assistant_reply_json - ) - ) - - assistant_thoughts_reasoning = None - assistant_thoughts_plan = None - assistant_thoughts_speak = None - assistant_thoughts_criticism = None - if not isinstance(assistant_reply_json, dict): - assistant_reply_json = {} - assistant_thoughts = assistant_reply_json.get("thoughts", {}) - assistant_thoughts_text = assistant_thoughts.get("text") - - if assistant_thoughts: - assistant_thoughts_reasoning = assistant_thoughts.get("reasoning") - assistant_thoughts_plan = assistant_thoughts.get("plan") - assistant_thoughts_criticism = assistant_thoughts.get("criticism") - assistant_thoughts_speak = assistant_thoughts.get("speak") - - logger.typewriter_log( - f"{ai_name.upper()} THOUGHTS:", Fore.YELLOW, f"{assistant_thoughts_text}" - ) - 
logger.typewriter_log( - "REASONING:", Fore.YELLOW, f"{assistant_thoughts_reasoning}" - ) - - if assistant_thoughts_plan: - logger.typewriter_log("PLAN:", Fore.YELLOW, "") - # If it's a list, join it into a string - if isinstance(assistant_thoughts_plan, list): - assistant_thoughts_plan = "\n".join(assistant_thoughts_plan) - elif isinstance(assistant_thoughts_plan, dict): - assistant_thoughts_plan = str(assistant_thoughts_plan) - - # Split the input_string using the newline character and dashes - lines = assistant_thoughts_plan.split("\n") - for line in lines: - line = line.lstrip("- ") - logger.typewriter_log("- ", Fore.GREEN, line.strip()) - - logger.typewriter_log( - "CRITICISM:", Fore.YELLOW, f"{assistant_thoughts_criticism}" - ) - # Speak the assistant's thoughts - if CFG.speak_mode and assistant_thoughts_speak: - say_text(assistant_thoughts_speak) - else: - logger.typewriter_log("SPEAK:", Fore.YELLOW, f"{assistant_thoughts_speak}") - - return assistant_reply_json - except json.decoder.JSONDecodeError: - logger.error("Error: Invalid JSON\n", assistant_reply) - if CFG.speak_mode: - say_text( - "I have received an invalid JSON response from the OpenAI API." - " I cannot ignore this response." - ) - - # All other errors, return "Error: + error message" - except Exception: - call_stack = traceback.format_exc() - logger.error("Error: \n", call_stack) - - -def print_assistant_thoughts( - ai_name: object, assistant_reply_json_valid: object -) -> None: - assistant_thoughts_reasoning = None - assistant_thoughts_plan = None - assistant_thoughts_speak = None - assistant_thoughts_criticism = None - - assistant_thoughts = assistant_reply_json_valid.get("thoughts", {}) - assistant_thoughts_text = assistant_thoughts.get("text") - if assistant_thoughts: - assistant_thoughts_reasoning = assistant_thoughts.get("reasoning") - assistant_thoughts_plan = assistant_thoughts.get("plan") - assistant_thoughts_criticism = assistant_thoughts.get("criticism") - assistant_thoughts_speak = assistant_thoughts.get("speak") - logger.typewriter_log( - f"{ai_name.upper()} THOUGHTS:", Fore.YELLOW, f"{assistant_thoughts_text}" - ) - logger.typewriter_log("REASONING:", Fore.YELLOW, f"{assistant_thoughts_reasoning}") - if assistant_thoughts_plan: - logger.typewriter_log("PLAN:", Fore.YELLOW, "") - # If it's a list, join it into a string - if isinstance(assistant_thoughts_plan, list): - assistant_thoughts_plan = "\n".join(assistant_thoughts_plan) - elif isinstance(assistant_thoughts_plan, dict): - assistant_thoughts_plan = str(assistant_thoughts_plan) - - # Split the input_string using the newline character and dashes - lines = assistant_thoughts_plan.split("\n") - for line in lines: - line = line.lstrip("- ") - logger.typewriter_log("- ", Fore.GREEN, line.strip()) - logger.typewriter_log("CRITICISM:", Fore.YELLOW, f"{assistant_thoughts_criticism}") - # Speak the assistant's thoughts - if CFG.speak_mode and assistant_thoughts_speak: - say_text(assistant_thoughts_speak) diff --git a/spaces/LightSY/W2L-TD/facelib/detection/yolov5face/models/experimental.py b/spaces/LightSY/W2L-TD/facelib/detection/yolov5face/models/experimental.py deleted file mode 100644 index 37ba4c4420789c92dc0e2aaeb3d5b64859ec728c..0000000000000000000000000000000000000000 --- a/spaces/LightSY/W2L-TD/facelib/detection/yolov5face/models/experimental.py +++ /dev/null @@ -1,45 +0,0 @@ -# # This file contains experimental modules - -import numpy as np -import torch -from torch import nn - -from facelib.detection.yolov5face.models.common import Conv - - -class 
CrossConv(nn.Module): - # Cross Convolution Downsample - def __init__(self, c1, c2, k=3, s=1, g=1, e=1.0, shortcut=False): - # ch_in, ch_out, kernel, stride, groups, expansion, shortcut - super().__init__() - c_ = int(c2 * e) # hidden channels - self.cv1 = Conv(c1, c_, (1, k), (1, s)) - self.cv2 = Conv(c_, c2, (k, 1), (s, 1), g=g) - self.add = shortcut and c1 == c2 - - def forward(self, x): - return x + self.cv2(self.cv1(x)) if self.add else self.cv2(self.cv1(x)) - - -class MixConv2d(nn.Module): - # Mixed Depthwise Conv https://arxiv.org/abs/1907.09595 - def __init__(self, c1, c2, k=(1, 3), s=1, equal_ch=True): - super().__init__() - groups = len(k) - if equal_ch: # equal c_ per group - i = torch.linspace(0, groups - 1e-6, c2).floor() # c2 indices - c_ = [(i == g).sum() for g in range(groups)] # intermediate channels - else: # equal weight.numel() per group - b = [c2] + [0] * groups - a = np.eye(groups + 1, groups, k=-1) - a -= np.roll(a, 1, axis=1) - a *= np.array(k) ** 2 - a[0] = 1 - c_ = np.linalg.lstsq(a, b, rcond=None)[0].round() # solve for equal weight indices, ax = b - - self.m = nn.ModuleList([nn.Conv2d(c1, int(c_[g]), k[g], s, k[g] // 2, bias=False) for g in range(groups)]) - self.bn = nn.BatchNorm2d(c2) - self.act = nn.LeakyReLU(0.1, inplace=True) - - def forward(self, x): - return x + self.act(self.bn(torch.cat([m(x) for m in self.m], 1))) diff --git a/spaces/Linkthat/IntentClassification/README.md b/spaces/Linkthat/IntentClassification/README.md deleted file mode 100644 index 4fbdd84f17c42dee284823ee35f2114549001104..0000000000000000000000000000000000000000 --- a/spaces/Linkthat/IntentClassification/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Intentclassification -emoji: 😻 -colorFrom: red -colorTo: indigo -sdk: gradio -sdk_version: 3.23.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Liu-LAB/GPT-academic/request_llm/edge_gpt.py b/spaces/Liu-LAB/GPT-academic/request_llm/edge_gpt.py deleted file mode 100644 index bbf84000d84a42de80d3c051a24f06336af76aaf..0000000000000000000000000000000000000000 --- a/spaces/Liu-LAB/GPT-academic/request_llm/edge_gpt.py +++ /dev/null @@ -1,409 +0,0 @@ -""" -======================================================================== -第一部分:来自EdgeGPT.py -https://github.com/acheong08/EdgeGPT -======================================================================== -""" - -import argparse -import asyncio -import json -import os -import random -import re -import ssl -import sys -import uuid -from enum import Enum -from typing import Generator -from typing import Literal -from typing import Optional -from typing import Union -import websockets.client as websockets - -DELIMITER = "\x1e" - - -# Generate random IP between range 13.104.0.0/14 -FORWARDED_IP = ( - f"13.{random.randint(104, 107)}.{random.randint(0, 255)}.{random.randint(0, 255)}" -) - -HEADERS = { - "accept": "application/json", - "accept-language": "en-US,en;q=0.9", - "content-type": "application/json", - "sec-ch-ua": '"Not_A Brand";v="99", "Microsoft Edge";v="110", "Chromium";v="110"', - "sec-ch-ua-arch": '"x86"', - "sec-ch-ua-bitness": '"64"', - "sec-ch-ua-full-version": '"109.0.1518.78"', - "sec-ch-ua-full-version-list": '"Chromium";v="110.0.5481.192", "Not A(Brand";v="24.0.0.0", "Microsoft Edge";v="110.0.1587.69"', - "sec-ch-ua-mobile": "?0", - "sec-ch-ua-model": "", - "sec-ch-ua-platform": '"Windows"', - "sec-ch-ua-platform-version": '"15.0.0"', - "sec-fetch-dest": 
"empty", - "sec-fetch-mode": "cors", - "sec-fetch-site": "same-origin", - "x-ms-client-request-id": str(uuid.uuid4()), - "x-ms-useragent": "azsdk-js-api-client-factory/1.0.0-beta.1 core-rest-pipeline/1.10.0 OS/Win32", - "Referer": "https://www.bing.com/search?q=Bing+AI&showconv=1&FORM=hpcodx", - "Referrer-Policy": "origin-when-cross-origin", - "x-forwarded-for": FORWARDED_IP, -} - -HEADERS_INIT_CONVER = { - "authority": "edgeservices.bing.com", - "accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.7", - "accept-language": "en-US,en;q=0.9", - "cache-control": "max-age=0", - "sec-ch-ua": '"Chromium";v="110", "Not A(Brand";v="24", "Microsoft Edge";v="110"', - "sec-ch-ua-arch": '"x86"', - "sec-ch-ua-bitness": '"64"', - "sec-ch-ua-full-version": '"110.0.1587.69"', - "sec-ch-ua-full-version-list": '"Chromium";v="110.0.5481.192", "Not A(Brand";v="24.0.0.0", "Microsoft Edge";v="110.0.1587.69"', - "sec-ch-ua-mobile": "?0", - "sec-ch-ua-model": '""', - "sec-ch-ua-platform": '"Windows"', - "sec-ch-ua-platform-version": '"15.0.0"', - "sec-fetch-dest": "document", - "sec-fetch-mode": "navigate", - "sec-fetch-site": "none", - "sec-fetch-user": "?1", - "upgrade-insecure-requests": "1", - "user-agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36 Edg/110.0.1587.69", - "x-edge-shopping-flag": "1", - "x-forwarded-for": FORWARDED_IP, -} - -def get_ssl_context(): - import certifi - ssl_context = ssl.create_default_context() - ssl_context.load_verify_locations(certifi.where()) - return ssl_context - - - -class NotAllowedToAccess(Exception): - pass - - -class ConversationStyle(Enum): - creative = "h3imaginative,clgalileo,gencontentv3" - balanced = "galileo" - precise = "h3precise,clgalileo" - - -CONVERSATION_STYLE_TYPE = Optional[ - Union[ConversationStyle, Literal["creative", "balanced", "precise"]] -] - - -def _append_identifier(msg: dict) -> str: - """ - Appends special character to end of message to identify end of message - """ - # Convert dict to json string - return json.dumps(msg) + DELIMITER - - -def _get_ran_hex(length: int = 32) -> str: - """ - Returns random hex string - """ - return "".join(random.choice("0123456789abcdef") for _ in range(length)) - - -class _ChatHubRequest: - """ - Request object for ChatHub - """ - - def __init__( - self, - conversation_signature: str, - client_id: str, - conversation_id: str, - invocation_id: int = 0, - ) -> None: - self.struct: dict = {} - - self.client_id: str = client_id - self.conversation_id: str = conversation_id - self.conversation_signature: str = conversation_signature - self.invocation_id: int = invocation_id - - def update( - self, - prompt, - conversation_style, - options, - ) -> None: - """ - Updates request object - """ - if options is None: - options = [ - "deepleo", - "enable_debug_commands", - "disable_emoji_spoken_text", - "enablemm", - ] - if conversation_style: - if not isinstance(conversation_style, ConversationStyle): - conversation_style = getattr(ConversationStyle, conversation_style) - options = [ - "nlu_direct_response_filter", - "deepleo", - "disable_emoji_spoken_text", - "responsible_ai_policy_235", - "enablemm", - conversation_style.value, - "dtappid", - "cricinfo", - "cricinfov2", - "dv3sugg", - ] - self.struct = { - "arguments": [ - { - "source": "cib", - "optionsSets": options, - "sliceIds": [ - "222dtappid", - "225cricinfo", - "224locals0", - ], - "traceId": _get_ran_hex(32), - 
"isStartOfSession": self.invocation_id == 0, - "message": { - "author": "user", - "inputMethod": "Keyboard", - "text": prompt, - "messageType": "Chat", - }, - "conversationSignature": self.conversation_signature, - "participant": { - "id": self.client_id, - }, - "conversationId": self.conversation_id, - }, - ], - "invocationId": str(self.invocation_id), - "target": "chat", - "type": 4, - } - self.invocation_id += 1 - - -class _Conversation: - """ - Conversation API - """ - - def __init__( - self, - cookies, - proxy, - ) -> None: - self.struct: dict = { - "conversationId": None, - "clientId": None, - "conversationSignature": None, - "result": {"value": "Success", "message": None}, - } - import httpx - self.proxy = proxy - proxy = ( - proxy - or os.environ.get("all_proxy") - or os.environ.get("ALL_PROXY") - or os.environ.get("https_proxy") - or os.environ.get("HTTPS_PROXY") - or None - ) - if proxy is not None and proxy.startswith("socks5h://"): - proxy = "socks5://" + proxy[len("socks5h://") :] - self.session = httpx.Client( - proxies=proxy, - timeout=30, - headers=HEADERS_INIT_CONVER, - ) - for cookie in cookies: - self.session.cookies.set(cookie["name"], cookie["value"]) - - # Send GET request - response = self.session.get( - url=os.environ.get("BING_PROXY_URL") - or "https://edgeservices.bing.com/edgesvc/turing/conversation/create", - ) - if response.status_code != 200: - response = self.session.get( - "https://edge.churchless.tech/edgesvc/turing/conversation/create", - ) - if response.status_code != 200: - print(f"Status code: {response.status_code}") - print(response.text) - print(response.url) - raise Exception("Authentication failed") - try: - self.struct = response.json() - except (json.decoder.JSONDecodeError, NotAllowedToAccess) as exc: - raise Exception( - "Authentication failed. 
You have not been accepted into the beta.", - ) from exc - if self.struct["result"]["value"] == "UnauthorizedRequest": - raise NotAllowedToAccess(self.struct["result"]["message"]) - - -class _ChatHub: - """ - Chat API - """ - - def __init__(self, conversation) -> None: - self.wss = None - self.request: _ChatHubRequest - self.loop: bool - self.task: asyncio.Task - print(conversation.struct) - self.request = _ChatHubRequest( - conversation_signature=conversation.struct["conversationSignature"], - client_id=conversation.struct["clientId"], - conversation_id=conversation.struct["conversationId"], - ) - - async def ask_stream( - self, - prompt: str, - wss_link: str, - conversation_style: CONVERSATION_STYLE_TYPE = None, - raw: bool = False, - options: dict = None, - ) -> Generator[str, None, None]: - """ - Ask a question to the bot - """ - if self.wss and not self.wss.closed: - await self.wss.close() - # Check if websocket is closed - self.wss = await websockets.connect( - wss_link, - extra_headers=HEADERS, - max_size=None, - ssl=get_ssl_context() - ) - await self._initial_handshake() - # Construct a ChatHub request - self.request.update( - prompt=prompt, - conversation_style=conversation_style, - options=options, - ) - # Send request - await self.wss.send(_append_identifier(self.request.struct)) - final = False - while not final: - objects = str(await self.wss.recv()).split(DELIMITER) - for obj in objects: - if obj is None or not obj: - continue - response = json.loads(obj) - if response.get("type") != 2 and raw: - yield False, response - elif response.get("type") == 1 and response["arguments"][0].get( - "messages", - ): - resp_txt = response["arguments"][0]["messages"][0]["adaptiveCards"][ - 0 - ]["body"][0].get("text") - yield False, resp_txt - elif response.get("type") == 2: - final = True - yield True, response - - async def _initial_handshake(self) -> None: - await self.wss.send(_append_identifier({"protocol": "json", "version": 1})) - await self.wss.recv() - - async def close(self) -> None: - """ - Close the connection - """ - if self.wss and not self.wss.closed: - await self.wss.close() - - -class NewbingChatbot: - """ - Combines everything to make it seamless - """ - - def __init__( - self, - cookies, - proxy - ) -> None: - if cookies is None: - cookies = {} - self.cookies = cookies - self.proxy = proxy - self.chat_hub: _ChatHub = _ChatHub( - _Conversation(self.cookies, self.proxy), - ) - - async def ask( - self, - prompt: str, - wss_link: str, - conversation_style: CONVERSATION_STYLE_TYPE = None, - options: dict = None, - ) -> dict: - """ - Ask a question to the bot - """ - async for final, response in self.chat_hub.ask_stream( - prompt=prompt, - conversation_style=conversation_style, - wss_link=wss_link, - options=options, - ): - if final: - return response - await self.chat_hub.wss.close() - return None - - async def ask_stream( - self, - prompt: str, - wss_link: str, - conversation_style: CONVERSATION_STYLE_TYPE = None, - raw: bool = False, - options: dict = None, - ) -> Generator[str, None, None]: - """ - Ask a question to the bot - """ - async for response in self.chat_hub.ask_stream( - prompt=prompt, - conversation_style=conversation_style, - wss_link=wss_link, - raw=raw, - options=options, - ): - yield response - - async def close(self) -> None: - """ - Close the connection - """ - await self.chat_hub.close() - - async def reset(self) -> None: - """ - Reset the conversation - """ - await self.close() - self.chat_hub = _ChatHub(_Conversation(self.cookies, self.proxy)) - - diff 
--git a/spaces/LuxOAI/ChatGpt-Web/app/locales/en.ts b/spaces/LuxOAI/ChatGpt-Web/app/locales/en.ts deleted file mode 100644 index 58a8f773f1e4403ecaa7f0d4f4fc587ddcadff8d..0000000000000000000000000000000000000000 --- a/spaces/LuxOAI/ChatGpt-Web/app/locales/en.ts +++ /dev/null @@ -1,243 +0,0 @@ -import { SubmitKey } from "../store/config"; -import type { LocaleType } from "./index"; - -const en: LocaleType = { - WIP: "Coming Soon...", - Error: { - Unauthorized: - "Unauthorized access, please enter access code in settings page.", - }, - ChatItem: { - ChatItemCount: (count: number) => `${count} messages`, - }, - Chat: { - SubTitle: (count: number) => `${count} messages with ChatGPT`, - Actions: { - ChatList: "Go To Chat List", - CompressedHistory: "Compressed History Memory Prompt", - Export: "Export All Messages as Markdown", - Copy: "Copy", - Stop: "Stop", - Retry: "Retry", - Delete: "Delete", - }, - Rename: "Rename Chat", - Typing: "Typing…", - Input: (submitKey: string) => { - var inputHints = `${submitKey} to send`; - if (submitKey === String(SubmitKey.Enter)) { - inputHints += ", Shift + Enter to wrap"; - } - return inputHints + ", / to search prompts"; - }, - Send: "Send", - Config: { - Reset: "Reset to Default", - SaveAs: "Save as Mask", - }, - }, - Export: { - Title: "All Messages", - Copy: "Copy All", - Download: "Download", - MessageFromYou: "Message From You", - MessageFromChatGPT: "Message From ChatGPT", - }, - Memory: { - Title: "Memory Prompt", - EmptyContent: "Nothing yet.", - Send: "Send Memory", - Copy: "Copy Memory", - Reset: "Reset Session", - ResetConfirm: - "Resetting will clear the current conversation history and historical memory. Are you sure you want to reset?", - }, - Home: { - NewChat: "New Chat", - DeleteChat: "Confirm to delete the selected conversation?", - DeleteToast: "Chat Deleted", - Revert: "Revert", - }, - Settings: { - Title: "Settings", - SubTitle: "All Settings", - Actions: { - ClearAll: "Clear All Data", - ResetAll: "Reset All Settings", - Close: "Close", - ConfirmResetAll: "Are you sure you want to reset all configurations?", - ConfirmClearAll: "Are you sure you want to reset all data?", - }, - Lang: { - Name: "Language", // ATTENTION: if you wanna add a new translation, please do not translate this value, leave it as `Language` - All: "All Languages", - Options: { - cn: "简体中文", - en: "English", - tw: "繁體中文", - es: "Español", - it: "Italiano", - tr: "Türkçe", - jp: "日本語", - de: "Deutsch", - }, - }, - Avatar: "Avatar", - FontSize: { - Title: "Font Size", - SubTitle: "Adjust font size of chat content", - }, - Update: { - Version: (x: string) => `Version: ${x}`, - IsLatest: "Latest version", - CheckUpdate: "Check Update", - IsChecking: "Checking update...", - FoundUpdate: (x: string) => `Found new version: ${x}`, - GoToUpdate: "Update", - }, - SendKey: "Send Key", - Theme: "Theme", - TightBorder: "Tight Border", - SendPreviewBubble: { - Title: "Send Preview Bubble", - SubTitle: "Preview markdown in bubble", - }, - Mask: { - Title: "Mask Splash Screen", - SubTitle: "Show a mask splash screen before starting new chat", - }, - Prompt: { - Disable: { - Title: "Disable auto-completion", - SubTitle: "Input / to trigger auto-completion", - }, - List: "Prompt List", - ListCount: (builtin: number, custom: number) => - `${builtin} built-in, ${custom} user-defined`, - Edit: "Edit", - Modal: { - Title: "Prompt List", - Add: "Add One", - Search: "Search Prompts", - }, - EditModal: { - Title: "Edit Prompt", - }, - }, - HistoryCount: { - Title: "Attached Messages 
Count", - SubTitle: "Number of sent messages attached per request", - }, - CompressThreshold: { - Title: "History Compression Threshold", - SubTitle: - "Will compress if uncompressed messages length exceeds the value", - }, - Token: { - Title: "API Key", - SubTitle: "Use your key to ignore access code limit", - Placeholder: "OpenAI API Key", - }, - Usage: { - Title: "Account Balance", - SubTitle(used: any, total: any) { - return `Used this month $${used}, subscription $${total}`; - }, - IsChecking: "Checking...", - Check: "Check", - NoAccess: "Enter API Key to check balance", - }, - AccessCode: { - Title: "Access Code", - SubTitle: "Access control enabled", - Placeholder: "Need Access Code", - }, - Bot: "AI Vendors (bot)", - Model: "Model", - Temperature: { - Title: "Temperature", - SubTitle: "A larger value makes the more random output", - }, - MaxTokens: { - Title: "Max Tokens", - SubTitle: "Maximum length of input tokens and generated tokens", - }, - PresencePenlty: { - Title: "Presence Penalty", - SubTitle: - "A larger value increases the likelihood to talk about new topics", - }, - }, - Store: { - DefaultTopic: "New Conversation", - BotHello: "Hello! How can I assist you today?", - Error: "Something went wrong, please try again later.", - Prompt: { - History: (content: string) => - "This is a summary of the chat history between the AI and the user as a recap: " + - content, - Topic: - "Please generate a four to five word title summarizing our conversation without any lead-in, punctuation, quotation marks, periods, symbols, or additional text. Remove enclosing quotation marks.", - Summarize: - "Summarize our discussion briefly in 200 words or less to use as a prompt for future context.", - }, - }, - Copy: { - Success: "Copied to clipboard", - Failed: "Copy failed, please grant permission to access clipboard", - }, - Context: { - Toast: (x: any) => `With ${x} contextual prompts`, - Edit: "Contextual and Memory Prompts", - Add: "Add a Prompt", - }, - Plugin: { - Name: "Plugin", - }, - Mask: { - Name: "Mask", - Page: { - Title: "Prompt Template", - SubTitle: (count: number) => `${count} prompt templates`, - Search: "Search Templates", - Create: "Create", - }, - Item: { - Info: (count: number) => `${count} prompts`, - Chat: "Chat", - View: "View", - Edit: "Edit", - Delete: "Delete", - DeleteConfirm: "Confirm to delete?", - }, - EditModal: { - Title: (readonly: boolean) => - `Edit Prompt Template ${readonly ? 
"(readonly)" : ""}`, - Download: "Download", - Clone: "Clone", - }, - Config: { - Avatar: "Bot Avatar", - Name: "Bot Name", - }, - }, - NewChat: { - Return: "Return", - Skip: "Skip", - Title: "Pick a Mask", - SubTitle: "Chat with the Soul behind the Mask", - More: "Find More", - NotShow: "Not Show Again", - ConfirmNoShow: "Confirm to disable?You can enable it in settings later.", - }, - - UI: { - Confirm: "Confirm", - Cancel: "Cancel", - Close: "Close", - Create: "Create", - Edit: "Edit", - }, -}; - -export default en; diff --git a/spaces/Lykon/NeverEnding-Dream-webui/README.md b/spaces/Lykon/NeverEnding-Dream-webui/README.md deleted file mode 100644 index e93accb34ea67445e3dbfae7a6648f91d2f1ece2..0000000000000000000000000000000000000000 --- a/spaces/Lykon/NeverEnding-Dream-webui/README.md +++ /dev/null @@ -1,20 +0,0 @@ ---- -title: NeverEnding Dream Web UI -emoji: 🚧 -colorFrom: black -colorTo: red -sdk: gradio -sdk_version: 3.9 -app_file: app.py -pinned: false -duplicated_from: Lykon/DreamShaper-webui ---- - -## Stable Diffusion Web UI -[https://github.com/AUTOMATIC1111/stable-diffusion-webui](https://github.com/AUTOMATIC1111/stable-diffusion-webui) - -## Documentation -[https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki) - -## Models License -https://huggingface.co/spaces/CompVis/stable-diffusion-license \ No newline at end of file diff --git a/spaces/MSLAB/PaperGPT/src/suggest.py b/spaces/MSLAB/PaperGPT/src/suggest.py deleted file mode 100644 index 6f214acd067ac9b48024103c10b46dac8e762eab..0000000000000000000000000000000000000000 --- a/spaces/MSLAB/PaperGPT/src/suggest.py +++ /dev/null @@ -1,181 +0,0 @@ -import requests -import logging -import json -import tiktoken -import gradio as gr -from typing import Any, List -from langchain.schema import Document -from langchain.document_loaders import UnstructuredPDFLoader -from langchain.text_splitter import RecursiveCharacterTextSplitter - -from utils import json_validator, fetch_chat - - -class LatexTextSplitter(RecursiveCharacterTextSplitter): - """Attempts to split the text along Latex-formatted layout elements.""" - - def __init__(self, **kwargs: Any): - """Initialize a LatexTextSplitter.""" - separators = [ - # First, try to split along Latex sections - "\chapter{", - "\section{", - "\subsection{", - "\subsubsection{", - - # Now split by environments - "\begin{" - # "\n\\begin{enumerate}", - # "\n\\begin{itemize}", - # "\n\\begin{description}", - # "\n\\begin{list}", - # "\n\\begin{quote}", - # "\n\\begin{quotation}", - # "\n\\begin{verse}", - # "\n\\begin{verbatim}", - - ## Now split by math environments - # "\n\\begin{align}", - # "$$", - # "$", - - # Now split by the normal type of lines - " ", - "", - ] - super().__init__(separators=separators, **kwargs) - - -class Suggest(): - - def __init__(self, max_ideas: int, model: str = "gpt-3.5-turbo"): - self.max_ideas = max_ideas - self.encoder = tiktoken.encoding_for_model(model) - self.model = model - self.idea_list = [] - with open("./sample/sample.tex", "r") as f: - self.sample_content = f.read() - - def split_chunk(self, latex_whole_document: str, chunk_size: int = 2000, retry: int = 5) -> List[Document]: - - chunk_size = min(chunk_size, len(latex_whole_document)) - - for _ in range(retry): - try: - latex_splitter = LatexTextSplitter( - chunk_size=chunk_size, - chunk_overlap=0, - ) - docs = latex_splitter.create_documents([latex_whole_document]) - return docs - except: - chunk_size = chunk_size // 2 - - raise 
Exception("Latex document split check failed.") - - def analyze(self, latex_whole_document: str, openai_key: str, progress: gr.Progress): - - logging.info("start analysis") - docs = self.split_chunk(latex_whole_document) - progress(0.05) - - output_format = """ - - ```json - [ - \\ Potential point for improvement 1 - {{ - "title": string \\ What this modification is about - "thought": string \\ The reason why this should be improved - "action": string \\ how to make improvement - "original": string \\ the original latex snippet that can be improved - "improved": string \\ the improved latex snippet which address your point - }}, - {{}} - ] - ``` - """ - - ideas = [] - for doc in progress.tqdm(docs): - - prompt = f""" - I'm a computer science student. - You are my editor. - Your goal is to improve my paper quality at your best. - - - ``` - {doc.page_content} - ``` - The above is a segment of my research paper. If the end of the segment is not complete, just ignore it. - Point out the parts that can be improved. - Focus on grammar, writing, content, section structure. - Ignore comments and those that are outside the document environment. - List out all the points with a latex snippet which is the improved version addressing your point. - Same paragraph should be only address once. - Output the response in the following valid json format: - {output_format} - - """ - - idea = fetch_chat(prompt, openai_key, model=self.model) - idea = json_validator(idea, openai_key) - if isinstance(idea, list): - ideas += idea - if len(ideas) >= self.max_ideas: - break - else: - # raise gr.Error(idea) - continue - - if not ideas: - raise gr.Error('No suggestions generated.') - - logging.info('complete analysis') - return ideas - - def read_file(self, f: str): - if f is None: - return "" - elif f.name.endswith('pdf'): - loader = UnstructuredPDFLoader(f.name) - pages = loader.load_and_split() - return "\n".join([p.page_content for p in pages]) - elif f.name.endswith('tex'): - with open(f.name, "r") as f: - return f.read() - else: - return "Only support .tex & .pdf" - - def generate(self, txt: str, openai_key: str, progress=gr.Progress()): - - if not openai_key: - raise gr.Error("Please provide openai key !") - - try: - idea_list = self.analyze(txt, openai_key, progress) - self.idea_list = idea_list - k = min(len(idea_list), self.max_ideas) - - idea_buttons = [ - gr.Button.update(visible=True, value=i['title']) - for e, i in enumerate(idea_list[:self.max_ideas]) - ] - idea_buttons += [ - gr.Button.update(visible=False) - ] * (self.max_ideas - len(idea_buttons)) - - idea_details = [ - gr.Textbox.update(value="", label="thought", visible=True), - gr.Textbox.update(value="", label="action", visible=True), - gr.Textbox.update(value="", label="original", visible=True, max_lines=5, lines=5), - gr.Textbox.update(value="", label="improved", visible=True, max_lines=5, lines=5), - ] - - return [ - gr.Textbox.update("Suggestions", interactive=False, show_label=False), - gr.Button.update(visible=True, value="Analyze") - ] + idea_details + idea_buttons - except Exception as e: - raise gr.Error(str(e)) \ No newline at end of file diff --git a/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Make-A-Protagonist/experts/GroundedSAM/segment_anything/segment_anything/utils/amg.py b/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Make-A-Protagonist/experts/GroundedSAM/segment_anything/segment_anything/utils/amg.py deleted file mode 100644 index 
3a137778e45c464c079658ecb87ec53270e789f7..0000000000000000000000000000000000000000 --- a/spaces/Make-A-Protagonist/Make-A-Protagonist-inference/Make-A-Protagonist/experts/GroundedSAM/segment_anything/segment_anything/utils/amg.py +++ /dev/null @@ -1,346 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. - -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import numpy as np -import torch - -import math -from copy import deepcopy -from itertools import product -from typing import Any, Dict, Generator, ItemsView, List, Tuple - - -class MaskData: - """ - A structure for storing masks and their related data in batched format. - Implements basic filtering and concatenation. - """ - - def __init__(self, **kwargs) -> None: - for v in kwargs.values(): - assert isinstance( - v, (list, np.ndarray, torch.Tensor) - ), "MaskData only supports list, numpy arrays, and torch tensors." - self._stats = dict(**kwargs) - - def __setitem__(self, key: str, item: Any) -> None: - assert isinstance( - item, (list, np.ndarray, torch.Tensor) - ), "MaskData only supports list, numpy arrays, and torch tensors." - self._stats[key] = item - - def __delitem__(self, key: str) -> None: - del self._stats[key] - - def __getitem__(self, key: str) -> Any: - return self._stats[key] - - def items(self) -> ItemsView[str, Any]: - return self._stats.items() - - def filter(self, keep: torch.Tensor) -> None: - for k, v in self._stats.items(): - if v is None: - self._stats[k] = None - elif isinstance(v, torch.Tensor): - self._stats[k] = v[torch.as_tensor(keep, device=v.device)] - elif isinstance(v, np.ndarray): - self._stats[k] = v[keep.detach().cpu().numpy()] - elif isinstance(v, list) and keep.dtype == torch.bool: - self._stats[k] = [a for i, a in enumerate(v) if keep[i]] - elif isinstance(v, list): - self._stats[k] = [v[i] for i in keep] - else: - raise TypeError(f"MaskData key {k} has an unsupported type {type(v)}.") - - def cat(self, new_stats: "MaskData") -> None: - for k, v in new_stats.items(): - if k not in self._stats or self._stats[k] is None: - self._stats[k] = deepcopy(v) - elif isinstance(v, torch.Tensor): - self._stats[k] = torch.cat([self._stats[k], v], dim=0) - elif isinstance(v, np.ndarray): - self._stats[k] = np.concatenate([self._stats[k], v], axis=0) - elif isinstance(v, list): - self._stats[k] = self._stats[k] + deepcopy(v) - else: - raise TypeError(f"MaskData key {k} has an unsupported type {type(v)}.") - - def to_numpy(self) -> None: - for k, v in self._stats.items(): - if isinstance(v, torch.Tensor): - self._stats[k] = v.detach().cpu().numpy() - - -def is_box_near_crop_edge( - boxes: torch.Tensor, crop_box: List[int], orig_box: List[int], atol: float = 20.0 -) -> torch.Tensor: - """Filter masks at the edge of a crop, but not at the edge of the original image.""" - crop_box_torch = torch.as_tensor(crop_box, dtype=torch.float, device=boxes.device) - orig_box_torch = torch.as_tensor(orig_box, dtype=torch.float, device=boxes.device) - boxes = uncrop_boxes_xyxy(boxes, crop_box).float() - near_crop_edge = torch.isclose(boxes, crop_box_torch[None, :], atol=atol, rtol=0) - near_image_edge = torch.isclose(boxes, orig_box_torch[None, :], atol=atol, rtol=0) - near_crop_edge = torch.logical_and(near_crop_edge, ~near_image_edge) - return torch.any(near_crop_edge, dim=1) - - -def box_xyxy_to_xywh(box_xyxy: torch.Tensor) -> torch.Tensor: - box_xywh = deepcopy(box_xyxy) - box_xywh[2] = box_xywh[2] - box_xywh[0] - 
box_xywh[3] = box_xywh[3] - box_xywh[1] - return box_xywh - - -def batch_iterator(batch_size: int, *args) -> Generator[List[Any], None, None]: - assert len(args) > 0 and all( - len(a) == len(args[0]) for a in args - ), "Batched iteration must have inputs of all the same size." - n_batches = len(args[0]) // batch_size + int(len(args[0]) % batch_size != 0) - for b in range(n_batches): - yield [arg[b * batch_size : (b + 1) * batch_size] for arg in args] - - -def mask_to_rle_pytorch(tensor: torch.Tensor) -> List[Dict[str, Any]]: - """ - Encodes masks to an uncompressed RLE, in the format expected by - pycoco tools. - """ - # Put in fortran order and flatten h,w - b, h, w = tensor.shape - tensor = tensor.permute(0, 2, 1).flatten(1) - - # Compute change indices - diff = tensor[:, 1:] ^ tensor[:, :-1] - change_indices = diff.nonzero() - - # Encode run length - out = [] - for i in range(b): - cur_idxs = change_indices[change_indices[:, 0] == i, 1] - cur_idxs = torch.cat( - [ - torch.tensor([0], dtype=cur_idxs.dtype, device=cur_idxs.device), - cur_idxs + 1, - torch.tensor([h * w], dtype=cur_idxs.dtype, device=cur_idxs.device), - ] - ) - btw_idxs = cur_idxs[1:] - cur_idxs[:-1] - counts = [] if tensor[i, 0] == 0 else [0] - counts.extend(btw_idxs.detach().cpu().tolist()) - out.append({"size": [h, w], "counts": counts}) - return out - - -def rle_to_mask(rle: Dict[str, Any]) -> np.ndarray: - """Compute a binary mask from an uncompressed RLE.""" - h, w = rle["size"] - mask = np.empty(h * w, dtype=bool) - idx = 0 - parity = False - for count in rle["counts"]: - mask[idx : idx + count] = parity - idx += count - parity ^= True - mask = mask.reshape(w, h) - return mask.transpose() # Put in C order - - -def area_from_rle(rle: Dict[str, Any]) -> int: - return sum(rle["counts"][1::2]) - - -def calculate_stability_score( - masks: torch.Tensor, mask_threshold: float, threshold_offset: float -) -> torch.Tensor: - """ - Computes the stability score for a batch of masks. The stability - score is the IoU between the binary masks obtained by thresholding - the predicted mask logits at high and low values. - """ - # One mask is always contained inside the other. - # Save memory by preventing unnecesary cast to torch.int64 - intersections = ( - (masks > (mask_threshold + threshold_offset)) - .sum(-1, dtype=torch.int16) - .sum(-1, dtype=torch.int32) - ) - unions = ( - (masks > (mask_threshold - threshold_offset)) - .sum(-1, dtype=torch.int16) - .sum(-1, dtype=torch.int32) - ) - return intersections / unions - - -def build_point_grid(n_per_side: int) -> np.ndarray: - """Generates a 2D grid of points evenly spaced in [0,1]x[0,1].""" - offset = 1 / (2 * n_per_side) - points_one_side = np.linspace(offset, 1 - offset, n_per_side) - points_x = np.tile(points_one_side[None, :], (n_per_side, 1)) - points_y = np.tile(points_one_side[:, None], (1, n_per_side)) - points = np.stack([points_x, points_y], axis=-1).reshape(-1, 2) - return points - - -def build_all_layer_point_grids( - n_per_side: int, n_layers: int, scale_per_layer: int -) -> List[np.ndarray]: - """Generates point grids for all crop layers.""" - points_by_layer = [] - for i in range(n_layers + 1): - n_points = int(n_per_side / (scale_per_layer**i)) - points_by_layer.append(build_point_grid(n_points)) - return points_by_layer - - -def generate_crop_boxes( - im_size: Tuple[int, ...], n_layers: int, overlap_ratio: float -) -> Tuple[List[List[int]], List[int]]: - """ - Generates a list of crop boxes of different sizes. Each layer - has (2**i)**2 boxes for the ith layer. 
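- Returns the crop boxes as ``[x0, y0, x1, y1]`` lists clipped to the image bounds, together with a parallel list giving the layer index of each box.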
- """ - crop_boxes, layer_idxs = [], [] - im_h, im_w = im_size - short_side = min(im_h, im_w) - - # Original image - crop_boxes.append([0, 0, im_w, im_h]) - layer_idxs.append(0) - - def crop_len(orig_len, n_crops, overlap): - return int(math.ceil((overlap * (n_crops - 1) + orig_len) / n_crops)) - - for i_layer in range(n_layers): - n_crops_per_side = 2 ** (i_layer + 1) - overlap = int(overlap_ratio * short_side * (2 / n_crops_per_side)) - - crop_w = crop_len(im_w, n_crops_per_side, overlap) - crop_h = crop_len(im_h, n_crops_per_side, overlap) - - crop_box_x0 = [int((crop_w - overlap) * i) for i in range(n_crops_per_side)] - crop_box_y0 = [int((crop_h - overlap) * i) for i in range(n_crops_per_side)] - - # Crops in XYWH format - for x0, y0 in product(crop_box_x0, crop_box_y0): - box = [x0, y0, min(x0 + crop_w, im_w), min(y0 + crop_h, im_h)] - crop_boxes.append(box) - layer_idxs.append(i_layer + 1) - - return crop_boxes, layer_idxs - - -def uncrop_boxes_xyxy(boxes: torch.Tensor, crop_box: List[int]) -> torch.Tensor: - x0, y0, _, _ = crop_box - offset = torch.tensor([[x0, y0, x0, y0]], device=boxes.device) - # Check if boxes has a channel dimension - if len(boxes.shape) == 3: - offset = offset.unsqueeze(1) - return boxes + offset - - -def uncrop_points(points: torch.Tensor, crop_box: List[int]) -> torch.Tensor: - x0, y0, _, _ = crop_box - offset = torch.tensor([[x0, y0]], device=points.device) - # Check if points has a channel dimension - if len(points.shape) == 3: - offset = offset.unsqueeze(1) - return points + offset - - -def uncrop_masks( - masks: torch.Tensor, crop_box: List[int], orig_h: int, orig_w: int -) -> torch.Tensor: - x0, y0, x1, y1 = crop_box - if x0 == 0 and y0 == 0 and x1 == orig_w and y1 == orig_h: - return masks - # Coordinate transform masks - pad_x, pad_y = orig_w - (x1 - x0), orig_h - (y1 - y0) - pad = (x0, pad_x - x0, y0, pad_y - y0) - return torch.nn.functional.pad(masks, pad, value=0) - - -def remove_small_regions( - mask: np.ndarray, area_thresh: float, mode: str -) -> Tuple[np.ndarray, bool]: - """ - Removes small disconnected regions and holes in a mask. Returns the - mask and an indicator of if the mask has been modified. - """ - import cv2 # type: ignore - - assert mode in ["holes", "islands"] - correct_holes = mode == "holes" - working_mask = (correct_holes ^ mask).astype(np.uint8) - n_labels, regions, stats, _ = cv2.connectedComponentsWithStats(working_mask, 8) - sizes = stats[:, -1][1:] # Row 0 is background label - small_regions = [i + 1 for i, s in enumerate(sizes) if s < area_thresh] - if len(small_regions) == 0: - return mask, False - fill_labels = [0] + small_regions - if not correct_holes: - fill_labels = [i for i in range(n_labels) if i not in fill_labels] - # If every region is below threshold, keep largest - if len(fill_labels) == 0: - fill_labels = [int(np.argmax(sizes)) + 1] - mask = np.isin(regions, fill_labels) - return mask, True - - -def coco_encode_rle(uncompressed_rle: Dict[str, Any]) -> Dict[str, Any]: - from pycocotools import mask as mask_utils # type: ignore - - h, w = uncompressed_rle["size"] - rle = mask_utils.frPyObjects(uncompressed_rle, h, w) - rle["counts"] = rle["counts"].decode("utf-8") # Necessary to serialize with json - return rle - - -def batched_mask_to_box(masks: torch.Tensor) -> torch.Tensor: - """ - Calculates boxes in XYXY format around masks. Return [0,0,0,0] for - an empty mask. For input shape C1xC2x...xHxW, the output shape is C1xC2x...x4. 
- """ - # torch.max below raises an error on empty inputs, just skip in this case - if torch.numel(masks) == 0: - return torch.zeros(*masks.shape[:-2], 4, device=masks.device) - - # Normalize shape to CxHxW - shape = masks.shape - h, w = shape[-2:] - if len(shape) > 2: - masks = masks.flatten(0, -3) - else: - masks = masks.unsqueeze(0) - - # Get top and bottom edges - in_height, _ = torch.max(masks, dim=-1) - in_height_coords = in_height * torch.arange(h, device=in_height.device)[None, :] - bottom_edges, _ = torch.max(in_height_coords, dim=-1) - in_height_coords = in_height_coords + h * (~in_height) - top_edges, _ = torch.min(in_height_coords, dim=-1) - - # Get left and right edges - in_width, _ = torch.max(masks, dim=-2) - in_width_coords = in_width * torch.arange(w, device=in_width.device)[None, :] - right_edges, _ = torch.max(in_width_coords, dim=-1) - in_width_coords = in_width_coords + w * (~in_width) - left_edges, _ = torch.min(in_width_coords, dim=-1) - - # If the mask is empty the right edge will be to the left of the left edge. - # Replace these boxes with [0, 0, 0, 0] - empty_filter = (right_edges < left_edges) | (bottom_edges < top_edges) - out = torch.stack([left_edges, top_edges, right_edges, bottom_edges], dim=-1) - out = out * (~empty_filter).unsqueeze(-1) - - # Return to original shape - if len(shape) > 2: - out = out.reshape(*shape[:-2], 4) - else: - out = out[0] - - return out diff --git a/spaces/Manjushri/SDXL-1.0-Inpainting-CPU/app.py b/spaces/Manjushri/SDXL-1.0-Inpainting-CPU/app.py deleted file mode 100644 index 46013062aabea9e0566890faa04757ec6cd96cc6..0000000000000000000000000000000000000000 --- a/spaces/Manjushri/SDXL-1.0-Inpainting-CPU/app.py +++ /dev/null @@ -1,30 +0,0 @@ -from diffusers import StableDiffusionXLInpaintPipeline -import gradio as gr -import numpy as np -import imageio -from PIL import Image -import torch -import modin.pandas as pd - -device = "cuda" if torch.cuda.is_available() else "cpu" -pipe = StableDiffusionXLInpaintPipeline.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0", safety_checker=None) -pipe = pipe.to(device) - -def resize(value,img): - img = Image.open(img) - img = img.resize((value,value)) - return img - -def predict(source_img, prompt, negative_prompt): - imageio.imwrite("data.png", source_img["image"]) - imageio.imwrite("data_mask.png", source_img["mask"]) - src = resize(768, "data.png") - src.save("src.png") - mask = resize(768, "data_mask.png") - mask.save("mask.png") - image = pipe(prompt=prompt, negative_prompt=negative_prompt, image=src, mask_image=mask, num_inference_steps=20).images[0] - return image - -title="SDXL 1.0 Inpainting CPU" -description="Inpainting with SDXL 1.0
    Warning: Slow process... ~10 min inference time.
    Please use square .png image as input, 512x512, 768x768, or 1024x1024" -gr.Interface(fn=predict, inputs=[gr.Image(source="upload", type="numpy", tool="sketch", elem_id="source_container"), gr.Textbox(label='What you want the AI to Generate, 77 Token limit'), gr.Textbox(label='What you Do Not want the AI to generate')], outputs='image', title=title, description=description, article = "Code Monkey: Manjushri").launch(max_threads=True, debug=True) \ No newline at end of file diff --git a/spaces/Marshalls/testmtd/analysis/pymo/mocapplayer/libs/math.min.js b/spaces/Marshalls/testmtd/analysis/pymo/mocapplayer/libs/math.min.js deleted file mode 100644 index 12d578beeeb83a90de342bf5f2a3d51179d2c296..0000000000000000000000000000000000000000 --- a/spaces/Marshalls/testmtd/analysis/pymo/mocapplayer/libs/math.min.js +++ /dev/null @@ -1,47 +0,0 @@ -/** - * math.js - * https://github.com/josdejong/mathjs - * - * Math.js is an extensive math library for JavaScript and Node.js, - * It features real and complex numbers, units, matrices, a large set of - * mathematical functions, and a flexible expression parser. - * - * @version 2.6.0 - * @date 2016-01-08 - * - * @license - * Copyright (C) 2013-2016 Jos de Jong - * - * Licensed under the Apache License, Version 2.0 (the "License"); you may not - * use this file except in compliance with the License. You may obtain a copy - * of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT - * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the - * License for the specific language governing permissions and limitations under - * the License. - */ -!function(e,t){"object"==typeof exports&&"object"==typeof module?module.exports=t():"function"==typeof define&&define.amd?define([],t):"object"==typeof exports?exports.math=t():e.math=t()}(this,function(){return function(e){function t(n){if(r[n])return r[n].exports;var i=r[n]={exports:{},id:n,loaded:!1};return e[n].call(i.exports,i,i.exports,t),i.loaded=!0,i.exports}var r={};return t.m=e,t.c=r,t.p="",t(0)}([function(e,t,r){function n(e){var t=i.create(e);return t.create=n,t["import"](r(13)),t}var i=r(1);e.exports=n()},function(e,t,r){e.exports=r(2)},function(e,t,r){var n=r(3).isFactory,i=r(3).deepExtend,a=r(4),o=r(8),s=r(10),u=r(12);t.create=function(e){function t(e){if(!n(e))throw new Error("Factory object with properties `type`, `name`, and `factory` expected");var i,a=r.indexOf(e);return-1===a?(i=e.math===!0?e.factory(f.type,l,t,f.typed,f):e.factory(f.type,l,t,f.typed),r.push(e),c.push(i)):i=c[a],i}if("function"!=typeof Object.create)throw new Error("ES5 not supported by this JavaScript engine. 
Please load the es5-shim and es5-sham library for compatibility.");var r=[],c=[],f=o.mixin({});f.type={},f.expression={transform:Object.create(f)},f.typed=a.create(f.type);var l={epsilon:1e-14,matrix:"matrix",number:"number",precision:64,predictable:!1};return e&&i(l,e),f["import"]=t(s),f.config=t(u),f}},function(e,t){"use strict";t.clone=function r(e){var t=typeof e;if("number"===t||"string"===t||"boolean"===t||null===e||void 0===e)return e;if("function"==typeof e.clone)return e.clone();if(Array.isArray(e))return e.map(function(e){return r(e)});if(e instanceof Number)return new Number(e.valueOf());if(e instanceof String)return new String(e.valueOf());if(e instanceof Boolean)return new Boolean(e.valueOf());if(e instanceof Date)return new Date(e.valueOf());if(e&&e.isBigNumber===!0)return e;if(e instanceof RegExp)throw new TypeError("Cannot clone "+e);var n={};for(var i in e)e.hasOwnProperty(i)&&(n[i]=r(e[i]));return n},t.extend=function(e,t){for(var r in t)t.hasOwnProperty(r)&&(e[r]=t[r]);return e},t.deepExtend=function n(e,t){if(Array.isArray(t))throw new TypeError("Arrays are not supported by deepExtend");for(var r in t)if(t.hasOwnProperty(r))if(t[r]&&t[r].constructor===Object)void 0===e[r]&&(e[r]={}),e[r].constructor===Object?n(e[r],t[r]):e[r]=t[r];else{if(Array.isArray(t[r]))throw new TypeError("Arrays are not supported by deepExtend");e[r]=t[r]}return e},t.deepEqual=function(e,r){var n,i,a;if(Array.isArray(e)){if(!Array.isArray(r))return!1;if(e.length!=r.length)return!1;for(i=0,a=e.length;a>i;i++)if(!t.deepEqual(e[i],r[i]))return!1;return!0}if(e instanceof Object){if(Array.isArray(r)||!(r instanceof Object))return!1;for(n in e)if(!t.deepEqual(e[n],r[n]))return!1;for(n in r)if(!t.deepEqual(e[n],r[n]))return!1;return!0}return typeof e==typeof r&&e==r},t.canDefineProperty=function(){try{if(Object.defineProperty)return Object.defineProperty({},"x",{get:function(){}}),!0}catch(e){}return!1},t.lazy=function(e,r,n){if(t.canDefineProperty()){var i,a=!0;Object.defineProperty(e,r,{get:function(){return a&&(i=n(),a=!1),i},set:function(e){i=e,a=!1},configurable:!0,enumerable:!0})}else e[r]=n()},t.traverse=function(e,t){var r=e;if(t)for(var n=t.split("."),i=0;i15)throw new TypeError("Cannot implicitly convert a number with >15 significant digits to BigNumber (value: "+t+"). Use function bignumber(x) to convert to BigNumber.");return new e.BigNumber(t)}},{from:"number",to:"Complex",convert:function(t){return new e.Complex(t,0)}},{from:"number",to:"string",convert:function(e){return e+""}},{from:"BigNumber",to:"Complex",convert:function(t){return new e.Complex(t.toNumber(),0)}},{from:"number",to:"Fraction",convert:function(t){if(i(t)>15)throw new TypeError("Cannot implicitly convert a number with >15 significant digits to Fraction (value: "+t+"). 
Use function fraction(x) to convert to Fraction.");return new e.Fraction(t)}},{from:"string",to:"number",convert:function(e){var t=Number(e);if(isNaN(t))throw new Error('Cannot convert "'+e+'" to a number');return t}},{from:"boolean",to:"number",convert:function(e){return+e}},{from:"boolean",to:"BigNumber",convert:function(t){return new e.BigNumber(+t)}},{from:"boolean",to:"string",convert:function(e){return+e}},{from:"null",to:"number",convert:function(){return 0}},{from:"null",to:"string",convert:function(){return"null"}},{from:"null",to:"BigNumber",convert:function(){return new e.BigNumber(0)}},{from:"Array",to:"Matrix",convert:function(t){return new e.DenseMatrix(t)}},{from:"Matrix",to:"Array",convert:function(e){return e.valueOf()}}],t}},function(e,t,r){var n,i,a;!function(r){i=[],n=r,a="function"==typeof n?n.apply(t,i):n,!(void 0!==a&&(e.exports=a))}(function(){function e(){function t(e){for(var t,r=0;rr&&!c?"Unexpected type of argument in function "+u+" (expected: "+s.join(" or ")+", actual: "+o+", index: "+r+")":"Too few arguments in function "+u+" (expected: "+s.join(" or ")+", index: "+r+")":"Too many arguments in function "+u+" (expected: "+r+", actual: "+t+")";var l=new TypeError(a);return l.data=f,l}function i(e){this.name=e||"refs",this.categories={}}function a(e,t){if("string"==typeof e){var r=e.trim(),n="..."===r.substr(0,3);if(n&&(r=r.substr(3)),""===r)this.types=["any"];else{this.types=r.split("|");for(var i=0;ip)n[f]=c;else if(0===p)throw new Error('Signature "'+f+'" is defined twice')}else n[f]=c}}for(f in n)n.hasOwnProperty(f)&&i.push(n[f]);for(i.sort(function(e,t){return o.compare(e,t)}),r=0;rr;r++)t[r]="arg"+r;return t}function p(e,t){var r=new i,a=u(t);if(0==a.length)throw new Error("No signatures provided");var o=f(a,[]),s=[],p=e||"",h=l(m(a));s.push("function "+p+"("+h.join(", ")+") {"),s.push(' "use strict";'),s.push(" var name = '"+p+"';"),s.push(o.toCode(r," ")),s.push("}");var g=[r.toCode(),"return "+s.join("\n")].join("\n"),v=new Function(r.name,"createError",g),d=v(r,n);return d.signatures=c(a),d}function m(e){for(var t=0,r=0;rt&&(t=n)}return t}function h(e){for(var t,r=0;r0},a.prototype.contains=function(e){for(var t=0;tt.params.length)return 1;if(e.params.lengthr;r++)e.params[r].hasConversions()&&i++,t.params[r].hasConversions()&&o++;if(i>o)return 1;if(o>i)return-1;for(r=0;r "+a+") {"),i.push(r+" var varArgs = [];"),i.push(r+" for (var i = "+a+"; i < arguments.length; i++) {"),i.push(r+" varArgs.push(arguments[i]);"),i.push(r+" }"),i.push(this.signature.toCode(e,r+" ")),i.push(r+"}");else{for(var u=function(r,n){for(var i=[],a=0;a "+r+") {",t+" throw createError(name, arguments.length, "+r+", arguments["+r+"]);",t+"}"].join("\n");for(var n={},i=[],a=0;a0?1:0>e?-1:0},t.format=function(e,r){if("function"==typeof r)return r(e);if(e===1/0)return"Infinity";if(e===-(1/0))return"-Infinity";if(isNaN(e))return"NaN";var n="auto",i=void 0;switch(r&&(r.notation&&(n=r.notation),t.isNumber(r)?i=r:r.precision&&(i=r.precision)),n){case"fixed":return t.toFixed(e,i);case"exponential":return t.toExponential(e,i);case"engineering":return t.toEngineering(e,i);case"auto":return t.toPrecision(e,i,r&&r.exponential).replace(/((\.\d*?)(0+))($|e)/,function(){var e=arguments[2],t=arguments[4];return"."!==e?e+t:t});default:throw new Error('Unknown notation "'+n+'". 
Choose "auto", "exponential", or "fixed".')}},t.toExponential=function(e,t){return new n(e).toExponential(t)},t.toEngineering=function(e,t){return new n(e).toEngineering(t)},t.toFixed=function(e,t){return new n(e).toFixed(t)},t.toPrecision=function(e,t,r){return new n(e).toPrecision(t,r)},t.digits=function(e){return e.toExponential().replace(/e.*$/,"").replace(/^0\.?0*|\./,"").length},t.DBL_EPSILON=Number.EPSILON||2.220446049250313e-16,t.nearlyEqual=function(e,r,n){if(null==n)return e==r;if(e==r)return!0;if(isNaN(e)||isNaN(r))return!1;if(isFinite(e)&&isFinite(r)){var i=Math.abs(e-r);return ir;r++)t.push(0);return t}r.prototype.toEngineering=function(e){var t=this.roundDigits(e),r=t.exponent,i=t.coefficients,a=r%3===0?r:0>r?r-3-r%3:r-r%3,o=r>=0?r:Math.abs(a);i.length-1=0;)u++;var f=i.slice(u).join(""),l=f.match(/[1-9]/)?"."+f:"";return c=i.slice(0,u).join("")+l,c+="e"+(r>=0?"+":"")+a.toString(),t.sign+c},r.prototype.toFixed=function(e){var t=this.roundDigits(this.exponent+1+(e||0)),r=t.coefficients,i=t.exponent+1,a=i+(e||0);return r.lengthi&&(r=n(-i+1).concat(r),i=1),e&&r.splice(i,0,0===i?"0.":"."),this.sign+r.join("")},r.prototype.toExponential=function(e){var t=e?this.roundDigits(e):this.clone(),r=t.coefficients,i=t.exponent;r.length0?"."+r.join(""):"")+"e"+(i>=0?"+":"")+i},r.prototype.toPrecision=function(e,t){var r=t&&void 0!==t.lower?t.lower:.001,i=t&&void 0!==t.upper?t.upper:1e5,a=Math.abs(Math.pow(10,this.exponent));if(r>a||a>=i)return this.toExponential(e);var o=e?this.roundDigits(e):this.clone(),s=o.coefficients,u=o.exponent;s.length0?u:0;return c=e;)r.unshift(0),t.exponent++,e++;if(r.length>e){var n=r.splice(e,r.length-e);if(n[0]>=5){var i=e-1;for(r[i]++;10===r[i];)r.pop(),0===i&&(r.unshift(0),t.exponent++,i++),i--,r[i]++}}return t},e.exports=r},function(e,t,r){var n=r(9);t.mixin=function(e){var t=new n;return e.on=t.on.bind(t),e.off=t.off.bind(t),e.once=t.once.bind(t),e.emit=t.emit.bind(t),e}},function(e,t){function r(){}r.prototype={on:function(e,t,r){var n=this.e||(this.e={});return(n[e]||(n[e]=[])).push({fn:t,ctx:r}),this},once:function(e,t,r){function n(){i.off(e,n),t.apply(r,arguments)}var i=this;return n._=t,this.on(e,n,r)},emit:function(e){var t=[].slice.call(arguments,1),r=((this.e||(this.e={}))[e]||[]).slice(),n=0,i=r.length;for(n;i>n;n++)r[n].fn.apply(r[n].ctx,t);return this},off:function(e,t){var r=this.e||(this.e={}),n=r[e],i=[];if(n&&t)for(var a=0,o=n.length;o>a;a++)n[a].fn!==t&&n[a].fn._!==t&&i.push(n[a]);return i.length?r[e]=i:delete r[e],this}},e.exports=r},function(e,t,r){"use strict";function n(e,t,r,n,u){function c(e,t){var r=arguments.length;if(1!=r&&2!=r)throw new s("import",r,1,2);if(t||(t={}),a(e))m(e,t);else if(Array.isArray(e))e.forEach(function(e){c(e,t)});else if("object"==typeof e){for(var n in e)if(e.hasOwnProperty(n)){var i=e[n];h(i)?f(n,i,t):a(e)?m(e,t):c(i,t)}}else if(!t.silent)throw new TypeError("Factory, Object, or Array expected")}function f(e,t,r){if(r.wrap&&"function"==typeof t&&(t=p(t)),g(u[e])&&g(t))return t=r.override?n(e,t.signatures):n(u[e],t),u[e]=t,l(e,t),void u.emit("import",e,function(){return t});if(void 0===u[e]||r.override)return u[e]=t,l(e,t),void u.emit("import",e,function(){return t});if(!r.silent)throw new Error('Cannot import "'+e+'": already exists')}function l(e,t){t&&"function"==typeof t.transform&&(u.expression.transform[e]=t.transform)}function p(e){var t=function(){for(var t=[],r=0,n=arguments.length;n>r;r++){var i=arguments[r];t[r]=i&&i.valueOf()}return e.apply(u,t)};return 
e.transform&&(t.transform=e.transform),t}function m(e,t){if("string"==typeof e.name){var a=e.name,s=e.path?o(u,e.path):u,c=s.hasOwnProperty(a)?s[a]:void 0,f=function(){var i=r(e);if(g(c)&&g(i))return t.override||(i=n(c,i)),i;if(void 0===c||t.override)return i;if(!t.silent)throw new Error('Cannot import "'+a+'": already exists')};e.lazy!==!1?i(s,a,f):s[a]=f(),u.emit("import",a,f,e.path)}else r(e)}function h(e){return"function"==typeof e||"number"==typeof e||"string"==typeof e||"boolean"==typeof e||null===e||e&&e.isUnit===!0||e&&e.isComplex===!0}function g(e){return"function"==typeof e&&"object"==typeof e.signatures}return c}var i=r(3).lazy,a=r(3).isFactory,o=r(3).traverse,s=(r(3).extend,r(11));t.math=!0,t.name="import",t.factory=n,t.lazy=!0},function(e,t){"use strict";function r(e,t,n,i){if(!(this instanceof r))throw new SyntaxError("Constructor must be called with the new operator");this.fn=e,this.count=t,this.min=n,this.max=i,this.message="Wrong number of arguments in function "+e+" ("+t+" provided, "+n+(void 0!=i?"-"+i:"")+" expected)",this.stack=(new Error).stack}r.prototype=new Error,r.prototype.constructor=Error,r.prototype.name="ArgumentsError",r.prototype.isArgumentsError=!0,e.exports=r},function(e,t,r){"use strict";function n(e,t,r,n,a){return function(e){if(e){var r=i.clone(t);i.deepExtend(t,e);var n=i.clone(t);return a.emit("config",n,r),n}return i.clone(t)}}var i=r(3);t.name="config",t.math=!0,t.factory=n},function(e,t,r){e.exports=[r(14),r(92),r(96),r(320),r(495),r(497)]},function(e,t,r){e.exports=[r(15),r(20),r(21),r(26),r(30),r(36),r(68),r(69),r(71),r(72)]},function(e,t,r){e.exports=[r(16),r(18)]},function(e,t,r){function n(e,t,r,n,a){var o=i.constructor(t);return o.prototype.type="BigNumber",o.prototype.isBigNumber=!0,o.prototype.toJSON=function(){return{mathjs:"BigNumber",value:this.toString()}},o.fromJSON=function(e){return new o(e.value)},a.on("config",function(e,t){e.precision!==t.precision&&o.config({precision:e.precision})}),o}var i=r(17);r(6).digits;t.name="BigNumber",t.path="type",t.factory=n,t.math=!0},function(e,t,r){var n;!function(i){"use strict";function a(e){for(var t,r,n=1,i=e.length,a=e[0]+"";i>n;n++){for(t=e[n]+"",r=_-t.length;r--;)t="0"+t;a+=t}for(i=a.length;48===a.charCodeAt(--i););return a.slice(0,i+1||1)}function o(e,t,r,n){var i,a,o,s,u;for(a=1,o=e[0];o>=10;o/=10,a++);return o=t-a,0>o?(o+=_,i=0):(i=Math.ceil((o+1)/_),o%=_),a=E(10,_-o),u=e[i]%a|0,null==n?3>o?(0==o?u=u/100|0:1==o&&(u=u/10|0),s=4>r&&99999==u||r>3&&49999==u||5e4==u||0==u):s=(4>r&&u+1==a||r>3&&u+1==a/2)&&(e[i+1]/a/100|0)==E(10,o-2)-1||(u==a/2||0==u)&&0==(e[i+1]/a/100|0):4>o?(0==o?u=u/1e3|0:1==o?u=u/100|0:2==o&&(u=u/10|0),s=(n||4>r)&&9999==u||!n&&r>3&&4999==u):s=((n||4>r)&&u+1==a||!n&&r>3&&u+1==a/2)&&(e[i+1]/a/1e3|0)==E(10,o-3)-1,s}function s(e,t,r){var n=e.constructor;return null==t||((y=0>t||t>8)||0!==t&&(n.errors?parseInt:parseFloat)(t)!=t)&&!p(n,"rounding mode",t,r,0)?n.rounding:0|t}function u(e,t,r,n){var i=e.constructor;return!(y=(n||0)>t||t>=S+1)&&(0===t||(i.errors?parseInt:parseFloat)(t)==t)||p(i,"argument",t,r,0)}function c(e,t){var r,n,i,s,u,c,f,l=0,p=0,m=0,h=e.constructor,v=h.ONE,d=h.rounding,y=h.precision;if(!e.c||!e.c[0]||e.e>17)return new h(e.c?e.c[0]?e.s<0?0:1/0:v:e.s?e.s<0?0:e:NaN);for(null==t?(b=!1,u=y):u=t,f=new h(.03125);e.e>-2;)e=e.times(f),m+=5;for(n=Math.log(E(2,m))/Math.LN10*2+5|0,u+=n,r=s=c=new h(v),h.precision=u;;){if(s=g(s.times(e),u,1),r=r.times(++p),f=c.plus(k(s,r,u,1)),a(f.c).slice(0,u)===a(c.c).slice(0,u)){for(i=m;i--;)c=g(c.times(c),u,1);if(null!=t)return 
h.precision=y,c;if(!(3>l&&o(c.c,u-n,d,l)))return g(c,h.precision=y,d,b=!0);h.precision=u+=10,r=s=f=new h(v),p=0,l++}c=f}}function f(e,t,r,n){var i,o,s=e.constructor,u=(e=new s(e)).e;if(null==t?r=0:(g(e,++t,r),r=n?t:t+e.e-u),u=e.e,i=a(e.c),1==n||2==n&&(u>=t||u<=s.toExpNeg)){for(;i.length1&&(i=i.charAt(0)+"."+i.slice(1)),i+=(0>u?"e":"e+")+u}else{if(n=i.length,0>u){for(o=r-n;++u;i="0"+i);i="0."+i}else if(++u>n){for(o=r-u,u-=n;u--;i+="0");o>0&&(i+=".")}else o=r-n,n>u?i=i.slice(0,u)+"."+i.slice(u):o>0&&(i+=".");if(o>0)for(;o--;i+="0");}return e.s<0&&e.c[0]?"-"+i:i}function l(e){var t=e.length-1,r=t*_+1;if(t=e[t]){for(;t%10==0;t/=10,r--);for(t=e[0];t>=10;t/=10,r++);}return r}function p(e,t,r,n,i){if(e.errors){var a=new Error((n||["new Decimal","cmp","div","eq","gt","gte","lt","lte","minus","mod","plus","times","toFraction","pow","random","log","sqrt","toNearest","divToInt"][w?0>w?-w:w:0>1/w?1:0])+"() "+(["number type has more than 15 significant digits","LN10 out of digits"][t]||t+([y?" out of range":" not an integer"," not a boolean or binary digit"][i]||""))+": "+r);throw a.name="Decimal Error",y=w=0,a}}function m(e,t,r){var n=new e(e.ONE);for(b=!1;1&r&&(n=n.times(t)),r>>=1,r;)t=t.times(t);return b=!0,n}function h(e,t){var r,n,i,s,u,c,f,l,m,v,d,y=1,x=10,w=e,N=w.c,E=w.constructor,M=E.ONE,A=E.rounding,_=E.precision;if(w.s<0||!N||!N[0]||!w.e&&1==N[0]&&1==N.length)return new E(N&&!N[0]?-1/0:1!=w.s?NaN:N?0:w);if(null==t?(b=!1,f=_):f=t,E.precision=f+=x,r=a(N),n=r.charAt(0),!(Math.abs(s=w.e)<15e14))return w=new E(n+"."+r.slice(1)),f+2>B.length&&p(E,1,f+2,"ln"),w=h(w,f-x).plus(new E(B.slice(0,f+2)).times(s+"")),E.precision=_,null==t?g(w,_,A,b=!0):w;for(;7>n&&1!=n||1==n&&r.charAt(1)>3;)w=w.times(e),r=a(w.c),n=r.charAt(0),y++;for(s=w.e,n>1?(w=new E("0."+r),s++):w=new E(n+"."+r.slice(1)),v=w,l=u=w=k(w.minus(M),w.plus(M),f,1),d=g(w.times(w),f,1),i=3;;){if(u=g(u.times(d),f,1),m=l.plus(k(u,new E(i),f,1)),a(m.c).slice(0,f)===a(l.c).slice(0,f)){if(l=l.times(2),0!==s&&(f+2>B.length&&p(E,1,f+2,"ln"),l=l.plus(new E(B.slice(0,f+2)).times(s+""))),l=k(l,new E(y),f,1),null!=t)return E.precision=_,l;if(!o(l.c,f-x,A,c))return g(l,E.precision=_,A,b=!0);E.precision=f+=x,m=u=w=k(v.minus(M),v.plus(M),f,1),d=g(w.times(w),f,1),i=c=1}l=m,i+=2}}function g(e,t,r,n){var i,a,o,s,u,c,f,l,p=e.constructor;e:if(null!=t){if(!(f=e.c))return e;for(i=1,s=f[0];s>=10;s/=10,i++);if(a=t-i,0>a)a+=_,o=t,u=f[l=0],c=u/E(10,i-o-1)%10|0;else if(l=Math.ceil((a+1)/_),l>=f.length){if(!n)break e;for(;f.length<=l;f.push(0));u=c=0,i=1,a%=_,o=a-_+1}else{for(u=s=f[l],i=1;s>=10;s/=10,i++);a%=_,o=a-_+i,c=0>o?0:N(u/E(10,i-o-1)%10)}if(n=n||0>t||null!=f[l+1]||(0>o?u:u%E(10,i-o-1)),n=4>r?(c||n)&&(0==r||r==(e.s<0?3:2)):c>5||5==c&&(4==r||n||6==r&&(a>0?o>0?u/E(10,i-o):0:f[l-1])%10&1||r==(e.s<0?8:7)),1>t||!f[0])return f.length=0,n?(t-=e.e+1,f[0]=E(10,(_-t%_)%_),e.e=-t||0):f[0]=e.e=0,e;if(0==a?(f.length=l,s=1,l--):(f.length=l+1,s=E(10,_-a),f[l]=o>0?(u/E(10,i-o)%E(10,o)|0)*s:0),n)for(;;){if(0==l){for(a=1,o=f[0];o>=10;o/=10,a++);for(o=f[0]+=s,s=1;o>=10;o/=10,s++);a!=s&&(e.e++,f[0]==A&&(f[0]=1));break}if(f[l]+=s,f[l]!=A)break;f[l--]=0,s=1}for(a=f.length;0===f[--a];f.pop());}return b&&(e.e>p.maxE?e.c=e.e=null:e.eo,!i||!a)return u==c?0:!i^r?1:-1;if(u!=c)return u>c^r?1:-1;for(o=-1,s=(u=i.length)<(c=a.length)?u:c;++oa[o]^r?1:-1;return u==c?0:u>c^r?1:-1},T.decimalPlaces=T.dp=function(){var e,t,r=null;if(e=this.c){if(r=((t=e.length-1)-N(this.e/_))*_,t=e[t])for(;t%10==0;t/=10,r--);0>r&&(r=0)}return r},T.dividedBy=T.div=function(e,t){return w=2,k(this,new 
this.constructor(e,t))},T.dividedToIntegerBy=T.divToInt=function(e,t){var r=this,n=r.constructor;return w=18,g(k(r,new n(e,t),0,1,1),n.precision,n.rounding)},T.equals=T.eq=function(e,t){return w=3,0===this.cmp(e,t)},T.exponential=T.exp=function(){return c(this)},T.floor=function(){return g(new this.constructor(this),this.e+1,3)},T.greaterThan=T.gt=function(e,t){return w=4,this.cmp(e,t)>0},T.greaterThanOrEqualTo=T.gte=function(e,t){return w=5,t=this.cmp(e,t),1==t||0===t},T.isFinite=function(){return!!this.c},T.isInteger=T.isInt=function(){return!!this.c&&N(this.e/_)>this.c.length-2},T.isNaN=function(){return!this.s},T.isNegative=T.isNeg=function(){return this.s<0},T.isZero=function(){return!!this.c&&0==this.c[0]},T.lessThan=T.lt=function(e,t){return w=6,this.cmp(e,t)<0},T.lessThanOrEqualTo=T.lte=function(e,t){return w=7,t=this.cmp(e,t),-1==t||0===t},T.logarithm=T.log=function(e,t){var r,n,i,s,u,c,f,l,m,v=this,d=v.constructor,y=d.precision,x=d.rounding,N=5;if(null==e)e=new d(10),r=!0;else{if(w=15,e=new d(e,t),n=e.c,e.s<0||!n||!n[0]||!e.e&&1==n[0]&&1==n.length)return new d(NaN);r=e.eq(10)}if(n=v.c,v.s<0||!n||!n[0]||!v.e&&1==n[0]&&1==n.length)return new d(n&&!n[0]?-1/0:1!=v.s?NaN:n?0:1/0);if(u=r&&(s=n[0],n.length>1||1!=s&&10!=s&&100!=s&&1e3!=s&&1e4!=s&&1e5!=s&&1e6!=s),b=!1,f=y+N,l=f+10,c=h(v,f),r?(l>B.length&&p(d,1,l,"log"),i=new d(B.slice(0,l))):i=h(e,f),m=k(c,i,f,1),o(m.c,s=y,x))do if(f+=10,c=h(v,f),r?(l=f+10,l>B.length&&p(d,1,l,"log"),i=new d(B.slice(0,l))):i=h(e,f),m=k(c,i,f,1),!u){+a(m.c).slice(s+1,s+15)+1==1e14&&(m=g(m,y+1,0));break}while(o(m.c,s+=10,x));return b=!0,g(m,y,x)},T.minus=function(e,t){var r,n,i,a,o=this,s=o.constructor,u=o.s;if(w=8,e=new s(e,t),t=e.s,!u||!t)return new s(NaN);if(u!=t)return e.s=-t,o.plus(e);var c=o.c,f=e.c,l=N(e.e/_),p=N(o.e/_),m=s.precision,h=s.rounding;if(!p||!l){if(!c||!f)return c?(e.s=-t,e):new s(f?o:NaN);if(!c[0]||!f[0])return o=f[0]?(e.s=-t,e):new s(c[0]?o:3==h?-0:0),b?g(o,m,h):o}if(c=c.slice(),n=c.length,u=p-l){for((a=0>u)?(u=-u,r=c,n=f.length):(l=p,r=f),(p=Math.ceil(m/_))>n&&(n=p),u>(n+=2)&&(u=n,r.length=1),r.reverse(),t=u;t--;r.push(0));r.reverse()}else for((a=n<(i=f.length))&&(i=n),u=t=0;i>t;t++)if(c[t]!=f[t]){a=c[t]0)for(;t--;c[i++]=0);for(p=A-1,t=f.length;t>u;){if(c[--t]=10;t/=10,u++);return e.e=u+l*_-1,b?g(e,m,h):e},T.modulo=T.mod=function(e,t){var r,n,i=this,a=i.constructor,o=a.modulo;return w=9,e=new a(e,t),t=e.s,r=!i.c||!t||e.c&&!e.c[0],r||!e.c||i.c&&!i.c[0]?r?new a(NaN):g(new a(i),a.precision,a.rounding):(b=!1,9==o?(e.s=1,n=k(i,e,0,3,1),e.s=t,n.s*=t):n=k(i,e,0,o,1),n=n.times(e),b=!0,i.minus(n))},T.naturalLogarithm=T.ln=function(){return h(this)},T.negated=T.neg=function(){var e=new this.constructor(this);return e.s=-e.s||null,g(e)},T.plus=function(e,t){var r,n=this,i=n.constructor,a=n.s;if(w=10,e=new i(e,t),t=e.s,!a||!t)return new i(NaN);if(a!=t)return e.s=-t,n.minus(e);var o=n.c,s=e.c,u=N(e.e/_),c=N(n.e/_),f=i.precision,l=i.rounding;if(!c||!u){if(!o||!s)return new i(a/0);if(!o[0]||!s[0])return n=s[0]?e:new i(o[0]?n:0*a),b?g(n,f,l):n}if(o=o.slice(),a=c-u){for(0>a?(a=-a,r=o,t=s.length):(u=c,r=s,t=o.length),(c=Math.ceil(f/_))>t&&(t=c),a>++t&&(a=t,r.length=1),r.reverse();a--;r.push(0));r.reverse()}for(o.length-s.length<0&&(r=s,s=o,o=r),a=s.length,t=0,c=A;a;o[a]%=c)t=(o[--a]=o[a]+s[a]+t)/c|0;for(t&&(o.unshift(t),++u),a=o.length;0==o[--a];o.pop());for(e.c=o,a=1,t=o[0];t>=10;t/=10,a++);return e.e=a+u*_-1,b?g(e,f,l):e},T.precision=T.sd=function(e){var t=null,r=this;return 
e!=t&&e!==!!e&&1!==e&&0!==e&&p(r.constructor,"argument",e,"precision",1),r.c&&(t=l(r.c),e&&r.e+1>t&&(t=r.e+1)),t},T.round=function(){var e=this,t=e.constructor;return g(new t(e),e.e+1,t.rounding)},T.squareRoot=T.sqrt=function(){var e,t,r,n,i,o,s=this,u=s.c,c=s.s,f=s.e,l=s.constructor,p=new l(.5);if(1!==c||!u||!u[0])return new l(!c||0>c&&(!u||u[0])?NaN:u?s:1/0);for(b=!1,c=Math.sqrt(+s),0==c||c==1/0?(t=a(u),(t.length+f)%2==0&&(t+="0"),c=Math.sqrt(t),f=N((f+1)/2)-(0>f||f%2),c==1/0?t="1e"+f:(t=c.toExponential(),t=t.slice(0,t.indexOf("e")+1)+f),n=new l(t)):n=new l(c.toString()),r=(f=l.precision)+3;;)if(o=n,n=p.times(o.plus(k(s,o,r+2,1))),a(o.c).slice(0,r)===(t=a(n.c)).slice(0,r)){if(t=t.slice(r-3,r+1),"9999"!=t&&(i||"4999"!=t)){(!+t||!+t.slice(1)&&"5"==t.charAt(0))&&(g(n,f+1,1),e=!n.times(n).eq(s));break}if(!i&&(g(o,f+1,0),o.times(o).eq(s))){n=o;break}r+=4,i=1}return b=!0,g(n,f,l.rounding,e)},T.times=function(e,t){var r,n,i=this,a=i.constructor,o=i.c,s=(w=11,e=new a(e,t),e.c),u=N(i.e/_),c=N(e.e/_),f=i.s;if(t=e.s,e.s=f==t?1:-1,!((u||o&&o[0])&&(c||s&&s[0])))return new a(!f||!t||o&&!o[0]&&!s||s&&!s[0]&&!o?NaN:o&&s?0*e.s:e.s/0);for(n=u+c,f=o.length,t=s.length,t>f&&(r=o,o=s,s=r,c=f,f=t,t=c),c=f+t,r=[];c--;r.push(0));for(u=t-1;u>-1;u--){for(t=0,c=f+u;c>u;)t=r[c]+s[u]*o[c-u-1]+t,r[c--]=t%A|0,t=t/A|0;r[c]=(r[c]+t)%A|0}for(t?++n:r[0]||r.shift(),c=r.length;!r[--c];r.pop());for(e.c=r,f=1,t=r[0];t>=10;t/=10,f++);return e.e=f+n*_-1,b?g(e,a.precision,a.rounding):e},T.toDecimalPlaces=T.toDP=function(e,t){var r=this;return r=new r.constructor(r),null!=e&&u(r,e,"toDP")?g(r,(0|e)+r.e+1,s(r,t,"toDP")):r},T.toExponential=function(e,t){var r=this;return r.c?f(r,null!=e&&u(r,e,"toExponential")?0|e:null,null!=e&&s(r,t,"toExponential"),1):r.toString()},T.toFixed=function(e,t){var r,n=this,i=n.constructor,a=i.toExpNeg,o=i.toExpPos;return null!=e&&(e=u(n,e,r="toFixed")?n.e+(0|e):null,t=s(n,t,r)),i.toExpNeg=-(i.toExpPos=1/0),null!=e&&n.c?(r=f(n,e,t),n.s<0&&n.c&&(n.c[0]?r.indexOf("-")<0&&(r="-"+r):r=r.replace("-",""))):r=n.toString(),i.toExpNeg=a,i.toExpPos=o,r},T.toFormat=function(e,t){var r=this;if(!r.c)return r.toString();var n,i=r.s<0,a=r.constructor.format,o=a.groupSeparator,s=+a.groupSize,u=+a.secondaryGroupSize,c=r.toFixed(e,t).split("."),f=c[0],l=c[1],p=i?f.slice(1):f,m=p.length;if(u&&(n=s,s=u,m-=u=n),s>0&&m>0){for(n=m%s||s,f=p.substr(0,n);m>n;n+=s)f+=o+p.substr(n,s);u>0&&(f+=o+p.slice(n)),i&&(f="-"+f)}return l?f+a.decimalSeparator+((u=+a.fractionGroupSize)?l.replace(new RegExp("\\d{"+u+"}\\B","g"),"$&"+a.fractionGroupSeparator):l):f},T.toFraction=function(e){var t,r,n,i,o,s,u,c,f=this,m=f.constructor,h=t=new m(m.ONE),g=s=new m(0),v=f.c,d=new m(g);if(!v)return f.toString();for(n=d.e=l(v)-f.e-1,d.c[0]=E(10,(u=n%_)<0?_+u:u),(null==e||(!(w=12,o=new m(e)).s||(y=o.cmp(h)<0||!o.c)||m.errors&&N(o.e/_)0)&&(e=n>0?d:h),b=!1,o=new m(a(v)),u=m.precision,m.precision=n=v.length*_*2;c=k(o,d,0,1,1),r=t.plus(c.times(g)),1!=r.cmp(e);)t=g,g=r,h=s.plus(c.times(r=h)),s=r,d=o.minus(c.times(r=d)),o=r;return r=k(e.minus(t),g,0,1,1),s=s.plus(r.times(h)),t=t.plus(r.times(g)),s.s=h.s=f.s,i=k(h,g,n,1).minus(f).abs().cmp(k(s,t,n,1).minus(f).abs())<1?[h+"",g+""]:[s+"",t+""],b=!0,m.precision=u,i},T.toNearest=function(e,t){var r=this,n=r.constructor;return r=new n(r),null==e?(e=new n(n.ONE),t=n.rounding):(w=17,e=new n(e),t=s(r,t,"toNearest")),e.c?r.c&&(e.c[0]?(b=!1,r=k(r,e,0,4>t?[4,5,7,8][t]:t,1).times(e),b=!0,g(r)):r.c=[r.e=0]):r.s&&(e.s&&(e.s=r.s),r=e),r},T.toNumber=function(){var 
e=this;return+e||(e.s?0*e.s:NaN)},T.toPower=T.pow=function(e,t){var r,n,i,s,u=this,f=u.constructor,l=u.s,p=(w=13,+(e=new f(e,t))),v=0>p?-p:p,d=f.precision,y=f.rounding;if(!u.c||!e.c||(i=!u.c[0])||!e.c[0])return new f(E(i?0*l:+u,p));if(u=new f(u),r=u.c.length,!u.e&&u.c[0]==u.s&&1==r)return u;if(t=e.c.length-1,e.e||e.c[0]!=e.s||t)if(n=N(e.e/_),i=n>=t,!i&&0>l)s=new f(NaN);else{if(i&&z>r*_*v){if(s=m(f,u,v),e.s<0)return f.ONE.div(s)}else{if(l=0>l&&1&e.c[Math.max(n,t)]?-1:1,t=E(+u,p),n=0!=t&&isFinite(t)?new f(t+"").e:N(p*(Math.log("0."+a(u.c))/Math.LN10+u.e+1)),n>f.maxE+1||n0?l/0:0);b=!1,f.rounding=u.s=1,v=Math.min(12,(n+"").length),s=c(e.times(h(u,d+v)),d),s=g(s,d+5,1),o(s.c,d,y)&&(n=d+10,s=g(c(e.times(h(u,n+v)),n),n+5,1),+a(s.c).slice(d+1,d+15)+1==1e14&&(s=g(s,d+1,0))),s.s=l,b=!0,f.rounding=y}s=g(s,d,y)}else s=g(u,d,y);return s},T.toPrecision=function(e,t){var r=this;return null!=e&&u(r,e,"toPrecision",1)&&r.c?f(r,0|--e,s(r,t,"toPrecision"),2):r.toString()},T.toSignificantDigits=T.toSD=function(e,t){var r=this,n=r.constructor;return r=new n(r),null!=e&&u(r,e,"toSD",1)?g(r,0|e,s(r,t,"toSD")):g(r,n.precision,n.rounding)},T.toString=function(e){var t,r,n,i=this,o=i.constructor,s=i.e;if(null===s)r=i.s?"Infinity":"NaN";else{if(e===t&&(s<=o.toExpNeg||s>=o.toExpPos))return f(i,null,o.rounding,1);if(r=a(i.c),0>s){for(;++s;r="0"+r);r="0."+r}else if(n=r.length,s>0)if(++s>n)for(s-=n;s--;r+="0");else n>s&&(r=r.slice(0,s)+"."+r.slice(s));else if(t=r.charAt(0),n>1)r=t+"."+r.slice(1);else if("0"==t)return t;if(null!=e)if((y=!(e>=2&&65>e))||e!=(0|e)&&o.errors)p(o,"base",e,"toString",0);else if(r=v(o,r,0|e,10,i.s),"0"==r)return r}return i.s<0?"-"+r:r},T.truncated=T.trunc=function(){return g(new this.constructor(this),this.e+1,1)},T.valueOf=T.toJSON=function(){return this.toString()},v=function(){function e(e,t,r){for(var n,i,a=[0],o=0,s=e.length;s>o;){for(i=a.length;i--;a[i]*=t);for(a[n=0]+=O.indexOf(e.charAt(o++));nr-1&&(null==a[n+1]&&(a[n+1]=0),a[n+1]+=a[n]/r|0,a[n]%=r)}return a.reverse()}return function(t,r,n,i,a){var o,s,u,c,f,l,p=r.indexOf("."),h=t.precision,g=t.rounding;for(37>i&&(r=r.toLowerCase()),p>=0&&(r=r.replace(".",""),l=new t(i),c=m(t,l,r.length-p),l.c=e(c.toFixed(),10,n),l.e=l.c.length),f=e(r,i,n),o=s=f.length;0==f[--s];f.pop());if(!f[0])return"0";if(0>p?o--:(c.c=f,c.e=o,c.s=a,c=k(c,l,h,g,0,n),f=c.c,u=c.r,o=c.e),p=f[h],s=n/2,u=u||null!=f[h+1],4>g?(null!=p||u)&&(0==g||g==(0>a?3:2)):p>s||p==s&&(4==g||u||6==g&&1&f[h-1]||g==(0>a?8:7)))for(f.length=h,--n;++f[--h]>n;)f[h]=0,h||(++o,f.unshift(1));else f.length=h;for(s=f.length;!f[--s];);for(p=0,r="";s>=p;r+=O.charAt(f[p++]));if(0>o){for(;++o;r="0"+r);r="0."+r}else if(p=r.length,++o>p)for(o-=p;o--;r+="0");else p>o&&(r=r.slice(0,o)+"."+r.slice(o));return r}}();var k=function(){function e(e,t,r){var n,i=0,a=e.length;for(e=e.slice();a--;)n=e[a]*t+i,e[a]=n%r|0,i=n/r|0;return i&&e.unshift(i),e}function t(e,t,r,n){var i,a;if(r!=n)a=r>n?1:-1;else for(i=a=0;r>i;i++)if(e[i]!=t[i]){a=e[i]>t[i]?1:-1;break}return a}function r(e,t,r,n){for(var i=0;r--;)e[r]-=i,i=e[r]1;e.shift());}return function(n,i,a,o,s,u){var c,f,l,p,m,h,v,d,y,x,b,w,E,M,O,T,C,S,z,B=n.constructor,k=n.s==i.s?1:-1,I=n.c,R=i.c;if(!(I&&I[0]&&R&&R[0]))return new B(n.s&&i.s&&(I?!R||I[0]!=R[0]:R)?I&&0==I[0]||!R?0*k:k/0:NaN);for(u?(p=1,f=n.e-i.e):(u=A,p=_,f=N(n.e/p)-N(i.e/p)),S=R.length,T=I.length,y=new 
B(k),x=y.c=[],l=0;R[l]==(I[l]||0);l++);if(R[l]>(I[l]||0)&&f--,null==a?(k=a=B.precision,o=B.rounding):k=s?a+(n.e-i.e)+1:a,0>k)x.push(1),m=!0;else{if(k=k/p+2|0,l=0,1==S){for(h=0,R=R[0],k++;(T>l||h)&&k--;l++)M=h*u+(I[l]||0),x[l]=M/R|0,h=M%R|0;m=h||T>l}else{for(h=u/(R[0]+1)|0,h>1&&(R=e(R,h,u),I=e(I,h,u),S=R.length,T=I.length),O=S,b=I.slice(0,S),w=b.length;S>w;b[w++]=0);z=R.slice(),z.unshift(0),C=R[0],R[1]>=u/2&&C++;do h=0,c=t(R,b,S,w),0>c?(E=b[0],S!=w&&(E=E*u+(b[1]||0)),h=E/C|0,h>1?(h>=u&&(h=u-1),v=e(R,h,u),d=v.length,w=b.length,c=t(v,b,d,w),1==c&&(h--,r(v,d>S?z:R,d,u))):(0==h&&(c=h=1),v=R.slice()),d=v.length,w>d&&v.unshift(0),r(b,v,w,u),-1==c&&(w=b.length,c=t(R,b,S,w),1>c&&(h++,r(b,w>S?z:R,w,u))),w=b.length):0===c&&(h++,b=[0]),x[l++]=h,c&&b[0]?b[w++]=I[O]||0:(b=[I[O]],w=1);while((O++=10;k/=10,l++);y.e=l+f*p-1,g(y,s?a+y.e+1:a,o,m)}return y}}();d=function(){function e(e){var t,r,n,i=this,a="config",o=i.errors?parseInt:parseFloat;return e==r||"object"!=typeof e&&!p(i,"object expected",e,a)?i:((n=e[t="precision"])!=r&&((y=1>n||n>S)||o(n)!=n?p(i,t,n,a,0):i[t]=0|n),(n=e[t="rounding"])!=r&&((y=0>n||n>8)||o(n)!=n?p(i,t,n,a,0):i[t]=0|n),(n=e[t="toExpNeg"])!=r&&((y=-C>n||n>0)||o(n)!=n?p(i,t,n,a,0):i[t]=N(n)),(n=e[t="toExpPos"])!=r&&((y=0>n||n>C)||o(n)!=n?p(i,t,n,a,0):i[t]=N(n)),(n=e[t="minE"])!=r&&((y=-C>n||n>0)||o(n)!=n?p(i,t,n,a,0):i[t]=N(n)),(n=e[t="maxE"])!=r&&((y=0>n||n>C)||o(n)!=n?p(i,t,n,a,0):i[t]=N(n)),(n=e[t="errors"])!=r&&(n===!!n||1===n||0===n?(y=w=0,i[t]=!!n):p(i,t,n,a,1)),(n=e[t="crypto"])!=r&&(n===!!n||1===n||0===n?i[t]=!(!n||!x||"object"!=typeof x):p(i,t,n,a,1)),(n=e[t="modulo"])!=r&&((y=0>n||n>9)||o(n)!=n?p(i,t,n,a,0):i[t]=0|n),(e=e[t="format"])!=r&&("object"==typeof e?i[t]=e:p(i,"format object expected",e,a)),i)}function t(e){return new this(e).exp()}function r(e){return new this(e).ln()}function n(e,t){return new this(e).log(t)}function i(e,t,r){var n,i,a=0;for("[object Array]"==M.call(t[0])&&(t=t[0]),n=new e(t[0]);++ai;)n=t[i],n>=429e7?t[i]=x.getRandomValues(new Uint32Array(1))[0]:a[i++]=n%1e7;else if(x&&x.randomBytes){for(t=x.randomBytes(r*=4);r>i;)n=t[i]+(t[i+1]<<8)+(t[i+2]<<16)+((127&t[i+3])<<24),n>=214e7?x.randomBytes(4).copy(t,i):(a.push(n%1e7),i+=4);i=r/4}else p(o,"crypto unavailable",x,"random");if(!i)for(;r>i;)a[i++]=1e7*Math.random()|0;for(r=a[--i],e%=_,r&&e&&(n=E(10,_-e),a[i]=(r/n|0)*n);0===a[i];i--)a.pop();if(0>i)a=[r=0];else{for(r=-1;0===a[0];)a.shift(),r-=_;for(i=1,n=a[0];n>=10;)n/=10,i++;_>i&&(r-=_-i)}return s.e=r,s.c=a,s}function f(e){return new this(e).sqrt()}function l(i){function u(e,t){var r=this;if(!(r instanceof u))return p(u,"Decimal called without new",e),new u(e,t);if(r.constructor=u,e instanceof u){if(null==t)return w=0,r.s=e.s,r.e=e.e,r.c=(e=e.c)?e.slice():e,r;if(10==t)return g(new u(e),u.precision,u.rounding);e+=""}return m(u,r,e,t)}return u.precision=20,u.rounding=4,u.modulo=1,u.toExpNeg=-7,u.toExpPos=21,u.minE=-C,u.maxE=C,u.errors=!0,u.crypto=!1,u.format={decimalSeparator:".",groupSeparator:",",groupSize:3,secondaryGroupSize:0,fractionGroupSeparator:" ",fractionGroupSize:0},u.prototype=T,u.ONE=new u(1),u.ROUND_UP=0,u.ROUND_DOWN=1,u.ROUND_CEIL=2,u.ROUND_FLOOR=3,u.ROUND_HALF_UP=4,u.ROUND_HALF_DOWN=5,u.ROUND_HALF_EVEN=6,u.ROUND_HALF_CEIL=7,u.ROUND_HALF_FLOOR=8,u.EUCLID=9,u.config=e,u.constructor=l,u.exp=t,u.ln=r,u.log=n,u.max=a,u.min=o,u.pow=s,u.sqrt=f,u.random=c,null!=i&&u.config(i),u}var m=function(){var e=/^-?(\d+(\.\d*)?|\.\d+)(e[+-]?\d+)?$/i,t=String.prototype.trim||function(){return this.replace(/^\s+|\s+$/g,"")};return function(r,n,i,a){var 
o,s,u,c,f,l;if("string"!=typeof i&&(i=(c="number"==typeof i||"[object Number]"==M.call(i))&&0===i&&0>1/i?"-0":i+""),f=i,null==a&&e.test(i))n.s=45===i.charCodeAt(0)?(i=i.slice(1),-1):1;else{if(10==a)return g(new r(i),r.precision,r.rounding);if(i=t.call(i).replace(/^\+(?!-)/,""),n.s=45===i.charCodeAt(0)?(i=i.replace(/^-(?!-)/,""),-1):1,null!=a?a!=(0|a)&&r.errors||(y=!(a>=2&&65>a))?(p(r,"base",a,0,0),l=e.test(i)):(o="["+O.slice(0,a=0|a)+"]+",i=i.replace(/\.$/,"").replace(/^\./,"0."),(l=new RegExp("^"+o+"(?:\\."+o+")?$",37>a?"i":"").test(i))?(c&&(i.replace(/^0\.0*|\./,"").length>15&&p(r,0,f),c=!c),i=v(r,i,10,a,n.s)):"Infinity"!=i&&"NaN"!=i&&(p(r,"not a base "+a+" number",f),i="NaN")):l=e.test(i),!l)return n.c=n.e=null,"Infinity"!=i&&("NaN"!=i&&p(r,"not a number",f),n.s=null),w=0,n}for((s=i.indexOf("."))>-1&&(i=i.replace(".","")),(u=i.search(/e/i))>0?(0>s&&(s=u),s+=+i.slice(u+1),i=i.substring(0,u)):0>s&&(s=i.length),u=0;48===i.charCodeAt(u);u++);for(a=i.length;48===i.charCodeAt(--a););if(i=i.slice(u,a+1)){if(a=i.length,c&&a>15&&p(r,0,f),n.e=s=s-u-1,n.c=[],u=(s+1)%_,0>s&&(u+=_),a>u){for(u&&n.c.push(+i.slice(0,u)),a-=_;a>u;)n.c.push(+i.slice(u,u+=_));i=i.slice(u),u=_-i.length}else u-=a;for(;u--;i+="0");n.c.push(+i),b&&(n.e>r.maxE?n.c=n.e=null:n.eo;o++)0!=o&&(i+=", "),i+=n(e[o],r);return i+="]"}return t.format(e,r)}var i=r(6).format,a=r(24).format;t.isString=function(e){return"string"==typeof e},t.endsWith=function(e,t){var r=e.length-t.length,n=e.length;return e.substring(r,n)===t},t.format=function(e,r){return"number"==typeof e?i(e,r):e&&e.isBigNumber===!0?a(e,r):e&&e.isFraction===!0?r&&"decimal"===r.fraction?e.toString():e.s*e.n+"/"+e.d:Array.isArray(e)?n(e,r):t.isString(e)?'"'+e+'"':"function"==typeof e?e.syntax?e.syntax+"":"function":"object"==typeof e?"function"==typeof e.format?e.format(r):e.toString():String(e)}},function(e,t){t.format=function(e,r){if("function"==typeof r)return r(e);if(!e.isFinite())return e.isNaN()?"NaN":e.gt(0)?"Infinity":"-Infinity";var n="auto",i=void 0;switch(void 0!==r&&(r.notation&&(n=r.notation),"number"==typeof r?i=r:r.precision&&(i=r.precision)),n){case"fixed":return t.toFixed(e,i);case"exponential":return t.toExponential(e,i);case"auto":var a=.001,o=1e5;r&&r.exponential&&(void 0!==r.exponential.lower&&(a=r.exponential.lower),void 0!==r.exponential.upper&&(o=r.exponential.upper));({toExpNeg:e.constructor.toExpNeg,toExpPos:e.constructor.toExpPos});if(e.constructor.config({toExpNeg:Math.round(Math.log(a)/Math.LN10),toExpPos:Math.round(Math.log(o)/Math.LN10)}),e.isZero())return"0";var s,u=e.abs();return s=u.gte(a)&&u.lt(o)?e.toSignificantDigits(i).toFixed():t.toExponential(e,i),s.replace(/((\.\d*?)(0+))($|e)/,function(){var e=arguments[2],t=arguments[4];return"."!==e?e+t:t});default:throw new Error('Unknown notation "'+n+'". 
Choose "auto", "exponential", or "fixed".')}},t.toExponential=function(e,t){return void 0!==t?e.toExponential(t-1):e.toExponential()},t.toFixed=function(e,t){return e.toFixed(t||0)}},function(e,t){"use strict";function r(e,t,r,n){return n("chain",{"":function(){return new e.Chain},any:function(t){return new e.Chain(t)}})}t.name="chain",t.factory=r},function(e,t,r){e.exports=[r(27),r(28)]},function(e,t,r){"use strict";function n(e,t,r,n){function o(e,t){if(!(this instanceof o))throw new SyntaxError("Constructor must be called with the new operator");switch(arguments.length){case 0:this.re=0,this.im=0;break;case 1:var r=arguments[0];if("object"==typeof r){if("re"in r&&"im"in r){var n=new o(r.re,r.im);this.re=n.re,this.im=n.im;break}if("r"in r&&"phi"in r){var n=o.fromPolar(r.r,r.phi);this.re=n.re,this.im=n.im;break}}throw new SyntaxError("Object with the re and im or r and phi properties expected.");case 2:if(!i(e)||!i(t))throw new TypeError("Two numbers expected in Complex constructor");this.re=e,this.im=t;break;default:throw new SyntaxError("One, two or three arguments expected in Complex constructor")}}function s(){for(;" "==d||" "==d;)f()}function u(e){return e>="0"&&"9">=e||"."==e}function c(e){return e>="0"&&"9">=e}function f(){v++,d=g.charAt(v)}function l(e){v=e,d=g.charAt(v)}function p(){var e,t="";if(e=v,"+"==d?f():"-"==d&&(t+=d,f()),!u(d))return l(e),null;if("."==d){if(t+=d,f(),!c(d))return l(e),null}else{for(;c(d);)t+=d,f();"."==d&&(t+=d,f())}for(;c(d);)t+=d,f();if("E"==d||"e"==d){if(t+=d,f(),("+"==d||"-"==d)&&(t+=d,f()),!c(d))return l(e),null;for(;c(d);)t+=d,f()}return t}function m(){var e=g.charAt(v+1);if("I"==d||"i"==d)return f(),"1";if(!("+"!=d&&"-"!=d||"I"!=e&&"i"!=e)){var t="+"==d?"1":"-1";return f(),f(),t}return null}function h(){return new SyntaxError('End of string expected, got "'+g.substr(v)+'"')}o.prototype.isComplex=!0,o.prototype.type="Complex";var g,v,d;return o.parse=function(e){if(g=e,v=-1,d="","string"!=typeof g)throw new TypeError("Invalid argument in Complex.parse, string expected");f(),s();var t=p();if(t){if("I"==d||"i"==d){if(f(),s(),d)throw h();return new o(0,Number(t))}s();var r=d;if("+"!=r&&"-"!=r){if(s(),d)throw h();return new o(Number(t),0)}f(),s();var n=p();if(n){if("I"!=d&&"i"!=d)throw new SyntaxError('Character "i" expected, got "'+d+'"');f()}else if(n=m(),!n)throw new SyntaxError("Imaginary part expected");if("-"==r&&(n="-"==n[0]?"+"+n.substring(1):"-"+n),f(),s(),d)throw h();return new o(Number(t),Number(n))}if(t=m()){if(s(),d)throw h();return new o(0,Number(t))}throw new SyntaxError('Could not parse: "'+e+'" as complex number')},o.fromPolar=function(e){switch(arguments.length){case 1:var t=arguments[0];if("object"==typeof t)return o.fromPolar(t.r,t.phi);throw new TypeError("Input has to be an object with r and phi keys.");case 2:var r=arguments[0],n=arguments[1];if(i(r)){if(n&&n.isUnit&&n.hasBase("ANGLE")&&(n=n.toNumber("rad")),i(n))return new o(r*Math.cos(n),r*Math.sin(n));throw new TypeError("Phi is not a number nor an angle unit.")}throw new TypeError("Radius r is not a number.");default:throw new SyntaxError("Wrong number of arguments in function fromPolar")}},o.prototype.toPolar=function(){return{r:Math.sqrt(this.re*this.re+this.im*this.im),phi:Math.atan2(this.im,this.re)}},o.prototype.clone=function(){return new o(this.re,this.im)},o.prototype.equals=function(e){return this.re===e.re&&this.im===e.im},o.prototype.format=function(e){var t="",r=this.im,n=this.re,o=a(this.re,e),s=a(this.im,e),u=i(e)?e:e?e.precision:null;if(null!==u){var 
c=Math.pow(10,-u);Math.abs(n/r)0?1==r?o+" + i":o+" + "+s+"i":-1==r?o+" - i":o+" - "+s.substring(1)+"i"},o.prototype.toString=function(){return this.format()},o.prototype.toJSON=function(){return{mathjs:"Complex",re:this.re,im:this.im}},o.fromJSON=function(e){return new o(e)},o.prototype.valueOf=o.prototype.toString,o}var i=r(6).isNumber,a=r(6).format;t.name="Complex",t.path="type",t.factory=n},function(e,t,r){"use strict";function n(e,t,n,a){var o=r(29),s=a("complex",{"":function(){return new e.Complex(0,0)},number:function(t){return new e.Complex(t,0)},"number, number":function(t,r){return new e.Complex(t,r)},"BigNumber, BigNumber":function(t,r){return new e.Complex(t.toNumber(),r.toNumber())},Complex:function(e){return e.clone()},string:function(t){return e.Complex.parse(t)},Object:function(t){if("re"in t&&"im"in t)return new e.Complex(t.re,t.im);if("r"in t&&"phi"in t)return e.Complex.fromPolar(t.r,t.phi);throw new Error("Expected object with either properties re and im, or properties r and phi.")},"Array | Matrix":function(e){return i(e,s)}});return s.toTex={0:"0",1:"\\left(${args[0]}\\right)",2:"\\left(\\left(${args[0]}\\right)+"+o.symbols.i+"\\cdot\\left(${args[1]}\\right)\\right)"},s}var i=r(19);t.name="complex",t.factory=n},function(e,t){"use strict";t.symbols={Alpha:"A",alpha:"\\alpha",Beta:"B",beta:"\\beta",Gamma:"\\Gamma",gamma:"\\gamma",Delta:"\\Delta",delta:"\\delta",Epsilon:"E",epsilon:"\\epsilon",varepsilon:"\\varepsilon",Zeta:"Z",zeta:"\\zeta",Eta:"H",eta:"\\eta",Theta:"\\Theta",theta:"\\theta",vartheta:"\\vartheta",Iota:"I",iota:"\\iota",Kappa:"K",kappa:"\\kappa",varkappa:"\\varkappa",Lambda:"\\Lambda",lambda:"\\lambda",Mu:"M",mu:"\\mu",Nu:"N",nu:"\\nu",Xi:"\\Xi",xi:"\\xi",Omicron:"O",omicron:"o",Pi:"\\Pi",pi:"\\pi",varpi:"\\varpi",Rho:"P",rho:"\\rho",varrho:"\\varrho",Sigma:"\\Sigma",sigma:"\\sigma",varsigma:"\\varsigma",Tau:"T",tau:"\\tau",Upsilon:"\\Upsilon",upsilon:"\\upsilon",Phi:"\\Phi",phi:"\\phi",varphi:"\\varphi",Chi:"X",chi:"\\chi",Psi:"\\Psi",psi:"\\psi",Omega:"\\Omega",omega:"\\omega","true":"\\mathrm{True}","false":"\\mathrm{False}",i:"i",inf:"\\infty",Inf:"\\infty",infinity:"\\infty",Infinity:"\\infty",oo:"\\infty",lim:"\\lim",undefined:"\\mathbf{?}"},t.operators={transpose:"^\\top",factorial:"!",pow:"^",dotPow:".^\\wedge",unaryPlus:"+",unaryMinus:"-",bitNot:"~",not:"\\neg",multiply:"\\cdot",divide:"\\frac",dotMultiply:".\\cdot",dotDivide:".:",mod:"\\mod",add:"+",subtract:"-",to:"\\rightarrow",leftShift:"<<",rightArithShift:">>",rightLogShift:">>>",equal:"=",unequal:"\\neq",smaller:"<",larger:">",smallerEq:"\\leq",largerEq:"\\geq",bitAnd:"\\&",bitXor:"\\underline{|}",bitOr:"|",and:"\\wedge",xor:"\\veebar",or:"\\vee"},t.defaultTemplate="\\mathrm{${name}}\\left(${args}\\right)";var r={deg:"^\\circ"};t.toSymbol=function(e,n){if(n="undefined"==typeof n?!1:n)return r.hasOwnProperty(e)?r[e]:"\\mathrm{"+e+"}";if(t.symbols.hasOwnProperty(e))return t.symbols[e];if(-1!==e.indexOf("_")){var i=e.indexOf("_");return t.toSymbol(e.substring(0,i))+"_{"+t.toSymbol(e.substring(i+1))+"}"}return e}},function(e,t,r){e.exports=[r(31),r(35)]},function(e,t,r){function n(e,t,r,n){return i}var i=r(32);i.prototype.type="Fraction",i.prototype.isFraction=!0,i.prototype.toJSON=function(){return{mathjs:"Fraction",n:this.s*this.n,d:this.d}},i.fromJSON=function(e){return new i(e)},t.name="Fraction",t.path="type",t.factory=n},function(e,t,r){var n,i;(function(e){/** - * @license Fraction.js v3.0.0 09/09/2015 - * http://www.xarg.org/2014/03/precise-calculations-in-javascript/ - * - * Copyright 
(c) 2015, Robert Eisele (robert@xarg.org) - * Dual licensed under the MIT or GPL Version 2 licenses. - **/ -!function(a){"use strict";function o(e,t){return isNaN(e=parseInt(e,10))&&s(),e*t}function s(){throw"Invalid Param"}function u(e,t){return this instanceof u?(l(e,t),e=u.REDUCE?g(f.d,f.n):1,this.s=f.s,this.n=f.n/e,void(this.d=f.d/e)):new u(e,t)}var c=2e3,f={s:1,n:0,d:1},l=function(e,t){var r,n=0,i=1,a=1,u=0,c=0,l=0,p=1,m=1,h=0,g=1,v=1,d=1,y=1e7;if(void 0===e||null===e);else if(void 0!==t)n=e,i=t,a=n*i;else switch(typeof e){case"object":"d"in e&&"n"in e?(n=e.n,i=e.d,"s"in e&&(n*=e.s)):0 in e?(n=e[0],1 in e&&(i=e[1])):s(),a=n*i;break;case"number":if(0>e&&(a=e,e=-e),e%1===0)n=e;else if(e>0){for(e>=1&&(m=Math.pow(10,Math.floor(1+Math.log(e)/Math.LN10)),e/=m);y>=g&&y>=d;){if(r=(h+v)/(g+d),e===r){y>=g+d?(n=h+v,i=g+d):d>g?(n=v,i=d):(n=h,i=g);break}e>r?(h+=v,g+=d):(v+=h,d+=g),g>y?(n=v,i=d):(n=h,i=g)}n*=m}break;case"string":if(g=e.match(/\d+|./g),"-"===g[h]?(a=-1,h++):"+"===g[h]&&h++,g.length===h+1?c=o(g[h++],a):"."===g[h+1]||"."===g[h]?("."!==g[h]&&(u=o(g[h++],a)),h++,(h+1===g.length||"("===g[h+1]&&")"===g[h+3]||"'"===g[h+1]&&"'"===g[h+3])&&(c=o(g[h],a),p=Math.pow(10,g[h].length),h++),("("===g[h]&&")"===g[h+2]||"'"===g[h]&&"'"===g[h+2])&&(l=o(g[h+1],a),m=Math.pow(10,g[h+1].length)-1,h+=3)):"/"===g[h+1]||":"===g[h+1]?(c=o(g[h],a),p=o(g[h+2],1),h+=3):"/"===g[h+3]&&" "===g[h+1]&&(u=o(g[h],a),c=o(g[h+2],a),p=o(g[h+4],1),h+=5),g.length<=h){a=n=l+m*(u*p+c),i=p*m;break}default:s()}if(!i)throw"DIV/0";f.s=0>a?-1:1,f.n=Math.abs(n),f.d=Math.abs(i)},p=function(e,t,r){for(var n=1;t>0;e=e*e%r,t>>=1)1&t&&(n=n*e%r);return n},m=function(e,t){for(;t%2===0;t/=2);for(;t%5===0;t/=5);if(1===t)return 0;for(var r=10%t,n=1;1!==r;n++)if(r=10*r%t,n>c)return 0;return n},h=function(e,t,r){for(var n=1,i=p(10,r,t),a=0;300>a;a++){if(n===i)return a;n=10*n%t,i=10*i%t}return 0},g=function(e,t){if(!e)return t;if(!t)return e;for(;;){if(e%=t,!e)return t;if(t%=e,!t)return e}};u.REDUCE=1,u.prototype={s:1,n:0,d:1,abs:function(){return new u(this.n,this.d)},neg:function(){return new u(-this.s*this.n,this.d)},add:function(e,t){return l(e,t),new u(this.s*this.n*f.d+f.s*this.d*f.n,this.d*f.d)},sub:function(e,t){return l(e,t),new u(this.s*this.n*f.d-f.s*this.d*f.n,this.d*f.d)},mul:function(e,t){return l(e,t),new u(this.s*f.s*this.n*f.n,this.d*f.d)},div:function(e,t){return l(e,t),new u(this.s*f.s*this.n*f.d,this.d*f.n)},clone:function(){return new u(this)},mod:function(e,t){return void 0===e?new u(this.s*this.n%this.d,1):(l(e,t),0===f.n*this.d&&u(0,0),new u(this.s*f.d*this.n%(f.n*this.d),f.d*this.d))},gcd:function(e,t){return l(e,t),new u(g(f.n,this.n),f.d*this.d/g(f.d,this.d))},lcm:function(e,t){return l(e,t),new u(f.n*this.n/g(f.n,this.n),g(f.d,this.d))},ceil:function(){return new u(Math.ceil(this.s*this.n/this.d),1)},floor:function(){return new u(Math.floor(this.s*this.n/this.d),1)},round:function(){return new u(Math.round(this.s*this.n/this.d),1)},inverse:function(){return new u(this.s*this.d,this.n)},pow:function(e){var t=this.d,r=this.n;return 0>e?(this.d=Math.pow(r,-e),this.n=Math.pow(t,-e)):(this.d=Math.pow(t,e),this.n=Math.pow(r,e)),0===e%2&&(this.s=1),this},equals:function(e,t){return l(e,t),this.s*this.n*f.d===f.s*f.n*this.d},compare:function(e,t){l(e,t);var r=this.s*this.n*f.d-f.s*f.n*this.d;return(r>0)-(0>r)},divisible:function(e,t){return l(e,t),!(!(f.n*this.d)||this.n*f.d%(f.n*this.d))},valueOf:function(){return this.s*this.n/this.d},toFraction:function(e){var t,r="",n=this.n,i=this.d;return 
this.s<0&&(r+="-"),1===i?r+=n:(e&&(t=Math.floor(n/i))>0&&(r+=t,r+=" ",n%=i),r+=n,r+="/",r+=i),r},toLatex:function(e){var t,r="",n=this.n,i=this.d;return this.s<0&&(r+="-"),1===i?r+=n:(e&&(t=Math.floor(n/i))>0&&(r+=t,n%=i),r+="\\frac{",r+=n,r+="}{",r+=i,r+="}"),r},toString:function(){var e,t=this.n,r=this.d;u.REDUCE||(e=g(t,r),t/=e,r/=e);for(var n=String(t).split(""),i=0,a=[~this.s?"":"-","",""],o="",s=m(t,r),c=h(t,r,s),f=-1,l=1,p=10+s+c+n.length,v=0;p>v;v++,i*=10){if(v0)if(f===c)a[l]+=o+"(",o="";else if(f===s+c){a[l]+=o+")";break}i>=r?(a[l]+=o+(i/r|0),o="",i%=r):l>1?o+="0":a[l]&&(a[l]+="0")}return a[0]+=a[1]||"0",a[2]?a[0]+"."+a[2]:a[0]}},r(34).amd?(n=[],i=function(){return u}.apply(t,n),!(void 0!==i&&(e.exports=i))):e.exports=u}(this)}).call(t,r(33)(e))},function(e,t){e.exports=function(e){return e.webpackPolyfill||(e.deprecate=function(){},e.paths=[],e.children=[],e.webpackPolyfill=1),e}},function(e,t){e.exports=function(){throw new Error("define cannot be used indirect")}},function(e,t,r){"use strict";function n(e,t,r,n){var a=n("fraction",{number:function(t){if(!isFinite(t)||isNaN(t))throw new Error(t+" cannot be represented as a fraction");return new e.Fraction(t)},string:function(t){return new e.Fraction(t)},"number, number":function(t,r){return new e.Fraction(t,r)},Fraction:function(e){return e},Object:function(t){return new e.Fraction(t)},"Array | Matrix":function(e){return i(e,a)}});return a}var i=r(19);t.name="fraction",t.factory=n},function(e,t,r){e.exports=[r(37),r(45),r(46),r(48),r(57),r(63),r(64),r(65),r(66),r(50),r(67)]},function(e,t,r){"use strict";function n(e,t,r,n){function i(){if(!(this instanceof i))throw new SyntaxError("Constructor must be called with the new operator")}return i.prototype.type="Matrix",i.prototype.isMatrix=!0,i.storage=function(e){if(!o(e))throw new TypeError("format must be a string value");var t=i._storage[e];if(!t)throw new SyntaxError("Unsupported matrix storage format: "+e);return t},i._storage={},i.prototype.storage=function(){throw new Error("Cannot invoke storage on a Matrix interface")},i.prototype.datatype=function(){throw new Error("Cannot invoke datatype on a Matrix interface")},i.prototype.create=function(e,t){throw new Error("Cannot invoke create on a Matrix interface")},i.prototype.subset=function(e,t,r){throw new Error("Cannot invoke subset on a Matrix interface")},i.prototype.get=function(e){throw new Error("Cannot invoke get on a Matrix interface")},i.prototype.set=function(e,t,r){throw new Error("Cannot invoke set on a Matrix interface")},i.prototype.resize=function(e,t){throw new Error("Cannot invoke resize on a Matrix interface")},i.prototype.clone=function(){throw new Error("Cannot invoke clone on a Matrix interface")},i.prototype.size=function(){throw new Error("Cannot invoke size on a Matrix interface")},i.prototype.map=function(e,t){throw new Error("Cannot invoke map on a Matrix interface")},i.prototype.forEach=function(e){throw new Error("Cannot invoke forEach on a Matrix interface")},i.prototype.toArray=function(){throw new Error("Cannot invoke toArray on a Matrix interface")},i.prototype.valueOf=function(){throw new Error("Cannot invoke valueOf on a Matrix interface")},i.prototype.format=function(e){throw new Error("Cannot invoke format on a Matrix interface")},i.prototype.toString=function(){throw new Error("Cannot invoke toString on a Matrix interface")},i}var i=r(38),a=i.string,o=a.isString;t.name="Matrix",t.path="type",t.factory=n},function(e,t,r){"use 
strict";t.array=r(39),t["boolean"]=r(43),t["function"]=r(44),t.number=r(6),t.object=r(3),t.string=r(23),t.types=r(40),t.emitter=r(8)},function(e,t,r){"use strict";function n(e,t,r){var i,a=e.length;if(a!=t[r])throw new f(a,t[r]);if(ri;i++){var s=e[i];if(!Array.isArray(s))throw new f(t.length-1,t.length,"<");n(e[i],t,o)}}else for(i=0;a>i;i++)if(Array.isArray(e[i]))throw new f(t.length+1,t.length,">")}function i(e,r,n,a){var o,s,u=e.length,f=r[n],l=Math.min(u,f);if(e.length=f,no;o++)s=e[o],Array.isArray(s)||(s=[s],e[o]=s),i(s,r,p,a);for(o=l;f>o;o++)s=[],e[o]=s,i(s,r,p,a)}else{for(o=0;l>o;o++)for(;Array.isArray(e[o]);)e[o]=e[o][0];if(a!==t.UNINITIALIZED)for(o=l;f>o;o++)e[o]=c.clone(a)}}function a(e,t,r){var n,i;if(t>r){var o=r+1;for(n=0,i=e.length;i>n;n++)e[n]=a(e[n],t,o)}else for(;Array.isArray(e);)e=e[0];return e}function o(e,t,r){var n,i;if(Array.isArray(e)){var a=r+1;for(n=0,i=e.length;i>n;n++)e[n]=o(e[n],t,a)}else for(var s=r;t>s;s++)e=[e];return e}var s=r(6),u=r(23),c=r(3),f=(r(40),r(41)),l=r(42);t.size=function(e){for(var t=[];Array.isArray(e);)t.push(e.length),e=e[0];return t},t.validate=function(e,t){var r=0==t.length;if(r){if(Array.isArray(e))throw new f(e.length,0)}else n(e,t,0)},t.validateIndex=function(e,t){if(!s.isNumber(e)||!s.isInteger(e))throw new TypeError("Index must be an integer (value: "+e+")");if(0>e)throw new l(e);if(void 0!==t&&e>=t)throw new l(e,t)},t.UNINITIALIZED={},t.resize=function(e,t,r){if(!Array.isArray(e)||!Array.isArray(t))throw new TypeError("Array expected");if(0===t.length)throw new Error("Resizing to scalar is not supported");t.forEach(function(e){if(!s.isNumber(e)||!s.isInteger(e)||0>e)throw new TypeError("Invalid size, must contain positive integers (size: "+u.format(t)+")")});var n=void 0!==r?r:0;return i(e,t,0,n),e},t.squeeze=function(e,r){for(var n=r||t.size(e);Array.isArray(e)&&1===e.length;)e=e[0],n.shift();for(var i=n.length;1===n[i-1];)i--;return is;s++)e=[e],a.unshift(1);for(e=o(e,r,0);a.length=this.max?this.message="Index out of range ("+this.index+" > "+(this.max-1)+")":this.message="Index out of range ("+this.index+")",this.stack=(new Error).stack}r.prototype=new RangeError,r.prototype.constructor=RangeError,r.prototype.name="IndexError",r.prototype.isIndexError=!0,e.exports=r},function(e,t){"use strict";t.isBoolean=function(e){return"boolean"==typeof e}},function(e,t){t.memoize=function(e,t){return function r(){"object"!=typeof r.cache&&(r.cache={});for(var n=[],i=0;is;s++)h(i[s],e._size[s]),h(o[s],e._size[s]);return new g(d(e._data,t,n.length,0),e._datatype)}function d(e,t,r,n){var i=n==r-1,a=t.dimension(n);return i?a.map(function(t){return e[t]}).valueOf():a.map(function(i){var a=e[i];return d(a,t,r,n+1)}).valueOf()}function y(e,t,r,n){if(!t||t.isIndex!==!0)throw new TypeError("Invalid index");var i,o=t.size(),c=t.isScalar();if(r&&r.isMatrix===!0?(i=r.size(),r=r.valueOf()):i=s.size(r),c){if(0!==i.length)throw new TypeError("Scalar expected");e.set(t.min(),r,n)}else{if(o.length");var p=t.max().map(function(e){return e+1});b(e,p,n);var m=o.length,h=0;x(e._data,t,r,m,h)}return e}function x(e,t,r,n,i){var a=i==n-1,o=t.dimension(i);a?o.forEach(function(t,n){h(t),e[t]=r[n[0]]}):o.forEach(function(a,o){h(a),x(e[a],t,r[o[0]],n,i+1)})}function b(e,t,r){for(var n=u.clone(e._size),i=!1;n.lengtha;a++)t[a]>n[a]&&(n[a]=t[a],i=!0);i&&E(e,n,r)}function w(e){for(var t=0,r=e.length;r>t;t++){var n=e[t];f(n)?e[t]=w(n):n&&n.isMatrix===!0&&(e[t]=w(n.valueOf()))}return e}var N=n(r(37));g.prototype=new 
N,g.prototype.type="DenseMatrix",g.prototype.isDenseMatrix=!0,g.prototype.storage=function(){return"dense"},g.prototype.datatype=function(){return this._datatype},g.prototype.create=function(e,t){return new g(e,t)},g.prototype.subset=function(e,t,r){switch(arguments.length){case 1:return v(this,e);case 2:case 3:return y(this,e,t,r);default:throw new SyntaxError("Wrong number of arguments")}},g.prototype.get=function(e){if(!f(e))throw new TypeError("Array expected");if(e.length!=this._size.length)throw new a(e.length,this._size.length);for(var t=0;tn;n++){var o=e[n];h(o,r.length),r=r[o]}return u.clone(r)},g.prototype.set=function(e,t,r){if(!f(e))throw new TypeError("Array expected");if(e.lengthn;n++)o=e[n],h(o,u.length),u=u[o];return o=e[e.length-1],h(o,u.length),u[o]=t,this},g.prototype.resize=function(e,t,r){if(!f(e))throw new TypeError("Array expected");var n=r?this.clone():this;return E(n,e,t)};var E=function(e,t,r){if(0===t.length){for(var n=e._data;f(n);)n=n[0];return u.clone(n)}return e._size=u.clone(t),e._data=s.resize(e._data,e._size,r),e};return g.prototype.clone=function(){var e=new g({data:u.clone(this._data),size:u.clone(this._size),datatype:this._datatype});return e},g.prototype.size=function(){return this._size},g.prototype.map=function(e){var t=this,r=function(n,i){return f(n)?n.map(function(e,t){return r(e,i.concat(t))}):e(n,i,t)};return new g({data:r(this._data,[]),size:u.clone(this._size),datatype:this._datatype})},g.prototype.forEach=function(e){var t=this,r=function(n,i){f(n)?n.forEach(function(e,t){r(e,i.concat(t))}):e(n,i,t)};r(this._data,[])},g.prototype.toArray=function(){return u.clone(this._data)},g.prototype.valueOf=function(){return this._data},g.prototype.format=function(e){return o.format(this._data,e)},g.prototype.toString=function(){return o.format(this._data)},g.prototype.toJSON=function(){return{mathjs:"DenseMatrix",data:this._data,size:this._size,datatype:this._datatype}},g.prototype.diagonal=function(e){if(e){if(e.isBigNumber===!0&&(e=e.toNumber()),!l(e)||!p(e))throw new TypeError("The parameter k must be an integer number")}else e=0;for(var t=e>0?e:0,r=0>e?-e:0,n=this._size[0],i=this._size[1],a=Math.min(n-r,i-t),o=[],s=0;a>s;s++)o[s]=u.clone(this._data[s+r][s+t]);return new g({data:o,size:[a],datatype:this._datatype})},g.diagonal=function(t,r,n,i,a){if(!f(t))throw new TypeError("Array expected, size parameter");if(2!==t.length)throw new Error("Only two dimensions matrix are supported");if(t=t.map(function(e){if(e&&e.isBigNumber===!0&&(e=e.toNumber()),!l(e)||!p(e)||1>e)throw new Error("Size values must be positive integers");return e}),n){if(n&&n.isBigNumber===!0&&(n=n.toNumber()),!l(n)||!p(n))throw new TypeError("The parameter k must be an integer number")}else n=0;i&&m(a)&&(i=c.convert(i,a));var o,u=n>0?n:0,h=0>n?-n:0,v=t[0],d=t[1],y=Math.min(v-h,d-u);if(f(r)){if(r.length!==y)throw new Error("Invalid value array length");o=function(e){return r[e]}}else if(r&&r.isMatrix===!0){var x=r.size();if(1!==x.length||x[0]!==y)throw new Error("Invalid matrix length");o=function(e){return r.get([e])}}else o=function(){return r};i||(i=o(0)&&o(0).isBigNumber===!0?new e.BigNumber(0):0);var b=[];if(t.length>0){b=s.resize(b,t,i);for(var w=0;y>w;w++)b[w+h][w+u]=o(w)}return new g({data:b,size:[v,d]})},g.fromJSON=function(e){return new g(e)},g.prototype.swapRows=function(e,t){if(!(l(e)&&p(e)&&l(t)&&p(t)))throw new Error("Row index must be positive integers");if(2!==this._size.length)throw new Error("Only two dimensional matrix is supported");return 
h(e,this._size[0]),h(t,this._size[0]),g._swapRows(e,t,this._data),this},g._swapRows=function(e,t,r){var n=r[e];r[e]=r[t],r[t]=n},e.Matrix._storage.dense=g,e.Matrix._storage["default"]=g,g}var i=r(38),a=r(41),o=i.string,s=i.array,u=i.object,c=i.number,f=Array.isArray,l=c.isNumber,p=c.isInteger,m=o.isString,h=s.validateIndex;t.name="DenseMatrix",t.path="type",t.factory=n,t.lazy=!1},function(e,t,r){"use strict";function n(e,t,n,g){function v(e,t){if(!(this instanceof v))throw new SyntaxError("Constructor must be called with the new operator");if(t&&!m(t))throw new Error("Invalid datatype: "+t);if(e&&e.isMatrix===!0)x(this,e,t);else if(e&&f(e.index)&&f(e.ptr)&&f(e.size))this._values=e.values,this._index=e.index,this._ptr=e.ptr,this._size=e.size,this._datatype=t||e.datatype;else if(f(e))b(this,e,t);else{if(e)throw new TypeError("Unsupported type of data ("+i.types.type(e)+")");this._values=[],this._index=[],this._ptr=[0],this._size=[0,0],this._datatype=t}}var d=n(r(37)),y=n(r(47)),x=function(e,t,r){"SparseMatrix"===t.type?(e._values=t._values?s.clone(t._values):void 0,e._index=s.clone(t._index),e._ptr=s.clone(t._ptr),e._size=s.clone(t._size),e._datatype=r||t._datatype):b(e,t.valueOf(),r||t._datatype)},b=function(e,t,r){e._values=[],e._index=[],e._ptr=[],e._datatype=r;var n=t.length,i=0,a=y,o=0;if(m(r)&&(a=g.find(y,[r,r])||y,o=g.convert(0,r)),n>0){var s=0;do{e._ptr.push(e._index.length);for(var u=0;n>u;u++){var c=t[u];if(f(c)){if(0===s&&ii&&(i=1),a(c,o)||(e._values.push(c),e._index.push(u))}s++}while(i>s)}e._ptr.push(e._index.length),e._size=[n,i]};v.prototype=new d,v.prototype.type="SparseMatrix",v.prototype.isSparseMatrix=!0,v.prototype.storage=function(){return"sparse"},v.prototype.datatype=function(){return this._datatype},v.prototype.create=function(e,t){return new v(e,t)},v.prototype.density=function(){var e=this._size[0],t=this._size[1];return 0!==e&&0!==t?this._index.length/(e*t):0},v.prototype.subset=function(e,t,r){if(!this._values)throw new Error("Cannot invoke subset on a Pattern only matrix");switch(arguments.length){case 1:return w(this,e);case 2:case 3:return N(this,e,t,r);default:throw new SyntaxError("Wrong number of arguments")}};var w=function(e,t){if(!t||t.isIndex!==!0)throw new TypeError("Invalid index");var r=t.isScalar();if(r)return e.get(t.min());var n=t.size();if(n.length!=e._size.length)throw new a(n.length,e._size.length);var i,o,s,u,c=t.min(),f=t.max();for(i=0,o=e._size.length;o>i;i++)h(c[i],e._size[i]),h(f[i],e._size[i]);var l=e._values,p=e._index,m=e._ptr,g=t.dimension(0),d=t.dimension(1),y=[],x=[];g.forEach(function(e,t){x[e]=t[0],y[e]=!0});var b=l?[]:void 0,w=[],N=[];return d.forEach(function(e){for(N.push(w.length),s=m[e],u=m[e+1];u>s;s++)i=p[s],y[i]===!0&&(w.push(x[i]),b&&b.push(l[s]))}),N.push(w.length),new v({values:b,index:w,ptr:N,size:n,datatype:e._datatype})},N=function(e,t,r,n){if(!t||t.isIndex!==!0)throw new TypeError("Invalid index");var i,u=t.size(),c=t.isScalar();if(r&&r.isMatrix===!0?(i=r.size(),r=r.toArray()):i=o.size(r),c){if(0!==i.length)throw new TypeError("Scalar expected");e.set(t.min(),r,n)}else{if(1!==u.length&&2!==u.length)throw new a(u.length,e._size.length,"<");if(i.length");for(var p=t.min()[0],m=t.min()[1],h=i[0],g=i[1],v=0;h>v;v++)for(var d=0;g>d;d++){var y=r[v][d];e.set([v+p,d+m],y,n)}}return e};v.prototype.get=function(e){if(!f(e))throw new TypeError("Array expected");if(e.length!=this._size.length)throw new a(e.length,this._size.length);if(!this._values)throw new Error("Cannot invoke get on a Pattern only matrix");var 
t=e[0],r=e[1];h(t,this._size[0]),h(r,this._size[1]);var n=E(t,this._ptr[r],this._ptr[r+1],this._index);return no-1||i>s-1)&&(_(this,Math.max(n+1,o),Math.max(i+1,s),r),o=this._size[0],s=this._size[1]),h(n,o),h(i,s);var l=E(n,this._ptr[i],this._ptr[i+1],this._index);return li;i++)if(n[i]===e)return i;return t},M=function(e,t,r,n,i){r.splice(e,1),n.splice(e,1);for(var a=t+1;at)throw new TypeError("Invalid size, must contain positive integers (size: "+u.format(e)+")")});var n=r?this.clone():this;return _(n,e[0],e[1],t)};var _=function(e,t,r,n){var i=n||0,a=y,o=0;m(e._datatype)&&(a=g.find(y,[e._datatype,e._datatype])||y,o=g.convert(0,e._datatype),i=g.convert(i,e._datatype));var s,u,c,f=!a(i,o),l=e._size[0],p=e._size[1];if(r>p){for(u=p;r>u;u++)if(e._ptr[u]=e._values.length,f)for(s=0;l>s;s++)e._values.push(i),e._index.push(s);e._ptr[r]=e._values.length}else p>r&&(e._ptr.splice(r+1,p-r),e._values.splice(e._ptr[r],e._values.length),e._index.splice(e._ptr[r],e._index.length));if(p=r,t>l){if(f){var h=0;for(u=0;p>u;u++){e._ptr[u]=e._ptr[u]+h,c=e._ptr[u+1]+h;var v=0;for(s=l;t>s;s++,v++)e._values.splice(c+v,0,i),e._index.splice(c+v,0,s),h++}e._ptr[p]=e._values.length}}else if(l>t){var d=0;for(u=0;p>u;u++){e._ptr[u]=e._ptr[u]-d;var x=e._ptr[u],b=e._ptr[u+1]-d;for(c=x;b>c;c++)s=e._index[c],s>t-1&&(e._values.splice(c,1),e._index.splice(c,1),d++)}e._ptr[u]=e._values.length}return e._size[0]=t,e._size[1]=r,e};v.prototype.clone=function(){var e=new v({values:this._values?s.clone(this._values):void 0,index:s.clone(this._index),ptr:s.clone(this._ptr),size:s.clone(this._size),datatype:this._datatype});return e},v.prototype.size=function(){return s.clone(this._size)},v.prototype.map=function(e,t){if(!this._values)throw new Error("Cannot invoke map on a Pattern only matrix");var r=this,n=this._size[0],i=this._size[1],a=function(t,n,i){return e(t,[n,i],r)};return O(this,0,n-1,0,i-1,a,t)};var O=function(e,t,r,n,i,a,o){var s=[],u=[],c=[],f=y,l=0;m(e._datatype)&&(f=g.find(y,[e._datatype,e._datatype])||y,l=g.convert(0,e._datatype));for(var p=function(e,t,r){e=a(e,t,r),f(e,l)||(s.push(e),u.push(t))},h=n;i>=h;h++){c.push(s.length);for(var d=e._ptr[h],x=e._ptr[h+1],b=t,w=d;x>w;w++){var N=e._index[w];if(N>=t&&r>=N){if(!o)for(var E=b;N>E;E++)p(0,E-t,h-n);p(e._values[w],N-t,h-n)}b=N+1}if(!o)for(var M=b;r>=M;M++)p(0,M-t,h-n)}return c.push(s.length),new v({values:s,index:u,ptr:c,size:[r-t+1,i-n+1]})};v.prototype.forEach=function(e,t){if(!this._values)throw new Error("Cannot invoke forEach on a Pattern only matrix");for(var r=this,n=this._size[0],i=this._size[1],a=0;i>a;a++){for(var o=this._ptr[a],s=this._ptr[a+1],u=0,c=o;s>c;c++){var f=this._index[c];if(!t)for(var l=u;f>l;l++)e(0,[l,a],r);e(this._values[c],[f,a],r),u=f+1}if(!t)for(var p=u;n>p;p++)e(0,[p,a],r)}},v.prototype.toArray=function(){return T(this._values,this._index,this._ptr,this._size,!0)},v.prototype.valueOf=function(){return T(this._values,this._index,this._ptr,this._size,!1)};var T=function(e,t,r,n,i){var a,o,u=n[0],c=n[1],f=[];for(a=0;u>a;a++)for(f[a]=[],o=0;c>o;o++)f[a][o]=0;for(o=0;c>o;o++)for(var l=r[o],p=r[o+1],m=l;p>m;m++)a=t[m],f[a][o]=e?i?s.clone(e[m]):e[m]:1;return f};return v.prototype.format=function(e){for(var t=this._size[0],r=this._size[1],n=this.density(),i="Sparse Matrix ["+u.format(t,e)+" x "+u.format(r,e)+"] density: "+u.format(n,e)+"\n",a=0;r>a;a++)for(var o=this._ptr[a],s=this._ptr[a+1],c=o;s>c;c++){var f=this._index[c];i+="\n ("+u.format(f,e)+", "+u.format(a,e)+") ==> "+(this._values?u.format(this._values[c],e):"X")}return 
i},v.prototype.toString=function(){return u.format(this.toArray())},v.prototype.toJSON=function(){return{mathjs:"SparseMatrix",values:this._values,index:this._index,ptr:this._ptr,size:this._size,datatype:this._datatype}},v.prototype.diagonal=function(e){if(e){if(e.isBigNumber===!0&&(e=e.toNumber()),!l(e)||!p(e))throw new TypeError("The parameter k must be an integer number")}else e=0;var t=e>0?e:0,r=0>e?-e:0,n=this._size[0],i=this._size[1],a=Math.min(n-r,i-t),o=[],u=[],c=[];c[0]=0;for(var f=t;i>f&&o.lengthg;g++){var d=this._index[g];if(d===f-t+r){o.push(s.clone(this._values[g])),u[o.length-1]=d-r;break}}return c.push(o.length),new v({values:o,index:u,ptr:c,size:[a,1]})},v.fromJSON=function(e){return new v(e)},v.diagonal=function(e,t,r,n,i){if(!f(e))throw new TypeError("Array expected, size parameter");if(2!==e.length)throw new Error("Only two dimensions matrix are supported");if(e=e.map(function(e){if(e&&e.isBigNumber===!0&&(e=e.toNumber()),!l(e)||!p(e)||1>e)throw new Error("Size values must be positive integers");return e}),r){if(r.isBigNumber===!0&&(r=r.toNumber()),!l(r)||!p(r))throw new TypeError("The parameter k must be an integer number")}else r=0;var a=y,o=0;m(i)&&(a=g.find(y,[i,i])||y,o=g.convert(0,i));var s,u=r>0?r:0,c=0>r?-r:0,h=e[0],d=e[1],x=Math.min(h-c,d-u);if(f(t)){if(t.length!==x)throw new Error("Invalid value array length");s=function(e){return t[e]}}else if(t&&t.isMatrix===!0){var b=t.size();if(1!==b.length||b[0]!==x)throw new Error("Invalid matrix length");s=function(e){return t.get([e])}}else s=function(){return t};for(var w=[],N=[],E=[],M=0;d>M;M++){E.push(w.length);var A=M-u;if(A>=0&&x>A){var _=s(A);a(_,o)||(N.push(A+c),w.push(_))}}return E.push(w.length),new v({values:w,index:N,ptr:E,size:[h,d]})},v.prototype.swapRows=function(e,t){if(!(l(e)&&p(e)&&l(t)&&p(t)))throw new Error("Row index must be positive integers");if(2!==this._size.length)throw new Error("Only two dimensional matrix is supported");return h(e,this._size[0]),h(t,this._size[0]),v._swapRows(e,t,this._size[1],this._values,this._index,this._ptr),this},v._forEachRow=function(e,t,r,n,i){for(var a=n[e],o=n[e+1],s=a;o>s;s++)i(r[s],t[s])},v._swapRows=function(e,t,r,n,i,a){for(var o=0;r>o;o++){var s=a[o],u=a[o+1],c=E(e,s,u,i),f=E(t,s,u,i);if(u>c&&u>f&&i[c]===e&&i[f]===t){if(n){var l=n[c];n[c]=n[f],n[f]=l}}else if(u>c&&i[c]===e&&(f>=u||i[f]!==t)){var p=n?n[c]:void 0;i.splice(f,0,t),n&&n.splice(f,0,p),i.splice(c>=f?c+1:c,1),n&&n.splice(c>=f?c+1:c,1)}else if(u>f&&i[f]===t&&(c>=u||i[c]!==e)){var m=n?n[f]:void 0;i.splice(c,0,e),n&&n.splice(c,0,m),i.splice(f>=c?f+1:f,1),n&&n.splice(f>=c?f+1:f,1)}}},e.Matrix._storage.sparse=v,v}var i=r(38),a=r(41),o=i.array,s=i.object,u=i.string,c=i.number,f=Array.isArray,l=c.isNumber,p=c.isInteger,m=u.isString,h=o.validateIndex;t.name="SparseMatrix",t.path="type",t.factory=n,t.lazy=!1},function(e,t,r){"use strict";function n(e,t,r,n){var a=n("equalScalar",{"boolean, boolean":function(e,t){return e===t},"number, number":function(e,r){return e===r||i(e,r,t.epsilon)},"BigNumber, BigNumber":function(e,t){return e.eq(t)},"Fraction, Fraction":function(e,t){return e.equals(t)},"Complex, Complex":function(e,r){return(e.re===r.re||i(e.re,r.re,t.epsilon))&&(e.im===r.im||i(e.im,r.im,t.epsilon))},"Unit, Unit":function(e,t){if(!e.equalBase(t))throw new Error("Cannot compare units with different base");return a(e.value,t.value)},"string, string":function(e,t){return e===t}});return a}var i=r(6).nearlyEqual;t.factory=n},function(e,t,r){"use strict";function n(e,t,n){function i(){if(!(this instanceof 
i))throw new SyntaxError("Constructor must be called with the new operator");this._values=[],this._heap=new e.FibonacciHeap}var a=n(r(49)),o=n(r(47));return i.prototype.type="Spa",i.prototype.isSpa=!0,i.prototype.set=function(e,t){if(this._values[e])this._values[e].value=t;else{var r=this._heap.insert(e,t);this._values[e]=r}},i.prototype.get=function(e){var t=this._values[e];return t?t.value:0},i.prototype.accumulate=function(e,t){var r=this._values[e];r?r.value=a(r.value,t):(r=this._heap.insert(e,t),this._values[e]=r)},i.prototype.forEach=function(e,t,r){var n=this._heap,i=this._values,a=[],s=n.extractMinimum();for(s&&a.push(s);s&&s.key<=t;)s.key>=e&&(o(s.value,0)||r(s.key,s.value,this)),s=n.extractMinimum(),s&&a.push(s);for(var u=0;ug;g++)w[g]=[];var N=[],E=[];for(v=0;y>v;v++){for(var M=v+1,A=p[v],_=p[v+1],O=A;_>O;O++)g=l[O],N[g]=o?b(f[O],s[g][v]):b(s[g][v],f[O]),E[g]=M;for(g=0;d>g;g++)E[g]===M?w[g][v]=N[g]:w[g][v]=s[g][v]}return new a({data:w,size:[d,y],datatype:x})};return o}var i=r(41);t.name="algorithm01",t.factory=n},function(e,t,r){"use strict";function n(e,t,n,a){var o=n(r(47)),s=e.SparseMatrix,u=function(e,t,r){var n=e._values,u=e._index,c=e._ptr,f=e._size,l=e._datatype,p=t._values,m=t._index,h=t._ptr,g=t._size,v=t._datatype;if(f.length!==g.length)throw new i(f.length,g.length);if(f[0]!==g[0]||f[1]!==g[1])throw new RangeError("Dimension mismatch. Matrix A ("+f+") must match Matrix B ("+g+")");var d,y=f[0],x=f[1],b=o,w=0,N=r;"string"==typeof l&&l===v&&(d=l,b=a.find(o,[d,d]),w=a.convert(0,d),N=a.find(r,[d,d]));var E,M,A,_,O,T=n&&p?[]:void 0,C=[],S=[],z=new s({values:T,index:C,ptr:S,size:[y,x],datatype:d}),B=n&&p?[]:void 0,k=n&&p?[]:void 0,I=[],R=[];for(M=0;x>M;M++){S[M]=C.length;var P=M+1;for(_=c[M],O=c[M+1],A=_;O>A;A++)E=u[A],C.push(E),I[E]=P,B&&(B[E]=n[A]);for(_=h[M],O=h[M+1],A=_;O>A;A++)if(E=m[A],I[E]===P){if(B){var U=N(B[E],p[A]);b(U,w)?I[E]=null:B[E]=U}}else C.push(E),R[E]=P,k&&(k[E]=p[A]);if(B&&k)for(A=S[M];Ax;x++){for(var b=x+1,w=u[x],N=u[x+1],E=w;N>E;E++){var M=s[E];d[M]=o[E],y[M]=b}for(var A=0;p>A;A++)0===x&&(g[A]=[]),y[A]===b?g[A][x]=a?h(t,d[A]):h(d[A],t):g[A][x]=t}return v};return a}t.name="algorithm10",t.factory=r},function(e,t,r){"use strict";function n(e,t,r,n){var i=e.DenseMatrix,o=function(e,t,r){var o=e._data,u=e._size,c=e._datatype,f=t._data,l=t._size,p=t._datatype,m=[];if(u.length!==l.length)throw new a(u.length,l.length);for(var h=0;h0?s(v,0,m,m[0],o,f):[];return new i({data:d,size:m,datatype:g})},s=function(e,t,r,n,i,a){var o=[];if(t===r.length-1)for(var u=0;n>u;u++)o[u]=e(i[u],a[u]);else for(var c=0;n>c;c++)o[c]=s(e,t+1,r,r[t+1],i[c],a[c]);return o};return o}var i=r(38),a=r(41),o=i.string;o.isString;t.name="algorithm13",t.factory=n},function(e,t,r){"use strict";function n(e,t,r,n){var a=e.DenseMatrix,o=function(e,t,r,o){var u,c=e._data,f=e._size,l=e._datatype,p=r;"string"==typeof l&&(u=l,t=n.convert(t,u),p=n.find(r,[u,u]));var m=f.length>0?s(p,0,f,f[0],c,t,o):[];return new a({data:m,size:i(f),datatype:u})},s=function(e,t,r,n,i,a,o){var u=[];if(t===r.length-1)for(var c=0;n>c;c++)u[c]=o?e(a,i[c]):e(i[c],a);else for(var f=0;n>f;f++)u[f]=s(e,t+1,r,r[t+1],i[f],a,o);return u};return o}var i=r(3).clone;t.name="algorithm14",t.factory=n},function(e,t,r){"use strict";function n(e,t,n,i){function a(){if(!(this instanceof a))throw new SyntaxError("Constructor must be called with the new operator");this._minimum=null,this._size=0}var 
o=n(r(58)),s=n(r(62)),u=1/Math.log((1+Math.sqrt(5))/2);a.prototype.type="FibonacciHeap",a.prototype.isFibonacciHeap=!0,a.prototype.insert=function(e,t){var r={key:e,value:t,degree:0};if(this._minimum){var n=this._minimum;r.left=n,r.right=n.right,n.right=r,r.right.left=r,o(e,n.key)&&(this._minimum=r)}else r.left=r,r.right=r,this._minimum=r;return this._size++,r},a.prototype.size=function(){return this._size},a.prototype.clear=function(){this._minimum=null,this._size=0},a.prototype.isEmpty=function(){return!!this._minimum},a.prototype.extractMinimum=function(){var e=this._minimum;if(null===e)return e;for(var t=this._minimum,r=e.degree,n=e.child;r>0;){var i=n.right;n.left.right=n.right,n.right.left=n.left,n.left=t,n.right=t.right,t.right=n,n.right.left=n,n.parent=null,n=i,r--}return e.left.right=e.right,e.right.left=e.left,e==e.right?t=null:(t=e.right,t=m(t,this._size)),this._size--,this._minimum=t,e},a.prototype.remove=function(e){this._minimum=c(this._minimum,e,-1),this.extractMinimum()};var c=function(e,t,r){t.key=r;var n=t.parent;return n&&o(t.key,n.key)&&(f(e,t,n),l(e,n)),o(t.key,e.key)&&(e=t),e},f=function(e,t,r){t.left.right=t.right,t.right.left=t.left,r.degree--,r.child==t&&(r.child=t.right),0===r.degree&&(r.child=null),t.left=e,t.right=e.right,e.right=t,t.right.left=t,t.parent=null,t.mark=!1},l=function(e,t){var r=t.parent;r&&(t.mark?(f(e,t,r),l(r)):t.mark=!0)},p=function(e,t){e.left.right=e.right,e.right.left=e.left,e.parent=t,t.child?(e.left=t.child,e.right=t.child.right,t.child.right=e,e.right.left=e):(t.child=e,e.right=e,e.left=e),t.degree++,e.mark=!1},m=function(e,t){var r=Math.floor(Math.log(t)*u)+1,n=new Array(r),i=0,a=e;if(a)for(i++,a=a.right;a!==e;)i++,a=a.right;for(var c;i>0;){for(var f=a.degree,l=a.right;;){if(c=n[f],!c)break;if(s(a.key,c.key)){var m=c;c=a,a=m}p(c,a),n[f]=null,f++}n[f]=a,a=l,i--}e=null;for(var h=0;r>h;h++)c=n[h],c&&(e?(c.left.right=c.right,c.right.left=c.left,c.left=e,c.right=e.right,e.right=c,c.right.left=c,o(c.key,e.key)&&(e=c)):e=c);return e};return a}t.name="FibonacciHeap",t.path="type",t.factory=n},function(e,t,r){"use strict";function n(e,t,n,a){var o=n(r(50)),s=n(r(59)),u=n(r(60)),c=n(r(61)),f=n(r(55)),l=n(r(56)),p=r(29),m=a("smaller",{"boolean, boolean":function(e,t){return t>e},"number, number":function(e,r){return r>e&&!i(e,r,t.epsilon)},"BigNumber, BigNumber":function(e,t){return e.lt(t)},"Fraction, Fraction":function(e,t){return-1===e.compare(t)},"Complex, Complex":function(e,t){throw new TypeError("No ordering relation is defined for complex numbers")},"Unit, Unit":function(e,t){if(!e.equalBase(t))throw new Error("Cannot compare units with different base");return m(e.value,t.value)},"string, string":function(e,t){return t>e},"Matrix, Matrix":function(e,t){var r;switch(e.storage()){case"sparse":switch(t.storage()){case"sparse":r=u(e,t,m);break;default:r=s(t,e,m,!0)}break;default:switch(t.storage()){case"sparse":r=s(e,t,m,!1);break;default:r=f(e,t,m)}}return r},"Array, Array":function(e,t){return m(o(e),o(t)).valueOf()},"Array, Matrix":function(e,t){return m(o(e),t)},"Matrix, Array":function(e,t){return m(e,o(t))},"Matrix, any":function(e,t){var r;switch(e.storage()){case"sparse":r=c(e,t,m,!1);break;default:r=l(e,t,m,!1)}return r},"any, Matrix":function(e,t){var r;switch(t.storage()){case"sparse":r=c(t,e,m,!0);break;default:r=l(t,e,m,!0)}return r},"Array, any":function(e,t){return l(o(e),t,m,!1).valueOf()},"any, Array":function(e,t){return l(o(t),e,m,!0).valueOf()}});return 
m.toTex="\\left(${args[0]}"+p.operators.smaller+"${args[1]}\\right)",m}var i=r(6).nearlyEqual;t.name="smaller",t.factory=n},function(e,t,r){"use strict";function n(e,t,r,n){var a=e.DenseMatrix,o=function(e,t,r,o){var s=e._data,u=e._size,c=e._datatype,f=t._values,l=t._index,p=t._ptr,m=t._size,h=t._datatype;if(u.length!==m.length)throw new i(u.length,m.length);if(u[0]!==m[0]||u[1]!==m[1])throw new RangeError("Dimension mismatch. Matrix A ("+u+") must match Matrix B ("+m+")");if(!f)throw new Error("Cannot perform operation on Dense Matrix and Pattern Sparse Matrix");var g,v=u[0],d=u[1],y=0,x=r;"string"==typeof c&&c===h&&(g=c,y=n.convert(0,g),x=n.find(r,[g,g]));for(var b=[],w=0;v>w;w++)b[w]=[];for(var N=[],E=[],M=0;d>M;M++){for(var A=M+1,_=p[M],O=p[M+1],T=_;O>T;T++){var C=l[T];N[C]=o?x(f[T],s[C][M]):x(s[C][M],f[T]),E[C]=A}for(var S=0;v>S;S++)E[S]===A?b[S][M]=N[S]:b[S][M]=o?x(y,s[S][M]):x(s[S][M],y)}return new a({data:b,size:[v,d],datatype:g})};return o}var i=r(41);t.name="algorithm03",t.factory=n},function(e,t,r){"use strict";function n(e,t,r,n){var a=e.DenseMatrix,o=function(e,t,r){var o=e._size,u=e._datatype,c=t._size,f=t._datatype;if(o.length!==c.length)throw new i(o.length,c.length);if(o[0]!==c[0]||o[1]!==c[1])throw new RangeError("Dimension mismatch. Matrix A ("+o+") must match Matrix B ("+c+")");var l,p=o[0],m=o[1],h=0,g=r;"string"==typeof u&&u===f&&(l=u,h=n.convert(0,l),g=n.find(r,[l,l]));var v,d,y=[];for(v=0;p>v;v++)y[v]=[];var x=new a({data:y,size:[p,m],datatype:l}),b=[],w=[],N=[],E=[];for(d=0;m>d;d++){var M=d+1;for(s(e,d,N,b,M),s(t,d,E,w,M),v=0;p>v;v++){var A=N[v]===M?b[v]:h,_=E[v]===M?w[v]:h;y[v][d]=g(A,_)}}return x},s=function(e,t,r,n,i){for(var a=e._values,o=e._index,s=e._ptr,u=s[t],c=s[t+1];c>u;u++){var f=o[u];r[f]=i,n[f]=a[u]}};return o}var i=r(41);t.name="algorithm07",t.factory=n},function(e,t){"use strict";function r(e,t,r,n){var i=e.DenseMatrix,a=function(e,t,r,a){var o=e._values,s=e._index,u=e._ptr,c=e._size,f=e._datatype;if(!o)throw new Error("Cannot perform operation on Pattern Sparse Matrix and Scalar value");var l,p=c[0],m=c[1],h=r;"string"==typeof f&&(l=f,t=n.convert(t,l),h=n.find(r,[l,l]));for(var g=[],v=new i({data:g,size:[p,m],datatype:l}),d=[],y=[],x=0;m>x;x++){for(var b=x+1,w=u[x],N=u[x+1],E=w;N>E;E++){var M=s[E];d[M]=o[E],y[M]=b}for(var A=0;p>A;A++)0===x&&(g[A]=[]),y[A]===b?g[A][x]=a?h(t,d[A]):h(d[A],t):g[A][x]=a?h(t,0):h(0,t)}return v};return a}t.name="algorithm12",t.factory=r},function(e,t,r){"use strict";function n(e,t,n,a){var o=n(r(50)),s=n(r(59)),u=n(r(60)),c=n(r(61)),f=n(r(55)),l=n(r(56)),p=r(29),m=a("larger",{"boolean, boolean":function(e,t){return e>t},"number, number":function(e,r){return e>r&&!i(e,r,t.epsilon)},"BigNumber, BigNumber":function(e,t){return e.gt(t)},"Fraction, Fraction":function(e,t){return 1===e.compare(t)},"Complex, Complex":function(){throw new TypeError("No ordering relation is defined for complex numbers")},"Unit, Unit":function(e,t){if(!e.equalBase(t))throw new Error("Cannot compare units with different base");return m(e.value,t.value)},"string, string":function(e,t){return e>t},"Matrix, Matrix":function(e,t){var r;switch(e.storage()){case"sparse":switch(t.storage()){case"sparse":r=u(e,t,m);break;default:r=s(t,e,m,!0)}break;default:switch(t.storage()){case"sparse":r=s(e,t,m,!1);break;default:r=f(e,t,m)}}return r},"Array, Array":function(e,t){return m(o(e),o(t)).valueOf()},"Array, Matrix":function(e,t){return m(o(e),t)},"Matrix, Array":function(e,t){return m(e,o(t))},"Matrix, any":function(e,t){var 
r;switch(e.storage()){case"sparse":r=c(e,t,m,!1);break;default:r=l(e,t,m,!1)}return r},"any, Matrix":function(e,t){var r;switch(t.storage()){case"sparse":r=c(t,e,m,!0);break;default:r=l(t,e,m,!0)}return r},"Array, any":function(e,t){return l(o(e),t,m,!1).valueOf()},"any, Array":function(e,t){return l(o(t),e,m,!0).valueOf()}});return m.toTex="\\left(${args[0]}"+p.operators.larger+"${args[1]}\\right)",m}var i=r(6).nearlyEqual;t.name="larger",t.factory=n},function(e,t,r){"use strict";function n(e,t,n){function a(e,t){if(!(this instanceof a))throw new SyntaxError("Constructor must be called with the new operator");if(t&&!u(t))throw new Error("Invalid datatype: "+t);if(e&&e.isMatrix===!0||s(e)){var r=new c(e,t);this._data=r._data,this._size=r._size,this._datatype=r._datatype,this._min=null,this._max=null}else if(e&&s(e.data)&&s(e.size))this._data=e.data,this._size=e.size,this._datatype=e.datatype,this._min="undefined"!=typeof e.min?e.min:null,this._max="undefined"!=typeof e.max?e.max:null;else{if(e)throw new TypeError("Unsupported type of data ("+i.types.type(e)+")");this._data=[],this._size=[0],this._datatype=t,this._min=null,this._max=null}}var c=n(r(45)),f=n(r(58));return a.prototype=new c,a.prototype.type="ImmutableDenseMatrix",a.prototype.isImmutableDenseMatrix=!0,a.prototype.subset=function(e){switch(arguments.length){case 1:var t=c.prototype.subset.call(this,e);return t.isMatrix?new a({data:t._data,size:t._size,datatype:t._datatype}):t;case 2:case 3:throw new Error("Cannot invoke set subset on an Immutable Matrix instance");default:throw new SyntaxError("Wrong number of arguments")}},a.prototype.set=function(){throw new Error("Cannot invoke set on an Immutable Matrix instance")},a.prototype.resize=function(){throw new Error("Cannot invoke resize on an Immutable Matrix instance")},a.prototype.clone=function(){var e=new a({data:o.clone(this._data),size:o.clone(this._size),datatype:this._datatype});return e},a.prototype.toJSON=function(){return{mathjs:"ImmutableDenseMatrix",data:this._data,size:this._size,datatype:this._datatype}},a.fromJSON=function(e){return new a(e)},a.prototype.swapRows=function(){throw new Error("Cannot invoke swapRows on an Immutable Matrix instance")},a.prototype.min=function(){if(null===this._min){var e=null;this.forEach(function(t){(null===e||f(t,e))&&(e=t)}),this._min=null!==e?e:void 0}return this._min},a.prototype.max=function(){if(null===this._max){var e=null;this.forEach(function(t){(null===e||f(e,t))&&(e=t)}),this._max=null!==e?e:void 0}return this._max},a}var i=r(38),a=i.string,o=i.object,s=Array.isArray,u=a.isString;t.name="ImmutableDenseMatrix",t.path="type",t.factory=n},function(e,t,r){"use strict";function n(e){function t(e){if(!(this instanceof t))throw new SyntaxError("Constructor must be called with the new operator");this._dimensions=[],this._isScalar=!0;for(var n=0,i=arguments.length;i>n;n++){var a=arguments[n];if(a&&a.isRange===!0)this._dimensions.push(a),this._isScalar=!1;else if(a&&(Array.isArray(a)||a.isMatrix===!0)){var o=r(a.valueOf());this._dimensions.push(o);var s=o.size();(1!==s.length||1!==s[0])&&(this._isScalar=!1)}else{if("number"!=typeof a)throw new TypeError("Dimension must be an Array, Matrix, Number or Range");this._dimensions.push(r([a]))}}}function r(t){for(var r=0,n=t.length;n>r;r++)if("number"!=typeof t[r]||!a(t[r]))throw new TypeError("Index parameters must be positive integer numbers");return new e.ImmutableDenseMatrix(t)}return t.prototype.type="Index",t.prototype.isIndex=!0,t.prototype.clone=function(){var e=new t;return 
e._dimensions=i(this._dimensions),e._isScalar=this._isScalar,e},t.create=function(e){var r=new t;return t.apply(r,e),r},t.prototype.size=function(){for(var e=[],t=0,r=this._dimensions.length;r>t;t++){var n=this._dimensions[t];e[t]=n.size()[0]}return e},t.prototype.max=function(){for(var e=[],t=0,r=this._dimensions.length;r>t;t++){var n=this._dimensions[t];e[t]=n.max()}return e},t.prototype.min=function(){for(var e=[],t=0,r=this._dimensions.length;r>t;t++){var n=this._dimensions[t];e[t]=n.min()}return e},t.prototype.forEach=function(e){for(var t=0,r=this._dimensions.length;r>t;t++)e(this._dimensions[t],t,this)},t.prototype.dimension=function(e){return this._dimensions[e]||null},t.prototype.isScalar=function(){return this._isScalar},t.prototype.toArray=function(){for(var e=[],t=0,r=this._dimensions.length;r>t;t++)e.push(this._dimensions[t].toArray());return e},t.prototype.valueOf=t.prototype.toArray,t.prototype.toString=function(){for(var e=[],t=0,r=this._dimensions.length;r>t;t++)e.push(this._dimensions[t].toString());return"["+e.join(", ")+"]"},t.prototype.toJSON=function(){return{mathjs:"Index",dimensions:this._dimensions}},t.fromJSON=function(e){return t.create(e.dimensions)},t}var i=r(3).clone,a=r(6).isInteger;t.name="Index",t.path="type",t.factory=n},function(e,t,r){"use strict";function n(e,t,r,n){function a(e,t,r){if(!(this instanceof a))throw new SyntaxError("Constructor must be called with the new operator");if(null!=e)if(e.isBigNumber===!0)e=e.toNumber();else if("number"!=typeof e)throw new TypeError("Parameter start must be a number");if(null!=t)if(t.isBigNumber===!0)t=t.toNumber();else if("number"!=typeof t)throw new TypeError("Parameter end must be a number");if(null!=r)if(r.isBigNumber===!0)r=r.toNumber();else if("number"!=typeof r)throw new TypeError("Parameter step must be a number");this.start=null!=e?parseFloat(e):0,this.end=null!=t?parseFloat(t):0,this.step=null!=r?parseFloat(r):1}return a.prototype.type="Range",a.prototype.isRange=!0,a.parse=function(e){if("string"!=typeof e)return null;var t=e.split(":"),r=t.map(function(e){return parseFloat(e)}),n=r.some(function(e){return isNaN(e)});if(n)return null;switch(r.length){case 2:return new a(r[0],r[1]);case 3:return new a(r[0],r[2],r[1]);default:return null}},a.prototype.clone=function(){return new a(this.start,this.end,this.step)},a.prototype.size=function(){var e=0,t=this.start,r=this.step,n=this.end,a=n-t;return i.sign(r)==i.sign(a)?e=Math.ceil(a/r):0==a&&(e=0),isNaN(e)&&(e=0),[e]},a.prototype.min=function(){var e=this.size()[0];return e>0?this.step>0?this.start:this.start+(e-1)*this.step:void 0},a.prototype.max=function(){var e=this.size()[0];return e>0?this.step>0?this.start+(e-1)*this.step:this.start:void 0},a.prototype.forEach=function(e){var t=this.start,r=this.step,n=this.end,i=0;if(r>0)for(;n>t;)e(t,[i],this),t+=r,i++;else if(0>r)for(;t>n;)e(t,[i],this),t+=r,i++},a.prototype.map=function(e){var t=[];return this.forEach(function(r,n,i){t[n[0]]=e(r,n,i)}),t},a.prototype.toArray=function(){var e=[];return this.forEach(function(t,r){e[r[0]]=t}),e},a.prototype.valueOf=function(){return this.toArray()},a.prototype.format=function(e){var t=i.format(this.start,e);return 1!=this.step&&(t+=":"+i.format(this.step,e)),t+=":"+i.format(this.end,e)},a.prototype.toString=function(){return this.format()},a.prototype.toJSON=function(){return{mathjs:"Range",start:this.start,end:this.end,step:this.step}},a.fromJSON=function(e){return new a(e.start,e.end,e.step)},a}var 
i=r(6);t.name="Range",t.path="type",t.factory=n},function(e,t){"use strict";function r(e,t,r,n){return n("index",{"...number | BigNumber | Range | Array | Matrix":function(t){var r=t.map(function(e){return e&&e.isBigNumber===!0?e.toNumber():e&&(Array.isArray(e)||e.isMatrix===!0)?e.map(function(e){return e&&e.isBigNumber===!0?e.toNumber():e}):e}),n=new e.Index;return e.Index.apply(n,r),n}})}t.name="index",t.factory=r},function(e,t){"use strict";function r(e,t,r,n){var i=e.SparseMatrix,a=n("sparse",{"":function(){return new i([])},string:function(e){return new i([],e)},"Array | Matrix":function(e){return new i(e)},"Array | Matrix, string":function(e,t){return new i(e,t)}});return a.toTex={0:"\\begin{bsparse}\\end{bsparse}",1:"\\left(${args[0]}\\right)"},a}t.name="sparse",t.factory=r},function(e,t,r){"use strict";function n(e,t,r,n){var a=n("number",{"":function(){return 0},number:function(e){return e},string:function(e){var t=Number(e);if(isNaN(t))throw new SyntaxError('String "'+e+'" is no valid number');return t},BigNumber:function(e){return e.toNumber()},Fraction:function(e){return e.valueOf()},Unit:function(e){throw new Error("Second argument with valueless unit expected")},"Unit, string | Unit":function(e,t){return e.toNumber(t)},"Array | Matrix":function(e){return i(e,a)}});return a.toTex={0:"0",1:"\\left(${args[0]}\\right)",2:"\\left(\\left(${args[0]}\\right)${args[1]}\\right)"},a}var i=r(19);t.name="number",t.factory=n},function(e,t,r){e.exports=[r(70)]},function(e,t){"use strict";function r(e,t,r,n){function i(e){if(!(this instanceof i))throw new SyntaxError("Constructor must be called with the new operator");this.entries=e||[]}return i.prototype.type="ResultSet",i.prototype.isResultSet=!0,i.prototype.valueOf=function(){return this.entries},i.prototype.toString=function(){return"["+this.entries.join(", ")+"]"},i.prototype.toJSON=function(){return{mathjs:"ResultSet",entries:this.entries}},i.fromJSON=function(e){return new i(e.entries)},i}t.name="ResultSet",t.path="type",t.factory=r},function(e,t,r){"use strict";function n(e,t,r,n){var o=n("string",{"":function(){return""},number:a.format,"null":function(e){return"null"},"boolean":function(e){return e+""},string:function(e){return e},"Array | Matrix":function(e){return i(e,o)},any:function(e){return String(e)}});return o.toTex={0:'\\mathtt{""}',1:"\\mathrm{string}\\left(${args[0]}\\right)"},o}var i=r(19),a=r(6);t.name="string",t.factory=n},function(e,t,r){e.exports=[r(73),r(90),r(91)]},function(e,t,r){"use strict";function n(e,t,n,o){function s(e,t){if(!(this instanceof s))throw new Error("Constructor must be called with the new operator");if(void 0!==e&&!M(e)&&!e.isComplex)throw new TypeError("First parameter in Unit constructor must be number, BigNumber, Fraction, Complex, or undefined");if(void 0!=t&&("string"!=typeof t||""==t))throw new TypeError("Second parameter in Unit constructor must be a string");if(void 0!=t){var r=s.parse(t);this.units=r.units,this.dimensions=r.dimensions}else this.units=[{unit:q,prefix:I,power:0}],this.dimensions=[0,0,0,0,0,0,0,0,0];this.value=void 0!=e?this._normalize(e):null,this.fixPrefix=!1,this.isUnitListSimplified=!0}function u(){for(;" "==z||" "==z;)l()}function c(e){return e>="0"&&"9">=e||"."==e}function f(e){return e>="0"&&"9">=e}function l(){S++,z=C.charAt(S)}function p(e){S=e,z=C.charAt(S)}function m(){var e,t="";if(e=S,"+"==z?l():"-"==z&&(t+=z,l()),!c(z))return p(e),null;if("."==z){if(t+=z,l(),!f(z))return 
p(e),null}else{for(;f(z);)t+=z,l();"."==z&&(t+=z,l())}for(;f(z);)t+=z,l();if("E"==z||"e"==z){var r="",n=S;if(r+=z,l(),("+"==z||"-"==z)&&(r+=z,l()),!f(z))return p(n),t;for(t+=r;f(z);)t+=z,l()}return t}function h(){for(var e="",t=C.charCodeAt(S);t>=48&&57>=t||t>=65&&90>=t||t>=97&&122>=t;)e+=z,l(),t=C.charCodeAt(S);return t=e.charCodeAt(0),t>=65&&90>=t||t>=97&&122>=t?e||null:null}function g(e){return z===e?(l(),e):null}function v(e){for(var t in L)if(L.hasOwnProperty(t)&&i(e,t)){var r=L[t],n=e.length-t.length,a=e.substring(0,n),o=r.prefixes[a];if(void 0!==o)return{unit:r,prefix:o}}return null}var d=n(r(51)),y=n(r(74)),x=n(r(77)),b=n(r(78)),w=n(r(79)),N=n(r(85)),E=n(r(86)),M=n(r(87)),A=n(r(88)),_=n(r(89)),O=n(r(68)),T=n(r(27));s.prototype.type="Unit",s.prototype.isUnit=!0;var C,S,z;s.parse=function(r){if(C=r,S=-1,z="","string"!=typeof C)throw new TypeError("Invalid argument in Unit.parse, string expected");var n=new s;n.units=[],l(),u();var i=m(),a=null;i&&(a="bignumber"===t.number?new e.BigNumber(i):"fraction"===t.number?new e.Fraction(i):parseFloat(i)),u();for(var o=1,c=!1,f=[],p=1;;){for(u();"("===z;)f.push(o),p*=o,o=1,l(),u();if(!z)break;var d=z,y=h();if(null==y)throw new SyntaxError('Unexpected "'+d+'" in "'+C+'" at index '+S.toString());var x=v(y);if(null==x)throw new SyntaxError('Unit "'+y+'" not found.');var b=o*p;if(u(),g("^")){u();var w=m();if(null==w)throw new SyntaxError('In "'+r+'", "^" must be followed by a floating-point number');b*=w}n.units.push({unit:x.unit,prefix:x.prefix,power:b});for(var N=0;N1||Math.abs(this.units[0].power-1)>1e-15},s.prototype._normalize=function(e){var t,r,n,i,a;if(null==e||0===this.units.length)return e;if(this._isDerived()){var o=e;a=s._getNumberConverter(_(e));for(var u=0;u1e-12)return!1;return!0},s.prototype.equalBase=function(e){for(var t=0;t1e-12)return!1;return!0},s.prototype.equals=function(e){return this.equalBase(e)&&E(this.value,e.value)},s.prototype.multiply=function(e){for(var t=this.clone(),r=0;r1e-12&&t.push({unit:$[a].unit,prefix:$[a].prefix,power:this.dimensions[i]})}t.length0?(r++,e+=" "+this.units[i].prefix.name+this.units[i].unit.name,Math.abs(this.units[i].power-1)>1e-15&&(e+="^"+this.units[i].power)):this.units[i].power<0&&n++;if(n>0)for(var i=0;i0?(t+=" "+this.units[i].prefix.name+this.units[i].unit.name,Math.abs(this.units[i].power+1)>1e-15&&(t+="^"+-this.units[i].power)):(t+=" "+this.units[i].prefix.name+this.units[i].unit.name,t+="^"+this.units[i].power));e=e.substr(1),t=t.substr(1),r>1&&n>0&&(e="("+e+")"),n>1&&r>0&&(t="("+t+")");var a=e;return r>0&&n>0&&(a+=" / "),a+=t},s.prototype.format=function(e){this.simplifyUnitListLazy();var t=!1,r=!0;"undefined"!=typeof this.value&&null!==this.value&&this.value.isComplex&&(t=Math.abs(this.value.re)<1e-14,r=Math.abs(this.value.im)<1e-14);for(var n in this.units)this.units[n].unit&&("VA"===this.units[n].unit.name&&t?this.units[n].unit=L.VAR:"VAR"!==this.units[n].unit.name||t||(this.units[n].unit=L.VA));1!==this.units.length||this.fixPrefix||Math.abs(this.units[0].power-Math.round(this.units[0].power))<1e-14&&(this.units[0].prefix=this._bestPrefix());var i=this._denormalize(this.value),a=null!==this.value?A(i,e||{}):"",o=this.formatUnits();return this.value&&this.value.isComplex&&(a="("+a+")"),o.length>0&&a.length>0&&(a+=" "),a+=o},s.prototype._bestPrefix=function(){if(1!==this.units.length)throw new Error("Can only compute the best prefix for single units with integer powers, like kg, s^2, N^-1, and so forth!");if(Math.abs(this.units[0].power-Math.round(this.units[0].power))>=1e-14)throw 
new Error("Can only compute the best prefix for single units with integer powers, like kg, s^2, N^-1, and so forth!");var e=N(this.value),t=N(this.units[0].unit.value),r=this.units[0].prefix;if(0===e)return r;var n=this.units[0].power,i=Math.abs(Math.log(e/Math.pow(r.value*t,n))/Math.LN10-1.2),a=this.units[0].unit.prefixes;for(var o in a)if(a.hasOwnProperty(o)){var s=a[o];if(s.scientific){var u=Math.abs(Math.log(e/Math.pow(s.value*t,n))/Math.LN10-1.2);(i>u||u===i&&s.name.lengthM;M++){C[M]=T.length;var R=M+1;for(A=c[M],_=c[M+1];_>A;A++)E=u[A],T.push(E),k[E]=R,z&&(z[E]=n[A]);for(A=h[M],_=h[M+1];_>A;A++)E=m[A],k[E]!==R&&T.push(E),I[E]=R,B&&(B[E]=p[A]);if(O)for(A=C[M];A=0||t.predictable?Math.pow(r,n):u(new e.Complex(r,0),new e.Complex(n,0))}function u(e,t){return p(g(h(e),t))}function c(e,t){if(!i(t)||0>t)throw new TypeError("For A^b, b must be a positive integer (value is "+t+")");var r=a(e);if(2!=r.length)throw new Error("For A^b, A must be 2 dimensional (A has "+r.length+" dimensions)");if(r[0]!=r[1])throw new Error("For A^b, A must be square (size is "+r[0]+"x"+r[1]+")");for(var n=m(r[0]).valueOf(),o=e;t>=1;)1==(1&t)&&(n=g(o,n)),t>>=1,o=g(o,o);return n}function f(e,t){return v(c(e.valueOf(),t))}var l=r(29),p=n(r(80)),m=n(r(81)),h=n(r(82)),g=n(r(83)),v=n(r(50)),d=o("pow",{"number, number":s,"Complex, Complex":u,"BigNumber, BigNumber":function(r,n){return n.isInteger()||r>=0||t.predictable?r.pow(n):u(new e.Complex(r.toNumber(),0),new e.Complex(n.toNumber(),0))},"Fraction, Fraction":function(e,r){if(1!==r.d){if(t.predictable)throw new Error("Function pow does not support non-integer exponents for fractions.");return s(e.valueOf(),r.valueOf())}return e.pow(r)},"Array, number":c,"Array, BigNumber":function(e,t){return c(e,t.toNumber())},"Matrix, number":f,"Matrix, BigNumber":function(e,t){return f(e,t.toNumber())},"Unit, number":function(e,t){return e.pow(t)}});return d.toTex="\\left(${args[0]}\\right)"+l.operators.pow+"{${args[1]}}",d}var i=r(6).isInteger,a=r(39).size;t.name="pow",t.factory=n},function(e,t,r){"use strict";function n(e,t,r,n){var a=n("exp",{number:Math.exp,Complex:function(t){var r=Math.exp(t.re);return new e.Complex(r*Math.cos(t.im),r*Math.sin(t.im))},BigNumber:function(e){return e.exp()},"Array | Matrix":function(e){return i(e,a)}});return a.toTex="\\exp\\left(${args[0]}\\right)",a}var i=r(19);t.name="exp",t.factory=n},function(e,t,r){"use strict";function n(e,t,n,o){function s(e,t){switch(e.length){case 0:return t?c(t):[];case 1:return u(e[0],e[0],t);case 2:return u(e[0],e[1],t);default:throw new Error("Vector containing two values expected")}}function u(t,r,n){var o=t&&t.isBigNumber===!0?e.BigNumber:r&&r.isBigNumber===!0?e.BigNumber:null;if(t&&t.isBigNumber===!0&&(t=t.toNumber()),r&&r.isBigNumber===!0&&(r=r.toNumber()),!a(t)||1>t)throw new Error("Parameters in function eye must be positive integers");if(!a(r)||1>r)throw new Error("Parameters in function eye must be positive integers");var s=o?new e.BigNumber(1):1,u=o?new o(0):0,c=[t,r];if(n){var f=e.Matrix.storage(n);return f.diagonal(c,s,0,u)}for(var l=i.resize([],c,u),p=r>t?t:r,m=0;p>m;m++)l[m][m]=s;return l}var c=n(r(50)),f=o("eye",{"":function(){return"matrix"===t.matrix?c([]):[]},string:function(e){return c(e)},"number | BigNumber":function(e){return u(e,e,"matrix"===t.matrix?"default":void 0)},"number | BigNumber, string":function(e,t){return u(e,e,t)},"number | BigNumber, number | BigNumber":function(e,r){return u(e,r,"matrix"===t.matrix?"default":void 0)},"number | BigNumber, number | BigNumber, 
string":function(e,t,r){return u(e,t,r)},Array:function(e){return s(e)},"Array, string":function(e,t){return s(e,t)},Matrix:function(e){return s(e.valueOf(),e.storage())},"Matrix, string":function(e,t){return s(e.valueOf(),t)}});return f.toTex="\\mathrm{${name}}\\left(${args}\\right)",f}var i=r(39),a=r(6).isInteger;t.name="eye",t.factory=n},function(e,t,r){"use strict";function n(e,t,n,a){function o(r){return r>=0||t.predictable?Math.log(r):c(new e.Complex(r,0))}function s(t){return new e.Complex(Math.log(Math.sqrt(t.re*t.re+t.im*t.im)),Math.atan2(t.im,t.re))}var u=n(r(78)),c=a("log",{number:o,Complex:s,BigNumber:function(r){return!r.isNegative()||t.predictable?r.ln():s(new e.Complex(r.toNumber(),0))},"Array | Matrix":function(e){return i(e,c)},"any, any":function(e,t){return u(c(e),c(t))}});return c.toTex={1:"\\ln\\left(${args[0]}\\right)",2:"\\log_{${args[1]}}\\left(${args[0]}\\right)"},c}var i=r(19);t.name="log",t.factory=n},function(e,t,r){"use strict";function n(e,t,n,o){var s=r(29),u=n(r(50)),c=n(r(51)),f=n(r(77)),l=n(r(47)),p=n(r(84)),m=n(r(56)),h=e.DenseMatrix,g=e.SparseMatrix,v=o("multiply",i({"Array, Array":function(e,t){d(a.size(e),a.size(t));var r=v(u(e),u(t));return r&&r.isMatrix===!0?r.valueOf():r},"Matrix, Matrix":function(e,t){var r=e.size(),n=t.size();return d(r,n),1===r.length?1===n.length?y(e,t,r[0]):x(e,t):1===n.length?w(e,t):N(e,t)},"Matrix, Array":function(e,t){return v(e,u(t))},"Array, Matrix":function(e,t){return v(u(e,t.storage()),t)},"Matrix, any":function(e,t){var r;switch(e.storage()){case"sparse":r=p(e,t,f,!1);break;case"dense":r=m(e,t,f,!1)}return r},"any, Matrix":function(e,t){var r;switch(t.storage()){case"sparse":r=p(t,e,f,!0);break;case"dense":r=m(t,e,f,!0)}return r},"Array, any":function(e,t){return m(u(e),t,f,!1).valueOf()},"any, Array":function(e,t){return m(u(t),e,f,!0).valueOf()}},f.signatures)),d=function(e,t){switch(e.length){case 1:switch(t.length){case 1:if(e[0]!==t[0])throw new RangeError("Dimension mismatch in multiplication. Vectors must have the same length");break;case 2:if(e[0]!==t[0])throw new RangeError("Dimension mismatch in multiplication. Vector length ("+e[0]+") must match Matrix rows ("+t[0]+")");break;default:throw new Error("Can only multiply a 1 or 2 dimensional matrix (Matrix B has "+t.length+" dimensions)")}break;case 2:switch(t.length){case 1:if(e[1]!==t[0])throw new RangeError("Dimension mismatch in multiplication. Matrix columns ("+e[1]+") must match Vector length ("+t[0]+")");break;case 2:if(e[1]!==t[0])throw new RangeError("Dimension mismatch in multiplication. 
Matrix A columns ("+e[1]+") must match Matrix B rows ("+t[0]+")");break;default:throw new Error("Can only multiply a 1 or 2 dimensional matrix (Matrix B has "+t.length+" dimensions)")}break;default:throw new Error("Can only multiply a 1 or 2 dimensional matrix (Matrix A has "+e.length+" dimensions)")}},y=function(e,t,r){if(0===r)throw new Error("Cannot multiply two empty vectors");var n,i=e._data,a=e._datatype,s=t._data,u=t._datatype,l=c,p=f;a&&u&&a===u&&"string"==typeof a&&(n=a,l=o.find(c,[n,n]),p=o.find(f,[n,n]));for(var m=p(i[0],s[0]),h=1;r>h;h++)m=l(m,p(i[h],s[h]));return m},x=function(e,t){switch(t.storage()){case"dense":return b(e,t)}throw new Error("Not implemented")},b=function(e,t){var r,n=e._data,i=e._size,a=e._datatype,s=t._data,u=t._size,l=t._datatype,p=i[0],m=u[1],g=c,v=f;a&&l&&a===l&&"string"==typeof a&&(r=a,g=o.find(c,[r,r]),v=o.find(f,[r,r]));for(var d=[],y=0;m>y;y++){for(var x=v(n[0],s[0][y]),b=1;p>b;b++)x=g(x,v(n[b],s[b][y]));d[y]=x}return 1===m?d[0]:new h({data:d,size:[m],datatype:r})},w=function(e,t){switch(e.storage()){case"dense":return E(e,t);case"sparse":return _(e,t)}},N=function(e,t){switch(e.storage()){case"dense":switch(t.storage()){case"dense":return M(e,t);case"sparse":return A(e,t)}break;case"sparse":switch(t.storage()){case"dense":return O(e,t);case"sparse":return T(e,t)}}},E=function(e,t){var r,n=e._data,i=e._size,a=e._datatype,s=t._data,u=t._datatype,l=i[0],p=i[1],m=c,g=f;a&&u&&a===u&&"string"==typeof a&&(r=a,m=o.find(c,[r,r]),g=o.find(f,[r,r]));for(var v=[],d=0;l>d;d++){for(var y=n[d],x=g(y[0],s[0]),b=1;p>b;b++)x=m(x,g(y[b],s[b]));v[d]=x}return 1===l?v[0]:new h({data:v,size:[l],datatype:r})},M=function(e,t){var r,n=e._data,i=e._size,a=e._datatype,s=t._data,u=t._size,l=t._datatype,p=i[0],m=i[1],g=u[1],v=c,d=f;a&&l&&a===l&&"string"==typeof a&&(r=a, -v=o.find(c,[r,r]),d=o.find(f,[r,r]));for(var y=[],x=0;p>x;x++){var b=n[x];y[x]=[];for(var w=0;g>w;w++){for(var N=d(b[0],s[0][w]),E=1;m>E;E++)N=v(N,d(b[E],s[E][w]));y[x][w]=N}}return 1===p&&1===g?y[0][0]:new h({data:y,size:[p,g],datatype:r})},A=function(e,t){var r=e._data,n=e._size,i=e._datatype,a=t._values,s=t._index,u=t._ptr,p=t._size,m=t._datatype;if(!a)throw new Error("Cannot multiply Dense Matrix times Pattern only Matrix");var h,v=n[0],d=p[1],y=c,x=f,b=l,w=0;i&&m&&i===m&&"string"==typeof i&&(h=i,y=o.find(c,[h,h]),x=o.find(f,[h,h]),b=o.find(l,[h,h]),w=o.convert(0,h));for(var N=[],E=[],M=[],A=new g({values:N,index:E,ptr:M,size:[v,d],datatype:h}),_=0;d>_;_++){M[_]=E.length;var O=u[_],T=u[_+1];if(T>O)for(var C=0,S=0;v>S;S++){for(var z,B=S+1,k=O;T>k;k++){var I=s[k];C!==B?(z=x(r[S][I],a[k]),C=B):z=y(z,x(r[S][I],a[k]))}C!==B||b(z,w)||(E.push(S),N.push(z))}}return M[d]=E.length,1===v&&1===d?1===N.length?N[0]:0:A},_=function(e,t){var r=e._values,n=e._index,i=e._ptr,a=e._datatype;if(!r)throw new Error("Cannot multiply Pattern only Matrix times Dense Matrix");var s,u=t._data,p=t._datatype,m=e._size[0],h=t._size[0],v=[],d=[],y=[],x=c,b=f,w=l,N=0;a&&p&&a===p&&"string"==typeof a&&(s=a,x=o.find(c,[s,s]),b=o.find(f,[s,s]),w=o.find(l,[s,s]),N=o.convert(0,s));var E=[],M=[];y[0]=0;for(var A=0;h>A;A++){var _=u[A];if(!w(_,N))for(var O=i[A],T=i[A+1],C=O;T>C;C++){var S=n[C];M[S]?E[S]=x(E[S],b(_,r[C])):(M[S]=!0,d.push(S),E[S]=b(_,r[C]))}}for(var z=d.length,B=0;z>B;B++){var k=d[B];v[B]=E[k]}return y[1]=d.length,1===m?1===v.length?v[0]:0:new g({values:v,index:d,ptr:y,size:[m,1],datatype:s})},O=function(e,t){var r=e._values,n=e._index,i=e._ptr,a=e._datatype;if(!r)throw new Error("Cannot multiply Pattern only Matrix times Dense 
Matrix");var s,u=t._data,p=t._datatype,m=e._size[0],h=t._size[0],v=t._size[1],d=c,y=f,x=l,b=0;a&&p&&a===p&&"string"==typeof a&&(s=a,d=o.find(c,[s,s]),y=o.find(f,[s,s]),x=o.find(l,[s,s]),b=o.convert(0,s));for(var w=[],N=[],E=[],M=new g({values:w,index:N,ptr:E,size:[m,v],datatype:s}),A=[],_=[],O=0;v>O;O++){E[O]=N.length;for(var T=O+1,C=0;h>C;C++){var S=u[C][O];if(!x(S,b))for(var z=i[C],B=i[C+1],k=z;B>k;k++){var I=n[k];_[I]!==T?(_[I]=T,N.push(I),A[I]=y(S,r[k])):A[I]=d(A[I],y(S,r[k]))}}for(var R=E[O],P=N.length,U=R;P>U;U++){var q=N[U];w[U]=A[q]}}return E[v]=N.length,1===m&&1===v?1===w.length?w[0]:0:M},T=function(e,t){var r,n=e._values,i=e._index,a=e._ptr,s=e._datatype,u=t._values,l=t._index,p=t._ptr,m=t._datatype,h=e._size[0],v=t._size[1],d=n&&u,y=c,x=f;s&&m&&s===m&&"string"==typeof s&&(r=s,y=o.find(c,[r,r]),x=o.find(f,[r,r]));for(var b,w,N,E,M,A,_,O,T=d?[]:void 0,C=[],S=[],z=new g({values:T,index:C,ptr:S,size:[h,v],datatype:r}),B=d?[]:void 0,k=[],I=0;v>I;I++){S[I]=C.length;var R=I+1;for(M=p[I],A=p[I+1],E=M;A>E;E++)if(O=l[E],d)for(w=a[O],N=a[O+1],b=w;N>b;b++)_=i[b],k[_]!==R?(k[_]=R,C.push(_),B[_]=x(u[E],n[b])):B[_]=y(B[_],x(u[E],n[b]));else for(w=a[O],N=a[O+1],b=w;N>b;b++)_=i[b],k[_]!==R&&(k[_]=R,C.push(_));if(d)for(var P=S[I],U=C.length,q=P;U>q;q++){var L=C[q];T[q]=B[L]}}return S[v]=C.length,1===h&&1===v&&d?1===T.length?T[0]:0:z};return v.toTex="\\left(${args[0]}"+s.operators.multiply+"${args[1]}\\right)",v}var i=r(3).extend,a=r(39);t.name="multiply",t.factory=n},function(e,t,r){"use strict";function n(e,t,n,i){var a=n(r(47)),o=e.SparseMatrix,s=function(e,t,r,n){var s=e._values,u=e._index,c=e._ptr,f=e._size,l=e._datatype;if(!s)throw new Error("Cannot perform operation on Pattern Sparse Matrix and Scalar value");var p,m=f[0],h=f[1],g=a,v=0,d=r;"string"==typeof l&&(p=l,g=i.find(a,[p,p]),v=i.convert(0,p),t=i.convert(t,p),d=i.find(r,[p,p]));for(var y=[],x=[],b=[],w=new o({values:y,index:x,ptr:b,size:[m,h],datatype:p}),N=0;h>N;N++){b[N]=x.length;for(var E=c[N],M=c[N+1],A=E;M>A;A++){var _=u[A],O=n?d(t,s[A]):d(s[A],t);g(O,v)||(x.push(_),y.push(O))}}return b[h]=x.length,w};return s}t.name="algorithm11",t.factory=n},function(e,t,r){"use strict";function n(e,t,r,n){var a=n("abs",{number:Math.abs,Complex:function(e){var t=Math.abs(e.re),r=Math.abs(e.im);if(1e3>t&&1e3>r)return Math.sqrt(t*t+r*r);if(t>=r){var n=r/t;return t*Math.sqrt(1+n*n)}var i=t/r;return r*Math.sqrt(1+i*i)},BigNumber:function(e){return e.abs()},Fraction:function(e){return e.abs()},"Array | Matrix":function(e){return i(e,a,!0)},Unit:function(e){return e.abs()}});return a.toTex="\\left|${args[0]}\\right|",a}var i=r(19);t.name="abs",t.factory=n},function(e,t,r){"use strict";function n(e,t,n,i){var a=n(r(50)),o=n(r(47)),s=n(r(59)),u=n(r(60)),c=n(r(61)),f=n(r(55)),l=n(r(56)),p=r(29),m=i("equal",{"any, any":function(e,t){return null===e?null===t:null===t?null===e:void 0===e?void 0===t:void 0===t?void 0===e:o(e,t)},"Matrix, Matrix":function(e,t){var r;switch(e.storage()){case"sparse":switch(t.storage()){case"sparse":r=u(e,t,o);break;default:r=s(t,e,o,!0)}break;default:switch(t.storage()){case"sparse":r=s(e,t,o,!1);break;default:r=f(e,t,o)}}return r},"Array, Array":function(e,t){return m(a(e),a(t)).valueOf()},"Array, Matrix":function(e,t){return m(a(e),t)},"Matrix, Array":function(e,t){return m(e,a(t))},"Matrix, any":function(e,t){var r;switch(e.storage()){case"sparse":r=c(e,t,o,!1);break;default:r=l(e,t,o,!1)}return r},"any, Matrix":function(e,t){var r;switch(t.storage()){case"sparse":r=c(t,e,o,!0);break;default:r=l(t,e,o,!0)}return 
r},"Array, any":function(e,t){return l(a(e),t,o,!1).valueOf()},"any, Array":function(e,t){return l(a(t),e,o,!0).valueOf()}});return m.toTex="\\left(${args[0]}"+p.operators.equal+"${args[1]}\\right)",m}t.name="equal",t.factory=n},function(e,t,r){"use strict";function n(e,t,r,n){var a=n("isNumeric",{"number | BigNumber | Fraction | boolean":function(){return!0},"Complex | Unit | string":function(){return!1},"Array | Matrix":function(e){return i(e,a)}});return a}var i=r(19);r(6);t.name="isNumeric",t.factory=n},function(e,t,r){"use strict";function n(e,t,r,n){var a=n("format",{any:i.format,"any, Object | function | number":i.format});return a.toTex="\\mathrm{${name}}\\left(${args}\\right)",a}var i=r(23);t.name="format",t.factory=n},function(e,t,r){"use strict";function n(e,t,r,n){var a=n("_typeof",{any:function(e){var t=i.type(e);if("Object"===t){if(e.isBigNumber===!0)return"BigNumber";if(e.isComplex===!0)return"Complex";if(e.isFraction===!0)return"Fraction";if(e.isMatrix===!0)return"Matrix";if(e.isUnit===!0)return"Unit";if(e.isIndex===!0)return"Index";if(e.isRange===!0)return"Range";if(e.isChain===!0)return"Chain";if(e.isHelp===!0)return"Help"}return t}});return a.toTex="\\mathrm{${name}}\\left(${args}\\right)",a}var i=r(40);t.name="typeof",t.factory=n},function(e,t,r){"use strict";function n(e,t,r,n){var a=n("unit",{Unit:function(e){return e.clone()},string:function(t){return e.Unit.isValuelessUnit(t)?new e.Unit(null,t):e.Unit.parse(t)},"number | BigNumber | Fraction | Complex, string":function(t,r){return new e.Unit(t,r)},"Array | Matrix":function(e){return i(e,a)}});return a.toTex={1:"\\left(${args[0]}\\right)",2:"\\left(\\left(${args[0]}\\right)${args[1]}\\right)"},a}var i=r(19);t.name="unit",t.factory=n},function(e,t,r){function n(e,t,r,n,a){function o(t){var r=e.Unit.parse(t);return r.fixPrefix=!0,r}i(a,"speedOfLight",function(){return o("299792458 m s^-1")}),i(a,"gravitationConstant",function(){return o("6.6738480e-11 m^3 kg^-1 s^-2")}),i(a,"planckConstant",function(){return o("6.626069311e-34 J s")}),i(a,"reducedPlanckConstant",function(){return o("1.05457172647e-34 J s")}),i(a,"magneticConstant",function(){return o("1.2566370614e-6 N A^-2")}),i(a,"electricConstant",function(){return o("8.854187817e-12 F m^-1")}),i(a,"vacuumImpedance",function(){return o("376.730313461 ohm")}),i(a,"coulomb",function(){return o("8.9875517873681764e9 N m^2 C^-2")}),i(a,"elementaryCharge",function(){return o("1.60217656535e-19 C")}),i(a,"bohrMagneton",function(){return o("9.2740096820e-24 J T^-1")}),i(a,"conductanceQuantum",function(){return o("7.748091734625e-5 S")}),i(a,"inverseConductanceQuantum",function(){return o("12906.403721742 ohm")}),i(a,"magneticFluxQuantum",function(){return o("2.06783375846e-15 Wb")}),i(a,"nuclearMagneton",function(){return o("5.0507835311e-27 J T^-1")}),i(a,"klitzing",function(){return o("25812.807443484 ohm")}),i(a,"bohrRadius",function(){return o("5.291772109217e-11 m")}),i(a,"classicalElectronRadius",function(){return o("2.817940326727e-15 m")}),i(a,"electronMass",function(){return o("9.1093829140e-31 kg")}),i(a,"fermiCoupling",function(){return o("1.1663645e-5 GeV^-2")}),i(a,"fineStructure",function(){return.007297352569824}),i(a,"hartreeEnergy",function(){return o("4.3597443419e-18 J")}),i(a,"protonMass",function(){return o("1.67262177774e-27 kg")}),i(a,"deuteronMass",function(){return o("3.3435830926e-27 kg")}),i(a,"neutronMass",function(){return o("1.6749271613e-27 kg")}),i(a,"quantumOfCirculation",function(){return o("3.636947552024e-4 m^2 
s^-1")}),i(a,"rydberg",function(){return o("10973731.56853955 m^-1")}),i(a,"thomsonCrossSection",function(){return o("6.65245873413e-29 m^2")}),i(a,"weakMixingAngle",function(){return.222321}),i(a,"efimovFactor",function(){return 22.7}),i(a,"atomicMass",function(){return o("1.66053892173e-27 kg")}),i(a,"avogadro",function(){return o("6.0221412927e23 mol^-1")}),i(a,"boltzmann",function(){return o("1.380648813e-23 J K^-1")}),i(a,"faraday",function(){return o("96485.336521 C mol^-1")}),i(a,"firstRadiation",function(){return o("3.7417715317e-16 W m^2")}),i(a,"loschmidt",function(){return o("2.686780524e25 m^-3")}),i(a,"gasConstant",function(){return o("8.314462175 J K^-1 mol^-1")}),i(a,"molarPlanckConstant",function(){return o("3.990312717628e-10 J s mol^-1")}),i(a,"molarVolume",function(){return o("2.241396820e-10 m^3 mol^-1")}),i(a,"sackurTetrode",function(){return-1.164870823}),i(a,"secondRadiation",function(){return o("1.438777013e-2 m K")}),i(a,"stefanBoltzmann",function(){return o("5.67037321e-8 W m^-2 K^-4")}),i(a,"wienDisplacement",function(){return o("2.897772126e-3 m K")}),i(a,"molarMass",function(){return o("1e-3 kg mol^-1")}),i(a,"molarMassC12",function(){return o("1.2e-2 kg mol^-1")}),i(a,"gravity",function(){return o("9.80665 m s^-2")}),i(a,"planckLength",function(){return o("1.61619997e-35 m")}),i(a,"planckMass",function(){return o("2.1765113e-8 kg")}),i(a,"planckTime",function(){return o("5.3910632e-44 s")}),i(a,"planckCharge",function(){return o("1.87554595641e-18 C")}),i(a,"planckTemperature",function(){return o("1.41683385e+32 K")})}var i=r(3).lazy;t.factory=n,t.lazy=!1,t.math=!0},function(e,t,r){"use strict";function n(e,t,o,s,u){u.on("config",function(r,i){r.number!==i.number&&n(e,t,o,s,u)}),u["true"]=!0,u["false"]=!1,u["null"]=null,u.uninitialized=r(39).UNINITIALIZED,"bignumber"===t.number?(u.Infinity=new e.BigNumber(1/0),u.NaN=new e.BigNumber(NaN),i.lazy(u,"pi",function(){return a.pi(e.BigNumber)}),i.lazy(u,"tau",function(){return a.tau(e.BigNumber)}),i.lazy(u,"e",function(){return a.e(e.BigNumber)}),i.lazy(u,"phi",function(){return a.phi(e.BigNumber)}),i.lazy(u,"E",function(){return u.e}),i.lazy(u,"LN2",function(){return new e.BigNumber(2).ln()}),i.lazy(u,"LN10",function(){return new e.BigNumber(10).ln()}),i.lazy(u,"LOG2E",function(){return new e.BigNumber(1).div(new e.BigNumber(2).ln())}),i.lazy(u,"LOG10E",function(){return new e.BigNumber(1).div(new e.BigNumber(10).ln())}),i.lazy(u,"PI",function(){return u.pi}),i.lazy(u,"SQRT1_2",function(){return new e.BigNumber("0.5").sqrt()}),i.lazy(u,"SQRT2",function(){return new e.BigNumber(2).sqrt()})):(u.Infinity=1/0,u.NaN=NaN,u.pi=Math.PI,u.tau=2*Math.PI,u.e=Math.E,u.phi=1.618033988749895,u.E=u.e,u.LN2=Math.LN2,u.LN10=Math.LN10,u.LOG2E=Math.LOG2E,u.LOG10E=Math.LOG10E,u.PI=u.pi,u.SQRT1_2=Math.SQRT1_2,u.SQRT2=Math.SQRT2),u.i=new e.Complex(0,1),u.version=r(95)}var i=r(3),a=r(93);t.factory=n,t.lazy=!1,t.math=!0},function(e,t,r){function n(e){return e[0].precision}var i=r(44).memoize,a=r(94);t.e=i(function(e){return new e(1).exp()},n),t.phi=i(function(e){return new e(1).plus(new e(5).sqrt()).div(2)},n),t.pi=i(function(e){var t=e.constructor({precision:e.precision+4}),r=new t(4).times(a(new t(1).div(5))).minus(a(new t(1).div(239)));return new e(4).times(r)},n),t.tau=i(function(e){var r=t.pi(e.constructor({precision:e.precision+2}));return new e(2).times(r)},n)},function(e,t){e.exports=function(e){for(var t=e,r=NaN,n=e.times(e),i=e,a=!0,o=3;!t.equals(r);o+=2)i=i.times(n),r=t,a=!a,t=a?t.plus(i.div(o)):t.minus(i.div(o));return 
t}},function(e,t){e.exports="2.6.0"},function(e,t,r){e.exports=[r(97),r(267),r(291),r(292),r(319),r(269),r(290)]},function(e,t,r){function n(e,t,n,i){var a={};return a.bignumber=r(98),a["boolean"]=r(99),a.complex=r(100),a.fraction=r(101),a.index=r(102),a.matrix=r(103),a.number=r(104),a.sparse=r(105),a.string=r(106),a.unit=r(107),a.e=r(108),a.E=r(108),a["false"]=r(109),a.i=r(110),a.Infinity=r(111),a.LN2=r(112),a.LN10=r(113),a.LOG2E=r(114),a.LOG10E=r(115),a.NaN=r(116),a["null"]=r(117),a.pi=r(118),a.PI=r(118),a.phi=r(119),a.SQRT1_2=r(120),a.SQRT2=r(121),a.tau=r(122),a["true"]=r(123),a.version=r(124),a.speedOfLight={description:"Speed of light in vacuum",examples:["speedOfLight"]},a.gravitationConstant={description:"Newtonian constant of gravitation",examples:["gravitationConstant"]},a.planckConstant={description:"Planck constant",examples:["planckConstant"]},a.reducedPlanckConstant={description:"Reduced Planck constant",examples:["reducedPlanckConstant"]},a.magneticConstant={description:"Magnetic constant (vacuum permeability)",examples:["magneticConstant"]},a.electricConstant={description:"Electric constant (vacuum permeability)",examples:["electricConstant"]},a.vacuumImpedance={description:"Characteristic impedance of vacuum",examples:["vacuumImpedance"]},a.coulomb={description:"Coulomb's constant",examples:["coulomb"]},a.elementaryCharge={description:"Elementary charge",examples:["elementaryCharge"]},a.bohrMagneton={description:"Borh magneton",examples:["bohrMagneton"]},a.conductanceQuantum={description:"Conductance quantum",examples:["conductanceQuantum"]},a.inverseConductanceQuantum={description:"Inverse conductance quantum",examples:["inverseConductanceQuantum"]},a.magneticFluxQuantum={description:"Magnetic flux quantum",examples:["magneticFluxQuantum"]},a.nuclearMagneton={description:"Nuclear magneton",examples:["nuclearMagneton"]},a.klitzing={description:"Von Klitzing constant",examples:["klitzing"]},a.bohrRadius={description:"Borh radius",examples:["bohrRadius"]},a.classicalElectronRadius={description:"Classical electron radius",examples:["classicalElectronRadius"]},a.electronMass={description:"Electron mass",examples:["electronMass"]},a.fermiCoupling={description:"Fermi coupling constant",examples:["fermiCoupling"]},a.fineStructure={description:"Fine-structure constant",examples:["fineStructure"]},a.hartreeEnergy={description:"Hartree energy",examples:["hartreeEnergy"]},a.protonMass={description:"Proton mass",examples:["protonMass"]},a.deuteronMass={description:"Deuteron Mass",examples:["deuteronMass"]},a.neutronMass={description:"Neutron mass",examples:["neutronMass"]},a.quantumOfCirculation={description:"Quantum of circulation",examples:["quantumOfCirculation"]},a.rydberg={description:"Rydberg constant",examples:["rydberg"]},a.thomsonCrossSection={description:"Thomson cross section",examples:["thomsonCrossSection"]},a.weakMixingAngle={description:"Weak mixing angle",examples:["weakMixingAngle"]},a.efimovFactor={description:"Efimov factor",examples:["efimovFactor"]},a.atomicMass={description:"Atomic mass constant",examples:["atomicMass"]},a.avogadro={description:"Avogadro's number",examples:["avogadro"]},a.boltzmann={description:"Boltzmann constant",examples:["boltzmann"]},a.faraday={description:"Faraday constant",examples:["faraday"]},a.firstRadiation={description:"First radiation constant",examples:["firstRadiation"]},a.loschmidt={description:"Loschmidt constant at T=273.15 K and p=101.325 kPa",examples:["loschmidt"]},a.gasConstant={description:"Gas 
constant",examples:["gasConstant"]},a.molarPlanckConstant={description:"Molar Planck constant",examples:["molarPlanckConstant"]},a.molarVolume={description:"Molar volume of an ideal gas at T=273.15 K and p=101.325 kPa",examples:["molarVolume"]},a.sackurTetrode={description:"Sackur-Tetrode constant at T=1 K and p=101.325 kPa",examples:["sackurTetrode"]},a.secondRadiation={description:"Second radiation constant",examples:["secondRadiation"]},a.stefanBoltzmann={description:"Stefan-Boltzmann constant",examples:["stefanBoltzmann"]},a.wienDisplacement={description:"Wien displacement law constant",examples:["wienDisplacement"]},a.molarMass={description:"Molar mass constant",examples:["molarMass"]},a.molarMassC12={description:"Molar mass constant of carbon-12",examples:["molarMassC12"]},a.gravity={description:"Standard acceleration of gravity (standard acceleration of free-fall on Earth)",examples:["gravity"]},a.planckLength={description:"Planck length",examples:["planckLength"]},a.planckMass={description:"Planck mass",examples:["planckMass"]},a.planckTime={description:"Planck time",examples:["planckTime"]},a.planckCharge={description:"Planck charge",examples:["planckCharge"]},a.planckTemperature={description:"Planck temperature",examples:["planckTemperature"]},a.lsolve=r(125),a.lup=r(126),a.lusolve=r(127),a.slu=r(128),a.usolve=r(129),a.abs=r(130),a.add=r(131),a.cbrt=r(132),a.ceil=r(133),a.cube=r(134),a.divide=r(135),a.dotDivide=r(136),a.dotMultiply=r(137),a.dotPow=r(138),a.exp=r(139),a.fix=r(140),a.floor=r(141),a.gcd=r(142),a.hypot=r(143),a.lcm=r(144),a.log=r(145),a.log10=r(146),a.mod=r(147),a.multiply=r(148),a.norm=r(149),a.nthRoot=r(150),a.pow=r(151),a.round=r(152),a.sign=r(153),a.sqrt=r(154),a.square=r(155),a.subtract=r(156),a.unaryMinus=r(157),a.unaryPlus=r(158),a.xgcd=r(159),a.bitAnd=r(160),a.bitNot=r(161),a.bitOr=r(162),a.bitXor=r(163),a.leftShift=r(164),a.rightArithShift=r(165),a.rightLogShift=r(166),a.bellNumbers=r(167),a.catalan=r(168),a.composition=r(169),a.stirlingS2=r(170),a.arg=r(171),a.conj=r(172),a.re=r(173),a.im=r(174),a.eval=r(175),a.help=r(176),a.distance=r(177),a.intersect=r(178),a.and=r(179),a.not=r(180),a.or=r(181),a.xor=r(182),a.concat=r(183),a.cross=r(184),a.det=r(185),a.diag=r(186),a.dot=r(187),a.eye=r(188),a.flatten=r(189),a.inv=r(190),a.ones=r(191),a.range=r(192),a.resize=r(193),a.size=r(194),a.squeeze=r(195),a.subset=r(196),a.trace=r(197),a.transpose=r(198),a.zeros=r(199),a.combinations=r(200),a.factorial=r(201),a.gamma=r(202),a.kldivergence=r(203),a.multinomial=r(204),a.permutations=r(205),a.pickRandom=r(206),a.random=r(207),a.randomInt=r(208),a.compare=r(209),a.deepEqual=r(210),a.equal=r(211),a.larger=r(212),a.largerEq=r(213),a.smaller=r(214),a.smallerEq=r(215),a.unequal=r(216),a.max=r(217),a.mean=r(218),a.median=r(219),a.min=r(220),a.mode=r(221),a.prod=r(222),a.quantileSeq=r(223),a.std=r(224),a.sum=r(225),a["var"]=r(226),a.acos=r(227),a.acosh=r(228),a.acot=r(229),a.acoth=r(230),a.acsc=r(231),a.acsch=r(232),a.asec=r(233),a.asech=r(234),a.asin=r(235),a.asinh=r(236),a.atan=r(237),a.atanh=r(238),a.atan2=r(239),a.cos=r(240),a.cosh=r(241),a.cot=r(242),a.coth=r(243),a.csc=r(244),a.csch=r(245),a.sec=r(246),a.sech=r(247),a.sin=r(248),a.sinh=r(249),a.tan=r(250),a.tanh=r(251),a.to=r(252),a.clone=r(253),a.map=r(254),a.partitionSelect=r(255),a.filter=r(256),a.forEach=r(257),a.format=r(258),a.isInteger=r(259),a.isNegative=r(260),a.isNumeric=r(261),a.isPositive=r(262),a.isZero=r(263),a["import"]=r(264),a.sort=r(265),a["typeof"]=r(266),a}t.name="docs",t.path="expression",t.factory=n}
,function(e,t){e.exports={name:"bignumber",category:"Type",syntax:["bignumber(x)"],description:"Create a big number from a number or string.",examples:["0.1 + 0.2","bignumber(0.1) + bignumber(0.2)",'bignumber("7.2")','bignumber("7.2e500")',"bignumber([0.1, 0.2, 0.3])"],seealso:["boolean","complex","fraction","index","matrix","string","unit"]}},function(e,t){e.exports={name:"boolean",category:"Type",syntax:["x","boolean(x)"],description:"Convert a string or number into a boolean.",examples:["boolean(0)","boolean(1)","boolean(3)",'boolean("true")','boolean("false")',"boolean([1, 0, 1, 1])"],seealso:["bignumber","complex","index","matrix","number","string","unit"]}},function(e,t){e.exports={name:"complex",category:"Type",syntax:["complex()","complex(re, im)","complex(string)"],description:"Create a complex number.",examples:["complex()","complex(2, 3)",'complex("7 - 2i")'],seealso:["bignumber","boolean","index","matrix","number","string","unit"]}},function(e,t){e.exports={name:"fraction",category:"Type",syntax:["fraction(num)","fraction(num,den)"],description:"Create a fraction from a number or from a numerator and denominator.",examples:["fraction(0.125)","fraction(1, 3) + fraction(2, 5)"],seealso:["bignumber","boolean","complex","index","matrix","string","unit"]}},function(e,t){e.exports={name:"index",category:"Type",syntax:["[start]","[start:end]","[start:step:end]","[start1, start 2, ...]","[start1:end1, start2:end2, ...]","[start1:step1:end1, start2:step2:end2, ...]"],description:"Create an index to get or replace a subset of a matrix",examples:["[]","[1, 2, 3]","A = [1, 2, 3; 4, 5, 6]","A[1, :]","A[1, 2] = 50","A[0:2, 0:2] = ones(2, 2)"],seealso:["bignumber","boolean","complex","matrix,","number","range","string","unit"]}},function(e,t){e.exports={name:"matrix",category:"Type",syntax:["[]","[a1, b1, ...; a2, b2, ...]","matrix()",'matrix("dense")',"matrix([...])"],description:"Create a matrix.",examples:["[]","[1, 2, 3]","[1, 2, 3; 4, 5, 6]","matrix()","matrix([3, 4])",'matrix([3, 4; 5, 6], "sparse")','matrix([3, 4; 5, 6], "sparse", "number")'],seealso:["bignumber","boolean","complex","index","number","string","unit","sparse"]}},function(e,t){e.exports={name:"number",category:"Type",syntax:["x","number(x)"],description:"Create a number or convert a string or boolean into a number.",examples:["2","2e3","4.05","number(2)",'number("7.2")',"number(true)","number([true, false, true, true])",'number("52cm", "m")'],seealso:["bignumber","boolean","complex","fraction","index","matrix","string","unit"]}},function(e,t){e.exports={name:"sparse",category:"Type",syntax:["sparse()","sparse([a1, b1, ...; a1, b2, ...])",'sparse([a1, b1, ...; a1, b2, ...], "number")'],description:"Create a sparse matrix.",examples:["sparse()","sparse([3, 4; 5, 6])",'sparse([3, 0; 5, 0], "number")'],seealso:["bignumber","boolean","complex","index","number","string","unit","matrix"]}},function(e,t){e.exports={name:"string",category:"Type",syntax:['"text"',"string(x)"],description:"Create a string or convert a value to a string",examples:['"Hello World!"',"string(4.2)","string(3 + 2i)"],seealso:["bignumber","boolean","complex","index","matrix","number","unit"]}},function(e,t){e.exports={name:"unit",category:"Type",syntax:["value unit","unit(value, unit)","unit(string)"],description:"Create a unit.",examples:["5.5 mm","3 inch",'unit(7.1, "kilogram")','unit("23 
deg")'],seealso:["bignumber","boolean","complex","index","matrix","number","string"]}},function(e,t){e.exports={name:"e",category:"Constants",syntax:["e"],description:"Euler's number, the base of the natural logarithm. Approximately equal to 2.71828",examples:["e","e ^ 2","exp(2)","log(e)"],seealso:["exp"]}},function(e,t){e.exports={name:"false",category:"Constants",syntax:["false"],description:"Boolean value false",examples:["false"],seealso:["true"]}},function(e,t){e.exports={name:"i",category:"Constants",syntax:["i"],description:"Imaginary unit, defined as i*i=-1. A complex number is described as a + b*i, where a is the real part, and b is the imaginary part.",examples:["i","i * i","sqrt(-1)"],seealso:[]}},function(e,t){e.exports={name:"Infinity",category:"Constants",syntax:["Infinity"],description:"Infinity, a number which is larger than the maximum number that can be handled by a floating point number.",examples:["Infinity","1 / 0"],seealso:[]}},function(e,t){e.exports={name:"LN2",category:"Constants",syntax:["LN2"],description:"Returns the natural logarithm of 2, approximately equal to 0.693",examples:["LN2","log(2)"],seealso:[]}},function(e,t){e.exports={name:"LN10",category:"Constants",syntax:["LN10"],description:"Returns the natural logarithm of 10, approximately equal to 2.302",examples:["LN10","log(10)"],seealso:[]}},function(e,t){e.exports={name:"LOG2E",category:"Constants",syntax:["LOG2E"],description:"Returns the base-2 logarithm of E, approximately equal to 1.442",examples:["LOG2E","log(e, 2)"],seealso:[]}},function(e,t){e.exports={name:"LOG10E",category:"Constants",syntax:["LOG10E"],description:"Returns the base-10 logarithm of E, approximately equal to 0.434",examples:["LOG10E","log(e, 10)"],seealso:[]}},function(e,t){e.exports={name:"NaN",category:"Constants",syntax:["NaN"],description:"Not a number",examples:["NaN","0 / 0"],seealso:[]}},function(e,t){e.exports={name:"null",category:"Constants",syntax:["null"],description:"Value null",examples:["null"],seealso:["true","false"]}},function(e,t){e.exports={name:"pi",category:"Constants",syntax:["pi"],description:"The number pi is a mathematical constant that is the ratio of a circle's circumference to its diameter, and is approximately equal to 3.14159",examples:["pi","sin(pi/2)"],seealso:["tau"]}},function(e,t){e.exports={name:"phi",category:"Constants",syntax:["phi"],description:"Phi is the golden ratio. Two quantities are in the golden ratio if their ratio is the same as the ratio of their sum to the larger of the two quantities. 
Phi is defined as `(1 + sqrt(5)) / 2` and is approximately 1.618034...",examples:["tau"],seealso:[]}},function(e,t){e.exports={name:"SQRT1_2",category:"Constants",syntax:["SQRT1_2"],description:"Returns the square root of 1/2, approximately equal to 0.707",examples:["SQRT1_2","sqrt(1/2)"],seealso:[]}},function(e,t){e.exports={name:"SQRT2",category:"Constants",syntax:["SQRT2"],description:"Returns the square root of 2, approximately equal to 1.414",examples:["SQRT2","sqrt(2)"],seealso:[]}},function(e,t){e.exports={name:"tau",category:"Constants",syntax:["tau"],description:"Tau is the ratio constant of a circle's circumference to radius, equal to 2 * pi, approximately 6.2832.",examples:["tau","2 * pi"],seealso:["pi"]}},function(e,t){e.exports={name:"true",category:"Constants",syntax:["true"],description:"Boolean value true",examples:["true"],seealso:["false"]}},function(e,t){e.exports={name:"version",category:"Constants",syntax:["version"],description:"A string with the version number of math.js",examples:["version"],seealso:[]}},function(e,t){e.exports={name:"lsolve",category:"Algebra",syntax:["x=lsolve(L, b)"],description:"Solves the linear system L * x = b where L is an [n x n] lower triangular matrix and b is a [n] column vector.",examples:["a = [-2, 3; 2, 1]","b = [11, 9]","x = lsolve(a, b)"],seealso:["lup","lusolve","usolve","matrix","sparse"]}},function(e,t){e.exports={name:"lup",category:"Algebra",syntax:["lup(m)"],description:"Calculate the Matrix LU decomposition with partial pivoting. Matrix A is decomposed in three matrices (L, U, P) where P * A = L * U",examples:["lup([[2, 1], [1, 4]])","lup(matrix([[2, 1], [1, 4]]))","lup(sparse([[2, 1], [1, 4]]))"],seealso:["lusolve","lsolve","usolve","matrix","sparse","slu"]}},function(e,t){e.exports={name:"lusolve",category:"Algebra",syntax:["x=lusolve(A, b)","x=lusolve(lu, b)"],description:"Solves the linear system A * x = b where A is an [n x n] matrix and b is a [n] column vector.",examples:["a = [-2, 3; 2, 1]","b = [11, 9]","x = lusolve(a, b)"],seealso:["lup","slu","lsolve","usolve","matrix","sparse"]}},function(e,t){e.exports={name:"slu",category:"Algebra",syntax:["slu(A, order, threshold)"],description:"Calculate the Matrix LU decomposition with full pivoting. Matrix A is decomposed in two matrices (L, U) and two permutation vectors (pinv, q) where P * A * Q = L * U",examples:["slu(sparse([4.5, 0, 3.2, 0; 3.1, 2.9, 0, 0.9; 0, 1.7, 3, 0; 3.5, 0.4, 0, 1]), 1, 0.001)"],seealso:["lusolve","lsolve","usolve","matrix","sparse","lup"]}},function(e,t){e.exports={name:"usolve",category:"Algebra",syntax:["x=usolve(U, b)"],description:"Solves the linear system U * x = b where U is an [n x n] upper triangular matrix and b is a [n] column vector.",examples:["x=usolve(sparse([1, 1, 1, 1; 0, 1, 1, 1; 0, 0, 1, 1; 0, 0, 0, 1]), [1; 2; 3; 4])"],seealso:["lup","lusolve","lsolve","matrix","sparse"]}},function(e,t){e.exports={name:"abs",category:"Arithmetic",syntax:["abs(x)"],description:"Compute the absolute value.",examples:["abs(3.5)","abs(-4.2)"],seealso:["sign"]}},function(e,t){e.exports={name:"add",category:"Operators",syntax:["x + y","add(x, y)"],description:"Add two values.",examples:["a = 2.1 + 3.6","a - 3.6","3 + 2i","3 cm + 2 inch",'"2.3" + "4"'],seealso:["subtract"]}},function(e,t){e.exports={name:"cbrt",category:"Arithmetic",syntax:["cbrt(x)","cbrt(x, allRoots)"],description:"Compute the cubic root value. If x = y * y * y, then y is the cubic root of x. 
When `x` is a number or complex number, an optional second argument `allRoots` can be provided to return all three cubic roots. If not provided, the principal root is returned",examples:["cbrt(64)","cube(4)","cbrt(-8)","cbrt(2 + 3i)","cbrt(8i)","cbrt(8i, true)","cbrt(27 m^3)"],seealso:["square","sqrt","cube","multiply"]}},function(e,t){e.exports={name:"ceil",category:"Arithmetic",syntax:["ceil(x)"],description:"Round a value towards plus infinity. If x is complex, both real and imaginary part are rounded towards plus infinity.",examples:["ceil(3.2)","ceil(3.8)","ceil(-4.2)"],seealso:["floor","fix","round"]}},function(e,t){e.exports={name:"cube",category:"Arithmetic",syntax:["cube(x)"],description:"Compute the cube of a value. The cube of x is x * x * x.",examples:["cube(2)","2^3","2 * 2 * 2"],seealso:["multiply","square","pow"]}},function(e,t){e.exports={name:"divide",category:"Operators",syntax:["x / y","divide(x, y)"],description:"Divide two values.",examples:["a = 2 / 3","a * 3","4.5 / 2","3 + 4 / 2","(3 + 4) / 2","18 km / 4.5"],seealso:["multiply"]}},function(e,t){e.exports={name:"dotDivide",category:"Operators",syntax:["x ./ y","dotDivide(x, y)"],description:"Divide two values element wise.",examples:["a = [1, 2, 3; 4, 5, 6]","b = [2, 1, 1; 3, 2, 5]","a ./ b"],seealso:["multiply","dotMultiply","divide"]}},function(e,t){e.exports={name:"dotMultiply",category:"Operators",syntax:["x .* y","dotMultiply(x, y)"],description:"Multiply two values element wise.",examples:["a = [1, 2, 3; 4, 5, 6]","b = [2, 1, 1; 3, 2, 5]","a .* b"],seealso:["multiply","divide","dotDivide"]}},function(e,t){e.exports={name:"dotpow",category:"Operators",syntax:["x .^ y","dotpow(x, y)"],description:"Calculates the power of x to y element wise.",examples:["a = [1, 2, 3; 4, 5, 6]","a .^ 2"],seealso:["pow"]}},function(e,t){e.exports={name:"exp",category:"Arithmetic",syntax:["exp(x)"],description:"Calculate the exponent of a value.",examples:["exp(1.3)","e ^ 1.3","log(exp(1.3))","x = 2.4","(exp(i*x) == cos(x) + i*sin(x)) # Euler's formula"],seealso:["pow","log"]}},function(e,t){e.exports={name:"fix",category:"Arithmetic",syntax:["fix(x)"],description:"Round a value towards zero. If x is complex, both real and imaginary part are rounded towards zero.",examples:["fix(3.2)","fix(3.8)","fix(-4.2)","fix(-4.8)"],seealso:["ceil","floor","round"]}},function(e,t){e.exports={name:"floor",category:"Arithmetic",syntax:["floor(x)"],description:"Round a value towards minus infinity.If x is complex, both real and imaginary part are rounded towards minus infinity.",examples:["floor(3.2)","floor(3.8)","floor(-4.2)"],seealso:["ceil","fix","round"]}},function(e,t){e.exports={name:"gcd",category:"Arithmetic",syntax:["gcd(a, b)","gcd(a, b, c, ...)"],description:"Compute the greatest common divisor.",examples:["gcd(8, 12)","gcd(-4, 6)","gcd(25, 15, -10)"],seealso:["lcm","xgcd"]}},function(e,t){e.exports={name:"hypot",category:"Arithmetic",syntax:["hypot(a, b, c, ...)","hypot([a, b, c, ...])"],description:"Calculate the hypotenusa of a list with values. ",examples:["hypot(3, 4)","sqrt(3^2 + 4^2)","hypot(-2)","hypot([3, 4, 5])"],seealso:["abs","norm"]}},function(e,t){e.exports={ -name:"lcm",category:"Arithmetic",syntax:["lcm(x, y)"],description:"Compute the least common multiple.",examples:["lcm(4, 6)","lcm(6, 21)","lcm(6, 21, 5)"],seealso:["gcd"]}},function(e,t){e.exports={name:"log",category:"Arithmetic",syntax:["log(x)","log(x, base)"],description:"Compute the logarithm of a value. 
If no base is provided, the natural logarithm of x is calculated. If base if provided, the logarithm is calculated for the specified base. log(x, base) is defined as log(x) / log(base).",examples:["log(3.5)","a = log(2.4)","exp(a)","10 ^ 4","log(10000, 10)","log(10000) / log(10)","b = log(1024, 2)","2 ^ b"],seealso:["exp","log10"]}},function(e,t){e.exports={name:"log10",category:"Arithmetic",syntax:["log10(x)"],description:"Compute the 10-base logarithm of a value.",examples:["log10(0.00001)","log10(10000)","10 ^ 4","log(10000) / log(10)","log(10000, 10)"],seealso:["exp","log"]}},function(e,t){e.exports={name:"mod",category:"Operators",syntax:["x % y","x mod y","mod(x, y)"],description:"Calculates the modulus, the remainder of an integer division.",examples:["7 % 3","11 % 2","10 mod 4","function isOdd(x) = x % 2","isOdd(2)","isOdd(3)"],seealso:["divide"]}},function(e,t){e.exports={name:"multiply",category:"Operators",syntax:["x * y","multiply(x, y)"],description:"multiply two values.",examples:["a = 2.1 * 3.4","a / 3.4","2 * 3 + 4","2 * (3 + 4)","3 * 2.1 km"],seealso:["divide"]}},function(e,t){e.exports={name:"norm",category:"Arithmetic",syntax:["norm(x)","norm(x, p)"],description:"Calculate the norm of a number, vector or matrix.",examples:["abs(-3.5)","norm(-3.5)","norm(3 - 4i))","norm([1, 2, -3], Infinity)","norm([1, 2, -3], -Infinity)","norm([3, 4], 2)","norm([[1, 2], [3, 4]], 1)","norm([[1, 2], [3, 4]], 'inf')","norm([[1, 2], [3, 4]], 'fro')"]}},function(e,t){e.exports={name:"nthRoot",category:"Arithmetic",syntax:["nthRoot(a)","nthRoot(a, root)"],description:'Calculate the nth root of a value. The principal nth root of a positive real number A, is the positive real solution of the equation "x^root = A".',examples:["4 ^ 3","nthRoot(64, 3)","nthRoot(9, 2)","sqrt(9)"],seealso:["sqrt","pow"]}},function(e,t){e.exports={name:"pow",category:"Operators",syntax:["x ^ y","pow(x, y)"],description:"Calculates the power of x to y, x^y.",examples:["2^3 = 8","2*2*2","1 + e ^ (pi * i)"],seealso:["multiply"]}},function(e,t){e.exports={name:"round",category:"Arithmetic",syntax:["round(x)","round(x, n)"],description:"round a value towards the nearest integer.If x is complex, both real and imaginary part are rounded towards the nearest integer. When n is specified, the value is rounded to n decimals.",examples:["round(3.2)","round(3.8)","round(-4.2)","round(-4.8)","round(pi, 3)","round(123.45678, 2)"],seealso:["ceil","floor","fix"]}},function(e,t){e.exports={name:"sign",category:"Arithmetic",syntax:["sign(x)"],description:"Compute the sign of a value. The sign of a value x is 1 when x>1, -1 when x<0, and 0 when x=0.",examples:["sign(3.5)","sign(-4.2)","sign(0)"],seealso:["abs"]}},function(e,t){e.exports={name:"sqrt",category:"Arithmetic",syntax:["sqrt(x)"],description:"Compute the square root value. If x = y * y, then y is the square root of x.",examples:["sqrt(25)","5 * 5","sqrt(-1)"],seealso:["square","multiply"]}},function(e,t){e.exports={name:"square",category:"Arithmetic",syntax:["square(x)"],description:"Compute the square of a value. 
The square of x is x * x.",examples:["square(3)","sqrt(9)","3^2","3 * 3"],seealso:["multiply","pow","sqrt","cube"]}},function(e,t){e.exports={name:"subtract",category:"Operators",syntax:["x - y","subtract(x, y)"],description:"subtract two values.",examples:["a = 5.3 - 2","a + 2","2/3 - 1/6","2 * 3 - 3","2.1 km - 500m"],seealso:["add"]}},function(e,t){e.exports={name:"unaryMinus",category:"Operators",syntax:["-x","unaryMinus(x)"],description:"Inverse the sign of a value. Converts booleans and strings to numbers.",examples:["-4.5","-(-5.6)",'-"22"'],seealso:["add","subtract","unaryPlus"]}},function(e,t){e.exports={name:"unaryPlus",category:"Operators",syntax:["+x","unaryPlus(x)"],description:"Converts booleans and strings to numbers.",examples:["+true",'+"2"'],seealso:["add","subtract","unaryMinus"]}},function(e,t){e.exports={name:"xgcd",category:"Arithmetic",syntax:["xgcd(a, b)"],description:"Calculate the extended greatest common divisor for two values",examples:["xgcd(8, 12)","gcd(8, 12)","xgcd(36163, 21199)"],seealso:["gcd","lcm"]}},function(e,t){e.exports={name:"bitAnd",category:"Bitwise",syntax:["x & y","bitAnd(x, y)"],description:"Bitwise AND operation. Performs the logical AND operation on each pair of the corresponding bits of the two given values by multiplying them. If both bits in the compared position are 1, the bit in the resulting binary representation is 1, otherwise, the result is 0",examples:["5 & 3","bitAnd(53, 131)","[1, 12, 31] & 42"],seealso:["bitNot","bitOr","bitXor","leftShift","rightArithShift","rightLogShift"]}},function(e,t){e.exports={name:"bitNot",category:"Bitwise",syntax:["~x","bitNot(x)"],description:"Bitwise NOT operation. Performs a logical negation on each bit of the given value. Bits that are 0 become 1, and those that are 1 become 0.",examples:["~1","~2","bitNot([2, -3, 4])"],seealso:["bitAnd","bitOr","bitXor","leftShift","rightArithShift","rightLogShift"]}},function(e,t){e.exports={name:"bitOr",category:"Bitwise",syntax:["x | y","bitOr(x, y)"],description:"Bitwise OR operation. Performs the logical inclusive OR operation on each pair of corresponding bits of the two given values. The result in each position is 1 if the first bit is 1 or the second bit is 1 or both bits are 1, otherwise, the result is 0.",examples:["5 | 3","bitOr([1, 2, 3], 4)"],seealso:["bitAnd","bitNot","bitXor","leftShift","rightArithShift","rightLogShift"]}},function(e,t){e.exports={name:"bitXor",category:"Bitwise",syntax:["bitXor(x, y)"],description:"Bitwise XOR operation, exclusive OR. Performs the logical exclusive OR operation on each pair of corresponding bits of the two given values. 
The result in each position is 1 if only the first bit is 1 or only the second bit is 1, but will be 0 if both are 0 or both are 1.",examples:["bitOr(1, 2)","bitXor([2, 3, 4], 4)"],seealso:["bitAnd","bitNot","bitOr","leftShift","rightArithShift","rightLogShift"]}},function(e,t){e.exports={name:"leftShift",category:"Bitwise",syntax:["x << y","leftShift(x, y)"],description:"Bitwise left logical shift of a value x by y number of bits.",examples:["4 << 1","8 >> 1"],seealso:["bitAnd","bitNot","bitOr","bitXor","rightArithShift","rightLogShift"]}},function(e,t){e.exports={name:"rightArithShift",category:"Bitwise",syntax:["x >> y","leftShift(x, y)"],description:"Bitwise right arithmetic shift of a value x by y number of bits.",examples:["8 >> 1","4 << 1","-12 >> 2"],seealso:["bitAnd","bitNot","bitOr","bitXor","leftShift","rightLogShift"]}},function(e,t){e.exports={name:"rightLogShift",category:"Bitwise",syntax:["x >> y","leftShift(x, y)"],description:"Bitwise right logical shift of a value x by y number of bits.",examples:["8 >>> 1","4 << 1","-12 >>> 2"],seealso:["bitAnd","bitNot","bitOr","bitXor","leftShift","rightArithShift"]}},function(e,t){e.exports={name:"bellNumbers",category:"Combinatorics",syntax:["bellNumbers(n)"],description:"The Bell Numbers count the number of partitions of a set. A partition is a pairwise disjoint subset of S whose union is S. `bellNumbers` only takes integer arguments. The following condition must be enforced: n >= 0.",examples:["bellNumbers(3)","bellNumbers(8)"],seealso:["stirlingS2"]}},function(e,t){e.exports={name:"catalan",category:"Combinatorics",syntax:["catalan(n)"],description:"The Catalan Numbers enumerate combinatorial structures of many different types. catalan only takes integer arguments. The following condition must be enforced: n >= 0.",examples:["catalan(3)","catalan(8)"],seealso:["bellNumbers"]}},function(e,t){e.exports={name:"composition",category:"Combinatorics",syntax:["composition(n, k)"],description:"The composition counts of n into k parts. composition only takes integer arguments. The following condition must be enforced: k <= n.",examples:["composition(5, 3)"],seealso:["combinations"]}},function(e,t){e.exports={name:"stirlingS2",category:"Combinatorics",syntax:["stirlingS2(n, k)"],description:"he Stirling numbers of the second kind, counts the number of ways to partition a set of n labelled objects into k nonempty unlabelled subsets. `stirlingS2` only takes integer arguments. The following condition must be enforced: k <= n. If n = k or k = 1, then s(n,k) = 1.",examples:["stirlingS2(5, 3)"],seealso:["bellNumbers"]}},function(e,t){e.exports={name:"arg",category:"Complex",syntax:["arg(x)"],description:"Compute the argument of a complex value. If x = a+bi, the argument is computed as atan2(b, a).",examples:["arg(2 + 2i)","atan2(3, 2)","arg(2 + 3i)"],seealso:["re","im","conj","abs"]}},function(e,t){e.exports={name:"conj",category:"Complex",syntax:["conj(x)"],description:"Compute the complex conjugate of a complex value. 
If x = a+bi, the complex conjugate is a-bi.",examples:["conj(2 + 3i)","conj(2 - 3i)","conj(-5.2i)"],seealso:["re","im","abs","arg"]}},function(e,t){e.exports={name:"re",category:"Complex",syntax:["re(x)"],description:"Get the real part of a complex number.",examples:["re(2 + 3i)","im(2 + 3i)","re(-5.2i)","re(2.4)"],seealso:["im","conj","abs","arg"]}},function(e,t){e.exports={name:"im",category:"Complex",syntax:["im(x)"],description:"Get the imaginary part of a complex number.",examples:["im(2 + 3i)","re(2 + 3i)","im(-5.2i)","im(2.4)"],seealso:["re","conj","abs","arg"]}},function(e,t){e.exports={name:"eval",category:"Expression",syntax:["eval(expression)","eval([expr1, expr2, expr3, ...])"],description:"Evaluate an expression or an array with expressions.",examples:['eval("2 + 3")','eval("sqrt(" + 4 + ")")'],seealso:[]}},function(e,t){e.exports={name:"help",category:"Expression",syntax:["help(object)","help(string)"],description:"Display documentation on a function or data type.",examples:["help(sqrt)",'help("complex")'],seealso:[]}},function(e,t){e.exports={name:"distance",category:"Geometry",syntax:["distance([x1, y1], [x2, y2])","distance([[x1, y1], [x2, y2])"],description:"Calculates the Euclidean distance between two points.",examples:["distance([0,0], [4,4])","distance([[0,0], [4,4]])"],seealso:[]}},function(e,t){e.exports={name:"intersect",category:"Geometry",syntax:["intersect(expr1, expr2, expr3, expr4)","intersect(expr1, expr2, expr3)"],description:"Computes the intersection point of lines and/or planes.",examples:["intersect([0, 0], [10, 10], [10, 0], [0, 10])","intersect([1, 0, 1], [4, -2, 2], [1, 1, 1, 6])"],seealso:[]}},function(e,t){e.exports={name:"and",category:"Logical",syntax:["x and y","and(x, y)"],description:"Logical and. Test whether two values are both defined with a nonzero/nonempty value.",examples:["true and false","true and true","2 and 4"],seealso:["not","or","xor"]}},function(e,t){e.exports={name:"not",category:"Logical",syntax:["not x","not(x)"],description:"Logical not. Flips the boolean value of given argument.",examples:["not true","not false","not 2","not 0"],seealso:["and","or","xor"]}},function(e,t){e.exports={name:"or",category:"Logical",syntax:["x or y","or(x, y)"],description:"Logical or. Test if at least one value is defined with a nonzero/nonempty value.",examples:["true or false","false or false","0 or 4"],seealso:["not","and","xor"]}},function(e,t){e.exports={name:"xor",category:"Logical",syntax:["x or y","or(x, y)"],description:"Logical exclusive or, xor. Test whether one and only one value is defined with a nonzero/nonempty value.",examples:["true xor false","false xor false","true xor true","0 or 4"],seealso:["not","and","or"]}},function(e,t){e.exports={name:"concat",category:"Matrix",syntax:["concat(A, B, C, ...)","concat(A, B, C, ..., dim)"],description:"Concatenate matrices. By default, the matrices are concatenated by the last dimension. 
The dimension on which to concatenate can be provided as last argument.",examples:["A = [1, 2; 5, 6]","B = [3, 4; 7, 8]","concat(A, B)","concat(A, B, 1)","concat(A, B, 2)"],seealso:["det","diag","eye","inv","ones","range","size","squeeze","subset","trace","transpose","zeros"]}},function(e,t){e.exports={name:"cross",category:"Matrix",syntax:["cross(A, B)"],description:"Calculate the cross product for two vectors in three dimensional space.",examples:["cross([1, 1, 0], [0, 1, 1])","cross([3, -3, 1], [4, 9, 2])","cross([2, 3, 4], [5, 6, 7])"],seealso:["multiply","dot"]}},function(e,t){e.exports={name:"det",category:"Matrix",syntax:["det(x)"],description:"Calculate the determinant of a matrix",examples:["det([1, 2; 3, 4])","det([-2, 2, 3; -1, 1, 3; 2, 0, -1])"],seealso:["concat","diag","eye","inv","ones","range","size","squeeze","subset","trace","transpose","zeros"]}},function(e,t){e.exports={name:"diag",category:"Matrix",syntax:["diag(x)","diag(x, k)"],description:"Create a diagonal matrix or retrieve the diagonal of a matrix. When x is a vector, a matrix with the vector values on the diagonal will be returned. When x is a matrix, a vector with the diagonal values of the matrix is returned. When k is provided, the k-th diagonal will be filled in or retrieved, if k is positive, the values are placed on the super diagonal. When k is negative, the values are placed on the sub diagonal.",examples:["diag(1:3)","diag(1:3, 1)","a = [1, 2, 3; 4, 5, 6; 7, 8, 9]","diag(a)"],seealso:["concat","det","eye","inv","ones","range","size","squeeze","subset","trace","transpose","zeros"]}},function(e,t){e.exports={name:"dot",category:"Matrix",syntax:["dot(A, B)"],description:"Calculate the dot product of two vectors. The dot product of A = [a1, a2, a3, ..., an] and B = [b1, b2, b3, ..., bn] is defined as dot(A, B) = a1 * b1 + a2 * b2 + a3 * b3 + ... + an * bn",examples:["dot([2, 4, 1], [2, 2, 3])","[2, 4, 1] * [2, 2, 3]"],seealso:["multiply","cross"]}},function(e,t){e.exports={name:"eye",category:"Matrix",syntax:["eye(n)","eye(m, n)","eye([m, n])","eye"],description:"Returns the identity matrix with size m-by-n. The matrix has ones on the diagonal and zeros elsewhere.",examples:["eye(3)","eye(3, 5)","a = [1, 2, 3; 4, 5, 6]","eye(size(a))"],seealso:["concat","det","diag","inv","ones","range","size","squeeze","subset","trace","transpose","zeros"]}},function(e,t){e.exports={name:"flatten",category:"Matrix",syntax:["flatten(x)"],description:"Flatten a multi dimensional matrix into a single dimensional matrix.",examples:["a = [1, 2, 3; 4, 5, 6]","size(a)","b = flatten(a)","size(b)"],seealso:["concat","resize","size","squeeze"]}},function(e,t){e.exports={name:"inv",category:"Matrix",syntax:["inv(x)"],description:"Calculate the inverse of a matrix",examples:["inv([1, 2; 3, 4])","inv(4)","1 / 4"],seealso:["concat","det","diag","eye","ones","range","size","squeeze","subset","trace","transpose","zeros"]}},function(e,t){e.exports={name:"ones",category:"Matrix",syntax:["ones(m)","ones(m, n)","ones(m, n, p, ...)","ones([m])","ones([m, n])","ones([m, n, p, ...])","ones"],description:"Create a matrix containing ones.",examples:["ones(3)","ones(3, 5)","ones([2,3]) * 4.5","a = [1, 2, 3; 4, 5, 6]","ones(size(a))"],seealso:["concat","det","diag","eye","inv","range","size","squeeze","subset","trace","transpose","zeros"]}},function(e,t){e.exports={name:"range",category:"Type",syntax:["start:end","start:step:end","range(start, end)","range(start, end, step)","range(string)"],description:"Create a range. 
Lower bound of the range is included, upper bound is excluded.",examples:["1:5","3:-1:-3","range(3, 7)","range(0, 12, 2)",'range("4:10")',"a = [1, 2, 3, 4; 5, 6, 7, 8]","a[1:2, 1:2]"],seealso:["concat","det","diag","eye","inv","ones","size","squeeze","subset","trace","transpose","zeros"]}},function(e,t){e.exports={name:"resize",category:"Matrix",syntax:["resize(x, size)","resize(x, size, defaultValue)"],description:"Resize a matrix.",examples:["resize([1,2,3,4,5], [3])","resize([1,2,3], [5])","resize([1,2,3], [5], -1)","resize(2, [2, 3])",'resize("hello", [8], "!")'],seealso:["size","subset","squeeze"]}},function(e,t){e.exports={name:"size",category:"Matrix",syntax:["size(x)"],description:"Calculate the size of a matrix.",examples:["size(2.3)",'size("hello world")',"a = [1, 2; 3, 4; 5, 6]","size(a)","size(1:6)"],seealso:["concat","det","diag","eye","inv","ones","range","squeeze","subset","trace","transpose","zeros"]}},function(e,t){e.exports={name:"squeeze",category:"Matrix",syntax:["squeeze(x)"],description:"Remove inner and outer singleton dimensions from a matrix.",examples:["a = zeros(3,2,1)","size(squeeze(a))","b = zeros(1,1,3)","size(squeeze(b))"],seealso:["concat","det","diag","eye","inv","ones","range","size","subset","trace","transpose","zeros"]}},function(e,t){e.exports={name:"subset",category:"Matrix",syntax:["value(index)","value(index) = replacement","subset(value, [index])","subset(value, [index], replacement)"],description:"Get or set a subset of a matrix or string. Indexes are one-based. Both the ranges lower-bound and upper-bound are included.",examples:["d = [1, 2; 3, 4]","e = []","e[1, 1:2] = [5, 6]","e[2, :] = [7, 8]","f = d * e","f[2, 1]","f[:, 1]"],seealso:["concat","det","diag","eye","inv","ones","range","size","squeeze","trace","transpose","zeros"]}},function(e,t){e.exports={name:"trace",category:"Matrix",syntax:["trace(A)"],description:"Calculate the trace of a matrix: the sum of the elements on the main diagonal of a square matrix.",examples:["A = [1, 2, 3; -1, 2, 3; 2, 0, 3]","trace(A)"],seealso:["concat","det","diag","eye","inv","ones","range","size","squeeze","subset","transpose","zeros"]}},function(e,t){e.exports={name:"transpose",category:"Matrix",syntax:["x'","transpose(x)"],description:"Transpose a matrix",examples:["a = [1, 2, 3; 4, 5, 6]","a'","transpose(a)"],seealso:["concat","det","diag","eye","inv","ones","range","size","squeeze","subset","trace","zeros"]}},function(e,t){e.exports={name:"zeros",category:"Matrix",syntax:["zeros(m)","zeros(m, n)","zeros(m, n, p, ...)","zeros([m])","zeros([m, n])","zeros([m, n, p, ...])","zeros"],description:"Create a matrix containing zeros.",examples:["zeros(3)","zeros(3, 5)","a = [1, 2, 3; 4, 5, 6]","zeros(size(a))"],seealso:["concat","det","diag","eye","inv","ones","range","size","squeeze","subset","trace","transpose"]}},function(e,t){e.exports={name:"combinations",category:"Probability",syntax:["combinations(n, k)"],description:"Compute the number of combinations of n items taken k at a time",examples:["combinations(7, 5)"],seealso:["permutations","factorial"]}},function(e,t){e.exports={name:"factorial",category:"Probability",syntax:["kldivergence(x, y)"],description:"Compute the factorial of a value",examples:["5!","5 * 4 * 3 * 2 * 1","3!"],seealso:["combinations","permutations","gamma"]}},function(e,t){e.exports={name:"gamma",category:"Probability",syntax:["gamma(n)"],description:"Compute the gamma function. 
For small values, the Lanczos approximation is used, and for large values the extended Stirling approximation.",examples:["gamma(4)","3!","gamma(1/2)","sqrt(pi)"],seealso:["factorial"]}},function(e,t){e.exports={name:"kldivergence",category:"Probability",syntax:["n!","factorial(n)"],description:"Calculate the Kullback-Leibler (KL) divergence between two distributions.",examples:["math.kldivergence([0.7,0.5,0.4], [0.2,0.9,0.5])"],seealso:[]}},function(e,t){e.exports={name:"multinomial",category:"Probability",syntax:["multinomial(A)"],description:"Multinomial Coefficients compute the number of ways of picking a1, a2, ..., ai unordered outcomes from `n` possibilities. multinomial takes one array of integers as an argument. The following condition must be enforced: every ai <= 0.",examples:["multinomial([1, 2, 1])"],seealso:["combinations","factorial"]}},function(e,t){e.exports={name:"permutations",category:"Probability",syntax:["permutations(n)","permutations(n, k)"],description:"Compute the number of permutations of n items taken k at a time",examples:["permutations(5)","permutations(5, 3)"],seealso:["combinations","factorial"]}},function(e,t){e.exports={name:"pickRandom",category:"Probability",syntax:["pickRandom(array)"],description:"Pick a random entry from a given array.",examples:["pickRandom(0:10)","pickRandom([1, 3, 1, 6])"],seealso:["random","randomInt"]}},function(e,t){e.exports={name:"random",category:"Probability",syntax:["random()","random(max)","random(min, max)","random(size)","random(size, max)","random(size, min, max)"],description:"Return a random number.",examples:["random()","random(10, 20)","random([2, 3])"],seealso:["pickRandom","randomInt"]}},function(e,t){e.exports={name:"randInt",category:"Probability",syntax:["randInt(max)","randInt(min, max)","randInt(size)","randInt(size, max)","randInt(size, min, max)"],description:"Return a random integer number",examples:["randInt(10, 20)","randInt([2, 3], 10)"],seealso:["pickRandom","random"]}},function(e,t){e.exports={name:"compare",category:"Relational",syntax:["compare(x, y)"],description:"Compare two values. Returns 1 if x is larger than y, -1 if x is smaller than y, and 0 if x and y are equal.",examples:["compare(2, 3)","compare(3, 2)","compare(2, 2)","compare(5cm, 40mm)","compare(2, [1, 2, 3])"],seealso:["equal","unequal","smaller","smallerEq","largerEq"]}},function(e,t){e.exports={name:"deepEqual",category:"Relational",syntax:["deepEqual(x, y)"],description:"Check equality of two matrices element wise. Returns true if the size of both matrices is equal and when and each of the elements are equal.",examples:["[1,3,4] == [1,3,4]","[1,3,4] == [1,3]"],seealso:["equal","unequal","smaller","larger","smallerEq","largerEq","compare"]}},function(e,t){e.exports={name:"equal",category:"Relational",syntax:["x == y","equal(x, y)"],description:"Check equality of two values. Returns true if the values are equal, and false if not.",examples:["2+2 == 3","2+2 == 4","a = 3.2","b = 6-2.8","a == b","50cm == 0.5m"],seealso:["unequal","smaller","larger","smallerEq","largerEq","compare","deepEqual"]}},function(e,t){e.exports={name:"larger",category:"Relational",syntax:["x > y","larger(x, y)"],description:"Check if value x is larger than y. 
Returns true if x is larger than y, and false if not.",examples:["2 > 3","5 > 2*2","a = 3.3","b = 6-2.8","(a > b)","(b < a)","5 cm > 2 inch"],seealso:["equal","unequal","smaller","smallerEq","largerEq","compare"]}},function(e,t){e.exports={name:"largerEq",category:"Relational",syntax:["x >= y","largerEq(x, y)"],description:"Check if value x is larger or equal to y. Returns true if x is larger or equal to y, and false if not.",examples:["2 > 1+1","2 >= 1+1","a = 3.2","b = 6-2.8","(a > b)"],seealso:["equal","unequal","smallerEq","smaller","largerEq","compare"]}},function(e,t){e.exports={name:"smaller",category:"Relational",syntax:["x < y","smaller(x, y)"],description:"Check if value x is smaller than value y. Returns true if x is smaller than y, and false if not.",examples:["2 < 3","5 < 2*2","a = 3.3","b = 6-2.8","(a < b)","5 cm < 2 inch"],seealso:["equal","unequal","larger","smallerEq","largerEq","compare"]}},function(e,t){e.exports={name:"smallerEq",category:"Relational",syntax:["x <= y","smallerEq(x, y)"],description:"Check if value x is smaller or equal to value y. Returns true if x is smaller than y, and false if not.",examples:["2 < 1+1","2 <= 1+1","a = 3.2","b = 6-2.8","(a < b)"],seealso:["equal","unequal","larger","smaller","largerEq","compare"]}},function(e,t){e.exports={name:"unequal",category:"Relational",syntax:["x != y","unequal(x, y)"],description:"Check unequality of two values. Returns true if the values are unequal, and false if they are equal.",examples:["2+2 != 3","2+2 != 4","a = 3.2","b = 6-2.8","a != b","50cm != 0.5m","5 cm != 2 inch"],seealso:["equal","smaller","larger","smallerEq","largerEq","compare","deepEqual"]}},function(e,t){e.exports={name:"max",category:"Statistics",syntax:["max(a, b, c, ...)","max(A)","max(A, dim)"],description:"Compute the maximum value of a list of values.",examples:["max(2, 3, 4, 1)","max([2, 3, 4, 1])","max([2, 5; 4, 3])","max([2, 5; 4, 3], 1)","max([2, 5; 4, 3], 2)","max(2.7, 7.1, -4.5, 2.0, 4.1)","min(2.7, 7.1, -4.5, 2.0, 4.1)"],seealso:["mean","median","min","prod","std","sum","var"]}},function(e,t){e.exports={name:"mean",category:"Statistics",syntax:["mean(a, b, c, ...)","mean(A)","mean(A, dim)"],description:"Compute the arithmetic mean of a list of values.",examples:["mean(2, 3, 4, 1)","mean([2, 3, 4, 1])","mean([2, 5; 4, 3])","mean([2, 5; 4, 3], 1)","mean([2, 5; 4, 3], 2)","mean([1.0, 2.7, 3.2, 4.0])"],seealso:["max","median","min","prod","std","sum","var"]}},function(e,t){e.exports={name:"median",category:"Statistics",syntax:["median(a, b, c, ...)","median(A)"],description:"Compute the median of all values. The values are sorted and the middle value is returned. In case of an even number of values, the average of the two middle values is returned.",examples:["median(5, 2, 7)","median([3, -1, 5, 7])"],seealso:["max","mean","min","prod","std","sum","var"]}},function(e,t){e.exports={name:"min",category:"Statistics",syntax:["min(a, b, c, ...)","min(A)","min(A, dim)"],description:"Compute the minimum value of a list of values.",examples:["min(2, 3, 4, 1)","min([2, 3, 4, 1])","min([2, 5; 4, 3])","min([2, 5; 4, 3], 1)","min([2, 5; 4, 3], 2)","min(2.7, 7.1, -4.5, 2.0, 4.1)","max(2.7, 7.1, -4.5, 2.0, 4.1)"],seealso:["max","mean","median","prod","std","sum","var"]}},function(e,t){e.exports={name:"mode",category:"Statistics",syntax:["mode(a, b, c, ...)","mode(A)","mode(A, a, b, B, c, ...)"],description:"Computes the mode of all values as an array. 
In case mode being more than one, multiple values are returned in an array.",examples:["mode(5, 2, 7)","mode([3, -1, 5, 7])"],seealso:["max","mean","min","median","prod","std","sum","var"]}},function(e,t){e.exports={name:"prod",category:"Statistics",syntax:["prod(a, b, c, ...)","prod(A)"],description:"Compute the product of all values.",examples:["prod(2, 3, 4)","prod([2, 3, 4])","prod([2, 5; 4, 3])"],seealso:["max","mean","min","median","min","std","sum","var"]}},function(e,t){e.exports={name:"quantileSeq",category:"Statistics",syntax:["quantileSeq(A, prob[, sorted])","quantileSeq(A, [prob1, prob2, ...][, sorted])","quantileSeq(A, N[, sorted])"],description:"Compute the prob order quantile of a matrix or a list with values. The sequence is sorted and the middle value is returned. Supported types of sequence values are: Number, BigNumber, Unit Supported types of probablity are: Number, BigNumber. \n\nIn case of a (multi dimensional) array or matrix, the prob order quantile of all elements will be calculated.",examples:["quantileSeq([3, -1, 5, 7], 0.5)","quantileSeq([3, -1, 5, 7], [1/3, 2/3])","quantileSeq([3, -1, 5, 7], 2)","quantileSeq([-1, 3, 5, 7], 0.5, true)"],seealso:["mean","median","min","max","prod","std","sum","var"]}},function(e,t){e.exports={name:"std",category:"Statistics",syntax:["std(a, b, c, ...)","std(A)","std(A, normalization)"],description:'Compute the standard deviation of all values, defined as std(A) = sqrt(var(A)). Optional parameter normalization can be "unbiased" (default), "uncorrected", or "biased".',examples:["std(2, 4, 6)","std([2, 4, 6, 8])",'std([2, 4, 6, 8], "uncorrected")','std([2, 4, 6, 8], "biased")',"std([1, 2, 3; 4, 5, 6])"],seealso:["max","mean","min","median","min","prod","sum","var"]}},function(e,t){e.exports={name:"sum",category:"Statistics",syntax:["sum(a, b, c, ...)","sum(A)"],description:"Compute the sum of all values.",examples:["sum(2, 3, 4, 1)","sum([2, 3, 4, 1])","sum([2, 5; 4, 3])"],seealso:["max","mean","median","min","prod","std","sum","var"]}},function(e,t){e.exports={name:"var",category:"Statistics",syntax:["var(a, b, c, ...)","var(A)","var(A, normalization)"],description:'Compute the variance of all values. 
Optional parameter normalization can be "unbiased" (default), "uncorrected", or "biased".',examples:["var(2, 4, 6)","var([2, 4, 6, 8])",'var([2, 4, 6, 8], "uncorrected")','var([2, 4, 6, 8], "biased")',"var([1, 2, 3; 4, 5, 6])"],seealso:["max","mean","min","median","min","prod","std","sum"]}},function(e,t){e.exports={name:"acos",category:"Trigonometry",syntax:["acos(x)"],description:"Compute the inverse cosine of a value in radians.",examples:["acos(0.5)","acos(cos(2.3))"],seealso:["cos","atan","asin"]}},function(e,t){e.exports={name:"acosh",category:"Trigonometry",syntax:["acosh(x)"],description:"Calculate the hyperbolic arccos of a value, defined as `acosh(x) = ln(sqrt(x^2 - 1) + x)`.",examples:["acosh(1.5)"],seealso:["cosh","asinh","atanh"]}},function(e,t){e.exports={name:"acot",category:"Trigonometry",syntax:["acot(x)"],description:"Calculate the inverse cotangent of a value.",examples:["acot(0.5)","acot(cot(0.5))","acot(2)"],seealso:["cot","atan"]}},function(e,t){e.exports={name:"acoth",category:"Trigonometry",syntax:["acoth(x)"],description:"Calculate the hyperbolic arccotangent of a value, defined as `acoth(x) = (ln((x+1)/x) + ln(x/(x-1))) / 2`.",examples:["acoth(0.5)"],seealso:["acsch","asech"]}},function(e,t){e.exports={name:"acsc",category:"Trigonometry",syntax:["acsc(x)"],description:"Calculate the inverse cotangent of a value.",examples:["acsc(0.5)","acsc(csc(0.5))","acsc(2)"],seealso:["csc","asin","asec"]}},function(e,t){e.exports={name:"acsch",category:"Trigonometry",syntax:["acsch(x)"],description:"Calculate the hyperbolic arccosecant of a value, defined as `acsch(x) = ln(1/x + sqrt(1/x^2 + 1))`.",examples:["acsch(0.5)"],seealso:["asech","acoth"]}},function(e,t){e.exports={name:"asec",category:"Trigonometry",syntax:["asec(x)"],description:"Calculate the inverse secant of a value.",examples:["asec(0.5)","asec(sec(0.5))","asec(2)"],seealso:["acos","acot","acsc"]}},function(e,t){e.exports={name:"asech",category:"Trigonometry",syntax:["asech(x)"],description:"Calculate the inverse secant of a value.",examples:["asech(0.5)"],seealso:["acsch","acoth"]}},function(e,t){e.exports={name:"asin",category:"Trigonometry",syntax:["asin(x)"],description:"Compute the inverse sine of a value in radians.",examples:["asin(0.5)","asin(sin(2.3))"],seealso:["sin","acos","atan"]}},function(e,t){e.exports={name:"asinh",category:"Trigonometry",syntax:["asinh(x)"],description:"Calculate the hyperbolic arcsine of a value, defined as `asinh(x) = ln(x + sqrt(x^2 + 1))`.",examples:["asinh(0.5)"],seealso:["acosh","atanh"]}},function(e,t){e.exports={name:"atan",category:"Trigonometry",syntax:["atan(x)"],description:"Compute the inverse tangent of a value in radians.",examples:["atan(0.5)","atan(tan(2.3))"],seealso:["tan","acos","asin"]}},function(e,t){e.exports={name:"atanh",category:"Trigonometry",syntax:["atanh(x)"],description:"Calculate the hyperbolic arctangent of a value, defined as `atanh(x) = ln((1 + x)/(1 - x)) / 2`.",examples:["atanh(0.5)"],seealso:["acosh","asinh"]}},function(e,t){e.exports={name:"atan2",category:"Trigonometry",syntax:["atan2(y, x)"],description:"Computes the principal value of the arc tangent of y/x in radians.",examples:["atan2(2, 2) / pi","angle = 60 deg in rad","x = cos(angle)","y = sin(angle)","atan2(y, x)"],seealso:["sin","cos","tan"]}},function(e,t){e.exports={name:"cos",category:"Trigonometry",syntax:["cos(x)"],description:"Compute the cosine of x in radians.",examples:["cos(2)","cos(pi / 4) ^ 2","cos(180 deg)","cos(60 deg)","sin(0.2)^2 + 
cos(0.2)^2"],seealso:["acos","sin","tan"]}},function(e,t){e.exports={name:"cosh",category:"Trigonometry",syntax:["cosh(x)"],description:"Compute the hyperbolic cosine of x in radians.",examples:["cosh(0.5)"],seealso:["sinh","tanh","coth"]}},function(e,t){e.exports={name:"cot",category:"Trigonometry",syntax:["cot(x)"],description:"Compute the cotangent of x in radians. Defined as 1/tan(x)",examples:["cot(2)","1 / tan(2)"],seealso:["sec","csc","tan"]}},function(e,t){e.exports={name:"coth",category:"Trigonometry",syntax:["coth(x)"],description:"Compute the hyperbolic cotangent of x in radians.",examples:["coth(2)","1 / tanh(2)"], -seealso:["sech","csch","tanh"]}},function(e,t){e.exports={name:"csc",category:"Trigonometry",syntax:["csc(x)"],description:"Compute the cosecant of x in radians. Defined as 1/sin(x)",examples:["csc(2)","1 / sin(2)"],seealso:["sec","cot","sin"]}},function(e,t){e.exports={name:"csch",category:"Trigonometry",syntax:["csch(x)"],description:"Compute the hyperbolic cosecant of x in radians. Defined as 1/sinh(x)",examples:["csch(2)","1 / sinh(2)"],seealso:["sech","coth","sinh"]}},function(e,t){e.exports={name:"sec",category:"Trigonometry",syntax:["sec(x)"],description:"Compute the secant of x in radians. Defined as 1/cos(x)",examples:["sec(2)","1 / cos(2)"],seealso:["cot","csc","cos"]}},function(e,t){e.exports={name:"sech",category:"Trigonometry",syntax:["sech(x)"],description:"Compute the hyperbolic secant of x in radians. Defined as 1/cosh(x)",examples:["sech(2)","1 / cosh(2)"],seealso:["coth","csch","cosh"]}},function(e,t){e.exports={name:"sin",category:"Trigonometry",syntax:["sin(x)"],description:"Compute the sine of x in radians.",examples:["sin(2)","sin(pi / 4) ^ 2","sin(90 deg)","sin(30 deg)","sin(0.2)^2 + cos(0.2)^2"],seealso:["asin","cos","tan"]}},function(e,t){e.exports={name:"sinh",category:"Trigonometry",syntax:["sinh(x)"],description:"Compute the hyperbolic sine of x in radians.",examples:["sinh(0.5)"],seealso:["cosh","tanh"]}},function(e,t){e.exports={name:"tan",category:"Trigonometry",syntax:["tan(x)"],description:"Compute the tangent of x in radians.",examples:["tan(0.5)","sin(0.5) / cos(0.5)","tan(pi / 4)","tan(45 deg)"],seealso:["atan","sin","cos"]}},function(e,t){e.exports={name:"tanh",category:"Trigonometry",syntax:["tanh(x)"],description:"Compute the hyperbolic tangent of x in radians.",examples:["tanh(0.5)","sinh(0.5) / cosh(0.5)"],seealso:["sinh","cosh"]}},function(e,t){e.exports={name:"to",category:"Units",syntax:["x to unit","to(x, unit)"],description:"Change the unit of a value.",examples:["5 inch to cm","3.2kg to g","16 bytes in bits"],seealso:[]}},function(e,t){e.exports={name:"clone",category:"Utils",syntax:["clone(x)"],description:"Clone a variable. Creates a copy of primitive variables,and a deep copy of matrices",examples:["clone(3.5)","clone(2 - 4i)","clone(45 deg)","clone([1, 2; 3, 4])",'clone("hello world")'],seealso:[]}},function(e,t){e.exports={name:"map",category:"Utils",syntax:["map(x, callback)"],description:"Create a new matrix or array with the results of the callback function executed on each entry of the matrix/array.",examples:["map([1, 2, 3], function(val) { return value * value })"],seealso:["filter","forEach"]}},function(e,t){e.exports={name:"partitionSelect",category:"Utils",syntax:["partitionSelect(x, k)","partitionSelect(x, k, compare)"],description:"Partition-based selection of an array or 1D matrix. Will find the kth smallest value, and mutates the input array. 
Uses Quickselect.",examples:["partitionSelect([5, 10, 1], 2)",'partitionSelect(["C", "B", "A", "D"], 1)'],seealso:["sort"]}},function(e,t){e.exports={name:"filter",category:"Utils",syntax:["filter(x, test)"],description:"Filter items in a matrix.",examples:["isPositive(x) = x > 0","filter([6, -2, -1, 4, 3], isPositive)","filter([6, -2, 0, 1, 0], x != 0)"],seealso:["sort","map","forEach"]}},function(e,t){e.exports={name:"forEach",category:"Utils",syntax:["forEach(x, callback)"],description:"Iterates over all elements of a matrix/array, and executes the given callback function.",examples:["forEach([1, 2, 3], function(val) { console.log(val) })"],seealso:["map","sort","filter"]}},function(e,t){e.exports={name:"format",category:"Utils",syntax:["format(value)","format(value, precision)"],description:"Format a value of any type as string.",examples:["format(2.3)","format(3 - 4i)","format([])","format(pi, 3)"],seealso:["print"]}},function(e,t){e.exports={name:"isInteger",category:"Utils",syntax:["isInteger(x)"],description:"Test whether a value is an integer number.",examples:["isInteger(2)","isInteger(3.5)","isInteger([3, 0.5, -2])"],seealso:["isNegative","isNumeric","isPositive","isZero"]}},function(e,t){e.exports={name:"isNegative",category:"Utils",syntax:["isNegative(x)"],description:"Test whether a value is negative: smaller than zero.",examples:["isNegative(2)","isNegative(0)","isNegative(-4)","isNegative([3, 0.5, -2])"],seealso:["isInteger","isNumeric","isPositive","isZero"]}},function(e,t){e.exports={name:"isNumeric",category:"Utils",syntax:["isNumeric(x)"],description:"Test whether a value is a numeric value. Returns true when the input is a number, BigNumber, Fraction, or boolean.",examples:["isNumeric(2)","isNumeric(0)","isNumeric(bignumber(500))","isNumeric(fraction(0.125))",'isNumeric("3")',"isNumeric(2 + 3i)",'isNumeric([2.3, "foo", false])'],seealso:["isInteger","isZero","isNegative","isPositive"]}},function(e,t){e.exports={name:"isPositive",category:"Utils",syntax:["isPositive(x)"],description:"Test whether a value is positive: larger than zero.",examples:["isPositive(2)","isPositive(0)","isPositive(-4)","isPositive([3, 0.5, -2])"],seealso:["isInteger","isNumeric","isNegative","isZero"]}},function(e,t){e.exports={name:"isZero",category:"Utils",syntax:["isZero(x)"],description:"Test whether a value is zero.",examples:["isZero(2)","isZero(0)","isZero(-4)","isZero([3, 0, -2, 0])"],seealso:["isInteger","isNumeric","isNegative","isPositive"]}},function(e,t){e.exports={name:"import",category:"Utils",syntax:["import(string)"],description:"Import functions from a file.",examples:['import("numbers")','import("./mylib.js")'],seealso:[]}},function(e,t){e.exports={name:"sort",category:"Utils",syntax:["sort(x)","sort(x, compare)"],description:'Sort the items in a matrix. 
Compare can be a string "asc" or "desc", or a custom sort function.',examples:["sort([5, 10, 1])",'sort(["C", "B", "A", "D"])',"sortByLength(a, b) = size(a)[1] - size(b)[1]",'sort(["Langdon", "Tom", "Sara"], sortByLength)'],seealso:["map","filter","forEach"]}},function(e,t){e.exports={name:"typeof",category:"Utils",syntax:["typeof(x)"],description:"Get the type of a variable.",examples:["typeof(3.5)","typeof(2 - 4i)","typeof(45 deg)",'typeof("hello world")'],seealso:[]}},function(e,t,r){e.exports=[r(268),r(286),r(287),r(288),r(289)]},function(e,t,r){"use strict";function n(e,t,n,a){var o=n(r(269));return a("compile",{string:function(e){return o(e).compile()},"Array | Matrix":function(e){return i(e,function(e){return o(e).compile()})}})}var i=r(19);t.name="compile",t.factory=n},function(e,t,r){"use strict";function n(e,t,n,o){function s(t,r){if(1!=arguments.length&&2!=arguments.length)throw new i("parse",arguments.length,1,2);if(he=r&&r.nodes?r.nodes:{},"string"==typeof t)return ge=t,x();if(Array.isArray(t)||t instanceof e.Matrix)return a(t,function(e){if("string"!=typeof e)throw new TypeError("String expected");return ge=e,x()});throw new TypeError("String or matrix expected")}function u(){ve=0,de=ge.charAt(0),be=0,we=null}function c(){ve++,de=ge.charAt(ve)}function f(){return ge.charAt(ve+1)}function l(){return ge.charAt(ve+2)}function p(){for(xe=le.NULL,ye="";" "==de||" "==de||"\n"==de&&be;)c();if("#"==de)for(;"\n"!=de&&""!=de;)c();if(""==de)return void(xe=le.DELIMITER);if("\n"==de&&!be)return xe=le.DELIMITER,ye=de,void c();var e=de+f(),t=e+l();if(3==t.length&&pe[t])return xe=le.DELIMITER,ye=t,c(),c(),void c();if(2==e.length&&pe[e])return xe=le.DELIMITER,ye=e,c(),void c();if(pe[de])return xe=le.DELIMITER,ye=de,void c();if(!d(de)){if(v()){for(;v()||y(de);)ye+=de,c();return void(xe=me[ye]?le.DELIMITER:le.SYMBOL)}for(xe=le.UNKNOWN;""!=de;)ye+=de,c();throw X('Syntax error in part "'+ye+'"')}if(xe=le.NUMBER,"."==de)ye+=de,c(),y(de)||(xe=le.UNKNOWN);else{for(;y(de);)ye+=de,c();"."==de&&(ye+=de,c())}for(;y(de);)ye+=de,c();if(e=f(),("E"==de||"e"==de)&&(y(e)||"-"==e||"+"==e))for(ye+=de,c(),("+"==de||"-"==de)&&(ye+=de,c()),y(de)||(xe=le.UNKNOWN);y(de);)ye+=de,c()}function m(){do p();while("\n"==ye)}function h(){be++}function g(){be--}function v(){var e=ge.charAt(ve-1),t=ge.charAt(ve+1),r=function(e){return/^[a-zA-Z_\u00C0-\u02AF\u0370-\u03FF]$/.test(e)},n=function(e,t){return/^[\uD835]$/.test(e)&&/^[\uDC00-\uDFFF]$/.test(t)&&/^[^\uDC55\uDC9D\uDCA0\uDCA1\uDCA3\uDCA4\uDCA7\uDCA8\uDCAD\uDCBA\uDCBC\uDCC4\uDD06\uDD0B\uDD0C\uDD15\uDD1D\uDD3A\uDD3F\uDD45\uDD47-\uDD49\uDD51\uDEA6\uDEA7\uDFCC\uDFCD]$/.test(t)};return r(de)||n(de,t)||n(e,de)}function d(e){return e>="0"&&"9">=e||"."==e}function y(e){return e>="0"&&"9">=e}function x(){u(),p();var e=b();if(""!=ye)throw xe==le.DELIMITER?J("Unexpected operator "+ye):X('Unexpected part "'+ye+'"');return e}function b(){var e,t,r=[];if(""==ye)return new re("undefined","undefined");for("\n"!=ye&&";"!=ye&&(e=w());"\n"==ye||";"==ye;)0==r.length&&e&&(t=";"!=ye,r.push({node:e,visible:t})),p(),"\n"!=ye&&";"!=ye&&""!=ye&&(e=w(),t=";"!=ye,r.push({node:e,visible:t}));return r.length>0?new ee(r):e}function w(){if(xe==le.SYMBOL&&"function"==ye)throw X('Deprecated keyword "function". 
Functions can now be assigned without it, like "f(x) = x^2".');return N()}function N(){var e,t,r,n,i=E();if("="==ye){if(i&&i.isSymbolNode)return e=i.name,m(),r=N(),new K(e,r);if(i&&i.isIndexNode)return m(),r=N(),new fe(i,r);if(i&&i.isFunctionNode&&(n=!0,t=[],e=i.name,i.args.forEach(function(e,r){e&&e.isSymbolNode?t[r]=e.name:n=!1}),n))return m(),r=N(),new ne(e,t,r);throw X("Invalid left hand side of assignment operator =")}return i}function E(){for(var e=M();"?"==ye;){var t=we;we=be,m();var r=e,n=M();if(":"!=ye)throw X("False part of conditional expression expected");we=null,m();var i=E();e=new te(r,n,i),we=t}return e}function M(){for(var e=A();"or"==ye;)m(),e=new ae("or","or",[e,A()]);return e}function A(){for(var e=_();"xor"==ye;)m(),e=new ae("xor","xor",[e,_()]);return e}function _(){for(var e=O();"and"==ye;)m(),e=new ae("and","and",[e,O()]);return e}function O(){for(var e=T();"|"==ye;)m(),e=new ae("|","bitOr",[e,T()]);return e}function T(){for(var e=C();"^|"==ye;)m(),e=new ae("^|","bitXor",[e,C()]);return e}function C(){for(var e=S();"&"==ye;)m(),e=new ae("&","bitAnd",[e,S()]);return e}function S(){var e,t,r,n,i;for(e=z(),t={"==":"equal","!=":"unequal","<":"smaller",">":"larger","<=":"smallerEq",">=":"largerEq"};ye in t;)r=ye,n=t[r],m(),i=[e,z()],e=new ae(r,n,i);return e}function z(){var e,t,r,n,i;for(e=B(),t={"<<":"leftShift",">>":"rightArithShift",">>>":"rightLogShift"};ye in t;)r=ye,n=t[r],m(),i=[e,B()],e=new ae(r,n,i);return e}function B(){var e,t,r,n,i;for(e=k(),t={to:"to","in":"to"};ye in t;)r=ye,n=t[r],m(),i=[e,k()],e=new ae(r,n,i);return e}function k(){var e,t=[];if(e=":"==ye?new re("1","number"):I(),":"==ye&&we!==be){for(t.push(e);":"==ye&&t.length<3;)m(),")"==ye||"]"==ye||","==ye||""==ye?t.push(new ce("end")):t.push(I());e=3==t.length?new ue(t[0],t[2],t[1]):new ue(t[0],t[1])}return e}function I(){var e,t,r,n,i;for(e=R(),t={"+":"add","-":"subtract"};ye in t;)r=ye,n=t[r],m(),i=[e,R()],e=new ae(r,n,i);return e}function R(){var e,t,r,n,i;for(e=P(),t=e,r={"*":"multiply",".*":"dotMultiply","/":"divide","./":"dotDivide","%":"mod",mod:"mod"};;)if(ye in r)n=ye,i=r[n],m(),t=P(),e=new ae(n,i,[e,t]);else{if(!(xe==le.SYMBOL||"in"==ye&&e&&e.isConstantNode||xe==le.NUMBER&&!t.isConstantNode||"("==ye||"["==ye))break;t=P(),e=new ae("*","multiply",[e,t])}return e}function P(){var e,t,r={"-":"unaryMinus","+":"unaryPlus","~":"bitNot",not:"not"}[ye];return r?(e=ye,m(),t=[P()],new ae(e,r,t)):U()}function U(){var e,t,r,n;return e=q(),("^"==ye||".^"==ye)&&(t=ye,r="^"==t?"pow":"dotPow",m(),n=[e,P()],e=new ae(t,r,n)),e}function q(){var e,t,r,n,i;for(e=L(),t={"!":"factorial","'":"transpose"};ye in t;)r=ye,n=t[r],p(),i=[e],e=new ae(r,n,i);return e}function L(){var e,t=[];if(xe==le.SYMBOL&&he[ye]){if(e=he[ye],p(),"("==ye){if(t=[],h(),p(),")"!=ye)for(t.push(E());","==ye;)p(),t.push(E());if(")"!=ye)throw X("Parenthesis ) expected");g(),p()}return new e(t)}return F()}function F(){var e,t;return xe==le.SYMBOL||xe==le.DELIMITER&&ye in me?(t=ye,p(),e=D(t),e=$(e)):j()}function D(e){var t;if("("==ye){if(t=[],h(),p(),")"!=ye)for(t.push(E());","==ye;)p(),t.push(E());if(")"!=ye)throw X("Parenthesis ) expected");return g(),p(),new se(e,t)}return new ce(e)}function $(e){for(var t;"["==ye;){if(t=[],h(),p(),"]"!=ye)for(t.push(E());","==ye;)p(),t.push(E());if("]"!=ye)throw X("Parenthesis ] expected");g(),p(),e=new ie(e,t)}return e}function j(){var e,t,r;if('"'==ye){for(t="",r="";""!=de&&('"'!=de||"\\"==r);)t+=de,r=de,c();if(p(),'"'!=ye)throw X('End of string " expected');return p(),e=new re(t,"string"),e=$(e)}return 
G()}function G(){var e,t,r,n;if("["==ye){if(h(),p(),"]"!=ye){var i=H();if(";"==ye){for(r=1,t=[i];";"==ye;)p(),t[r]=H(),r++;if("]"!=ye)throw X("End of matrix ] expected");g(),p(),n=t[0].nodes.length;for(var a=1;r>a;a++)if(t[a].nodes.length!=n)throw J("Column dimensions mismatch ("+t[a].nodes.length+" != "+n+")");e=new Q(t)}else{if("]"!=ye)throw X("End of matrix ] expected");g(),p(),e=i}}else g(),p(),e=new Q([]);return e}return V()}function H(){for(var e=[N()],t=1;","==ye;)p(),e[t]=N(),t++;return new Q(e)}function V(){var e;return xe==le.NUMBER?(e=ye,p(),new re(e,"number")):Z()}function Z(){var e;if("("==ye){if(h(),p(),e=N(),")"!=ye)throw X("Parenthesis ) expected");return g(),p(),new oe(e)}return Y()}function Y(){throw X(""==ye?"Unexpected end of expression":"Value expected")}function W(){return ve-ye.length+1}function X(e){var t=W(),r=new SyntaxError(e+" (char "+t+")");return r["char"]=t,r}function J(e){var t=W(),r=new Error(e+" (char "+t+")");return r["char"]=t,r}var Q=n(r(270)),K=n(r(273)),ee=n(r(275)),te=n(r(276)),re=n(r(277)),ne=n(r(278)),ie=n(r(279)),ae=n(r(282)),oe=n(r(284)),se=n(r(283)),ue=n(r(280)),ce=n(r(281)),fe=n(r(285)),le={NULL:0,DELIMITER:1,NUMBER:2,SYMBOL:3,UNKNOWN:4},pe={",":!0,"(":!0,")":!0,"[":!0,"]":!0,'"':!0,";":!0,"+":!0,"-":!0,"*":!0,".*":!0,"/":!0,"./":!0,"%":!0,"^":!0,".^":!0,"~":!0,"!":!0,"&":!0,"|":!0,"^|":!0,"'":!0,"=":!0,":":!0,"?":!0,"==":!0,"!=":!0,"<":!0,">":!0,"<=":!0,">=":!0,"<<":!0,">>":!0,">>>":!0},me={mod:!0,to:!0,"in":!0,and:!0,xor:!0,or:!0,not:!0},he={},ge="",ve=0,de="",ye="",xe=le.NULL,be=0,we=null;return s}var i=r(11),a=r(19);t.name="parse",t.path="expression",t.factory=n},function(e,t,r){"use strict";function n(e,t,n,a){function o(e){if(!(this instanceof o))throw new SyntaxError("Constructor must be called with the new operator");if(this.nodes=e||[],!Array.isArray(this.nodes)||!this.nodes.every(function(e){return e&&e.isNode}))throw new TypeError("Array containing Nodes expected")}var s=n(r(271));return o.prototype=new s,o.prototype.type="ArrayNode",o.prototype.isArrayNode=!0,o.prototype._compile=function(e,t){var r="array"!==e.math.config().matrix,n=this.nodes.map(function(r){return r._compile(e,t)});return(r?"math.matrix([":"[")+n.join(",")+(r?"])":"]")},o.prototype.forEach=function(e){for(var t=0;t0)throw new Error("Calling compile(math) is deprecated. Call the function as compile() instead.");var e={math:a.expression.transform,args:{},_validateScope:s},t={},r=this._compile(e,t),n=Object.keys(e).map(function(e){return" var "+e+' = defs["'+e+'"];'}),i=n.join(" ")+'return { "eval": function (scope) { if (scope) _validateScope(scope); scope = scope || {}; return '+r+"; }};",o=new Function("defs",i);return o(e)},o.prototype._compile=function(e,t){throw new Error("Cannot compile a Node interface")},o.prototype.forEach=function(e){throw new Error("Cannot run forEach on a Node interface")},o.prototype.map=function(e){throw new Error("Cannot run map on a Node interface")},o.prototype._ifNode=function(e){if(!e||!e.isNode)throw new TypeError("Callback function must return a Node");return e},o.prototype.traverse=function(e){function t(e,r){e.forEach(function(e,n,i){r(e,n,i),t(e,r)})}e(this,null,null),t(this,e)},o.prototype.transform=function(e){function t(e,r){return e.map(function(e,n,i){var a=r(e,n,i);return t(a,r)})}var r=e(this,null,null);return t(r,e)},o.prototype.filter=function(e){var t=[];return this.traverse(function(r,n,i){e(r,n,i)&&t.push(r)}),t},o.prototype.find=function(){throw new Error("Function Node.find is deprecated. 
Use Node.filter instead.")},o.prototype.match=function(){throw new Error("Function Node.match is deprecated. See functions Node.filter, Node.transform, Node.traverse.")},o.prototype.clone=function(){throw new Error("Cannot clone a Node interface")},o.prototype.toString=function(e){var t;if(e&&"object"==typeof e)switch(typeof e.handler){case"object":case"undefined":break;case"function":t=e.handler(this,e);break;default:throw new TypeError("Object or function expected as callback")}return"undefined"!=typeof t?t:this._toString(e)},o.prototype._toString=function(){throw new Error("_toString not implemented for "+this.type)},o.prototype.toTex=function(e){var t;if(e&&"object"==typeof e)switch(typeof e.handler){case"object":case"undefined":break;case"function":t=e.handler(this,e);break;default:throw new TypeError("Object or function expected as callback")}return"undefined"!=typeof t?t:this._toTex(e)},o.prototype._toTex=function(e){throw new Error("_toTex not implemented for "+this.type)},o.prototype.getIdentifier=function(){return this.type},o.prototype.getContent=function(){return this},o}var i=r(272);r(3).extend;t.name="Node",t.path="expression.node",t.math=!0,t.factory=n},function(e,t){"use strict";e.exports={end:!0}},function(e,t,r){"use strict";function n(e,t,n,a){function o(e,t){if(!(this instanceof o))throw new SyntaxError("Constructor must be called with the new operator");if("string"!=typeof e)throw new TypeError('String expected for parameter "name"');if(!t||!t.isNode)throw new TypeError('Node expected for parameter "expr"');if(e in c)throw new Error('Illegal symbol name, "'+e+'" is a reserved keyword');this.name=e,this.expr=t}function s(e,t){var r=f.getPrecedence(e,t),n=f.getPrecedence(e.expr,t);return"all"===t||null!==n&&r>=n}var u=n(r(271)),c=(n(r(270)),r(272)),f=r(274);return o.prototype=new u,o.prototype.type="AssignmentNode",o.prototype.isAssignmentNode=!0,o.prototype._compile=function(e,t){return'scope["'+this.name+'"] = '+this.expr._compile(e,t)},o.prototype.forEach=function(e){e(this.expr,"expr",this)},o.prototype.map=function(e){return new o(this.name,this._ifNode(e(this.expr,"expr",this)))},o.prototype.clone=function(){return new o(this.name,this.expr)},o.prototype._toString=function(e){var t=e&&e.parenthesis?e.parenthesis:"keep",r=this.expr.toString(e);return s(this,t)&&(r="("+r+")"),this.name+" = "+r},o.prototype._toTex=function(e){var t=e&&e.parenthesis?e.parenthesis:"keep",r=this.expr.toTex(e);return s(this,t)&&(r="\\left("+r+"\\right)"),i.toSymbol(this.name)+":="+r},o}var i=r(29);t.name="AssignmentNode",t.path="expression.node",t.factory=n},function(e,t){"use strict";function r(e,t){var r=e;"keep"!==t&&(r=e.getContent());for(var n=r.getIdentifier(),i=0;i=a)&&(n="("+n+")");var o=this.trueExpr.toString(e),s=i.getPrecedence(this.trueExpr,t);("all"===t||"OperatorNode"===this.trueExpr.type||null!==s&&r>=s)&&(o="("+o+")");var u=this.falseExpr.toString(e),c=i.getPrecedence(this.falseExpr,t);return("all"===t||"OperatorNode"===this.falseExpr.type||null!==c&&r>=c)&&(u="("+u+")"),n+" ? 
"+o+" : "+u},o.prototype._toTex=function(e){return"\\begin{cases} {"+this.trueExpr.toTex(e)+"}, &\\quad{\\text{if }\\;"+this.condition.toTex(e)+"}\\\\{"+this.falseExpr.toTex(e)+"}, &\\quad{\\text{otherwise}}\\end{cases}"},o}var i=(r(29),r(274));t.name="ConditionalNode",t.path="expression.node",t.factory=n},function(e,t,r){"use strict";function n(e,t,n,a){function o(e,t){if(!(this instanceof o))throw new SyntaxError("Constructor must be called with the new operator");if(t){if("string"!=typeof t)throw new TypeError('String expected for parameter "valueType"');if("string"!=typeof e)throw new TypeError('String expected for parameter "value"');this.value=e,this.valueType=t}else this.value=e+"",this.valueType=i(e);if(!u[this.valueType])throw new TypeError('Unsupported type of value "'+this.valueType+'"')}var s=n(r(271)),u={number:!0,string:!0,"boolean":!0,undefined:!0,"null":!0};return o.prototype=new s,o.prototype.type="ConstantNode",o.prototype.isConstantNode=!0,o.prototype._compile=function(e,t){switch(this.valueType){case"number":var r=e.math.config().number;return"bignumber"===r?'math.bignumber("'+this.value+'")':"fraction"===r?'math.fraction("'+this.value+'")':this.value.replace(/^(0*)[0-9]/,function(e,t){return e.substring(t.length)});case"string":return'"'+this.value+'"';case"boolean":return this.value;case"undefined":return this.value;case"null":return this.value;default:throw new TypeError('Unsupported type of constant "'+this.valueType+'"')}},o.prototype.forEach=function(e){},o.prototype.map=function(e){return this.clone()},o.prototype.clone=function(){return new o(this.value,this.valueType)},o.prototype._toString=function(e){switch(this.valueType){case"string":return'"'+this.value+'"';default:return this.value}},o.prototype._toTex=function(e){var t,r=this.value;switch(this.valueType){case"string":return'\\mathtt{"'+r+'"}';case"number":return t=r.toLowerCase().indexOf("e"),-1!==t?r.substring(0,t)+"\\cdot10^{"+r.substring(t+1)+"}":r;default:return r}},o}var i=r(40).type;t.name="ConstantNode",t.path="expression.node",t.factory=n},function(e,t,r){"use strict";function n(e){return"string"==typeof e}function i(e,t,i,u){function c(e,t,r){if(!(this instanceof c))throw new SyntaxError("Constructor must be called with the new operator");if("string"!=typeof e)throw new TypeError('String expected for parameter "name"');if(!Array.isArray(t)||!t.every(n))throw new TypeError('Array containing strings expected for parameter "params"');if(!r||!r.isNode)throw new TypeError('Node expected for parameter "expr"');if(e in a)throw new Error('Illegal function name, "'+e+'" is a reserved keyword');this.name=e,this.params=t,this.expr=r}function f(e,t){var r=s.getPrecedence(e,t),n=s.getPrecedence(e.expr,t);return"all"===t||null!==n&&r>=n}var l=i(r(271));return c.prototype=new l,c.prototype.type="FunctionAssignmentNode",c.prototype.isFunctionAssignmentNode=!0,c.prototype._compile=function(e,t){var r=Object.create(t);this.params.forEach(function(e){r[e]=!0});var n=this.expr._compile(e,r);return'scope["'+this.name+'"] = (function () { var fn = function '+this.name+"("+this.params.join(",")+") { if (arguments.length != "+this.params.length+') { throw new SyntaxError("Wrong number of arguments in function '+this.name+' (" + arguments.length + " provided, '+this.params.length+' expected)"); } return '+n+' }; fn.syntax = "'+this.name+"("+this.params.join(", ")+')"; return fn; })()'},c.prototype.forEach=function(e){e(this.expr,"expr",this)},c.prototype.map=function(e){var 
t=this._ifNode(e(this.expr,"expr",this));return new c(this.name,this.params.slice(0),t)},c.prototype.clone=function(){return new c(this.name,this.params.slice(0),this.expr)},c.prototype._toString=function(e){var t=e&&e.parenthesis?e.parenthesis:"keep",r=this.expr.toString(e);return f(this,t)&&(r="("+r+")"),"function "+this.name+"("+this.params.join(", ")+") = "+r},c.prototype._toTex=function(e){var t=e&&e.parenthesis?e.parenthesis:"keep",r=this.expr.toTex(e);return f(this,t)&&(r="\\left("+r+"\\right)"),"\\mathrm{"+this.name+"}\\left("+this.params.map(o.toSymbol).join(",")+"\\right):="+r},c}var a=r(272),o=r(29),s=r(274);t.name="FunctionAssignmentNode",t.path="expression.node",t.factory=i},function(e,t,r){"use strict";function n(e,t,n,i){function a(e,t){if(!(this instanceof a))throw new SyntaxError("Constructor must be called with the new operator");if(!e||!e.isNode)throw new TypeError('Node expected for parameter "object"');if(!c(t)||!t.every(function(e){return e&&e.isNode}))throw new TypeError('Array containing Nodes expected for parameter "ranges"');this.object=e,this.ranges=t}function o(e){switch(e.object.type){case"ArrayNode":case"ConstantNode":case"SymbolNode":case"ParenthesisNode":return!1;default:return!0}}var s=n(r(271)),u=(n(r(280)),n(r(281)),n(r(65))),c=Array.isArray;return a.prototype=new s,a.prototype.type="IndexNode",a.prototype.isIndexNode=!0,a.prototype._compile=function(e,t){return this.compileSubset(e,t)},a.prototype.compileSubset=function(e,t,r){function n(e){return e&&e.isSymbolNode&&"end"==e.name}var i=!1,a=this.ranges.map(function(e){var t=e.filter(n).length>0;return i=t?t:i,t});e.range=function(e,t,r){return new u(e&&e.isBigNumber===!0?e.toNumber():e,t&&t.isBigNumber===!0?t.toNumber():t,r&&r.isBigNumber===!0?r.toNumber():r)};var o=Object.create(t),s=this.ranges.map(function(t,r){ -var n=a[r];return t&&t.isRangeNode?n?(o.end=!0,"(function () { var end = size["+r+"]; return range( "+t.start._compile(e,o)+", "+t.end._compile(e,o)+", "+(t.step?t.step._compile(e,o):"1")+" );})()"):"range("+t.start._compile(e,o)+", "+t.end._compile(e,o)+", "+(t.step?t.step._compile(e,o):"1")+")":n?(o.end=!0,"(function () { var end = size["+r+"]; return "+t._compile(e,o)+";})()"):t._compile(e,o)});return i?"(function () { var obj = "+this.object._compile(e,o)+"; var size = math.size(obj).valueOf(); return math.subset( obj, math.index("+s.join(", ")+") "+(r?", "+r:"")+" );})()":"math.subset("+this.object._compile(e,o)+",math.index("+s.join(", ")+")"+(r?", "+r:"")+")"},a.prototype.forEach=function(e){e(this.object,"object",this);for(var t=0;t3)throw new Error("Too many arguments");this.start=e,this.end=t,this.step=r||null}function s(e,t){var r=i.getPrecedence(e,t),n={},a=i.getPrecedence(e.start,t);if(n.start=null!==a&&r>=a||"all"===t,e.step){var o=i.getPrecedence(e.step,t);n.step=null!==o&&r>=o||"all"===t}var s=i.getPrecedence(e.end,t);return n.end=null!==s&&r>=s||"all"===t,n}var u=n(r(271));return o.prototype=new u,o.prototype.type="RangeNode",o.prototype.isRangeNode=!0,o.prototype._compile=function(e,t){return"math.range("+this.start._compile(e,t)+", "+this.end._compile(e,t)+(this.step?", "+this.step._compile(e,t):"")+")"},o.prototype.forEach=function(e){e(this.start,"start",this),e(this.end,"end",this),this.step&&e(this.step,"step",this)},o.prototype.map=function(e){return new o(this._ifNode(e(this.start,"start",this)),this._ifNode(e(this.end,"end",this)),this.step&&this._ifNode(e(this.step,"step",this)))},o.prototype.clone=function(){return new 
o(this.start,this.end,this.step&&this.step)},o.prototype._toString=function(e){var t,r=e&&e.parenthesis?e.parenthesis:"keep",n=s(this,r),i=this.start.toString(e);if(n.start&&(i="("+i+")"),t=i,this.step){var a=this.step.toString(e);n.step&&(a="("+a+")"),t+=":"+a}var o=this.end.toString(e);return n.end&&(o="("+o+")"),t+=":"+o},o.prototype._toTex=function(e){var t=e&&e.parenthesis?e.parenthesis:"keep",r=s(this,t),n=this.start.toTex(e);if(r.start&&(n="\\left("+n+"\\right)"),this.step){var i=this.step.toTex(e);r.step&&(i="\\left("+i+"\\right)"),n+=":"+i}var a=this.end.toTex(e);return r.end&&(a="\\left("+a+"\\right)"),n+=":"+a},o}var i=r(274);t.name="RangeNode",t.path="expression.node",t.factory=n},function(e,t,r){"use strict";function n(e,t,n,a,o){function s(e){if(!(this instanceof s))throw new SyntaxError("Constructor must be called with the new operator");if("string"!=typeof e)throw new TypeError('String expected for parameter "name"');this.name=e}function u(e){throw new Error("Undefined symbol "+e)}var c=n(r(271)),f=n(r(73));return s.prototype=new c,s.prototype.type="SymbolNode",s.prototype.isSymbolNode=!0,s.prototype._compile=function(e,t){return e.undef=u,e.Unit=f,t[this.name]?this.name:this.name in e.math?'("'+this.name+'" in scope ? scope["'+this.name+'"] : math["'+this.name+'"])':'("'+this.name+'" in scope ? scope["'+this.name+'"] : '+(f.isValuelessUnit(this.name)?'new Unit(null, "'+this.name+'")':'undef("'+this.name+'")')+")"},s.prototype.forEach=function(e){},s.prototype.map=function(e){return this.clone()},s.prototype.clone=function(){return new s(this.name)},s.prototype._toString=function(e){return this.name},s.prototype._toTex=function(e){var t=!1;"undefined"==typeof o[this.name]&&f.isValuelessUnit(this.name)&&(t=!0);var r=i.toSymbol(this.name,t);return"\\"===r[0]?r:" "+r},s}var i=r(29);t.name="SymbolNode",t.path="expression.node",t.math=!0,t.factory=n},function(e,t,r){"use strict";function n(e,t,n,o,s){function u(e,t,r){if(!(this instanceof u))throw new SyntaxError("Constructor must be called with the new operator");if("string"!=typeof e)throw new TypeError('string expected for parameter "op"');if("string"!=typeof t)throw new TypeError('string expected for parameter "fn"');if(!Array.isArray(r)||!r.every(function(e){return e&&e.isNode}))throw new TypeError('Array containing Nodes expected for parameter "args"');this.op=e,this.fn=t,this.args=r||[]}function c(e,t,r,n){var i=a.getPrecedence(e,t),o=a.getAssociativity(e,t);if("all"===t||r.length>2){var s=[];return r.forEach(function(e){switch(e.getContent().type){case"ArrayNode":case"ConstantNode":case"SymbolNode":case"ParenthesisNode":s.push(!1);break;default:s.push(!0)}}),s}switch(r.length){case 0:return[];case 1:var u=a.getPrecedence(r[0],t);if(n&&null!==u){var c,f;if("keep"===t?(c=r[0].getIdentifier(),f=e.getIdentifier()):(c=r[0].getContent().getIdentifier(),f=e.getContent().getIdentifier()),a.properties[i][f].latexLeftParens===!1)return[!1];if(a.properties[u][c].latexParens===!1)return[!1]}return null===u?[!1]:i>=u?[!0]:[!1];case 2:var l,p=a.getPrecedence(r[0],t),m=a.isAssociativeWith(e,r[0],t);l=null===p?!1:p!==i||"right"!==o||m?i>p?!0:!1:!0;var h,g=a.getPrecedence(r[1],t),v=a.isAssociativeWith(e,r[1],t);if(h=null===g?!1:g!==i||"left"!==o||v?i>g?!0:!1:!0,n){var 
f,d,y;"keep"===t?(f=e.getIdentifier(),d=e.args[0].getIdentifier(),y=e.args[1].getIdentifier()):(f=e.getContent().getIdentifier(),d=e.args[0].getContent().getIdentifier(),y=e.args[1].getContent().getIdentifier()),null!==p&&(a.properties[i][f].latexLeftParens===!1&&(l=!1),a.properties[p][d].latexParens===!1&&(l=!1)),null!==g&&(a.properties[i][f].latexRightParens===!1&&(h=!1),a.properties[g][y].latexParens===!1&&(h=!1))}return[l,h]}}var f=n(r(271));n(r(277)),n(r(281)),n(r(283));return u.prototype=new f,u.prototype.type="OperatorNode",u.prototype.isOperatorNode=!0,u.prototype._compile=function(e,t){if(!e.math[this.fn])throw new Error("Function "+this.fn+' missing in provided namespace "math"');var r=this.args.map(function(r){return r._compile(e,t)});return"math."+this.fn+"("+r.join(", ")+")"},u.prototype.forEach=function(e){for(var t=0;tt;t++){var h=e[t];if(h&&h.isMatrix===!0&&(p=!0),"number"==typeof h||h&&h.isBigNumber===!0){if(t!==n-1)throw new Error("Dimension must be specified as last argument");if(r=f,f=h.valueOf(),!o(f))throw new TypeError("Integer number expected for dimension");if(0>f)throw new u(f);if(t>0&&f>r)throw new u(f,r+1)}else{var g=a(h).valueOf(),v=s.size(g);if(m[t]=g,r=f,f=v.length-1,t>0&&f!=r)throw new c(r+1,f+1)}}if(0==m.length)throw new SyntaxError("At least one matrix expected");for(var d=m.shift();m.length;)d=i(d,m.shift(),f,0);return p?l(d):d},"...string":function(e){return e.join("")}});return p.toTex="\\mathrm{${name}}\\left(${args}\\right)",p}function i(e,t,r,n){if(r>n){if(e.length!=t.length)throw new c(e.length,t.length);for(var a=[],o=0;or;r++){var i=arguments[r];if(i&&i.isRange===!0)i.start--,i.end-=i.step>0?0:2;else if(i&&i.isSet===!0)i=i.map(function(e){return e-1});else if(i&&(i.isArray===!0||i.isMatrix))i=i.map(function(e){return e-1});else if("number"==typeof i)i--;else{if(!i||i.isBigNumber!==!0)throw new TypeError("Ranges must be a Number, Range, Array or Matrix");i=i.toNumber()-1}t[r]=i}var a=new e.Index;return e.Index.apply(a,t),a}}Array.isArray;t.name="index",t.path="expression.transform",t.factory=n},function(e,t,r){"use strict";function n(e,t,n,a){var o=(n(r(302)),n(r(50)));return a("max",{"Array, function":function(e,t){return i(e,t,e)},"Matrix, function":function(e,t){return o(i(e.valueOf(),t,e))}})}function i(e,t,r){function n(e,i){return Array.isArray(e)?e.map(function(e,t){return n(e,i.concat(t+1))}):t(e,i,r)}return n(e,[])}t.name="map",t.path="expression.transform",t.factory=n},function(e,t){"use strict";function r(e,t,r,i){var a=i("map",{"Array, function":n,"Matrix, function":function(e,t){return e.map(t)}});return a.toTex="\\mathrm{${name}}\\left(${args}\\right)",a}function n(e,t){var r=function(n,i){return Array.isArray(n)?n.map(function(e,t){return r(e,i.concat(t))}):t(n,i,e)};return r(e,[])}t.name="map",t.factory=r},function(e,t,r){"use strict";function n(e,t,n,o){var s=n(r(305));return o("max",{"...any":function(e){if(2==e.length&&a(e[0])){var t=e[1];"number"==typeof t?e[1]=t-1:t&&t.isBigNumber===!0&&(e[1]=t.minus(1))}try{return s.apply(null,e)}catch(r){throw i(r)}}})}var i=r(294).transform,a=r(304);t.name="max",t.path="expression.transform",t.factory=n},function(e,t){"use strict";e.exports=function(e){return Array.isArray(e)||e&&e.isMatrix===!0}},function(e,t,r){"use strict";function n(e,t,n,o){function s(e,t){return c(e,t)?e:t}function u(e){var t=void 0;if(i(e,function(e){(void 0===t||c(e,t))&&(t=e)}),void 0===t)throw new Error("Cannot calculate max of an empty array");return t}var c=n(r(62)),f=o("max",{"Array | Matrix":u,"Array | Matrix, 
number | BigNumber":function(e,t){return a(e,t.valueOf(),s)},"...":function(){return u(arguments)}});return f.toTex="\\max\\left(${args}\\right)",f}var i=r(306),a=r(307);t.name="max",t.factory=n},function(e,t){"use strict";e.exports=function r(e,t){e&&e.isMatrix===!0&&(e=e.valueOf());for(var n=0,i=e.length;i>n;n++){var a=e[n];Array.isArray(a)?r(a,t):t(a)}}},function(e,t,r){"use strict";function n(e,t,r){var a,o,s,u;if(0>=t){if(Array.isArray(e[0])){for(u=i(e),o=[],a=0;ar;r++){var o=[];for(t=0;n>t;t++)o.push(e[t][r]);a.push(o)}return a}var a=r(39).size,o=r(42);e.exports=function(e,t,r){var i=Array.isArray(e)?a(e):e.size();if(0>t)throw new o(t);if(t>=i.length)throw new o(t,i.length);return e&&e.isMatrix===!0?e.create(n(e.valueOf(),t,r)):n(e,t,r)}},function(e,t,r){"use strict";function n(e,t,n,o){var s=n(r(309));return o("mean",{"...any":function(e){if(2==e.length&&a(e[0])){var t=e[1];"number"==typeof t?e[1]=t-1:t&&t.isBigNumber===!0&&(e[1]=t.minus(1))}try{return s.apply(null,e)}catch(r){throw i(r)}}})}var i=r(294).transform,a=r(304);t.name="mean",t.path="expression.transform",t.factory=n},function(e,t,r){"use strict";function n(e,t,n,s){function u(e,t){var r=o(e,t,f),n=Array.isArray(e)?i(e):e.size();return l(r,n[t])}function c(e){var t=0,r=0;if(a(e,function(e){t=f(t,e),r++}),0===r)throw new Error("Cannot calculate mean of an empty array");return l(t,r)}var f=n(r(49)),l=n(r(310)),p=s("mean",{"Array | Matrix":c,"Array | Matrix, number | BigNumber":u,"...":function(){return c(arguments)}});return p.toTex="\\mathrm{${name}}\\left(${args}\\right)",p}var i=r(39).size,a=r(306),o=r(307);t.name="mean",t.factory=n},function(e,t,r){"use strict";function n(e,t,n,a){var o=n(r(78)),s=n(r(83)),u=n(r(311)),c=n(r(50)),f=n(r(84)),l=n(r(56)),p=a("divide",i({"Array | Matrix, Array | Matrix":function(e,t){return s(e,u(t))},"Matrix, any":function(e,t){var r;switch(e.storage()){case"sparse":r=f(e,t,o,!1);break;case"dense":r=l(e,t,o,!1)}return r},"Array, any":function(e,t){return l(c(e),t,o,!1).valueOf()},"any, Array | Matrix":function(e,t){return s(e,u(t))}},o.signatures));return p.toTex="\\frac{${args[0]}}{${args[1]}}",p}var i=r(3).extend;t.name="divide",t.factory=n},function(e,t,r){"use strict";function n(e,t,n,a){function o(e,t,r){var n,i,a,o,s;if(1==t){if(o=e[0][0],0==o)throw Error("Cannot calculate inverse, determinant is zero");return[[u(1,o)]]}if(2==t){var h=p(e);if(0==h)throw Error("Cannot calculate inverse, determinant is zero");return[[u(e[1][1],h),u(l(e[0][1]),h)],[u(l(e[1][0]),h),u(e[0][0],h)]]}var g=e.concat();for(n=0;t>n;n++)g[n]=g[n].concat();for(var v=m(t).valueOf(),d=0;r>d;d++){for(n=d;t>n&&0==g[n][d];)n++;if(n==t||0==g[n][d])throw Error("Cannot calculate inverse, determinant is zero");n!=d&&(s=g[d],g[d]=g[n],g[n]=s,s=v[d],v[d]=v[n],v[n]=s);var y=g[d],x=v[d];for(n=0;t>n;n++){var b=g[n],w=v[n];if(n!=d){if(0!=b[d]){for(a=u(l(b[d]),y[d]),i=d;r>i;i++)b[i]=c(b[i],f(a,y[i]));for(i=0;r>i;i++)w[i]=c(w[i],f(a,x[i]))}}else{for(a=y[d],i=d;r>i;i++)b[i]=u(b[i],a);for(i=0;r>i;i++)w[i]=u(w[i],a)}}}return v}var s=n(r(50)),u=n(r(78)),c=n(r(51)),f=n(r(83)),l=n(r(75)),p=n(r(312)),m=n(r(81)),h=a("inv",{"Array | Matrix":function(e){var t=e.isMatrix===!0?e.size():i.array.size(e);switch(t.length){case 1:if(1==t[0])return e.isMatrix===!0?s([u(1,e.valueOf()[0])]):[u(1,e[0])];throw new RangeError("Matrix must be square (size: "+i.string.format(t)+")");case 2:var r=t[0],n=t[1];if(r==n)return e.isMatrix===!0?s(o(e.valueOf(),r,n),e.storage()):o(e,r,n);throw new RangeError("Matrix must be square (size: 
"+i.string.format(t)+")");default:throw new RangeError("Matrix must be two dimensional (size: "+i.string.format(t)+")")}},any:function(e){return u(1,e)}});return h.toTex="\\left(${args[0]}\\right)^{-1}",h}var i=r(38);t.name="inv",t.factory=n},function(e,t,r){"use strict";function n(e,t,n,i){function s(e,t,r){if(1==t)return a.clone(e[0][0]);if(2==t)return f(l(e[0][0],e[1][1]),l(e[1][0],e[0][1]));for(var n=function(e){var t,r,n=new Array(e.length),i=0;for(t=1;tr;r++)n[t][r]=0;for(r=t+1;ro;o++)i=l(n(i),e);return t%2==0?p(i[0][0]):i[0][0]}var u=n(r(50)),c=n(r(49)),f=n(r(74)),l=n(r(83)),p=n(r(75)),m=i("det",{any:function(e){return a.clone(e)},"Array | Matrix":function(e){var t;switch(e&&e.isMatrix===!0?t=e.size():Array.isArray(e)?(e=u(e),t=e.size()):t=[],t.length){case 0:return a.clone(e);case 1:if(1==t[0])return a.clone(e.valueOf()[0]);throw new RangeError("Matrix must be square (size: "+o.format(t)+")");case 2:var r=t[0],n=t[1];if(r==n)return s(e.clone().valueOf(),r,n);throw new RangeError("Matrix must be square (size: "+o.format(t)+")");default:throw new RangeError("Matrix must be two dimensional (size: "+o.format(t)+")")}}});return m.toTex="\\det\\left(${args[0]}\\right)",m}var i=r(38),a=i.object,o=i.string;t.name="det",t.factory=n},function(e,t,r){"use strict";function n(e,t,n,o){var s=n(r(314));return o("min",{"...any":function(e){if(2==e.length&&a(e[0])){var t=e[1];"number"==typeof t?e[1]=t-1:t&&t.isBigNumber===!0&&(e[1]=t.minus(1))}try{return s.apply(null,e)}catch(r){throw i(r)}}})}var i=r(294).transform,a=r(304);t.name="min",t.path="expression.transform",t.factory=n},function(e,t,r){"use strict";function n(e,t,n,o){function s(e,t){return c(e,t)?e:t}function u(e){var t=void 0;if(i(e,function(e){(void 0===t||c(e,t))&&(t=e)}),void 0===t)throw new Error("Cannot calculate min of an empty array");return t}var c=n(r(58)),f=o("min",{"Array | Matrix":u,"Array | Matrix, number | BigNumber":function(e,t){return a(e,t.valueOf(),s)},"...":function(){return u(arguments)}});return f.toTex="\\min\\left(${args}\\right)",f}var i=r(306),a=r(307);t.name="min",t.factory=n},function(e,t,r){"use strict";function n(e,t,n,i){var a=n(r(316));return i("range",{"...any":function(e){var t=e.length-1,r=e[t];return"boolean"!=typeof r&&e.push(!0),a.apply(null,e)}})}t.name="range",t.path="expression.transform",t.factory=n},function(e,t,r){"use strict";function n(e,t,n,i){function a(e){return"array"===t.matrix?e:p(e)}function o(r,n){var i=l(r);if(!i)throw new SyntaxError('String "'+r+'" is no valid range');var o;return"bignumber"===t.number?(o=n?f:c,a(o(new e.BigNumber(i.start),new e.BigNumber(i.end),new e.BigNumber(i.step)))):(o=n?u:s,a(o(i.start,i.end,i.step)))}function s(e,t,r){var n=[],i=e;if(r>0)for(;t>i;)n.push(i),i+=r;else if(0>r)for(;i>t;)n.push(i),i+=r;return n}function u(e,t,r){var n=[],i=e;if(r>0)for(;t>=i;)n.push(i),i+=r;else if(0>r)for(;i>=t;)n.push(i),i+=r;return n}function c(e,t,r){var n=[],i=e;if(r.gt(m))for(;i.lt(t);)n.push(i),i=i.plus(r);else if(r.lt(m))for(;i.gt(t);)n.push(i),i=i.plus(r);return n}function f(e,t,r){var n=[],i=e;if(r.gt(m))for(;i.lte(t);)n.push(i),i=i.plus(r);else if(r.lt(m))for(;i.gte(t);)n.push(i),i=i.plus(r);return n}function l(e){var t=e.split(":"),r=t.map(function(e){return Number(e)}),n=r.some(function(e){return isNaN(e)});if(n)return null;switch(r.length){case 2:return{start:r[0],end:r[1],step:1};case 3:return{start:r[0],end:r[2],step:r[1]};default:return null}}var p=n(r(50)),m=new e.BigNumber(0),h=new e.BigNumber(1),g=i("range",{string:o,"string, boolean":o,"number, 
number":function(e,t){return a(s(e,t,1))},"number, number, number":function(e,t,r){return a(s(e,t,r))},"number, number, boolean":function(e,t,r){return a(r?u(e,t,1):s(e,t,1))},"number, number, number, boolean":function(e,t,r,n){return a(n?u(e,t,r):s(e,t,r))},"BigNumber, BigNumber":function(e,t){return a(c(e,t,h))},"BigNumber, BigNumber, BigNumber":function(e,t,r){return a(c(e,t,r))},"BigNumber, BigNumber, boolean":function(e,t,r){return a(r?f(e,t,h):c(e,t,h))},"BigNumber, BigNumber, BigNumber, boolean":function(e,t,r,n){return a(n?f(e,t,r):c(e,t,r))}});return g.toTex="\\mathrm{${name}}\\left(${args}\\right)",g}t.name="range",t.factory=n},function(e,t,r){"use strict";function n(e,t,n,a){var o=n(r(318));return a("subset",{"...any":function(e){try{return o.apply(null,e)}catch(t){throw i(t)}}})}var i=r(294).transform;t.name="subset",t.path="expression.transform",t.factory=n},function(e,t,r){"use strict";function n(e,t,n,s){function u(e,t){if(!t||t.isIndex!==!0)throw new TypeError("Index expected");if(1!=t.size().length)throw new o(t.size().length,1);var r=e.length;a(t.min()[0],r),a(t.max()[0],r);var n=t.dimension(0),i="";return n.forEach(function(t){i+=e.charAt(t)}),i}function c(e,t,r,n){if(!t||t.isIndex!==!0)throw new TypeError("Index expected");if(1!=t.size().length)throw new o(t.size().length,1);if(void 0!==n){if("string"!=typeof n||1!==n.length)throw new TypeError("Single character expected as defaultValue")}else n=" ";var i=t.dimension(0),s=i.size()[0];if(s!=r.length)throw new o(i.size()[0],r.length);var u=e.length;a(t.min()[0]),a(t.max()[0]);for(var c=[],f=0;u>f;f++)c[f]=e.charAt(f);if(i.forEach(function(e,t){c[e]=r.charAt(t[0])}),c.length>u)for(f=u-1,s=c.length;s>f;f++)c[f]||(c[f]=n);return c.join("")}var f=n(r(50)),l=s("subset",{"Array, Index":function(e,t){var r=f(e),n=r.subset(t);return n&&n.valueOf()},"Matrix, Index":function(e,t){return e.subset(t)},"string, Index":u,"Array, Index, any":function(e,t,r){return f(i(e)).subset(t,r,void 0).valueOf()},"Array, Index, any, any":function(e,t,r,n){return f(i(e)).subset(t,r,n).valueOf()},"Matrix, Index, any":function(e,t,r){return e.clone().subset(t,r)},"Matrix, Index, any, any":function(e,t,r,n){return e.clone().subset(t,r,n)},"string, Index, string":c,"string, Index, string, string":c});return l.toTex="\\mathrm{${name}}\\left(${args}\\right)",l}var i=r(3).clone,a=r(39).validateIndex,o=r(41);t.name="subset",t.factory=n},function(e,t,r){"use strict";function n(e,t,n,o){function s(e){if(!(this instanceof s))throw new SyntaxError("Constructor must be called with the new operator");if(!e)throw new Error('Argument "doc" missing');this.doc=e}var u=n(r(289))();return s.prototype.type="Help", -s.prototype.isHelp=!0,s.prototype.toString=function(){var e=this.doc||{},t="\n";if(e.name&&(t+="Name: "+e.name+"\n\n"),e.category&&(t+="Category: "+e.category+"\n\n"),e.description&&(t+="Description:\n "+e.description+"\n\n"),e.syntax&&(t+="Syntax:\n "+e.syntax.join("\n ")+"\n\n"),e.examples){t+="Examples:\n";for(var r=0;rt;t++)w[t]=t;for(r=0;o>r;r++){if(r>0)for(t=0;i>t;t++){var N=Math.min(t,r),E=0;for(n=0;N>n;n++)E=u(E,f(g[t][n],g[n][r]));g[t][r]=l(g[t][r],E)}var M=r,A=0,_=0;for(t=r;i>t;t++){var O=g[t][r],T=s(O);p(T,A)&&(M=t,A=T,_=O)}if(r!==M&&(w[r]=[w[M],w[M]=w[r]][0],v._swapRows(r,M,g)),i>r)for(t=r+1;i>t;t++){var C=g[t][r];m(C,0)||(g[t][r]=c(g[t][r],_))}}for(r=0;o>r;r++)for(t=0;i>t;t++)0===r&&(o>t&&(x[t]=[]),d[t]=[]),r>t?(o>t&&(x[t][r]=g[t][r]),i>r&&(d[t][r]=0)):t!==r?(o>t&&(x[t][r]=0),i>r&&(d[t][r]=g[t][r])):(o>t&&(x[t][r]=g[t][r]),i>r&&(d[t][r]=1));var 
S=new v({data:d,size:y}),z=new v({data:x,size:b}),B=[];for(t=0,h=w.length;h>t;t++)B[w[t]]=t;return{L:S,U:z,p:B,toString:function(){return"L: "+this.L.toString()+"\nU: "+this.U.toString()+"\nP: "+this.p}}},b=function(e){var t,r,n,i=e._size[0],a=e._size[1],o=Math.min(i,a),u=e._values,l=e._index,v=e._ptr,y=[],x=[],b=[],w=[i,o],N=[],E=[],M=[],A=[o,a],_=[],O=[];for(t=0;i>t;t++)_[t]=t,O[t]=t;var T=function(e,t){var r=O[e],n=O[t];_[r]=t,_[n]=e,O[e]=n,O[t]=r};for(r=0;a>r;r++){var C=new d;i>r&&(b.push(y.length),y.push(1),x.push(r)),M.push(N.length);var S=v[r],z=v[r+1];for(n=S;z>n;n++)t=l[n],C.set(_[t],u[n]);r>0&&C.forEach(0,r-1,function(e,t){g._forEachRow(e,y,x,b,function(r,n){r>e&&C.accumulate(r,h(f(n,t)))})});var B=r,k=C.get(r),I=s(k);C.forEach(r+1,i-1,function(e,t){var r=s(t);p(r,I)&&(B=e,I=r,k=t)}),r!==B&&(g._swapRows(r,B,w[1],y,x,b),g._swapRows(r,B,A[1],N,E,M),C.swap(r,B),T(r,B)),C.forEach(0,i-1,function(e,t){r>=e?(N.push(t),E.push(e)):(t=c(t,k),m(t,0)||(y.push(t),x.push(e)))})}return M.push(N.length),b.push(y.length),{L:new g({values:y,index:x,ptr:b,size:w}),U:new g({values:N,index:E,ptr:M,size:A}),p:_,toString:function(){return"L: "+this.L.toString()+"\nU: "+this.U.toString()+"\nP: "+this.p}}};return y}var i=r(38),a=i.object;t.name="lup",t.factory=n},function(e,t,r){"use strict";function n(e,t,n,i){var a=n(r(324)),s=n(r(335)),u=i("slu",{"SparseMatrix, number, number":function(e,t,r){if(!o(t)||0>t||t>3)throw new Error("Symbolic Ordering and Analysis order must be an integer number in the interval [0, 3]");if(0>r||r>1)throw new Error("Partial pivoting threshold must be a number from 0 to 1");var n=a(t,e,!1),i=s(e,n,r);return{L:i.L,U:i.U,p:i.pinv,q:n.q,toString:function(){return"L: "+this.L.toString()+"\nU: "+this.U.toString()+"\np: "+this.p.toString()+(this.q?"\nq: "+this.q.toString():"")+"\n"}}}});return u}var i=r(38),a=i.number,o=a.isInteger;t.name="slu",t.factory=n},function(e,t,r){"use strict";function n(e,t,n){var i=n(r(325)),a=n(r(330)),o=n(r(331)),s=n(r(332)),u=n(r(333)),c=function(e,t,r){var n,c=t._ptr,l=t._size,p=l[1],m={};if(m.q=i(e,t),e&&!m.q)return null;if(r){var h=e?a(t,null,m.q,0):t;m.parent=o(h,1);var g=s(m.parent,p);if(m.cp=u(h,m.parent,g,1),h&&m.parent&&m.cp&&f(h,m))for(m.unz=0,n=0;p>n;n++)m.unz+=m.cp[n]}else m.unz=4*c[p]+p,m.lnz=m.unz;return m},f=function(e,t){var r=e._ptr,n=e._index,i=e._size,a=i[0],o=i[1];t.pinv=[],t.leftmost=[];var s,u,c,f,l,p=t.parent,m=t.pinv,h=t.leftmost,g=[],v=0,d=a,y=a+o,x=a+2*o;for(u=0;o>u;u++)g[d+u]=-1,g[y+u]=-1,g[x+u]=0;for(s=0;a>s;s++)h[s]=-1;for(u=o-1;u>=0;u--)for(f=r[u],l=r[u+1],c=f;l>c;c++)h[n[c]]=u;for(s=a-1;s>=0;s--)m[s]=-1,u=h[s],-1!=u&&(0===g[x+u]++&&(g[y+u]=s),g[v+s]=g[d+u],g[d+u]=s);for(t.lnz=0,t.m2=a,u=0;o>u;u++)if(s=g[d+u],t.lnz++,0>s&&(s=t.m2++),m[s]=u,!(--x[u]<=0)){t.lnz+=g[x+u];var b=p[u];-1!=b&&(0===g[x+b]&&(g[y+b]=g[y+u]),g[v+g[y+u]]=g[d+b],g[d+b]=g[v+s],g[x+b]+=g[x+u])}for(s=0;a>s;s++)m[s]<0&&(m[s]=u++);return!0};return c}t.name="cs_sqr",t.path="sparse",t.factory=n},function(e,t,r){"use strict";function n(e,t,n){var i=n(r(326)),a=n(r(327)),o=n(r(328)),s=n(r(49)),u=n(r(83)),c=n(r(329)),f=function(e,t){if(!t||0>=e||e>3)return null;var r=t._size,n=r[0],s=r[1],u=0,c=Math.max(16,10*Math.sqrt(s));c=Math.min(s-2,c);var f=l(e,t,n,s,c);a(f,g,null);for(var v,d,y,x,b,w,N,E,M,A,_,O,T,C,S,z,B=f._index,k=f._ptr,I=k[s],R=[],P=[],U=0,q=s+1,L=2*(s+1),F=3*(s+1),D=4*(s+1),$=5*(s+1),j=6*(s+1),G=7*(s+1),H=R,V=p(s,k,P,U,F,H,L,G,q,j,D,$),Z=m(s,k,P,$,D,j,c,q,F,H,L),Y=0;s>Z;){for(y=-1;s>Y&&-1==(y=P[F+Y]);Y++);-1!=P[L+y]&&(H[P[L+y]]=-1),P[F+Y]=P[L+y];var 
W=P[D+y],X=P[q+y];Z+=X;var J=0;P[q+y]=-X;var Q=k[y],K=0===W?Q:I,ee=K;for(x=1;W+1>=x;x++){for(x>W?(w=y,N=Q,E=P[U+y]-W):(w=B[Q++],N=k[w],E=P[U+w]),b=1;E>=b;b++)v=B[N++],(M=P[q+v])<=0||(J+=M,P[q+v]=-M,B[ee++]=v,-1!=P[L+v]&&(H[P[L+v]]=H[v]),-1!=H[v]?P[L+H[v]]=P[L+v]:P[F+P[$+v]]=P[L+v]);w!=y&&(k[w]=i(y),P[j+w]=0)}for(0!==W&&(I=ee),P[$+y]=J,k[y]=K,P[U+y]=ee-K,P[D+y]=-2,V=h(V,u,P,j,s),A=K;ee>A;A++)if(v=B[A],!((_=P[D+v])<=0)){M=-P[q+v];var te=V-M;for(Q=k[v],O=k[v]+_-1;O>=Q;Q++)w=B[Q],P[j+w]>=V?P[j+w]-=M:0!==P[j+w]&&(P[j+w]=P[$+w]+te)}for(A=K;ee>A;A++){for(v=B[A],O=k[v],T=O+P[D+v]-1,C=O,S=0,z=0,Q=O;T>=Q;Q++)if(w=B[Q],0!==P[j+w]){var re=P[j+w]-V;re>0?(z+=re,B[C++]=w,S+=w):(k[w]=i(y),P[j+w]=0)}P[D+v]=C-O+1;var ne=C,ie=O+P[U+v];for(Q=T+1;ie>Q;Q++){d=B[Q];var ae=P[q+d];0>=ae||(z+=ae,B[C++]=d,S+=d)}0===z?(k[v]=i(y),M=-P[q+v],J-=M,X+=M,Z+=M,P[q+v]=0,P[D+v]=-1):(P[$+v]=Math.min(P[$+v],z),B[C]=B[ne],B[ne]=B[O],B[O]=y,P[U+v]=C-O+1,S=(0>S?-S:S)%s,P[L+v]=P[G+S],P[G+S]=v,H[v]=S)}for(P[$+y]=J,u=Math.max(u,J),V=h(V+u,u,P,j,s),A=K;ee>A;A++)if(v=B[A],!(P[q+v]>=0))for(S=H[v],v=P[G+S],P[G+S]=-1;-1!=v&&-1!=P[L+v];v=P[L+v],V++){for(E=P[U+v],_=P[D+v],Q=k[v]+1;Q<=k[v]+E-1;Q++)P[j+B[Q]]=V;var oe=v;for(d=P[L+v];-1!=d;){var se=P[U+d]===E&&P[D+d]===_;for(Q=k[d]+1;se&&Q<=k[d]+E-1;Q++)P[j+B[Q]]!=V&&(se=0);se?(k[d]=i(v),P[q+v]+=P[q+d],P[q+d]=0,P[D+d]=-1,d=P[L+d],P[L+oe]=d):(oe=d,d=P[L+d])}}for(Q=K,A=K;ee>A;A++)v=B[A],(M=-P[q+v])<=0||(P[q+v]=M,z=P[$+v]+J-M,z=Math.min(z,s-Z-M),-1!=P[F+z]&&(H[P[F+z]]=v),P[L+v]=P[F+z],H[v]=-1,P[F+z]=v,Y=Math.min(Y,z),P[$+v]=z,B[Q++]=v);P[q+y]=X,0===(P[U+y]=Q-K)&&(k[y]=-1,P[j+y]=0),0!==W&&(I=Q)}for(v=0;s>v;v++)k[v]=i(k[v]);for(d=0;s>=d;d++)P[F+d]=-1;for(d=s;d>=0;d--)P[q+d]>0||(P[L+d]=P[F+k[d]],P[F+k[d]]=d);for(w=s;w>=0;w--)P[q+w]<=0||-1!=k[w]&&(P[L+w]=P[F+k[w]],P[F+k[w]]=w);for(y=0,v=0;s>=v;v++)-1==k[v]&&(y=o(v,y,P,F,L,R,j));return R.splice(R.length-1,1),R},l=function(e,t,r,n,i){var a=c(t);if(1===e&&n===r)return s(t,a);if(2==e){for(var o=a._index,f=a._ptr,l=0,p=0;r>p;p++){var m=f[p];if(f[p]=l,!(f[p+1]-m>i))for(var h=f[p+1];h>m;m++)o[l++]=o[m]}return f[r]=l,t=c(a),u(a,t)}return u(a,t)},p=function(e,t,r,n,i,a,o,s,u,c,f,l){for(var p=0;e>p;p++)r[n+p]=t[p+1]-t[p];r[n+e]=0;for(var m=0;e>=m;m++)r[i+m]=-1,a[m]=-1,r[o+m]=-1,r[s+m]=-1,r[u+m]=1,r[c+m]=1,r[f+m]=0,r[l+m]=r[n+m];var g=h(0,0,r,c,e);return r[f+e]=-2,t[e]=-1,r[c+e]=0,g},m=function(e,t,r,n,a,o,s,u,c,f,l){for(var p=0,m=0;e>m;m++){var h=r[n+m];if(0===h)r[a+m]=-2,p++,t[m]=-1,r[o+m]=0;else if(h>s)r[u+m]=0,r[a+m]=-1,p++,t[m]=i(e),r[u+e]++;else{var g=r[c+h];-1!=g&&(f[g]=m),r[l+m]=r[c+h],r[c+h]=m}}return p},h=function(e,t,r,n,i){if(2>e||0>e+t){for(var a=0;i>a;a++)0!==r[n+a]&&(r[n+a]=1);e=2}return e},g=function(e,t){return e!=t};return f}t.name="cs_amd",t.path="sparse",t.factory=n},function(e,t){"use strict";function r(){var e=function(e){return-e-2};return e}t.name="cs_flip",t.path="sparse",t.factory=r},function(e,t){"use strict";function r(){var e=function(e,t,r){for(var n=e._values,i=e._index,a=e._ptr,o=e._size,s=o[1],u=0,c=0;s>c;c++){var f=a[c];for(a[c]=u;f=0;){var u=r[o+s],c=r[n+u];-1==c?(s--,a[t++]=u):(r[n+u]=r[i+c],++s,r[o+s]=c)}return t};return e}t.name="cs_tdfs",t.path="sparse",t.factory=r},function(e,t,r){"use strict";function n(e,t,n,o){var s=r(29),u=n(r(50)),c=e.DenseMatrix,f=e.SparseMatrix,l=o("transpose",{Array:function(e){return l(u(e)).valueOf()},Matrix:function(e){var t,r=e.size();switch(r.length){case 1:t=e.clone();break;case 2:var n=r[0],i=r[1];if(0===i)throw new RangeError("Cannot transpose a 2D matrix with no columns (size: 
"+a(r)+")");switch(e.storage()){case"dense":t=p(e,n,i);break;case"sparse":t=m(e,n,i)}break;default:throw new RangeError("Matrix must be a vector or two dimensional (size: "+a(this._size)+")")}return t},any:function(e){return i(e)}}),p=function(e,t,r){for(var n,a=e._data,o=[],s=0;r>s;s++){n=o[s]=[];for(var u=0;t>u;u++)n[u]=i(a[u][s])}return new c({data:o,size:[r,t],datatype:e._datatype})},m=function(e,t,r){for(var n=e._values,a=e._index,o=e._ptr,s=n?[]:void 0,u=[],c=[],l=[],p=0;t>p;p++)l[p]=0;var m,h,g;for(m=0,h=a.length;h>m;m++)l[a[m]]++;for(var v=0,d=0;t>d;d++)c.push(v),v+=l[d],l[d]=c[d];for(c.push(v),g=0;r>g;g++)for(var y=o[g],x=o[g+1],b=y;x>b;b++){var w=l[a[b]]++;u[w]=g,n&&(s[w]=i(n[b]))}return new f({values:s,index:u,ptr:c,size:[r,t],datatype:e._datatype})};return l.toTex="\\left(${args[0]}\\right)"+s.operators.transpose,l}var i=r(3).clone,a=r(23).format;t.name="transpose",t.factory=n},function(e,t){"use strict";function r(e){var t=e.SparseMatrix,r=function(e,r,n,i){for(var a=e._values,o=e._index,s=e._ptr,u=e._size,c=e._datatype,f=u[0],l=u[1],p=i&&e._values?[]:null,m=[],h=[],g=0,v=0;l>v;v++){h[v]=g;for(var d=n?n[v]:v,y=s[d],x=s[d+1],b=y;x>b;b++){var w=r?r[o[b]]:o[b];m[g]=w,p&&(p[g]=a[b]),g++}}return h[l]=g,new t({values:p,index:m,ptr:h,size:[f,l],datatype:c})};return r}t.name="cs_permute",t.path="sparse",t.factory=r},function(e,t){"use strict";function r(){var e=function(e,t){if(!e)return null;var r,n,i=e._index,a=e._ptr,o=e._size,s=o[0],u=o[1],c=[],f=[],l=0,p=u;if(t)for(r=0;s>r;r++)f[p+r]=-1;for(var m=0;u>m;m++){c[m]=-1,f[l+m]=-1;for(var h=a[m],g=a[m+1],v=h;g>v;v++){var d=i[v];for(r=t?f[p+d]:d;-1!=r&&m>r;r=n)n=f[l+r],f[l+r]=m,-1==n&&(c[r]=m);t&&(f[p+d]=m)}}return c};return e}t.name="cs_etree",t.path="sparse",t.factory=r},function(e,t,r){"use strict";function n(e,t,n){var i=n(r(328)),a=function(e,t){if(!e)return null;var r,n=0,a=[],o=[],s=0,u=t,c=2*t;for(r=0;t>r;r++)o[s+r]=-1;for(r=t-1;r>=0;r--)-1!=e[r]&&(o[u+r]=o[s+e[r]],o[s+e[r]]=r);for(r=0;t>r;r++)-1==e[r]&&(n=i(r,n,o,s,u,a,c));return a};return a}t.name="cs_post",t.path="sparse",t.factory=n},function(e,t,r){"use strict";function n(e,t,n){var i=n(r(329)),a=n(r(334)),o=function(e,t,r,n){if(!e||!t||!r)return null;var o,s,u,c,f,l,p,m=e._size,h=m[0],g=m[1],v=4*g+(n?g+h+1:0),d=[],y=0,x=g,b=2*g,w=3*g,N=4*g,E=5*g+1;for(u=0;v>u;u++)d[u]=-1;var M=[],A=i(e),_=A._index,O=A._ptr;for(u=0;g>u;u++)for(s=r[u],M[s]=-1==d[w+s]?1:0;-1!=s&&-1==d[w+s];s=t[s])d[w+s]=u;if(n){for(u=0;g>u;u++)d[r[u]]=u;for(o=0;h>o;o++){for(u=g,l=O[o],p=O[o+1],f=l;p>f;f++)u=Math.min(u,d[_[f]]);d[E+o]=d[N+u],d[N+u]=o}}for(o=0;g>o;o++)d[y+o]=o;for(u=0;g>u;u++){for(s=r[u],-1!=t[s]&&M[t[s]]--,c=n?d[N+u]:s;-1!=c;c=n?d[E+c]:-1)for(f=O[c];f=1&&M[s]++,2==T.jleaf&&M[T.q]--}-1!=t[s]&&(d[y+s]=t[s])}for(s=0;g>s;s++)-1!=t[s]&&(M[t[s]]+=M[s]);return M};return o}t.name="cs_counts",t.path="sparse",t.factory=n},function(e,t){"use strict";function r(){var e=function(e,t,r,n,i,a,o){var s,u,c,f,l=0;if(t>=e||r[n+t]<=r[i+e])return-1;if(r[i+e]=r[n+t],c=r[a+e],r[a+e]=t,-1===c)l=1,f=e;else{for(l=2,f=c;f!=r[o+f];f=r[o+f]);for(s=c;s!=f;s=u)u=r[o+s],r[o+s]=f}return{jleaf:l,q:f}};return e}t.name="cs_leaf",t.path="sparse",t.factory=r},function(e,t,r){"use strict";function n(e,t,n){var i=n(r(85)),a=n(r(78)),o=n(r(83)),s=n(r(62)),u=n(r(336)),c=n(r(337)),f=e.SparseMatrix,l=function(e,t,r){if(!e)return null;var n,l=e._size,p=l[1],m=100,h=100;t&&(n=t.q,m=t.lnz||m,h=t.unz||h);var g,v,d=[],y=[],x=[],b=new f({values:d,index:y,ptr:x,size:[p,p]}),w=[],N=[],E=[],M=new 
f({values:w,index:N,ptr:E,size:[p,p]}),A=[],_=[],O=[];for(g=0;p>g;g++)_[g]=0,A[g]=-1,x[g+1]=0;m=0,h=0;for(var T=0;p>T;T++){x[T]=m,E[T]=h;var C=n?n[T]:T,S=c(b,e,C,O,_,A,1),z=-1,B=-1;for(v=S;p>v;v++)if(g=O[v],A[g]<0){var k=i(_[g]);s(k,B)&&(B=k,z=g)}else N[h]=A[g],w[h++]=_[g];if(-1==z||0>=B)return null;A[C]<0&&u(i(_[C]),o(B,r))&&(z=C);var I=_[z];for(N[h]=T,w[h++]=I,A[z]=T,y[m]=z,d[m++]=1,v=S;p>v;v++)g=O[v],A[g]<0&&(y[m]=g,d[m++]=a(_[g],I)),_[g]=0}for(x[p]=m,E[p]=h,v=0;m>v;v++)y[v]=A[y[v]];return d.splice(m,d.length-m),y.splice(m,y.length-m),w.splice(h,w.length-h),N.splice(h,N.length-h),{L:b,U:M,pinv:A}};return l}t.name="cs_lu",t.path="sparse",t.factory=n},function(e,t,r){"use strict";function n(e,t,n,a){var o=n(r(50)),s=n(r(59)),u=n(r(60)),c=n(r(61)),f=n(r(55)),l=n(r(56)),p=r(29),m=a("largerEq",{"boolean, boolean":function(e,t){return e>=t},"number, number":function(e,r){return e>=r||i(e,r,t.epsilon)},"BigNumber, BigNumber":function(e,t){return e.gte(t)},"Fraction, Fraction":function(e,t){return-1!==e.compare(t)},"Complex, Complex":function(){throw new TypeError("No ordering relation is defined for complex numbers")},"Unit, Unit":function(e,t){if(!e.equalBase(t))throw new Error("Cannot compare units with different base");return m(e.value,t.value)},"string, string":function(e,t){return e>=t},"Matrix, Matrix":function(e,t){var r;switch(e.storage()){case"sparse":switch(t.storage()){case"sparse":r=u(e,t,m);break;default:r=s(t,e,m,!0)}break;default:switch(t.storage()){case"sparse":r=s(e,t,m,!1);break;default:r=f(e,t,m)}}return r},"Array, Array":function(e,t){return m(o(e),o(t)).valueOf()},"Array, Matrix":function(e,t){return m(o(e),t)},"Matrix, Array":function(e,t){return m(e,o(t))},"Matrix, any":function(e,t){var r;switch(e.storage()){case"sparse":r=c(e,t,m,!1);break;default:r=l(e,t,m,!1)}return r},"any, Matrix":function(e,t){var r;switch(t.storage()){case"sparse":r=c(t,e,m,!0);break;default:r=l(t,e,m,!0)}return r},"Array, any":function(e,t){return l(o(e),t,m,!1).valueOf()},"any, Array":function(e,t){return l(o(t),e,m,!0).valueOf()}});return m.toTex="\\left(${args[0]}"+p.operators.largerEq+"${args[1]}\\right)",m}var i=r(6).nearlyEqual;t.name="largerEq",t.factory=n},function(e,t,r){"use strict";function n(e,t,n){var i=n(r(78)),a=n(r(83)),o=n(r(74)),s=n(r(338)),u=function(e,t,r,n,u,c,f){var l,p,m,h,g=e._values,v=e._index,d=e._ptr,y=e._size,x=y[1],b=t._values,w=t._index,N=t._ptr,E=s(e,t,r,n,c);for(l=E;x>l;l++)u[n[l]]=0;for(p=N[r],m=N[r+1],l=p;m>l;l++)u[w[l]]=b[l];for(var M=E;x>M;M++){var A=n[M],_=c?c[A]:A;if(!(0>_))for(p=d[_],m=d[_+1],u[A]=i(u[A],g[f?p:m-1]),l=f?p+1:p,h=f?m:m-1;h>l;l++){var O=v[l];u[O]=o(u[O],a(g[l],u[A]))}}return E};return u}t.name="cs_spsolve",t.path="sparse",t.factory=n},function(e,t,r){"use strict";function n(e,t,n){var i=n(r(339)),a=n(r(340)),o=n(r(341)),s=function(e,t,r,n,s){var u,c,f,l=e._ptr,p=e._size,m=t._index,h=t._ptr,g=p[1],v=g;for(c=h[r],f=h[r+1],u=c;f>u;u++){var d=m[u];a(l,d)||(v=i(d,e,v,n,s))}for(u=v;g>u;u++)o(l,n[u]);return v};return s}t.name="cs_reach",t.path="sparse",t.factory=n},function(e,t,r){"use strict";function n(e,t,n){var i=n(r(340)),a=n(r(341)),o=n(r(342)),s=function(e,t,r,n,s){var u,c,f,l=t._index,p=t._ptr,m=t._size,h=m[1],g=0;for(n[0]=e;g>=0;){e=n[g];var v=s?s[e]:e;i(p,e)||(a(p,e),n[h+g]=0>v?0:o(p[v]));var d=1;for(c=n[h+g],f=0>v?0:o(p[v+1]);f>c;c++)if(u=l[c],!i(p,u)){n[h+g]=c,n[++g]=u,d=0;break}d&&(g--,n[--r]=e)}return r};return s}t.name="cs_dfs",t.path="sparse",t.factory=n},function(e,t){"use strict";function r(){var e=function(e,t){return 
e[t]<0};return e}t.name="cs_marked",t.path="sparse",t.factory=r},function(e,t,r){"use strict";function n(e,t,n){var i=n(r(326)),a=function(e,t){e[t]=i(e[t])};return a}t.name="cs_mark",t.path="sparse",t.factory=n},function(e,t,r){"use strict";function n(e,t,n){var i=n(r(326)),a=function(e){return 0>e?i(e):e};return a}t.name="cs_unflip",t.path="sparse",t.factory=n},function(e,t,r){"use strict";function n(e,t,n,i){var a=n(r(50)),o=n(r(78)),s=n(r(77)),u=n(r(74)),c=n(r(47)),f=n(r(344)),l=e.DenseMatrix,p=i("lsolve",{"SparseMatrix, Array | Matrix":function(e,t){return h(e,t)},"DenseMatrix, Array | Matrix":function(e,t){return m(e,t)},"Array, Array | Matrix":function(e,t){var r=a(e),n=m(r,t);return n.valueOf()}}),m=function(e,t){t=f(e,t,!0);for(var r=t._data,n=e._size[0],i=e._size[1],a=[],p=e._data,m=0;i>m;m++){var h,g=r[m][0]||0;if(c(g,0))h=0;else{var v=p[m][m];if(c(v,0))throw new Error("Linear system cannot be solved since matrix is singular");h=o(g,v);for(var d=m+1;n>d;d++)r[d]=[u(r[d][0]||0,s(h,p[d][m]))]}a[m]=[h]}return new l({data:a,size:[n,1]})},h=function(e,t){t=f(e,t,!0);for(var r,n,i=t._data,a=e._size[0],p=e._size[1],m=e._values,h=e._index,g=e._ptr,v=[],d=0;p>d;d++){var y=i[d][0]||0;if(c(y,0))v[d]=[0];else{var x=0,b=[],w=[],N=g[d+1];for(n=g[d];N>n;n++)r=h[n],r===d?x=m[n]:r>d&&(b.push(m[n]),w.push(r));if(c(x,0))throw new Error("Linear system cannot be solved since matrix is singular");var E=o(y,x);for(n=0,N=w.length;N>n;n++)r=w[n],i[r]=[u(i[r][0]||0,s(E,b[n]))];v[d]=[E]}}return new l({data:v,size:[a,1]})};return p}t.name="lsolve",t.factory=n},function(e,t,r){"use strict";function n(e){var t=e.DenseMatrix,r=function(e,r,n){var i=e.size();if(2!==i.length)throw new RangeError("Matrix must be two dimensional (size: "+a.format(i)+")");var u=i[0],c=i[1];if(u!==c)throw new RangeError("Matrix must be square (size: "+a.format(i)+")");var f,l,p;if(r&&r.isMatrix===!0){var m=r.size();if(1===m.length){if(m[0]!==u)throw new RangeError("Dimension mismatch. Matrix columns must match vector length.");for(f=[],p=r._data,l=0;u>l;l++)f[l]=[p[l]];return new t({data:f,size:[u,1],datatype:r._datatype})}if(2===m.length){if(m[0]!==u||1!==m[1])throw new RangeError("Dimension mismatch. Matrix columns must match vector length.");if(r.isDenseMatrix===!0){if(n){for(f=[],p=r._data,l=0;u>l;l++)f[l]=[p[l][0]];return new t({data:f,size:[u,1],datatype:r._datatype})}return r}for(f=[],l=0;u>l;l++)f[l]=[0];for(var h=r._values,g=r._index,v=r._ptr,d=v[1],y=v[0];d>y;y++)l=g[y],f[l][0]=h[y];return new t({data:f,size:[u,1],datatype:r._datatype})}throw new RangeError("Dimension mismatch. Matrix columns must match vector length.")}if(s(r)){var x=o.size(r);if(1===x.length){if(x[0]!==u)throw new RangeError("Dimension mismatch. Matrix columns must match vector length.");for(f=[],l=0;u>l;l++)f[l]=[r[l]];return new t({data:f,size:[u,1]})}if(2===x.length){if(x[0]!==u||1!==x[1])throw new RangeError("Dimension mismatch. Matrix columns must match vector length.");for(f=[],l=0;u>l;l++)f[l]=[r[l][0]];return new t({data:f,size:[u,1]})}throw new RangeError("Dimension mismatch. 
Matrix columns must match vector length.")}};return r}var i=r(38),a=i.string,o=i.array,s=Array.isArray;t.factory=n},function(e,t,r){"use strict";function n(e,t,n,a){var o=n(r(50)),s=n(r(322)),u=n(r(323)),c=n(r(346)),f=n(r(344)),l=n(r(347)),p=n(r(343)),m=a("lusolve",{"Array, Array | Matrix":function(e,t){e=o(e);var r=s(e),n=g(r.L,r.U,r.p,null,t);return n.valueOf()},"DenseMatrix, Array | Matrix":function(e,t){var r=s(e);return g(r.L,r.U,r.p,null,t)},"SparseMatrix, Array | Matrix":function(e,t){var r=s(e);return g(r.L,r.U,r.p,null,t)},"SparseMatrix, Array | Matrix, number, number":function(e,t,r,n){var i=u(e,r,n);return g(i.L,i.U,i.p,i.q,t)},"Object, Array | Matrix":function(e,t){return g(e.L,e.U,e.p,e.q,t)}}),h=function(e){if(e&&e.isMatrix===!0)return e;if(i(e))return o(e);throw new TypeError("Invalid Matrix LU decomposition")},g=function(e,t,r,n,i){e=h(e),t=h(t),i=f(e,i,!1),r&&(i._data=c(r,i._data));var a=p(e,i),o=l(t,a);return n&&(o._data=c(n,o._data)),o};return m}var i=Array.isArray;t.name="lusolve",t.factory=n},function(e,t){"use strict";function r(){var e=function(e,t,r){var n,r=t.length,i=[];if(e)for(n=0;r>n;n++)i[e[n]]=t[n];else for(n=0;r>n;n++)i[n]=t[n];return i};return e}t.name="cs_ipvec",t.path="sparse",t.factory=r},function(e,t,r){"use strict";function n(e,t,n,i){var a=n(r(50)),o=n(r(78)),s=n(r(77)),u=n(r(74)),c=n(r(47)),f=n(r(344)),l=e.DenseMatrix,p=i("usolve",{"SparseMatrix, Array | Matrix":function(e,t){return h(e,t)},"DenseMatrix, Array | Matrix":function(e,t){return m(e,t)},"Array, Array | Matrix":function(e,t){var r=a(e),n=m(r,t);return n.valueOf()}}),m=function(e,t){t=f(e,t,!0);for(var r=t._data,n=e._size[0],i=e._size[1],a=[],p=e._data,m=i-1;m>=0;m--){var h,g=r[m][0]||0;if(c(g,0))h=0;else{var v=p[m][m];if(c(v,0))throw new Error("Linear system cannot be solved since matrix is singular");h=o(g,v);for(var d=m-1;d>=0;d--)r[d]=[u(r[d][0]||0,s(h,p[d][m]))]}a[m]=[h]}return new l({data:a,size:[n,1]})},h=function(e,t){t=f(e,t,!0);for(var r,n,i=t._data,a=e._size[0],p=e._size[1],m=e._values,h=e._index,g=e._ptr,v=[],d=p-1;d>=0;d--){var y=i[d][0]||0;if(c(y,0))v[d]=[0];else{var x=0,b=[],w=[],N=g[d],E=g[d+1];for(n=E-1;n>=N;n--)r=h[n],r===d?x=m[n]:d>r&&(b.push(m[n]),w.push(r));if(c(x,0))throw new Error("Linear system cannot be solved since matrix is singular");var M=o(y,x);for(n=0,E=w.length;E>n;n++)r=w[n],i[r]=[u(i[r][0],s(M,b[n]))];v[d]=[M]}}return new l({data:v,size:[a,1]})};return p}t.name="usolve",t.factory=n},function(e,t,r){e.exports=[r(85),r(49),r(51),r(349),r(351),r(352),r(310),r(353),r(355),r(357),r(80),r(358),r(359),r(360),r(361),r(364),r(82),r(367),r(368),r(83),r(369),r(371),r(79),r(372),r(374),r(362),r(375),r(74),r(75),r(376),r(377)]},function(e,t,r){"use strict";function n(e,t,n,a){function o(e){if(0===e)return e;var t,r=0>e;return r&&(e=-e),isFinite(e)?(t=Math.exp(Math.log(e)/3),t=(e/(t*t)+2*t)/3):t=e,r?-t:t}function s(r,n){var i=r.toPolar(),a=m(new e.Complex(o(i.r),0),h(new e.Complex(0,i.phi/3)));if(n){var s=[a,m(new e.Complex(o(i.r),0),h(new e.Complex(0,i.phi/3+2*Math.PI/3))),m(new e.Complex(o(i.r),0),h(new e.Complex(0,i.phi/3-2*Math.PI/3)))];return"array"===t.matrix?s:p(s)}return a}function u(e){if(e.isZero())return e;var t,r=e.isNegative();return r&&(e=e.neg()),e.isFinite()?(t=e.ln().div(3).exp(),t=e.div(t.times(t)).plus(t.times(2)).div(3)):t=1/0,r?t.neg():t}function c(t){if(t.value&&t.value.isComplex){var r=t.clone();return r.value=1,r=r.pow(1/3),r.value=s(t.value),r}var n=l(t.value);n&&(t.value=f(t.value));var i;i=t.value&&t.value.isBigNumber?new 
e.BigNumber(1).div(3):t.value&&t.value.isFraction?new e.Fraction(1,3):1/3;var r=t.pow(i);return n&&(r.value=f(r.value)),r}var f=n(r(75)),l=n(r(350)),p=n(r(50)),m=a.find(n(r(77)),["Complex,Complex"]),h=a.find(n(r(80)),["Complex"]),g=a("cbrt",{number:o,Complex:s,"Complex, boolean":s,BigNumber:u,Unit:c,"Array | Matrix":function(e){return i(e,g,!0)}});return g.toTex="\\sqrt[3]{${args[0]}}",g}var i=r(19);t.name="cbrt",t.factory=n},function(e,t,r){"use strict";function n(e,t,r,n){var a=n("isNegative",{number:function(e){return 0>e},BigNumber:function(e){return e.isNeg()&&!e.isZero()&&!e.isNaN()},Fraction:function(e){return e.s<0&&e.n>0},Unit:function(e){return a(e.value)},"Array | Matrix":function(e){return i(e,a)}});return a}var i=r(19);r(6);t.name="isNegative",t.factory=n},function(e,t,r){"use strict";function n(e,t,r,n){var a=n("ceil",{number:Math.ceil,Complex:function(t){return new e.Complex(Math.ceil(t.re),Math.ceil(t.im))},BigNumber:function(e){return e.ceil()},Fraction:function(e){return e.ceil()},"Array | Matrix":function(e){return i(e,a,!0)}});return a.toTex="\\left\\lceil${args[0]}\\right\\rceil",a}var i=r(19);t.name="ceil",t.factory=n},function(e,t,r){"use strict";function n(e,t,n,a){var o=a.find(n(r(77)),["Complex,Complex"]),s=a("cube",{number:function(e){return e*e*e},Complex:function(e){return o(o(e,e),e)},BigNumber:function(e){return e.times(e).times(e)},Fraction:function(e){return e.mul(e).mul(e)},"Array | Matrix":function(e){return i(e,s,!0)},Unit:function(e){return e.pow(3)}});return s.toTex="\\left(${args[0]}\\right)^3",s}var i=r(19);t.name="cube",t.factory=n},function(e,t,r){"use strict";function n(e,t,n,i){var a=n(r(50)),o=n(r(78)),s=r(29),u=n(r(354)),c=n(r(59)),f=n(r(60)),l=n(r(84)),p=n(r(61)),m=n(r(55)),h=n(r(56)),g=i("dotDivide",{"any, any":o,"Matrix, Matrix":function(e,t){var r;switch(e.storage()){case"sparse":switch(t.storage()){case"sparse":r=f(e,t,o,!1);break;default:r=u(t,e,o,!0)}break;default:switch(t.storage()){case"sparse":r=c(e,t,o,!1);break;default:r=m(e,t,o)}}return r},"Array, Array":function(e,t){return g(a(e),a(t)).valueOf()},"Array, Matrix":function(e,t){return g(a(e),t)},"Matrix, Array":function(e,t){return g(e,a(t))},"Matrix, any":function(e,t){var r;switch(e.storage()){case"sparse":r=l(e,t,o,!1);break;default:r=h(e,t,o,!1)}return r},"any, Matrix":function(e,t){var r;switch(t.storage()){case"sparse":r=p(t,e,o,!0);break;default:r=h(t,e,o,!0)}return r},"Array, any":function(e,t){return h(a(e),t,o,!1).valueOf()},"any, Array":function(e,t){return h(a(t),e,o,!0).valueOf()}});return g.toTex="\\left(${args[0]}"+s.operators.dotDivide+"${args[1]}\\right)",g}t.name="dotDivide",t.factory=n},function(e,t,r){"use strict";function n(e,t,n,a){var o=n(r(47)),s=e.SparseMatrix,u=function(e,t,r,n){var u=e._data,c=e._size,f=e._datatype,l=t._values,p=t._index,m=t._ptr,h=t._size,g=t._datatype;if(c.length!==h.length)throw new i(c.length,h.length);if(c[0]!==h[0]||c[1]!==h[1])throw new RangeError("Dimension mismatch. 
Matrix A ("+c+") must match Matrix B ("+h+")");if(!l)throw new Error("Cannot perform operation on Dense Matrix and Pattern Sparse Matrix");var v,d=c[0],y=c[1],x=o,b=0,w=r;"string"==typeof f&&f===g&&(v=f,x=a.find(o,[v,v]),b=a.convert(0,v),w=a.find(r,[v,v]));for(var N=[],E=[],M=[],A=0;y>A;A++){M[A]=E.length;for(var _=m[A],O=m[A+1],T=_;O>T;T++){var C=p[T],S=n?w(l[T],u[C][A]):w(u[C][A],l[T]);x(S,b)||(E.push(C),N.push(S))}}return M[y]=E.length,new s({values:N,index:E,ptr:M,size:[d,y],datatype:v})};return u}var i=r(41);t.name="algorithm02",t.factory=n},function(e,t,r){"use strict";function n(e,t,n,i){var a=n(r(50)),o=n(r(77)),s=r(29),u=n(r(354)),c=n(r(356)),f=n(r(84)),l=n(r(55)),p=n(r(56)),m=i("dotMultiply",{"any, any":o,"Matrix, Matrix":function(e,t){var r;switch(e.storage()){case"sparse":switch(t.storage()){case"sparse":r=c(e,t,o,!1);break;default:r=u(t,e,o,!0)}break;default:switch(t.storage()){case"sparse":r=u(e,t,o,!1);break;default:r=l(e,t,o)}}return r},"Array, Array":function(e,t){return m(a(e),a(t)).valueOf()},"Array, Matrix":function(e,t){return m(a(e),t)},"Matrix, Array":function(e,t){return m(e,a(t))},"Matrix, any":function(e,t){var r;switch(e.storage()){case"sparse":r=f(e,t,o,!1);break;default:r=p(e,t,o,!1)}return r},"any, Matrix":function(e,t){var r;switch(t.storage()){case"sparse":r=f(t,e,o,!0);break;default:r=p(t,e,o,!0)}return r},"Array, any":function(e,t){return p(a(e),t,o,!1).valueOf()},"any, Array":function(e,t){return p(a(t),e,o,!0).valueOf()}});return m.toTex="\\left(${args[0]}"+s.operators.dotMultiply+"${args[1]}\\right)",m}t.name="dotMultiply",t.factory=n},function(e,t,r){"use strict";function n(e,t,n,a){var o=n(r(47)),s=e.SparseMatrix,u=function(e,t,r){var n=e._values,u=e._index,c=e._ptr,f=e._size,l=e._datatype,p=t._values,m=t._index,h=t._ptr,g=t._size,v=t._datatype;if(f.length!==g.length)throw new i(f.length,g.length);if(f[0]!==g[0]||f[1]!==g[1])throw new RangeError("Dimension mismatch. 
Matrix A ("+f+") must match Matrix B ("+g+")");var d,y=f[0],x=f[1],b=o,w=0,N=r;"string"==typeof l&&l===v&&(d=l,b=a.find(o,[d,d]),w=a.convert(0,d),N=a.find(r,[d,d]));var E,M,A,_,O,T=n&&p?[]:void 0,C=[],S=[],z=new s({values:T,index:C,ptr:S,size:[y,x],datatype:d}),B=T?[]:void 0,k=[];for(M=0;x>M;M++){S[M]=C.length;var I=M+1;if(B)for(_=h[M],O=h[M+1],A=_;O>A;A++)E=m[A],k[E]=I,B[E]=p[A];for(_=c[M],O=c[M+1],A=_;O>A;A++)if(E=u[A],B){var R=k[E]===I?B[E]:w,P=N(n[A],R);b(P,w)||(C.push(E),T.push(P))}else C.push(E)}return S[x]=C.length,z};return u}var i=r(41);t.name="algorithm09",t.factory=n},function(e,t,r){"use strict";function n(e,t,n,i){var a=n(r(50)),o=n(r(79)),s=r(29),u=n(r(59)),c=n(r(60)),f=n(r(84)),l=n(r(61)),p=n(r(55)),m=n(r(56)),h=i("dotPow",{"any, any":o,"Matrix, Matrix":function(e,t){var r;switch(e.storage()){case"sparse":switch(t.storage()){case"sparse":r=c(e,t,o,!1);break;default:r=u(t,e,o,!0)}break;default:switch(t.storage()){case"sparse":r=u(e,t,o,!1);break;default:r=p(e,t,o)}}return r},"Array, Array":function(e,t){return h(a(e),a(t)).valueOf()},"Array, Matrix":function(e,t){return h(a(e),t)},"Matrix, Array":function(e,t){return h(e,a(t))},"Matrix, any":function(e,t){var r;switch(e.storage()){case"sparse":r=f(e,t,h,!1);break;default:r=m(e,t,h,!1)}return r},"any, Matrix":function(e,t){var r;switch(t.storage()){case"sparse":r=l(t,e,h,!0);break;default:r=m(t,e,h,!0)}return r},"Array, any":function(e,t){return m(a(e),t,h,!1).valueOf()},"any, Array":function(e,t){return m(a(t),e,h,!0).valueOf()}});return h.toTex="\\left(${args[0]}"+s.operators.dotPow+"${args[1]}\\right)",h}t.name="dotPow",t.factory=n},function(e,t,r){"use strict";function n(e,t,r,n){var a=n("fix",{number:function(e){return e>0?Math.floor(e):Math.ceil(e)},Complex:function(t){return new e.Complex(t.re>0?Math.floor(t.re):Math.ceil(t.re),t.im>0?Math.floor(t.im):Math.ceil(t.im))},BigNumber:function(e){return e.isNegative()?e.ceil():e.floor()},Fraction:function(e){return e.s<0?e.ceil():e.floor()},"Array | Matrix":function(e){return i(e,a,!0)}});return a.toTex="\\mathrm{${name}}\\left(${args}\\right)",a}var i=r(19);t.name="fix",t.factory=n},function(e,t,r){"use strict";function n(e,t,r,n){var a=n("floor",{number:Math.floor,Complex:function(t){return new e.Complex(Math.floor(t.re),Math.floor(t.im))},BigNumber:function(e){return e.floor()},Fraction:function(e){return e.floor()},"Array | Matrix":function(e){return i(e,a,!0)}});return a.toTex="\\left\\lfloor${args[0]}\\right\\rfloor",a}var i=r(19);t.name="floor",t.factory=n},function(e,t,r){"use strict";function n(e,t,n,a){function o(t,r){if(!t.isInt()||!r.isInt())throw new Error("Parameters in function gcd must be integer numbers");for(var n=new e.BigNumber(0);!r.isZero();){var i=t.mod(r);t=r,r=i}return t.lt(n)?t.neg():t}var s=n(r(50)),u=n(r(52)),c=n(r(53)),f=n(r(54)),l=n(r(55)),p=n(r(56)),m=a("gcd",{"number, number":i,"BigNumber, BigNumber":o,"Fraction, Fraction":function(e,t){return e.gcd(t)},"Matrix, Matrix":function(e,t){var r;switch(e.storage()){case"sparse":switch(t.storage()){case"sparse":r=c(e,t,m);break;default:r=u(t,e,m,!0)}break;default:switch(t.storage()){case"sparse":r=u(e,t,m,!1);break;default:r=l(e,t,m)}}return r},"Array, Array":function(e,t){return m(s(e),s(t)).valueOf()},"Array, Matrix":function(e,t){return m(s(e),t)},"Matrix, Array":function(e,t){return m(e,s(t))},"Matrix, number | BigNumber":function(e,t){var r;switch(e.storage()){case"sparse":r=f(e,t,m,!1);break;default:r=p(e,t,m,!1)}return r},"number | BigNumber, Matrix":function(e,t){var 
r;switch(t.storage()){case"sparse":r=f(t,e,m,!0);break;default:r=p(t,e,m,!0)}return r},"Array, number | BigNumber":function(e,t){return p(s(e),t,m,!1).valueOf()},"number | BigNumber, Array":function(e,t){return p(s(t),e,m,!0).valueOf()},"Array | Matrix | number | BigNumber, Array | Matrix | number | BigNumber, ...Array | Matrix | number | BigNumber":function(e,t,r){for(var n=m(e,t),i=0;ie?-e:e}var a=r(6).isInteger;t.name="gcd",t.factory=n},function(e,t,r){"use strict";function n(e,t,n,a){function o(e){for(var t=0,r=0,n=0;n=0||t.predictable?Math.sqrt(r):o(new e.Complex(r,0))}function o(t){var r,n,i=Math.sqrt(t.re*t.re+t.im*t.im);return r=t.re>=0?.5*Math.sqrt(2*(i+t.re)):Math.abs(t.im)/Math.sqrt(2*(i-t.re)),n=t.re<=0?.5*Math.sqrt(2*(i-t.re)):Math.abs(t.im)/Math.sqrt(2*(i+t.re)),t.im>=0?new e.Complex(r,n):new e.Complex(r,-n)}var s=n("sqrt",{number:a,Complex:o,BigNumber:function(e){return!e.isNegative()||t.predictable?e.sqrt():a(e.toNumber())},"Array | Matrix":function(e){return i(e,s,!0)},Unit:function(e){return e.pow(.5)}});return s.toTex="\\sqrt{${args[0]}}",s}var i=r(19);t.name="sqrt",t.factory=n},function(e,t,r){"use strict";function n(e,t,r,n){var a=n("isPositive",{number:function(e){return e>0},BigNumber:function(e){return!e.isNeg()&&!e.isZero()&&!e.isNaN()},Fraction:function(e){return e.s>0&&e.n>0},Unit:function(e){return a(e.value)},"Array | Matrix":function(e){return i(e,a)}});return a}var i=r(19);r(6);t.name="isPositive",t.factory=n},function(e,t,r){"use strict";function n(e,t,n,a){function o(t,r){if(!t.isInt()||!r.isInt())throw new Error("Parameters in function lcm must be integer numbers");if(t.isZero()||r.isZero())return new e.BigNumber(0);for(var n=t.times(r);!r.isZero();){var i=r;r=t.mod(i),t=i}return n.div(t).abs()}var s=n(r(50)),u=n(r(354)),c=n(r(365)),f=n(r(84)),l=n(r(55)),p=n(r(56)),m=a("lcm",{"number, number":i,"BigNumber, BigNumber":o,"Fraction, Fraction":function(t,r){return 0===t.n&&0===r.n?new e.Fraction(0):t.mul(r).abs().div(t.gcd(r))},"Matrix, Matrix":function(e,t){var r;switch(e.storage()){case"sparse":switch(t.storage()){case"sparse":r=c(e,t,m);break;default:r=u(t,e,m,!0)}break;default:switch(t.storage()){case"sparse":r=u(e,t,m,!1);break;default:r=l(e,t,m)}}return r},"Array, Array":function(e,t){return m(s(e),s(t)).valueOf()},"Array, Matrix":function(e,t){return m(s(e),t)},"Matrix, Array":function(e,t){return m(e,s(t))},"Matrix, number | BigNumber":function(e,t){var r;switch(e.storage()){case"sparse":r=f(e,t,m,!1);break;default:r=p(e,t,m,!1)}return r},"number | BigNumber, Matrix":function(e,t){var r;switch(t.storage()){case"sparse":r=f(t,e,m,!0);break;default:r=p(t,e,m,!0)}return r},"Array, number | BigNumber":function(e,t){return p(s(e),t,m,!1).valueOf()},"number | BigNumber, Array":function(e,t){return p(s(t),e,m,!0).valueOf()},"Array | Matrix | number | BigNumber, Array | Matrix | number | BigNumber, ...Array | Matrix | number | BigNumber":function(e,t,r){for(var n=m(e,t),i=0;iO;O++){N[O]=w.length;var T=O+1;if(i(e,O,A,M,_,T,E,x),i(t,O,A,M,_,T,E,x),M)for(var C=N[O];Cl;l++)h=v[l],r[h]!==a?(r[h]=a,y.push(h),c?(n[h]=u?s(g[l],f):s(f,g[l]),i[h]=a):n[h]=g[l]):(n[h]=u?s(g[l],n[h]):s(n[h],g[l]),i[h]=a);else for(p=d[t],m=d[t+1],l=p;m>l;l++)h=v[l],r[h]!==a?(r[h]=a,y.push(h)):i[h]=a}},function(e,t,r){"use strict";function n(e,t,r,n){function a(t){return new e.Complex(Math.log(Math.sqrt(t.re*t.re+t.im*t.im))/Math.LN10,Math.atan2(t.im,t.re)/Math.LN10)}var o=n("log10",{number:function(r){return r>=0||t.predictable?Math.log(r)/Math.LN10:o(new 
e.Complex(r,0))},Complex:a,BigNumber:function(r){return!r.isNegative()||t.predictable?r.log():a(new e.Complex(r.toNumber(),0))},"Array | Matrix":function(e){return i(e,o)}});return o.toTex="\\log_{10}\\left(${args[0]}\\right)",o}var i=r(19);t.name="log10",t.factory=n},function(e,t,r){"use strict";function n(e,t,n,i){function a(e,t){if(t>0)return e-t*Math.floor(e/t);if(0===t)return e;throw new Error("Cannot calculate mod for a negative divisor")}var o=n(r(50)),s=r(29),u=n(r(354)),c=n(r(59)),f=n(r(76)),l=n(r(84)),p=n(r(61)),m=n(r(55)),h=n(r(56)),g=i("mod",{"number, number":a,"BigNumber, BigNumber":function(e,t){return t.isZero()?e:e.mod(t)},"Fraction, Fraction":function(e,t){return e.mod(t)},"Matrix, Matrix":function(e,t){var r;switch(e.storage()){case"sparse":switch(t.storage()){case"sparse":r=f(e,t,g,!1);break;default:r=u(t,e,g,!0)}break;default:switch(t.storage()){case"sparse":r=c(e,t,g,!1);break;default:r=m(e,t,g)}}return r},"Array, Array":function(e,t){return g(o(e),o(t)).valueOf()},"Array, Matrix":function(e,t){return g(o(e),t)},"Matrix, Array":function(e,t){return g(e,o(t))},"Matrix, any":function(e,t){var r;switch(e.storage()){case"sparse":r=l(e,t,g,!1);break;default:r=h(e,t,g,!1)}return r},"any, Matrix":function(e,t){var r;switch(t.storage()){case"sparse":r=p(t,e,g,!0);break;default:r=h(t,e,g,!0)}return r},"Array, any":function(e,t){return h(o(e),t,g,!1).valueOf()},"any, Array":function(e,t){return h(o(t),e,g,!0).valueOf()}});return g.toTex="\\left(${args[0]}"+s.operators.mod+"${args[1]}\\right)",g}t.name="mod",t.factory=n},function(e,t,r){"use strict";function n(e,t,n,i){function a(e,t){var r=e.size();if(1==r.length){if(t===Number.POSITIVE_INFINITY||"inf"===t){var n=0;return e.forEach(function(e){var t=o(e);p(t,n)&&(n=t)},!0),n}if(t===Number.NEGATIVE_INFINITY||"-inf"===t){var i;return e.forEach(function(e){var t=o(e);(!i||m(t,i))&&(i=t)},!0),i||0}if("fro"===t)return a(e,2);if("number"==typeof t&&!isNaN(t)){if(!l(t,0)){var h=0;return e.forEach(function(e){h=s(u(o(e),t),h)},!0),u(h,1/t)}return Number.POSITIVE_INFINITY}throw new Error("Unsupported parameter value")}if(2==r.length){if(1===t){var d=[],y=0;return e.forEach(function(e,t){var r=t[1],n=s(d[r]||0,o(e));p(n,y)&&(y=n),d[r]=n},!0),y}if(t===Number.POSITIVE_INFINITY||"inf"===t){var x=[],b=0;return e.forEach(function(e,t){var r=t[0],n=s(x[r]||0,o(e));p(n,b)&&(b=n),x[r]=n},!0),b}if("fro"===t)return c(g(f(v(e),e)));if(2===t)throw new Error("Unsupported parameter value, missing implementation of matrix singular value decomposition");throw new Error("Unsupported parameter value")}}var o=n(r(85)),s=n(r(49)),u=n(r(79)),c=n(r(362)),f=n(r(83)),l=n(r(47)),p=n(r(62)),m=n(r(58)),h=n(r(50)),g=n(r(370)),v=n(r(329)),d=i.find(o,["Complex"]),y=i("norm",{number:Math.abs,Complex:d,BigNumber:function(e){return e.abs()},"boolean | null":function(e){return Math.abs(e)},Array:function(e){return a(h(e),2)},Matrix:function(e){return a(e,2)},"number | Complex | BigNumber | boolean | null, number | BigNumber | string":function(e){return y(e)},"Array, number | BigNumber | string":function(e,t){return a(h(e),t)},"Matrix, number | BigNumber | string":function(e,t){return a(e,t)}});return y.toTex={1:"\\left\\|${args[0]}\\right\\|",2:"\\mathrm{${name}}\\left(${args}\\right)"},y}t.name="norm",t.factory=n},function(e,t,r){"use strict";function n(e,t,n,o){var s=n(r(50)),u=n(r(49)),c=o("trace",{Array:function(e){return c(s(e))},Matrix:function(e){var t;switch(e.storage()){case"dense":t=f(e);break;case"sparse":t=l(e)}return t},any:i}),f=function(e){var 
t=e._size,r=e._data;switch(t.length){case 1:if(1==t[0])return i(r[0]);throw new RangeError("Matrix must be square (size: "+a(t)+")");case 2:var n=t[0],o=t[1];if(n===o){for(var s=0,c=0;n>c;c++)s=u(s,r[c][c]);return s}throw new RangeError("Matrix must be square (size: "+a(t)+")");default:throw new RangeError("Matrix must be two dimensional (size: "+a(t)+")")}},l=function(e){var t=e._values,r=e._index,n=e._ptr,i=e._size,o=i[0],s=i[1];if(o===s){var c=0;if(t.length>0)for(var f=0;s>f;f++)for(var l=n[f],p=n[f+1],m=l;p>m;m++){var h=r[m];if(h===f){c=u(c,t[m]);break}if(h>f)break}return c}throw new RangeError("Matrix must be square (size: "+a(i)+")")};return c.toTex="\\mathrm{tr}\\left(${args[0]}\\right)",c}var i=r(3).clone,a=r(23).format;t.name="trace",t.factory=n},function(e,t,r){"use strict";function n(e,t,n,o){function s(t,r){var n=e.BigNumber.precision,i=e.BigNumber.constructor({precision:n+2}),a=new e.BigNumber(0),o=new i(1),s=r.isNegative();if(s&&(r=r.neg()),r.isZero())throw new Error("Root must be non-zero");if(t.isNegative()&&!r.abs().mod(2).equals(1))throw new Error("Root must be odd when a is negative.");if(t.isZero())return a;if(!t.isFinite())return s?a:t;var u=t.abs().pow(o.div(r));return u=t.isNeg()?u.neg():u,new e.BigNumber((s?o.div(u):u).toPrecision(n))}var u=n(r(50)),c=n(r(52)),f=n(r(354)),l=n(r(365)),p=n(r(84)),m=n(r(55)),h=n(r(56)),g=o("nthRoot",{number:function(e){return i(e,2)},"number, number":i,BigNumber:function(t){return s(t,new e.BigNumber(2))},Complex:function(e){return a(e,2)},"Complex, number":a,"BigNumber, BigNumber":s,"Array | Matrix":function(e){return g(e,2)},"Matrix, Matrix":function(e,t){var r;switch(e.storage()){case"sparse":switch(t.storage()){case"sparse":if(1!==t.density())throw new Error("Root must be non-zero");r=l(e,t,g);break;default:r=f(t,e,g,!0)}break;default:switch(t.storage()){case"sparse":if(1!==t.density())throw new Error("Root must be non-zero");r=c(e,t,g,!1);break;default:r=m(e,t,g)}}return r},"Array, Array":function(e,t){return g(u(e),u(t)).valueOf()},"Array, Matrix":function(e,t){return g(u(e),t)},"Matrix, Array":function(e,t){return g(e,u(t))},"Matrix, number | BigNumber":function(e,t){var r;switch(e.storage()){case"sparse":r=p(e,t,g,!1);break;default:r=h(e,t,g,!1)}return r},"number | BigNumber, Matrix":function(e,t){var r;switch(t.storage()){case"sparse":if(1!==t.density())throw new Error("Root must be non-zero");r=p(t,e,g,!0);break;default:r=h(t,e,g,!0)}return r},"Array, number | BigNumber":function(e,t){return g(u(e),t).valueOf()},"number | BigNumber, Array":function(e,t){return g(e,u(t)).valueOf()}});return g.toTex="\\sqrt[${args[1]}]{${args[0]}}",g}function i(e,t){var r=0>t;if(r&&(t=-t),0===t)throw new Error("Root must be non-zero");if(0>e&&Math.abs(t)%2!=1)throw new Error("Root must be odd when a is negative.");if(0==e)return 0;if(!isFinite(e))return r?0:e;var n=Math.pow(Math.abs(e),1/t);return n=0>e?-n:n,r?1/n:n}function a(e,t){if(0>t)throw new Error("Root must be greater than zero");if(0===t)throw new Error("Root must be non-zero");if(t%1!==0)throw new Error("Root must be an integer");for(var r=e.toPolar(),n=[],i=Math.pow(r.r,1/t),a=0;t>a;a++)n.push({r:i,phi:(r.phi+2*Math.PI*a)/t});return n}t.name="nthRoot",t.factory=n},function(e,t,r){"use strict";function n(e,t,n,o){var c=n(r(50)),f=n(r(47)),l=n(r(373)),p=n(r(84)),m=n(r(61)),h=n(r(56)),g=o("round",{number:Math.round,"number, number":function(e,t){if(!a(t))throw new TypeError(u);if(0>t||t>15)throw new Error("Number of decimals in function round must be in te range of 0-15");return 
i(e,t)},Complex:function(t){return new e.Complex(Math.round(t.re),Math.round(t.im))},"Complex, number":function(t,r){return new e.Complex(i(t.re,r),i(t.im,r))},"Complex, BigNumber":function(t,r){if(!r.isInteger())throw new TypeError(u);var n=r.toNumber();return new e.Complex(i(t.re,n),i(t.im,n))},"number, BigNumber":function(t,r){if(!r.isInteger())throw new TypeError(u);return new e.BigNumber(t).toDecimalPlaces(r.toNumber())},BigNumber:function(e){return e.toDecimalPlaces(0)},"BigNumber, BigNumber":function(e,t){if(!t.isInteger())throw new TypeError(u);return e.toDecimalPlaces(t.toNumber())},Fraction:function(e){return e.round()},"Array | Matrix":function(e){return s(e,g,!0)},"Matrix, number | BigNumber":function(e,t){var r;switch(e.storage()){case"sparse":r=p(e,t,g,!1);break;default:r=h(e,t,g,!1)}return r},"number | Complex | BigNumber, Matrix":function(e,t){if(!f(e,0)){var r;switch(t.storage()){case"sparse":r=m(t,e,g,!0);break;default:r=h(t,e,g,!0)}return r}return l(t.size(),t.storage())},"Array, number | BigNumber":function(e,t){return h(c(e),t,g,!1).valueOf()},"number | Complex | BigNumber, Array":function(e,t){return h(c(t),e,g,!0).valueOf()}});return g.toTex={1:"\\left\\lfloor${args[0]}\\right\\rceil",2:"\\mathrm{${name}}\\left(${args}\\right)"},g}function i(e,t){return parseFloat(o(e,t))}var a=r(6).isInteger,o=r(6).toFixed,s=r(19),u="Number of decimals in function round must be an integer";t.name="round",t.factory=n},function(e,t,r){"use strict";function n(e,t,n,o){function s(t,r){var n=u(t),i=n?new e.BigNumber(0):0;if(c(t),r){var o=f(r);return t.length>0?o.resize(t,i):o}var s=[];return t.length>0?a(s,t,i):s}function u(e){var t=!1;return e.forEach(function(e,r,n){e&&e.isBigNumber===!0&&(t=!0,n[r]=e.toNumber())}),t}function c(e){e.forEach(function(e){if("number"!=typeof e||!i(e)||0>e)throw new Error("Parameters in function zeros must be positive integers")})}var f=n(r(50)),l=o("zeros",{"":function(){return"array"===t.matrix?s([]):s([],"default")},"...number | BigNumber | string":function(e){var r=e[e.length-1];if("string"==typeof r){var n=e.pop();return s(e,n)}return"array"===t.matrix?s(e):s(e,"default")},Array:s,Matrix:function(e){var t=e.storage();return s(e.valueOf(),t)},"Array | Matrix, string":function(e,t){return s(e.valueOf(),t)}});return l.toTex="\\mathrm{${name}}\\left(${args}\\right)",l}var i=r(6).isInteger,a=r(39).resize;t.name="zeros",t.factory=n},function(e,t,r){"use strict";function n(e,t,r,n){var o=n("sign",{number:i.sign,Complex:function(t){var r=Math.sqrt(t.re*t.re+t.im*t.im);return new e.Complex(t.re/r,t.im/r)},BigNumber:function(t){return new e.BigNumber(t.cmp(0))},Fraction:function(t){return new e.Fraction(t.s)},"Array | Matrix":function(e){return a(e,o,!0)},Unit:function(e){return o(e.value)}});return o.toTex="\\mathrm{${name}}\\left(${args}\\right)",o}var i=r(6),a=r(19);t.name="sign",t.factory=n},function(e,t,r){"use strict";function n(e,t,r,n){var a=n("square",{number:function(e){return e*e},Complex:function(t){return new e.Complex(t.re*t.re-t.im*t.im,t.re*t.im+t.im*t.re)},BigNumber:function(e){return e.times(e)},Fraction:function(e){return e.mul(e)},"Array | Matrix":function(e){return i(e,a,!0)},Unit:function(e){return e.pow(2)}});return a.toTex="\\left(${args[0]}\\right)^2",a}var i=r(19);t.name="square",t.factory=n},function(e,t,r){"use strict";function n(e,t,n,a){var o=r(29),s=a("unaryPlus",{number:function(e){return e},Complex:function(e){return e.clone()},BigNumber:function(e){return e},Fraction:function(e){return e},Unit:function(e){return 
e.clone()},"Array | Matrix":function(e){return i(e,s,!0)},"boolean | string | null":function(r){return"bignumber"==t.number?new e.BigNumber(+r):+r}});return s.toTex=o.operators.unaryPlus+"\\left(${args[0]}\\right)",s}var i=r(19);t.name="unaryPlus",t.factory=n},function(e,t,r){"use strict";function n(e,t,n,a){function o(e,r){var n,a,o,s=0,c=1,f=1,l=0;if(!i(e)||!i(r))throw new Error("Parameters in function xgcd must be integer numbers");for(;r;)a=Math.floor(e/r),o=e%r,n=s,s=c-a*s,c=n,n=f,f=l-a*f,l=n,e=r,r=o;var p;return p=0>e?[-e,-c,-l]:[e,e?c:0,l],"array"===t.matrix?p:u(p)}function s(r,n){var i,a,o,s=new e.BigNumber(0),c=new e.BigNumber(0),f=new e.BigNumber(1),l=new e.BigNumber(1),p=new e.BigNumber(0);if(!r.isInt()||!n.isInt())throw new Error("Parameters in function xgcd must be integer numbers");for(;!n.isZero();)a=r.div(n).floor(),o=r.mod(n),i=c,c=f.minus(a.times(c)),f=i,i=l,l=p.minus(a.times(l)),p=i,r=n,n=o;var m;return m=r.lt(s)?[r.neg(),f.neg(),p.neg()]:[r,r.isZero()?0:f,p],"array"===t.matrix?m:u(m)}var u=n(r(50)),c=a("xgcd",{"number, number":o,"BigNumber, BigNumber":s});return c.toTex="\\mathrm{${name}}\\left(${args}\\right)",c}var i=r(6).isInteger;t.name="xgcd",t.factory=n},function(e,t,r){e.exports=[r(379),r(383),r(384),r(386),r(388),r(391),r(393)]},function(e,t,r){"use strict";function n(e,t,n,o){var s=r(29),u=n(r(50)),c=n(r(354)),f=n(r(365)),l=n(r(84)),p=n(r(55)),m=n(r(56)),h=o("bitAnd",{"number, number":function(e,t){if(!i(e)||!i(t))throw new Error("Integers expected in function bitAnd");return e&t},"BigNumber, BigNumber":a,"Matrix, Matrix":function(e,t){var r;switch(e.storage()){case"sparse":switch(t.storage()){case"sparse":r=f(e,t,h,!1);break;default:r=c(t,e,h,!0)}break;default:switch(t.storage()){case"sparse":r=c(e,t,h,!1);break;default:r=p(e,t,h)}}return r},"Array, Array":function(e,t){return h(u(e),u(t)).valueOf()},"Array, Matrix":function(e,t){return h(u(e),t)},"Matrix, Array":function(e,t){return h(e,u(t))},"Matrix, any":function(e,t){var r;switch(e.storage()){case"sparse":r=l(e,t,h,!1);break;default:r=m(e,t,h,!1)}return r},"any, Matrix":function(e,t){var r;switch(t.storage()){case"sparse":r=l(t,e,h,!0);break;default:r=m(t,e,h,!0)}return r},"Array, any":function(e,t){return m(u(e),t,h,!1).valueOf()},"any, Array":function(e,t){return m(u(t),e,h,!0).valueOf()}});return h.toTex="\\left(${args[0]}"+s.operators.bitAnd+"${args[1]}\\right)",h}var i=r(6).isInteger,a=r(380);t.name="bitAnd",t.factory=n},function(e,t,r){var n=r(381);e.exports=function(e,t){if(e.isFinite()&&!e.isInteger()||t.isFinite()&&!t.isInteger())throw new Error("Integers expected in function bitAnd");var r=e.constructor;if(e.isNaN()||t.isNaN())return new r(NaN);if(e.isZero()||t.eq(-1)||e.eq(t))return e;if(t.isZero()||e.eq(-1))return t;if(!e.isFinite()||!t.isFinite()){if(!e.isFinite()&&!t.isFinite())return e.isNegative()==t.isNegative()?e:new r(0);if(!e.isFinite())return t.isNegative()?e:e.isNegative()?new r(0):t;if(!t.isFinite())return e.isNegative()?t:t.isNegative()?new r(0):e}return n(e,t,function(e,t){return e&t})}},function(e,t,r){function n(e){for(var t=e.c,r=t[0]+"",n=1;n0)if(++s>c)for(s-=c;s--;u+="0");else c>s&&(u=u.slice(0,s)+"."+u.slice(s));for(var f=[0],n=0;n1&&(null==f[o+1]&&(f[o+1]=0),f[o+1]+=f[o]>>1,f[o]&=1)}return f.reverse()}var i=r(382);e.exports=function(e,t,r){var a,o,s=e.constructor,u=+(e.s<0),c=+(t.s<0);if(u){a=n(i(e));for(var f=0;f0;)r(l[--h],p[--g])==v&&(d=d.plus(y)),y=y.times(x);for(;g>0;)r(m,p[--g])==v&&(d=d.plus(y)),y=y.times(x);return 
s.config({precision:b}),0==v&&(d.s=-d.s),d}},function(e,t){e.exports=function(e){if(e.isFinite()&&!e.isInteger())throw new Error("Integer expected in function bitNot");var t=e.constructor,r=t.precision;t.config({precision:1e9});var e=e.plus(t.ONE);return e.s=-e.s||null,t.config({precision:r}),e}},function(e,t,r){"use strict";function n(e,t,n,s){var u=r(29),c=s("bitNot",{number:function(e){if(!o(e))throw new Error("Integer expected in function bitNot");return~e},BigNumber:a,"Array | Matrix":function(e){return i(e,c)}});return c.toTex=u.operators.bitNot+"\\left(${args[0]}\\right)",c}var i=r(19),a=r(382),o=r(6).isInteger;t.name="bitNot",t.factory=n},function(e,t,r){"use strict";function n(e,t,n,o){var s=r(29),u=n(r(50)),c=n(r(52)),f=n(r(53)),l=n(r(54)),p=n(r(55)),m=n(r(56)),h=o("bitOr",{"number, number":function(e,t){if(!i(e)||!i(t))throw new Error("Integers expected in function bitOr");return e|t},"BigNumber, BigNumber":a,"Matrix, Matrix":function(e,t){var r;switch(e.storage()){case"sparse":switch(t.storage()){case"sparse":r=f(e,t,h);break;default:r=c(t,e,h,!0)}break;default:switch(t.storage()){case"sparse":r=c(e,t,h,!1);break;default:r=p(e,t,h)}}return r},"Array, Array":function(e,t){return h(u(e),u(t)).valueOf()},"Array, Matrix":function(e,t){return h(u(e),t)},"Matrix, Array":function(e,t){return h(e,u(t))},"Matrix, any":function(e,t){var r;switch(e.storage()){case"sparse":r=l(e,t,h,!1);break;default:r=m(e,t,h,!1)}return r},"any, Matrix":function(e,t){var r;switch(t.storage()){case"sparse":r=l(t,e,h,!0);break;default:r=m(t,e,h,!0)}return r},"Array, any":function(e,t){return m(u(e),t,h,!1).valueOf()},"any, Array":function(e,t){return m(u(t),e,h,!0).valueOf()}});return h.toTex="\\left(${args[0]}"+s.operators.bitOr+"${args[1]}\\right)",h}var i=r(6).isInteger,a=r(385);t.name="bitOr",t.factory=n},function(e,t,r){var n=r(381);e.exports=function(e,t){if(e.isFinite()&&!e.isInteger()||t.isFinite()&&!t.isInteger())throw new Error("Integers expected in function bitOr");var r=e.constructor;if(e.isNaN()||t.isNaN())return new r(NaN);var i=new r(-1);return e.isZero()||t.eq(i)||e.eq(t)?t:t.isZero()||e.eq(i)?e:e.isFinite()&&t.isFinite()?n(e,t,function(e,t){return e|t}):!e.isFinite()&&!e.isNegative()&&t.isNegative()||e.isNegative()&&!t.isNegative()&&!t.isFinite()?i:e.isNegative()&&t.isNegative()?e.isFinite()?e:t:e.isFinite()?t:e}},function(e,t,r){"use strict";function n(e,t,n,o){var s=r(29),u=n(r(50)),c=n(r(59)),f=n(r(60)),l=n(r(61)),p=n(r(55)),m=n(r(56)),h=o("bitXor",{"number, number":function(e,t){if(!i(e)||!i(t))throw new Error("Integers expected in function bitXor");return e^t},"BigNumber, BigNumber":a,"Matrix, Matrix":function(e,t){var r;switch(e.storage()){case"sparse":switch(t.storage()){case"sparse":r=f(e,t,h);break;default:r=c(t,e,h,!0)}break;default:switch(t.storage()){case"sparse":r=c(e,t,h,!1);break;default:r=p(e,t,h)}}return r},"Array, Array":function(e,t){return h(u(e),u(t)).valueOf()},"Array, Matrix":function(e,t){return h(u(e),t)},"Matrix, Array":function(e,t){return h(e,u(t))},"Matrix, any":function(e,t){var r;switch(e.storage()){case"sparse":r=l(e,t,h,!1);break;default:r=m(e,t,h,!1)}return r},"any, Matrix":function(e,t){var r;switch(t.storage()){case"sparse":r=l(t,e,h,!0);break;default:r=m(t,e,h,!0)}return r},"Array, any":function(e,t){return m(u(e),t,h,!1).valueOf()},"any, Array":function(e,t){return m(u(t),e,h,!0).valueOf()}});return h.toTex="\\left(${args[0]}"+s.operators.bitXor+"${args[1]}\\right)",h}var i=r(6).isInteger,a=r(387);t.name="bitXor",t.factory=n},function(e,t,r){var 
n=r(381),i=r(382);e.exports=function(e,t){if(e.isFinite()&&!e.isInteger()||t.isFinite()&&!t.isInteger())throw new Error("Integers expected in function bitXor");var r=e.constructor;if(e.isNaN()||t.isNaN())return new r(NaN);if(e.isZero())return t;if(t.isZero())return e;if(e.eq(t))return new r(0);var a=new r(-1);return e.eq(a)?i(t):t.eq(a)?i(e):e.isFinite()&&t.isFinite()?n(e,t,function(e,t){return e^t}):e.isFinite()||t.isFinite()?new r(e.isNegative()==t.isNegative()?1/0:-(1/0)):a}},function(e,t,r){"use strict";function n(e,t,n,o){var s=r(29),u=n(r(50)),c=n(r(47)),f=n(r(373)),l=n(r(52)),p=n(r(354)),m=n(r(390)),h=n(r(54)),g=n(r(84)),v=n(r(55)),d=n(r(56)),y=o("leftShift",{"number, number":function(e,t){if(!i(e)||!i(t))throw new Error("Integers expected in function leftShift");return e<k;k++){C[k]=T.length;var I=k+1;for(M=c[k],A=c[k+1],E=M;A>E;E++)_=u[E],B[_]=I,z[_]=n[E],T.push(_);for(M=h[k],A=h[k+1],E=M;A>E;E++)_=m[E],B[_]===I&&(z[_]=N(z[_],p[E]));for(E=C[k];E>t},"BigNumber, BigNumber":a,"Matrix, Matrix":function(e,t){var r;switch(e.storage()){case"sparse":switch(t.storage()){case"sparse":r=m(e,t,y,!1);break;default:r=p(t,e,y,!0)}break;default:switch(t.storage()){case"sparse":r=l(e,t,y,!1);break;default:r=v(e,t,y)}}return r},"Array, Array":function(e,t){return y(u(e),u(t)).valueOf()},"Array, Matrix":function(e,t){return y(u(e),t)},"Matrix, Array":function(e,t){return y(e,u(t))},"Matrix, number | BigNumber":function(e,t){if(!c(t,0)){var r;switch(e.storage()){case"sparse":r=g(e,t,y,!1);break;default:r=d(e,t,y,!1)}return r}return e.clone()},"number | BigNumber, Matrix":function(e,t){if(!c(e,0)){var r;switch(t.storage()){case"sparse":r=h(t,e,y,!0);break;default:r=d(t,e,y,!0)}return r}return f(t.size(),t.storage())},"Array, number | BigNumber":function(e,t){return y(u(e),t).valueOf()},"number | BigNumber, Array":function(e,t){return y(e,u(t)).valueOf()}});return y.toTex="\\left(${args[0]}"+s.operators.rightArithShift+"${args[1]}\\right)",y}var i=r(6).isInteger,a=r(392);t.name="rightArithShift",t.factory=n},function(e,t){e.exports=function(e,t){if(e.isFinite()&&!e.isInteger()||t.isFinite()&&!t.isInteger())throw new Error("Integers expected in function rightArithShift");var r=e.constructor;return e.isNaN()||t.isNaN()||t.isNegative()&&!t.isZero()?new r(NaN):e.isZero()||t.isZero()?e:t.isFinite()?t.lt(55)?e.div(Math.pow(2,t.toNumber())+"").floor():e.div(new r(2).pow(t)).floor():new r(e.isNegative()?-1:e.isFinite()?0:NaN)}},function(e,t,r){"use strict";function n(e,t,n,a){var o=r(29),s=n(r(50)),u=n(r(47)),c=n(r(373)),f=n(r(52)),l=n(r(354)),p=n(r(390)),m=n(r(54)),h=n(r(84)),g=n(r(55)),v=n(r(56)),d=a("rightLogShift",{"number, number":function(e,t){if(!i(e)||!i(t))throw new Error("Integers expected in function rightLogShift");return e>>>t},"Matrix, Matrix":function(e,t){var r;switch(e.storage()){case"sparse":switch(t.storage()){case"sparse":r=p(e,t,d,!1);break;default:r=l(t,e,d,!0)}break;default:switch(t.storage()){case"sparse":r=f(e,t,d,!1);break;default:r=g(e,t,d)}}return r},"Array, Array":function(e,t){return d(s(e),s(t)).valueOf()},"Array, Matrix":function(e,t){return d(s(e),t)},"Matrix, Array":function(e,t){return d(e,s(t))},"Matrix, number | BigNumber":function(e,t){if(!u(t,0)){var r;switch(e.storage()){case"sparse":r=h(e,t,d,!1);break;default:r=v(e,t,d,!1)}return r}return e.clone()},"number | BigNumber, Matrix":function(e,t){if(!u(e,0)){var r;switch(t.storage()){case"sparse":r=m(t,e,d,!0);break;default:r=v(t,e,d,!0)}return r}return c(t.size(),t.storage())},"Array, number | 
BigNumber":function(e,t){return d(s(e),t).valueOf()},"number | BigNumber, Array":function(e,t){return d(e,s(t)).valueOf()}});return d.toTex="\\left(${args[0]}"+o.operators.rightLogShift+"${args[1]}\\right)",d}var i=r(6).isInteger;t.name="rightLogShift",t.factory=n},function(e,t,r){e.exports=[r(395),r(401),r(396),r(402)]},function(e,t,r){"use strict";function n(e,t,n,i){var a=n(r(49)),o=n(r(396)),s=n(r(350)),u=n(r(400)),c=i("bellNumbers",{"number | BigNumber":function(e){if(!u(e)||s(e))throw new TypeError("Non-negative integer value expected in function bellNumbers");for(var t=0,r=0;e>=r;r++)t=a(t,o(e,r));return t}});return c.toTex="\\mathrm{B}_{${args[0]}}",c}t.name="bellNumbers",t.factory=n},function(e,t,r){"use strict";function n(e,t,n,i){var a=n(r(49)),o=n(r(74)),s=n(r(83)),u=n(r(310)),c=n(r(79)),f=n(r(397)),l=n(r(399)),p=n(r(350)),m=n(r(400)),h=n(r(62)),g=i("stirlingS2",{"number | BigNumber, number | BigNumber":function(e,t){if(!m(e)||p(e)||!m(t)||p(t))throw new TypeError("Non-negative integer value expected in function stirlingS2");if(h(t,e))throw new TypeError("k must be less than or equal to n in function stirlingS2");for(var r=f(t),n=0,i=0;t>=i;i++){var g=c(-1,o(t,i)),v=l(t,i),d=c(i,e);n=a(n,s(s(v,d),g))}return u(n,r)}});return g.toTex="\\mathrm{S}\\left(${args[0]},${args[1]}\\right)",g}t.name="stirlingS2",t.factory=n},function(e,t,r){"use strict";function n(e,t,n,a){var o=n(r(398)),s=r(29),u=a("factorial",{number:function(e){if(0>e)throw new Error("Value must be non-negative");return o(e+1)},BigNumber:function(e){if(e.isNegative())throw new Error("Value must be non-negative");return o(e.plus(1))},"Array | Matrix":function(e){return i(e,u)}});return u.toTex="\\left(${args[0]}\\right)"+s.operators.factorial,u}var i=r(19);r(93);t.name="factorial",t.factory=n},function(e,t,r){"use strict";function n(e,t,n,u){function c(r){if(r.isZero())return new e.BigNumber(1);for(var n=t.precision+(0|Math.log(r.toNumber())),i=e.BigNumber.constructor({precision:n}),a=new i(r),o=r.toNumber()-1;o>1;)a=a.times(o),o--;return new e.BigNumber(a.toPrecision(e.BigNumber.precision))}var f=n(r(83)),l=n(r(79)),p=u("gamma",{number:function(e){var t,r;if(a(e)){if(0>=e)return isFinite(e)?1/0:NaN;if(e>171)return 1/0;for(var n=e-2,i=e-1;n>1;)i*=n,n--;return 0==i&&(i=1),i}if(.5>e)return Math.PI/(Math.sin(Math.PI*e)*p(1-e));if(e>=171.35)return 1/0;if(e>85){var u=e*e,c=u*e,f=c*e,l=f*e;return Math.sqrt(2*Math.PI/e)*Math.pow(e/Math.E,e)*(1+1/(12*e)+1/(288*u)-139/(51840*c)-571/(2488320*f)+163879/(209018880*l)+5246819/(75246796800*l*e)); -}--e,r=s[0];for(var m=1;me)throw new TypeError("Positive integer value expected in function combinations");if(!a(t)||0>t)throw new TypeError("Positive integer value expected in function combinations");if(t>e)throw new TypeError("k must be less than or equal to n");for(r=Math.max(t,e-t),n=1,i=1;e-r>=i;i++)n=n*(r+i)/i;return n},"BigNumber, BigNumber":function(t,r){var n,a,o,s,u=new e.BigNumber(1);if(!i(t)||!i(r))throw new TypeError("Positive integer value expected in function combinations");if(r.gt(t))throw new TypeError("k must be less than n in function combinations");for(n=t.minus(r),r.lt(n)&&(n=r),a=u,o=u,s=t.minus(n);o.lte(s);o=o.plus(1))a=a.times(n.plus(o)).dividedBy(o);return a}});return o.toTex="\\binom{${args[0]}}{${args[1]}}",o}function i(e){return e.isInteger()&&e.gte(0)}var a=r(6).isInteger;t.name="combinations",t.factory=n},function(e,t,r){"use strict";function n(e,t,r,n){var o=n("isInteger",{number:a.isInteger,BigNumber:function(e){return e.isInt()},Fraction:function(e){return 
1===e.d&&isFinite(e.n)},"Array | Matrix":function(e){return i(e,o)}});return o}var i=r(19),a=r(6);t.name="isInteger",t.factory=n},function(e,t,r){"use strict";function n(e,t,n,i){var a=n(r(399)),o=n(r(51)),s=n(r(363)),u=n(r(400)),c=n(r(62)),f=i("composition",{"number | BigNumber, number | BigNumber":function(e,t){if(!(u(e)&&s(e)&&u(t)&&s(t)))throw new TypeError("Positive integer value expected in function composition");if(c(t,e))throw new TypeError("k must be less than or equal to n in function composition");return a(o(e,-1),o(t,-1))}});return f.toTex="\\mathrm{${name}}\\left(${args}\\right)",f}t.name="composition",t.factory=n},function(e,t,r){"use strict";function n(e,t,n,i){var a=n(r(49)),o=n(r(310)),s=n(r(83)),u=n(r(399)),c=n(r(350)),f=n(r(400)),l=i("catalan",{"number | BigNumber":function(e){if(!f(e)||c(e))throw new TypeError("Non-negative integer value expected in function catalan");return o(u(s(e,2),e),a(e,1))}});return l.toTex="\\mathrm{C}_{${args[0]}}",l}t.name="catalan",t.factory=n},function(e,t,r){e.exports=[r(404),r(405),r(406),r(407)]},function(e,t,r){"use strict";function n(e,t,r,n){var a=n("arg",{number:function(e){return Math.atan2(0,e)},Complex:function(e){return Math.atan2(e.im,e.re)},"Array | Matrix":function(e){return i(e,a)}});return a.toTex="\\arg\\left(${args[0]}\\right)",a}var i=r(19);t.name="arg",t.factory=n},function(e,t,r){"use strict";function n(e,t,r,n){var a=n("conj",{number:function(e){return e},BigNumber:function(e){return e},Complex:function(t){return new e.Complex(t.re,-t.im)},"Array | Matrix":function(e){return i(e,a)}});return a.toTex="\\left(${args[0]}\\right)^*",a}var i=r(19);t.name="conj",t.factory=n},function(e,t,r){"use strict";function n(e,t,r,n){var a=n("im",{number:function(e){return 0},BigNumber:function(t){return new e.BigNumber(0)},Complex:function(e){return e.im},"Array | Matrix":function(e){return i(e,a)}});return a.toTex="\\Im\\left\\lbrace${args[0]}\\right\\rbrace",a}var i=r(19);t.name="im",t.factory=n},function(e,t,r){"use strict";function n(e,t,r,n){var a=n("re",{number:function(e){return e},BigNumber:function(e){return e},Complex:function(e){return e.re},"Array | Matrix":function(e){return i(e,a)}});return a.toTex="\\Re\\left\\lbrace${args[0]}\\right\\rbrace",a}var i=r(19);t.name="re",t.factory=n},function(e,t,r){e.exports=[r(409),r(410)]},function(e,t,r){"use strict";function n(e,t,n,f){var l=n(r(50)),p=f("intersect",{"Array, Array, Array":function(e,t,r){if(!a(e))throw new TypeError("Array with 3 numbers expected for first argument");if(!a(t))throw new TypeError("Array with 3 numbers expected for second argument");if(!o(r))throw new TypeError("Array with 4 numbers expected as third argument");return c(e[0],e[1],e[2],t[0],t[1],t[2],r[0],r[1],r[2],r[3])},"Array, Array, Array, Array":function(e,t,r,n){if(2===e.length){if(!i(e))throw new TypeError("Array with 2 numbers expected for first argument");if(!i(t))throw new TypeError("Array with 2 numbers expected for second argument");if(!i(r))throw new TypeError("Array with 2 numbers expected for third argument");if(!i(n))throw new TypeError("Array with 2 numbers expected for fourth argument");return s(e[0],e[1],t[0],t[1],r[0],r[1],n[0],n[1])}if(3===e.length){if(!a(e))throw new TypeError("Array with 3 numbers expected for first argument");if(!a(t))throw new TypeError("Array with 3 numbers expected for second argument");if(!a(r))throw new TypeError("Array with 3 numbers expected for third argument");if(!a(n))throw new TypeError("Array with 3 numbers expected for fourth argument");return 
u(e[0],e[1],e[2],t[0],t[1],t[2],r[0],r[1],r[2],n[0],n[1],n[2])}throw new TypeError("Arrays with two or thee dimensional points expected")},"Matrix, Matrix, Matrix":function(e,t,r){return l(p(e.valueOf(),t.valueOf(),r.valueOf()))},"Matrix, Matrix, Matrix, Matrix":function(e,t,r,n){return l(p(e.valueOf(),t.valueOf(),r.valueOf(),n.valueOf()))}});return p}function i(e){return 2===e.length&&"number"==typeof e[0]&&"number"==typeof e[1]}function a(e){return 3===e.length&&"number"==typeof e[0]&&"number"==typeof e[1]&&"number"==typeof e[2]}function o(e){return 4===e.length&&"number"==typeof e[0]&&"number"==typeof e[1]&&"number"==typeof e[2]&&"number"==typeof e[3]}function s(e,t,r,n,i,a,o,s){var u=(e-i)*(o-i)+(t-a)*(s-a),c=(o-i)*(r-e)+(s-a)*(n-t),f=(e-i)*(r-e)+(t-a)*(n-t),l=(o-i)*(o-i)+(s-a)*(s-a),p=(r-e)*(r-e)+(n-t)*(n-t),m=(u*c-f*l)/(p*l-c*c),h=(u+m*c)/l,g=e+m*(r-e),v=t+m*(n-t),d=i+h*(o-i),y=a+h*(s-a);return g===d&&v===y?[g,v]:null}function u(e,t,r,n,i,a,o,s,u,c,f,l){var p=(e-o)*(c-o)+(t-s)*(f-s)+(r-u)*(l-u),m=(c-o)*(n-e)+(f-s)*(i-t)+(l-u)*(a-r),h=(e-o)*(n-e)+(t-s)*(i-t)+(r-u)*(a-r),g=(c-o)*(c-o)+(f-s)*(f-s)+(l-u)*(l-u),v=(n-e)*(n-e)+(i-t)*(i-t)+(a-r)*(a-r),d=(p*m-h*g)/(v*g-m*m),y=(p+d*m)/g,x=e+d*(n-e),b=t+d*(i-t),w=r+d*(a-r),N=o+y*(c-o),E=s+y*(f-s),M=u+y*(l-u);return x===N&&b===E&&w===M?[x,b,w]:null}function c(e,t,r,n,i,a,o,s,u,c){var f=(c-e*o-t*s-r*u)/(n*o+i*s+a*u-e-t-r),l=e+f*(n-e),p=t+f*(i-t),m=r+f*(a-r);return[l,p,m]}t.name="intersect",t.factory=n},function(e,t,r){"use strict";function n(e,t,n,s){var h=(n(r(50)),s("distance",{"Array, Array, Array":function(e,t,r){if(2==e.length&&2==t.length&&2==r.length){if(!i(e))throw new TypeError("Array with 2 numbers expected for first argument");if(!i(t))throw new TypeError("Array with 2 numbers expected for second argument");if(!i(r))throw new TypeError("Array with 2 numbers expected for third argument");var n=(r[1]-r[0])/(t[1]-t[0]),a=n*n*t[0],o=-1*(n*t[0]),s=e[1];return c(e[0],e[1],a,o,s)}throw new TypeError("Invalid Arguments: Try again")},"Object, Object, Object":function(e,t,r){if(2==Object.keys(e).length&&2==Object.keys(t).length&&2==Object.keys(r).length){if(!i(e))throw new TypeError("Values of pointX and pointY should be numbers");if(!i(t))throw new TypeError("Values of lineOnePtX and lineOnePtY should be numbers");if(!i(r))throw new TypeError("Values of lineTwoPtX and lineTwoPtY should be numbers");if(e.hasOwnProperty("pointX")&&e.hasOwnProperty("pointY")&&t.hasOwnProperty("lineOnePtX")&&t.hasOwnProperty("lineOnePtY")&&r.hasOwnProperty("lineTwoPtX")&&r.hasOwnProperty("lineTwoPtY")){var n=(r.lineTwoPtY-r.lineTwoPtX)/(t.lineOnePtY-t.lineOnePtX),a=n*n*t.lineOnePtX,o=-1*(n*t.lineOnePtX),s=e.pointX;return c(e.pointX,e.pointY,a,o,s)}throw new TypeError("Key names do not match")}throw new TypeError("Invalid Arguments: Try again")},"Array, Array":function(e,t){if(2==e.length&&3==t.length){if(!i(e))throw new TypeError("Array with 2 numbers expected for first argument");if(!a(t))throw new TypeError("Array with 3 numbers expected for second argument");return c(e[0],e[1],t[0],t[1],t[2])}if(3==e.length&&6==t.length){if(!a(e))throw new TypeError("Array with 3 numbers expected for first argument");if(!o(t))throw new TypeError("Array with 6 numbers expected for second argument");return f(e[0],e[1],e[2],t[0],t[1],t[2],t[3],t[4],t[5])}if(2==e.length&&2==t.length){if(!i(e))throw new TypeError("Array with 2 numbers expected for first argument");if(!i(t))throw new TypeError("Array with 2 numbers expected for second argument");return 
l(e[0],e[1],t[0],t[1])}if(3==e.length&&3==t.length){if(!a(e))throw new TypeError("Array with 3 numbers expected for first argument");if(!a(t))throw new TypeError("Array with 3 numbers expected for second argument");return p(e[0],e[1],e[2],t[0],t[1],t[2])}throw new TypeError("Invalid Arguments: Try again")},"Object, Object":function(e,t){if(2==Object.keys(e).length&&3==Object.keys(t).length){if(!i(e))throw new TypeError("Values of pointX and pointY should be numbers");if(!a(t))throw new TypeError("Values of xCoeffLine, yCoeffLine and constant should be numbers");if(e.hasOwnProperty("pointX")&&e.hasOwnProperty("pointY")&&t.hasOwnProperty("xCoeffLine")&&t.hasOwnProperty("yCoeffLine")&&t.hasOwnProperty("yCoeffLine"))return c(e.pointX,e.pointY,t.xCoeffLine,t.yCoeffLine,t.constant);throw new TypeError("Key names do not match")}if(3==Object.keys(e).length&&6==Object.keys(t).length){if(!a(e))throw new TypeError("Values of pointX, pointY and pointZ should be numbers");if(!o(t))throw new TypeError("Values of x0, y0, z0, a, b and c should be numbers");if(e.hasOwnProperty("pointX")&&e.hasOwnProperty("pointY")&&t.hasOwnProperty("x0")&&t.hasOwnProperty("y0")&&t.hasOwnProperty("z0")&&t.hasOwnProperty("a")&&t.hasOwnProperty("b")&&t.hasOwnProperty("c"))return f(e.pointX,e.pointY,e.pointZ,t.x0,t.y0,t.z0,t.a,t.b,t.c);throw new TypeError("Key names do not match")}if(2==Object.keys(e).length&&2==Object.keys(t).length){if(!i(e))throw new TypeError("Values of pointOneX and pointOneY should be numbers");if(!i(t))throw new TypeError("Values of pointTwoX and pointTwoY should be numbers");if(e.hasOwnProperty("pointOneX")&&e.hasOwnProperty("pointOneY")&&t.hasOwnProperty("pointTwoX")&&t.hasOwnProperty("pointTwoY"))return l(e.pointOneX,e.pointOneY,t.pointTwoX,t.pointTwoY);throw new TypeError("Key names do not match")}if(3==Object.keys(e).length&&3==Object.keys(t).length){if(!a(e))throw new TypeError("Values of pointOneX, pointOneY and pointOneZ should be numbers");if(!a(t))throw new TypeError("Values of pointTwoX, pointTwoY and pointTwoZ should be numbers");if(e.hasOwnProperty("pointOneX")&&e.hasOwnProperty("pointOneY")&&e.hasOwnProperty("pointOneZ")&&t.hasOwnProperty("pointTwoX")&&t.hasOwnProperty("pointTwoY")&&t.hasOwnProperty("pointTwoZ"))return p(e.pointOneX,e.pointOneY,e.pointOneZ,t.pointTwoX,t.pointTwoY,t.pointTwoZ);throw new TypeError("Key names do not match")}throw new TypeError("Invalid Arguments: Try again")},Array:function(e){if(!u(e))throw new TypeError("Incorrect array format entered for pairwise distance calculation");return m(e)}}));return h}function i(e){return e.constructor!==Array&&(e=s(e)),"number"==typeof e[0]&&"number"==typeof e[1]}function a(e){return e.constructor!==Array&&(e=s(e)),"number"==typeof e[0]&&"number"==typeof e[1]&&"number"==typeof e[2]}function o(e){return e.constructor!==Array&&(e=s(e)),"number"==typeof e[0]&&"number"==typeof e[1]&&"number"==typeof e[2]&&"number"==typeof e[3]&&"number"==typeof e[4]&&"number"==typeof e[5]}function s(e){for(var t=Object.keys(e),r=[],n=0;n0?t:0,a=0>t?-t:0;switch(r.length){case 1:return c(e,t,n,r[0],a,i);case 2:return f(e,t,n,r,a,i)}throw new RangeError("Matrix for function diag must be 2 dimensional")}function c(t,r,n,i,a,o){var s=[i+a,i+o],u=e.Matrix.storage(n||"dense"),c=u.diagonal(s,t,r);return null!==n?c:c.valueOf()}function f(e,t,r,n,i,o){if(e&&e.isMatrix===!0){var s=e.diagonal(t);return null!==r?r!==s.storage()?l(s,r):s:s.valueOf()}for(var u=Math.min(n[0]-i,n[1]-o),c=[],f=0;u>f;f++)c[f]=a(e[f+i][f+o]);return null!==r?l(c):c}var 
l=n(r(50)),p=s("diag",{Array:function(e){return u(e,0,i.size(e),null)},"Array, number":function(e,t){return u(e,t,i.size(e),null)},"Array, BigNumber":function(e,t){return u(e,t.toNumber(),i.size(e),null)},"Array, string":function(e,t){return u(e,0,i.size(e),t)},"Array, number, string":function(e,t,r){return u(e,t,i.size(e),r)},"Array, BigNumber, string":function(e,t,r){return u(e,t.toNumber(),i.size(e),r)},Matrix:function(e){return u(e,0,e.size(),e.storage())},"Matrix, number":function(e,t){return u(e,t,e.size(),e.storage())},"Matrix, BigNumber":function(e,t){return u(e,t.toNumber(),e.size(),e.storage())},"Matrix, string":function(e,t){return u(e,0,e.size(),t)},"Matrix, number, string":function(e,t,r){return u(e,t,e.size(),r)},"Matrix, BigNumber, string":function(e,t,r){return u(e,t.toNumber(),e.size(),r)}});return p.toTex="\\mathrm{${name}}\\left(${args}\\right)",p}var i=r(39),a=r(3).clone,o=r(6).isInteger;t.name="diag",t.factory=n},function(e,t,r){"use strict";function n(e,t,n,a){function o(e,t){var r=i(e),n=i(t),a=r[0];if(1!==r.length||1!==n.length)throw new RangeError("Vector expected");if(r[0]!=n[0])throw new RangeError("Vectors must have equal length ("+r[0]+" != "+n[0]+")");if(0==a)throw new RangeError("Cannot calculate the dot product of empty vectors");for(var o=0,c=0;a>c;c++)o=s(o,u(e[c],t[c]));return o}var s=n(r(49)),u=n(r(83)),c=a("dot",{"Matrix, Matrix":function(e,t){return o(e.toArray(),t.toArray())},"Matrix, Array":function(e,t){return o(e.toArray(),t)},"Array, Matrix":function(e,t){return o(e,t.toArray())},"Array, Array":o});return c.toTex="\\left(${args[0]}\\cdot${args[1]}\\right)",c}var i=r(39).size;t.name="dot",t.factory=n},function(e,t,r){"use strict";function n(e,t,n,o){var s=n(r(50)),u=o("flatten",{Array:function(e){return a(i(e))},Matrix:function(e){var t=a(i(e.toArray()));return s(t)}});return u.toTex="\\mathrm{${name}}\\left(${args}\\right)",u}var i=r(3).clone,a=r(39).flatten;t.name="flatten",t.factory=n},function(e,t,r){"use strict";function n(e,t,n,o){function s(t,r){var n=u(t),i=n?new e.BigNumber(1):1;if(c(t),r){var o=f(r);return t.length>0?o.resize(t,i):o}var s=[];return t.length>0?a(s,t,i):s}function u(e){var t=!1;return e.forEach(function(e,r,n){e&&e.isBigNumber===!0&&(t=!0,n[r]=e.toNumber())}),t}function c(e){e.forEach(function(e){if("number"!=typeof e||!i(e)||0>e)throw new Error("Parameters in function ones must be positive integers")})}var f=n(r(50)),l=o("ones",{"":function(){return"array"===t.matrix?s([]):s([],"default")},"...number | BigNumber | string":function(e){var r=e[e.length-1];if("string"==typeof r){var n=e.pop();return s(e,n)}return"array"===t.matrix?s(e):s(e,"default")},Array:s,Matrix:function(e){var t=e.storage();return s(e.valueOf(),t)},"Array | Matrix, string":function(e,t){return s(e.valueOf(),t)}});return l.toTex="\\mathrm{${name}}\\left(${args}\\right)",l}var i=r(6).isInteger,a=r(39).resize;t.name="ones",t.factory=n},function(e,t,r){"use strict";function n(e,t,n,f){function l(e,t,r){if(void 0!==r){if("string"!=typeof r||1!==r.length)throw new TypeError("Single character expected as defaultValue")}else r=" ";if(1!==t.length)throw new i(t.length,1);var n=t[0];if("number"!=typeof n||!o(n))throw new TypeError("Invalid size, must contain positive integers (size: "+s(t)+")");if(e.length>n)return e.substring(0,n);if(e.lengthu;u++)a+=r;return a}return e}var p=n(r(50)),m=function(e,r,n){if(2!=arguments.length&&3!=arguments.length)throw new 
a("resize",arguments.length,2,3);if(r&&r.isMatrix===!0&&(r=r.valueOf()),r.length&&r[0]&&r[0].isBigNumber===!0&&(r=r.map(function(e){return e&&e.isBigNumber===!0?e.toNumber():e})),e&&e.isMatrix===!0)return e.resize(r,n,!0);if("string"==typeof e)return l(e,r,n);var i=Array.isArray(e)?!1:"array"!==t.matrix;if(0==r.length){for(;Array.isArray(e);)e=e[0];return u(e)}Array.isArray(e)||(e=[e]),e=u(e);var o=c.resize(e,r,n);return i?p(o):o};return m.toTex="\\mathrm{${name}}\\left(${args}\\right)",m}var i=r(41),a=r(11),o=r(6).isInteger,s=r(23).format,u=r(3).clone,c=r(39);t.name="resize",t.factory=n},function(e,t,r){"use strict";function n(e,t,n,a){var o=n(r(50)),s=a("size",{Matrix:function(e){return o(e.size())},Array:i.size,string:function(e){return"array"===t.matrix?[e.length]:o([e.length])},"number | Complex | BigNumber | Unit | boolean | null":function(e){return"array"===t.matrix?[]:o([])}});return s.toTex="\\mathrm{${name}}\\left(${args}\\right)",s}var i=r(39);t.name="size",t.factory=n},function(e,t,r){"use strict";function n(e,t,n,o){var s=n(r(50)),u=o("squeeze",{Array:function(e){return a.squeeze(i.clone(e))},Matrix:function(e){var t=a.squeeze(e.toArray());return Array.isArray(t)?s(t):t},any:function(e){return i.clone(e)}});return u.toTex="\\mathrm{${name}}\\left(${args}\\right)",u}var i=r(3),a=r(39);t.name="squeeze",t.factory=n},function(e,t,r){e.exports=[r(399),r(397),r(398),r(427),r(429),r(430),r(431),r(433),r(434)]},function(e,t,r){"use strict";function n(e,t,n,i){function a(e,t){var r=t.size().length,n=e.size().length;if(r>1)throw new Error("first object must be one dimensional");if(n>1)throw new Error("second object must be one dimensional");if(r!==n)throw new Error("Length of two vectors must be equal");var i=u(e);if(0===i)throw new Error("Sum of elements in first object must be non zero");var a=u(t);if(0===a)throw new Error("Sum of elements in second object must be non zero");var o=s(e,u(e)),m=s(t,u(t)),h=u(c(o,l(f(o,m))));return p(h)?h:Number.NaN}var o=n(r(50)),s=n(r(310)),u=n(r(428)),c=n(r(83)),f=n(r(353)),l=n(r(82)),p=n(r(87)),m=i("kldivergence",{"Array, Array":function(e,t){return a(o(e),o(t))},"Matrix, Array":function(e,t){return a(e,o(t))},"Array, Matrix":function(e,t){return a(o(e),t)},"Matrix, Matrix":function(e,t){return a(e,t)}});return m}t.name="kldivergence",t.factory=n},function(e,t,r){"use strict";function n(e,t,n,a){function o(r){var n=void 0;if(i(r,function(e){n=void 0===n?e:s(n,e)}),void 0===n)switch(t.number){case"number":return 0;case"bignumber":return new e.BigNumber(0);case"fraction":return new e.Fraction(0);default:return 0}return n}var s=n(r(51)),u=a("sum",{"Array | Matrix":function(e){return o(e)},"Array | Matrix, number | BigNumber":function(){throw new Error("sum(A, dim) is not yet supported")},"...":function(){return o(arguments)}});return u.toTex="\\mathrm{${name}}\\left(${args}\\right)",u}var i=r(306);t.name="sum",t.factory=n},function(e,t,r){"use strict";function n(e,t,n,a){var o=n(r(49)),s=n(r(83)),u=n(r(310)),c=n(r(397)),f=n(r(400)),l=n(r(363));return a("multinomial",{"Array | Matrix":function(e){var t=0,r=1;return i(e,function(e){if(!f(e)||!l(e))throw new TypeError("Positive integer value expected in function multinomial");t=o(t,e),r=s(r,c(e))}),u(c(t),r)}})}var i=r(306);t.name="multinomial",t.factory=n},function(e,t,r){"use strict";function n(e,t,n,o){var s=n(r(397)),u=o("permutations",{"number | BigNumber":s,"number, number":function(e,t){var r,n;if(!a(e)||0>e)throw new TypeError("Positive integer value expected in function 
permutations");if(!a(t)||0>t)throw new TypeError("Positive integer value expected in function permutations");if(t>e)throw new TypeError("second argument k must be less than or equal to first argument n");for(r=1,n=e-t+1;e>=n;n++)r*=n;return r},"BigNumber, BigNumber":function(t,r){var n,a;if(!i(t)||!i(r))throw new TypeError("Positive integer value expected in function permutations");if(r.gt(t))throw new TypeError("second argument k must be less than or equal to first argument n");for(n=new e.BigNumber(1),a=t.minus(r).plus(1);a.lte(t);a=a.plus(1))n=n.times(a);return n}});return u.toTex="\\mathrm{${name}}\\left(${args}\\right)",u}function i(e){return e.isInteger()&&e.gte(0)}var a=r(6).isInteger;t.name="permutations",t.factory=n},function(e,t,r){"use strict";function n(e,t,n,i){var a=n(r(432)),o=a("uniform").pickRandom;return o.toTex="\\mathrm{${name}}\\left(${args}\\right)",o}t.name="pickRandom",t.factory=n},function(e,t,r){"use strict";function n(e,t,n,o){function s(e){if(!f.hasOwnProperty(e))throw new Error("Unknown distribution "+e);var t=Array.prototype.slice.call(arguments,1),r=f[e].apply(this,t);return function(e){var t={random:function(e,t,n){var s,c,f;if(arguments.length>3)throw new i("random",arguments.length,0,3);if(1===arguments.length?a(e)?s=e:f=e:2===arguments.length?a(e)?(s=e,f=t):(c=e,f=t):(s=e,c=t,f=n),void 0===f&&(f=1),void 0===c&&(c=0),void 0!==s){var l=o(s.valueOf(),c,f,r);return s&&s.isMatrix===!0?u(l):l}return r(c,f)},randomInt:function(e,t,r){var s,c,f;if(arguments.length>3||arguments.length<1)throw new i("randomInt",arguments.length,1,3);if(1===arguments.length?a(e)?s=e:f=e:2===arguments.length?a(e)?(s=e,f=t):(c=e,f=t):(s=e,c=t,f=r),void 0===c&&(c=0),void 0!==s){var l=o(s.valueOf(),c,f,n);return s&&s.isMatrix===!0?u(l):l}return n(c,f)},pickRandom:function(e){if(1!==arguments.length)throw new i("pickRandom",arguments.length,1);if(e&&e.isMatrix===!0)e=e.valueOf();else if(!Array.isArray(e))throw new TypeError("Unsupported type of value in function pickRandom");if(c.size(e).length>1)throw new Error("Only one dimensional vectors supported");return e[Math.floor(Math.random()*e.length)]}},r=function(t,r){return t+e()*(r-t)},n=function(t,r){return Math.floor(t+e()*(r-t))},o=function(e,t,r,n){var i,a,s=[];if(e=e.slice(0),e.length>1)for(a=0,i=e.shift();i>a;a++)s.push(o(e,t,r,n));else for(a=0,i=e.shift();i>a;a++)s.push(n(t,r));return s};return t}(r)}var u=n(r(50)),c=r(39),f={uniform:function(){return Math.random},normal:function(){return function(){for(var e,t,r=-1;0>r||r>1;)e=Math.random(),t=Math.random(),r=1/6*Math.pow(-2*Math.log(e),.5)*Math.cos(2*Math.PI*t)+.5;return r}}};return s.toTex="\\mathrm{${name}}\\left(${args}\\right)",s}var i=r(11),a=r(304);t.name="distribution",t.factory=n},function(e,t,r){"use strict";function n(e,t,n,i){var a=n(r(432)),o=a("uniform").random;return o.toTex="\\mathrm{${name}}\\left(${args}\\right)",o}t.name="random",t.factory=n},function(e,t,r){"use strict";function n(e,t,n,i){var a=n(r(432)),o=a("uniform").randomInt;return o.toTex="\\mathrm{${name}}\\left(${args}\\right)",o}t.name="randomInt",t.factory=n},function(e,t,r){e.exports=[r(436),r(437),r(86),r(62),r(336),r(58),r(438),r(439)]},function(e,t,r){"use strict";function n(e,t,n,a){var o=n(r(50)),s=n(r(59)),u=n(r(76)),c=n(r(61)),f=n(r(55)),l=n(r(56)),p=a("compare",{"boolean, boolean":function(e,t){return e===t?0:e>t?1:-1},"number, number":function(e,r){return e===r||i(e,r,t.epsilon)?0:e>r?1:-1},"BigNumber, BigNumber":function(t,r){return new e.BigNumber(t.cmp(r))},"Fraction, 
Fraction":function(t,r){return new e.Fraction(t.compare(r))},"Complex, Complex":function(){throw new TypeError("No ordering relation is defined for complex numbers")},"Unit, Unit":function(e,t){if(!e.equalBase(t))throw new Error("Cannot compare units with different base");return p(e.value,t.value)},"string, string":function(e,t){return e===t?0:e>t?1:-1},"Matrix, Matrix":function(e,t){var r;switch(e.storage()){case"sparse":switch(t.storage()){case"sparse":r=u(e,t,p);break;default:r=s(t,e,p,!0)}break;default:switch(t.storage()){case"sparse":r=s(e,t,p,!1);break;default:r=f(e,t,p)}}return r},"Array, Array":function(e,t){return p(o(e),o(t)).valueOf()},"Array, Matrix":function(e,t){return p(o(e),t)},"Matrix, Array":function(e,t){return p(e,o(t))},"Matrix, any":function(e,t){var r;switch(e.storage()){case"sparse":r=c(e,t,p,!1);break;default:r=l(e,t,p,!1)}return r},"any, Matrix":function(e,t){var r;switch(t.storage()){case"sparse":r=c(t,e,p,!0);break;default:r=l(t,e,p,!0)}return r},"Array, any":function(e,t){return l(o(e),t,p,!1).valueOf()},"any, Array":function(e,t){return l(o(t),e,p,!0).valueOf()}});return p.toTex="\\mathrm{${name}}\\left(${args}\\right)",p}var i=r(6).nearlyEqual;t.name="compare",t.factory=n},function(e,t,r){"use strict";function n(e,t,n,i){function a(e,t){if(Array.isArray(e)){if(Array.isArray(t)){var r=e.length;if(r!==t.length)return!1;for(var n=0;r>n;n++)if(!a(e[n],t[n]))return!1;return!0}return!1}return Array.isArray(t)?!1:o(e,t)}var o=n(r(86)),s=i("deepEqual",{"any, any":function(e,t){return a(e.valueOf(),t.valueOf())}});return s.toTex="\\mathrm{${name}}\\left(${args}\\right)", -s}t.name="deepEqual",t.factory=n},function(e,t,r){"use strict";function n(e,t,n,a){var o=n(r(50)),s=n(r(59)),u=n(r(60)),c=n(r(61)),f=n(r(55)),l=n(r(56)),p=r(29),m=a("smallerEq",{"boolean, boolean":function(e,t){return t>=e},"number, number":function(e,r){return r>=e||i(e,r,t.epsilon)},"BigNumber, BigNumber":function(e,t){return e.lte(t)},"Fraction, Fraction":function(e,t){return 1!==e.compare(t)},"Complex, Complex":function(){throw new TypeError("No ordering relation is defined for complex numbers")},"Unit, Unit":function(e,t){if(!e.equalBase(t))throw new Error("Cannot compare units with different base");return m(e.value,t.value)},"string, string":function(e,t){return t>=e},"Matrix, Matrix":function(e,t){var r;switch(e.storage()){case"sparse":switch(t.storage()){case"sparse":r=u(e,t,m);break;default:r=s(t,e,m,!0)}break;default:switch(t.storage()){case"sparse":r=s(e,t,m,!1);break;default:r=f(e,t,m)}}return r},"Array, Array":function(e,t){return m(o(e),o(t)).valueOf()},"Array, Matrix":function(e,t){return m(o(e),t)},"Matrix, Array":function(e,t){return m(e,o(t))},"Matrix, any":function(e,t){var r;switch(e.storage()){case"sparse":r=c(e,t,m,!1);break;default:r=l(e,t,m,!1)}return r},"any, Matrix":function(e,t){var r;switch(t.storage()){case"sparse":r=c(t,e,m,!0);break;default:r=l(t,e,m,!0)}return r},"Array, any":function(e,t){return l(o(e),t,m,!1).valueOf()},"any, Array":function(e,t){return l(o(t),e,m,!0).valueOf()}});return m.toTex="\\left(${args[0]}"+p.operators.smallerEq+"${args[1]}\\right)",m}var i=r(6).nearlyEqual;t.name="smallerEq",t.factory=n},function(e,t,r){"use strict";function n(e,t,n,a){var o=n(r(50)),s=n(r(59)),u=n(r(60)),c=n(r(61)),f=n(r(55)),l=n(r(56)),p=r(29),m=a("unequal",{"any, any":function(e,t){return null===e?null!==t:null===t?null!==e:void 0===e?void 0!==t:void 0===t?void 0!==e:h(e,t)},"Matrix, Matrix":function(e,t){var 
r;switch(e.storage()){case"sparse":switch(t.storage()){case"sparse":r=u(e,t,h);break;default:r=s(t,e,h,!0)}break;default:switch(t.storage()){case"sparse":r=s(e,t,h,!1);break;default:r=f(e,t,h)}}return r},"Array, Array":function(e,t){return m(o(e),o(t)).valueOf()},"Array, Matrix":function(e,t){return m(o(e),t)},"Matrix, Array":function(e,t){return m(e,o(t))},"Matrix, any":function(e,t){var r;switch(e.storage()){case"sparse":r=c(e,t,h,!1);break;default:r=l(e,t,h,!1)}return r},"any, Matrix":function(e,t){var r;switch(t.storage()){case"sparse":r=c(t,e,h,!0);break;default:r=l(t,e,h,!0)}return r},"Array, any":function(e,t){return l(o(e),t,h,!1).valueOf()},"any, Array":function(e,t){return l(o(t),e,h,!0).valueOf()}}),h=a("_unequal",{"boolean, boolean":function(e,t){return e!==t},"number, number":function(e,r){return!i(e,r,t.epsilon)},"BigNumber, BigNumber":function(e,t){return!e.eq(t)},"Fraction, Fraction":function(e,t){return 0!==e.compare(t)},"Complex, Complex":function(e,r){return!i(e.re,r.re,t.epsilon)||!i(e.im,r.im,t.epsilon)},"Unit, Unit":function(e,t){if(!e.equalBase(t))throw new Error("Cannot compare units with different base");return m(e.value,t.value)},"string, string":function(e,t){return e!==t}});return m.toTex="\\left(${args[0]}"+p.operators.unequal+"${args[1]}\\right)",m}var i=r(6).nearlyEqual;t.name="unequal",t.factory=n},function(e,t,r){e.exports=[r(305),r(309),r(441),r(314),r(443),r(444),r(445),r(446),r(428),r(447)]},function(e,t,r){"use strict";function n(e,t,n,a){function o(e){e=i(e.valueOf());var t=e.length;if(0==t)throw new Error("Cannot calculate median of an empty array");if(t%2==0){for(var r=t/2-1,n=f(e,r+1),a=e[r],o=0;r>o;++o)c(e[o],a)>0&&(a=e[o]);return m(a,n)}var s=f(e,(t-1)/2);return p(s)}var s=n(r(51)),u=n(r(78)),c=n(r(436)),f=n(r(442)),l=a("median",{"Array | Matrix":o,"Array | Matrix, number | BigNumber":function(e,t){throw new Error("median(A, dim) is not yet supported")},"...":function(){return o(Array.prototype.slice.call(arguments))}}),p=a({"number | BigNumber | Unit":function(e){return e}}),m=a({"number | BigNumber | Unit, number | BigNumber | Unit":function(e,t){return u(s(e,t),2)}});return l.toTex="\\mathrm{${name}}\\left(${args}\\right)",l}var i=r(39).flatten;t.name="median",t.factory=n},function(e,t,r){"use strict";function n(e,t,n,a){function o(e,t){return-c(e,t)}function s(e,t,r){if(!i(t)||0>t)throw new Error("k must be a non-negative integer");if(e&&e.isMatrix){var n=e.size();if(n.length>1)throw new Error("Only one dimensional matrices supported");return u(e.valueOf(),t,r)}return Array.isArray(e)?u(e,t,r):void 0}function u(e,t,r){if(t>=e.length)throw new Error("k out of bounds");for(var n=0,i=e.length-1;i>n;){for(var a=n,o=i,s=e[Math.floor(Math.random()*(i-n+1))+n];o>a;)if(r(e[a],s)>=0){var u=e[o];e[o]=e[a],e[a]=u,--o}else++a;r(e[a],s)>0&&--a,a>=t?i=a:n=a+1}return e[t]}var c=n(r(436));return a("partitionSelect",{"Array | Matrix, number":function(e,t){return s(e,t,c)},"Array | Matrix, number, string":function(e,t,r){if("asc"===r)return s(e,t,c);if("desc"===r)return s(e,t,o);throw new Error('Compare string must be "asc" or "desc"')},"Array | Matrix, number, function":s})}var i=r(6).isInteger;t.name="partitionSelect",t.factory=n},function(e,t,r){"use strict";function n(e,t,r,n){function a(e){e=i(e.valueOf());var t=e.length;if(0==t)throw new Error("Cannot calculate mode of an empty array");var r={},n=[],a=0;for(var o in e)e[o]in r||(r[e[o]]=0),r[e[o]]++,r[e[o]]==a?n.push(e[o]):r[e[o]]>a&&(a=r[e[o]],n=[e[o]]);return n}var o=n("mode",{"Array | 
Matrix":a,"...":function(){return a(Array.prototype.slice.call(arguments))}});return o}var i=r(39).flatten;t.name="mode",t.factory=n},function(e,t,r){"use strict";function n(e,t,n,a){function o(e){var t=void 0;if(i(e,function(e){t=void 0===t?e:s(t,e)}),void 0===t)throw new Error("Cannot calculate prod of an empty array");return t}var s=n(r(77)),u=a("prod",{"Array | Matrix":o,"Array | Matrix, number | BigNumber":function(e,t){throw new Error("prod(A, dim) is not yet supported")},"...":function(){return o(arguments)}});return u.toTex="\\mathrm{${name}}\\left(${args}\\right)",u}var i=r(306);t.name="prod",t.factory=n},function(e,t,r){"use strict";function n(e,t,n,u){function c(t,r,n){var o,u,c;if(arguments.length<2||arguments.length>3)throw new SyntaxError("Function quantileSeq requires two or three parameters");if(s(t)){if(n=n||!1,"boolean"==typeof n){if(u=t.valueOf(),a(r)){if(0>r)throw new Error("N/prob must be non-negative");if(1>=r)return f(u,r,n);if(r>1){if(!i(r))throw new Error("N must be a positive integer");var l=r+1;o=new Array(r);for(var p=0;r>p;)o[p]=f(u,++p/l,n);return o}}if(r&&r.isBigNumber){if(r.isNegative())throw new Error("N/prob must be non-negative");if(c=r.constructor.ONE,r.lte(c))return f(u,r,n);if(r.gt(c)){if(!r.isInteger())throw new Error("N must be a positive integer");var m=r.toNumber();if(m>4294967295)throw new Error("N must be less than or equal to 2^32-1, as that is the maximum length of an Array");var l=new e.BigNumber(m+1);o=new Array(m);for(var p=0;m>p;)o[p]=f(u,new e.BigNumber(++p).div(l),n);return o}}if(Array.isArray(r)){o=new Array(r.length);for(var p=0;ph||h>1)throw new Error("Probability must be between 0 and 1, inclusive")}else{if(!h||!h.isBigNumber)throw new TypeError("Unexpected type of argument in function quantileSeq");if(c=h.constructor.ONE,h.isNegative()||h.gt(c))throw new Error("Probability must be between 0 and 1, inclusive")}o[p]=f(u,h,n)}return o}throw new TypeError("Unexpected type of argument in function quantileSeq")}throw new TypeError("Unexpected type of argument in function quantileSeq")}throw new TypeError("Unexpected type of argument in function quantileSeq")}function f(e,t,r){var n=o(e),i=n.length;if(0===i)throw new Error("Cannot calculate quantile of an empty sequence");if(a(t)){var s=t*(i-1),u=s%1;if(0===u){var c=r?n[s]:m(n,s);return g(c),c}var f,v,d=Math.floor(s);if(r)f=n[d],v=n[d+1];else{v=m(n,d+1),f=n[d];for(var y=0;d>y;++y)h(n[y],f)>0&&(f=n[y])}return g(f),g(v),l(p(f,1-u),p(v,u))}var s=t.times(i-1);if(s.isInteger()){s=s.toNumber();var c=r?n[s]:m(n,s);return g(c),c}var f,v,d=s.floor(),u=s.minus(d),x=d.toNumber();if(r)f=n[x],v=n[x+1];else{v=m(n,x+1),f=n[x];for(var y=0;x>y;++y)h(n[y],f)>0&&(f=n[y])}g(f),g(v);var b=u.constructor.ONE;return l(p(f,b.minus(u)),p(v,u))}var l=n(r(49)),p=n(r(83)),m=n(r(442)),h=n(r(436)),g=u({"number | BigNumber | Unit":function(e){return e}});return c}var i=r(6).isInteger,a=r(6).isNumber,o=r(39).flatten,s=r(304);t.name="quantileSeq",t.factory=n},function(e,t,r){"use strict";function n(e,t,n,i){function a(e,t){if(0==e.length)throw new SyntaxError("Function std requires one or more parameters (0 provided)");return o(s.apply(null,arguments))}var o=n(r(362)),s=n(r(447)),u=i("std",{"Array | Matrix":a,"Array | Matrix, string":a,"...":function(){return a(Array.prototype.slice.call(arguments))}});return u.toTex="\\mathrm{${name}}\\left(${args}\\right)",u}t.name="std",t.factory=n},function(e,t,r){"use strict";function n(e,t,n,o){function s(t,r){var n=0,i=0;if(0==t.length)throw new SyntaxError("Function var requires one 
or more parameters (0 provided)");if(a(t,function(e){n=u(n,e),i++}),0===i)throw new Error("Cannot calculate var of an empty array");var o=l(n,i);switch(n=0,a(t,function(e){var t=c(e,o);n=u(n,f(t,t))}),r){case"uncorrected":return l(n,i);case"biased":return l(n,i+1);case"unbiased":var s=n&&n.isBigNumber===!0?new e.BigNumber(0):0;return 1==i?s:l(n,i-1);default:throw new Error('Unknown normalization "'+r+'". Choose "unbiased" (default), "uncorrected", or "biased".')}}var u=n(r(51)),c=n(r(74)),f=n(r(77)),l=n(r(78)),p=o("variance",{"Array | Matrix":function(e){return s(e,i)},"Array | Matrix, string":s,"...":function(){return s(arguments,i)}});return p.toTex="\\mathrm{Var}\\left(${args}\\right)",p}var i="unbiased",a=r(306);t.name="var",t.factory=n},function(e,t,r){e.exports=[r(449),r(459),r(461),r(463),r(466),r(468),r(470),r(471),r(467),r(469),r(462),r(472),r(465),r(474),r(475),r(478),r(480),r(482),r(483),r(484),r(485),r(486),r(477),r(487),r(488)]},function(e,t,r){"use strict";function n(e,t,n,o){function s(t){var r=new e.Complex(t.im*t.im-t.re*t.re+1,-2*t.re*t.im),n=u(r),i=new e.Complex(n.re-t.im,n.im+t.re),a=c(i);return new e.Complex(1.5707963267948966-a.im,a.re)}var u=o.find(n(r(362)),["Complex"]),c=o.find(n(r(82)),["Complex"]),f=o("acos",{number:function(r){return r>=-1&&1>=r||t.predictable?Math.acos(r):s(new e.Complex(r,0))},Complex:s,BigNumber:function(t){return a(t,e.BigNumber,!1)},"Array | Matrix":function(e){return i(e,f)}});return f.toTex="\\cos^{-1}\\left(${args[0]}\\right)",f}var i=r(19),a=r(450);t.name="acos",t.factory=n},function(e,t,r){var n=r(93).pi,i=r(451);e.exports=function(e,t,r){if(r){if(e.abs().lt(t.ONE))throw new Error("asec() only has non-complex values for |x| >= 1.")}else if(e.abs().gt(t.ONE))throw new Error("acos() only has non-complex values for |x| <= 1.");if(e.eq(-1))return n(t);var a=t.precision;t.config({precision:a+4}),r&&(e=t.ONE.div(e));var o=i(t.ONE.minus(e.times(e)).sqrt().div(e.plus(t.ONE)),t).times(2);return t.config({precision:a}),o.toDP(a-1)}},function(e,t,r){var n=r(93),i=r(452),a=r(94);e.exports=function(e,t,r){if(e.isNaN())return new t(NaN);if(!r&&e.isZero()||r&&!e.isFinite())return new t(0);var o=t.precision;if(!r&&!e.isFinite()||r&&e.isZero()){var s=n.pi(t.constructor({precision:o+2})).div(2).toDP(o-1);return s.constructor=t,s.s=e.s,s}t.config({precision:o+4}),r&&(e=t.ONE.div(e));var u=e.abs();if(u.lte(.875)){var c=a(e);return c.constructor=t,t.config({precision:o}),c.toDP(t.precision-1)}if(u.gte(1.143)){var s=n.pi(t.constructor({precision:o+4})).div(2),c=s.minus(a(t.ONE.div(u)));return c.s=e.s,c.constructor=t,t.config({precision:o}),c.toDP(t.precision-1)}return e=e.div(e.times(e).plus(1).sqrt()),t.config({precision:o}),i(e,t)}},function(e,t,r){var n=r(93).pi,i=r(453),a=r(454);e.exports=function o(e,t,r){if(e.isNaN())return new t(NaN);var s=t.precision,u=e.abs();if(r){if(u.lt(t.ONE))throw new Error("acsc() only has non-complex values for |x| >= 1.");t.config({precision:s+2}),e=t.ONE.div(e),t.config({precision:s}),u=e.abs()}else if(u.gt(t.ONE))throw new Error("asin() only has non-complex values for |x| <= 1.");if(u.gt(.8)){t.config({precision:s+4});var c=e.s,f=n(t.constructor({precision:s+4})).div(2);return e=f.minus(o(t.ONE.minus(e.times(e)).sqrt(),t)),e.s=c,e.constructor=t,t.config({precision:s}),e.toDP(s-1)}var l=u.gt(.58);l&&(t.config({precision:s+8}),e=e.div(new t(2).sqrt().times(t.ONE.minus(e.times(e)).sqrt().plus(t.ONE).sqrt())),t.config({precision:s}));var p=60>=s||e.dp()<=Math.log(s)&&e.lt(.05)?i(e,s):a(e,t);return 
l?p.times(2):p}},function(e,t){e.exports=function(e,t){var r=e.constructor;r.config({precision:t+Math.log(t)|4});for(var n=new r(1),i=e,a=NaN,o=e.times(e),s=e,u=new r(n),c=new r(n),f=new r(n),l=3;!i.equals(a);l+=2)s=s.times(o),u=u.times(f),c=c.times(f.plus(n)),a=i,f=new r(l),i=i.plus(s.times(u).div(f.times(c)));return r.config({precision:t}),i.toDP(t-1)}},function(e,t,r){var n=r(455),i=r(458);e.exports=function(e,t){var r=t.precision,a=-(r+4),o=r+8-e.e,s=25-e.e,u=Math.max(1.442695*Math.log(r+2)|5,5);t.config({precision:s});var c=0,f=new t(Math.asin(e.toNumber())+"");do{var l=n(f,t,1,!1),p=i(l);l.isZero()||(l.s=f.s);var m=l.minus(e).div(p);f=f.minus(m),s=Math.min(2*s,o),t.config({precision:s})}while(2*m.e>=a&&!m.isZero()&&++c<=u);if(c==u)throw new Error("asin() failed to converge to the requested accuracy.Try with a higher precision.");return t.config({precision:r}),f.toDP(r-1)}},function(e,t,r){var n=r(456),i=r(457);e.exports=function(e,t,r,a){if(e.isNaN()||!e.isFinite())return new t(NaN);var o=t.precision,s=new t(e),u=s.isNegative();u&&(s.s=-s.s);var c=o+(0|Math.log(o))+3;if(t.config({precision:c}),s=n(s,t.constructor({precision:c}),r),s[0].constructor=t,s[1])return s=s[0],a&&s.isZero()&&(s=new t(1/0)),t.config({precision:o}),s;var f;if(s=s[0],r){f=i(s.div(3125),r),t.config({precision:Math.min(c,o+15)});for(var l=new t(5),p=new t(16),m=new t(20),h=0;5>h;++h){var g=f.times(f),v=g.times(f),d=v.times(g);f=p.times(d).minus(m.times(v)).plus(l.times(f))}u&&(f.s=-f.s)}else{var y,x;s.abs().lt(t.ONE)?(y=64,x=3):(y=256,x=4),f=i(s.div(y),r),t.config({precision:Math.min(c,o+8)});for(var b=new t(8);x>0;--x){var g=f.times(f),w=g.times(g);f=b.times(w.minus(g)).plus(t.ONE)}}return a&&(f=f.e<=-o?new t(1/0):t.ONE.div(f)),t.config({precision:o}),f.toDP(o-1)}},function(e,t,r){var n=r(93);e.exports=function(e,t,r){var i=n.pi(t.constructor({precision:t.precision+2})),a=n.tau(t);if(e.abs().lte(i.toDP(e.dp())))return[e,!1];if(e.dp()>0&&e.div(i.toDP(e.dp())).toNumber()%2==0)return[new t(1^r),!0];var o=e.mod(a);return e.dp()>0&&o.toDP(e.dp(),1).isZero()?[new t(1^r),!0]:(o.gt(i)&&(r?(o=o.minus(i),o.s=-o.s):o=a.minus(o)),o.constructor=e.constructor,[o,!1])}},function(e,t){e.exports=function(e,t){for(var r=e.constructor.ONE,n=e,i=NaN,a=e.times(e),o=t?n:n=r,s=r,u=!0,c=t;!n.equals(i);c+=2)o=o.times(a),s=s.times(c+1).times(c+2),i=n,u=!u,n=u?n.plus(o.div(s)):n.minus(o.div(s));return n}},function(e,t){e.exports=function(e){var t=e.constructor,r=t.precision;t.config({precision:r+2});var n=t.ONE.minus(e.times(e)).sqrt();return t.config({precision:r}),n.toDP(r-1)}},function(e,t,r){"use strict";function n(e,t,n,o){function s(e){var t,r=u(e);return r.im<=0?(t=r.re,r.re=-r.im,r.im=t):(t=r.im,r.im=-r.re,r.re=t),r}var u=o.find(n(r(449)),["Complex"]),c=o("acosh",{number:function(r){return r>=1||t.predictable?Math.log(Math.sqrt(r*r-1)+r):-1>=r?new e.Complex(Math.log(Math.sqrt(r*r-1)-r),Math.PI):s(new e.Complex(r,0))},Complex:s,BigNumber:function(t){return a(t,e.BigNumber,!1,!1)},"Array | Matrix":function(e){return i(e,c)}});return c.toTex="\\cosh^{-1}\\left(${args[0]}\\right)",c}var i=r(19),a=r(460);t.name="acosh",t.factory=n},function(e,t){e.exports=function(e,t,r,n){if(e.isNaN())return new t(NaN);if(n&&e.isZero())return new t(1/0);if(!r)if(n){if(e.isNegative()||e.gt(t.ONE))throw new Error("asech() only has non-complex values for 0 <= x <= 1.")}else if(e.lt(t.ONE))throw new Error("acosh() only has non-complex values for x >= 1.");var i=t.precision;t.config({precision:i+4});var a=new t(e);a.constructor=t,n&&(a=t.ONE.div(a));var 
o=r?a.times(a).plus(t.ONE):a.times(a).minus(t.ONE),s=a.plus(o.sqrt()).ln();return t.config({precision:i}),new t(s.toPrecision(i))}},function(e,t,r){"use strict";function n(e,t,n,s){var u=s.find(n(r(462)),["Complex"]),c=s("acot",{number:function(e){return e?Math.atan(1/e):o},Complex:function(t){if(0==t.im)return new e.Complex(t.re?Math.atan(1/t.re):o,0);var r=t.re*t.re+t.im*t.im;return t=0!=r?new e.Complex(t.re=t.re/r,t.im=-t.im/r):new e.Complex(0!=t.re?t.re/0:0,0!=t.im?-(t.im/0):0),u(t)},BigNumber:function(t){return a(t,e.BigNumber,!0)},"Array | Matrix":function(e){return i(e,c)}});return c.toTex="\\cot^{-1}\\left(${args[0]}\\right)",c}var i=r(19),a=r(451),o=1.5707963267948966;t.name="acot",t.factory=n},function(e,t,r){"use strict";function n(e,t,n,o){var s=o.find(n(r(82)),["Complex"]),u=o("atan",{number:function(e){return Math.atan(e)},Complex:function(t){if(0==t.re){if(1==t.im)return new e.Complex(0,1/0);if(-1==t.im)return new e.Complex(0,-(1/0))}var r=t.re,n=t.im,i=r*r+(1-n)*(1-n),a=new e.Complex((1-n*n-r*r)/i,-2*r/i),o=s(a);return new e.Complex(-.5*o.im,.5*o.re)},BigNumber:function(t){return a(t,e.BigNumber,!1)},"Array | Matrix":function(e){return i(e,u,!0)}});return u.toTex="\\tan^{-1}\\left(${args[0]}\\right)",u}var i=r(19),a=r(451);t.name="atan",t.factory=n},function(e,t,r){"use strict";function n(e,t,n,s){function u(t){if(0==t.re&&0==t.im)return new e.Complex(0,o);var r=t.re*t.re+t.im*t.im;return t=0!=r?new e.Complex(t.re/r,-t.im/r):new e.Complex(0!=t.re?t.re/0:0,0!=t.im?-(t.im/0):0),c(t)}var c=s.find(n(r(465)),["Complex"]),f=s("acoth",{number:function(r){return r>=1||-1>=r||t.predictable?isFinite(r)?(Math.log((r+1)/r)+Math.log(r/(r-1)))/2:0:0!==r?u(new e.Complex(r,0)):new e.Complex(0,o)},Complex:u,BigNumber:function(t){return a(t,e.BigNumber,!0)},"Array | Matrix":function(e){return i(e,f)}});return f.toTex="\\coth^{-1}\\left(${args[0]}\\right)",f}var i=r(19),a=r(464),o=1.5707963267948966;t.name="acoth",t.factory=n},function(e,t){e.exports=function(e,t,r){if(e.isNaN())return new t(NaN);var n=e.abs();if(n.eq(t.ONE))return new t(e.isNegative()?-(1/0):1/0);if(n.gt(t.ONE)){if(!r)throw new Error("atanh() only has non-complex values for |x| <= 1.")}else if(r)throw new Error("acoth() has complex values for |x| < 1.");if(e.isZero())return new t(0);var i=t.precision;t.config({precision:i+4});var a=new t(e);a.constructor=t,r&&(a=t.ONE.div(a));var o=t.ONE.plus(a).div(t.ONE.minus(a)).ln().div(2);return t.config({precision:i}),new t(o.toPrecision(i))}},function(e,t,r){"use strict";function n(e,t,r,n){function o(t){var r=t.re>1&&0==t.im,n=1-t.re,i=1+t.re,a=n*n+t.im*t.im;t=0!=a?new e.Complex((i*n-t.im*t.im)/a,(t.im*n+i*t.im)/a):new e.Complex(-1!=t.re?t.re/0:0,0!=t.im?t.im/0:0);var o=t.re;return t.re=Math.log(Math.sqrt(t.re*t.re+t.im*t.im))/2,t.im=Math.atan2(t.im,o)/2,r&&(t.im=-t.im),t}var s=n("atanh",{number:function(r){return 1>=r&&r>=-1||t.predictable?Math.log((1+r)/(1-r))/2:o(new e.Complex(r,0))},Complex:o,BigNumber:function(t){return a(t,e.BigNumber,!1)},"Array | Matrix":function(e){return i(e,s,!0)}});return s.toTex="\\tanh^{-1}\\left(${args[0]}\\right)",s}var i=r(19),a=r(464);t.name="atanh",t.factory=n},function(e,t,r){"use strict";function n(e,t,n,s){function u(t){if(0==t.re&&0==t.im)return new e.Complex(o,1/0);var r=t.re*t.re+t.im*t.im;return t=0!=r?new e.Complex(t.re=t.re/r,t.im=-t.im/r):new e.Complex(0!=t.re?t.re/0:0,0!=t.im?-(t.im/0):0),c(t)}var c=s.find(n(r(467)),["Complex"]),f=s("acsc",{number:function(r){return-1>=r||r>=1||t.predictable?Math.asin(1/r):u(new 
e.Complex(r,0))},Complex:u,BigNumber:function(t){return a(t,e.BigNumber,!0)},"Array | Matrix":function(e){return i(e,f)}});return f.toTex="\\csc^{-1}\\left(${args[0]}\\right)",f}var i=r(19),a=r(452),o=1.5707963267948966;t.name="acsc",t.factory=n},function(e,t,r){"use strict";function n(e,t,n,o){function s(t){var r=t.re,n=t.im,i=new e.Complex(n*n-r*r+1,-2*r*n),a=u(i),o=new e.Complex(a.re-n,a.im+r),s=c(o);return new e.Complex(s.im,-s.re)}var u=o.find(n(r(362)),["Complex"]),c=o.find(n(r(82)),["Complex"]),f=o("asin",{number:function(r){return r>=-1&&1>=r||t.predictable?Math.asin(r):s(new e.Complex(r,0))},Complex:s,BigNumber:function(t){return a(t,e.BigNumber,!1)},"Array | Matrix":function(e){return i(e,f,!0)}});return f.toTex="\\sin^{-1}\\left(${args[0]}\\right)",f}var i=r(19),a=r(452);t.name="asin",t.factory=n},function(e,t,r){"use strict";function n(e,t,n,o){var s=o.find(n(r(469)),["Complex"]),u=o("acsch",{number:function(e){return e=1/e,Math.log(e+Math.sqrt(e*e+1))},Complex:function(t){if(0==t.im)return t=0!=t.re?Math.log(t.re+Math.sqrt(t.re*t.re+1)):1/0,new e.Complex(t,0);var r=t.re*t.re+t.im*t.im;return t=0!=r?new e.Complex(t.re/r,-t.im/r):new e.Complex(0!=t.re?t.re/0:0,0!=t.im?-(t.im/0):0),s(t)},BigNumber:function(t){return a(t,e.BigNumber,!0,!0)},"Array | Matrix":function(e){return i(e,u)}});return u.toTex="\\mathrm{csch}^{-1}\\left(${args[0]}\\right)",u}var i=r(19),a=r(460);t.name="acsch",t.factory=n},function(e,t,r){"use strict";function n(e,t,n,o){var s=o.find(n(r(467)),["Complex"]),u=o("asinh",{number:function(e){return Math.log(Math.sqrt(e*e+1)+e)},Complex:function(e){var t=e.im;e.im=-e.re,e.re=t;var r=s(e);return e.re=-e.im,e.im=t,t=r.re,r.re=-r.im,r.im=t,r},BigNumber:function(t){return a(t,e.BigNumber,!0,!1)},"Array | Matrix":function(e){return i(e,u,!0)}});return u.toTex="\\sinh^{-1}\\left(${args[0]}\\right)",u}var i=r(19),a=r(460);t.name="asinh",t.factory=n},function(e,t,r){"use strict";function n(e,t,n,o){function s(t){if(0==t.re&&0==t.im)return new e.Complex(0,1/0);var r=t.re*t.re+t.im*t.im;return t=0!=r?new e.Complex(t.re=t.re/r,t.im=-t.im/r):new e.Complex(0!=t.re?t.re/0:0,0!=t.im?-(t.im/0):0),u(t)}var u=o.find(n(r(449)),["Complex"]),c=o("asec",{number:function(r){return-1>=r||r>=1||t.predictable?Math.acos(1/r):s(new e.Complex(r,0))},Complex:s,BigNumber:function(t){return a(t,e.BigNumber,!0)},"Array | Matrix":function(e){return i(e,c)}});return c.toTex="\\sec^{-1}\\left(${args[0]}\\right)",c}var i=r(19),a=r(450);t.name="asec",t.factory=n},function(e,t,r){"use strict";function n(e,t,n,o){function s(t){if(0==t.re&&0==t.im)return new e.Complex(1/0,0);var r=t.re*t.re+t.im*t.im;return t=0!=r?new e.Complex(t.re/r,-t.im/r):new e.Complex(0!=t.re?t.re/0:0,0!=t.im?-(t.im/0):0),u(t)}var u=o.find(n(r(459)),["Complex"]),c=o("asech",{number:function(r){if(1>=r&&r>=-1||t.predictable){r=1/r;var n=Math.sqrt(r*r-1);return r>0||t.predictable?Math.log(n+r):new e.Complex(Math.log(n-r),Math.PI)}return s(new e.Complex(r,0))},Complex:s,BigNumber:function(t){return a(t,e.BigNumber,!1,!0)},"Array | Matrix":function(e){return i(e,c)}});return c.toTex="\\mathrm{sech}^{-1}\\left(${args[0]}\\right)",c}var i=r(19),a=r(460);t.name="asech",t.factory=n},function(e,t,r){"use strict";function n(e,t,n,a){var o=n(r(50)),s=n(r(354)),u=n(r(59)),c=n(r(356)),f=n(r(84)),l=n(r(61)),p=n(r(55)),m=n(r(56)),h=a("atan2",{"number, number":Math.atan2,"BigNumber, BigNumber":function(t,r){return i(t,r,e.BigNumber)},"Matrix, Matrix":function(e,t){var 
r;switch(e.storage()){case"sparse":switch(t.storage()){case"sparse":r=c(e,t,h,!1);break;default:r=s(t,e,h,!0)}break;default:switch(t.storage()){case"sparse":r=u(e,t,h,!1);break;default:r=p(e,t,h)}}return r},"Array, Array":function(e,t){return h(o(e),o(t)).valueOf()},"Array, Matrix":function(e,t){return h(o(e),t)},"Matrix, Array":function(e,t){return h(e,o(t))},"Matrix, number | BigNumber":function(e,t){var r;switch(e.storage()){case"sparse":r=f(e,t,h,!1);break;default:r=m(e,t,h,!1)}return r},"number | BigNumber, Matrix":function(e,t){var r;switch(t.storage()){case"sparse":r=l(t,e,h,!0);break;default:r=m(t,e,h,!0)}return r},"Array, number | BigNumber":function(e,t){return m(o(e),t,h,!1).valueOf()},"number | BigNumber, Array":function(e,t){return m(o(t),e,h,!0).valueOf()}});return h.toTex="\\mathrm{atan2}\\left(${args}\\right)",h}var i=r(473);t.name="atan2",t.factory=n},function(e,t,r){var n=r(93),i=r(451);e.exports=function(e,t,r){var a=r.precision;if(t.isZero()){if(e.isZero())return new r(NaN);var o=n.pi(r.constructor({precision:a+2})).div(2).toDP(a-1);return o.constructor=r,o.s=e.s,o}r.config({precision:a+2});var s=i(e.div(t),r,!1);if(t.isNegative()){var u=n.pi(r);s=e.isNegative()?s.minus(u):s.plus(u)}return s.constructor=r,r.config({precision:a}),s.toDP(a-1)}},function(e,t,r){"use strict";function n(e,t,n,o){var s=o.find(n(r(475)),["number"]),u=o.find(n(r(477)),["number"]),c=o("cos",{number:Math.cos,Complex:function(t){return new e.Complex(Math.cos(t.re)*s(-t.im),Math.sin(t.re)*u(-t.im))},BigNumber:function(t){return a(t,e.BigNumber,0,!1)},Unit:function(t){if(!t.hasBase(e.Unit.BASE_UNITS.ANGLE))throw new TypeError("Unit in function cos is no angle");return c(t.value)},"Array | Matrix":function(e){return i(e,c)}});return c.toTex="\\cos\\left(${args[0]}\\right)",c}var i=r(19),a=r(455);t.name="cos",t.factory=n},function(e,t,r){"use strict";function n(e,t,r,n){var s=n("cosh",{number:i,Complex:function(t){var r=Math.exp(t.re),n=Math.exp(-t.re);return new e.Complex(Math.cos(t.im)*(r+n)/2,Math.sin(t.im)*(r-n)/2)},BigNumber:function(t){return o(t,e.BigNumber,!1,!1)},Unit:function(t){if(!t.hasBase(e.Unit.BASE_UNITS.ANGLE))throw new TypeError("Unit in function cosh is no angle");return s(t.value)},"Array | Matrix":function(e){return a(e,s)}});return s.toTex="\\cosh\\left(${args[0]}\\right)",s}function i(e){return(Math.exp(e)+Math.exp(-e))/2}var a=r(19),o=r(476);t.name="cosh",t.factory=n},function(e,t){e.exports=function(e,t,r,n){if(e.isNaN())return new t(NaN);if(!e.isFinite())return new t(n?0:r?e:1/0);var i=t.precision;t.config({precision:i+4});var a=new t(e);return a.constructor=t,a=a.exp(),a=r?a.minus(t.ONE.div(a)):a.plus(t.ONE.div(a)),a=n?new t(2).div(a):a.div(2),t.config({precision:i}),new t(a.toPrecision(i))}},function(e,t,r){"use strict";function n(e,t,r,n){var s=n("sinh",{number:i,Complex:function(t){var r=Math.cos(t.im),n=Math.sin(t.im),i=Math.exp(t.re),a=Math.exp(-t.re);return new e.Complex(r*(i-a)/2,n*(i+a)/2)},BigNumber:function(t){return o(t,e.BigNumber,!0,!1)},Unit:function(t){if(!t.hasBase(e.Unit.BASE_UNITS.ANGLE))throw new TypeError("Unit in function sinh is no angle");return s(t.value)},"Array | Matrix":function(e){return a(e,s,!0)}});return s.toTex="\\sinh\\left(${args[0]}\\right)",s}function i(e){return Math.abs(e)<1?e+e*e*e/6+e*e*e*e*e/120:(Math.exp(e)-Math.exp(-e))/2}var a=r(19),o=r(476);t.name="sinh",t.factory=n},function(e,t,r){"use strict";function n(e,t,r,n){var o=n("cot",{number:function(e){return 1/Math.tan(e)},Complex:function(t){var 
r=Math.exp(-4*t.im)-2*Math.exp(-2*t.im)*Math.cos(2*t.re)+1;return new e.Complex(2*Math.exp(-2*t.im)*Math.sin(2*t.re)/r,(Math.exp(-4*t.im)-1)/r)},BigNumber:function(t){return a(t,e.BigNumber,!0)},Unit:function(t){if(!t.hasBase(e.Unit.BASE_UNITS.ANGLE))throw new TypeError("Unit in function cot is no angle");return o(t.value)},"Array | Matrix":function(e){return i(e,o)}});return o.toTex="\\cot\\left(${args[0]}\\right)",o}var i=r(19),a=r(479);t.name="cot",t.factory=n},function(e,t,r){var n=r(93),i=r(455),a=r(458),o=r(456);e.exports=function(e,t,r){if(e.isNaN())return new t(NaN);var s=t.precision,u=n.pi(t.constructor({precision:s+2})),c=u.div(2).toDP(s-1);u=u.toDP(s-1);var f=o(e,t,1)[0];if(f.abs().eq(u))return new t(1/0);t.config({precision:s+4});var l=i(f,t,1,!1),p=a(l);l=l.toDP(s),p=p.toDP(s),f.eq(e)?f.gt(c)&&(p.s=-p.s):u.minus(f.abs()).gt(c)&&(p.s=-p.s);var m=r?p.div(l):l.div(p);return t.config({precision:s}),new t(m.toPrecision(s))}},function(e,t,r){"use strict";function n(e,t,r,n){var s=n("coth",{number:i,Complex:function(t){var r=Math.exp(2*t.re),n=r*Math.cos(2*t.im),i=r*Math.sin(2*t.im),a=(n-1)*(n-1)+i*i;return new e.Complex(((n+1)*(n-1)+i*i)/a,-2*i/a)},BigNumber:function(t){return o(t,e.BigNumber,!0)},Unit:function(t){if(!t.hasBase(e.Unit.BASE_UNITS.ANGLE))throw new TypeError("Unit in function coth is no angle");return s(t.value)},"Array | Matrix":function(e){return a(e,s)}});return s.toTex="\\coth\\left(${args[0]}\\right)",s}function i(e){var t=Math.exp(2*e);return(t+1)/(t-1)}var a=r(19),o=r(481);t.name="coth",t.factory=n},function(e,t){e.exports=function(e,t,r){if(e.isNaN())return new t(NaN);if(!e.isFinite())return new t(e.s);var n=t.precision;t.config({precision:n+4});var i=new t(e);i.constructor=t;var a=i.exp(),o=t.ONE.div(a),s=a.minus(o);return s=r?a.plus(o).div(s):s.div(a.plus(o)),t.config({precision:n}),s.toDP(n-1)}},function(e,t,r){"use strict";function n(e,t,r,n){var o=n("csc",{number:function(e){return 1/Math.sin(e)},Complex:function(t){var r=.25*(Math.exp(-2*t.im)+Math.exp(2*t.im))-.5*Math.cos(2*t.re);return new e.Complex(.5*Math.sin(t.re)*(Math.exp(-t.im)+Math.exp(t.im))/r,.5*Math.cos(t.re)*(Math.exp(-t.im)-Math.exp(t.im))/r)},BigNumber:function(t){return a(t,e.BigNumber,1,!0)},Unit:function(t){if(!t.hasBase(e.Unit.BASE_UNITS.ANGLE))throw new TypeError("Unit in function csc is no angle");return o(t.value)},"Array | Matrix":function(e){return i(e,o)}});return o.toTex="\\csc\\left(${args[0]}\\right)",o}var i=r(19),a=r(455);t.name="csc",t.factory=n},function(e,t,r){"use strict";function n(e,t,r,n){var s=n("csch",{number:i,Complex:function(t){var r=Math.exp(t.re),n=Math.exp(-t.re),i=Math.cos(t.im)*(r-n),a=Math.sin(t.im)*(r+n),o=i*i+a*a;return new e.Complex(2*i/o,-2*a/o)},BigNumber:function(t){return o(t,e.BigNumber,!0,!0)},Unit:function(t){if(!t.hasBase(e.Unit.BASE_UNITS.ANGLE))throw new TypeError("Unit in function csch is no angle");return s(t.value)},"Array | Matrix":function(e){return a(e,s)}});return s.toTex="\\mathrm{csch}\\left(${args[0]}\\right)",s}function i(e){return 0==e?Number.POSITIVE_INFINITY:Math.abs(2/(Math.exp(e)-Math.exp(-e)))*s(e)}var a=r(19),o=r(476),s=r(6).sign;t.name="csch",t.factory=n},function(e,t,r){"use strict";function n(e,t,r,n){var o=n("sec",{number:function(e){return 1/Math.cos(e)},Complex:function(t){var r=.25*(Math.exp(-2*t.im)+Math.exp(2*t.im))+.5*Math.cos(2*t.re);return new e.Complex(.5*Math.cos(t.re)*(Math.exp(-t.im)+Math.exp(t.im))/r,.5*Math.sin(t.re)*(Math.exp(t.im)-Math.exp(-t.im))/r)},BigNumber:function(t){return 
a(t,e.BigNumber,0,!0)},Unit:function(t){if(!t.hasBase(e.Unit.BASE_UNITS.ANGLE))throw new TypeError("Unit in function sec is no angle");return o(t.value)},"Array | Matrix":function(e){return i(e,o)}});return o.toTex="\\sec\\left(${args[0]}\\right)",o}var i=r(19),a=r(455);t.name="sec",t.factory=n},function(e,t,r){"use strict";function n(e,t,r,n){var s=n("sech",{number:i,Complex:function(t){var r=Math.exp(t.re),n=Math.exp(-t.re),i=Math.cos(t.im)*(r+n),a=Math.sin(t.im)*(r-n),o=i*i+a*a;return new e.Complex(2*i/o,-2*a/o)},BigNumber:function(t){return o(t,e.BigNumber,!1,!0)},Unit:function(t){if(!t.hasBase(e.Unit.BASE_UNITS.ANGLE))throw new TypeError("Unit in function sech is no angle");return s(t.value)},"Array | Matrix":function(e){return a(e,s)}});return s.toTex="\\mathrm{sech}\\left(${args[0]}\\right)",s}function i(e){return 2/(Math.exp(e)+Math.exp(-e))}var a=r(19),o=r(476);t.name="sech",t.factory=n},function(e,t,r){"use strict";function n(e,t,n,o){var s=o.find(n(r(475)),["number"]),u=o.find(n(r(477)),["number"]),c=o("sin",{number:Math.sin,Complex:function(t){return new e.Complex(Math.sin(t.re)*s(-t.im),Math.cos(t.re)*u(t.im))},BigNumber:function(t){return a(t,e.BigNumber,1,!1)},Unit:function(t){if(!t.hasBase(e.Unit.BASE_UNITS.ANGLE))throw new TypeError("Unit in function sin is no angle");return c(t.value)},"Array | Matrix":function(e){return i(e,c,!0)}});return c.toTex="\\sin\\left(${args[0]}\\right)",c}var i=r(19),a=r(455);t.name="sin",t.factory=n},function(e,t,r){"use strict";function n(e,t,r,n){var o=n("tan",{number:Math.tan,Complex:function(t){var r=Math.exp(-4*t.im)+2*Math.exp(-2*t.im)*Math.cos(2*t.re)+1;return new e.Complex(2*Math.exp(-2*t.im)*Math.sin(2*t.re)/r,(1-Math.exp(-4*t.im))/r)},BigNumber:function(t){return a(t,e.BigNumber,!1)},Unit:function(t){if(!t.hasBase(e.Unit.BASE_UNITS.ANGLE))throw new TypeError("Unit in function tan is no angle"); -return o(t.value)},"Array | Matrix":function(e){return i(e,o,!0)}});return o.toTex="\\tan\\left(${args[0]}\\right)",o}var i=r(19),a=r(479);t.name="tan",t.factory=n},function(e,t,r){"use strict";function n(e,t,r,n){var s=n("tanh",{number:i,Complex:function(t){var r=Math.exp(2*t.re),n=r*Math.cos(2*t.im),i=r*Math.sin(2*t.im),a=(n+1)*(n+1)+i*i;return new e.Complex(((n-1)*(n+1)+i*i)/a,2*i/a)},BigNumber:function(t){return o(t,e.BigNumber,!1)},Unit:function(t){if(!t.hasBase(e.Unit.BASE_UNITS.ANGLE))throw new TypeError("Unit in function tanh is no angle");return s(t.value)},"Array | Matrix":function(e){return a(e,s,!0)}});return s.toTex="\\tanh\\left(${args[0]}\\right)",s}function i(e){var t=Math.exp(2*e);return(t-1)/(t+1)}var a=r(19),o=r(481);t.name="tanh",t.factory=n},function(e,t,r){e.exports=[r(490)]},function(e,t,r){"use strict";function n(e,t,n,i){var a=r(29),o=n(r(50)),s=n(r(55)),u=n(r(56)),c=i("to",{"Unit, Unit | string":function(e,t){return e.to(t)},"Matrix, Matrix":function(e,t){return s(e,t,c)},"Array, Array":function(e,t){return c(o(e),o(t)).valueOf()},"Array, Matrix":function(e,t){return c(o(e),t)},"Matrix, Array":function(e,t){return c(e,o(t))},"Matrix, any":function(e,t){return u(e,t,c,!1)},"any, Matrix":function(e,t){return u(t,e,c,!0)},"Array, any":function(e,t){return u(o(e),t,c,!1).valueOf()},"any, Array":function(e,t){return u(o(t),e,c,!0).valueOf()}});return c.toTex="\\left(${args[0]}"+a.operators.to+"${args[1]}\\right)",c}t.name="to",t.factory=n},function(e,t,r){e.exports=[r(492),r(297),r(88),r(400),r(350),r(87),r(363),r(414),r(302),r(442),r(493),r(494),r(89),r(299)]},function(e,t,r){"use strict";function n(e,t,r,n){var 
a=n("clone",{any:i.clone});return a.toTex="\\mathrm{${name}}\\left(${args}\\right)",a}var i=r(3);t.name="clone",t.factory=n},function(e,t,r){"use strict";function n(e,t,r,n){var a=n("print",{"string, Object":i,"string, Object, number":i});return a.toTex="\\mathrm{${name}}\\left(${args}\\right)",a}function i(e,t,r){return e.replace(/\$([\w\.]+)/g,function(e,n){for(var i=n.split("."),s=t[i.shift()];i.length&&void 0!==s;){var u=i.shift();s=u?s[u]:s+"."}return void 0!==s?a(s)?s:o(s,r):e})}var a=r(23).isString,o=r(23).format;t.name="print",t.factory=n},function(e,t,r){"use strict";function n(e,t,n,a){function o(e){if("asc"===e)return f;if("desc"===e)return l;throw new Error('String "asc" or "desc" expected')}function s(e){if(1!==i(e).length)throw new Error("One dimensional array expected")}function u(e){if(1!==e.size().length)throw new Error("One dimensional matrix expected")}var c=n(r(50)),f=n(r(436)),l=function(e,t){return-f(e,t)},p=a("sort",{Array:function(e){return s(e),e.sort(f)},Matrix:function(e){return u(e),c(e.toArray().sort(f),e.storage())},"Array, function":function(e,t){return s(e),e.sort(t)},"Matrix, function":function(e,t){return u(e),c(e.toArray().sort(t),e.storage())},"Array, string":function(e,t){return s(e),e.sort(o(t))},"Matrix, string":function(e,t){return u(e),c(e.toArray().sort(o(t)),e.storage())}});return p.toTex="\\mathrm{${name}}\\left(${args}\\right)",p}var i=r(39).size;t.name="sort",t.factory=n},function(e,t,r){e.exports=[r(496)]},function(e,t){"use strict";function r(e,t,r,n){return function(t,r){var n=e[r&&r.mathjs];return n&&"function"==typeof n.fromJSON?n.fromJSON(r):r}}t.name="reviver",t.path="json",t.factory=r},function(e,t,r){"use strict";var n=r(11),i=r(41),a=r(42);e.exports=[{name:"ArgumentsError",path:"error",factory:function(){return n}},{name:"DimensionError",path:"error",factory:function(){return i}},{name:"IndexError",path:"error",factory:function(){return a}}]}])}); -//# sourceMappingURL=math.map \ No newline at end of file diff --git a/spaces/Marshalls/testmtd/analysis/pymo/writers.py b/spaces/Marshalls/testmtd/analysis/pymo/writers.py deleted file mode 100644 index b34a6eae2963b19043a01685fe18759a0f7e0375..0000000000000000000000000000000000000000 --- a/spaces/Marshalls/testmtd/analysis/pymo/writers.py +++ /dev/null @@ -1,76 +0,0 @@ -import numpy as np -import pandas as pd - -class BVHWriter(): - def __init__(self): - pass - - def write(self, X, ofile, framerate=-1, start=0, stop=-1): - - # Writing the skeleton info - ofile.write('HIERARCHY\n') - - self.motions_ = [] - self._printJoint(X, X.root_name, 0, ofile) - - if stop > 0: - nframes = stop-start - else: - nframes = X.values.shape[0] - stop = X.values.shape[0] - - # Writing the motion header - ofile.write('MOTION\n') - ofile.write('Frames: %d\n'%nframes) - - if framerate > 0: - ofile.write('Frame Time: %f\n'%float(1.0/framerate)) - else: - ofile.write('Frame Time: %f\n'%X.framerate) - - # Writing the data - self.motions_ = np.asarray(self.motions_).T - lines = [" ".join(item) for item in self.motions_[start:stop].astype(str)] - ofile.write("".join("%s\n"%l for l in lines)) - - def _printJoint(self, X, joint, tab, ofile): - - if X.skeleton[joint]['parent'] == None: - ofile.write('ROOT %s\n'%joint) - elif len(X.skeleton[joint]['children']) > 0: - ofile.write('%sJOINT %s\n'%('\t'*(tab), joint)) - else: - ofile.write('%sEnd site\n'%('\t'*(tab))) - - ofile.write('%s{\n'%('\t'*(tab))) - - ofile.write('%sOFFSET %3.5f %3.5f %3.5f\n'%('\t'*(tab+1), - X.skeleton[joint]['offsets'][0], - 
X.skeleton[joint]['offsets'][1], - X.skeleton[joint]['offsets'][2])) - rot_order = X.skeleton[joint]['order'] - - #print("rot_order = " + rot_order) - channels = X.skeleton[joint]['channels'] - rot = [c for c in channels if ('rotation' in c)] - pos = [c for c in channels if ('position' in c)] - - n_channels = len(rot) +len(pos) - ch_str = '' - if n_channels > 0: - for ci in range(len(pos)): - cn = pos[ci] - self.motions_.append(np.asarray(X.values['%s_%s'%(joint,cn)].values)) - ch_str = ch_str + ' ' + cn - for ci in range(len(rot)): - cn = '%srotation'%(rot_order[ci]) - self.motions_.append(np.asarray(X.values['%s_%s'%(joint,cn)].values)) - ch_str = ch_str + ' ' + cn - if len(X.skeleton[joint]['children']) > 0: - #ch_str = ''.join(' %s'*n_channels%tuple(channels)) - ofile.write('%sCHANNELS %d%s\n' %('\t'*(tab+1), n_channels, ch_str)) - - for c in X.skeleton[joint]['children']: - self._printJoint(X, c, tab+1, ofile) - - ofile.write('%s}\n'%('\t'*(tab))) diff --git a/spaces/MichaelWelsch/FreeVC/speaker_encoder/data_objects/random_cycler.py b/spaces/MichaelWelsch/FreeVC/speaker_encoder/data_objects/random_cycler.py deleted file mode 100644 index c405db6b27f46d874d8feb37e3f9c1e12c251109..0000000000000000000000000000000000000000 --- a/spaces/MichaelWelsch/FreeVC/speaker_encoder/data_objects/random_cycler.py +++ /dev/null @@ -1,37 +0,0 @@ -import random - -class RandomCycler: - """ - Creates an internal copy of a sequence and allows access to its items in a constrained random - order. For a source sequence of n items and one or several consecutive queries of a total - of m items, the following guarantees hold (one implies the other): - - Each item will be returned between m // n and ((m - 1) // n) + 1 times. - - Between two appearances of the same item, there may be at most 2 * (n - 1) other items. - """ - - def __init__(self, source): - if len(source) == 0: - raise Exception("Can't create RandomCycler from an empty collection") - self.all_items = list(source) - self.next_items = [] - - def sample(self, count: int): - shuffle = lambda l: random.sample(l, len(l)) - - out = [] - while count > 0: - if count >= len(self.all_items): - out.extend(shuffle(list(self.all_items))) - count -= len(self.all_items) - continue - n = min(count, len(self.next_items)) - out.extend(self.next_items[:n]) - count -= n - self.next_items = self.next_items[n:] - if len(self.next_items) == 0: - self.next_items = shuffle(list(self.all_items)) - return out - - def __next__(self): - return self.sample(1)[0] - diff --git a/spaces/MirageML/sjc/highres_final_vis.py b/spaces/MirageML/sjc/highres_final_vis.py deleted file mode 100644 index 22e7c7873017a0c5162679b8549da049b5da718f..0000000000000000000000000000000000000000 --- a/spaces/MirageML/sjc/highres_final_vis.py +++ /dev/null @@ -1,124 +0,0 @@ -import numpy as np -import torch -from einops import rearrange - -from voxnerf.render import subpixel_rays_from_img - -from run_sjc import ( - SJC, ScoreAdapter, StableDiffusion, - tqdm, EventStorage, HeartBeat, EarlyLoopBreak, get_event_storage, get_heartbeat, optional_load_config, read_stats, - vis_routine, stitch_vis, latest_ckpt, - scene_box_filter, render_ray_bundle, as_torch_tsrs, - device_glb -) - - -# the SD deocder is very memory hungry; the latent image cannot be too large -# for a graphics card with < 12 GB memory, set this to 128; quality already good -# if your card has 12 to 24 GB memory, you can set this to 200; -# but visually it won't help beyond a certain point. Our teaser is done with 128. 
-decoder_bottleneck_hw = 128 - - -def final_vis(): - cfg = optional_load_config(fname="full_config.yml") - assert len(cfg) > 0, "can't find cfg file" - mod = SJC(**cfg) - - family = cfg.pop("family") - model: ScoreAdapter = getattr(mod, family).make() - vox = mod.vox.make() - poser = mod.pose.make() - - pbar = tqdm(range(1)) - - with EventStorage(), HeartBeat(pbar): - ckpt_fname = latest_ckpt() - state = torch.load(ckpt_fname, map_location="cpu") - vox.load_state_dict(state) - vox.to(device_glb) - - with EventStorage("highres"): - # what dominates the speed is NOT the factor here. - # you can try from 2 to 8, and the speed is about the same. - # the dominating factor in the pipeline I believe is the SD decoder. - evaluate(model, vox, poser, n_frames=200, factor=4) - - -@torch.no_grad() -def evaluate(score_model, vox, poser, n_frames=200, factor=4): - H, W = poser.H, poser.W - vox.eval() - K, poses = poser.sample_test(n_frames) - del n_frames - poses = poses[60:] # skip the full overhead view; not interesting - - fuse = EarlyLoopBreak(5) - metric = get_event_storage() - hbeat = get_heartbeat() - - aabb = vox.aabb.T.cpu().numpy() - vox = vox.to(device_glb) - - num_imgs = len(poses) - - for i in (pbar := tqdm(range(num_imgs))): - if fuse.on_break(): - break - - pose = poses[i] - y, depth = highres_render_one_view(vox, aabb, H, W, K, pose, f=factor) - if isinstance(score_model, StableDiffusion): - y = score_model.decode(y) - vis_routine(metric, y, depth) - - metric.step() - hbeat.beat() - - metric.flush_history() - - metric.put_artifact( - "movie_im_and_depth", ".mp4", - lambda fn: stitch_vis(fn, read_stats(metric.output_dir, "view")[1]) - ) - - metric.put_artifact( - "movie_im_only", ".mp4", - lambda fn: stitch_vis(fn, read_stats(metric.output_dir, "img")[1]) - ) - - metric.step() - - -def highres_render_one_view(vox, aabb, H, W, K, pose, f=4): - bs = 4096 - - ro, rd = subpixel_rays_from_img(H, W, K, pose, f=f) - ro, rd, t_min, t_max = scene_box_filter(ro, rd, aabb) - n = len(ro) - ro, rd, t_min, t_max = as_torch_tsrs(vox.device, ro, rd, t_min, t_max) - - rgbs = torch.zeros(n, 4, device=vox.device) - depth = torch.zeros(n, 1, device=vox.device) - - with torch.no_grad(): - for i in range(int(np.ceil(n / bs))): - s = i * bs - e = min(n, s + bs) - _rgbs, _depth, _ = render_ray_bundle( - vox, ro[s:e], rd[s:e], t_min[s:e], t_max[s:e] - ) - rgbs[s:e] = _rgbs - depth[s:e] = _depth - - rgbs = rearrange(rgbs, "(h w) c -> 1 c h w", h=H*f, w=W*f) - depth = rearrange(depth, "(h w) 1 -> h w", h=H*f, w=W*f) - rgbs = torch.nn.functional.interpolate( - rgbs, (decoder_bottleneck_hw, decoder_bottleneck_hw), - mode='bilinear', antialias=True - ) - return rgbs, depth - - -if __name__ == "__main__": - final_vis() diff --git a/spaces/Mountchicken/MAERec-Gradio/mmocr/models/textrecog/plugins/__init__.py b/spaces/Mountchicken/MAERec-Gradio/mmocr/models/textrecog/plugins/__init__.py deleted file mode 100644 index 053a33e2d647128fc7dcc60e85aea0b560103984..0000000000000000000000000000000000000000 --- a/spaces/Mountchicken/MAERec-Gradio/mmocr/models/textrecog/plugins/__init__.py +++ /dev/null @@ -1,4 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-from .common import GCAModule, Maxpool2d - -__all__ = ['Maxpool2d', 'GCAModule'] diff --git a/spaces/MrVicente/RA-BART/custom_bart/decoder.py b/spaces/MrVicente/RA-BART/custom_bart/decoder.py deleted file mode 100644 index 6e19c437a508106a3da02901c0a096ae18f8f1cd..0000000000000000000000000000000000000000 --- a/spaces/MrVicente/RA-BART/custom_bart/decoder.py +++ /dev/null @@ -1,312 +0,0 @@ -############################# -# Imports -############################# - -# Python modules -from typing import ( - Optional, - Tuple, - Union, - List, -) -import math -import random - -# Remote modules -import torch -from torch import nn -from transformers import ( - BartConfig, - BartPretrainedModel, -) -from transformers.modeling_outputs import ( - BaseModelOutput, - BaseModelOutputWithPastAndCrossAttentions -) -from transformers.models.bart.modeling_bart import ( - BartLearnedPositionalEmbedding, - _expand_mask, - _make_causal_mask -) -from transformers.utils import ( - logging, -) - -# Local modules -from .config import BartCustomConfig -from .decoder_layer import BartCustomDecoderLayer - -logger = logging.get_logger(__name__) - -class BartCustomDecoder(BartPretrainedModel): - """ - Transformer decoder consisting of *config.decoder_layers* layers. Each layer is a [`BartDecoderLayer`] - - Args: - config: BartConfig - embed_tokens (nn.Embedding): output embedding - """ - - def __init__(self, config: BartCustomConfig, embed_tokens: Optional[nn.Embedding] = None): - super().__init__(config) - self.dropout = config.dropout - self.layerdrop = config.decoder_layerdrop - self.padding_idx = config.pad_token_id - self.max_target_positions = config.max_position_embeddings - self.embed_scale = math.sqrt(config.d_model) if config.scale_embedding else 1.0 - - if embed_tokens is not None: - self.embed_tokens = embed_tokens - else: - self.embed_tokens = nn.Embedding(config.vocab_size, config.d_model, self.padding_idx) - - self.embed_positions = BartLearnedPositionalEmbedding( - config.max_position_embeddings, - config.d_model, - ) - self.layers = nn.ModuleList([BartCustomDecoderLayer(config) for _ in range(config.decoder_layers)]) - self.layernorm_embedding = nn.LayerNorm(config.d_model) - - self.gradient_checkpointing = False - # Initialize weights and apply final processing - self.post_init() - - def get_input_embeddings(self): - return self.embed_tokens - - def set_input_embeddings(self, value): - self.embed_tokens = value - - def _prepare_decoder_attention_mask(self, attention_mask, input_shape, inputs_embeds, past_key_values_length): - # create causal mask - # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len] - combined_attention_mask = None - if input_shape[-1] > 1: - combined_attention_mask = _make_causal_mask( - input_shape, inputs_embeds.dtype, past_key_values_length=past_key_values_length - ).to(self.device) - - if attention_mask is not None: - # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len] - expanded_attn_mask = _expand_mask(attention_mask, inputs_embeds.dtype, tgt_len=input_shape[-1]) - combined_attention_mask = ( - expanded_attn_mask if combined_attention_mask is None else expanded_attn_mask + combined_attention_mask - ) - - return combined_attention_mask - - def forward( - self, - input_ids: torch.LongTensor = None, - attention_mask: Optional[torch.Tensor] = None, - encoder_hidden_states: Optional[torch.FloatTensor] = None, - encoder_attention_mask: Optional[torch.LongTensor] = None, - head_mask: Optional[torch.Tensor] = None, - cross_attn_head_mask: Optional[torch.Tensor] = None, - 
past_key_values: Optional[List[torch.FloatTensor]] = None, - inputs_embeds: Optional[torch.FloatTensor] = None, - use_cache: Optional[bool] = None, - output_attentions: Optional[bool] = None, - output_hidden_states: Optional[bool] = None, - return_dict: Optional[bool] = None, - ) -> Union[Tuple, BaseModelOutputWithPastAndCrossAttentions]: - r""" - Args: - input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`): - Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you - provide it. - - Indices can be obtained using [`BartTokenizer`]. See [`PreTrainedTokenizer.encode`] and - [`PreTrainedTokenizer.__call__`] for details. - - [What are input IDs?](../glossary#input-ids) - attention_mask (`torch.Tensor` of shape `(batch_size, sequence_length)`, *optional*): - Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`: - - - 1 for tokens that are **not masked**, - - 0 for tokens that are **masked**. - - [What are attention masks?](../glossary#attention-mask) - encoder_hidden_states (`torch.FloatTensor` of shape `(batch_size, encoder_sequence_length, hidden_size)`, *optional*): - Sequence of hidden-states at the output of the last layer of the encoder. Used in the cross-attention - of the decoder. - encoder_attention_mask (`torch.LongTensor` of shape `(batch_size, encoder_sequence_length)`, *optional*): - Mask to avoid performing cross-attention on padding tokens indices of encoder input_ids. Mask values - selected in `[0, 1]`: - - - 1 for tokens that are **not masked**, - - 0 for tokens that are **masked**. - - [What are attention masks?](../glossary#attention-mask) - head_mask (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*): - Mask to nullify selected heads of the attention modules. Mask values selected in `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. - - cross_attn_head_mask (`torch.Tensor` of shape `(decoder_layers, decoder_attention_heads)`, *optional*): - Mask to nullify selected heads of the cross-attention modules in the decoder to avoid performing - cross-attention on hidden heads. Mask values selected in `[0, 1]`: - - - 1 indicates the head is **not masked**, - - 0 indicates the head is **masked**. - - past_key_values (`tuple(tuple(torch.FloatTensor))`, *optional*, returned when `use_cache=True` is passed or when `config.use_cache=True`): - Tuple of `tuple(torch.FloatTensor)` of length `config.n_layers`, with each tuple having 2 tensors of - shape `(batch_size, num_heads, sequence_length, embed_size_per_head)`) and 2 additional tensors of - shape `(batch_size, num_heads, encoder_sequence_length, embed_size_per_head)`. - - Contains pre-computed hidden-states (key and values in the self-attention blocks and in the - cross-attention blocks) that can be used (see `past_key_values` input) to speed up sequential decoding. - - If `past_key_values` are used, the user can optionally input only the last `decoder_input_ids` (those - that don't have their past key value states given to this model) of shape `(batch_size, 1)` instead of - all `decoder_input_ids` of shape `(batch_size, sequence_length)`. inputs_embeds (`torch.FloatTensor` of - shape `(batch_size, sequence_length, hidden_size)`, *optional*): Optionally, instead of passing - `input_ids` you can choose to directly pass an embedded representation. 
This is useful if you want more - control over how to convert `input_ids` indices into associated vectors than the model's internal - embedding lookup matrix. - output_attentions (`bool`, *optional*): - Whether or not to return the attentions tensors of all attention layers. See `attentions` under - returned tensors for more detail. - output_hidden_states (`bool`, *optional*): - Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors - for more detail. - return_dict (`bool`, *optional*): - Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple. - """ - output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions - output_hidden_states = ( - output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states - ) - use_cache = use_cache if use_cache is not None else self.config.use_cache - return_dict = return_dict if return_dict is not None else self.config.use_return_dict - - # retrieve input_ids and inputs_embeds - if input_ids is not None and inputs_embeds is not None: - raise ValueError("You cannot specify both decoder_input_ids and decoder_inputs_embeds at the same time") - elif input_ids is not None: - input_shape = input_ids.size() - input_ids = input_ids.view(-1, input_shape[-1]) - elif inputs_embeds is not None: - input_shape = inputs_embeds.size()[:-1] - else: - raise ValueError("You have to specify either decoder_input_ids or decoder_inputs_embeds") - - # past_key_values_length - past_key_values_length = past_key_values[0][0].shape[2] if past_key_values is not None else 0 - - if inputs_embeds is None: - inputs_embeds = self.embed_tokens(input_ids) * self.embed_scale - - attention_mask = self._prepare_decoder_attention_mask( - attention_mask, input_shape, inputs_embeds, past_key_values_length - ) - - # expand encoder attention mask - if encoder_hidden_states is not None and encoder_attention_mask is not None: - # [bsz, seq_len] -> [bsz, 1, tgt_seq_len, src_seq_len] - encoder_attention_mask = _expand_mask(encoder_attention_mask, inputs_embeds.dtype, tgt_len=input_shape[-1]) - - # embed positions - positions = self.embed_positions(input_shape, past_key_values_length) - - hidden_states = inputs_embeds + positions - hidden_states = self.layernorm_embedding(hidden_states) - - hidden_states = nn.functional.dropout(hidden_states, p=self.dropout, training=self.training) - - # decoder layers - all_hidden_states = () if output_hidden_states else None - all_self_attns = () if output_attentions else None - all_cross_attentions = () if (output_attentions and encoder_hidden_states is not None) else None - next_decoder_cache = () if use_cache else None - - # check if head_mask/cross_attn_head_mask has a correct number of layers specified if desired - for attn_mask, mask_name in zip([head_mask, cross_attn_head_mask], ["head_mask", "cross_attn_head_mask"]): - if attn_mask is not None: - if attn_mask.size()[0] != (len(self.layers)): - raise ValueError( - f"The `{mask_name}` should be specified for {len(self.layers)} layers, but it is for {head_mask.size()[0]}." 
- ) - - for idx, decoder_layer in enumerate(self.layers): - # add LayerDrop (see https://arxiv.org/abs/1909.11556 for description) - if output_hidden_states: - all_hidden_states += (hidden_states,) - dropout_probability = random.uniform(0, 1) - if self.training and (dropout_probability < self.layerdrop): - continue - - past_key_value = past_key_values[idx] if past_key_values is not None else None - - if self.gradient_checkpointing and self.training: - - if use_cache: - logger.warning( - "`use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`..." - ) - use_cache = False - - def create_custom_forward(module): - def custom_forward(*inputs): - # None for past_key_value - return module(*inputs, output_attentions, use_cache) - - return custom_forward - - layer_outputs = torch.utils.checkpoint.checkpoint( - create_custom_forward(decoder_layer), - hidden_states, - attention_mask, - encoder_hidden_states, - encoder_attention_mask, - head_mask[idx] if head_mask is not None else None, - cross_attn_head_mask[idx] if cross_attn_head_mask is not None else None, - None, - ) - else: - - layer_outputs = decoder_layer( - hidden_states, - attention_mask=attention_mask, - encoder_hidden_states=encoder_hidden_states, - encoder_attention_mask=encoder_attention_mask, - layer_head_mask=(head_mask[idx] if head_mask is not None else None), - cross_attn_layer_head_mask=( - cross_attn_head_mask[idx] if cross_attn_head_mask is not None else None - ), - past_key_value=past_key_value, - output_attentions=output_attentions, - use_cache=use_cache, - ) - hidden_states = layer_outputs[0] - - if use_cache: - next_decoder_cache += (layer_outputs[3 if output_attentions else 1],) - - if output_attentions: - all_self_attns += (layer_outputs[1],) - - if encoder_hidden_states is not None: - all_cross_attentions += (layer_outputs[2],) - - # add hidden states from the last decoder layer - if output_hidden_states: - all_hidden_states += (hidden_states,) - - next_cache = next_decoder_cache if use_cache else None - if not return_dict: - return tuple( - v - for v in [hidden_states, next_cache, all_hidden_states, all_self_attns, all_cross_attentions] - if v is not None - ) - return BaseModelOutputWithPastAndCrossAttentions( - last_hidden_state=hidden_states, - past_key_values=next_cache, - hidden_states=all_hidden_states, - attentions=all_self_attns, - cross_attentions=all_cross_attentions, - ) - diff --git a/spaces/MuGeminorum/insecta/insectid/detector.py b/spaces/MuGeminorum/insecta/insectid/detector.py deleted file mode 100644 index 20fddffa20b2d04ddef7e8d34ca61f412481c87c..0000000000000000000000000000000000000000 --- a/spaces/MuGeminorum/insecta/insectid/detector.py +++ /dev/null @@ -1,56 +0,0 @@ -import os - -import khandy -import numpy as np - -from .base import OnnxModel -from .base import check_image_dtype_and_shape - - -class InsectDetector(OnnxModel): - def __init__(self): - current_dir = os.path.dirname(os.path.abspath(__file__)) - model_path = os.path.join(current_dir, 'models/quarrying_insect_detector.onnx') - self.input_width = 640 - self.input_height = 640 - super(InsectDetector, self).__init__(model_path) - - def _preprocess(self, image): - check_image_dtype_and_shape(image) - - # image size normalization - image, scale, pad_left, pad_top = khandy.letterbox_image( - image, self.input_width, self.input_height, 0, return_scale=True) - # image channel normalization - image = khandy.normalize_image_channel(image, swap_rb=True) - # image dtype normalization - image = khandy.rescale_image(image, 
'auto', np.float32) - # to tensor - image = np.transpose(image, (2,0,1)) - image = np.expand_dims(image, axis=0) - return image, scale, pad_left, pad_top - - def _post_process(self, outputs_list, scale, pad_left, pad_top, conf_thresh, iou_thresh): - pred = outputs_list[0][0] - pass_t = pred[:, 4] > conf_thresh - pred = pred[pass_t] - - boxes = khandy.convert_boxes_format(pred[:, :4], 'cxcywh', 'xyxy') - boxes = khandy.unletterbox_2d_points(boxes, scale, pad_left, pad_top, False) - confs = np.max(pred[:, 5:] * pred[:, 4:5], axis=-1) - classes = np.argmax(pred[:, 5:] * pred[:, 4:5], axis=-1) - keep = khandy.non_max_suppression(boxes, confs, iou_thresh) - return boxes[keep], confs[keep], classes[keep] - - def detect(self, image, conf_thresh=0.5, iou_thresh=0.5): - image, scale, pad_left, pad_top = self._preprocess(image) - outputs_list = self.forward(image) - boxes, confs, classes = self._post_process( - outputs_list, - scale=scale, - pad_left=pad_left, - pad_top=pad_top, - conf_thresh=conf_thresh, - iou_thresh=iou_thresh) - return boxes, confs, classes - \ No newline at end of file diff --git a/spaces/MyGenAiUser/MyGenAiChat/app.py b/spaces/MyGenAiUser/MyGenAiChat/app.py deleted file mode 100644 index a362dcc7d0ddd1eee86961f1bc3db6d894fbd3d5..0000000000000000000000000000000000000000 --- a/spaces/MyGenAiUser/MyGenAiChat/app.py +++ /dev/null @@ -1,34 +0,0 @@ -import os -import gradio as gr -from langchain.chat_models import ChatOpenAI -from langchain import LLMChain, PromptTemplate -from langchain.memory import ConversationBufferMemory - -OPENAI_API_KEY=os.getenv('OPENAI_API_KEY') - -template = """You are a helpful assistant to answer all user queries. -{chat_history} -User: {user_message} -Chatbot:""" - -prompt = PromptTemplate( - input_variables=["chat_history", "user_message"], template=template -) - -memory = ConversationBufferMemory(memory_key="chat_history") - -llm_chain = LLMChain( - llm=ChatOpenAI(temperature='0.5', model_name="gpt-3.5-turbo"), - prompt=prompt, - verbose=True, - memory=memory, -) - -def get_text_response(user_message,history): - response = llm_chain.predict(user_message = user_message) - return response - -demo = gr.ChatInterface(get_text_response) - -if __name__ == "__main__": - demo.launch() #To create a public link, set `share=True` in `launch()`. To enable errors and logs, set `debug=True` in `launch()`. diff --git a/spaces/NATSpeech/PortaSpeech/modules/commons/conformer/espnet_positional_embedding.py b/spaces/NATSpeech/PortaSpeech/modules/commons/conformer/espnet_positional_embedding.py deleted file mode 100644 index 89b9b5549cc779d1ea67f052b1c99cad92365503..0000000000000000000000000000000000000000 --- a/spaces/NATSpeech/PortaSpeech/modules/commons/conformer/espnet_positional_embedding.py +++ /dev/null @@ -1,113 +0,0 @@ -import math -import torch - - -class PositionalEncoding(torch.nn.Module): - """Positional encoding. - Args: - d_model (int): Embedding dimension. - dropout_rate (float): Dropout rate. - max_len (int): Maximum input length. - reverse (bool): Whether to reverse the input position. 
- """ - - def __init__(self, d_model, dropout_rate, max_len=5000, reverse=False): - """Construct an PositionalEncoding object.""" - super(PositionalEncoding, self).__init__() - self.d_model = d_model - self.reverse = reverse - self.xscale = math.sqrt(self.d_model) - self.dropout = torch.nn.Dropout(p=dropout_rate) - self.pe = None - self.extend_pe(torch.tensor(0.0).expand(1, max_len)) - - def extend_pe(self, x): - """Reset the positional encodings.""" - if self.pe is not None: - if self.pe.size(1) >= x.size(1): - if self.pe.dtype != x.dtype or self.pe.device != x.device: - self.pe = self.pe.to(dtype=x.dtype, device=x.device) - return - pe = torch.zeros(x.size(1), self.d_model) - if self.reverse: - position = torch.arange( - x.size(1) - 1, -1, -1.0, dtype=torch.float32 - ).unsqueeze(1) - else: - position = torch.arange(0, x.size(1), dtype=torch.float32).unsqueeze(1) - div_term = torch.exp( - torch.arange(0, self.d_model, 2, dtype=torch.float32) - * -(math.log(10000.0) / self.d_model) - ) - pe[:, 0::2] = torch.sin(position * div_term) - pe[:, 1::2] = torch.cos(position * div_term) - pe = pe.unsqueeze(0) - self.pe = pe.to(device=x.device, dtype=x.dtype) - - def forward(self, x: torch.Tensor): - """Add positional encoding. - Args: - x (torch.Tensor): Input tensor (batch, time, `*`). - Returns: - torch.Tensor: Encoded tensor (batch, time, `*`). - """ - self.extend_pe(x) - x = x * self.xscale + self.pe[:, : x.size(1)] - return self.dropout(x) - - -class ScaledPositionalEncoding(PositionalEncoding): - """Scaled positional encoding module. - See Sec. 3.2 https://arxiv.org/abs/1809.08895 - Args: - d_model (int): Embedding dimension. - dropout_rate (float): Dropout rate. - max_len (int): Maximum input length. - """ - - def __init__(self, d_model, dropout_rate, max_len=5000): - """Initialize class.""" - super().__init__(d_model=d_model, dropout_rate=dropout_rate, max_len=max_len) - self.alpha = torch.nn.Parameter(torch.tensor(1.0)) - - def reset_parameters(self): - """Reset parameters.""" - self.alpha.data = torch.tensor(1.0) - - def forward(self, x): - """Add positional encoding. - Args: - x (torch.Tensor): Input tensor (batch, time, `*`). - Returns: - torch.Tensor: Encoded tensor (batch, time, `*`). - """ - self.extend_pe(x) - x = x + self.alpha * self.pe[:, : x.size(1)] - return self.dropout(x) - - -class RelPositionalEncoding(PositionalEncoding): - """Relative positional encoding module. - See : Appendix B in https://arxiv.org/abs/1901.02860 - Args: - d_model (int): Embedding dimension. - dropout_rate (float): Dropout rate. - max_len (int): Maximum input length. - """ - - def __init__(self, d_model, dropout_rate, max_len=5000): - """Initialize class.""" - super().__init__(d_model, dropout_rate, max_len, reverse=True) - - def forward(self, x): - """Compute positional encoding. - Args: - x (torch.Tensor): Input tensor (batch, time, `*`). - Returns: - torch.Tensor: Encoded tensor (batch, time, `*`). - torch.Tensor: Positional embedding tensor (1, time, `*`). 
- """ - self.extend_pe(x) - x = x * self.xscale - pos_emb = self.pe[:, : x.size(1)] - return self.dropout(x), self.dropout(pos_emb) \ No newline at end of file diff --git a/spaces/NCTCMumbai/NCTC/models/research/brain_coder/single_task/code_tasks_test.py b/spaces/NCTCMumbai/NCTC/models/research/brain_coder/single_task/code_tasks_test.py deleted file mode 100644 index d3260a1a56ec0f7c36363d558122f7f7e49198e6..0000000000000000000000000000000000000000 --- a/spaces/NCTCMumbai/NCTC/models/research/brain_coder/single_task/code_tasks_test.py +++ /dev/null @@ -1,108 +0,0 @@ -from __future__ import absolute_import -from __future__ import division -from __future__ import print_function - -"""Tests for code_tasks.""" - -import numpy as np -import tensorflow as tf - -from single_task import code_tasks # brain coder -from single_task import defaults # brain coder - - -def pad(string, pad_length, pad_char): - return string + pad_char * (pad_length - len(string)) - - -class CodeTasksTest(tf.test.TestCase): - - def assertClose(self, a, b): - self.assertTrue( - np.isclose(a, b, atol=1e-4), - 'Expecting approximately equal values. Got: %s, %s' % (a, b)) - - def testMultiIOTaskManager(self): - maxlen = 100 - padchr = '[' - task = code_tasks.make_paper_task( - 'print', timestep_limit=maxlen, do_code_simplification=False) - reward_fns = task.rl_batch(1) - r = reward_fns[0] - self.assertClose( - r(pad('++++++++.---.+++++++...', maxlen, padchr)).episode_rewards[-1], - 0.2444) - self.assertClose( - r(pad('++++++++.---.+++++++..+++.', - maxlen, padchr)).episode_rewards[-1], - 1.0) - - task = code_tasks.make_paper_task( - 'print', timestep_limit=maxlen, do_code_simplification=True) - reward_fns = task.rl_batch(1) - r = reward_fns[0] - self.assertClose( - r('++++++++.---.+++++++...').episode_rewards[-1], - 0.2444) - self.assertClose( - r('++++++++.---.+++++++..+++.').episode_rewards[-1], - 0.935) - self.assertClose( - r(pad('++++++++.---.+++++++..+++.', - maxlen, padchr)).episode_rewards[-1], - 0.75) - - task = code_tasks.make_paper_task( - 'reverse', timestep_limit=maxlen, do_code_simplification=False) - reward_fns = task.rl_batch(1) - r = reward_fns[0] - self.assertClose( - r(pad('>,>,>,.<.<.<.', maxlen, padchr)).episode_rewards[-1], - 0.1345) - self.assertClose( - r(pad(',[>,]+[,<.]', maxlen, padchr)).episode_rewards[-1], - 1.0) - - task = code_tasks.make_paper_task( - 'reverse', timestep_limit=maxlen, do_code_simplification=True) - reward_fns = task.rl_batch(1) - r = reward_fns[0] - self.assertClose(r('>,>,>,.<.<.<.').episode_rewards[-1], 0.1324) - self.assertClose(r(',[>,]+[,<.]').episode_rewards[-1], 0.9725) - self.assertClose( - r(pad(',[>,]+[,<.]', maxlen, padchr)).episode_rewards[-1], - 0.75) - - def testMakeTask(self): - maxlen = 100 - padchr = '[' - config = defaults.default_config_with_updates( - 'env=c(config_for_iclr=False,fixed_string=[8,5,12,12,15])') - task = code_tasks.make_task(config.env, 'print', timestep_limit=maxlen) - reward_fns = task.rl_batch(1) - r = reward_fns[0] - self.assertClose( - r('++++++++.---.+++++++...').episode_rewards[-1], - 0.2444) - self.assertClose( - r('++++++++.---.+++++++..+++.').episode_rewards[-1], - 0.935) - self.assertClose( - r(pad('++++++++.---.+++++++..+++.', - maxlen, padchr)).episode_rewards[-1], - 0.75) - - def testKnownCodeBaseTask(self): - maxlen = 100 - padchr = '[' - task = code_tasks.make_paper_task( - 'shift-left', timestep_limit=maxlen, do_code_simplification=False) - reward_fns = task.rl_batch(1) - r = reward_fns[0] - self.assertClose( - 
r(pad(',>,[.,]<.,.', maxlen, padchr)).episode_rewards[-1], - 1.0) - - -if __name__ == '__main__': - tf.test.main() diff --git a/spaces/Nixic/ffmo/README.md b/spaces/Nixic/ffmo/README.md deleted file mode 100644 index 08ab7ee2c402405ced1fc94dbcc9009984248f0d..0000000000000000000000000000000000000000 --- a/spaces/Nixic/ffmo/README.md +++ /dev/null @@ -1,26 +0,0 @@ ---- -license: apache-2.0 -title: FFmo -sdk: gradio -emoji: 💻 -colorFrom: green -colorTo: gray -pinned: true ---- - -# FFMO -FFmpeg-Online Wrapper, Also Fps Booster xD - -If you are beginner, go to my [Huggingface Space](https://huggingface.co/Nixic/FFmo). - -If you are developer/senior, use [![image](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1BWgdzhL118O6fENqYCIIG9WgCfiTkQ65?usp=share_link) instead for more performance, and options. - -# Features -- Smooth Interpolation. -- Frame Blending. -- Advanced FFmpeg Usage. - -# Credits -- [media-converter by Accel](https://huggingface.co/spaces/Accel/media-converter). -- [FFmpeg](https://github.com/FFmpeg/FFmpeg). -- All of My Friends. diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/criterions/label_smoothed_cross_entropy_with_alignment.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/criterions/label_smoothed_cross_entropy_with_alignment.py deleted file mode 100644 index 2ea37c16b4a477c48e4dd4500ec03f2d0c86d611..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/criterions/label_smoothed_cross_entropy_with_alignment.py +++ /dev/null @@ -1,130 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import math - -from fairseq import metrics, utils -from fairseq.criterions import register_criterion - -from .label_smoothed_cross_entropy import ( - LabelSmoothedCrossEntropyCriterion, - LabelSmoothedCrossEntropyCriterionConfig, -) - -from dataclasses import dataclass, field - - -@dataclass -class LabelSmoothedCrossEntropyCriterionWithAlignmentConfig( - LabelSmoothedCrossEntropyCriterionConfig -): - alignment_lambda: float = field( - default=0.05, metadata={"help": "weight for the alignment loss"} - ) - - -@register_criterion( - "label_smoothed_cross_entropy_with_alignment", - dataclass=LabelSmoothedCrossEntropyCriterionWithAlignmentConfig, -) -class LabelSmoothedCrossEntropyCriterionWithAlignment( - LabelSmoothedCrossEntropyCriterion -): - def __init__(self, task, sentence_avg, label_smoothing, alignment_lambda): - super().__init__(task, sentence_avg, label_smoothing) - self.alignment_lambda = alignment_lambda - - def forward(self, model, sample, reduce=True): - """Compute the loss for the given sample. 
- - Returns a tuple with three elements: - 1) the loss - 2) the sample size, which is used as the denominator for the gradient - 3) logging outputs to display while training - """ - net_output = model(**sample["net_input"]) - loss, nll_loss = self.compute_loss(model, net_output, sample, reduce=reduce) - sample_size = ( - sample["target"].size(0) if self.sentence_avg else sample["ntokens"] - ) - logging_output = { - "loss": utils.item(loss.data) if reduce else loss.data, - "nll_loss": utils.item(nll_loss.data) if reduce else nll_loss.data, - "ntokens": sample["ntokens"], - "nsentences": sample["target"].size(0), - "sample_size": sample_size, - } - - alignment_loss = None - - # Compute alignment loss only for training set and non dummy batches. - if "alignments" in sample and sample["alignments"] is not None: - alignment_loss = self.compute_alignment_loss(sample, net_output) - - if alignment_loss is not None: - logging_output["alignment_loss"] = utils.item(alignment_loss.data) - loss += self.alignment_lambda * alignment_loss - - return loss, sample_size, logging_output - - def compute_alignment_loss(self, sample, net_output): - attn_prob = net_output[1]["attn"][0] - bsz, tgt_sz, src_sz = attn_prob.shape - attn = attn_prob.view(bsz * tgt_sz, src_sz) - - align = sample["alignments"] - align_weights = sample["align_weights"].float() - - if len(align) > 0: - # Alignment loss computation. align (shape [:, 2]) contains the src-tgt index pairs corresponding to - # the alignments. align_weights (shape [:]) contains the 1 / frequency of a tgt index for normalizing. - loss = -( - (attn[align[:, 1][:, None], align[:, 0][:, None]]).log() - * align_weights[:, None] - ).sum() - else: - return None - - return loss - - @staticmethod - def reduce_metrics(logging_outputs) -> None: - """Aggregate logging outputs from data parallel training.""" - loss_sum = utils.item(sum(log.get("loss", 0) for log in logging_outputs)) - nll_loss_sum = utils.item( - sum(log.get("nll_loss", 0) for log in logging_outputs) - ) - alignment_loss_sum = utils.item( - sum(log.get("alignment_loss", 0) for log in logging_outputs) - ) - ntokens = utils.item(sum(log.get("ntokens", 0) for log in logging_outputs)) - sample_size = utils.item( - sum(log.get("sample_size", 0) for log in logging_outputs) - ) - - metrics.log_scalar( - "loss", loss_sum / sample_size / math.log(2), sample_size, round=3 - ) - metrics.log_scalar( - "nll_loss", nll_loss_sum / ntokens / math.log(2), ntokens, round=3 - ) - metrics.log_scalar( - "alignment_loss", - alignment_loss_sum / sample_size / math.log(2), - sample_size, - round=3, - ) - metrics.log_derived( - "ppl", lambda meters: utils.get_perplexity(meters["nll_loss"].avg) - ) - - @staticmethod - def logging_outputs_can_be_summed() -> bool: - """ - Whether the logging outputs returned by `forward` can be summed - across workers prior to calling `reduce_metrics`. Setting this - to True will improves distributed training speed. - """ - return True diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/trainer.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/trainer.py deleted file mode 100644 index e46ccfe0b8d3a224586fb16c69168321f60ce30e..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/trainer.py +++ /dev/null @@ -1,1509 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -""" -Train a network across multiple GPUs. -""" - -import contextlib -import logging -import sys -import time -from argparse import Namespace -from itertools import chain -from typing import Any, Dict, List - -import torch -from fairseq import checkpoint_utils, models, optim, utils -from fairseq.dataclass.configs import FairseqConfig -from fairseq.dataclass.utils import convert_namespace_to_omegaconf -from fairseq.distributed import utils as distributed_utils -from fairseq.file_io import PathManager -from fairseq.logging import meters, metrics -from fairseq.models.ema import build_ema -from fairseq.nan_detector import NanDetector -from fairseq.optim import lr_scheduler -from omegaconf import OmegaConf - -logger = logging.getLogger(__name__) - - -class Trainer(object): - """Main class for data parallel training. - - This class supports synchronous distributed data parallel training, - where multiple workers each have a full model replica and gradients - are accumulated across workers before each update. We use - :class:`~torch.nn.parallel.DistributedDataParallel` to handle - communication of the gradients across workers. - """ - - def __init__(self, cfg: FairseqConfig, task, model, criterion, quantizer=None): - - if isinstance(cfg, Namespace): - logger.warning( - "argparse.Namespace configuration is deprecated! Automatically converting to OmegaConf" - ) - cfg = convert_namespace_to_omegaconf(cfg) - - self.cfg = cfg - self.task = task - - # catalog shared parameters - shared_params = _catalog_shared_params(model) - self.tpu = cfg.common.tpu - self.cuda = torch.cuda.is_available() and not cfg.common.cpu and not self.tpu - if self.cuda: - self.device = torch.device("cuda") - elif self.tpu: - self.device = utils.get_tpu_device() - else: - self.device = torch.device("cpu") - - if self.is_fsdp: - import fairscale - if self.cfg.common.bf16: - raise ValueError( - "FullyShardedDataParallel is not compatible with --bf16 or " - "--memory-efficient-bf16" - ) - if self.cfg.distributed_training.zero_sharding != "none": - raise ValueError( - "FullyShardedDataParallel is not compatible with --zero-sharding " - "option (it's already built in)" - ) - if max(self.cfg.optimization.update_freq) > 1 and fairscale.__version__ < "0.4.0": - raise RuntimeError( - "Please update to fairscale 0.4.0 or newer when combining " - "--update-freq with FullyShardedDataParallel" - ) - else: - if ( - hasattr(self.cfg.distributed_training, "cpu_offload") - and self.cfg.distributed_training.cpu_offload - ): - raise ValueError("--cpu-offload requires --ddp-backend=fully_sharded") - - # copy model and criterion to current device/dtype - self._criterion = criterion - self._model = model - if not self.is_fsdp: - if cfg.common.fp16: - assert not cfg.common.amp, "Cannot use fp16 and AMP together" - self._criterion = self._criterion.half() - self._model = self._model.half() - elif cfg.common.bf16: - self._criterion = self._criterion.to(dtype=torch.bfloat16) - self._model = self._model.to(dtype=torch.bfloat16) - elif cfg.common.amp: - self._amp_retries = 0 - if ( - not cfg.distributed_training.pipeline_model_parallel - # the DistributedFairseqModel wrapper will handle moving to device, - # so only handle cases which don't use the wrapper - and not self.use_distributed_wrapper - ): - self._criterion = self._criterion.to(device=self.device) - self._model = self._model.to(device=self.device) - self.pipeline_model_parallel = cfg.distributed_training.pipeline_model_parallel - self.last_device = None - if self.cuda and 
self.pipeline_model_parallel: - self.last_device = torch.device( - cfg.distributed_training.pipeline_devices[-1] - ) - - # check that shared parameters are preserved after device transfer - for shared_param in shared_params: - ref = _get_module_by_path(self._model, shared_param[0]) - for path in shared_param[1:]: - logger.info( - "detected shared parameter: {} <- {}".format(shared_param[0], path) - ) - _set_module_by_path(self._model, path, ref) - - self._dummy_batch = None # indicates we don't have a dummy batch at first - self._lr_scheduler = None - self._num_updates = 0 - self._num_xla_compiles = 0 # for TPUs - self._optim_history = None - self._optimizer = None - self._warn_once = set() - self._wrapped_criterion = None - self._wrapped_model = None - self._ema = None - - # TODO(myleott): support tpu - if self.cuda and self.data_parallel_world_size > 1: - self._grad_norm_buf = torch.cuda.DoubleTensor(self.data_parallel_world_size) - else: - self._grad_norm_buf = None - - self.quantizer = quantizer - if self.quantizer is not None: - self.quantizer.set_trainer(self) - - # get detailed cuda environment - if self.cuda: - self.cuda_env = utils.CudaEnvironment() - if self.data_parallel_world_size > 1: - self.cuda_env_arr = distributed_utils.all_gather_list( - self.cuda_env, group=distributed_utils.get_global_group() - ) - else: - self.cuda_env_arr = [self.cuda_env] - if self.data_parallel_rank == 0: - utils.CudaEnvironment.pretty_print_cuda_env_list(self.cuda_env_arr) - else: - self.cuda_env = None - self.cuda_env_arr = None - - metrics.log_start_time("wall", priority=790, round=0) - - self._start_time = time.time() - self._previous_training_time = 0 - self._cumulative_training_time = None - - def reinitialize(self): - """Reinitialize the Trainer, typically after model params change.""" - self._lr_scheduler = None - self._optimizer = None - self._wrapped_criterion = None - self._wrapped_model = None - - @property - def data_parallel_world_size(self): - if self.cfg.distributed_training.distributed_world_size == 1: - return 1 - return distributed_utils.get_data_parallel_world_size() - - @property - def data_parallel_process_group(self): - return distributed_utils.get_data_parallel_group() - - @property - def data_parallel_rank(self): - if self.cfg.distributed_training.distributed_world_size == 1: - return 0 - return distributed_utils.get_data_parallel_rank() - - @property - def is_data_parallel_master(self): - # NOTE: this returns true for all model parallel replicas with data - # parallel rank 0 - return self.data_parallel_rank == 0 - - @property - def use_distributed_wrapper(self) -> bool: - return ( - self.data_parallel_world_size > 1 and not self.cfg.optimization.use_bmuf - ) or ( - self.is_fsdp and self.cfg.distributed_training.cpu_offload - ) - - @property - def should_save_checkpoint_on_current_rank(self) -> bool: - """Indicates whether to save checkpoints on the current DDP rank.""" - if ( - self.is_fsdp and self.cfg.distributed_training.use_sharded_state - ) or getattr(self.cfg.model, "base_layers", 0) > 0: - return True - else: - return self.is_data_parallel_master - - @property - def always_call_state_dict_during_save_checkpoint(self) -> bool: - if self.is_fsdp and not self.cfg.distributed_training.use_sharded_state: - # FSDP calls communication collective when consolidating checkpoints - return True - else: - return False - - @property - def checkpoint_suffix(self) -> str: - """Suffix to add to the checkpoint file name.""" - if self.is_fsdp and 
self.cfg.distributed_training.use_sharded_state: - return self.cfg.checkpoint.checkpoint_suffix + "-shard{0}".format( - self.data_parallel_rank - ) - else: - return self.cfg.checkpoint.checkpoint_suffix or "" - - @property - def criterion(self): - if self._wrapped_criterion is None: - if utils.has_parameters(self._criterion) and self.use_distributed_wrapper: - self._wrapped_criterion = models.DistributedFairseqModel( - self.cfg.distributed_training, - self._criterion, - process_group=self.data_parallel_process_group, - device=self.device, - ) - else: - self._wrapped_criterion = self._criterion - return self._wrapped_criterion - - @property - def model(self): - if self._wrapped_model is None: - if self.use_distributed_wrapper: - self._wrapped_model = models.DistributedFairseqModel( - self.cfg.distributed_training, - self._model, - process_group=self.data_parallel_process_group, - device=self.device, - ) - else: - self._wrapped_model = self._model - return self._wrapped_model - - @property - def ema(self): - if self._ema is None: - self._build_ema() - return self._ema - - def _build_ema(self): - if self.cfg.ema.store_ema: - self._ema = build_ema(self._model, self.cfg.ema, self.device) - logger.info( - "Exponential Moving Average Shadow Model is initialized." - ) - - @property - def optimizer(self): - if self._optimizer is None: - self._build_optimizer() - return self._optimizer - - @property - def lr_scheduler(self): - if self._lr_scheduler is None: - self._build_optimizer() # this will initialize self._lr_scheduler - return self._lr_scheduler - - def _build_optimizer(self): - params = list( - filter( - lambda p: p.requires_grad, - chain(self.model.parameters(), self.criterion.parameters()), - ) - ) - - if self.is_fsdp and self.cfg.common.fp16: - # FullyShardedDataParallel always uses MemoryEfficientFP16 wrapper, - # mostly for the grad scaling. But if we don't have the - # --memory-efficient-fp16 flag set, then we're effectively doing - # regular --fp16 and can allow the use of optimizers that would - # otherwise be unsupported by MemoryEfficientFP16Optimizer. - allow_unsupported = not self.cfg.common.memory_efficient_fp16 - self._optimizer = optim.MemoryEfficientFP16Optimizer.build_optimizer( - self.cfg, params, allow_unsupported=allow_unsupported - ) - elif self.cfg.common.fp16 or self.cfg.common.bf16 or self.cfg.common.amp: - if self.cuda and torch.cuda.get_device_capability(0)[0] < 7: - logger.info( - "NOTE: your device does NOT support faster training with --fp16 or --amp, " - "please switch to FP32 which is likely to be faster" - ) - if ( - self.cfg.common.memory_efficient_fp16 - or self.cfg.common.memory_efficient_bf16 - ): - self._optimizer = optim.MemoryEfficientFP16Optimizer.build_optimizer( - self.cfg, params - ) - elif self.cfg.common.amp: - self._optimizer = optim.AMPOptimizer.build_optimizer(self.cfg, params) - else: - self._optimizer = optim.FP16Optimizer.build_optimizer(self.cfg, params) - else: - if self.cuda and torch.cuda.get_device_capability(0)[0] >= 7: - logger.info("NOTE: your device may support faster training with --fp16 or --amp") - self._optimizer = optim.build_optimizer(self.cfg.optimizer, params) - - if self.is_fsdp: - assert ( - not self.cfg.optimization.use_bmuf - ), "--ddp-backend=fully_sharded is not compatible with BMUF" - assert self._optimizer.supports_flat_params, ( - "--ddp-backend=fully_sharded is only compatible with pointwise " - "optimizers (e.g., Adam, AdamW, Adadelta, Adamax, SGD, etc.). 
" - "However, the sharding will result in slightly different results when " - "using non-pointwise optimizers (e.g., Adagrad, Adafactor, LAMB)" - ) - - if self.cfg.optimization.use_bmuf: - self._optimizer = optim.FairseqBMUF( - self.cfg.bmuf, - self._optimizer, - ) - - if self.cfg.distributed_training.zero_sharding == "os": - if ( - self.cfg.common.fp16 - and not self.cfg.common.memory_efficient_fp16 - and not self.cfg.common.memory_efficient_bf16 - ) and not self.cfg.common.fp16_no_flatten_grads: - raise ValueError( - "ZeRO is incomptabile with fp16 and flattened grads. " - "Please use --fp16-no-flatten-grads" - ) - else: - optim.shard_(self._optimizer, self.data_parallel_process_group) - - # We should initialize the learning rate scheduler immediately after - # building the optimizer, so that the initial learning rate is set. - self._lr_scheduler = lr_scheduler.build_lr_scheduler( - self.cfg.lr_scheduler, - self.optimizer, - ) - self._lr_scheduler.step_update(0) - - @property - def is_fsdp(self): - return self.cfg.distributed_training.ddp_backend == "fully_sharded" - - def consolidate_optimizer(self): - """For OSS, we need to consolidate the state dict.""" - if self.cfg.checkpoint.no_save_optimizer_state: - return - self._gathered_optim_state = None - if hasattr(self.optimizer.optimizer, "consolidate_state_dict"): - self.optimizer.optimizer.consolidate_state_dict() - elif self.is_fsdp and not self.model.use_sharded_state: - st = self.model.gather_full_optim_state_dict( - self.optimizer - ) # only returns on rank 0 - self._gathered_optim_state = st - - def state_dict(self): - state_dict = { - "args": None, # legacy - "cfg": ( - OmegaConf.to_container(self.cfg, resolve=True, enum_to_str=True) - if OmegaConf.is_config(self.cfg) - else self.cfg - ), - "model": self.model.state_dict(), - "criterion": ( - self.criterion.state_dict() - if utils.has_parameters(self.criterion) - else None - ), - "optimizer_history": (self._optim_history or []) - + [ - { - "criterion_name": self.get_criterion().__class__.__name__, - "optimizer_name": self.optimizer.__class__.__name__, - "lr_scheduler_state": self.lr_scheduler.state_dict(), - "num_updates": self.get_num_updates(), - } - ], - "task_state": self.task.state_dict() if self.task is not None else {}, - "extra_state": { - "metrics": metrics.state_dict(), - "previous_training_time": self.cumulative_training_time(), - }, - } - if self.cfg.ema.store_ema: - # Save EMA model state as extra state - state_dict["extra_state"]["ema"] = self.ema.get_model().state_dict() - if self.cfg.ema.ema_fp32: - # Save EMA params in fp32 - state_dict["extra_state"]["ema_fp32_params"] = self.ema.fp32_params - if not self.cfg.checkpoint.no_save_optimizer_state: - if self._gathered_optim_state is not None: - state_dict["last_optimizer_state"] = self._gathered_optim_state - self._gathered_optim_state = None - else: - state_dict["last_optimizer_state"] = self.optimizer.state_dict() - if self.is_fsdp: - # save meta data for recombining checkpoint upon loading - state_dict["fsdp_metadata"] = self.model.local_metadata_dict() - return state_dict - - def save_checkpoint(self, filename, extra_state): - """Save all training state in a checkpoint file.""" - logger.info(f"Saving checkpoint to {filename}") - # call state_dict on all ranks in case it needs internal communication - state_dict = utils.move_to_cpu(self.state_dict()) - state_dict["extra_state"].update(extra_state) - if self.should_save_checkpoint_on_current_rank: - checkpoint_utils.torch_persistent_save( - state_dict, - filename, 
- async_write=self.cfg.checkpoint.write_checkpoints_asynchronously, - ) - logger.info(f"Finished saving checkpoint to {filename}") - - def load_checkpoint( - self, - filename, - reset_optimizer=False, - reset_lr_scheduler=False, - optimizer_overrides=None, - reset_meters=False, - ): - """ - Load all training state from a checkpoint file. - rank = 0 will load the checkpoint, and then broadcast it to all - other ranks. - """ - extra_state, self._optim_history, last_optim_state = None, [], None - - logger.info(f"Preparing to load checkpoint {filename}") - is_distributed = self.data_parallel_world_size > 1 - bexists = PathManager.isfile(filename) - if bexists: - load_on_all_ranks = ( - self.cfg.checkpoint.load_checkpoint_on_all_dp_ranks - # TPUs don't support broadcast yet, so load checkpoints - # on every worker for now - or self.tpu - # FSDP requires loading checkpoint shards on all ranks - or (self.is_fsdp and self.cfg.distributed_training.use_sharded_state) - or getattr(self.cfg.model, "base_layers", 0) > 0 - ) - - if load_on_all_ranks or self.data_parallel_rank == 0: - state = checkpoint_utils.load_checkpoint_to_cpu( - filename, load_on_all_ranks=load_on_all_ranks - ) - last_optim_state = state.get("last_optimizer_state", None) - - # If doing zero_sharding, do not broadcast global optimizer - # state. Later we will broadcast sharded states to each rank - # to avoid memory from exploding. - if ( - not load_on_all_ranks - and self.cfg.distributed_training.zero_sharding == "os" - and "last_optimizer_state" in state - and is_distributed - ): - state["last_optimizer_state"] = "SHARDED" - else: - last_optim_state = None - state = None - - if is_distributed and not load_on_all_ranks: - state = distributed_utils.broadcast_object( - state, - src_rank=0, - group=self.data_parallel_process_group, - dist_device=self.device, - ) - if self.data_parallel_rank > 0: - last_optim_state = state.get("last_optimizer_state", None) - - # load model parameters - try: - self.model.load_state_dict( - state["model"], strict=True, model_cfg=self.cfg.model - ) - # save memory for later steps - del state["model"] - if utils.has_parameters(self.get_criterion()): - self.get_criterion().load_state_dict( - state["criterion"], strict=True - ) - del state["criterion"] - - except Exception: - raise Exception( - "Cannot load model parameters from checkpoint {}; " - "please ensure that the architectures match.".format(filename) - ) - extra_state = state["extra_state"] - self._optim_history = state["optimizer_history"] - - if last_optim_state is not None and not reset_optimizer: - # rebuild optimizer after loading model, since params may have changed - self._build_optimizer() - - # only reload optimizer and lr_scheduler if they match - last_optim = self._optim_history[-1] - assert ( - last_optim["criterion_name"] == self.get_criterion().__class__.__name__ - ), f"Criterion does not match; please reset the optimizer (--reset-optimizer). {last_optim['criterion_name']} vs {self.get_criterion().__class__.__name__}" - assert ( - last_optim["optimizer_name"] == self.optimizer.__class__.__name__ - ), f"Optimizer does not match; please reset the optimizer (--reset-optimizer). 
{last_optim['optimizer_name']} vs {self.optimizer.__class__.__name__}" - - if not reset_lr_scheduler: - self.lr_scheduler.load_state_dict(last_optim["lr_scheduler_state"]) - - if self.is_fsdp and not self.model.use_sharded_state: - # if use_sharded_state, the last_optim_state is already sharded, skip this - last_optim_state = self.model.get_shard_from_optim_state_dict( - last_optim_state - ) - elif not load_on_all_ranks and is_distributed: - last_optim_state = self.optimizer.broadcast_global_state_dict( - last_optim_state - ) - - self.optimizer.load_state_dict(last_optim_state, optimizer_overrides) - - self.set_num_updates(last_optim["num_updates"]) - - if extra_state is not None: - itr_state = extra_state["train_iterator"] - epoch = itr_state["epoch"] - - if "previous_training_time" in extra_state: - self._previous_training_time = extra_state["previous_training_time"] - self._start_time = time.time() - - self.lr_step(epoch) - - if ( - itr_state.get("version", 1) >= 2 - and itr_state["iterations_in_epoch"] == 0 - ): - # reset meters at start of epoch - reset_meters = True - - if "metrics" in extra_state and not reset_meters: - metrics.load_state_dict(extra_state["metrics"]) - - # reset TimeMeters, since their start times don't make sense anymore - for meter in metrics.get_meters("default"): - if isinstance(meter, meters.TimeMeter): - meter.reset() - - if self.cfg.ema.store_ema: - if "ema" not in extra_state: - logger.warn( - "EMA not found in checkpoint. But store_ema is True. " - "EMA is re-initialized from checkpoint." - ) - self.ema.restore(state["model"], build_fp32_params=self.cfg.ema.ema_fp32) - else: - logger.info( - "Loading EMA from checkpoint" - ) - self.ema.restore(extra_state["ema"], build_fp32_params=False) - - if self.cfg.ema.ema_fp32: - if "ema_fp32_params" in extra_state: - logger.info( - "Loading EMA fp32 params from checkpoint" - ) - self.ema.build_fp32_params(extra_state["ema_fp32_params"]) - else: - logger.info( - "Building EMA fp32 params from EMA model in checkpoint" - ) - self.ema.build_fp32_params() - - logger.info( - "Loaded checkpoint {} (epoch {} @ {} updates)".format( - filename, epoch, self.get_num_updates() - ) - ) - - else: - logger.info("No existing checkpoint found {}".format(filename)) - - return extra_state - - def get_train_iterator( - self, - epoch, - combine=True, - load_dataset=True, - data_selector=None, - shard_batch_itr=True, - disable_iterator_cache=False, - ): - """Return an EpochBatchIterator over the training set for a given epoch.""" - if load_dataset: - logger.info("loading train data for epoch {}".format(epoch)) - self.task.load_dataset( - self.cfg.dataset.train_subset, - epoch=epoch, - combine=combine, - data_selector=data_selector, - tpu=self.tpu, - ) - batch_iterator = self.task.get_batch_iterator( - dataset=self.task.dataset(self.cfg.dataset.train_subset), - max_tokens=self.cfg.dataset.max_tokens, - max_sentences=self.cfg.dataset.batch_size, - max_positions=utils.resolve_max_positions( - self.task.max_positions(), - self.model.max_positions(), - self.cfg.dataset.max_tokens, - ), - ignore_invalid_inputs=True, - required_batch_size_multiple=self.cfg.dataset.required_batch_size_multiple, - seed=self.cfg.common.seed, - num_shards=self.data_parallel_world_size if shard_batch_itr else 1, - shard_id=self.data_parallel_rank if shard_batch_itr else 0, - num_workers=self.cfg.dataset.num_workers, - epoch=epoch, - data_buffer_size=self.cfg.dataset.data_buffer_size, - disable_iterator_cache=disable_iterator_cache, - ) - 
self.reset_dummy_batch(batch_iterator.first_batch) - return batch_iterator - - def get_valid_iterator( - self, - subset, - disable_iterator_cache=False, - ): - """Return an EpochBatchIterator over given validation subset for a given epoch.""" - batch_iterator = self.task.get_batch_iterator( - dataset=self.task.dataset(subset), - max_tokens=self.cfg.dataset.max_tokens_valid, - max_sentences=self.cfg.dataset.batch_size_valid, - max_positions=utils.resolve_max_positions( - self.task.max_positions(), - self.model.max_positions(), - ), - ignore_invalid_inputs=self.cfg.dataset.skip_invalid_size_inputs_valid_test, - required_batch_size_multiple=self.cfg.dataset.required_batch_size_multiple, - seed=self.cfg.common.seed, - num_shards=self.data_parallel_world_size, - shard_id=self.data_parallel_rank, - num_workers=self.cfg.dataset.num_workers, - # always pass a fixed "epoch" to keep validation data consistent - # across training epochs - epoch=1, - data_buffer_size=self.cfg.dataset.data_buffer_size, - disable_iterator_cache=disable_iterator_cache, - ) - self.reset_dummy_batch(batch_iterator.first_batch) - return batch_iterator - - def begin_epoch(self, epoch): - """Called at the beginning of each epoch.""" - logger.info("begin training epoch {}".format(epoch)) - - self.lr_step_begin_epoch(epoch) - - if self.quantizer is not None: - self.quantizer.begin_epoch(epoch) - - # task specific setup per epoch - self.task.begin_epoch(epoch, self.get_model()) - - if self.tpu: - import torch_xla.core.xla_model as xm - - xm.rendezvous("begin_epoch") # wait for all workers - xm.mark_step() - - def begin_valid_epoch(self, epoch): - """Called at the beginning of each validation epoch.""" - - # task specific setup per validation epoch - self.task.begin_valid_epoch(epoch, self.get_model()) - - def reset_dummy_batch(self, batch): - self._dummy_batch = batch - - @metrics.aggregate("train") - def train_step(self, samples, raise_oom=False): - """Do forward, backward and parameter update.""" - self._set_seed() - self.model.train() - self.criterion.train() - self.zero_grad() - - metrics.log_start_time("train_wall", priority=800, round=0) - - # If EMA is enabled through store_ema=True - # and task.uses_ema is True, pass the EMA model as a keyword - # argument to the task. - extra_kwargs = {} - if self.cfg.ema.store_ema and getattr(self.task, "uses_ema", False): - extra_kwargs["ema_model"] = self.ema.get_model() - - # forward and backward pass - logging_outputs, sample_size, ooms = [], 0, 0 - for i, sample in enumerate(samples): # delayed update loop - sample, is_dummy_batch = self._prepare_sample(sample) - - def maybe_no_sync(): - """ - Whenever *samples* contains more than one mini-batch, we - want to accumulate gradients locally and only call - all-reduce in the last backwards pass. - """ - if ( - self.data_parallel_world_size > 1 - and hasattr(self.model, "no_sync") - and i < len(samples) - 1 - # The no_sync context manager results in increased memory - # usage with FSDP, since full-size gradients will be - # accumulated on each GPU. It's typically a better tradeoff - # to do the extra communication with FSDP. 
- and not self.is_fsdp - ): - return self.model.no_sync() - else: - return contextlib.ExitStack() # dummy contextmanager - - try: - with maybe_no_sync(): - # forward and backward - loss, sample_size_i, logging_output = self.task.train_step( - sample=sample, - model=self.model, - criterion=self.criterion, - optimizer=self.optimizer, - update_num=self.get_num_updates(), - ignore_grad=is_dummy_batch, - **extra_kwargs, - ) - del loss - - logging_outputs.append(logging_output) - sample_size += sample_size_i - - # emptying the CUDA cache after the first step can - # reduce the chance of OOM - if self.cuda and self.get_num_updates() == 0: - torch.cuda.empty_cache() - except RuntimeError as e: - if "out of memory" in str(e): - self._log_oom(e) - if raise_oom: - raise e - logger.warning( - "attempting to recover from OOM in forward/backward pass" - ) - ooms += 1 - self.zero_grad() - if self.cuda: - torch.cuda.empty_cache() - if self.cfg.distributed_training.distributed_world_size == 1: - return None - else: - raise e - - if self.tpu and i < len(samples) - 1: - # tpu-comment: every XLA operation before marking step is - # appended to the IR graph, and processing too many batches - # before marking step can lead to OOM errors. - # To handle gradient accumulation use case, we explicitly - # mark step here for every forward pass without a backward pass - self._xla_markstep_and_send_to_cpu() - - if is_dummy_batch: - if torch.is_tensor(sample_size): - sample_size.zero_() - else: - sample_size *= 0.0 - - if torch.is_tensor(sample_size): - sample_size = sample_size.float() - else: - sample_size = float(sample_size) - - # gather logging outputs from all replicas - if self._sync_stats(): - train_time = self._local_cumulative_training_time() - logging_outputs, ( - sample_size, - ooms, - total_train_time, - ) = self._aggregate_logging_outputs( - logging_outputs, sample_size, ooms, train_time, ignore=is_dummy_batch - ) - self._cumulative_training_time = ( - total_train_time / self.data_parallel_world_size - ) - - overflow = False - try: - with torch.autograd.profiler.record_function("reduce-grads"): - # reduce gradients across workers - self.optimizer.all_reduce_grads(self.model) - if utils.has_parameters(self.criterion): - self.optimizer.all_reduce_grads(self.criterion) - - with torch.autograd.profiler.record_function("multiply-grads"): - # multiply gradients by (data_parallel_size / sample_size) since - # DDP normalizes by the number of data parallel workers for - # improved fp16 precision. - # Thus we get (sum_of_gradients / sample_size) at the end. - # In case of fp16, this step also undoes loss scaling. - # (Debugging note: Some optimizers perform this scaling on the - # fly, so inspecting model.parameters() or optimizer.params may - # still show the original, unscaled gradients.) - numer = ( - self.data_parallel_world_size - if not self.cfg.optimization.use_bmuf or self._sync_stats() - else 1 - ) - self.optimizer.multiply_grads(numer / (sample_size or 1.0)) - # Note: (sample_size or 1.0) handles the case of a zero gradient, in a - # way that avoids CPU/device transfers in case sample_size is a GPU or - # TPU object. The assumption is that the gradient itself is also 0. 
- - with torch.autograd.profiler.record_function("clip-grads"): - # clip grads - grad_norm = self.clip_grad_norm(self.cfg.optimization.clip_norm) - - # check that grad norms are consistent across workers - # on tpu check tensor is slow - if not self.tpu: - if ( - not self.cfg.optimization.use_bmuf - and self.cfg.distributed_training.ddp_backend != "slow_mo" - ): - self._check_grad_norms(grad_norm) - if not torch.isfinite(grad_norm).all(): - # in case of AMP, if gradients are Nan/Inf then - # optimizer step is still required - if self.cfg.common.amp: - overflow = True - else: - # check local gradnorm single GPU case, trigger NanDetector - raise FloatingPointError("gradients are Nan/Inf") - - with torch.autograd.profiler.record_function("optimizer"): - # take an optimization step - self.task.optimizer_step( - self.optimizer, model=self.model, update_num=self.get_num_updates() - ) - if self.cfg.common.amp and overflow: - if self._amp_retries == self.cfg.common.amp_batch_retries: - logger.info("AMP: skipping this batch.") - self._amp_retries = 0 - else: - self._amp_retries += 1 - return self.train_step(samples, raise_oom) # recursion to feed in same batch - - except FloatingPointError: - # re-run the forward and backward pass with hooks attached to print - # out where it fails - self.zero_grad() - with NanDetector(self.get_model()): - for _, sample in enumerate(samples): - sample, _ = self._prepare_sample(sample) - self.task.train_step( - sample, - self.model, - self.criterion, - self.optimizer, - self.get_num_updates(), - ignore_grad=False, - **extra_kwargs, - ) - raise - except OverflowError as e: - overflow = True - logger.info( - f"NOTE: gradient overflow detected, ignoring gradient, {str(e)}" - ) - grad_norm = torch.tensor(0.0).cuda() - self.zero_grad() - except RuntimeError as e: - if "out of memory" in str(e): - self._log_oom(e) - logger.error("OOM during optimization, irrecoverable") - raise e - - # Some distributed wrappers (e.g., SlowMo) need access to the optimizer - # after the step - if hasattr(self.model, "perform_additional_optimizer_actions"): - if hasattr(self.optimizer, "fp32_params"): - self.model.perform_additional_optimizer_actions( - self.optimizer.optimizer, self.optimizer.fp32_params - ) - else: - self.model.perform_additional_optimizer_actions( - self.optimizer.optimizer - ) - - logging_output = None - if not overflow or self.cfg.distributed_training.ddp_backend == "slow_mo": - self.set_num_updates(self.get_num_updates() + 1) - - if self.cfg.ema.store_ema: - # Step EMA forward with new model. 
- self.ema.step( - self.get_model(), - self.get_num_updates(), - ) - metrics.log_scalar( - "ema_decay", - self.ema.get_decay(), - priority=10000, - round=5, - weight=0, - ) - - if self.tpu: - import torch_xla.core.xla_model as xm - - # mark step on TPUs - self._xla_markstep_and_send_to_cpu() - - # only log stats every log_interval steps - # this causes wps to be misreported when log_interval > 1 - logging_output = {} - if self.get_num_updates() % self.cfg.common.log_interval == 0: - # log memory usage - mem_info = xm.get_memory_info(self.device) - gb_free = mem_info["kb_free"] / 1024 / 1024 - gb_total = mem_info["kb_total"] / 1024 / 1024 - metrics.log_scalar( - "gb_free", gb_free, priority=1500, round=1, weight=0 - ) - metrics.log_scalar( - "gb_total", gb_total, priority=1600, round=1, weight=0 - ) - logging_outputs = self._xla_markstep_and_send_to_cpu( - logging_outputs - ) - logging_output = self._reduce_and_log_stats( - logging_outputs, sample_size, grad_norm - ) - - # log whenever there's an XLA compilation, since these - # slow down training and may indicate opportunities for - # optimization - self._check_xla_compilation() - else: - if self.cuda and self.cuda_env is not None: - # log minimum free memory over the iteration - gb_used = torch.cuda.max_memory_allocated() / 1024 / 1024 / 1024 - torch.cuda.reset_peak_memory_stats() - gb_free = self.cuda_env.total_memory_in_GB - gb_used - metrics.log_scalar( - "gb_free", gb_free, priority=1500, round=1, weight=0 - ) - - # log stats - logging_output = self._reduce_and_log_stats( - logging_outputs, sample_size, grad_norm - ) - - # clear CUDA cache to reduce memory fragmentation - if ( - self.cuda - and self.cfg.common.empty_cache_freq > 0 - and ( - (self.get_num_updates() + self.cfg.common.empty_cache_freq - 1) - % self.cfg.common.empty_cache_freq - ) - == 0 - ): - torch.cuda.empty_cache() - - if self.cfg.common.fp16 or self.cfg.common.amp: - metrics.log_scalar( - "loss_scale", - ( - self.optimizer.scaler.loss_scale - if self.cfg.common.fp16 - else self.optimizer.scaler.get_scale() - ), - priority=700, - round=4, - weight=0, - ) - - metrics.log_stop_time("train_wall") - return logging_output - - @metrics.aggregate("valid") - def valid_step(self, sample, raise_oom=False): - """Do forward pass in evaluation mode.""" - if self.tpu: - import torch_xla.core.xla_model as xm - - xm.rendezvous("valid_step") # wait for all workers - - # If EMA is enabled through store_ema=True - # and task.uses_ema is True, pass the EMA model as a keyword - # argument to the task. 
- extra_kwargs = {} - if self.cfg.ema.store_ema and getattr(self.task, "uses_ema", False): - extra_kwargs["ema_model"] = self.ema.get_model() - - with torch.no_grad(): - self.model.eval() - self.criterion.eval() - - sample, is_dummy_batch = self._prepare_sample(sample) - - try: - _loss, sample_size, logging_output = self.task.valid_step( - sample, self.model, self.criterion, **extra_kwargs - ) - except RuntimeError as e: - if "out of memory" in str(e): - self._log_oom(e) - if not raise_oom: - logger.warning( - "ran out of memory in validation step, retrying batch" - ) - for p in self.model.parameters(): - if p.grad is not None: - p.grad = None # free some memory - if self.cuda: - torch.cuda.empty_cache() - return self.valid_step(sample, raise_oom=True) - raise e - - logging_outputs = [logging_output] - if is_dummy_batch: - if torch.is_tensor(sample_size): - sample_size.zero_() - else: - sample_size *= 0.0 - - # gather logging outputs from all replicas - if self.data_parallel_world_size > 1: - logging_outputs, (sample_size,) = self._aggregate_logging_outputs( - logging_outputs, - sample_size, - ignore=is_dummy_batch, - ) - - # log validation stats - if self.tpu: - logging_outputs = self._xla_markstep_and_send_to_cpu(logging_outputs) - logging_output = self._reduce_and_log_stats(logging_outputs, sample_size) - - return logging_output - - def zero_grad(self): - self.optimizer.zero_grad() - - def lr_step_begin_epoch(self, epoch): - """Adjust the learning rate at the beginning of the epoch.""" - self.lr_scheduler.step_begin_epoch(epoch) - # prefer updating the LR based on the number of steps - return self.lr_step_update() - - def lr_step(self, epoch, val_loss=None): - """Adjust the learning rate at the end of the epoch.""" - self.lr_scheduler.step(epoch, val_loss) - # prefer updating the LR based on the number of steps - return self.lr_step_update() - - def lr_step_update(self): - """Update the learning rate after each update.""" - new_lr = self.lr_scheduler.step_update(self.get_num_updates()) - if isinstance(new_lr, dict): - for k, v in new_lr.items(): - metrics.log_scalar(f"lr_{k}", v, weight=0, priority=300) - new_lr = new_lr.get("default", next(iter(new_lr.values()))) - else: - metrics.log_scalar("lr", new_lr, weight=0, priority=300) - return new_lr - - def get_lr(self): - """Get the current learning rate.""" - return self.optimizer.get_lr() - - def get_model(self): - """Get the (non-wrapped) model instance.""" - return self._model - - def get_criterion(self): - """Get the (non-wrapped) criterion instance.""" - return self._criterion - - def get_meter(self, name): - """[deprecated] Get a specific meter by name.""" - from fairseq import meters - - if "get_meter" not in self._warn_once: - self._warn_once.add("get_meter") - utils.deprecation_warning( - "Trainer.get_meter is deprecated. Please use fairseq.metrics instead." 
- ) - - train_meters = metrics.get_meters("train") - if train_meters is None: - train_meters = {} - - if name == "train_loss" and "loss" in train_meters: - return train_meters["loss"] - elif name == "train_nll_loss": - # support for legacy train.py, which assumed this meter is - # always initialized - m = train_meters.get("nll_loss", None) - return m or meters.AverageMeter() - elif name == "wall": - # support for legacy train.py, which assumed this meter is - # always initialized - m = metrics.get_meter("default", "wall") - return m or meters.TimeMeter() - elif name == "wps": - m = metrics.get_meter("train", "wps") - return m or meters.TimeMeter() - elif name in {"valid_loss", "valid_nll_loss"}: - # support for legacy train.py, which assumed these meters - # are always initialized - k = name[len("valid_") :] - m = metrics.get_meter("valid", k) - return m or meters.AverageMeter() - elif name == "oom": - return meters.AverageMeter() - elif name in train_meters: - return train_meters[name] - return None - - def get_num_updates(self): - """Get the number of parameters updates.""" - return self._num_updates - - def set_num_updates(self, num_updates): - """Set the number of parameters updates.""" - self._num_updates = num_updates - self.lr_step_update() - if self.quantizer: - self.quantizer.step_update(self._num_updates) - metrics.log_scalar("num_updates", self._num_updates, weight=0, priority=200) - - def clip_grad_norm(self, clip_norm): - def agg_norm_fn(total_norm): - total_norm = total_norm.cuda().float() ** 2 - total_norm = distributed_utils.all_reduce( - total_norm, group=self.data_parallel_process_group - ) - return total_norm ** 0.5 - - should_agg_norm = ( - self.is_fsdp - and ( - self.data_parallel_process_group is not None - or torch.distributed.is_initialized() - ) - ) - return self.optimizer.clip_grad_norm( - clip_norm, aggregate_norm_fn=agg_norm_fn if should_agg_norm else None - ) - - def cumulative_training_time(self): - if self._cumulative_training_time is None: - # single GPU - return self._local_cumulative_training_time() - else: - return self._cumulative_training_time - - def _local_cumulative_training_time(self): - """Aggregate training time in seconds.""" - return time.time() - self._start_time + self._previous_training_time - - def _fp_convert_sample(self, sample): - def apply_half(t): - if t.dtype is torch.float32: - return t.to(dtype=torch.half) - return t - - def apply_bfloat16(t): - if t.dtype is torch.float32: - return t.to(dtype=torch.bfloat16) - return t - - if self.cfg.common.fp16: - sample = utils.apply_to_sample(apply_half, sample) - - if self.cfg.common.bf16: - sample = utils.apply_to_sample(apply_bfloat16, sample) - - return sample - - def _prepare_sample(self, sample, is_dummy=False): - if sample == "DUMMY": - raise Exception( - "Trying to use an uninitialized 'dummy' batch. This usually indicates " - "that the total number of batches is smaller than the number of " - "participating GPUs. Try reducing the batch size or using fewer GPUs." - ) - - if sample is None or len(sample) == 0: - assert ( - self._dummy_batch is not None and len(self._dummy_batch) > 0 - ), "Invalid dummy batch: {}".format(self._dummy_batch) - sample, _ = self._prepare_sample(self._dummy_batch, is_dummy=True) - return sample, True - - # Given that PCIe/NVLink bandwidth is significantly smaller than DRAM bandwidth - # it makes sense to do the format conversion on the CPU and then transfer - # a smaller buffer to the device. This also saves GPU memory capacity. 
- - if self.cfg.common.on_cpu_convert_precision: - sample = self._fp_convert_sample(sample) - - if self.cuda: - if self.pipeline_model_parallel: - if 'target' in sample: - sample['target'] = utils.move_to_cuda(sample['target'], device=self.last_device) - else: - sample = utils.move_to_cuda(sample) - elif self.tpu and is_dummy: - # the dummy batch may not be on the appropriate device - sample = utils.move_to_cuda(sample, device=self.device) - - if not self.cfg.common.on_cpu_convert_precision: - sample = self._fp_convert_sample(sample) - - if self._dummy_batch == "DUMMY": - self._dummy_batch = sample - - return sample, False - - def _set_seed(self): - # Set seed based on args.seed and the update number so that we get - # reproducible results when resuming from checkpoints - seed = self.cfg.common.seed + self.get_num_updates() - utils.set_torch_seed(seed) - - def _sync_stats(self): - # Return True if it's using multiple GPUs and DDP or multiple GPUs with - # BMUF and it's a bmuf sync with warmup iterations completed before. - if self.data_parallel_world_size == 1: - return False - elif self.cfg.optimization.use_bmuf: - return ( - self.get_num_updates() + 1 - ) % self.cfg.bmuf.global_sync_iter == 0 and ( - self.get_num_updates() + 1 - ) > self.cfg.bmuf.warmup_iterations - else: - return True - - def _log_oom(self, exc): - msg = "OOM: Ran out of memory with exception: {}".format(exc) - logger.warning(msg) - if torch.cuda.is_available() and hasattr(torch.cuda, "memory_summary"): - for device_idx in range(torch.cuda.device_count()): - logger.warning(torch.cuda.memory_summary(device=device_idx)) - sys.stderr.flush() - - def _aggregate_logging_outputs( - self, - logging_outputs: List[Dict[str, Any]], - *extra_stats_to_sum, - ignore=False, - ): - if self.task.__class__.logging_outputs_can_be_summed(self.get_criterion()): - return self._fast_stat_sync_sum( - logging_outputs, *extra_stats_to_sum, ignore=ignore - ) - else: - return self._all_gather_list_sync( - logging_outputs, *extra_stats_to_sum, ignore=ignore - ) - - def _all_gather_list_sync( - self, - logging_outputs: List[Dict[str, Any]], - *extra_stats_to_sum, - ignore=False, - ): - """ - Sync logging outputs across workers. all_gather_list_sync is - suitable when logging outputs are complex types. - """ - if self.tpu: - raise NotImplementedError - if ignore: - logging_outputs = [] - results = list( - zip( - *distributed_utils.all_gather_list( - [logging_outputs] + list(extra_stats_to_sum), - max_size=getattr(self.cfg.common, "all_gather_list_size", 16384), - group=self.data_parallel_process_group, - ) - ) - ) - logging_outputs, extra_stats_to_sum = results[0], results[1:] - logging_outputs = list(chain.from_iterable(logging_outputs)) - extra_stats_to_sum = [sum(s) for s in extra_stats_to_sum] - return logging_outputs, extra_stats_to_sum - - def _fast_stat_sync_sum( - self, - logging_outputs: List[Dict[str, Any]], - *extra_stats_to_sum, - ignore=False, - ): - """ - Sync logging outputs across workers. fast_stat_sync_sum is - faster than all_gather_list_sync, but is only suitable when - logging outputs are scalars and can be summed. Note that - *logging_outputs* cannot contain any nested dicts/lists. 
- """ - data = {} - for i, stat in enumerate(extra_stats_to_sum): - data["extra_stats_" + str(i)] = stat - if len(logging_outputs) > 0: - log_keys = list(logging_outputs[0].keys()) - for k in log_keys: - if not ignore: - v = sum(log[k] for log in logging_outputs if k in log) - else: - v = logging_outputs[0][k] - v = torch.zeros_like(v) if torch.is_tensor(v) else 0 - data["logging_outputs_" + k] = v - else: - log_keys = None - - data = distributed_utils.all_reduce_dict( - data, device=self.device, group=self.data_parallel_process_group - ) - - extra_stats_to_sum = [ - data["extra_stats_" + str(i)] for i in range(len(extra_stats_to_sum)) - ] - if log_keys is not None: - logging_outputs = [{k: data["logging_outputs_" + k] for k in log_keys}] - else: - logging_outputs = [] - return logging_outputs, extra_stats_to_sum - - def _check_grad_norms(self, grad_norm): - """Check that grad norms are consistent across workers.""" - if self._grad_norm_buf is not None: - self._grad_norm_buf.zero_() - self._grad_norm_buf[self.data_parallel_rank] = grad_norm - distributed_utils.all_reduce( - self._grad_norm_buf, group=self.data_parallel_process_group - ) - - def is_consistent(tensor): - max_abs_diff = torch.max(torch.abs(tensor - tensor[0])) - return ( - (torch.isfinite(tensor).all() - and (max_abs_diff / (tensor[0] + 1e-6) < 1e-6).all()) - or - (self.cfg.common.amp and not torch.isfinite(tensor).all()) - # in case of amp non-finite grads are fine - ) - - if not is_consistent(self._grad_norm_buf): - pretty_detail = "\n".join( - "rank {:3d} = {:.8f}".format(r, n) - for r, n in enumerate(self._grad_norm_buf.tolist()) - ) - error_detail = "grad_norm across the workers:\n{}\n".format( - pretty_detail - ) - # use FloatingPointError to trigger NanDetector - raise FloatingPointError( - "Fatal error: gradients are inconsistent between workers. " - "Try --ddp-backend=legacy_ddp. " - "Or are you mixing up different generation of GPUs in training?" 
- + "\n" - + "-" * 80 - + "\n{}\n".format(error_detail) - + "-" * 80 - ) - - def _reduce_and_log_stats(self, logging_outputs, sample_size, grad_norm=None): - if grad_norm is not None and ( - not torch.is_tensor(grad_norm) or torch.isfinite(grad_norm) - ): - metrics.log_speed("ups", 1.0, priority=100, round=2) - metrics.log_scalar("gnorm", grad_norm, priority=400, round=3) - if self.cfg.optimization.clip_norm > 0: - metrics.log_scalar( - "clip", - torch.where( - grad_norm > self.cfg.optimization.clip_norm, - grad_norm.new_tensor(100), - grad_norm.new_tensor(0), - ), - priority=500, - round=1, - ) - - with metrics.aggregate() as agg: - if logging_outputs is not None: - self.task.reduce_metrics(logging_outputs, self.get_criterion()) - del logging_outputs - - # extra warning for criterions that don't properly log a loss value - if "loss" not in agg: - if "loss" not in self._warn_once: - self._warn_once.add("loss") - logger.warning( - "Criterion.reduce_metrics did not log a 'loss' value, " - "which may break some functionality" - ) - metrics.log_scalar("loss", -1) - - # support legacy interface - if self.tpu: - logging_output = {} - else: - logging_output = agg.get_smoothed_values() - logging_output["sample_size"] = sample_size - for key_to_delete in ["ppl", "wps", "wpb", "bsz"]: - if key_to_delete in logging_output: - del logging_output[key_to_delete] - return logging_output - - def _check_xla_compilation(self): - import torch_xla.debug.metrics as met - - compile_stats = met.metric_data("CompileTime") - if compile_stats is None: - return - num_xla_compiles = compile_stats[0] - if num_xla_compiles > self._num_xla_compiles: - logger.warning( - "XLA compilation detected on device #{}; too many of these can lead " - "to slow training, but we expect a few in the beginning".format( - self.cfg.distributed_training.distributed_rank - ) - ) - self._num_xla_compiles = num_xla_compiles - - def _xla_markstep_and_send_to_cpu(self, data=None): - import torch_xla.core.xla_model as xm - - xm.mark_step() - if data is not None: - from fairseq.utils import xla_device_to_cpu - - return xla_device_to_cpu(data) - - -def _catalog_shared_params(module, memo=None, prefix=""): - if memo is None: - first_call = True - memo = {} - else: - first_call = False - for name, param in module._parameters.items(): - param_prefix = prefix + ("." if prefix else "") + name - if param not in memo: - memo[param] = [] - memo[param].append(param_prefix) - for name, m in module._modules.items(): - if m is None: - continue - submodule_prefix = prefix + ("." if prefix else "") + name - _catalog_shared_params(m, memo, submodule_prefix) - if first_call: - return [x for x in memo.values() if len(x) > 1] - - -def _get_module_by_path(module, path): - path = path.split(".") - for name in path: - module = getattr(module, name) - return module - - -def _set_module_by_path(module, path, value): - path = path.split(".") - for name in path[:-1]: - module = getattr(module, name) - setattr(module, path[-1], value) diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/wmt20/README.md b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/wmt20/README.md deleted file mode 100644 index b4f2874652f8be19998a65faa1d9276d8017ec59..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/wmt20/README.md +++ /dev/null @@ -1,72 +0,0 @@ -# WMT 20 - -This page provides pointers to the models of Facebook-FAIR's WMT'20 news translation task submission [(Chen et al., 2020)](https://arxiv.org/abs/2011.08298). 
- -## Single best MT models (after finetuning on part of WMT20 news dev set) - -Model | Description | Download ----|---|--- -`transformer.wmt20.ta-en` | Ta->En | [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/wmt20.ta-en.single.tar.gz) -`transformer.wmt20.en-ta` | En->Ta | [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/wmt20.en-ta.single.tar.gz) -`transformer.wmt20.iu-en.news` | Iu->En (News domain) | [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/wmt20.iu-en.news.single.tar.gz) -`transformer.wmt20.en-iu.news` | En->Iu (News domain) | [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/wmt20.en-iu.news.single.tar.gz) -`transformer.wmt20.iu-en.nh` | Iu->En (Nunavut Hansard domain) | [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/wmt20.iu-en.nh.single.tar.gz) -`transformer.wmt20.en-iu.nh` | En->Iu (Nunavut Hansard domain) | [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/wmt20.en-iu.nh.single.tar.gz) - -## Language models -Model | Description | Download ----|---|--- -`transformer_lm.wmt20.en` | En Language Model | [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/wmt20.en.tar.gz) -`transformer_lm.wmt20.ta` | Ta Language Model | [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/wmt20.ta.tar.gz) -`transformer_lm.wmt20.iu.news` | Iu Language Model (News domain) | [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/wmt20.iu.news.tar.gz) -`transformer_lm.wmt20.iu.nh` | Iu Language Model (Nunavut Hansard domain) | [download (.tar.gz)](https://dl.fbaipublicfiles.com/fairseq/models/wmt20.iu.nh.tar.gz) - -## Example usage (torch.hub) - -#### Translation - -```python -import torch - -# English to Tamil translation -en2ta = torch.hub.load('pytorch/fairseq', 'transformer.wmt20.en-ta') -en2ta.translate("Machine learning is great!") # 'இயந்திரக் கற்றல் அருமை!' - -# Tamil to English translation -ta2en = torch.hub.load('pytorch/fairseq', 'transformer.wmt20.ta-en') -ta2en.translate("இயந்திரக் கற்றல் அருமை!") # 'Machine learning is great!' - -# English to Inuktitut translation -en2iu = torch.hub.load('pytorch/fairseq', 'transformer.wmt20.en-iu.news') -en2iu.translate("machine learning is great!") # 'ᖃᒧᑕᐅᔭᓄᑦ ᐃᓕᓐᓂᐊᕐᓂᖅ ᐱᐅᔪᒻᒪᕆᒃ!' - -# Inuktitut to English translation -iu2en = torch.hub.load('pytorch/fairseq', 'transformer.wmt20.iu-en.news') -iu2en.translate("ᖃᒧᑕᐅᔭᓄᑦ ᐃᓕᓐᓂᐊᕐᓂᖅ ᐱᐅᔪᒻᒪᕆᒃ!") # 'Machine learning excellence!' -``` - -#### Language Modeling - -```python -# Sample from the English LM -en_lm = torch.hub.load('pytorch/fairseq', 'transformer_lm.wmt20.en') -en_lm.sample("Machine learning is") # 'Machine learning is a type of artificial intelligence that uses machine learning to learn from data and make predictions.' - -# Sample from the Tamil LM -ta_lm = torch.hub.load('pytorch/fairseq', 'transformer_lm.wmt20.ta') -ta_lm.sample("இயந்திரக் கற்றல் என்பது செயற்கை நுண்ணறிவின்") # 'இயந்திரக் கற்றல் என்பது செயற்கை நுண்ணறிவின் ஒரு பகுதியாகும்.' - -# Sample from the Inuktitut LM -iu_lm = torch.hub.load('pytorch/fairseq', 'transformer_lm.wmt20.iu.news') -iu_lm.sample("ᖃᒧᑕᐅᔭᓄᑦ ᐃᓕᓐᓂᐊᕐᓂᖅ") # 'ᖃᒧᑕᐅᔭᓄᑦ ᐃᓕᓐᓂᐊᕐᓂᖅ, ᐊᒻᒪᓗ ᓯᓚᐅᑉ ᐊᓯᙳᖅᐸᓪᓕᐊᓂᖓᓄᑦ ᖃᓄᐃᓕᐅᕈᑎᒃᓴᑦ, ᐃᓚᖃᖅᖢᑎᒃ ᐅᑯᓂᖓ:' -``` - -## Citation -```bibtex -@inproceedings{chen2020facebook - title={Facebook AI's WMT20 News Translation Task Submission}, - author={Peng-Jen Chen and Ann Lee and Changhan Wang and Naman Goyal and Angela Fan and Mary Williamson and Jiatao Gu}, - booktitle={Proc. 
of WMT}, - year={2020}, -} -``` diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/modules/base_layer.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/modules/base_layer.py deleted file mode 100644 index e7ef155b25fc73e74780879f665288c9bc95fd80..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/modules/base_layer.py +++ /dev/null @@ -1,135 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import torch.nn as nn -import torch -import sys -from fairseq import utils -from fairseq.distributed import utils as distributed_utils -from fairseq.modules.layer_norm import LayerNorm - - -class BaseLayer(nn.Module): - - def __init__(self, args): - super().__init__() - self.num_workers = distributed_utils.get_data_parallel_world_size() - expert_centroids = torch.empty(self.num_workers, args.decoder_embed_dim) - torch.nn.init.orthogonal_(expert_centroids, gain=0.1) - self.register_parameter("expert_centroids", torch.nn.Parameter(expert_centroids)) - self.expert_network = nn.Sequential(*([BaseSublayer(args) for _ in range(args.base_sublayers)])) - self.expert_id = distributed_utils.get_data_parallel_rank() - self.shuffle = args.base_shuffle - self.cpp = self.load_assignment() - - # Add a special attribute to the expert parameters, so we know not to sync their gradients - for param in self.expert_network.parameters(): - param.expert = True - - def forward(self, input_features, *args, **kwargs): - features = input_features.reshape(-1, input_features.size(-1)) - is_training = input_features.requires_grad - - if self.shuffle and is_training: - # Send each token to a random worker, to break correlations within the batch - shuffle_sort = torch.randperm(features.size(0), device=features.device) - features = All2All.apply(features[shuffle_sort]) - - with torch.no_grad(): - # Compute similarity of each token to each expert, for routing - token_expert_affinities = features.matmul(self.expert_centroids.transpose(0, 1)) - - # Compute which token goes to which expert - sort_by_expert, input_splits, output_splits = self.balanced_assignment(token_expert_affinities) \ - if is_training else self.greedy_assignment(token_expert_affinities) - # Swap these tokens for the right ones for our expert - routed_features = All2All.apply(features[sort_by_expert], output_splits, input_splits) - - if routed_features.size(0) > 0: - # Mix in the expert network based on how appropriate it is for these tokens - alpha = torch.sigmoid(routed_features.mv(self.expert_centroids[self.expert_id])).unsqueeze(1) - routed_features = alpha * self.expert_network(routed_features) + (1 - alpha) * routed_features - # Return to original worker and ordering - result = All2All.apply(routed_features, input_splits, output_splits)[self.inverse_sort(sort_by_expert)] - - if self.shuffle and is_training: - # Undo shuffling - result = All2All.apply(result)[self.inverse_sort(shuffle_sort)] - - # Return additional Nones for compatibility with TransformerDecoderLayer - return result.view(input_features.size()), None, None - - def inverse_sort(self, order): - # Creates an index that undoes a sort: xs==xs[order][inverse_sort(order)] - return torch.empty_like(order).scatter_(0, order, torch.arange(0, order.size(0), device=order.device)) - - def balanced_assignment(self, scores): - ok = scores.isfinite() - if not ok.all(): - # NaNs here can break the assignment algorithm 
- scores[~ok] = scores[ok].min() - return self.cpp.balanced_assignment(scores), None, None - - # Assigns each token to the top k experts - def greedy_assignment(self, scores, k=1): - token_to_workers = torch.topk(scores, dim=1, k=k, largest=True).indices.view(-1) - token_to_workers, sort_ordering = torch.sort(token_to_workers) - worker2token = sort_ordering // k - - # Find how many tokens we're sending to each other worker (being careful for sending 0 tokens to some workers) - output_splits = torch.zeros((self.num_workers,), dtype=torch.long, device=scores.device) - workers, counts = torch.unique_consecutive(token_to_workers, return_counts=True) - output_splits[workers] = counts - # Tell other workers how many tokens to expect from us - input_splits = All2All.apply(output_splits) - return worker2token, input_splits.tolist(), output_splits.tolist() - - def load_assignment(self): - try: - from fairseq import libbase - - return libbase - - except ImportError as e: - sys.stderr.write( - "ERROR: missing libbase. run `python setup.py build_ext --inplace`\n" - ) - raise e - - -class BaseSublayer(nn.Module): - def __init__(self, args): - super().__init__() - self.activation_fn = utils.get_activation_fn( - activation=getattr(args, 'activation_fn', 'relu') or "relu" - ) - self.norm = LayerNorm(args.decoder_embed_dim, export=False) - self.ff1 = torch.nn.Linear(args.decoder_embed_dim, args.decoder_ffn_embed_dim) - self.ff2 = torch.nn.Linear(args.decoder_ffn_embed_dim, args.decoder_embed_dim) - self.ff2.weight.data.zero_() - - def forward(self, xs): - return xs + self.ff2(self.activation_fn(self.ff1(self.norm(xs)))) - - -# Wraps torch.distributed.all_to_all_single as a function that supports autograd -class All2All(torch.autograd.Function): - @staticmethod - def forward(ctx, xs, input_splits=None, output_splits=None): - ctx.input_splits = input_splits - ctx.output_splits = output_splits - - ys = torch.empty_like(xs) if output_splits is None else \ - xs.new_empty(size=[sum(output_splits)] + list(xs.size()[1:])) - torch.distributed.all_to_all_single(ys, xs, output_split_sizes=output_splits, input_split_sizes=input_splits) - return ys - - @staticmethod - def backward(ctx, grad_output): - result = torch.empty_like(grad_output) if ctx.input_splits is None else \ - grad_output.new_empty(size=[sum(ctx.input_splits)] + list(grad_output.size()[1:])) - torch.distributed.all_to_all_single(result, grad_output, - output_split_sizes=ctx.input_splits, input_split_sizes=ctx.output_splits) - return result, None, None diff --git a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/modeling/proposal_generator/rrpn.py b/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/modeling/proposal_generator/rrpn.py deleted file mode 100644 index d51b92b7d25865a950e28cfb9ae284e600495888..0000000000000000000000000000000000000000 --- a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/detectron2/modeling/proposal_generator/rrpn.py +++ /dev/null @@ -1,203 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
-import itertools -import logging -from typing import Dict, List -import torch - -from detectron2.config import configurable -from detectron2.layers import ShapeSpec, batched_nms_rotated, cat -from detectron2.structures import Instances, RotatedBoxes, pairwise_iou_rotated -from detectron2.utils.memory import retry_if_cuda_oom - -from ..box_regression import Box2BoxTransformRotated -from .build import PROPOSAL_GENERATOR_REGISTRY -from .proposal_utils import _is_tracing -from .rpn import RPN - -logger = logging.getLogger(__name__) - - -def find_top_rrpn_proposals( - proposals, - pred_objectness_logits, - image_sizes, - nms_thresh, - pre_nms_topk, - post_nms_topk, - min_box_size, - training, -): - """ - For each feature map, select the `pre_nms_topk` highest scoring proposals, - apply NMS, clip proposals, and remove small boxes. Return the `post_nms_topk` - highest scoring proposals among all the feature maps if `training` is True, - otherwise, returns the highest `post_nms_topk` scoring proposals for each - feature map. - - Args: - proposals (list[Tensor]): A list of L tensors. Tensor i has shape (N, Hi*Wi*A, 5). - All proposal predictions on the feature maps. - pred_objectness_logits (list[Tensor]): A list of L tensors. Tensor i has shape (N, Hi*Wi*A). - image_sizes (list[tuple]): sizes (h, w) for each image - nms_thresh (float): IoU threshold to use for NMS - pre_nms_topk (int): number of top k scoring proposals to keep before applying NMS. - When RRPN is run on multiple feature maps (as in FPN) this number is per - feature map. - post_nms_topk (int): number of top k scoring proposals to keep after applying NMS. - When RRPN is run on multiple feature maps (as in FPN) this number is total, - over all feature maps. - min_box_size(float): minimum proposal box side length in pixels (absolute units wrt - input images). - training (bool): True if proposals are to be used in training, otherwise False. - This arg exists only to support a legacy bug; look for the "NB: Legacy bug ..." - comment. - - Returns: - proposals (list[Instances]): list of N Instances. The i-th Instances - stores post_nms_topk object proposals for image i. - """ - num_images = len(image_sizes) - device = proposals[0].device - - # 1. Select top-k anchor for every level and every image - topk_scores = [] # #lvl Tensor, each of shape N x topk - topk_proposals = [] - level_ids = [] # #lvl Tensor, each of shape (topk,) - batch_idx = torch.arange(num_images, device=device) - for level_id, proposals_i, logits_i in zip( - itertools.count(), proposals, pred_objectness_logits - ): - Hi_Wi_A = logits_i.shape[1] - if isinstance(Hi_Wi_A, torch.Tensor): # it's a tensor in tracing - num_proposals_i = torch.clamp(Hi_Wi_A, max=pre_nms_topk) - else: - num_proposals_i = min(Hi_Wi_A, pre_nms_topk) - - topk_scores_i, topk_idx = logits_i.topk(num_proposals_i, dim=1) - - # each is N x topk - topk_proposals_i = proposals_i[batch_idx[:, None], topk_idx] # N x topk x 5 - - topk_proposals.append(topk_proposals_i) - topk_scores.append(topk_scores_i) - level_ids.append(torch.full((num_proposals_i,), level_id, dtype=torch.int64, device=device)) - - # 2. Concat all levels together - topk_scores = cat(topk_scores, dim=1) - topk_proposals = cat(topk_proposals, dim=1) - level_ids = cat(level_ids, dim=0) - - # 3. For each image, run a per-level NMS, and choose topk results. 
- results = [] - for n, image_size in enumerate(image_sizes): - boxes = RotatedBoxes(topk_proposals[n]) - scores_per_img = topk_scores[n] - valid_mask = torch.isfinite(boxes.tensor).all(dim=1) & torch.isfinite(scores_per_img) - if not valid_mask.all(): - boxes = boxes[valid_mask] - scores_per_img = scores_per_img[valid_mask] - boxes.clip(image_size) - - # filter empty boxes - keep = boxes.nonempty(threshold=min_box_size) - lvl = level_ids - if _is_tracing() or keep.sum().item() != len(boxes): - boxes, scores_per_img, lvl = (boxes[keep], scores_per_img[keep], level_ids[keep]) - - keep = batched_nms_rotated(boxes.tensor, scores_per_img, lvl, nms_thresh) - # In Detectron1, there was different behavior during training vs. testing. - # (https://github.com/facebookresearch/Detectron/issues/459) - # During training, topk is over the proposals from *all* images in the training batch. - # During testing, it is over the proposals for each image separately. - # As a result, the training behavior becomes batch-dependent, - # and the configuration "POST_NMS_TOPK_TRAIN" end up relying on the batch size. - # This bug is addressed in Detectron2 to make the behavior independent of batch size. - keep = keep[:post_nms_topk] - - res = Instances(image_size) - res.proposal_boxes = boxes[keep] - res.objectness_logits = scores_per_img[keep] - results.append(res) - return results - - -@PROPOSAL_GENERATOR_REGISTRY.register() -class RRPN(RPN): - """ - Rotated Region Proposal Network described in :paper:`RRPN`. - """ - - @configurable - def __init__(self, *args, **kwargs): - super().__init__(*args, **kwargs) - if self.anchor_boundary_thresh >= 0: - raise NotImplementedError( - "anchor_boundary_thresh is a legacy option not implemented for RRPN." - ) - - @classmethod - def from_config(cls, cfg, input_shape: Dict[str, ShapeSpec]): - ret = super().from_config(cfg, input_shape) - ret["box2box_transform"] = Box2BoxTransformRotated(weights=cfg.MODEL.RPN.BBOX_REG_WEIGHTS) - return ret - - @torch.no_grad() - def label_and_sample_anchors(self, anchors: List[RotatedBoxes], gt_instances: List[Instances]): - """ - Args: - anchors (list[RotatedBoxes]): anchors for each feature map. - gt_instances: the ground-truth instances for each image. - - Returns: - list[Tensor]: - List of #img tensors. i-th element is a vector of labels whose length is - the total number of anchors across feature maps. Label values are in {-1, 0, 1}, - with meanings: -1 = ignore; 0 = negative class; 1 = positive class. - list[Tensor]: - i-th element is a Nx5 tensor, where N is the total number of anchors across - feature maps. The values are the matched gt boxes for each anchor. - Values are undefined for those anchors not labeled as 1. - """ - anchors = RotatedBoxes.cat(anchors) - - gt_boxes = [x.gt_boxes for x in gt_instances] - del gt_instances - - gt_labels = [] - matched_gt_boxes = [] - for gt_boxes_i in gt_boxes: - """ - gt_boxes_i: ground-truth boxes for i-th image - """ - match_quality_matrix = retry_if_cuda_oom(pairwise_iou_rotated)(gt_boxes_i, anchors) - matched_idxs, gt_labels_i = retry_if_cuda_oom(self.anchor_matcher)(match_quality_matrix) - # Matching is memory-expensive and may result in CPU tensors. 
But the result is small - gt_labels_i = gt_labels_i.to(device=gt_boxes_i.device) - - # A vector of labels (-1, 0, 1) for each anchor - gt_labels_i = self._subsample_labels(gt_labels_i) - - if len(gt_boxes_i) == 0: - # These values won't be used anyway since the anchor is labeled as background - matched_gt_boxes_i = torch.zeros_like(anchors.tensor) - else: - # TODO wasted indexing computation for ignored boxes - matched_gt_boxes_i = gt_boxes_i[matched_idxs].tensor - - gt_labels.append(gt_labels_i) # N,AHW - matched_gt_boxes.append(matched_gt_boxes_i) - return gt_labels, matched_gt_boxes - - @torch.no_grad() - def predict_proposals(self, anchors, pred_objectness_logits, pred_anchor_deltas, image_sizes): - pred_proposals = self._decode_proposals(anchors, pred_anchor_deltas) - return find_top_rrpn_proposals( - pred_proposals, - pred_objectness_logits, - image_sizes, - self.nms_thresh, - self.pre_nms_topk[self.training], - self.post_nms_topk[self.training], - self.min_box_size, - self.training, - ) diff --git a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/docker/Dockerfile b/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/docker/Dockerfile deleted file mode 100644 index 4eec16dd0beac8b80c5446c9dd6cf15feaf87303..0000000000000000000000000000000000000000 --- a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/docker/Dockerfile +++ /dev/null @@ -1,47 +0,0 @@ -FROM nvidia/cuda:11.1.1-cudnn8-devel-ubuntu18.04 -# use an older system (18.04) to avoid opencv incompatibility (issue#3524) - -ENV DEBIAN_FRONTEND noninteractive -RUN apt-get update && apt-get install -y \ - python3-opencv ca-certificates python3-dev git wget sudo ninja-build -RUN ln -sv /usr/bin/python3 /usr/bin/python - -# create a non-root user -ARG USER_ID=1000 -RUN useradd -m --no-log-init --system --uid ${USER_ID} appuser -g sudo -RUN echo '%sudo ALL=(ALL) NOPASSWD:ALL' >> /etc/sudoers -USER appuser -WORKDIR /home/appuser - -ENV PATH="/home/appuser/.local/bin:${PATH}" -RUN wget https://bootstrap.pypa.io/get-pip.py && \ - python3 get-pip.py --user && \ - rm get-pip.py - -# install dependencies -# See https://pytorch.org/ for other options if you use a different version of CUDA -RUN pip install --user tensorboard cmake # cmake from apt-get is too old -RUN pip install --user torch==1.10 torchvision==0.11.1 -f https://download.pytorch.org/whl/cu111/torch_stable.html - -RUN pip install --user 'git+https://github.com/facebookresearch/fvcore' -# install detectron2 -RUN git clone https://github.com/facebookresearch/detectron2 detectron2_repo -# set FORCE_CUDA because during `docker build` cuda is not accessible -ENV FORCE_CUDA="1" -# This will by default build detectron2 for all common cuda architectures and take a lot more time, -# because inside `docker build`, there is no way to tell which architecture will be used. -ARG TORCH_CUDA_ARCH_LIST="Kepler;Kepler+Tesla;Maxwell;Maxwell+Tegra;Pascal;Volta;Turing" -ENV TORCH_CUDA_ARCH_LIST="${TORCH_CUDA_ARCH_LIST}" - -RUN pip install --user -e detectron2_repo - -# Set a fixed model cache directory. 
-ENV FVCORE_CACHE="/tmp" -WORKDIR /home/appuser/detectron2_repo - -# run detectron2 under user "appuser": -# wget http://images.cocodataset.org/val2017/000000439715.jpg -O input.jpg -# python3 demo/demo.py \ - #--config-file configs/COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml \ - #--input input.jpg --output outputs/ \ - #--opts MODEL.WEIGHTS detectron2://COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x/137849600/model_final_f10217.pkl diff --git a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/projects/CenterNet2/centernet/data/transforms/custom_augmentation_impl.py b/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/projects/CenterNet2/centernet/data/transforms/custom_augmentation_impl.py deleted file mode 100644 index 5a69e178a5ac67f69c2eeab667b9c0740a862eee..0000000000000000000000000000000000000000 --- a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/projects/CenterNet2/centernet/data/transforms/custom_augmentation_impl.py +++ /dev/null @@ -1,63 +0,0 @@ -# -*- coding: utf-8 -*- -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved -# Modified by Xingyi Zhou -""" -Implement many useful :class:`Augmentation`. -""" -import numpy as np -import sys -from fvcore.transforms.transform import ( - BlendTransform, - CropTransform, - HFlipTransform, - NoOpTransform, - Transform, - VFlipTransform, -) -from PIL import Image - -from detectron2.data.transforms.augmentation import Augmentation -from .custom_transform import EfficientDetResizeCropTransform - -__all__ = [ - "EfficientDetResizeCrop", -] - - -class EfficientDetResizeCrop(Augmentation): - """ - Scale the shorter edge to the given size, with a limit of `max_size` on the longer edge. - If `max_size` is reached, then downscale so that the longer edge does not exceed max_size. - """ - - def __init__( - self, size, scale, interp=Image.BILINEAR - ): - """ - Args: - """ - super().__init__() - self.target_size = (size, size) - self.scale = scale - self.interp = interp - - def get_transform(self, img): - # Select a random scale factor. - scale_factor = np.random.uniform(*self.scale) - scaled_target_height = scale_factor * self.target_size[0] - scaled_target_width = scale_factor * self.target_size[1] - # Recompute the accurate scale_factor using rounded scaled image size. - width, height = img.shape[1], img.shape[0] - img_scale_y = scaled_target_height / height - img_scale_x = scaled_target_width / width - img_scale = min(img_scale_y, img_scale_x) - - # Select non-zero random offset (x, y) if scaled image is larger than target size - scaled_h = int(height * img_scale) - scaled_w = int(width * img_scale) - offset_y = scaled_h - self.target_size[0] - offset_x = scaled_w - self.target_size[1] - offset_y = int(max(0.0, float(offset_y)) * np.random.uniform(0, 1)) - offset_x = int(max(0.0, float(offset_x)) * np.random.uniform(0, 1)) - return EfficientDetResizeCropTransform( - scaled_h, scaled_w, offset_y, offset_x, img_scale, self.target_size, self.interp) diff --git a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/tests/__init__.py b/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/tests/__init__.py deleted file mode 100644 index 9020c2df23e2af280b7bb168b996ae9eaf312eb8..0000000000000000000000000000000000000000 --- a/spaces/OpenGVLab/InternGPT/iGPT/models/grit_src/third_party/CenterNet2/tests/__init__.py +++ /dev/null @@ -1 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
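For reference, a minimal usage sketch of the `EfficientDetResizeCrop` augmentation defined above. The `size`/`scale` values and the dummy image are illustrative assumptions, the import path assumes the CenterNet2 project root is on `PYTHONPATH`, and `apply_image` is assumed to follow the standard fvcore/detectron2 `Transform` interface implemented by `EfficientDetResizeCropTransform` in the accompanying `custom_transform.py` (not shown here).

```python
# Hypothetical usage sketch; parameter values and the dummy image are assumptions.
import numpy as np

from centernet.data.transforms.custom_augmentation_impl import EfficientDetResizeCrop

aug = EfficientDetResizeCrop(size=640, scale=(0.8, 1.2))
image = np.zeros((480, 854, 3), dtype=np.uint8)  # dummy HxWxC training image

# Sample a random scale factor and crop offset for this image ...
transform = aug.get_transform(image)
# ... then apply the same deterministic transform to the image (and, if the
# transform implements them, to boxes/masks) so all targets stay aligned.
resized = transform.apply_image(image)
print(resized.shape)
```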
diff --git a/spaces/OpenGVLab/InternGPT/third-party/lama/bin/models/ade20k/segm_lib/utils/__init__.py b/spaces/OpenGVLab/InternGPT/third-party/lama/bin/models/ade20k/segm_lib/utils/__init__.py deleted file mode 100644 index abe3cbe49477fe37d4fc16249de8a10f4fb4a013..0000000000000000000000000000000000000000 --- a/spaces/OpenGVLab/InternGPT/third-party/lama/bin/models/ade20k/segm_lib/utils/__init__.py +++ /dev/null @@ -1 +0,0 @@ -from .th import * diff --git a/spaces/Pearx/ChatGPT-Assistant/helper.py b/spaces/Pearx/ChatGPT-Assistant/helper.py deleted file mode 100644 index d26716f908fb9c1a155fdf9a0038b0835ff3952f..0000000000000000000000000000000000000000 --- a/spaces/Pearx/ChatGPT-Assistant/helper.py +++ /dev/null @@ -1,157 +0,0 @@ -import json -import os -import re -import uuid -import streamlit as st -import pandas as pd -from custom import * -import copy -import io - - -def get_history_chats(path: str) -> list: - if "apikey" in st.secrets: - if not os.path.exists(path): - os.makedirs(path) - files = [f for f in os.listdir(f'./{path}') if f.endswith('.json')] - files_with_time = [(f, os.stat(f'./{path}/' + f).st_ctime) for f in files] - sorted_files = sorted(files_with_time, key=lambda x: x[1], reverse=True) - chat_names = [os.path.splitext(f[0])[0] for f in sorted_files] - if len(chat_names) == 0: - chat_names.append('New Chat_' + str(uuid.uuid4())) - else: - chat_names = ['New Chat_' + str(uuid.uuid4())] - return chat_names - - -def save_data(path: str, file_name: str, history: list, paras: dict, contexts: dict, **kwargs): - if not os.path.exists(path): - os.makedirs(path) - with open(f"./{path}/{file_name}.json", 'w', encoding='utf-8') as f: - json.dump({"history": history, "paras": paras, "contexts": contexts, **kwargs}, f) - - -def remove_data(path: str, chat_name: str): - try: - os.remove(f"./{path}/{chat_name}.json") - except FileNotFoundError: - pass - # Clear the cached session state - try: - st.session_state.pop('history' + chat_name) - for item in ["context_select", "context_input", "context_level", *initial_content_all['paras']]: - st.session_state.pop(item + chat_name + "value") - except KeyError: - pass - - -def load_data(path: str, file_name: str) -> dict: - try: - with open(f"./{path}/{file_name}.json", 'r', encoding='utf-8') as f: - data = json.load(f) - return data - except FileNotFoundError: - content = copy.deepcopy(initial_content_all) - if "apikey" in st.secrets: - with open(f"./{path}/{file_name}.json", 'w', encoding='utf-8') as f: - f.write(json.dumps(content)) - return content - - -def show_each_message(message: str, role: str, area=None): - if area is None: - area = [st.markdown] * 2 - if role == 'user': - icon = user_svg - name = user_name - background_color = user_background_color - else: - icon = gpt_svg - name = gpt_name - background_color = gpt_background_color - message = colon_correction( - url_correction(message) - ) - area[0](f"\n
    {icon}

    {name}:

    ", unsafe_allow_html=True) - area[1](f"""
    \n\n{message}""", - unsafe_allow_html=True) - - -def show_messages(messages: list): - for each in messages: - if (each["role"] == "user") or (each["role"] == "assistant"): - show_each_message(each["content"], each["role"]) - if each["role"] == "assistant": - st.write("---") - - -# 根据context_level提取history -def get_history_input(history: list, level: int) -> list: - if level != 0: - df_history = pd.DataFrame(history) - df_system = df_history.query('role=="system"') - df_input = df_history.query('role!="system"') - df_input = df_input[-level * 2:] - res = pd.concat([df_system, df_input], ignore_index=True).to_dict('records') - else: - res = [] - return res - - -# 去除#号右边的空格 -# def remove_hashtag_right__space(text: str) -> str: -# text = re.sub(r"(#+)\s*", r"\1", text) -# return text - - -# 提取文本 -def extract_chars(text: str, num: int) -> str: - char_num = 0 - chars = '' - for char in text: - # 汉字算两个字符 - if '\u4e00' <= char <= '\u9fff': - char_num += 2 - else: - char_num += 1 - chars += char - if char_num >= num: - break - return chars - - -@st.cache_data(max_entries=20, show_spinner=False) -def download_history(history: list): - md_text = "" - for msg in history: - if msg['role'] == 'user': - md_text += f'## {user_name}:\n{msg["content"]}\n' - elif msg['role'] == 'assistant': - md_text += f'## {gpt_name}:\n{msg["content"]}\n' - output = io.BytesIO() - output.write(md_text.encode('utf-8')) - output.seek(0) - return output - - -def filename_correction(filename: str) -> str: - pattern = r'[^\w\.-]' - filename = re.sub(pattern, '', filename) - return filename - - -def url_correction(text: str) -> str: - pattern = r'((?:http[s]?://|www\.)(?:[a-zA-Z0-9]|[$-_\~#!])+)' - text = re.sub(pattern, r' \g<1> ', text) - return text - - -# st的markdown会错误渲染英文引号加英文字符,例如 :abc -def colon_correction(text): - pattern = r':[a-zA-Z]' - if re.search(pattern, text): - text = text.replace(":", ":") - pattern = r'`([^`]*):([^`]*)`|```([^`]*):([^`]*)```' - text = re.sub(pattern, lambda m: m.group(0).replace(':', ':') if ':' in m.group(0) else m.group(0), - text) - return text diff --git a/spaces/Pengyey/bingo-chuchu/src/components/chat-message.tsx b/spaces/Pengyey/bingo-chuchu/src/components/chat-message.tsx deleted file mode 100644 index bf272d8d7005cfd06c53bd213e09ea217e803549..0000000000000000000000000000000000000000 --- a/spaces/Pengyey/bingo-chuchu/src/components/chat-message.tsx +++ /dev/null @@ -1,93 +0,0 @@ -import remarkGfm from 'remark-gfm' -import remarkMath from 'remark-math' -import supersub from 'remark-supersub' -import remarkBreaks from 'remark-breaks' -import { cn } from '@/lib/utils' -import { CodeBlock } from '@/components/ui/codeblock' -import { MemoizedReactMarkdown } from '@/components/markdown' -import { LearnMore } from './learn-more' -import { ChatMessageModel } from '@/lib/bots/bing/types' -import { useEffect } from 'react' -import { TurnCounter } from './turn-counter' - -export interface ChatMessageProps { - message: ChatMessageModel -} - -export function ChatMessage({ message, ...props }: ChatMessageProps) { - useEffect(() => { - if (document.body.scrollHeight - window.innerHeight - window.scrollY - 200 < 0) { - window.scrollBy(0, 200) - } - }, [message.text]) - - return message.text ? ( -
    -
    - {obj.alt} - } - } catch (e) { - } - return {obj.alt} - }, - p({ children }) { - return

    {children}

    - }, - code({ node, inline, className, children, ...props }) { - if (children.length) { - if (children[0] == '▍') { - return ( - - ) - } - - children[0] = (children[0] as string).replace('`▍`', '▍') - } - - const match = /language-(\w+)/.exec(className || '') - - if (inline) { - return ( - - {children} - - ) - } - - return ( - - ) - } - }} - > - {message.text} -
    -
    -
    - {message.author === 'bot' && } - {message.author === 'bot' && } -
    -
    - ) : null -} diff --git a/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/maskrcnn_benchmark/data/datasets/evaluation/lvis/lvis.py b/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/maskrcnn_benchmark/data/datasets/evaluation/lvis/lvis.py deleted file mode 100644 index 9cad8004bfbf962d03927f1826f1525b2c93789b..0000000000000000000000000000000000000000 --- a/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/maskrcnn_benchmark/data/datasets/evaluation/lvis/lvis.py +++ /dev/null @@ -1,207 +0,0 @@ -# Copyright (c) Aishwarya Kamath & Nicolas Carion. Licensed under the Apache License 2.0. All Rights Reserved -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved -import json -import os -import time -from collections import defaultdict - -import pycocotools.mask as mask_utils -import torchvision -from PIL import Image - - - -def _isArrayLike(obj): - return hasattr(obj, "__iter__") and hasattr(obj, "__len__") - - -class LVIS: - def __init__(self, annotation_path=None): - """Class for reading and visualizing annotations. - Args: - annotation_path (str): location of annotation file - """ - self.anns = {} - self.cats = {} - self.imgs = {} - self.img_ann_map = defaultdict(list) - self.cat_img_map = defaultdict(list) - self.dataset = {} - - if annotation_path is not None: - print("Loading annotations.") - - tic = time.time() - self.dataset = self._load_json(annotation_path) - print("Done (t={:0.2f}s)".format(time.time() - tic)) - - assert type(self.dataset) == dict, "Annotation file format {} not supported.".format(type(self.dataset)) - self._create_index() - - def _load_json(self, path): - with open(path, "r") as f: - return json.load(f) - - def _create_index(self): - print("Creating index.") - - self.img_ann_map = defaultdict(list) - self.cat_img_map = defaultdict(list) - - self.anns = {} - self.cats = {} - self.imgs = {} - - for ann in self.dataset["annotations"]: - self.img_ann_map[ann["image_id"]].append(ann) - self.anns[ann["id"]] = ann - - for img in self.dataset["images"]: - self.imgs[img["id"]] = img - - for cat in self.dataset["categories"]: - self.cats[cat["id"]] = cat - - for ann in self.dataset["annotations"]: - self.cat_img_map[ann["category_id"]].append(ann["image_id"]) - - print("Index created.") - - def get_ann_ids(self, img_ids=None, cat_ids=None, area_rng=None): - """Get ann ids that satisfy given filter conditions. - Args: - img_ids (int array): get anns for given imgs - cat_ids (int array): get anns for given cats - area_rng (float array): get anns for a given area range. e.g [0, inf] - Returns: - ids (int array): integer array of ann ids - """ - if img_ids is not None: - img_ids = img_ids if _isArrayLike(img_ids) else [img_ids] - if cat_ids is not None: - cat_ids = cat_ids if _isArrayLike(cat_ids) else [cat_ids] - anns = [] - if img_ids is not None: - for img_id in img_ids: - anns.extend(self.img_ann_map[img_id]) - else: - anns = self.dataset["annotations"] - - # return early if no more filtering required - if cat_ids is None and area_rng is None: - return [_ann["id"] for _ann in anns] - - cat_ids = set(cat_ids) - - if area_rng is None: - area_rng = [0, float("inf")] - - ann_ids = [ - _ann["id"] - for _ann in anns - if _ann["category_id"] in cat_ids and _ann["area"] > area_rng[0] and _ann["area"] < area_rng[1] - ] - return ann_ids - - def get_cat_ids(self): - """Get all category ids. - Returns: - ids (int array): integer array of category ids - """ - return list(self.cats.keys()) - - def get_img_ids(self): - """Get all img ids. 
- Returns: - ids (int array): integer array of image ids - """ - return list(self.imgs.keys()) - - def _load_helper(self, _dict, ids): - if ids is None: - return list(_dict.values()) - elif _isArrayLike(ids): - return [_dict[id] for id in ids] - else: - return [_dict[ids]] - - def load_anns(self, ids=None): - """Load anns with the specified ids. If ids=None load all anns. - Args: - ids (int array): integer array of annotation ids - Returns: - anns (dict array) : loaded annotation objects - """ - return self._load_helper(self.anns, ids) - - def load_cats(self, ids): - """Load categories with the specified ids. If ids=None load all - categories. - Args: - ids (int array): integer array of category ids - Returns: - cats (dict array) : loaded category dicts - """ - return self._load_helper(self.cats, ids) - - def load_imgs(self, ids): - """Load categories with the specified ids. If ids=None load all images. - Args: - ids (int array): integer array of image ids - Returns: - imgs (dict array) : loaded image dicts - """ - return self._load_helper(self.imgs, ids) - - def download(self, save_dir, img_ids=None): - """Download images from mscoco.org server. - Args: - save_dir (str): dir to save downloaded images - img_ids (int array): img ids of images to download - """ - imgs = self.load_imgs(img_ids) - - if not os.path.exists(save_dir): - os.makedirs(save_dir) - - for img in imgs: - file_name = os.path.join(save_dir, img["file_name"]) - if not os.path.exists(file_name): - from urllib.request import urlretrieve - - urlretrieve(img["coco_url"], file_name) - - def ann_to_rle(self, ann): - """Convert annotation which can be polygons, uncompressed RLE to RLE. - Args: - ann (dict) : annotation object - Returns: - ann (rle) - """ - img_data = self.imgs[ann["image_id"]] - h, w = img_data["height"], img_data["width"] - segm = ann["segmentation"] - if isinstance(segm, list): - # polygon -- a single object might consist of multiple parts - # we merge all parts into one mask rle code - rles = mask_utils.frPyObjects(segm, h, w) - rle = mask_utils.merge(rles) - elif isinstance(segm["counts"], list): - # uncompressed RLE - rle = mask_utils.frPyObjects(segm, h, w) - else: - # rle - rle = ann["segmentation"] - return rle - - def ann_to_mask(self, ann): - """Convert annotation which can be polygons, uncompressed RLE, or RLE - to binary mask. 
- Args: - ann (dict) : annotation object - Returns: - binary mask (numpy 2D array) - """ - rle = self.ann_to_rle(ann) - return mask_utils.decode(rle) - diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/rich/_inspect.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/rich/_inspect.py deleted file mode 100644 index 30446ceb3f0235721e435f5fbd53f2e306f078cd..0000000000000000000000000000000000000000 --- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/rich/_inspect.py +++ /dev/null @@ -1,270 +0,0 @@ -from __future__ import absolute_import - -import inspect -from inspect import cleandoc, getdoc, getfile, isclass, ismodule, signature -from typing import Any, Collection, Iterable, Optional, Tuple, Type, Union - -from .console import Group, RenderableType -from .control import escape_control_codes -from .highlighter import ReprHighlighter -from .jupyter import JupyterMixin -from .panel import Panel -from .pretty import Pretty -from .table import Table -from .text import Text, TextType - - -def _first_paragraph(doc: str) -> str: - """Get the first paragraph from a docstring.""" - paragraph, _, _ = doc.partition("\n\n") - return paragraph - - -class Inspect(JupyterMixin): - """A renderable to inspect any Python Object. - - Args: - obj (Any): An object to inspect. - title (str, optional): Title to display over inspect result, or None use type. Defaults to None. - help (bool, optional): Show full help text rather than just first paragraph. Defaults to False. - methods (bool, optional): Enable inspection of callables. Defaults to False. - docs (bool, optional): Also render doc strings. Defaults to True. - private (bool, optional): Show private attributes (beginning with underscore). Defaults to False. - dunder (bool, optional): Show attributes starting with double underscore. Defaults to False. - sort (bool, optional): Sort attributes alphabetically. Defaults to True. - all (bool, optional): Show all attributes. Defaults to False. - value (bool, optional): Pretty print value of object. Defaults to True. - """ - - def __init__( - self, - obj: Any, - *, - title: Optional[TextType] = None, - help: bool = False, - methods: bool = False, - docs: bool = True, - private: bool = False, - dunder: bool = False, - sort: bool = True, - all: bool = True, - value: bool = True, - ) -> None: - self.highlighter = ReprHighlighter() - self.obj = obj - self.title = title or self._make_title(obj) - if all: - methods = private = dunder = True - self.help = help - self.methods = methods - self.docs = docs or help - self.private = private or dunder - self.dunder = dunder - self.sort = sort - self.value = value - - def _make_title(self, obj: Any) -> Text: - """Make a default title.""" - title_str = ( - str(obj) - if (isclass(obj) or callable(obj) or ismodule(obj)) - else str(type(obj)) - ) - title_text = self.highlighter(title_str) - return title_text - - def __rich__(self) -> Panel: - return Panel.fit( - Group(*self._render()), - title=self.title, - border_style="scope.border", - padding=(0, 1), - ) - - def _get_signature(self, name: str, obj: Any) -> Optional[Text]: - """Get a signature for a callable.""" - try: - _signature = str(signature(obj)) + ":" - except ValueError: - _signature = "(...)" - except TypeError: - return None - - source_filename: Optional[str] = None - try: - source_filename = getfile(obj) - except (OSError, TypeError): - # OSError is raised if obj has no source file, e.g. when defined in REPL. 
- pass - - callable_name = Text(name, style="inspect.callable") - if source_filename: - callable_name.stylize(f"link file://{source_filename}") - signature_text = self.highlighter(_signature) - - qualname = name or getattr(obj, "__qualname__", name) - - # If obj is a module, there may be classes (which are callable) to display - if inspect.isclass(obj): - prefix = "class" - elif inspect.iscoroutinefunction(obj): - prefix = "async def" - else: - prefix = "def" - - qual_signature = Text.assemble( - (f"{prefix} ", f"inspect.{prefix.replace(' ', '_')}"), - (qualname, "inspect.callable"), - signature_text, - ) - - return qual_signature - - def _render(self) -> Iterable[RenderableType]: - """Render object.""" - - def sort_items(item: Tuple[str, Any]) -> Tuple[bool, str]: - key, (_error, value) = item - return (callable(value), key.strip("_").lower()) - - def safe_getattr(attr_name: str) -> Tuple[Any, Any]: - """Get attribute or any exception.""" - try: - return (None, getattr(obj, attr_name)) - except Exception as error: - return (error, None) - - obj = self.obj - keys = dir(obj) - total_items = len(keys) - if not self.dunder: - keys = [key for key in keys if not key.startswith("__")] - if not self.private: - keys = [key for key in keys if not key.startswith("_")] - not_shown_count = total_items - len(keys) - items = [(key, safe_getattr(key)) for key in keys] - if self.sort: - items.sort(key=sort_items) - - items_table = Table.grid(padding=(0, 1), expand=False) - items_table.add_column(justify="right") - add_row = items_table.add_row - highlighter = self.highlighter - - if callable(obj): - signature = self._get_signature("", obj) - if signature is not None: - yield signature - yield "" - - if self.docs: - _doc = self._get_formatted_doc(obj) - if _doc is not None: - doc_text = Text(_doc, style="inspect.help") - doc_text = highlighter(doc_text) - yield doc_text - yield "" - - if self.value and not (isclass(obj) or callable(obj) or ismodule(obj)): - yield Panel( - Pretty(obj, indent_guides=True, max_length=10, max_string=60), - border_style="inspect.value.border", - ) - yield "" - - for key, (error, value) in items: - key_text = Text.assemble( - ( - key, - "inspect.attr.dunder" if key.startswith("__") else "inspect.attr", - ), - (" =", "inspect.equals"), - ) - if error is not None: - warning = key_text.copy() - warning.stylize("inspect.error") - add_row(warning, highlighter(repr(error))) - continue - - if callable(value): - if not self.methods: - continue - - _signature_text = self._get_signature(key, value) - if _signature_text is None: - add_row(key_text, Pretty(value, highlighter=highlighter)) - else: - if self.docs: - docs = self._get_formatted_doc(value) - if docs is not None: - _signature_text.append("\n" if "\n" in docs else " ") - doc = highlighter(docs) - doc.stylize("inspect.doc") - _signature_text.append(doc) - - add_row(key_text, _signature_text) - else: - add_row(key_text, Pretty(value, highlighter=highlighter)) - if items_table.row_count: - yield items_table - elif not_shown_count: - yield Text.from_markup( - f"[b cyan]{not_shown_count}[/][i] attribute(s) not shown.[/i] " - f"Run [b][magenta]inspect[/]([not b]inspect[/])[/b] for options." - ) - - def _get_formatted_doc(self, object_: Any) -> Optional[str]: - """ - Extract the docstring of an object, process it and returns it. - The processing consists in cleaning up the doctring's indentation, - taking only its 1st paragraph if `self.help` is not True, - and escape its control codes. 
- - Args: - object_ (Any): the object to get the docstring from. - - Returns: - Optional[str]: the processed docstring, or None if no docstring was found. - """ - docs = getdoc(object_) - if docs is None: - return None - docs = cleandoc(docs).strip() - if not self.help: - docs = _first_paragraph(docs) - return escape_control_codes(docs) - - -def get_object_types_mro(obj: Union[object, Type[Any]]) -> Tuple[type, ...]: - """Returns the MRO of an object's class, or of the object itself if it's a class.""" - if not hasattr(obj, "__mro__"): - # N.B. we cannot use `if type(obj) is type` here because it doesn't work with - # some types of classes, such as the ones that use abc.ABCMeta. - obj = type(obj) - return getattr(obj, "__mro__", ()) - - -def get_object_types_mro_as_strings(obj: object) -> Collection[str]: - """ - Returns the MRO of an object's class as full qualified names, or of the object itself if it's a class. - - Examples: - `object_types_mro_as_strings(JSONDecoder)` will return `['json.decoder.JSONDecoder', 'builtins.object']` - """ - return [ - f'{getattr(type_, "__module__", "")}.{getattr(type_, "__qualname__", "")}' - for type_ in get_object_types_mro(obj) - ] - - -def is_object_one_of_types( - obj: object, fully_qualified_types_names: Collection[str] -) -> bool: - """ - Returns `True` if the given object's class (or the object itself, if it's a class) has one of the - fully qualified names in its MRO. - """ - for type_name in get_object_types_mro_as_strings(obj): - if type_name in fully_qualified_types_names: - return True - return False diff --git a/spaces/RdnUser77/SpacIO_v1/all_models.py b/spaces/RdnUser77/SpacIO_v1/all_models.py deleted file mode 100644 index d5a3abe0892831da8873f2a4e55ee3fb4c1564cf..0000000000000000000000000000000000000000 --- a/spaces/RdnUser77/SpacIO_v1/all_models.py +++ /dev/null @@ -1,16 +0,0 @@ -models =[ - "Yntec/epiCPhotoGasm", -"digiplay/perfectlevel10", -"Yntec/realistic-vision-v12", - "Yntec/aMovieX", - "digiplay/AbsoluteReality_v1.8.1", - "Yntec/photoMovieRealistic", - "tensor-diffusion/chilloutmix-NI", - "dreamlike-art/dreamlike-photoreal-2.0", - "Yntec/photoMovieX", - "Yntec/CinematicReality", - "Yntec/photoMovieXFinal", - "Leekp/toonmaker3", - "shantanudave/shantanuimagessept10", - "shantanudave/autotrain-adv-15sept" -] \ No newline at end of file diff --git a/spaces/Reha2704/VToonify/vtoonify/model/bisenet/model.py b/spaces/Reha2704/VToonify/vtoonify/model/bisenet/model.py deleted file mode 100644 index e61c0eb20aaa63065cc17bbcfe27b245f1f0dbf5..0000000000000000000000000000000000000000 --- a/spaces/Reha2704/VToonify/vtoonify/model/bisenet/model.py +++ /dev/null @@ -1,283 +0,0 @@ -#!/usr/bin/python -# -*- encoding: utf-8 -*- - - -import torch -import torch.nn as nn -import torch.nn.functional as F -import torchvision - -from model.bisenet.resnet import Resnet18 -# from modules.bn import InPlaceABNSync as BatchNorm2d - - -class ConvBNReLU(nn.Module): - def __init__(self, in_chan, out_chan, ks=3, stride=1, padding=1, *args, **kwargs): - super(ConvBNReLU, self).__init__() - self.conv = nn.Conv2d(in_chan, - out_chan, - kernel_size = ks, - stride = stride, - padding = padding, - bias = False) - self.bn = nn.BatchNorm2d(out_chan) - self.init_weight() - - def forward(self, x): - x = self.conv(x) - x = F.relu(self.bn(x)) - return x - - def init_weight(self): - for ly in self.children(): - if isinstance(ly, nn.Conv2d): - nn.init.kaiming_normal_(ly.weight, a=1) - if not ly.bias is None: nn.init.constant_(ly.bias, 0) - -class BiSeNetOutput(nn.Module): - 
def __init__(self, in_chan, mid_chan, n_classes, *args, **kwargs): - super(BiSeNetOutput, self).__init__() - self.conv = ConvBNReLU(in_chan, mid_chan, ks=3, stride=1, padding=1) - self.conv_out = nn.Conv2d(mid_chan, n_classes, kernel_size=1, bias=False) - self.init_weight() - - def forward(self, x): - x = self.conv(x) - x = self.conv_out(x) - return x - - def init_weight(self): - for ly in self.children(): - if isinstance(ly, nn.Conv2d): - nn.init.kaiming_normal_(ly.weight, a=1) - if not ly.bias is None: nn.init.constant_(ly.bias, 0) - - def get_params(self): - wd_params, nowd_params = [], [] - for name, module in self.named_modules(): - if isinstance(module, nn.Linear) or isinstance(module, nn.Conv2d): - wd_params.append(module.weight) - if not module.bias is None: - nowd_params.append(module.bias) - elif isinstance(module, nn.BatchNorm2d): - nowd_params += list(module.parameters()) - return wd_params, nowd_params - - -class AttentionRefinementModule(nn.Module): - def __init__(self, in_chan, out_chan, *args, **kwargs): - super(AttentionRefinementModule, self).__init__() - self.conv = ConvBNReLU(in_chan, out_chan, ks=3, stride=1, padding=1) - self.conv_atten = nn.Conv2d(out_chan, out_chan, kernel_size= 1, bias=False) - self.bn_atten = nn.BatchNorm2d(out_chan) - self.sigmoid_atten = nn.Sigmoid() - self.init_weight() - - def forward(self, x): - feat = self.conv(x) - atten = F.avg_pool2d(feat, feat.size()[2:]) - atten = self.conv_atten(atten) - atten = self.bn_atten(atten) - atten = self.sigmoid_atten(atten) - out = torch.mul(feat, atten) - return out - - def init_weight(self): - for ly in self.children(): - if isinstance(ly, nn.Conv2d): - nn.init.kaiming_normal_(ly.weight, a=1) - if not ly.bias is None: nn.init.constant_(ly.bias, 0) - - -class ContextPath(nn.Module): - def __init__(self, *args, **kwargs): - super(ContextPath, self).__init__() - self.resnet = Resnet18() - self.arm16 = AttentionRefinementModule(256, 128) - self.arm32 = AttentionRefinementModule(512, 128) - self.conv_head32 = ConvBNReLU(128, 128, ks=3, stride=1, padding=1) - self.conv_head16 = ConvBNReLU(128, 128, ks=3, stride=1, padding=1) - self.conv_avg = ConvBNReLU(512, 128, ks=1, stride=1, padding=0) - - self.init_weight() - - def forward(self, x): - H0, W0 = x.size()[2:] - feat8, feat16, feat32 = self.resnet(x) - H8, W8 = feat8.size()[2:] - H16, W16 = feat16.size()[2:] - H32, W32 = feat32.size()[2:] - - avg = F.avg_pool2d(feat32, feat32.size()[2:]) - avg = self.conv_avg(avg) - avg_up = F.interpolate(avg, (H32, W32), mode='nearest') - - feat32_arm = self.arm32(feat32) - feat32_sum = feat32_arm + avg_up - feat32_up = F.interpolate(feat32_sum, (H16, W16), mode='nearest') - feat32_up = self.conv_head32(feat32_up) - - feat16_arm = self.arm16(feat16) - feat16_sum = feat16_arm + feat32_up - feat16_up = F.interpolate(feat16_sum, (H8, W8), mode='nearest') - feat16_up = self.conv_head16(feat16_up) - - return feat8, feat16_up, feat32_up # x8, x8, x16 - - def init_weight(self): - for ly in self.children(): - if isinstance(ly, nn.Conv2d): - nn.init.kaiming_normal_(ly.weight, a=1) - if not ly.bias is None: nn.init.constant_(ly.bias, 0) - - def get_params(self): - wd_params, nowd_params = [], [] - for name, module in self.named_modules(): - if isinstance(module, (nn.Linear, nn.Conv2d)): - wd_params.append(module.weight) - if not module.bias is None: - nowd_params.append(module.bias) - elif isinstance(module, nn.BatchNorm2d): - nowd_params += list(module.parameters()) - return wd_params, nowd_params - - -### This is not used, since I 
replace this with the resnet feature with the same size -class SpatialPath(nn.Module): - def __init__(self, *args, **kwargs): - super(SpatialPath, self).__init__() - self.conv1 = ConvBNReLU(3, 64, ks=7, stride=2, padding=3) - self.conv2 = ConvBNReLU(64, 64, ks=3, stride=2, padding=1) - self.conv3 = ConvBNReLU(64, 64, ks=3, stride=2, padding=1) - self.conv_out = ConvBNReLU(64, 128, ks=1, stride=1, padding=0) - self.init_weight() - - def forward(self, x): - feat = self.conv1(x) - feat = self.conv2(feat) - feat = self.conv3(feat) - feat = self.conv_out(feat) - return feat - - def init_weight(self): - for ly in self.children(): - if isinstance(ly, nn.Conv2d): - nn.init.kaiming_normal_(ly.weight, a=1) - if not ly.bias is None: nn.init.constant_(ly.bias, 0) - - def get_params(self): - wd_params, nowd_params = [], [] - for name, module in self.named_modules(): - if isinstance(module, nn.Linear) or isinstance(module, nn.Conv2d): - wd_params.append(module.weight) - if not module.bias is None: - nowd_params.append(module.bias) - elif isinstance(module, nn.BatchNorm2d): - nowd_params += list(module.parameters()) - return wd_params, nowd_params - - -class FeatureFusionModule(nn.Module): - def __init__(self, in_chan, out_chan, *args, **kwargs): - super(FeatureFusionModule, self).__init__() - self.convblk = ConvBNReLU(in_chan, out_chan, ks=1, stride=1, padding=0) - self.conv1 = nn.Conv2d(out_chan, - out_chan//4, - kernel_size = 1, - stride = 1, - padding = 0, - bias = False) - self.conv2 = nn.Conv2d(out_chan//4, - out_chan, - kernel_size = 1, - stride = 1, - padding = 0, - bias = False) - self.relu = nn.ReLU(inplace=True) - self.sigmoid = nn.Sigmoid() - self.init_weight() - - def forward(self, fsp, fcp): - fcat = torch.cat([fsp, fcp], dim=1) - feat = self.convblk(fcat) - atten = F.avg_pool2d(feat, feat.size()[2:]) - atten = self.conv1(atten) - atten = self.relu(atten) - atten = self.conv2(atten) - atten = self.sigmoid(atten) - feat_atten = torch.mul(feat, atten) - feat_out = feat_atten + feat - return feat_out - - def init_weight(self): - for ly in self.children(): - if isinstance(ly, nn.Conv2d): - nn.init.kaiming_normal_(ly.weight, a=1) - if not ly.bias is None: nn.init.constant_(ly.bias, 0) - - def get_params(self): - wd_params, nowd_params = [], [] - for name, module in self.named_modules(): - if isinstance(module, nn.Linear) or isinstance(module, nn.Conv2d): - wd_params.append(module.weight) - if not module.bias is None: - nowd_params.append(module.bias) - elif isinstance(module, nn.BatchNorm2d): - nowd_params += list(module.parameters()) - return wd_params, nowd_params - - -class BiSeNet(nn.Module): - def __init__(self, n_classes, *args, **kwargs): - super(BiSeNet, self).__init__() - self.cp = ContextPath() - ## here self.sp is deleted - self.ffm = FeatureFusionModule(256, 256) - self.conv_out = BiSeNetOutput(256, 256, n_classes) - self.conv_out16 = BiSeNetOutput(128, 64, n_classes) - self.conv_out32 = BiSeNetOutput(128, 64, n_classes) - self.init_weight() - - def forward(self, x): - H, W = x.size()[2:] - feat_res8, feat_cp8, feat_cp16 = self.cp(x) # here return res3b1 feature - feat_sp = feat_res8 # use res3b1 feature to replace spatial path feature - feat_fuse = self.ffm(feat_sp, feat_cp8) - - feat_out = self.conv_out(feat_fuse) - feat_out16 = self.conv_out16(feat_cp8) - feat_out32 = self.conv_out32(feat_cp16) - - feat_out = F.interpolate(feat_out, (H, W), mode='bilinear', align_corners=True) - feat_out16 = F.interpolate(feat_out16, (H, W), mode='bilinear', align_corners=True) - feat_out32 = 
F.interpolate(feat_out32, (H, W), mode='bilinear', align_corners=True) - return feat_out, feat_out16, feat_out32 - - def init_weight(self): - for ly in self.children(): - if isinstance(ly, nn.Conv2d): - nn.init.kaiming_normal_(ly.weight, a=1) - if not ly.bias is None: nn.init.constant_(ly.bias, 0) - - def get_params(self): - wd_params, nowd_params, lr_mul_wd_params, lr_mul_nowd_params = [], [], [], [] - for name, child in self.named_children(): - child_wd_params, child_nowd_params = child.get_params() - if isinstance(child, FeatureFusionModule) or isinstance(child, BiSeNetOutput): - lr_mul_wd_params += child_wd_params - lr_mul_nowd_params += child_nowd_params - else: - wd_params += child_wd_params - nowd_params += child_nowd_params - return wd_params, nowd_params, lr_mul_wd_params, lr_mul_nowd_params - - -if __name__ == "__main__": - net = BiSeNet(19) - net.cuda() - net.eval() - in_ten = torch.randn(16, 3, 640, 480).cuda() - out, out16, out32 = net(in_ten) - print(out.shape) - - net.get_params() diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmcv/cnn/bricks/depthwise_separable_conv_module.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmcv/cnn/bricks/depthwise_separable_conv_module.py deleted file mode 100644 index 722d5d8d71f75486e2db3008907c4eadfca41d63..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmcv/cnn/bricks/depthwise_separable_conv_module.py +++ /dev/null @@ -1,96 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch.nn as nn - -from .conv_module import ConvModule - - -class DepthwiseSeparableConvModule(nn.Module): - """Depthwise separable convolution module. - - See https://arxiv.org/pdf/1704.04861.pdf for details. - - This module can replace a ConvModule with the conv block replaced by two - conv block: depthwise conv block and pointwise conv block. The depthwise - conv block contains depthwise-conv/norm/activation layers. The pointwise - conv block contains pointwise-conv/norm/activation layers. It should be - noted that there will be norm/activation layer in the depthwise conv block - if `norm_cfg` and `act_cfg` are specified. - - Args: - in_channels (int): Number of channels in the input feature map. - Same as that in ``nn._ConvNd``. - out_channels (int): Number of channels produced by the convolution. - Same as that in ``nn._ConvNd``. - kernel_size (int | tuple[int]): Size of the convolving kernel. - Same as that in ``nn._ConvNd``. - stride (int | tuple[int]): Stride of the convolution. - Same as that in ``nn._ConvNd``. Default: 1. - padding (int | tuple[int]): Zero-padding added to both sides of - the input. Same as that in ``nn._ConvNd``. Default: 0. - dilation (int | tuple[int]): Spacing between kernel elements. - Same as that in ``nn._ConvNd``. Default: 1. - norm_cfg (dict): Default norm config for both depthwise ConvModule and - pointwise ConvModule. Default: None. - act_cfg (dict): Default activation config for both depthwise ConvModule - and pointwise ConvModule. Default: dict(type='ReLU'). - dw_norm_cfg (dict): Norm config of depthwise ConvModule. If it is - 'default', it will be the same as `norm_cfg`. Default: 'default'. - dw_act_cfg (dict): Activation config of depthwise ConvModule. If it is - 'default', it will be the same as `act_cfg`. Default: 'default'. - pw_norm_cfg (dict): Norm config of pointwise ConvModule. If it is - 'default', it will be the same as `norm_cfg`. Default: 'default'. - pw_act_cfg (dict): Activation config of pointwise ConvModule. 
If it is - 'default', it will be the same as `act_cfg`. Default: 'default'. - kwargs (optional): Other shared arguments for depthwise and pointwise - ConvModule. See ConvModule for ref. - """ - - def __init__(self, - in_channels, - out_channels, - kernel_size, - stride=1, - padding=0, - dilation=1, - norm_cfg=None, - act_cfg=dict(type='ReLU'), - dw_norm_cfg='default', - dw_act_cfg='default', - pw_norm_cfg='default', - pw_act_cfg='default', - **kwargs): - super(DepthwiseSeparableConvModule, self).__init__() - assert 'groups' not in kwargs, 'groups should not be specified' - - # if norm/activation config of depthwise/pointwise ConvModule is not - # specified, use default config. - dw_norm_cfg = dw_norm_cfg if dw_norm_cfg != 'default' else norm_cfg - dw_act_cfg = dw_act_cfg if dw_act_cfg != 'default' else act_cfg - pw_norm_cfg = pw_norm_cfg if pw_norm_cfg != 'default' else norm_cfg - pw_act_cfg = pw_act_cfg if pw_act_cfg != 'default' else act_cfg - - # depthwise convolution - self.depthwise_conv = ConvModule( - in_channels, - in_channels, - kernel_size, - stride=stride, - padding=padding, - dilation=dilation, - groups=in_channels, - norm_cfg=dw_norm_cfg, - act_cfg=dw_act_cfg, - **kwargs) - - self.pointwise_conv = ConvModule( - in_channels, - out_channels, - 1, - norm_cfg=pw_norm_cfg, - act_cfg=pw_act_cfg, - **kwargs) - - def forward(self, x): - x = self.depthwise_conv(x) - x = self.pointwise_conv(x) - return x diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/models/losses/iou_loss.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/models/losses/iou_loss.py deleted file mode 100644 index eba6f18b80981ca891c1add37007e6bf478c651f..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/models/losses/iou_loss.py +++ /dev/null @@ -1,436 +0,0 @@ -import math - -import mmcv -import torch -import torch.nn as nn - -from mmdet.core import bbox_overlaps -from ..builder import LOSSES -from .utils import weighted_loss - - -@mmcv.jit(derivate=True, coderize=True) -@weighted_loss -def iou_loss(pred, target, linear=False, eps=1e-6): - """IoU loss. - - Computing the IoU loss between a set of predicted bboxes and target bboxes. - The loss is calculated as negative log of IoU. - - Args: - pred (torch.Tensor): Predicted bboxes of format (x1, y1, x2, y2), - shape (n, 4). - target (torch.Tensor): Corresponding gt bboxes, shape (n, 4). - linear (bool, optional): If True, use linear scale of loss instead of - log scale. Default: False. - eps (float): Eps to avoid log(0). - - Return: - torch.Tensor: Loss tensor. - """ - ious = bbox_overlaps(pred, target, is_aligned=True).clamp(min=eps) - if linear: - loss = 1 - ious - else: - loss = -ious.log() - return loss - - -@mmcv.jit(derivate=True, coderize=True) -@weighted_loss -def bounded_iou_loss(pred, target, beta=0.2, eps=1e-3): - """BIoULoss. - - This is an implementation of paper - `Improving Object Localization with Fitness NMS and Bounded IoU Loss. - `_. - - Args: - pred (torch.Tensor): Predicted bboxes. - target (torch.Tensor): Target bboxes. - beta (float): beta parameter in smoothl1. - eps (float): eps to avoid NaN. 
- """ - pred_ctrx = (pred[:, 0] + pred[:, 2]) * 0.5 - pred_ctry = (pred[:, 1] + pred[:, 3]) * 0.5 - pred_w = pred[:, 2] - pred[:, 0] - pred_h = pred[:, 3] - pred[:, 1] - with torch.no_grad(): - target_ctrx = (target[:, 0] + target[:, 2]) * 0.5 - target_ctry = (target[:, 1] + target[:, 3]) * 0.5 - target_w = target[:, 2] - target[:, 0] - target_h = target[:, 3] - target[:, 1] - - dx = target_ctrx - pred_ctrx - dy = target_ctry - pred_ctry - - loss_dx = 1 - torch.max( - (target_w - 2 * dx.abs()) / - (target_w + 2 * dx.abs() + eps), torch.zeros_like(dx)) - loss_dy = 1 - torch.max( - (target_h - 2 * dy.abs()) / - (target_h + 2 * dy.abs() + eps), torch.zeros_like(dy)) - loss_dw = 1 - torch.min(target_w / (pred_w + eps), pred_w / - (target_w + eps)) - loss_dh = 1 - torch.min(target_h / (pred_h + eps), pred_h / - (target_h + eps)) - loss_comb = torch.stack([loss_dx, loss_dy, loss_dw, loss_dh], - dim=-1).view(loss_dx.size(0), -1) - - loss = torch.where(loss_comb < beta, 0.5 * loss_comb * loss_comb / beta, - loss_comb - 0.5 * beta) - return loss - - -@mmcv.jit(derivate=True, coderize=True) -@weighted_loss -def giou_loss(pred, target, eps=1e-7): - r"""`Generalized Intersection over Union: A Metric and A Loss for Bounding - Box Regression `_. - - Args: - pred (torch.Tensor): Predicted bboxes of format (x1, y1, x2, y2), - shape (n, 4). - target (torch.Tensor): Corresponding gt bboxes, shape (n, 4). - eps (float): Eps to avoid log(0). - - Return: - Tensor: Loss tensor. - """ - gious = bbox_overlaps(pred, target, mode='giou', is_aligned=True, eps=eps) - loss = 1 - gious - return loss - - -@mmcv.jit(derivate=True, coderize=True) -@weighted_loss -def diou_loss(pred, target, eps=1e-7): - r"""`Implementation of Distance-IoU Loss: Faster and Better - Learning for Bounding Box Regression, https://arxiv.org/abs/1911.08287`_. - - Code is modified from https://github.com/Zzh-tju/DIoU. - - Args: - pred (Tensor): Predicted bboxes of format (x1, y1, x2, y2), - shape (n, 4). - target (Tensor): Corresponding gt bboxes, shape (n, 4). - eps (float): Eps to avoid log(0). - Return: - Tensor: Loss tensor. - """ - # overlap - lt = torch.max(pred[:, :2], target[:, :2]) - rb = torch.min(pred[:, 2:], target[:, 2:]) - wh = (rb - lt).clamp(min=0) - overlap = wh[:, 0] * wh[:, 1] - - # union - ap = (pred[:, 2] - pred[:, 0]) * (pred[:, 3] - pred[:, 1]) - ag = (target[:, 2] - target[:, 0]) * (target[:, 3] - target[:, 1]) - union = ap + ag - overlap + eps - - # IoU - ious = overlap / union - - # enclose area - enclose_x1y1 = torch.min(pred[:, :2], target[:, :2]) - enclose_x2y2 = torch.max(pred[:, 2:], target[:, 2:]) - enclose_wh = (enclose_x2y2 - enclose_x1y1).clamp(min=0) - - cw = enclose_wh[:, 0] - ch = enclose_wh[:, 1] - - c2 = cw**2 + ch**2 + eps - - b1_x1, b1_y1 = pred[:, 0], pred[:, 1] - b1_x2, b1_y2 = pred[:, 2], pred[:, 3] - b2_x1, b2_y1 = target[:, 0], target[:, 1] - b2_x2, b2_y2 = target[:, 2], target[:, 3] - - left = ((b2_x1 + b2_x2) - (b1_x1 + b1_x2))**2 / 4 - right = ((b2_y1 + b2_y2) - (b1_y1 + b1_y2))**2 / 4 - rho2 = left + right - - # DIoU - dious = ious - rho2 / c2 - loss = 1 - dious - return loss - - -@mmcv.jit(derivate=True, coderize=True) -@weighted_loss -def ciou_loss(pred, target, eps=1e-7): - r"""`Implementation of paper `Enhancing Geometric Factors into - Model Learning and Inference for Object Detection and Instance - Segmentation `_. - - Code is modified from https://github.com/Zzh-tju/CIoU. - - Args: - pred (Tensor): Predicted bboxes of format (x1, y1, x2, y2), - shape (n, 4). 
- target (Tensor): Corresponding gt bboxes, shape (n, 4). - eps (float): Eps to avoid log(0). - Return: - Tensor: Loss tensor. - """ - # overlap - lt = torch.max(pred[:, :2], target[:, :2]) - rb = torch.min(pred[:, 2:], target[:, 2:]) - wh = (rb - lt).clamp(min=0) - overlap = wh[:, 0] * wh[:, 1] - - # union - ap = (pred[:, 2] - pred[:, 0]) * (pred[:, 3] - pred[:, 1]) - ag = (target[:, 2] - target[:, 0]) * (target[:, 3] - target[:, 1]) - union = ap + ag - overlap + eps - - # IoU - ious = overlap / union - - # enclose area - enclose_x1y1 = torch.min(pred[:, :2], target[:, :2]) - enclose_x2y2 = torch.max(pred[:, 2:], target[:, 2:]) - enclose_wh = (enclose_x2y2 - enclose_x1y1).clamp(min=0) - - cw = enclose_wh[:, 0] - ch = enclose_wh[:, 1] - - c2 = cw**2 + ch**2 + eps - - b1_x1, b1_y1 = pred[:, 0], pred[:, 1] - b1_x2, b1_y2 = pred[:, 2], pred[:, 3] - b2_x1, b2_y1 = target[:, 0], target[:, 1] - b2_x2, b2_y2 = target[:, 2], target[:, 3] - - w1, h1 = b1_x2 - b1_x1, b1_y2 - b1_y1 + eps - w2, h2 = b2_x2 - b2_x1, b2_y2 - b2_y1 + eps - - left = ((b2_x1 + b2_x2) - (b1_x1 + b1_x2))**2 / 4 - right = ((b2_y1 + b2_y2) - (b1_y1 + b1_y2))**2 / 4 - rho2 = left + right - - factor = 4 / math.pi**2 - v = factor * torch.pow(torch.atan(w2 / h2) - torch.atan(w1 / h1), 2) - - # CIoU - cious = ious - (rho2 / c2 + v**2 / (1 - ious + v)) - loss = 1 - cious - return loss - - -@LOSSES.register_module() -class IoULoss(nn.Module): - """IoULoss. - - Computing the IoU loss between a set of predicted bboxes and target bboxes. - - Args: - linear (bool): If True, use linear scale of loss instead of log scale. - Default: False. - eps (float): Eps to avoid log(0). - reduction (str): Options are "none", "mean" and "sum". - loss_weight (float): Weight of loss. - """ - - def __init__(self, - linear=False, - eps=1e-6, - reduction='mean', - loss_weight=1.0): - super(IoULoss, self).__init__() - self.linear = linear - self.eps = eps - self.reduction = reduction - self.loss_weight = loss_weight - - def forward(self, - pred, - target, - weight=None, - avg_factor=None, - reduction_override=None, - **kwargs): - """Forward function. - - Args: - pred (torch.Tensor): The prediction. - target (torch.Tensor): The learning target of the prediction. - weight (torch.Tensor, optional): The weight of loss for each - prediction. Defaults to None. - avg_factor (int, optional): Average factor that is used to average - the loss. Defaults to None. - reduction_override (str, optional): The reduction method used to - override the original reduction method of the loss. - Defaults to None. Options are "none", "mean" and "sum". 
- """ - assert reduction_override in (None, 'none', 'mean', 'sum') - reduction = ( - reduction_override if reduction_override else self.reduction) - if (weight is not None) and (not torch.any(weight > 0)) and ( - reduction != 'none'): - return (pred * weight).sum() # 0 - if weight is not None and weight.dim() > 1: - # TODO: remove this in the future - # reduce the weight of shape (n, 4) to (n,) to match the - # iou_loss of shape (n,) - assert weight.shape == pred.shape - weight = weight.mean(-1) - loss = self.loss_weight * iou_loss( - pred, - target, - weight, - linear=self.linear, - eps=self.eps, - reduction=reduction, - avg_factor=avg_factor, - **kwargs) - return loss - - -@LOSSES.register_module() -class BoundedIoULoss(nn.Module): - - def __init__(self, beta=0.2, eps=1e-3, reduction='mean', loss_weight=1.0): - super(BoundedIoULoss, self).__init__() - self.beta = beta - self.eps = eps - self.reduction = reduction - self.loss_weight = loss_weight - - def forward(self, - pred, - target, - weight=None, - avg_factor=None, - reduction_override=None, - **kwargs): - if weight is not None and not torch.any(weight > 0): - return (pred * weight).sum() # 0 - assert reduction_override in (None, 'none', 'mean', 'sum') - reduction = ( - reduction_override if reduction_override else self.reduction) - loss = self.loss_weight * bounded_iou_loss( - pred, - target, - weight, - beta=self.beta, - eps=self.eps, - reduction=reduction, - avg_factor=avg_factor, - **kwargs) - return loss - - -@LOSSES.register_module() -class GIoULoss(nn.Module): - - def __init__(self, eps=1e-6, reduction='mean', loss_weight=1.0): - super(GIoULoss, self).__init__() - self.eps = eps - self.reduction = reduction - self.loss_weight = loss_weight - - def forward(self, - pred, - target, - weight=None, - avg_factor=None, - reduction_override=None, - **kwargs): - if weight is not None and not torch.any(weight > 0): - return (pred * weight).sum() # 0 - assert reduction_override in (None, 'none', 'mean', 'sum') - reduction = ( - reduction_override if reduction_override else self.reduction) - if weight is not None and weight.dim() > 1: - # TODO: remove this in the future - # reduce the weight of shape (n, 4) to (n,) to match the - # giou_loss of shape (n,) - assert weight.shape == pred.shape - weight = weight.mean(-1) - loss = self.loss_weight * giou_loss( - pred, - target, - weight, - eps=self.eps, - reduction=reduction, - avg_factor=avg_factor, - **kwargs) - return loss - - -@LOSSES.register_module() -class DIoULoss(nn.Module): - - def __init__(self, eps=1e-6, reduction='mean', loss_weight=1.0): - super(DIoULoss, self).__init__() - self.eps = eps - self.reduction = reduction - self.loss_weight = loss_weight - - def forward(self, - pred, - target, - weight=None, - avg_factor=None, - reduction_override=None, - **kwargs): - if weight is not None and not torch.any(weight > 0): - return (pred * weight).sum() # 0 - assert reduction_override in (None, 'none', 'mean', 'sum') - reduction = ( - reduction_override if reduction_override else self.reduction) - if weight is not None and weight.dim() > 1: - # TODO: remove this in the future - # reduce the weight of shape (n, 4) to (n,) to match the - # giou_loss of shape (n,) - assert weight.shape == pred.shape - weight = weight.mean(-1) - loss = self.loss_weight * diou_loss( - pred, - target, - weight, - eps=self.eps, - reduction=reduction, - avg_factor=avg_factor, - **kwargs) - return loss - - -@LOSSES.register_module() -class CIoULoss(nn.Module): - - def __init__(self, eps=1e-6, reduction='mean', 
loss_weight=1.0): - super(CIoULoss, self).__init__() - self.eps = eps - self.reduction = reduction - self.loss_weight = loss_weight - - def forward(self, - pred, - target, - weight=None, - avg_factor=None, - reduction_override=None, - **kwargs): - if weight is not None and not torch.any(weight > 0): - return (pred * weight).sum() # 0 - assert reduction_override in (None, 'none', 'mean', 'sum') - reduction = ( - reduction_override if reduction_override else self.reduction) - if weight is not None and weight.dim() > 1: - # TODO: remove this in the future - # reduce the weight of shape (n, 4) to (n,) to match the - # giou_loss of shape (n,) - assert weight.shape == pred.shape - weight = weight.mean(-1) - loss = self.loss_weight * ciou_loss( - pred, - target, - weight, - eps=self.eps, - reduction=reduction, - avg_factor=avg_factor, - **kwargs) - return loss diff --git a/spaces/SIH/Augmented-Retrieval-qa-ChatGPT/streamlit_langchain_chat/customized_langchain/docstore/__init__.py b/spaces/SIH/Augmented-Retrieval-qa-ChatGPT/streamlit_langchain_chat/customized_langchain/docstore/__init__.py deleted file mode 100644 index 6250d5c3aaf5de06e7daa358a513205f302527c2..0000000000000000000000000000000000000000 --- a/spaces/SIH/Augmented-Retrieval-qa-ChatGPT/streamlit_langchain_chat/customized_langchain/docstore/__init__.py +++ /dev/null @@ -1,7 +0,0 @@ -"""Wrappers on top of docstores.""" -from streamlit_langchain_chat.customized_langchain.docstore.in_memory import InMemoryDocstore - - -__all__ = [ - "InMemoryDocstore", -] diff --git a/spaces/Samhita/geolocator/gantry_callback/string_img_util.py b/spaces/Samhita/geolocator/gantry_callback/string_img_util.py deleted file mode 100644 index d091339208d713c5f2189def4e6038eca86450a6..0000000000000000000000000000000000000000 --- a/spaces/Samhita/geolocator/gantry_callback/string_img_util.py +++ /dev/null @@ -1,27 +0,0 @@ -import base64 -from io import BytesIO - - -def read_b64_string(b64_string, return_data_type=False): - """Read a base64-encoded string into an in-memory file-like object.""" - data_header, b64_data = split_and_validate_b64_string(b64_string) - b64_buffer = BytesIO(base64.b64decode(b64_data)) - if return_data_type: - return get_b64_filetype(data_header), b64_buffer - else: - return b64_buffer - - -def get_b64_filetype(data_header): - """Retrieves the filetype information from the data type header of a base64-encoded object.""" - _, file_type = data_header.split("/") - return file_type - - -def split_and_validate_b64_string(b64_string): - """Return the data_type and data of a b64 string, with validation.""" - header, data = b64_string.split(",", 1) - assert header.startswith("data:") - assert header.endswith(";base64") - data_type = header.split(";")[0].split(":")[1] - return data_type, data diff --git a/spaces/SeViLA/SeViLA/lavis/models/timesformer/helpers.py b/spaces/SeViLA/SeViLA/lavis/models/timesformer/helpers.py deleted file mode 100644 index 1a8ebd1415fff35cd0f1e365a6f666dcb2f04fee..0000000000000000000000000000000000000000 --- a/spaces/SeViLA/SeViLA/lavis/models/timesformer/helpers.py +++ /dev/null @@ -1,400 +0,0 @@ -""" - Copyright (c) 2022, salesforce.com, inc. - All rights reserved. - SPDX-License-Identifier: BSD-3-Clause - For full license text, see the LICENSE file in the repo root or https://opensource.org/licenses/BSD-3-Clause - - Based on https://github.com/facebookresearch/TimeSformer -""" - -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved. 
-# Copyright 2020 Ross Wightman -# Modified model creation / weight loading / state_dict helpers - -import logging, warnings -import os -import math -from collections import OrderedDict - -import torch -import torch.utils.model_zoo as model_zoo -import torch.nn.functional as F - - -def load_state_dict(checkpoint_path, use_ema=False): - if checkpoint_path and os.path.isfile(checkpoint_path): - checkpoint = torch.load(checkpoint_path, map_location="cpu") - state_dict_key = "state_dict" - if isinstance(checkpoint, dict): - if use_ema and "state_dict_ema" in checkpoint: - state_dict_key = "state_dict_ema" - if state_dict_key and state_dict_key in checkpoint: - new_state_dict = OrderedDict() - for k, v in checkpoint[state_dict_key].items(): - # strip `module.` prefix - name = k[7:] if k.startswith("module") else k - new_state_dict[name] = v - state_dict = new_state_dict - elif "model_state" in checkpoint: - state_dict_key = "model_state" - new_state_dict = OrderedDict() - for k, v in checkpoint[state_dict_key].items(): - # strip `model.` prefix - name = k[6:] if k.startswith("model") else k - new_state_dict[name] = v - state_dict = new_state_dict - else: - state_dict = checkpoint - logging.info( - "Loaded {} from checkpoint '{}'".format(state_dict_key, checkpoint_path) - ) - return state_dict - else: - logging.error("No checkpoint found at '{}'".format(checkpoint_path)) - raise FileNotFoundError() - - -def load_checkpoint(model, checkpoint_path, use_ema=False, strict=True): - state_dict = load_state_dict(checkpoint_path, use_ema) - model.load_state_dict(state_dict, strict=strict) - - -# def resume_checkpoint(model, checkpoint_path, optimizer=None, loss_scaler=None, log_info=True): -# resume_epoch = None -# if os.path.isfile(checkpoint_path): -# checkpoint = torch.load(checkpoint_path, map_location='cpu') -# if isinstance(checkpoint, dict) and 'state_dict' in checkpoint: -# if log_info: -# _logger.info('Restoring model state from checkpoint...') -# new_state_dict = OrderedDict() -# for k, v in checkpoint['state_dict'].items(): -# name = k[7:] if k.startswith('module') else k -# new_state_dict[name] = v -# model.load_state_dict(new_state_dict) - -# if optimizer is not None and 'optimizer' in checkpoint: -# if log_info: -# _logger.info('Restoring optimizer state from checkpoint...') -# optimizer.load_state_dict(checkpoint['optimizer']) - -# if loss_scaler is not None and loss_scaler.state_dict_key in checkpoint: -# if log_info: -# _logger.info('Restoring AMP loss scaler state from checkpoint...') -# loss_scaler.load_state_dict(checkpoint[loss_scaler.state_dict_key]) - -# if 'epoch' in checkpoint: -# resume_epoch = checkpoint['epoch'] -# if 'version' in checkpoint and checkpoint['version'] > 1: -# resume_epoch += 1 # start at the next epoch, old checkpoints incremented before save - -# if log_info: -# _logger.info("Loaded checkpoint '{}' (epoch {})".format(checkpoint_path, checkpoint['epoch'])) -# else: -# model.load_state_dict(checkpoint) -# if log_info: -# _logger.info("Loaded checkpoint '{}'".format(checkpoint_path)) -# return resume_epoch -# else: -# _logger.error("No checkpoint found at '{}'".format(checkpoint_path)) -# raise FileNotFoundError() - - -def load_pretrained( - model, - cfg=None, - num_classes=1000, - in_chans=3, - filter_fn=None, - img_size=224, - num_frames=8, - num_patches=196, - attention_type="divided_space_time", - pretrained_model="", - strict=True, -): - if cfg is None: - cfg = getattr(model, "default_cfg") - if cfg is None or "url" not in cfg or not cfg["url"]: - 
logging.warning("Pretrained model URL is invalid, using random initialization.") - return - - if len(pretrained_model) == 0: - if cfg is None: - logging.info(f"loading from default config {model.default_cfg}.") - state_dict = model_zoo.load_url(cfg["url"], progress=False, map_location="cpu") - else: - try: - state_dict = load_state_dict(pretrained_model)["model"] - except: - state_dict = load_state_dict(pretrained_model) - - if filter_fn is not None: - state_dict = filter_fn(state_dict) - - if in_chans == 1: - conv1_name = cfg["first_conv"] - logging.info( - "Converting first conv (%s) pretrained weights from 3 to 1 channel" - % conv1_name - ) - conv1_weight = state_dict[conv1_name + ".weight"] - conv1_type = conv1_weight.dtype - conv1_weight = conv1_weight.float() - O, I, J, K = conv1_weight.shape - if I > 3: - assert conv1_weight.shape[1] % 3 == 0 - # For models with space2depth stems - conv1_weight = conv1_weight.reshape(O, I // 3, 3, J, K) - conv1_weight = conv1_weight.sum(dim=2, keepdim=False) - else: - conv1_weight = conv1_weight.sum(dim=1, keepdim=True) - conv1_weight = conv1_weight.to(conv1_type) - state_dict[conv1_name + ".weight"] = conv1_weight - elif in_chans != 3: - conv1_name = cfg["first_conv"] - conv1_weight = state_dict[conv1_name + ".weight"] - conv1_type = conv1_weight.dtype - conv1_weight = conv1_weight.float() - O, I, J, K = conv1_weight.shape - if I != 3: - logging.warning( - "Deleting first conv (%s) from pretrained weights." % conv1_name - ) - del state_dict[conv1_name + ".weight"] - strict = False - else: - logging.info( - "Repeating first conv (%s) weights in channel dim." % conv1_name - ) - repeat = int(math.ceil(in_chans / 3)) - conv1_weight = conv1_weight.repeat(1, repeat, 1, 1)[:, :in_chans, :, :] - conv1_weight *= 3 / float(in_chans) - conv1_weight = conv1_weight.to(conv1_type) - state_dict[conv1_name + ".weight"] = conv1_weight - - classifier_name = cfg["classifier"] - if num_classes == 1000 and cfg["num_classes"] == 1001: - # special case for imagenet trained models with extra background class in pretrained weights - classifier_weight = state_dict[classifier_name + ".weight"] - state_dict[classifier_name + ".weight"] = classifier_weight[1:] - classifier_bias = state_dict[classifier_name + ".bias"] - state_dict[classifier_name + ".bias"] = classifier_bias[1:] - elif num_classes != state_dict[classifier_name + ".weight"].size(0): - # print('Removing the last fully connected layer due to dimensions mismatch ('+str(num_classes)+ ' != '+str(state_dict[classifier_name + '.weight'].size(0))+').', flush=True) - # completely discard fully connected for all other differences between pretrained and created model - del state_dict[classifier_name + ".weight"] - del state_dict[classifier_name + ".bias"] - strict = False - - ## Resizing the positional embeddings in case they don't match - logging.info( - f"Resizing spatial position embedding from {state_dict['pos_embed'].size(1)} to {num_patches + 1}" - ) - if num_patches + 1 != state_dict["pos_embed"].size(1): - pos_embed = state_dict["pos_embed"] - cls_pos_embed = pos_embed[0, 0, :].unsqueeze(0).unsqueeze(1) - other_pos_embed = pos_embed[0, 1:, :].unsqueeze(0).transpose(1, 2) - new_pos_embed = F.interpolate( - other_pos_embed, size=(num_patches), mode="nearest" - ) - new_pos_embed = new_pos_embed.transpose(1, 2) - new_pos_embed = torch.cat((cls_pos_embed, new_pos_embed), 1) - state_dict["pos_embed"] = new_pos_embed - - ## Resizing time embeddings in case they don't match - if "time_embed" in state_dict and num_frames != 
state_dict["time_embed"].size(1): - logging.info( - f"Resizing temporal position embedding from {state_dict['time_embed'].size(1)} to {num_frames}" - ) - time_embed = state_dict["time_embed"].transpose(1, 2) - new_time_embed = F.interpolate(time_embed, size=(num_frames), mode="nearest") - state_dict["time_embed"] = new_time_embed.transpose(1, 2) - - ## Initializing temporal attention - if attention_type == "divided_space_time": - new_state_dict = state_dict.copy() - for key in state_dict: - if "blocks" in key and "attn" in key: - new_key = key.replace("attn", "temporal_attn") - if not new_key in state_dict: - new_state_dict[new_key] = state_dict[key] - else: - new_state_dict[new_key] = state_dict[new_key] - if "blocks" in key and "norm1" in key: - new_key = key.replace("norm1", "temporal_norm1") - if not new_key in state_dict: - new_state_dict[new_key] = state_dict[key] - else: - new_state_dict[new_key] = state_dict[new_key] - state_dict = new_state_dict - - ## Loading the weights - model.load_state_dict(state_dict, strict=False) - - -def load_pretrained_imagenet( - model, - pretrained_model, - cfg=None, - ignore_classifier=True, - num_frames=8, - num_patches=196, - **kwargs, -): - import timm - - logging.info(f"Loading vit_base_patch16_224 checkpoints.") - loaded_state_dict = timm.models.vision_transformer.vit_base_patch16_224( - pretrained=True - ).state_dict() - - del loaded_state_dict["head.weight"] - del loaded_state_dict["head.bias"] - - ## Initializing temporal attention - new_state_dict = loaded_state_dict.copy() - for key in loaded_state_dict: - if "blocks" in key and "attn" in key: - new_key = key.replace("attn", "temporal_attn") - if not new_key in loaded_state_dict: - new_state_dict[new_key] = loaded_state_dict[key] - else: - new_state_dict[new_key] = loaded_state_dict[new_key] - if "blocks" in key and "norm1" in key: - new_key = key.replace("norm1", "temporal_norm1") - if not new_key in loaded_state_dict: - new_state_dict[new_key] = loaded_state_dict[key] - else: - new_state_dict[new_key] = loaded_state_dict[new_key] - - loaded_state_dict = new_state_dict - - loaded_keys = loaded_state_dict.keys() - model_keys = model.state_dict().keys() - - load_not_in_model = [k for k in loaded_keys if k not in model_keys] - model_not_in_load = [k for k in model_keys if k not in loaded_keys] - - toload = dict() - mismatched_shape_keys = [] - for k in model_keys: - if k in loaded_keys: - if model.state_dict()[k].shape != loaded_state_dict[k].shape: - mismatched_shape_keys.append(k) - else: - toload[k] = loaded_state_dict[k] - - logging.info("Keys in loaded but not in model:") - logging.info(f"In total {len(load_not_in_model)}, {sorted(load_not_in_model)}") - logging.info("Keys in model but not in loaded:") - logging.info(f"In total {len(model_not_in_load)}, {sorted(model_not_in_load)}") - logging.info("Keys in model and loaded, but shape mismatched:") - logging.info( - f"In total {len(mismatched_shape_keys)}, {sorted(mismatched_shape_keys)}" - ) - - model.load_state_dict(toload, strict=False) - - -def load_pretrained_kinetics( - model, - pretrained_model, - cfg=None, - ignore_classifier=True, - num_frames=8, - num_patches=196, - **kwargs, -): - if cfg is None: - cfg = getattr(model, "default_cfg") - if cfg is None or "url" not in cfg or not cfg["url"]: - logging.warning("Pretrained model URL is invalid, using random initialization.") - return - - assert ( - len(pretrained_model) > 0 - ), "Path to pre-trained Kinetics weights not provided." 
- - state_dict = load_state_dict(pretrained_model) - - classifier_name = cfg["classifier"] - if ignore_classifier: - - classifier_weight_key = classifier_name + ".weight" - classifier_bias_key = classifier_name + ".bias" - - state_dict[classifier_weight_key] = model.state_dict()[classifier_weight_key] - state_dict[classifier_bias_key] = model.state_dict()[classifier_bias_key] - - else: - raise NotImplementedError( - "[dxli] Not supporting loading Kinetics-pretrained ckpt with classifier." - ) - - ## Resizing the positional embeddings in case they don't match - if num_patches + 1 != state_dict["pos_embed"].size(1): - new_pos_embed = resize_spatial_embedding(state_dict, "pos_embed", num_patches) - state_dict["pos_embed"] = new_pos_embed - - ## Resizing time embeddings in case they don't match - if "time_embed" in state_dict and num_frames != state_dict["time_embed"].size(1): - state_dict["time_embed"] = resize_temporal_embedding( - state_dict, "time_embed", num_frames - ) - - ## Loading the weights - try: - model.load_state_dict(state_dict, strict=True) - logging.info("Succeeded in loading Kinetics pre-trained weights.") - except: - logging.error("Error in loading Kinetics pre-trained weights.") - - -def resize_spatial_embedding(state_dict, key, num_patches): - logging.info( - f"Resizing spatial position embedding from {state_dict[key].size(1)} to {num_patches + 1}" - ) - - pos_embed = state_dict[key] - - cls_pos_embed = pos_embed[0, 0, :].unsqueeze(0).unsqueeze(1) - other_pos_embed = pos_embed[0, 1:, :].unsqueeze(0).transpose(1, 2) - - new_pos_embed = F.interpolate(other_pos_embed, size=(num_patches), mode="nearest") - new_pos_embed = new_pos_embed.transpose(1, 2) - new_pos_embed = torch.cat((cls_pos_embed, new_pos_embed), 1) - - return new_pos_embed - - -def resize_temporal_embedding(state_dict, key, num_frames): - logging.info( - f"Resizing temporal position embedding from {state_dict[key].size(1)} to {num_frames}" - ) - - time_embed = state_dict[key].transpose(1, 2) - new_time_embed = F.interpolate(time_embed, size=(num_frames), mode="nearest") - - return new_time_embed.transpose(1, 2) - - -def detach_variable(inputs): - if isinstance(inputs, tuple): - out = [] - for inp in inputs: - x = inp.detach() - x.requires_grad = inp.requires_grad - out.append(x) - return tuple(out) - else: - raise RuntimeError( - "Only tuple of tensors is supported. Got Unsupported input type: ", - type(inputs).__name__, - ) - - -def check_backward_validity(inputs): - if not any(inp.requires_grad for inp in inputs): - warnings.warn( - "None of the inputs have requires_grad=True. 
Gradients will be None" - ) diff --git a/spaces/ServerX/PorcoDiaz/lib/infer_pack/modules.py b/spaces/ServerX/PorcoDiaz/lib/infer_pack/modules.py deleted file mode 100644 index c83289df7c79a4810dacd15c050148544ba0b6a9..0000000000000000000000000000000000000000 --- a/spaces/ServerX/PorcoDiaz/lib/infer_pack/modules.py +++ /dev/null @@ -1,522 +0,0 @@ -import copy -import math -import numpy as np -import scipy -import torch -from torch import nn -from torch.nn import functional as F - -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm - -from lib.infer_pack import commons -from lib.infer_pack.commons import init_weights, get_padding -from lib.infer_pack.transforms import piecewise_rational_quadratic_transform - - -LRELU_SLOPE = 0.1 - - -class LayerNorm(nn.Module): - def __init__(self, channels, eps=1e-5): - super().__init__() - self.channels = channels - self.eps = eps - - self.gamma = nn.Parameter(torch.ones(channels)) - self.beta = nn.Parameter(torch.zeros(channels)) - - def forward(self, x): - x = x.transpose(1, -1) - x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps) - return x.transpose(1, -1) - - -class ConvReluNorm(nn.Module): - def __init__( - self, - in_channels, - hidden_channels, - out_channels, - kernel_size, - n_layers, - p_dropout, - ): - super().__init__() - self.in_channels = in_channels - self.hidden_channels = hidden_channels - self.out_channels = out_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - assert n_layers > 1, "Number of layers should be larger than 0." - - self.conv_layers = nn.ModuleList() - self.norm_layers = nn.ModuleList() - self.conv_layers.append( - nn.Conv1d( - in_channels, hidden_channels, kernel_size, padding=kernel_size // 2 - ) - ) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.relu_drop = nn.Sequential(nn.ReLU(), nn.Dropout(p_dropout)) - for _ in range(n_layers - 1): - self.conv_layers.append( - nn.Conv1d( - hidden_channels, - hidden_channels, - kernel_size, - padding=kernel_size // 2, - ) - ) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.proj = nn.Conv1d(hidden_channels, out_channels, 1) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask): - x_org = x - for i in range(self.n_layers): - x = self.conv_layers[i](x * x_mask) - x = self.norm_layers[i](x) - x = self.relu_drop(x) - x = x_org + self.proj(x) - return x * x_mask - - -class DDSConv(nn.Module): - """ - Dialted and Depth-Separable Convolution - """ - - def __init__(self, channels, kernel_size, n_layers, p_dropout=0.0): - super().__init__() - self.channels = channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - - self.drop = nn.Dropout(p_dropout) - self.convs_sep = nn.ModuleList() - self.convs_1x1 = nn.ModuleList() - self.norms_1 = nn.ModuleList() - self.norms_2 = nn.ModuleList() - for i in range(n_layers): - dilation = kernel_size**i - padding = (kernel_size * dilation - dilation) // 2 - self.convs_sep.append( - nn.Conv1d( - channels, - channels, - kernel_size, - groups=channels, - dilation=dilation, - padding=padding, - ) - ) - self.convs_1x1.append(nn.Conv1d(channels, channels, 1)) - self.norms_1.append(LayerNorm(channels)) - self.norms_2.append(LayerNorm(channels)) - - def forward(self, x, x_mask, g=None): - if g is not None: - x = x + g - for i in range(self.n_layers): - y = self.convs_sep[i](x * x_mask) - y = self.norms_1[i](y) - y = 
F.gelu(y) - y = self.convs_1x1[i](y) - y = self.norms_2[i](y) - y = F.gelu(y) - y = self.drop(y) - x = x + y - return x * x_mask - - -class WN(torch.nn.Module): - def __init__( - self, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0, - p_dropout=0, - ): - super(WN, self).__init__() - assert kernel_size % 2 == 1 - self.hidden_channels = hidden_channels - self.kernel_size = (kernel_size,) - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - self.p_dropout = p_dropout - - self.in_layers = torch.nn.ModuleList() - self.res_skip_layers = torch.nn.ModuleList() - self.drop = nn.Dropout(p_dropout) - - if gin_channels != 0: - cond_layer = torch.nn.Conv1d( - gin_channels, 2 * hidden_channels * n_layers, 1 - ) - self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name="weight") - - for i in range(n_layers): - dilation = dilation_rate**i - padding = int((kernel_size * dilation - dilation) / 2) - in_layer = torch.nn.Conv1d( - hidden_channels, - 2 * hidden_channels, - kernel_size, - dilation=dilation, - padding=padding, - ) - in_layer = torch.nn.utils.weight_norm(in_layer, name="weight") - self.in_layers.append(in_layer) - - # last one is not necessary - if i < n_layers - 1: - res_skip_channels = 2 * hidden_channels - else: - res_skip_channels = hidden_channels - - res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1) - res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name="weight") - self.res_skip_layers.append(res_skip_layer) - - def forward(self, x, x_mask, g=None, **kwargs): - output = torch.zeros_like(x) - n_channels_tensor = torch.IntTensor([self.hidden_channels]) - - if g is not None: - g = self.cond_layer(g) - - for i in range(self.n_layers): - x_in = self.in_layers[i](x) - if g is not None: - cond_offset = i * 2 * self.hidden_channels - g_l = g[:, cond_offset : cond_offset + 2 * self.hidden_channels, :] - else: - g_l = torch.zeros_like(x_in) - - acts = commons.fused_add_tanh_sigmoid_multiply(x_in, g_l, n_channels_tensor) - acts = self.drop(acts) - - res_skip_acts = self.res_skip_layers[i](acts) - if i < self.n_layers - 1: - res_acts = res_skip_acts[:, : self.hidden_channels, :] - x = (x + res_acts) * x_mask - output = output + res_skip_acts[:, self.hidden_channels :, :] - else: - output = output + res_skip_acts - return output * x_mask - - def remove_weight_norm(self): - if self.gin_channels != 0: - torch.nn.utils.remove_weight_norm(self.cond_layer) - for l in self.in_layers: - torch.nn.utils.remove_weight_norm(l) - for l in self.res_skip_layers: - torch.nn.utils.remove_weight_norm(l) - - -class ResBlock1(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)): - super(ResBlock1, self).__init__() - self.convs1 = nn.ModuleList( - [ - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[2], - padding=get_padding(kernel_size, dilation[2]), - ) - ), - ] - ) - self.convs1.apply(init_weights) - - self.convs2 = nn.ModuleList( - [ - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=1, - padding=get_padding(kernel_size, 1), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - 
dilation=1, - padding=get_padding(kernel_size, 1), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=1, - padding=get_padding(kernel_size, 1), - ) - ), - ] - ) - self.convs2.apply(init_weights) - - def forward(self, x, x_mask=None): - for c1, c2 in zip(self.convs1, self.convs2): - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c1(xt) - xt = F.leaky_relu(xt, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c2(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs1: - remove_weight_norm(l) - for l in self.convs2: - remove_weight_norm(l) - - -class ResBlock2(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3)): - super(ResBlock2, self).__init__() - self.convs = nn.ModuleList( - [ - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]), - ) - ), - ] - ) - self.convs.apply(init_weights) - - def forward(self, x, x_mask=None): - for c in self.convs: - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs: - remove_weight_norm(l) - - -class Log(nn.Module): - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = torch.log(torch.clamp_min(x, 1e-5)) * x_mask - logdet = torch.sum(-y, [1, 2]) - return y, logdet - else: - x = torch.exp(x) * x_mask - return x - - -class Flip(nn.Module): - def forward(self, x, *args, reverse=False, **kwargs): - x = torch.flip(x, [1]) - if not reverse: - logdet = torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device) - return x, logdet - else: - return x - - -class ElementwiseAffine(nn.Module): - def __init__(self, channels): - super().__init__() - self.channels = channels - self.m = nn.Parameter(torch.zeros(channels, 1)) - self.logs = nn.Parameter(torch.zeros(channels, 1)) - - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = self.m + torch.exp(self.logs) * x - y = y * x_mask - logdet = torch.sum(self.logs * x_mask, [1, 2]) - return y, logdet - else: - x = (x - self.m) * torch.exp(-self.logs) * x_mask - return x - - -class ResidualCouplingLayer(nn.Module): - def __init__( - self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - p_dropout=0, - gin_channels=0, - mean_only=False, - ): - assert channels % 2 == 0, "channels should be divisible by 2" - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.half_channels = channels // 2 - self.mean_only = mean_only - - self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1) - self.enc = WN( - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - p_dropout=p_dropout, - gin_channels=gin_channels, - ) - self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1) - self.post.weight.data.zero_() - self.post.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels] * 2, 1) - h = self.pre(x0) * x_mask - h = self.enc(h, x_mask, g=g) - stats = 
self.post(h) * x_mask - if not self.mean_only: - m, logs = torch.split(stats, [self.half_channels] * 2, 1) - else: - m = stats - logs = torch.zeros_like(m) - - if not reverse: - x1 = m + x1 * torch.exp(logs) * x_mask - x = torch.cat([x0, x1], 1) - logdet = torch.sum(logs, [1, 2]) - return x, logdet - else: - x1 = (x1 - m) * torch.exp(-logs) * x_mask - x = torch.cat([x0, x1], 1) - return x - - def remove_weight_norm(self): - self.enc.remove_weight_norm() - - -class ConvFlow(nn.Module): - def __init__( - self, - in_channels, - filter_channels, - kernel_size, - n_layers, - num_bins=10, - tail_bound=5.0, - ): - super().__init__() - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.num_bins = num_bins - self.tail_bound = tail_bound - self.half_channels = in_channels // 2 - - self.pre = nn.Conv1d(self.half_channels, filter_channels, 1) - self.convs = DDSConv(filter_channels, kernel_size, n_layers, p_dropout=0.0) - self.proj = nn.Conv1d( - filter_channels, self.half_channels * (num_bins * 3 - 1), 1 - ) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels] * 2, 1) - h = self.pre(x0) - h = self.convs(h, x_mask, g=g) - h = self.proj(h) * x_mask - - b, c, t = x0.shape - h = h.reshape(b, c, -1, t).permute(0, 1, 3, 2) # [b, cx?, t] -> [b, c, t, ?] - - unnormalized_widths = h[..., : self.num_bins] / math.sqrt(self.filter_channels) - unnormalized_heights = h[..., self.num_bins : 2 * self.num_bins] / math.sqrt( - self.filter_channels - ) - unnormalized_derivatives = h[..., 2 * self.num_bins :] - - x1, logabsdet = piecewise_rational_quadratic_transform( - x1, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=reverse, - tails="linear", - tail_bound=self.tail_bound, - ) - - x = torch.cat([x0, x1], 1) * x_mask - logdet = torch.sum(logabsdet * x_mask, [1, 2]) - if not reverse: - return x, logdet - else: - return x diff --git a/spaces/ServerX/PorcoDiaz/train/process_ckpt.py b/spaces/ServerX/PorcoDiaz/train/process_ckpt.py deleted file mode 100644 index e3c3dba6df4b4f71a4d0865cdc96241d17da8781..0000000000000000000000000000000000000000 --- a/spaces/ServerX/PorcoDiaz/train/process_ckpt.py +++ /dev/null @@ -1,259 +0,0 @@ -import torch, traceback, os, pdb, sys - -now_dir = os.getcwd() -sys.path.append(now_dir) -from collections import OrderedDict -from i18n import I18nAuto - -i18n = I18nAuto() - - -def savee(ckpt, sr, if_f0, name, epoch, version, hps): - try: - opt = OrderedDict() - opt["weight"] = {} - for key in ckpt.keys(): - if "enc_q" in key: - continue - opt["weight"][key] = ckpt[key].half() - opt["config"] = [ - hps.data.filter_length // 2 + 1, - 32, - hps.model.inter_channels, - hps.model.hidden_channels, - hps.model.filter_channels, - hps.model.n_heads, - hps.model.n_layers, - hps.model.kernel_size, - hps.model.p_dropout, - hps.model.resblock, - hps.model.resblock_kernel_sizes, - hps.model.resblock_dilation_sizes, - hps.model.upsample_rates, - hps.model.upsample_initial_channel, - hps.model.upsample_kernel_sizes, - hps.model.spk_embed_dim, - hps.model.gin_channels, - hps.data.sampling_rate, - ] - opt["info"] = "%sepoch" % epoch - opt["sr"] = sr - opt["f0"] = if_f0 - opt["version"] = version - torch.save(opt, "weights/%s.pth" % name) - return "Success." 
- except: - return traceback.format_exc() - - -def show_info(path): - try: - a = torch.load(path, map_location="cpu") - return "Epochs: %s\nSample rate: %s\nPitch guidance: %s\nRVC Version: %s" % ( - a.get("info", "None"), - a.get("sr", "None"), - a.get("f0", "None"), - a.get("version", "None"), - ) - except: - return traceback.format_exc() - - -def extract_small_model(path, name, sr, if_f0, info, version): - try: - ckpt = torch.load(path, map_location="cpu") - if "model" in ckpt: - ckpt = ckpt["model"] - opt = OrderedDict() - opt["weight"] = {} - for key in ckpt.keys(): - if "enc_q" in key: - continue - opt["weight"][key] = ckpt[key].half() - if sr == "40k": - opt["config"] = [ - 1025, - 32, - 192, - 192, - 768, - 2, - 6, - 3, - 0, - "1", - [3, 7, 11], - [[1, 3, 5], [1, 3, 5], [1, 3, 5]], - [10, 10, 2, 2], - 512, - [16, 16, 4, 4], - 109, - 256, - 40000, - ] - elif sr == "48k": - if version == "v1": - opt["config"] = [ - 1025, - 32, - 192, - 192, - 768, - 2, - 6, - 3, - 0, - "1", - [3, 7, 11], - [[1, 3, 5], [1, 3, 5], [1, 3, 5]], - [10, 6, 2, 2, 2], - 512, - [16, 16, 4, 4, 4], - 109, - 256, - 48000, - ] - else: - opt["config"] = [ - 1025, - 32, - 192, - 192, - 768, - 2, - 6, - 3, - 0, - "1", - [3, 7, 11], - [[1, 3, 5], [1, 3, 5], [1, 3, 5]], - [12, 10, 2, 2], - 512, - [24, 20, 4, 4], - 109, - 256, - 48000, - ] - elif sr == "32k": - if version == "v1": - opt["config"] = [ - 513, - 32, - 192, - 192, - 768, - 2, - 6, - 3, - 0, - "1", - [3, 7, 11], - [[1, 3, 5], [1, 3, 5], [1, 3, 5]], - [10, 4, 2, 2, 2], - 512, - [16, 16, 4, 4, 4], - 109, - 256, - 32000, - ] - else: - opt["config"] = [ - 513, - 32, - 192, - 192, - 768, - 2, - 6, - 3, - 0, - "1", - [3, 7, 11], - [[1, 3, 5], [1, 3, 5], [1, 3, 5]], - [10, 8, 2, 2], - 512, - [20, 16, 4, 4], - 109, - 256, - 32000, - ] - if info == "": - info = "Extracted model." - opt["info"] = info - opt["version"] = version - opt["sr"] = sr - opt["f0"] = int(if_f0) - torch.save(opt, "weights/%s.pth" % name) - return "Success." - except: - return traceback.format_exc() - - -def change_info(path, info, name): - try: - ckpt = torch.load(path, map_location="cpu") - ckpt["info"] = info - if name == "": - name = os.path.basename(path) - torch.save(ckpt, "weights/%s" % name) - return "Success." - except: - return traceback.format_exc() - - -def merge(path1, path2, alpha1, sr, f0, info, name, version): - try: - - def extract(ckpt): - a = ckpt["model"] - opt = OrderedDict() - opt["weight"] = {} - for key in a.keys(): - if "enc_q" in key: - continue - opt["weight"][key] = a[key] - return opt - - ckpt1 = torch.load(path1, map_location="cpu") - ckpt2 = torch.load(path2, map_location="cpu") - cfg = ckpt1["config"] - if "model" in ckpt1: - ckpt1 = extract(ckpt1) - else: - ckpt1 = ckpt1["weight"] - if "model" in ckpt2: - ckpt2 = extract(ckpt2) - else: - ckpt2 = ckpt2["weight"] - if sorted(list(ckpt1.keys())) != sorted(list(ckpt2.keys())): - return "Fail to merge the models. The model architectures are not the same." 
- opt = OrderedDict() - opt["weight"] = {} - for key in ckpt1.keys(): - # try: - if key == "emb_g.weight" and ckpt1[key].shape != ckpt2[key].shape: - min_shape0 = min(ckpt1[key].shape[0], ckpt2[key].shape[0]) - opt["weight"][key] = ( - alpha1 * (ckpt1[key][:min_shape0].float()) - + (1 - alpha1) * (ckpt2[key][:min_shape0].float()) - ).half() - else: - opt["weight"][key] = ( - alpha1 * (ckpt1[key].float()) + (1 - alpha1) * (ckpt2[key].float()) - ).half() - # except: - # pdb.set_trace() - opt["config"] = cfg - """ - if(sr=="40k"):opt["config"] = [1025, 32, 192, 192, 768, 2, 6, 3, 0, "1", [3, 7, 11], [[1, 3, 5], [1, 3, 5], [1, 3, 5]], [10, 10, 2, 2], 512, [16, 16, 4, 4,4], 109, 256, 40000] - elif(sr=="48k"):opt["config"] = [1025, 32, 192, 192, 768, 2, 6, 3, 0, "1", [3, 7, 11], [[1, 3, 5], [1, 3, 5], [1, 3, 5]], [10,6,2,2,2], 512, [16, 16, 4, 4], 109, 256, 48000] - elif(sr=="32k"):opt["config"] = [513, 32, 192, 192, 768, 2, 6, 3, 0, "1", [3, 7, 11], [[1, 3, 5], [1, 3, 5], [1, 3, 5]], [10, 4, 2, 2, 2], 512, [16, 16, 4, 4,4], 109, 256, 32000] - """ - opt["sr"] = sr - opt["f0"] = 1 if f0 else 0 - opt["version"] = version - opt["info"] = info - torch.save(opt, "weights/%s.pth" % name) - return "Success." - except: - return traceback.format_exc() diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/core/payload.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/core/payload.py deleted file mode 100644 index 6818be15372a32f83b4e083686597fcf640dbe94..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/core/payload.py +++ /dev/null @@ -1,55 +0,0 @@ -# -*- coding: utf-8 -*- -"""Payload system for IPython. - -Authors: - -* Fernando Perez -* Brian Granger -""" - -#----------------------------------------------------------------------------- -# Copyright (C) 2008-2011 The IPython Development Team -# -# Distributed under the terms of the BSD License. The full license is in -# the file COPYING, distributed as part of this software. -#----------------------------------------------------------------------------- - -#----------------------------------------------------------------------------- -# Imports -#----------------------------------------------------------------------------- - -from traitlets.config.configurable import Configurable -from traitlets import List - -#----------------------------------------------------------------------------- -# Main payload class -#----------------------------------------------------------------------------- - -class PayloadManager(Configurable): - - _payload = List([]) - - def write_payload(self, data, single=True): - """Include or update the specified `data` payload in the PayloadManager. - - If a previous payload with the same source exists and `single` is True, - it will be overwritten with the new one. 
- """ - - if not isinstance(data, dict): - raise TypeError('Each payload write must be a dict, got: %r' % data) - - if single and 'source' in data: - source = data['source'] - for i, pl in enumerate(self._payload): - if 'source' in pl and pl['source'] == source: - self._payload[i] = data - return - - self._payload.append(data) - - def read_payload(self): - return self._payload - - def clear_payload(self): - self._payload = [] diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/core/tests/test_async_helpers.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/core/tests/test_async_helpers.py deleted file mode 100644 index a326c98d50470fb13373baa41e9e6f177f0f84f0..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/core/tests/test_async_helpers.py +++ /dev/null @@ -1,316 +0,0 @@ -""" -Test for async helpers. - -Should only trigger on python 3.5+ or will have syntax errors. -""" -import platform -from itertools import chain, repeat -from textwrap import dedent, indent -from unittest import TestCase -from IPython.testing.decorators import skip_without -import sys -from typing import TYPE_CHECKING - -if TYPE_CHECKING: - from IPython import get_ipython - - ip = get_ipython() - - -iprc = lambda x: ip.run_cell(dedent(x)).raise_error() -iprc_nr = lambda x: ip.run_cell(dedent(x)) - -from IPython.core.async_helpers import _should_be_async - -class AsyncTest(TestCase): - def test_should_be_async(self): - self.assertFalse(_should_be_async("False")) - self.assertTrue(_should_be_async("await bar()")) - self.assertTrue(_should_be_async("x = await bar()")) - self.assertFalse( - _should_be_async( - dedent( - """ - async def awaitable(): - pass - """ - ) - ) - ) - - def _get_top_level_cases(self): - # These are test cases that should be valid in a function - # but invalid outside of a function. 
- test_cases = [] - test_cases.append(('basic', "{val}")) - - # Note, in all conditional cases, I use True instead of - # False so that the peephole optimizer won't optimize away - # the return, so CPython will see this as a syntax error: - # - # while True: - # break - # return - # - # But not this: - # - # while False: - # return - # - # See https://bugs.python.org/issue1875 - - test_cases.append(('if', dedent(""" - if True: - {val} - """))) - - test_cases.append(('while', dedent(""" - while True: - {val} - break - """))) - - test_cases.append(('try', dedent(""" - try: - {val} - except: - pass - """))) - - test_cases.append(('except', dedent(""" - try: - pass - except: - {val} - """))) - - test_cases.append(('finally', dedent(""" - try: - pass - except: - pass - finally: - {val} - """))) - - test_cases.append(('for', dedent(""" - for _ in range(4): - {val} - """))) - - - test_cases.append(('nested', dedent(""" - if True: - while True: - {val} - break - """))) - - test_cases.append(('deep-nested', dedent(""" - if True: - while True: - break - for x in range(3): - if True: - while True: - for x in range(3): - {val} - """))) - - return test_cases - - def _get_ry_syntax_errors(self): - # This is a mix of tests that should be a syntax error if - # return or yield whether or not they are in a function - - test_cases = [] - - test_cases.append(('class', dedent(""" - class V: - {val} - """))) - - test_cases.append(('nested-class', dedent(""" - class V: - class C: - {val} - """))) - - return test_cases - - - def test_top_level_return_error(self): - tl_err_test_cases = self._get_top_level_cases() - tl_err_test_cases.extend(self._get_ry_syntax_errors()) - - vals = ('return', 'yield', 'yield from (_ for _ in range(3))', - dedent(''' - def f(): - pass - return - '''), - ) - - for test_name, test_case in tl_err_test_cases: - # This example should work if 'pass' is used as the value - with self.subTest((test_name, 'pass')): - iprc(test_case.format(val='pass')) - - # It should fail with all the values - for val in vals: - with self.subTest((test_name, val)): - msg = "Syntax error not raised for %s, %s" % (test_name, val) - with self.assertRaises(SyntaxError, msg=msg): - iprc(test_case.format(val=val)) - - def test_in_func_no_error(self): - # Test that the implementation of top-level return/yield - # detection isn't *too* aggressive, and works inside a function - func_contexts = [] - - func_contexts.append(('func', False, dedent(""" - def f():"""))) - - func_contexts.append(('method', False, dedent(""" - class MyClass: - def __init__(self): - """))) - - func_contexts.append(('async-func', True, dedent(""" - async def f():"""))) - - func_contexts.append(('async-method', True, dedent(""" - class MyClass: - async def f(self):"""))) - - func_contexts.append(('closure', False, dedent(""" - def f(): - def g(): - """))) - - def nest_case(context, case): - # Detect indentation - lines = context.strip().splitlines() - prefix_len = 0 - for c in lines[-1]: - if c != ' ': - break - prefix_len += 1 - - indented_case = indent(case, ' ' * (prefix_len + 4)) - return context + '\n' + indented_case - - # Gather and run the tests - - # yield is allowed in async functions, starting in Python 3.6, - # and yield from is not allowed in any version - vals = ('return', 'yield', 'yield from (_ for _ in range(3))') - - success_tests = zip(self._get_top_level_cases(), repeat(False)) - failure_tests = zip(self._get_ry_syntax_errors(), repeat(True)) - - tests = chain(success_tests, failure_tests) - - for context_name, async_func, 
context in func_contexts: - for (test_name, test_case), should_fail in tests: - nested_case = nest_case(context, test_case) - - for val in vals: - test_id = (context_name, test_name, val) - cell = nested_case.format(val=val) - - with self.subTest(test_id): - if should_fail: - msg = ("SyntaxError not raised for %s" % - str(test_id)) - with self.assertRaises(SyntaxError, msg=msg): - iprc(cell) - - print(cell) - else: - iprc(cell) - - def test_nonlocal(self): - # fails if outer scope is not a function scope or if var not defined - with self.assertRaises(SyntaxError): - iprc("nonlocal x") - iprc(""" - x = 1 - def f(): - nonlocal x - x = 10000 - yield x - """) - iprc(""" - def f(): - def g(): - nonlocal x - x = 10000 - yield x - """) - - # works if outer scope is a function scope and var exists - iprc(""" - def f(): - x = 20 - def g(): - nonlocal x - x = 10000 - yield x - """) - - - def test_execute(self): - iprc(""" - import asyncio - await asyncio.sleep(0.001) - """ - ) - - def test_autoawait(self): - iprc("%autoawait False") - iprc("%autoawait True") - iprc(""" - from asyncio import sleep - await sleep(0.1) - """ - ) - - def test_memory_error(self): - """ - The pgen parser in 3.8 or before use to raise MemoryError on too many - nested parens anymore""" - - iprc("(" * 200 + ")" * 200) - - @skip_without('curio') - def test_autoawait_curio(self): - iprc("%autoawait curio") - - @skip_without('trio') - def test_autoawait_trio(self): - iprc("%autoawait trio") - - @skip_without('trio') - def test_autoawait_trio_wrong_sleep(self): - iprc("%autoawait trio") - res = iprc_nr(""" - import asyncio - await asyncio.sleep(0) - """) - with self.assertRaises(TypeError): - res.raise_error() - - @skip_without('trio') - def test_autoawait_asyncio_wrong_sleep(self): - iprc("%autoawait asyncio") - res = iprc_nr(""" - import trio - await trio.sleep(0) - """) - with self.assertRaises(RuntimeError): - res.raise_error() - - - def tearDown(self): - ip.loop_runner = "asyncio" diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/utils/tests/test_openpy.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/utils/tests/test_openpy.py deleted file mode 100644 index e205f06ace3893dd7a0a15cf4e0e77f9ec118e63..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/utils/tests/test_openpy.py +++ /dev/null @@ -1,38 +0,0 @@ -import io -import os.path - -from IPython.utils import openpy - -mydir = os.path.dirname(__file__) -nonascii_path = os.path.join(mydir, "../../core/tests/nonascii.py") - - -def test_detect_encoding(): - with open(nonascii_path, "rb") as f: - enc, lines = openpy.detect_encoding(f.readline) - assert enc == "iso-8859-5" - - -def test_read_file(): - with io.open(nonascii_path, encoding="iso-8859-5") as f: - read_specified_enc = f.read() - read_detected_enc = openpy.read_py_file(nonascii_path, skip_encoding_cookie=False) - assert read_detected_enc == read_specified_enc - assert "coding: iso-8859-5" in read_detected_enc - - read_strip_enc_cookie = openpy.read_py_file( - nonascii_path, skip_encoding_cookie=True - ) - assert "coding: iso-8859-5" not in read_strip_enc_cookie - - -def test_source_to_unicode(): - with io.open(nonascii_path, "rb") as f: - source_bytes = f.read() - assert ( - openpy.source_to_unicode(source_bytes, skip_encoding_cookie=False).splitlines() - == source_bytes.decode("iso-8859-5").splitlines() - ) - - source_no_cookie = openpy.source_to_unicode(source_bytes, skip_encoding_cookie=True) - assert 
"coding: iso-8859-5" not in source_no_cookie diff --git a/spaces/Superlang/ImageProcessor/annotator/uniformer/mmseg/datasets/hrf.py b/spaces/Superlang/ImageProcessor/annotator/uniformer/mmseg/datasets/hrf.py deleted file mode 100644 index 923203b51377f9344277fc561803d7a78bd2c684..0000000000000000000000000000000000000000 --- a/spaces/Superlang/ImageProcessor/annotator/uniformer/mmseg/datasets/hrf.py +++ /dev/null @@ -1,27 +0,0 @@ -import os.path as osp - -from .builder import DATASETS -from .custom import CustomDataset - - -@DATASETS.register_module() -class HRFDataset(CustomDataset): - """HRF dataset. - - In segmentation map annotation for HRF, 0 stands for background, which is - included in 2 categories. ``reduce_zero_label`` is fixed to False. The - ``img_suffix`` is fixed to '.png' and ``seg_map_suffix`` is fixed to - '.png'. - """ - - CLASSES = ('background', 'vessel') - - PALETTE = [[120, 120, 120], [6, 230, 230]] - - def __init__(self, **kwargs): - super(HRFDataset, self).__init__( - img_suffix='.png', - seg_map_suffix='.png', - reduce_zero_label=False, - **kwargs) - assert osp.exists(self.img_dir) diff --git a/spaces/TH5314/newbing/tailwind.config.js b/spaces/TH5314/newbing/tailwind.config.js deleted file mode 100644 index 03da3c3c45be6983b9f5ffa6df5f1fd0870e9636..0000000000000000000000000000000000000000 --- a/spaces/TH5314/newbing/tailwind.config.js +++ /dev/null @@ -1,48 +0,0 @@ -/** @type {import('tailwindcss').Config} */ -module.exports = { - content: [ - './src/pages/**/*.{js,ts,jsx,tsx,mdx}', - './src/components/**/*.{js,ts,jsx,tsx,mdx}', - './src/app/**/*.{js,ts,jsx,tsx,mdx}', - './src/ui/**/*.{js,ts,jsx,tsx,mdx}', - ], - "darkMode": "class", - theme: { - extend: { - colors: { - 'primary-blue': 'rgb(var(--color-primary-blue) / )', - secondary: 'rgb(var(--color-secondary) / )', - 'primary-background': 'rgb(var(--primary-background) / )', - 'primary-text': 'rgb(var(--primary-text) / )', - 'secondary-text': 'rgb(var(--secondary-text) / )', - 'light-text': 'rgb(var(--light-text) / )', - 'primary-border': 'rgb(var(--primary-border) / )', - }, - keyframes: { - slideDownAndFade: { - from: { opacity: 0, transform: 'translateY(-2px)' }, - to: { opacity: 1, transform: 'translateY(0)' }, - }, - slideLeftAndFade: { - from: { opacity: 0, transform: 'translateX(2px)' }, - to: { opacity: 1, transform: 'translateX(0)' }, - }, - slideUpAndFade: { - from: { opacity: 0, transform: 'translateY(2px)' }, - to: { opacity: 1, transform: 'translateY(0)' }, - }, - slideRightAndFade: { - from: { opacity: 0, transform: 'translateX(2px)' }, - to: { opacity: 1, transform: 'translateX(0)' }, - }, - }, - animation: { - slideDownAndFade: 'slideDownAndFade 400ms cubic-bezier(0.16, 1, 0.3, 1)', - slideLeftAndFade: 'slideLeftAndFade 400ms cubic-bezier(0.16, 1, 0.3, 1)', - slideUpAndFade: 'slideUpAndFade 400ms cubic-bezier(0.16, 1, 0.3, 1)', - slideRightAndFade: 'slideRightAndFade 400ms cubic-bezier(0.16, 1, 0.3, 1)', - }, - }, - }, - plugins: [require('@headlessui/tailwindcss'), require('tailwind-scrollbar')], -} diff --git a/spaces/TNK21/Text_summarizer/README.md b/spaces/TNK21/Text_summarizer/README.md deleted file mode 100644 index 200271cf4dd6d31cf4254a38dc9f4a8e389e9acd..0000000000000000000000000000000000000000 --- a/spaces/TNK21/Text_summarizer/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Text Summarization -emoji: 😻 -colorFrom: purple -colorTo: yellow -sdk: gradio -sdk_version: 3.39.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at 
https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/TabPFN/TabPFNEvaluation/TabPFN/scripts/tabular_evaluation.py b/spaces/TabPFN/TabPFNEvaluation/TabPFN/scripts/tabular_evaluation.py deleted file mode 100644 index cd7f36e32948f80d0e266b47828df5f51fe3f78e..0000000000000000000000000000000000000000 --- a/spaces/TabPFN/TabPFNEvaluation/TabPFN/scripts/tabular_evaluation.py +++ /dev/null @@ -1,284 +0,0 @@ -import time -import os -from pathlib import Path - -from tqdm import tqdm -import random -import numpy as np - -from torch import nn - -from utils import torch_nanmean -from datasets import * -from model_builder import load_model -from scripts.tabular_baselines import get_scoring_string -from scripts import tabular_metrics -from scripts.transformer_prediction_interface import * -from scripts.baseline_prediction_interface import * -""" -=============================== -PUBLIC FUNCTIONS FOR EVALUATION -=============================== -""" - - -def eval_model(i, e, valid_datasets, test_datasets, eval_positions, bptt, add_name, base_path, device='cpu', eval_addition='', **kwargs): - metrics_test, config_sample, model_path = eval_model_on_ds(i, e, test_datasets, eval_positions, bptt, add_name, base_path, device=device, eval_addition=eval_addition, **kwargs) - metrics_valid, _, _ = eval_model_on_ds(i, e, valid_datasets, eval_positions, bptt, add_name, base_path, device=device, eval_addition=eval_addition, **kwargs) - return {'mean_auc_test': metrics_test['mean_roc_at_1000'], 'mean_auc_valid': metrics_valid['mean_roc_at_1000'], 'mean_ce_test': metrics_test['mean_ce_at_1000'], 'mean_ce_valid': metrics_valid['mean_ce_at_1000'], 'config_sample': config_sample, 'model_path': model_path} - -def eval_model_on_ds(i, e, valid_datasets, eval_positions, bptt, add_name, base_path, device='cpu', eval_addition='', **kwargs): - - # How to use: evaluate_without_fitting(i,0,valid_datasets, [1024], 100000, add_name=model_string, base_path=base_path,) - def check_file(e): - model_file = f'models_diff/prior_diff_real_checkpoint{add_name}_n_{i}_epoch_{e}.cpkt' - model_path = os.path.join(base_path, model_file) - # print('Evaluate ', model_path) - results_file = os.path.join(base_path, - f'models_diff/prior_diff_real_results{add_name}_n_{i}_epoch_{e}_{eval_addition}.pkl') - if not Path(model_path).is_file(): # or Path(results_file).is_file(): - # print('checkpoint exists: ', Path(model_file).is_file(), ', results are written:', Path(results_file).is_file()) - return None, None, None - return model_file, model_path, results_file - - if e == -1: # use last checkpoint, if e == -1 - for e_ in range(100, -1, -1): - model_file_, model_path_, results_file_ = check_file(e_) - if model_file_ is not None: - e = e_ - model_file, model_path, results_file = model_file_, model_path_, results_file_ - break - else: - model_file, model_path, results_file = check_file(e) - - model, config_sample = load_model(base_path, model_file, device, None, verbose=False) - print(model[2].style_encoder) - - params = {'max_features': config_sample['num_features'] - , 'rescale_features': config_sample["normalize_by_used_features"] - , 'normalize_to_ranking': config_sample["normalize_to_ranking"] - , 'normalize_with_sqrt': config_sample.get("normalize_with_sqrt", False) - } - metrics_valid = evaluate(datasets=valid_datasets, model=model[2], method='transformer', device=device, overwrite=True, - extend_features=True - # just removed the style keyword but transformer is trained with style, just empty - , save=False - , 
metric_used=tabular_metrics.cross_entropy - , return_tensor=True - , verbose=False - , eval_positions=eval_positions - , bptt=bptt - , base_path=None - , inference_mode=True - , **params - , **kwargs) - - tabular_metrics.calculate_score_per_method(tabular_metrics.auc_metric, 'roc', metrics_valid, valid_datasets, eval_positions) - tabular_metrics.calculate_score_per_method(tabular_metrics.cross_entropy, 'ce', metrics_valid, valid_datasets, eval_positions) - - return metrics_valid, config_sample, model_path - - -def evaluate(datasets, bptt, eval_positions, metric_used, model - , verbose=False - , return_tensor=False - , **kwargs): - """ - Evaluates a list of datasets for a model function. - - :param datasets: List of datasets - :param bptt: maximum sequence length - :param eval_positions: List of positions where to evaluate models - :param verbose: If True, is verbose. - :param metric_used: Which metric is optimized for. - :param return_tensor: Wheater to return results as a pytorch.tensor or numpy, this is only relevant for transformer. - :param kwargs: - :return: - """ - overall_result = {'metric_used': get_scoring_string(metric_used) - , 'bptt': bptt - , 'eval_positions': eval_positions} - - aggregated_metric_datasets, num_datasets = torch.tensor(0.0), 0 - - # For each dataset - for [ds_name, X, y, categorical_feats, _, _] in tqdm.tqdm(datasets, desc='Iterate over datasets') if verbose else datasets: - dataset_bptt = min(len(X), bptt) - # if verbose and dataset_bptt < bptt: - # print(f'Dataset too small for given sequence length, reducing to {len(X)} ({bptt})') - - aggregated_metric, num = torch.tensor(0.0), 0 - ds_result = {} - - for eval_position in (eval_positions if verbose else eval_positions): - eval_position_real = int(dataset_bptt * 0.5) if 2 * eval_position > dataset_bptt else eval_position - eval_position_bptt = int(eval_position_real * 2.0) - - r = evaluate_position(X, y, model=model - , num_classes=len(torch.unique(y)) - , categorical_feats = categorical_feats - , bptt = eval_position_bptt - , ds_name=ds_name - , eval_position = eval_position_real - , metric_used = metric_used - ,**kwargs) - - if r is None: - continue - - _, outputs, ys, best_configs, time_used = r - - if torch.is_tensor(outputs): - outputs = outputs.to(outputs.device) - ys = ys.to(outputs.device) - - ys = ys.T - ds_result[f'{ds_name}_best_configs_at_{eval_position}'] = best_configs - ds_result[f'{ds_name}_outputs_at_{eval_position}'] = outputs - ds_result[f'{ds_name}_ys_at_{eval_position}'] = ys - ds_result[f'{ds_name}_time_at_{eval_position}'] = time_used - - new_metric = torch_nanmean(torch.stack([metric_used(ys[i], outputs[i]) for i in range(ys.shape[0])])) - - if not return_tensor: - make_scalar = lambda x: float(x.detach().cpu().numpy()) if (torch.is_tensor(x) and (len(x.shape) == 0)) else x - new_metric = make_scalar(new_metric) - ds_result = {k: make_scalar(ds_result[k]) for k in ds_result.keys()} - - lib = torch if return_tensor else np - if not lib.isnan(new_metric).any(): - aggregated_metric, num = aggregated_metric + new_metric, num + 1 - - overall_result.update(ds_result) - if num > 0: - aggregated_metric_datasets, num_datasets = (aggregated_metric_datasets + (aggregated_metric / num)), num_datasets + 1 - - overall_result['mean_metric'] = aggregated_metric_datasets / num_datasets - - return overall_result - -""" -=============================== -INTERNAL HELPER FUNCTIONS -=============================== -""" - -def check_file_exists(path): - """Checks if a pickle file exists. 
Returns None if not, else returns the unpickled file.""" - if (os.path.isfile(path)): - print(f'loading results from {path}') - with open(path, 'rb') as f: - return np.load(f, allow_pickle=True).tolist() - return None - -def generate_valid_split(X, y, bptt, eval_position, split_number=1): - """Generates a deterministic train-(test/valid) split. Both splits must contain the same classes and all classes in - the entire dataset. If no such split can be sampled in 8 passes, returns None. - - :param X: torch tensor, feature values - :param y: torch tensor, class values - :param bptt: Number of samples in train + test - :param eval_position: Number of samples in train, i.e. from which index values are in test - :param split_number: The split id - :return: - """ - done, seed = False, 13 - - torch.manual_seed(split_number) - perm = torch.randperm(X.shape[0]) if split_number > 1 else torch.arange(0, X.shape[0]) - X, y = X[perm], y[perm] - - while not done: - if seed > 20: - return None, None # No split could be generated in 8 passes, return None - random.seed(seed) - i = random.randint(0, len(X) - bptt) if len(X) - bptt > 0 else 0 - y_ = y[i:i + bptt] - - # Checks that all classes from the dataset are contained and that train and test contain the same - # classes - done = len(torch.unique(y_)) == len(torch.unique(y)) - done = done and torch.all(torch.unique(y_) == torch.unique(y)) - done = done and len(torch.unique(y_[:eval_position])) == len(torch.unique(y_[eval_position:])) - done = done and torch.all(torch.unique(y_[:eval_position]) == torch.unique(y_[eval_position:])) - seed = seed + 1 - - eval_xs = torch.stack([X[i:i + bptt].clone()], 1) - eval_ys = torch.stack([y[i:i + bptt].clone()], 1) - - return eval_xs, eval_ys - - -def evaluate_position(X, y, categorical_feats, model, bptt - , eval_position, overwrite, save, base_path, path_interfix, method, ds_name, fetch_only=False - , max_time=300, split_number=1 - , per_step_normalization=False, **kwargs): - """ - Evaluates a dataset on sequences of 'bptt' samples, of which the first 'eval_position' are used for training. - - :param X: Dataset X - :param y: Dataset labels - :param categorical_feats: Indices of categorical features. - :param model: Model function - :param bptt: Sequence length. - :param eval_position: Number of training samples. - :param overwrite: If True, results on disk are overwritten. - :param save: If True, results are written to disk. - :param path_interfix: Used for constructing path to write on disk. - :param method: Model name. - :param ds_name: Dataset name. - :param fetch_only: Whether to only fetch results from disk instead of calculating them.
- :param per_step_normalization: - :param kwargs: - :return: - """ - - if save: - path = os.path.join(base_path, f'results/tabular/{path_interfix}/results_{method}_{ds_name}_{eval_position}_{bptt}_{split_number}.npy') - #log_path = - - ## Load results if on disk - if not overwrite: - result = check_file_exists(path) - if result is not None: - if not fetch_only: - print(f'Loaded saved result for {path}') - return result - elif fetch_only: - print(f'Could not load saved result for {path}') - return None - - ## Generate data splits - eval_xs, eval_ys = generate_valid_split(X, y, bptt, eval_position, split_number=split_number) - if eval_xs is None: - print(f"No dataset could be generated {ds_name} {bptt}") - return None - - eval_ys = (eval_ys > torch.unique(eval_ys).unsqueeze(0)).sum(axis=1).unsqueeze(-1) - - start_time = time.time() - - if isinstance(model, nn.Module): # Two separate predict interfaces for transformer and baselines - outputs, best_configs = transformer_predict(model, eval_xs, eval_ys, eval_position, categorical_feats=categorical_feats, **kwargs), None - else: - _, outputs, best_configs = baseline_predict(model, eval_xs, eval_ys, categorical_feats - , eval_pos=eval_position - , max_time=max_time, **kwargs) - - eval_ys = eval_ys[eval_position:] - if outputs is None: - return None - - if torch.is_tensor(outputs): # Transfers data to cpu for saving - outputs = outputs.cpu() - eval_ys = eval_ys.cpu() - - ds_result = None, outputs, eval_ys, best_configs, time.time() - start_time - - if save: - with open(path, 'wb') as f: - np.save(f, ds_result) - print(f'saved results to {path}') - - return ds_result \ No newline at end of file diff --git a/spaces/TencentARC/VLog/models/grit_src/grit/data/datasets/object365.py b/spaces/TencentARC/VLog/models/grit_src/grit/data/datasets/object365.py deleted file mode 100644 index 8b8cc19da23d8397284b50588ee46e750b5b7552..0000000000000000000000000000000000000000 --- a/spaces/TencentARC/VLog/models/grit_src/grit/data/datasets/object365.py +++ /dev/null @@ -1,111 +0,0 @@ -import logging -import os -from fvcore.common.timer import Timer -from detectron2.structures import BoxMode -from fvcore.common.file_io import PathManager -from detectron2.data import DatasetCatalog, MetadataCatalog -from lvis import LVIS - -logger = logging.getLogger(__name__) - -__all__ = ["load_o365_json", "register_o365_instances"] - - -def register_o365_instances(name, metadata, json_file, image_root): - DatasetCatalog.register(name, lambda: load_o365_json( - json_file, image_root, name)) - MetadataCatalog.get(name).set( - json_file=json_file, image_root=image_root, - evaluator_type="lvis", **metadata - ) - - -def get_o365_meta(): - categories = [{'supercategory': 'object', 'id': 1, 'name': 'object'}] - o365_categories = sorted(categories, key=lambda x: x["id"]) - thing_classes = [k["name"] for k in o365_categories] - meta = {"thing_classes": thing_classes} - return meta - - -def load_o365_json(json_file, image_root, dataset_name=None): - ''' - Load Object365 class name text for object description for GRiT - ''' - - json_file = PathManager.get_local_path(json_file) - - timer = Timer() - lvis_api = LVIS(json_file) - if timer.seconds() > 1: - logger.info("Loading {} takes {:.2f} seconds.".format( - json_file, timer.seconds())) - - class_names = {} - sort_cat = sorted(lvis_api.dataset['categories'], key=lambda x: x['id']) - for x in sort_cat: - if '/' in x['name']: - text = '' - for xx in x['name'].split('/'): - text += xx - text += ' ' - text = text[:-1] - else: - text = x['name']
- class_names[x['id']] = text - - img_ids = sorted(lvis_api.imgs.keys()) - imgs = lvis_api.load_imgs(img_ids) - anns = [lvis_api.img_ann_map[img_id] for img_id in img_ids] - - ann_ids = [ann["id"] for anns_per_image in anns for ann in anns_per_image] - assert len(set(ann_ids)) == len(ann_ids), \ - "Annotation ids in '{}' are not unique".format(json_file) - - imgs_anns = list(zip(imgs, anns)) - logger.info("Loaded {} images in the LVIS v1 format from {}".format( - len(imgs_anns), json_file)) - - dataset_dicts = [] - - for (img_dict, anno_dict_list) in imgs_anns: - record = {} - if "file_name" in img_dict: - file_name = img_dict["file_name"] - record["file_name"] = os.path.join(image_root, file_name) - - record["height"] = int(img_dict["height"]) - record["width"] = int(img_dict["width"]) - image_id = record["image_id"] = img_dict["id"] - - objs = [] - for anno in anno_dict_list: - assert anno["image_id"] == image_id - if anno.get('iscrowd', 0) > 0: - continue - obj = {"bbox": anno["bbox"], "bbox_mode": BoxMode.XYWH_ABS} - obj["category_id"] = 0 - obj["object_description"] = class_names[anno['category_id']] - - objs.append(obj) - record["annotations"] = objs - if len(record["annotations"]) == 0: - continue - record["task"] = "ObjectDet" - dataset_dicts.append(record) - - return dataset_dicts - - -_CUSTOM_SPLITS_LVIS = { - "object365_train": ("object365/images/train/", "object365/annotations/train_v1.json"), -} - - -for key, (image_root, json_file) in _CUSTOM_SPLITS_LVIS.items(): - register_o365_instances( - key, - get_o365_meta(), - os.path.join("datasets", json_file) if "://" not in json_file else json_file, - os.path.join("datasets", image_root), - ) \ No newline at end of file diff --git a/spaces/Theivaprakasham/yolov6/tools/quantization/tensorrt/training_aware/QAT_quantizer.py b/spaces/Theivaprakasham/yolov6/tools/quantization/tensorrt/training_aware/QAT_quantizer.py deleted file mode 100644 index 4621aa426fd0ba7875717a1442ab07a74cc49b30..0000000000000000000000000000000000000000 --- a/spaces/Theivaprakasham/yolov6/tools/quantization/tensorrt/training_aware/QAT_quantizer.py +++ /dev/null @@ -1,39 +0,0 @@ -# -# QAT_quantizer.py -# YOLOv6 -# -# Created by Meituan on 2022/06/24. -# Copyright © 2022 -# - -from absl import logging -from pytorch_quantization import nn as quant_nn -from pytorch_quantization import quant_modules - -# Call this function before defining the model -def tensorrt_official_qat(): - # Quantization Aware Training is based on Straight Through Estimator (STE) derivative approximation. - # It is some time known as “quantization aware training”. - - # PyTorch-Quantization is a toolkit for training and evaluating PyTorch models with simulated quantization. - # Quantization can be added to the model automatically, or manually, allowing the model to be tuned for accuracy and performance. - # Quantization is compatible with NVIDIAs high performance integer kernels which leverage integer Tensor Cores. - # The quantized model can be exported to ONNX and imported by TensorRT 8.0 and later. 
- # https://github.com/NVIDIA/TensorRT/blob/main/tools/pytorch-quantization/examples/finetune_quant_resnet50.ipynb - - # The example to export the - # model.eval() - # quant_nn.TensorQuantizer.use_fb_fake_quant = True # We have to shift to pytorch's fake quant ops before exporting the model to ONNX - # opset_version = 13 - - # Export ONNX for multiple batch sizes - # print("Creating ONNX file: " + onnx_filename) - # dummy_input = torch.randn(batch_onnx, 3, 224, 224, device='cuda') #TODO: switch input dims by model - # torch.onnx.export(model, dummy_input, onnx_filename, verbose=False, opset_version=opset_version, enable_onnx_checker=False, do_constant_folding=True) - try: - quant_modules.initialize() - except NameError: - logging.info("initialzation error for quant_modules") - -# def QAT_quantizer(): -# coming soon \ No newline at end of file diff --git a/spaces/Vegecken/sovits4dzl/vdecoder/hifigan/utils.py b/spaces/Vegecken/sovits4dzl/vdecoder/hifigan/utils.py deleted file mode 100644 index 9c93c996d3cc73c30d71c1fc47056e4230f35c0f..0000000000000000000000000000000000000000 --- a/spaces/Vegecken/sovits4dzl/vdecoder/hifigan/utils.py +++ /dev/null @@ -1,68 +0,0 @@ -import glob -import os -import matplotlib -import torch -from torch.nn.utils import weight_norm -# matplotlib.use("Agg") -import matplotlib.pylab as plt - - -def plot_spectrogram(spectrogram): - fig, ax = plt.subplots(figsize=(10, 2)) - im = ax.imshow(spectrogram, aspect="auto", origin="lower", - interpolation='none') - plt.colorbar(im, ax=ax) - - fig.canvas.draw() - plt.close() - - return fig - - -def init_weights(m, mean=0.0, std=0.01): - classname = m.__class__.__name__ - if classname.find("Conv") != -1: - m.weight.data.normal_(mean, std) - - -def apply_weight_norm(m): - classname = m.__class__.__name__ - if classname.find("Conv") != -1: - weight_norm(m) - - -def get_padding(kernel_size, dilation=1): - return int((kernel_size*dilation - dilation)/2) - - -def load_checkpoint(filepath, device): - assert os.path.isfile(filepath) - print("Loading '{}'".format(filepath)) - checkpoint_dict = torch.load(filepath, map_location=device) - print("Complete.") - return checkpoint_dict - - -def save_checkpoint(filepath, obj): - print("Saving checkpoint to {}".format(filepath)) - torch.save(obj, filepath) - print("Complete.") - - -def del_old_checkpoints(cp_dir, prefix, n_models=2): - pattern = os.path.join(cp_dir, prefix + '????????') - cp_list = glob.glob(pattern) # get checkpoint paths - cp_list = sorted(cp_list)# sort by iter - if len(cp_list) > n_models: # if more than n_models models are found - for cp in cp_list[:-n_models]:# delete the oldest models other than lastest n_models - open(cp, 'w').close()# empty file contents - os.unlink(cp)# delete file (move to trash when using Colab) - - -def scan_checkpoint(cp_dir, prefix): - pattern = os.path.join(cp_dir, prefix + '????????') - cp_list = glob.glob(pattern) - if len(cp_list) == 0: - return None - return sorted(cp_list)[-1] - diff --git a/spaces/Wrathless/Dkrotzer-MusicalMagic/audiocraft/data/__init__.py b/spaces/Wrathless/Dkrotzer-MusicalMagic/audiocraft/data/__init__.py deleted file mode 100644 index 708a3dcead8dda89374a021177481dacae9f7fe9..0000000000000000000000000000000000000000 --- a/spaces/Wrathless/Dkrotzer-MusicalMagic/audiocraft/data/__init__.py +++ /dev/null @@ -1,8 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. 
- -# flake8: noqa -from . import audio, audio_dataset diff --git a/spaces/XPMaster/chainladder/app.py b/spaces/XPMaster/chainladder/app.py deleted file mode 100644 index 6db54a7dab6e66b15f87a7de213d7d40c8020fee..0000000000000000000000000000000000000000 --- a/spaces/XPMaster/chainladder/app.py +++ /dev/null @@ -1,405 +0,0 @@ -import pandas as pd -import numpy as np -import re -import os -import warnings -import gradio as gr -import re -import chainladder as cl -import zipfile -import datetime -import openpyxl -from funcs import * -from openpyxl.styles import Font, PatternFill -from openpyxl.utils import column_index_from_string, get_column_letter - -fail = "❌" -success = '✅' -current_year = int(datetime.datetime.now().year) -years_list = [str(x) for x in range(1850,current_year+500)] -months_list = [ - "Jan - 1", "Feb - 2", "Mar - 3", "Apr - 4", - "May - 5", "Jun - 6", "Jul - 7", "Aug - 8", - "Sep - 9", "Oct - 10", "Nov - 11", "Dec - 12" -] - -styling=""" -#group { -background-color: #1f2937; -border: 1px solid #374151; -} -#button { -/* Permalink - use to edit and share this gradient: https://colorzilla.com/gradient-editor/#f6e6b4+0,ed9017+100;Yellow+3D+%231 */ -background: #f6e6b4; /* Old browsers */ -background: -moz-linear-gradient(top, #f6e6b4 0%, #ed9017 100%); /* FF3.6-15 */ -background: -webkit-linear-gradient(top, #f6e6b4 0%,#ed9017 100%); /* Chrome10-25,Safari5.1-6 */ -background: linear-gradient(to bottom, #f6e6b4 0%,#ed9017 100%); /* W3C, IE10+, FF16+, Chrome26+, Opera12+, Safari7+ */ -filter: progid:DXImageTransform.Microsoft.gradient( startColorstr='#f6e6b4', endColorstr='#ed9017',GradientType=0 ); /* IE6-9 */ -text-shadow: 2px 2px 10px #000000; -} -#column { - display: flex; - flex-direction: column; - align-items: center; - justify-content: center; -} - -body { - display: flex; - justify-content: center; - align-items: center; - min-height: 100vh; - margin: 0; -} -p { - font-family: 'Arial', sans-serif; - font-size: 20px; - color: #333; - background-color: #e1f5fe; - padding: 15px; - border-radius: 5px; - box-shadow: 0px 4px 6px rgba(0, 0, 0, 0.1); - max-width: 800px; - text-align: center; -} - -.center-emoji { - display: flex; - align-items: center; - justify-content: center; - width: 100px; - height: 100px; - border: 1px solid #000; -} - -.center-emoji::before { - content: "\\1F4CA"; /* Unicode code for 📊 */ - font-size: 36px; -} - -footer { - visibility: hidden -} -""" - - -def IBNR_OS(key,filename): - try: - # Read the Excel file - df = pd.read_excel(filename) - - # Filter rows with numeric values in "Gross IBN(E)R" or "Gross OS" columns - numeric_mask = (pd.to_numeric(df['Gross IBN(E)R'], errors='coerce').notna() | - pd.to_numeric(df['Gross OS'], errors='coerce').notna()) - df_filtered = df[numeric_mask] - - # Keep only the required columns - required_columns = ['LOB', 'Gross IBN(E)R', 'Gross OS'] - df_filtered = df_filtered[required_columns] - - # Convert columns to integers and replace missing values with 0 - df_filtered = df_filtered.fillna(0).astype({'LOB': str, 'Gross IBN(E)R': int, 'Gross OS': int}) - # Iterate over distinct LOB values - distinct_lob_values = df_filtered['LOB'].unique() - - for lob_value in distinct_lob_values: - # Filter rows for the current LOB value - lob_rows = df_filtered[df_filtered['LOB'] == lob_value] - # Get the individual columns as lists - gross_ibnr_values = lob_rows['Gross IBN(E)R'].tolist() - gross_os_values = lob_rows['Gross OS'].tolist() - # Print the results and retrieve lists - if lob_value == key: - print(f"LOB: {lob_value}") - 
print("Gross IBN(E)R:", gross_ibnr_values) - print("Gross OS:", gross_os_values) - print() - return gross_ibnr_values, gross_os_values - return None, None - except Exception as e: - return 'Parameters file has the following issue: {'+str(e)+"} Hint allowed names are ['LOB','Gross IBN(E)R','Gross OS','sigma','loss ratio','simulations_n','tail','method']",False - -def triangle(path, start_date , end_date, - gross_IBNR = [156576 ,214177 ,146459 ,390682 ,548713 ,706833 ,860458 ,1054578 ,1538313 ,2144731 ,3090198 ,5385887 ,47465981], - gross_OS_Claims= [1000721 ,1429259 ,1056222 ,1749351 ,2253296 ,1811757 ,2265959 ,2712321 ,3485914 ,5675081 ,9648877 ,18946443 ,61422600 ], - showextra=False, - extract_file=None): - # Read data from excel sheet - issues = [] - Motor_Claims = pd.read_excel(path) - # Filter out rows where accident_quarter_bracket or transaction_quarter_bracket is null - Motor_Claims = Motor_Claims[Motor_Claims['accident_quarter_bracket'].notnull()] - Motor_Claims = Motor_Claims[Motor_Claims['transaction_quarter_bracket'].notnull()] - # Convert the accident_quarter_bracket and transaction_quarter_bracket columns to accident_period and transaction_period - Motor_Claims['accident_period'] = get_period(Motor_Claims, 'accident_quarter_bracket') - Motor_Claims['transaction_period'] = get_period(Motor_Claims, 'transaction_quarter_bracket') - # Convert the column names to lowercase - Motor_Claims.columns = [x.lower() for x in Motor_Claims.columns] - # Loop over each unique value of the lob column - name = os.path.basename(path.split(".")[0])+"_triangles" - # name = path.split(".")[0]+"_triangles" - writer = pd.ExcelWriter(name+'.xlsx', engine='openpyxl') - try: - writer.book = Workbook() - writer.book.remove(writer.book["Sheet"]) - except: - pass - for LOB in Motor_Claims['lob'].unique(): - - try: - # Select the rows where the LOB column matches the current LOB value - df_lob = Motor_Claims[(Motor_Claims['lob'] == LOB)] - - # Select only the columns relevant to paid claims - df_lob = select_columns_Paid(df_lob) - - # Filter the data by date range - df_lob = df_lob[((df_lob['accident_period'] >= start_date) & (df_lob['accident_period'] <= end_date))] - #display(df_lob) - #return None - except: - # If an exception is caught, print the name of the LOB and continue to the next one - issues.append(LOB+' has an issue, skipped.') - print(issues[-1]) - continue - - # Create an incremental and cumulative triangle based on the paid amount - triangle_df = cl.Triangle(df_lob, origin='accident_period', development='transaction_period', columns=['paid_amount'], cumulative=False, index=['lob']) - cumulative_triangle_df = triangle_df.incr_to_cum() - incremental_triangle_df = triangle_df - - # If showextra is True, display the incremental triangle, cumulative triangle, and age-to-age factors heatmap - if showextra: - print('Incremental Triangle') - print(incremental_triangle_df) - print('Cumulative Triangle') - print(cumulative_triangle_df) - print('Age to Age factors') - print(cumulative_triangle_df.link_ratio.heatmap(cmap='Reds')) - - # Apply the Mack chainladder model to the cumulative triangle to estimate reserves - pd.options.display.float_format = '{:.0f}'.format - mack = cl.MackChainladder().fit(cumulative_triangle_df) - Mack_Summary = mack.summary_.to_frame().reset_index() - Mack_Summary = Mack_Summary.rename(columns={'index': 'Loss Quarter'}) - Latest = Mack_Summary['Latest'] - Loss_Quarter = Mack_Summary['Loss Quarter'] - Latest = pd.DataFrame(Latest) - Loss_Quarter = pd.DataFrame(Loss_Quarter) - 
- # if there is a file to extract from it will do so - if extract_file != None: - gross_IBNR , gross_OS_Claims = IBNR_OS(LOB,extract_file) - if gross_IBNR == None or gross_OS_Claims == None: - issues.append(LOB+" does not have gross_IBNR or gross_OS_Claims in ("+os.path.basename(extract_file)+"), skipped.") - print(issues[-1]) - continue - elif gross_OS_Claims == False: # this is set to False in IBNR_OS if the extract file has an issue. - return [gross_IBNR, False] # this is used as the error name in IBNR_OS, weird but we dont need more variables. - - - gross_IBNR = pd.DataFrame(gross_IBNR) - gross_OS_Claims = pd.DataFrame(gross_OS_Claims) - - # Combine calculated results and display in a DataFrame - df=Loss_Quarter - df['Latest']=Latest - df['gross_IBNR']=gross_IBNR - df['gross_OS_Claims']=gross_OS_Claims - df['Ultimate_Claims_amount']=df['Latest']+df['gross_IBNR']+df['gross_OS_Claims'] - pd.options.display.float_format = '{:.4f}'.format - ultimates = df['Ultimate_Claims_amount'].iloc[1:].to_frame() - df['CF']=(df['Latest']/df['Ultimate_Claims_amount']) - df['CDF']=(1/df['CF']) #cumulative development factor - - ATA=[] - for i in range(len(df)-1,-1,-1): - ATA_results=df['CDF'].iloc[i]/df['CDF'].iloc[i-1] - #print(ATA_results) - ATA.append(ATA_results) - df['ATA']=ATA - # Output results for each LOB - text = 'Line of business: '+LOB - print() - print(text) - print() - text = "Without Mean Replacement" - print(text) - outcome_before = ATAOperate(cumulative_triangle_df,ATA,replace=False) - print(outcome_before) - text = "With Mean Replacement" - print(text) - #Calculate and replace residuals - outcome = ATAOperate(cumulative_triangle_df,ATA) - # Display result - print(outcome) - # Calculate Adj S^2 - adj = calculate_average(outcome) - # Converting IBNR and claims to serieses and slicing them to the proper lengths - series_ibnr = gross_IBNR.squeeze().iloc[1:len(ultimates)+1] - series_claims = gross_OS_Claims.squeeze().iloc[1:len(ultimates)+1] - # Calculating Proc SD - proc_sd_result = proc_sd(adj,ultimates) - # Calculating Coef. 
Variance - cof = calc_cof(series_ibnr, series_claims, proc_sd_result.squeeze()) - # Displaying formatted Proc SD and Coef Variance side by Side - merged = merge_dataframes(format_dataframe(proc_sd_result), cof) - print(merged) - # Making other data structures into dataframes for export - formatted_incremental = format_dataframe(incremental_triangle_df.to_frame()) - formatted_cumulative = format_dataframe(cumulative_triangle_df.to_frame()) - formatted_link_ratio = format_dataframe(cumulative_triangle_df.link_ratio.to_frame()) - ATAdf = pd.DataFrame([ATA]) - adj = adj.transpose() - #adjdf = pd.DataFrame([adj]) - ibnrdf = pd.DataFrame(series_ibnr).transpose() - claimsdf = pd.DataFrame(series_claims).transpose() - # Adjusting indices to display properly in excel sheet - outcome_before = outcome_before.reset_index() - outcome = outcome.reset_index() - outcome_before.rename_axis('date', axis='index', inplace=True) - outcome.rename_axis('date', axis='index', inplace=True) - merged = merged.reset_index(drop=True) - merged.index = merged.index + 1 - - # Write the dataframes to the current LOB's sheet - dataframes = [outcome_before, outcome, merged, formatted_incremental, formatted_cumulative, formatted_link_ratio,ATAdf,adj,ibnrdf,claimsdf] - labels = ['Before replacement', 'After replacement', 'Proc_SD & Coeff', 'Incremental triangle', 'Cumulative triangle', 'Link Ratio','ATA',"Adj S^2","IBNR",'OS Claims'] - position = 1 - # Define the font style for Heading 2 - heading2_font = Font(size=12, bold=True) - for df, label in zip(dataframes, labels): - df.to_excel(writer, sheet_name=LOB, startrow=position, startcol=0, index=False) - workbook = writer.book - worksheet = writer.sheets[LOB] - row1 = worksheet[position] - for cell in row1: - cell.font = heading2_font - worksheet.cell(row=position, column=1, value=label) - position += df.shape[0] + 3 - - writer = resize_columns(writer) - writer.close() - - return [name+'.xlsx',issues,True] - -def failure(msg): - return gr.File.update(value=None,visible=False),msg - -def successful(msg): - pass - -def process(inp,files,file2,start_date,end_date,start_month,end_month): - print(file2) - if files is None: - return failure(fail+' No file provided') - if len(inp) == 0: - return failure(fail+' One operation must be selected at least') - if int(start_date) > int(end_date): - return failure(fail+' Start date cannot be greater than End date') - - start_date = append_last_day(str(start_date)+'-'+str(start_month).split(" ")[-1]) - end_date = append_last_day(str(end_date)+"-"+str(end_month).split(" ")[-1]) - - status = [] - processed = [] - names = unzip_files(files.name) - - if file2 is not None: - if valid(file2.name): - file2 = file2.name - else: - return failure(fail+" IBNR/OS optional file is of invalid type (CSV and XLSX only)") - - for name in names: - if valid(name): - print(name,start_date,end_date,file2,'\n'*10) - #triangle(name, start_date , end_date) - # name = os.path.basename(triangle(name, start_date , end_date)) - for element in inp: - if 'mack' in element.lower(): - - cleaned_name = os.path.basename(name) - processed_name = triangle(name, start_date , end_date,extract_file=file2) - if processed_name[-1]: - processed.append(processed_name[0]) - if len(processed_name[1]) > 0: - #processed_name[1] = '\n'.join(f"{"*"*(index+1)}⚠️. {value}" for index, value in enumerate(processed_name[1])) - processed_name[1] = '\n'.join(f'{"*"*(index+1)}⚠️. 
{value}' for index, value in enumerate(processed_name[1])) - else: - processed_name[1] = "" - status.append(success+f" Success ({element}) "+cleaned_name+'\n'+processed_name[1]) - else: - status.append(fail+f" Failed ({element}): {processed_name[0]} ("+cleaned_name+")") # The first element of the returned tuple has the problem in it. - - if 'bootstrap' in element.lower(): - status.append(success+f" Success ({element}) "+cleaned_name) - if 'sbf' in element.lower(): - status.append(success+f" Success ({element}) "+cleaned_name) - else: - name = os.path.basename(name) - status.append(fail+" Failure "+name) - - msg = '\n'.join(f"{index + 1}.{value}" for index, value in enumerate(status)) - if len(processed) <= 0: - return failure(msg) - final_file = zip_files(processed) - return gr.File.update(value=final_file,visible=True),msg - - -options = ['Mack','Bootstrap','Stochastic Bornhuetter-Ferguson (SBF)'] -with gr.Blocks(css=styling) as demo: - gr.HTML(""" - - - - - - Actuarial Software Intro - - -

    - Risk adjustment with Mack, Bootstrap, and Stochastic Bornhuetter-Ferguson (SBF) -

    - - - """) - with gr.Row(): - inp = gr.CheckboxGroup(label='Operations',choices=options,value=options) - with gr.Group(): - with gr.Row(elem_id='group'): - # with gr.Column(scale=1): - # pass - with gr.Column(scale=2,elem_id='group'): - gr.Markdown("

    🗓️ Start Date

    ") - with gr.Row(): - start_date = gr.Dropdown(choices=years_list,value='2019',label='Year') - start_month = gr.Dropdown(choices=months_list,value=months_list[0],label='Month') - with gr.Column(scale=2,elem_id='group'): - gr.Markdown("

    🗓️ End Date

    ") - with gr.Row(): - end_date = gr.Dropdown(choices=years_list,value='2022',label='Year') - end_month = gr.Dropdown(choices=months_list,value=months_list[0],label='Month') - # with gr.Column(scale=1): - # pass - with gr.Accordion("📄 Templates",open=False): - gr.File(value='columns template.xlsx',label='📊 Claims columns template') - gr.File(value='parameters template.xlsx',label='🛠️ Parameters template',scale=10) - with gr.Row(): - with gr.Column(scale=3): - #with gr.Accordion("Claim files",open=True): - file1 = gr.File(label='📊 Claim File/s',elem_id="center-emoji") - with gr.Column(scale=0): - #with gr.Accordion("Parameters (optional)",open=False): - file2 = gr.File(label='🛠️ Parameters (optional)') - with gr.Row(): - btn = gr.Button("Run") - with gr.Row(): - pass - with gr.Row(): - outfile = gr.File(label='Output',visible=False) - with gr.Row(): - log = gr.Textbox(label='📝 Log') - btn.click(fn=process, inputs=[inp,file1,file2,start_date,end_date,start_month,end_month], outputs=[outfile,log]) - -demo.launch(debug=True) \ No newline at end of file diff --git a/spaces/Xenova/semantic-image-search-client/_next/static/css/09b9f65a1077fd5a.css b/spaces/Xenova/semantic-image-search-client/_next/static/css/09b9f65a1077fd5a.css deleted file mode 100644 index 98824b36a41e7cb0e8ed7c5a0772ec21ca552db5..0000000000000000000000000000000000000000 --- a/spaces/Xenova/semantic-image-search-client/_next/static/css/09b9f65a1077fd5a.css +++ /dev/null @@ -1,3 +0,0 @@ -/* -! tailwindcss v3.3.3 | MIT License | https://tailwindcss.com -*/*,:after,:before{box-sizing:border-box;border:0 solid #e5e7eb}:after,:before{--tw-content:""}html{line-height:1.5;-webkit-text-size-adjust:100%;-moz-tab-size:4;-o-tab-size:4;tab-size:4;font-family:ui-sans-serif,system-ui,-apple-system,BlinkMacSystemFont,Segoe UI,Roboto,Helvetica Neue,Arial,Noto Sans,sans-serif,Apple Color Emoji,Segoe UI Emoji,Segoe UI Symbol,Noto Color Emoji;font-feature-settings:normal;font-variation-settings:normal}body{margin:0;line-height:inherit}hr{height:0;color:inherit;border-top-width:1px}abbr:where([title]){-webkit-text-decoration:underline dotted;text-decoration:underline dotted}h1,h2,h3,h4,h5,h6{font-size:inherit;font-weight:inherit}a{color:inherit;text-decoration:inherit}b,strong{font-weight:bolder}code,kbd,pre,samp{font-family:ui-monospace,SFMono-Regular,Menlo,Monaco,Consolas,Liberation Mono,Courier 
New,monospace;font-size:1em}small{font-size:80%}sub,sup{font-size:75%;line-height:0;position:relative;vertical-align:baseline}sub{bottom:-.25em}sup{top:-.5em}table{text-indent:0;border-color:inherit;border-collapse:collapse}button,input,optgroup,select,textarea{font-family:inherit;font-feature-settings:inherit;font-variation-settings:inherit;font-size:100%;font-weight:inherit;line-height:inherit;color:inherit;margin:0;padding:0}button,select{text-transform:none}[type=button],[type=reset],[type=submit],button{-webkit-appearance:button;background-color:transparent;background-image:none}:-moz-focusring{outline:auto}:-moz-ui-invalid{box-shadow:none}progress{vertical-align:baseline}::-webkit-inner-spin-button,::-webkit-outer-spin-button{height:auto}[type=search]{-webkit-appearance:textfield;outline-offset:-2px}::-webkit-search-decoration{-webkit-appearance:none}::-webkit-file-upload-button{-webkit-appearance:button;font:inherit}summary{display:list-item}blockquote,dd,dl,figure,h1,h2,h3,h4,h5,h6,hr,p,pre{margin:0}fieldset{margin:0}fieldset,legend{padding:0}menu,ol,ul{list-style:none;margin:0;padding:0}dialog{padding:0}textarea{resize:vertical}input::-moz-placeholder,textarea::-moz-placeholder{opacity:1;color:#9ca3af}input::placeholder,textarea::placeholder{opacity:1;color:#9ca3af}[role=button],button{cursor:pointer}:disabled{cursor:default}audio,canvas,embed,iframe,img,object,svg,video{display:block;vertical-align:middle}img,video{max-width:100%;height:auto}[hidden]{display:none}*,:after,:before{--tw-border-spacing-x:0;--tw-border-spacing-y:0;--tw-translate-x:0;--tw-translate-y:0;--tw-rotate:0;--tw-skew-x:0;--tw-skew-y:0;--tw-scale-x:1;--tw-scale-y:1;--tw-pan-x: ;--tw-pan-y: ;--tw-pinch-zoom: ;--tw-scroll-snap-strictness:proximity;--tw-gradient-from-position: ;--tw-gradient-via-position: ;--tw-gradient-to-position: ;--tw-ordinal: ;--tw-slashed-zero: ;--tw-numeric-figure: ;--tw-numeric-spacing: ;--tw-numeric-fraction: ;--tw-ring-inset: ;--tw-ring-offset-width:0px;--tw-ring-offset-color:#fff;--tw-ring-color:rgba(59,130,246,.5);--tw-ring-offset-shadow:0 0 #0000;--tw-ring-shadow:0 0 #0000;--tw-shadow:0 0 #0000;--tw-shadow-colored:0 0 #0000;--tw-blur: ;--tw-brightness: ;--tw-contrast: ;--tw-grayscale: ;--tw-hue-rotate: ;--tw-invert: ;--tw-saturate: ;--tw-sepia: ;--tw-drop-shadow: ;--tw-backdrop-blur: ;--tw-backdrop-brightness: ;--tw-backdrop-contrast: ;--tw-backdrop-grayscale: ;--tw-backdrop-hue-rotate: ;--tw-backdrop-invert: ;--tw-backdrop-opacity: ;--tw-backdrop-saturate: ;--tw-backdrop-sepia: }::backdrop{--tw-border-spacing-x:0;--tw-border-spacing-y:0;--tw-translate-x:0;--tw-translate-y:0;--tw-rotate:0;--tw-skew-x:0;--tw-skew-y:0;--tw-scale-x:1;--tw-scale-y:1;--tw-pan-x: ;--tw-pan-y: ;--tw-pinch-zoom: ;--tw-scroll-snap-strictness:proximity;--tw-gradient-from-position: ;--tw-gradient-via-position: ;--tw-gradient-to-position: ;--tw-ordinal: ;--tw-slashed-zero: ;--tw-numeric-figure: ;--tw-numeric-spacing: ;--tw-numeric-fraction: ;--tw-ring-inset: ;--tw-ring-offset-width:0px;--tw-ring-offset-color:#fff;--tw-ring-color:rgba(59,130,246,.5);--tw-ring-offset-shadow:0 0 #0000;--tw-ring-shadow:0 0 #0000;--tw-shadow:0 0 #0000;--tw-shadow-colored:0 0 #0000;--tw-blur: ;--tw-brightness: ;--tw-contrast: ;--tw-grayscale: ;--tw-hue-rotate: ;--tw-invert: ;--tw-saturate: ;--tw-sepia: ;--tw-drop-shadow: ;--tw-backdrop-blur: ;--tw-backdrop-brightness: ;--tw-backdrop-contrast: ;--tw-backdrop-grayscale: ;--tw-backdrop-hue-rotate: ;--tw-backdrop-invert: ;--tw-backdrop-opacity: ;--tw-backdrop-saturate: 
;--tw-backdrop-sepia: }.pointer-events-none{pointer-events:none}.static{position:static}.fixed{position:fixed}.absolute{position:absolute}.relative{position:relative}.inset-0{inset:0}.inset-y-0{top:0;bottom:0}.bottom-2{bottom:.5rem}.bottom-2\.5{bottom:.625rem}.left-0{left:0}.right-0{right:0}.right-2{right:.5rem}.right-2\.5{right:.625rem}.top-0{top:0}.z-10{z-index:10}.z-30{z-index:30}.mx-auto{margin-left:auto;margin-right:auto}.mb-2{margin-bottom:.5rem}.mb-4{margin-bottom:1rem}.block{display:block}.flex{display:flex}.h-4{height:1rem}.h-5{height:1.25rem}.h-full{height:100%}.w-4{width:1rem}.w-5{width:1.25rem}.w-full{width:100%}.max-w-\[1960px\]{max-width:1960px}.transform{transform:translate(var(--tw-translate-x),var(--tw-translate-y)) rotate(var(--tw-rotate)) skewX(var(--tw-skew-x)) skewY(var(--tw-skew-y)) scaleX(var(--tw-scale-x)) scaleY(var(--tw-scale-y))}.cursor-pointer{cursor:pointer}.columns-2{-moz-columns:2;column-count:2}.items-center{align-items:center}.justify-center{justify-content:center}.gap-2{gap:.5rem}.gap-4{gap:1rem}.rounded-full{border-radius:9999px}.rounded-lg{border-radius:.5rem}.border{border-width:1px}.border-gray-300{--tw-border-opacity:1;border-color:rgb(209 213 219/var(--tw-border-opacity))}.bg-black{--tw-bg-opacity:1;background-color:rgb(0 0 0/var(--tw-bg-opacity))}.bg-black\/50{background-color:rgba(0,0,0,.5)}.bg-blue-700{--tw-bg-opacity:1;background-color:rgb(29 78 216/var(--tw-bg-opacity))}.bg-gray-50{--tw-bg-opacity:1;background-color:rgb(249 250 251/var(--tw-bg-opacity))}.bg-opacity-50{--tw-bg-opacity:0.5}.p-2{padding:.5rem}.p-3{padding:.75rem}.p-4{padding:1rem}.px-4{padding-left:1rem;padding-right:1rem}.py-2{padding-top:.5rem;padding-bottom:.5rem}.pl-10{padding-left:2.5rem}.pl-3{padding-left:.75rem}.text-2xl{font-size:1.5rem;line-height:2rem}.text-sm{font-size:.875rem;line-height:1.25rem}.font-bold{font-weight:700}.font-medium{font-weight:500}.text-gray-500{--tw-text-opacity:1;color:rgb(107 114 128/var(--tw-text-opacity))}.text-gray-900{--tw-text-opacity:1;color:rgb(17 24 39/var(--tw-text-opacity))}.text-white{--tw-text-opacity:1;color:rgb(255 255 255/var(--tw-text-opacity))}.text-white\/75{color:hsla(0,0%,100%,.75)}.blur{--tw-blur:blur(8px)}.blur,.brightness-90{filter:var(--tw-blur) var(--tw-brightness) var(--tw-contrast) var(--tw-grayscale) var(--tw-hue-rotate) var(--tw-invert) var(--tw-saturate) var(--tw-sepia) var(--tw-drop-shadow)}.brightness-90{--tw-brightness:brightness(.9)}.backdrop-blur-2xl{--tw-backdrop-blur:blur(40px)}.backdrop-blur-2xl,.backdrop-blur-lg{-webkit-backdrop-filter:var(--tw-backdrop-blur) var(--tw-backdrop-brightness) var(--tw-backdrop-contrast) var(--tw-backdrop-grayscale) var(--tw-backdrop-hue-rotate) var(--tw-backdrop-invert) var(--tw-backdrop-opacity) var(--tw-backdrop-saturate) var(--tw-backdrop-sepia);backdrop-filter:var(--tw-backdrop-blur) var(--tw-backdrop-brightness) var(--tw-backdrop-contrast) var(--tw-backdrop-grayscale) var(--tw-backdrop-hue-rotate) var(--tw-backdrop-invert) var(--tw-backdrop-opacity) var(--tw-backdrop-saturate) 
var(--tw-backdrop-sepia)}.backdrop-blur-lg{--tw-backdrop-blur:blur(16px)}.transition{transition-property:color,background-color,border-color,text-decoration-color,fill,stroke,opacity,box-shadow,transform,filter,-webkit-backdrop-filter;transition-property:color,background-color,border-color,text-decoration-color,fill,stroke,opacity,box-shadow,transform,filter,backdrop-filter;transition-property:color,background-color,border-color,text-decoration-color,fill,stroke,opacity,box-shadow,transform,filter,backdrop-filter,-webkit-backdrop-filter;transition-timing-function:cubic-bezier(.4,0,.2,1);transition-duration:.15s}.will-change-auto{will-change:auto}:root{--foreground-rgb:255,255,255;--background-start-rgb:0,0,0;--background-end-rgb:0,0,0}body{color:rgb(var(--foreground-rgb));background:linear-gradient(to bottom,transparent,rgb(var(--background-end-rgb))) rgb(var(--background-start-rgb))}.after\:pointer-events-none:after{content:var(--tw-content);pointer-events:none}.after\:absolute:after{content:var(--tw-content);position:absolute}.after\:inset-0:after{content:var(--tw-content);inset:0}.after\:rounded-lg:after{content:var(--tw-content);border-radius:.5rem}.after\:shadow-highlight:after{content:var(--tw-content);--tw-shadow:inset 0 0 0 1px hsla(0,0%,100%,.1);--tw-shadow-colored:inset 0 0 0 1px var(--tw-shadow-color);box-shadow:var(--tw-ring-offset-shadow,0 0 #0000),var(--tw-ring-shadow,0 0 #0000),var(--tw-shadow)}.hover\:bg-black\/75:hover{background-color:rgba(0,0,0,.75)}.hover\:bg-blue-800:hover{--tw-bg-opacity:1;background-color:rgb(30 64 175/var(--tw-bg-opacity))}.hover\:text-white:hover{--tw-text-opacity:1;color:rgb(255 255 255/var(--tw-text-opacity))}.focus\:border-blue-500:focus{--tw-border-opacity:1;border-color:rgb(59 130 246/var(--tw-border-opacity))}.focus\:outline-none:focus{outline:2px solid transparent;outline-offset:2px}.focus\:ring-4:focus{--tw-ring-offset-shadow:var(--tw-ring-inset) 0 0 0 var(--tw-ring-offset-width) var(--tw-ring-offset-color);--tw-ring-shadow:var(--tw-ring-inset) 0 0 0 calc(4px + var(--tw-ring-offset-width)) var(--tw-ring-color);box-shadow:var(--tw-ring-offset-shadow),var(--tw-ring-shadow),var(--tw-shadow,0 0 #0000)}.focus\:ring-blue-300:focus{--tw-ring-opacity:1;--tw-ring-color:rgb(147 197 253/var(--tw-ring-opacity))}.focus\:ring-blue-500:focus{--tw-ring-opacity:1;--tw-ring-color:rgb(59 130 246/var(--tw-ring-opacity))}.group:hover .group-hover\:brightness-110{--tw-brightness:brightness(1.1);filter:var(--tw-blur) var(--tw-brightness) var(--tw-contrast) var(--tw-grayscale) var(--tw-hue-rotate) var(--tw-invert) var(--tw-saturate) var(--tw-sepia) var(--tw-drop-shadow)}@media (prefers-color-scheme:dark){.dark\:border-gray-600{--tw-border-opacity:1;border-color:rgb(75 85 99/var(--tw-border-opacity))}.dark\:bg-blue-600{--tw-bg-opacity:1;background-color:rgb(37 99 235/var(--tw-bg-opacity))}.dark\:bg-gray-700{--tw-bg-opacity:1;background-color:rgb(55 65 81/var(--tw-bg-opacity))}.dark\:text-gray-400{--tw-text-opacity:1;color:rgb(156 163 175/var(--tw-text-opacity))}.dark\:text-white{--tw-text-opacity:1;color:rgb(255 255 255/var(--tw-text-opacity))}.dark\:placeholder-gray-400::-moz-placeholder{--tw-placeholder-opacity:1;color:rgb(156 163 175/var(--tw-placeholder-opacity))}.dark\:placeholder-gray-400::placeholder{--tw-placeholder-opacity:1;color:rgb(156 163 175/var(--tw-placeholder-opacity))}.dark\:hover\:bg-blue-700:hover{--tw-bg-opacity:1;background-color:rgb(29 78 216/var(--tw-bg-opacity))}.dark\:focus\:border-blue-500:focus{--tw-border-opacity:1;border-color:rgb(59 
130 246/var(--tw-border-opacity))}.dark\:focus\:ring-blue-500:focus{--tw-ring-opacity:1;--tw-ring-color:rgb(59 130 246/var(--tw-ring-opacity))}.dark\:focus\:ring-blue-800:focus{--tw-ring-opacity:1;--tw-ring-color:rgb(30 64 175/var(--tw-ring-opacity))}}@media (min-width:640px){.sm\:columns-3{-moz-columns:3;column-count:3}}@media (min-width:1280px){.xl\:columns-4{-moz-columns:4;column-count:4}}@media (min-width:1536px){.\32xl\:columns-5{-moz-columns:5;column-count:5}}@font-face{font-family:__Inter_e66fe9;font-style:normal;font-weight:100 900;font-display:swap;src:url(/_next/static/media/ec159349637c90ad-s.woff2) format("woff2");unicode-range:U+0460-052f,U+1c80-1c88,U+20b4,U+2de0-2dff,U+a640-a69f,U+fe2e-fe2f}@font-face{font-family:__Inter_e66fe9;font-style:normal;font-weight:100 900;font-display:swap;src:url(/_next/static/media/513657b02c5c193f-s.woff2) format("woff2");unicode-range:U+0301,U+0400-045f,U+0490-0491,U+04b0-04b1,U+2116}@font-face{font-family:__Inter_e66fe9;font-style:normal;font-weight:100 900;font-display:swap;src:url(/_next/static/media/fd4db3eb5472fc27-s.woff2) format("woff2");unicode-range:U+1f??}@font-face{font-family:__Inter_e66fe9;font-style:normal;font-weight:100 900;font-display:swap;src:url(/_next/static/media/51ed15f9841b9f9d-s.woff2) format("woff2");unicode-range:U+0370-03ff}@font-face{font-family:__Inter_e66fe9;font-style:normal;font-weight:100 900;font-display:swap;src:url(/_next/static/media/05a31a2ca4975f99-s.woff2) format("woff2");unicode-range:U+0102-0103,U+0110-0111,U+0128-0129,U+0168-0169,U+01a0-01a1,U+01af-01b0,U+0300-0301,U+0303-0304,U+0308-0309,U+0323,U+0329,U+1ea0-1ef9,U+20ab}@font-face{font-family:__Inter_e66fe9;font-style:normal;font-weight:100 900;font-display:swap;src:url(/_next/static/media/d6b16ce4a6175f26-s.woff2) format("woff2");unicode-range:U+0100-02af,U+0304,U+0308,U+0329,U+1e00-1e9f,U+1ef2-1eff,U+2020,U+20a0-20ab,U+20ad-20cf,U+2113,U+2c60-2c7f,U+a720-a7ff}@font-face{font-family:__Inter_e66fe9;font-style:normal;font-weight:100 900;font-display:swap;src:url(/_next/static/media/c9a5bc6a7c948fb0-s.p.woff2) format("woff2");unicode-range:U+00??,U+0131,U+0152-0153,U+02bb-02bc,U+02c6,U+02da,U+02dc,U+0304,U+0308,U+0329,U+2000-206f,U+2074,U+20ac,U+2122,U+2191,U+2193,U+2212,U+2215,U+feff,U+fffd}@font-face{font-family:__Inter_Fallback_e66fe9;src:local("Arial");ascent-override:90.20%;descent-override:22.48%;line-gap-override:0.00%;size-adjust:107.40%}.__className_e66fe9{font-family:__Inter_e66fe9,__Inter_Fallback_e66fe9;font-style:normal} \ No newline at end of file diff --git a/spaces/XzJosh/Gun-Bert-VITS2/text/__init__.py b/spaces/XzJosh/Gun-Bert-VITS2/text/__init__.py deleted file mode 100644 index 7566bf351ca9b95af9cdc6d729557a9da083800f..0000000000000000000000000000000000000000 --- a/spaces/XzJosh/Gun-Bert-VITS2/text/__init__.py +++ /dev/null @@ -1,28 +0,0 @@ -from text.symbols import * - - -_symbol_to_id = {s: i for i, s in enumerate(symbols)} - -def cleaned_text_to_sequence(cleaned_text, tones, language): - '''Converts a string of text to a sequence of IDs corresponding to the symbols in the text. 
- Args: - text: string to convert to a sequence - Returns: - List of integers corresponding to the symbols in the text - ''' - phones = [_symbol_to_id[symbol] for symbol in cleaned_text] - tone_start = language_tone_start_map[language] - tones = [i + tone_start for i in tones] - lang_id = language_id_map[language] - lang_ids = [lang_id for i in phones] - return phones, tones, lang_ids - -def get_bert(norm_text, word2ph, language): - from .chinese_bert import get_bert_feature as zh_bert - from .english_bert_mock import get_bert_feature as en_bert - lang_bert_func_map = { - 'ZH': zh_bert, - 'EN': en_bert - } - bert = lang_bert_func_map[language](norm_text, word2ph) - return bert diff --git a/spaces/YanzBotz/Stablediffusion-YanzBotz/README.md b/spaces/YanzBotz/Stablediffusion-YanzBotz/README.md deleted file mode 100644 index 655bff2ed4a07fa41b3874cccdb034814f25c5cc..0000000000000000000000000000000000000000 --- a/spaces/YanzBotz/Stablediffusion-YanzBotz/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: fast-stable-diffusion -emoji: 🔥 -colorFrom: yellow -colorTo: gray -sdk: gradio -sdk_version: 3.42.0 -app_file: app.py -pinned: false -license: mit ---- - -Prodia's Stable Diffusion Space. diff --git a/spaces/Yiqin/ChatVID/model/vision/grit_src/third_party/CenterNet2/.github/ISSUE_TEMPLATE/bugs.md b/spaces/Yiqin/ChatVID/model/vision/grit_src/third_party/CenterNet2/.github/ISSUE_TEMPLATE/bugs.md deleted file mode 100644 index d0235c708ab6b0cdadb5865110e9e8c22ca313aa..0000000000000000000000000000000000000000 --- a/spaces/Yiqin/ChatVID/model/vision/grit_src/third_party/CenterNet2/.github/ISSUE_TEMPLATE/bugs.md +++ /dev/null @@ -1,38 +0,0 @@ ---- -name: "🐛 Bugs" -about: Report bugs in detectron2 -title: Please read & provide the following - ---- - -## Instructions To Reproduce the 🐛 Bug: -1. Full runnable code or full changes you made: -``` -If making changes to the project itself, please use output of the following command: -git rev-parse HEAD; git diff - - -``` -2. What exact command you run: -3. __Full logs__ or other relevant observations: -``` - -``` -4. please simplify the steps as much as possible so they do not require additional resources to - run, such as a private dataset. - -## Expected behavior: - -If there are no obvious error in "full logs" provided above, -please tell us the expected behavior. - -## Environment: - -Provide your environment information using the following command: -``` -wget -nc -q https://github.com/facebookresearch/detectron2/raw/main/detectron2/utils/collect_env.py && python collect_env.py -``` - -If your issue looks like an installation issue / environment issue, -please first try to solve it yourself with the instructions in -https://detectron2.readthedocs.io/tutorials/install.html#common-installation-issues diff --git a/spaces/Yiqin/ChatVID/model/vision/grit_src/third_party/CenterNet2/detectron2/config/defaults.py b/spaces/Yiqin/ChatVID/model/vision/grit_src/third_party/CenterNet2/detectron2/config/defaults.py deleted file mode 100644 index 848486dfe91a62559e6ae35120a4dac26d4bd66d..0000000000000000000000000000000000000000 --- a/spaces/Yiqin/ChatVID/model/vision/grit_src/third_party/CenterNet2/detectron2/config/defaults.py +++ /dev/null @@ -1,635 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -from .config import CfgNode as CN - -# NOTE: given the new config system -# (https://detectron2.readthedocs.io/en/latest/tutorials/lazyconfigs.html), -# we will stop adding new functionalities to default CfgNode. 
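The CfgNode defaults that follow are normally consumed through detectron2's get_cfg() helper rather than by importing _C directly. A minimal sketch of that pattern, assuming a working detectron2 installation; the checkpoint path and size override are illustrative placeholders, not values taken from this file:

from detectron2.config import get_cfg

cfg = get_cfg()                        # returns a copy of the _C defaults defined in this module
cfg.MODEL.WEIGHTS = "model_final.pth"  # illustrative checkpoint path (assumed, not from this file)
cfg.INPUT.MIN_SIZE_TEST = 640          # override one of the INPUT defaults defined below
cfg.freeze()                           # lock the config before building a model from it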
- -# ----------------------------------------------------------------------------- -# Convention about Training / Test specific parameters -# ----------------------------------------------------------------------------- -# Whenever an argument can be either used for training or for testing, the -# corresponding name will be post-fixed by a _TRAIN for a training parameter, -# or _TEST for a test-specific parameter. -# For example, the number of images during training will be -# IMAGES_PER_BATCH_TRAIN, while the number of images for testing will be -# IMAGES_PER_BATCH_TEST - -# ----------------------------------------------------------------------------- -# Config definition -# ----------------------------------------------------------------------------- - -_C = CN() - -# The version number, to upgrade from old configs to new ones if any -# changes happen. It's recommended to keep a VERSION in your config file. -_C.VERSION = 2 - -_C.MODEL = CN() -_C.MODEL.LOAD_PROPOSALS = False -_C.MODEL.MASK_ON = False -_C.MODEL.KEYPOINT_ON = False -_C.MODEL.DEVICE = "cuda" -_C.MODEL.META_ARCHITECTURE = "GeneralizedRCNN" - -# Path (a file path, or URL like detectron2://.., https://..) to a checkpoint file -# to be loaded to the model. You can find available models in the model zoo. -_C.MODEL.WEIGHTS = "" - -# Values to be used for image normalization (BGR order, since INPUT.FORMAT defaults to BGR). -# To train on images of different number of channels, just set different mean & std. -# Default values are the mean pixel value from ImageNet: [103.53, 116.28, 123.675] -_C.MODEL.PIXEL_MEAN = [103.530, 116.280, 123.675] -# When using pre-trained models in Detectron1 or any MSRA models, -# std has been absorbed into its conv1 weights, so the std needs to be set 1. -# Otherwise, you can use [57.375, 57.120, 58.395] (ImageNet std) -_C.MODEL.PIXEL_STD = [1.0, 1.0, 1.0] - - -# ----------------------------------------------------------------------------- -# INPUT -# ----------------------------------------------------------------------------- -_C.INPUT = CN() -# By default, {MIN,MAX}_SIZE options are used in transforms.ResizeShortestEdge. -# Please refer to ResizeShortestEdge for detailed definition. -# Size of the smallest side of the image during training -_C.INPUT.MIN_SIZE_TRAIN = (800,) -# Sample size of smallest side by choice or random selection from range give by -# INPUT.MIN_SIZE_TRAIN -_C.INPUT.MIN_SIZE_TRAIN_SAMPLING = "choice" -# Maximum size of the side of the image during training -_C.INPUT.MAX_SIZE_TRAIN = 1333 -# Size of the smallest side of the image during testing. Set to zero to disable resize in testing. -_C.INPUT.MIN_SIZE_TEST = 800 -# Maximum size of the side of the image during testing -_C.INPUT.MAX_SIZE_TEST = 1333 -# Mode for flipping images used in data augmentation during training -# choose one of ["horizontal, "vertical", "none"] -_C.INPUT.RANDOM_FLIP = "horizontal" - -# `True` if cropping is used for data augmentation during training -_C.INPUT.CROP = CN({"ENABLED": False}) -# Cropping type. See documentation of `detectron2.data.transforms.RandomCrop` for explanation. -_C.INPUT.CROP.TYPE = "relative_range" -# Size of crop in range (0, 1] if CROP.TYPE is "relative" or "relative_range" and in number of -# pixels if CROP.TYPE is "absolute" -_C.INPUT.CROP.SIZE = [0.9, 0.9] - - -# Whether the model needs RGB, YUV, HSV etc. 
-# Should be one of the modes defined here, as we use PIL to read the image: -# https://pillow.readthedocs.io/en/stable/handbook/concepts.html#concept-modes -# with BGR being the one exception. One can set image format to BGR, we will -# internally use RGB for conversion and flip the channels over -_C.INPUT.FORMAT = "BGR" -# The ground truth mask format that the model will use. -# Mask R-CNN supports either "polygon" or "bitmask" as ground truth. -_C.INPUT.MASK_FORMAT = "polygon" # alternative: "bitmask" - - -# ----------------------------------------------------------------------------- -# Dataset -# ----------------------------------------------------------------------------- -_C.DATASETS = CN() -# List of the dataset names for training. Must be registered in DatasetCatalog -# Samples from these datasets will be merged and used as one dataset. -_C.DATASETS.TRAIN = () -# List of the pre-computed proposal files for training, which must be consistent -# with datasets listed in DATASETS.TRAIN. -_C.DATASETS.PROPOSAL_FILES_TRAIN = () -# Number of top scoring precomputed proposals to keep for training -_C.DATASETS.PRECOMPUTED_PROPOSAL_TOPK_TRAIN = 2000 -# List of the dataset names for testing. Must be registered in DatasetCatalog -_C.DATASETS.TEST = () -# List of the pre-computed proposal files for test, which must be consistent -# with datasets listed in DATASETS.TEST. -_C.DATASETS.PROPOSAL_FILES_TEST = () -# Number of top scoring precomputed proposals to keep for test -_C.DATASETS.PRECOMPUTED_PROPOSAL_TOPK_TEST = 1000 - -# ----------------------------------------------------------------------------- -# DataLoader -# ----------------------------------------------------------------------------- -_C.DATALOADER = CN() -# Number of data loading threads -_C.DATALOADER.NUM_WORKERS = 4 -# If True, each batch should contain only images for which the aspect ratio -# is compatible. This groups portrait images together, and landscape images -# are not batched with portrait images. -_C.DATALOADER.ASPECT_RATIO_GROUPING = True -# Options: TrainingSampler, RepeatFactorTrainingSampler -_C.DATALOADER.SAMPLER_TRAIN = "TrainingSampler" -# Repeat threshold for RepeatFactorTrainingSampler -_C.DATALOADER.REPEAT_THRESHOLD = 0.0 -# Tf True, when working on datasets that have instance annotations, the -# training dataloader will filter out images without associated annotations -_C.DATALOADER.FILTER_EMPTY_ANNOTATIONS = True - -# ---------------------------------------------------------------------------- # -# Backbone options -# ---------------------------------------------------------------------------- # -_C.MODEL.BACKBONE = CN() - -_C.MODEL.BACKBONE.NAME = "build_resnet_backbone" -# Freeze the first several stages so they are not trained. -# There are 5 stages in ResNet. The first is a convolution, and the following -# stages are each group of residual blocks. -_C.MODEL.BACKBONE.FREEZE_AT = 2 - - -# ---------------------------------------------------------------------------- # -# FPN options -# ---------------------------------------------------------------------------- # -_C.MODEL.FPN = CN() -# Names of the input feature maps to be used by FPN -# They must have contiguous power of 2 strides -# e.g., ["res2", "res3", "res4", "res5"] -_C.MODEL.FPN.IN_FEATURES = [] -_C.MODEL.FPN.OUT_CHANNELS = 256 - -# Options: "" (no norm), "GN" -_C.MODEL.FPN.NORM = "" - -# Types for fusing the FPN top-down and lateral features. 
Can be either "sum" or "avg" -_C.MODEL.FPN.FUSE_TYPE = "sum" - - -# ---------------------------------------------------------------------------- # -# Proposal generator options -# ---------------------------------------------------------------------------- # -_C.MODEL.PROPOSAL_GENERATOR = CN() -# Current proposal generators include "RPN", "RRPN" and "PrecomputedProposals" -_C.MODEL.PROPOSAL_GENERATOR.NAME = "RPN" -# Proposal height and width both need to be greater than MIN_SIZE -# (a the scale used during training or inference) -_C.MODEL.PROPOSAL_GENERATOR.MIN_SIZE = 0 - - -# ---------------------------------------------------------------------------- # -# Anchor generator options -# ---------------------------------------------------------------------------- # -_C.MODEL.ANCHOR_GENERATOR = CN() -# The generator can be any name in the ANCHOR_GENERATOR registry -_C.MODEL.ANCHOR_GENERATOR.NAME = "DefaultAnchorGenerator" -# Anchor sizes (i.e. sqrt of area) in absolute pixels w.r.t. the network input. -# Format: list[list[float]]. SIZES[i] specifies the list of sizes to use for -# IN_FEATURES[i]; len(SIZES) must be equal to len(IN_FEATURES) or 1. -# When len(SIZES) == 1, SIZES[0] is used for all IN_FEATURES. -_C.MODEL.ANCHOR_GENERATOR.SIZES = [[32, 64, 128, 256, 512]] -# Anchor aspect ratios. For each area given in `SIZES`, anchors with different aspect -# ratios are generated by an anchor generator. -# Format: list[list[float]]. ASPECT_RATIOS[i] specifies the list of aspect ratios (H/W) -# to use for IN_FEATURES[i]; len(ASPECT_RATIOS) == len(IN_FEATURES) must be true, -# or len(ASPECT_RATIOS) == 1 is true and aspect ratio list ASPECT_RATIOS[0] is used -# for all IN_FEATURES. -_C.MODEL.ANCHOR_GENERATOR.ASPECT_RATIOS = [[0.5, 1.0, 2.0]] -# Anchor angles. -# list[list[float]], the angle in degrees, for each input feature map. -# ANGLES[i] specifies the list of angles for IN_FEATURES[i]. -_C.MODEL.ANCHOR_GENERATOR.ANGLES = [[-90, 0, 90]] -# Relative offset between the center of the first anchor and the top-left corner of the image -# Value has to be in [0, 1). Recommend to use 0.5, which means half stride. -# The value is not expected to affect model accuracy. -_C.MODEL.ANCHOR_GENERATOR.OFFSET = 0.0 - -# ---------------------------------------------------------------------------- # -# RPN options -# ---------------------------------------------------------------------------- # -_C.MODEL.RPN = CN() -_C.MODEL.RPN.HEAD_NAME = "StandardRPNHead" # used by RPN_HEAD_REGISTRY - -# Names of the input feature maps to be used by RPN -# e.g., ["p2", "p3", "p4", "p5", "p6"] for FPN -_C.MODEL.RPN.IN_FEATURES = ["res4"] -# Remove RPN anchors that go outside the image by BOUNDARY_THRESH pixels -# Set to -1 or a large value, e.g. 
100000, to disable pruning anchors -_C.MODEL.RPN.BOUNDARY_THRESH = -1 -# IOU overlap ratios [BG_IOU_THRESHOLD, FG_IOU_THRESHOLD] -# Minimum overlap required between an anchor and ground-truth box for the -# (anchor, gt box) pair to be a positive example (IoU >= FG_IOU_THRESHOLD -# ==> positive RPN example: 1) -# Maximum overlap allowed between an anchor and ground-truth box for the -# (anchor, gt box) pair to be a negative examples (IoU < BG_IOU_THRESHOLD -# ==> negative RPN example: 0) -# Anchors with overlap in between (BG_IOU_THRESHOLD <= IoU < FG_IOU_THRESHOLD) -# are ignored (-1) -_C.MODEL.RPN.IOU_THRESHOLDS = [0.3, 0.7] -_C.MODEL.RPN.IOU_LABELS = [0, -1, 1] -# Number of regions per image used to train RPN -_C.MODEL.RPN.BATCH_SIZE_PER_IMAGE = 256 -# Target fraction of foreground (positive) examples per RPN minibatch -_C.MODEL.RPN.POSITIVE_FRACTION = 0.5 -# Options are: "smooth_l1", "giou", "diou", "ciou" -_C.MODEL.RPN.BBOX_REG_LOSS_TYPE = "smooth_l1" -_C.MODEL.RPN.BBOX_REG_LOSS_WEIGHT = 1.0 -# Weights on (dx, dy, dw, dh) for normalizing RPN anchor regression targets -_C.MODEL.RPN.BBOX_REG_WEIGHTS = (1.0, 1.0, 1.0, 1.0) -# The transition point from L1 to L2 loss. Set to 0.0 to make the loss simply L1. -_C.MODEL.RPN.SMOOTH_L1_BETA = 0.0 -_C.MODEL.RPN.LOSS_WEIGHT = 1.0 -# Number of top scoring RPN proposals to keep before applying NMS -# When FPN is used, this is *per FPN level* (not total) -_C.MODEL.RPN.PRE_NMS_TOPK_TRAIN = 12000 -_C.MODEL.RPN.PRE_NMS_TOPK_TEST = 6000 -# Number of top scoring RPN proposals to keep after applying NMS -# When FPN is used, this limit is applied per level and then again to the union -# of proposals from all levels -# NOTE: When FPN is used, the meaning of this config is different from Detectron1. -# It means per-batch topk in Detectron1, but per-image topk here. -# See the "find_top_rpn_proposals" function for details. -_C.MODEL.RPN.POST_NMS_TOPK_TRAIN = 2000 -_C.MODEL.RPN.POST_NMS_TOPK_TEST = 1000 -# NMS threshold used on RPN proposals -_C.MODEL.RPN.NMS_THRESH = 0.7 -# Set this to -1 to use the same number of output channels as input channels. -_C.MODEL.RPN.CONV_DIMS = [-1] - -# ---------------------------------------------------------------------------- # -# ROI HEADS options -# ---------------------------------------------------------------------------- # -_C.MODEL.ROI_HEADS = CN() -_C.MODEL.ROI_HEADS.NAME = "Res5ROIHeads" -# Number of foreground classes -_C.MODEL.ROI_HEADS.NUM_CLASSES = 80 -# Names of the input feature maps to be used by ROI heads -# Currently all heads (box, mask, ...) use the same input feature map list -# e.g., ["p2", "p3", "p4", "p5"] is commonly used for FPN -_C.MODEL.ROI_HEADS.IN_FEATURES = ["res4"] -# IOU overlap ratios [IOU_THRESHOLD] -# Overlap threshold for an RoI to be considered background (if < IOU_THRESHOLD) -# Overlap threshold for an RoI to be considered foreground (if >= IOU_THRESHOLD) -_C.MODEL.ROI_HEADS.IOU_THRESHOLDS = [0.5] -_C.MODEL.ROI_HEADS.IOU_LABELS = [0, 1] -# RoI minibatch size *per image* (number of regions of interest [ROIs]) during training -# Total number of RoIs per training minibatch = -# ROI_HEADS.BATCH_SIZE_PER_IMAGE * SOLVER.IMS_PER_BATCH -# E.g., a common configuration is: 512 * 16 = 8192 -_C.MODEL.ROI_HEADS.BATCH_SIZE_PER_IMAGE = 512 -# Target fraction of RoI minibatch that is labeled foreground (i.e. 
class > 0) -_C.MODEL.ROI_HEADS.POSITIVE_FRACTION = 0.25 - -# Only used on test mode - -# Minimum score threshold (assuming scores in a [0, 1] range); a value chosen to -# balance obtaining high recall with not having too many low precision -# detections that will slow down inference post processing steps (like NMS) -# A default threshold of 0.0 increases AP by ~0.2-0.3 but significantly slows down -# inference. -_C.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.05 -# Overlap threshold used for non-maximum suppression (suppress boxes with -# IoU >= this threshold) -_C.MODEL.ROI_HEADS.NMS_THRESH_TEST = 0.5 -# If True, augment proposals with ground-truth boxes before sampling proposals to -# train ROI heads. -_C.MODEL.ROI_HEADS.PROPOSAL_APPEND_GT = True - -# ---------------------------------------------------------------------------- # -# Box Head -# ---------------------------------------------------------------------------- # -_C.MODEL.ROI_BOX_HEAD = CN() -# C4 don't use head name option -# Options for non-C4 models: FastRCNNConvFCHead, -_C.MODEL.ROI_BOX_HEAD.NAME = "" -# Options are: "smooth_l1", "giou", "diou", "ciou" -_C.MODEL.ROI_BOX_HEAD.BBOX_REG_LOSS_TYPE = "smooth_l1" -# The final scaling coefficient on the box regression loss, used to balance the magnitude of its -# gradients with other losses in the model. See also `MODEL.ROI_KEYPOINT_HEAD.LOSS_WEIGHT`. -_C.MODEL.ROI_BOX_HEAD.BBOX_REG_LOSS_WEIGHT = 1.0 -# Default weights on (dx, dy, dw, dh) for normalizing bbox regression targets -# These are empirically chosen to approximately lead to unit variance targets -_C.MODEL.ROI_BOX_HEAD.BBOX_REG_WEIGHTS = (10.0, 10.0, 5.0, 5.0) -# The transition point from L1 to L2 loss. Set to 0.0 to make the loss simply L1. -_C.MODEL.ROI_BOX_HEAD.SMOOTH_L1_BETA = 0.0 -_C.MODEL.ROI_BOX_HEAD.POOLER_RESOLUTION = 14 -_C.MODEL.ROI_BOX_HEAD.POOLER_SAMPLING_RATIO = 0 -# Type of pooling operation applied to the incoming feature map for each RoI -_C.MODEL.ROI_BOX_HEAD.POOLER_TYPE = "ROIAlignV2" - -_C.MODEL.ROI_BOX_HEAD.NUM_FC = 0 -# Hidden layer dimension for FC layers in the RoI box head -_C.MODEL.ROI_BOX_HEAD.FC_DIM = 1024 -_C.MODEL.ROI_BOX_HEAD.NUM_CONV = 0 -# Channel dimension for Conv layers in the RoI box head -_C.MODEL.ROI_BOX_HEAD.CONV_DIM = 256 -# Normalization method for the convolution layers. -# Options: "" (no norm), "GN", "SyncBN". -_C.MODEL.ROI_BOX_HEAD.NORM = "" -# Whether to use class agnostic for bbox regression -_C.MODEL.ROI_BOX_HEAD.CLS_AGNOSTIC_BBOX_REG = False -# If true, RoI heads use bounding boxes predicted by the box head rather than proposal boxes. -_C.MODEL.ROI_BOX_HEAD.TRAIN_ON_PRED_BOXES = False - -# ---------------------------------------------------------------------------- # -# Cascaded Box Head -# ---------------------------------------------------------------------------- # -_C.MODEL.ROI_BOX_CASCADE_HEAD = CN() -# The number of cascade stages is implicitly defined by the length of the following two configs. 
-_C.MODEL.ROI_BOX_CASCADE_HEAD.BBOX_REG_WEIGHTS = ( - (10.0, 10.0, 5.0, 5.0), - (20.0, 20.0, 10.0, 10.0), - (30.0, 30.0, 15.0, 15.0), -) -_C.MODEL.ROI_BOX_CASCADE_HEAD.IOUS = (0.5, 0.6, 0.7) - - -# ---------------------------------------------------------------------------- # -# Mask Head -# ---------------------------------------------------------------------------- # -_C.MODEL.ROI_MASK_HEAD = CN() -_C.MODEL.ROI_MASK_HEAD.NAME = "MaskRCNNConvUpsampleHead" -_C.MODEL.ROI_MASK_HEAD.POOLER_RESOLUTION = 14 -_C.MODEL.ROI_MASK_HEAD.POOLER_SAMPLING_RATIO = 0 -_C.MODEL.ROI_MASK_HEAD.NUM_CONV = 0 # The number of convs in the mask head -_C.MODEL.ROI_MASK_HEAD.CONV_DIM = 256 -# Normalization method for the convolution layers. -# Options: "" (no norm), "GN", "SyncBN". -_C.MODEL.ROI_MASK_HEAD.NORM = "" -# Whether to use class agnostic for mask prediction -_C.MODEL.ROI_MASK_HEAD.CLS_AGNOSTIC_MASK = False -# Type of pooling operation applied to the incoming feature map for each RoI -_C.MODEL.ROI_MASK_HEAD.POOLER_TYPE = "ROIAlignV2" - - -# ---------------------------------------------------------------------------- # -# Keypoint Head -# ---------------------------------------------------------------------------- # -_C.MODEL.ROI_KEYPOINT_HEAD = CN() -_C.MODEL.ROI_KEYPOINT_HEAD.NAME = "KRCNNConvDeconvUpsampleHead" -_C.MODEL.ROI_KEYPOINT_HEAD.POOLER_RESOLUTION = 14 -_C.MODEL.ROI_KEYPOINT_HEAD.POOLER_SAMPLING_RATIO = 0 -_C.MODEL.ROI_KEYPOINT_HEAD.CONV_DIMS = tuple(512 for _ in range(8)) -_C.MODEL.ROI_KEYPOINT_HEAD.NUM_KEYPOINTS = 17 # 17 is the number of keypoints in COCO. - -# Images with too few (or no) keypoints are excluded from training. -_C.MODEL.ROI_KEYPOINT_HEAD.MIN_KEYPOINTS_PER_IMAGE = 1 -# Normalize by the total number of visible keypoints in the minibatch if True. -# Otherwise, normalize by the total number of keypoints that could ever exist -# in the minibatch. -# The keypoint softmax loss is only calculated on visible keypoints. -# Since the number of visible keypoints can vary significantly between -# minibatches, this has the effect of up-weighting the importance of -# minibatches with few visible keypoints. (Imagine the extreme case of -# only one visible keypoint versus N: in the case of N, each one -# contributes 1/N to the gradient compared to the single keypoint -# determining the gradient direction). Instead, we can normalize the -# loss by the total number of keypoints, if it were the case that all -# keypoints were visible in a full minibatch. (Returning to the example, -# this means that the one visible keypoint contributes as much as each -# of the N keypoints.) -_C.MODEL.ROI_KEYPOINT_HEAD.NORMALIZE_LOSS_BY_VISIBLE_KEYPOINTS = True -# Multi-task loss weight to use for keypoints -# Recommended values: -# - use 1.0 if NORMALIZE_LOSS_BY_VISIBLE_KEYPOINTS is True -# - use 4.0 if NORMALIZE_LOSS_BY_VISIBLE_KEYPOINTS is False -_C.MODEL.ROI_KEYPOINT_HEAD.LOSS_WEIGHT = 1.0 -# Type of pooling operation applied to the incoming feature map for each RoI -_C.MODEL.ROI_KEYPOINT_HEAD.POOLER_TYPE = "ROIAlignV2" - -# ---------------------------------------------------------------------------- # -# Semantic Segmentation Head -# ---------------------------------------------------------------------------- # -_C.MODEL.SEM_SEG_HEAD = CN() -_C.MODEL.SEM_SEG_HEAD.NAME = "SemSegFPNHead" -_C.MODEL.SEM_SEG_HEAD.IN_FEATURES = ["p2", "p3", "p4", "p5"] -# Label in the semantic segmentation ground truth that is ignored, i.e., no loss is calculated for -# the correposnding pixel. 
-_C.MODEL.SEM_SEG_HEAD.IGNORE_VALUE = 255 -# Number of classes in the semantic segmentation head -_C.MODEL.SEM_SEG_HEAD.NUM_CLASSES = 54 -# Number of channels in the 3x3 convs inside semantic-FPN heads. -_C.MODEL.SEM_SEG_HEAD.CONVS_DIM = 128 -# Outputs from semantic-FPN heads are up-scaled to the COMMON_STRIDE stride. -_C.MODEL.SEM_SEG_HEAD.COMMON_STRIDE = 4 -# Normalization method for the convolution layers. Options: "" (no norm), "GN". -_C.MODEL.SEM_SEG_HEAD.NORM = "GN" -_C.MODEL.SEM_SEG_HEAD.LOSS_WEIGHT = 1.0 - -_C.MODEL.PANOPTIC_FPN = CN() -# Scaling of all losses from instance detection / segmentation head. -_C.MODEL.PANOPTIC_FPN.INSTANCE_LOSS_WEIGHT = 1.0 - -# options when combining instance & semantic segmentation outputs -_C.MODEL.PANOPTIC_FPN.COMBINE = CN({"ENABLED": True}) # "COMBINE.ENABLED" is deprecated & not used -_C.MODEL.PANOPTIC_FPN.COMBINE.OVERLAP_THRESH = 0.5 -_C.MODEL.PANOPTIC_FPN.COMBINE.STUFF_AREA_LIMIT = 4096 -_C.MODEL.PANOPTIC_FPN.COMBINE.INSTANCES_CONFIDENCE_THRESH = 0.5 - - -# ---------------------------------------------------------------------------- # -# RetinaNet Head -# ---------------------------------------------------------------------------- # -_C.MODEL.RETINANET = CN() - -# This is the number of foreground classes. -_C.MODEL.RETINANET.NUM_CLASSES = 80 - -_C.MODEL.RETINANET.IN_FEATURES = ["p3", "p4", "p5", "p6", "p7"] - -# Convolutions to use in the cls and bbox tower -# NOTE: this doesn't include the last conv for logits -_C.MODEL.RETINANET.NUM_CONVS = 4 - -# IoU overlap ratio [bg, fg] for labeling anchors. -# Anchors with < bg are labeled negative (0) -# Anchors with >= bg and < fg are ignored (-1) -# Anchors with >= fg are labeled positive (1) -_C.MODEL.RETINANET.IOU_THRESHOLDS = [0.4, 0.5] -_C.MODEL.RETINANET.IOU_LABELS = [0, -1, 1] - -# Prior prob for rare case (i.e. foreground) at the beginning of training. -# This is used to set the bias for the logits layer of the classifier subnet. -# This improves training stability in the case of heavy class imbalance. 
-_C.MODEL.RETINANET.PRIOR_PROB = 0.01 - -# Inference cls score threshold, only anchors with score > INFERENCE_TH are -# considered for inference (to improve speed) -_C.MODEL.RETINANET.SCORE_THRESH_TEST = 0.05 -# Select topk candidates before NMS -_C.MODEL.RETINANET.TOPK_CANDIDATES_TEST = 1000 -_C.MODEL.RETINANET.NMS_THRESH_TEST = 0.5 - -# Weights on (dx, dy, dw, dh) for normalizing Retinanet anchor regression targets -_C.MODEL.RETINANET.BBOX_REG_WEIGHTS = (1.0, 1.0, 1.0, 1.0) - -# Loss parameters -_C.MODEL.RETINANET.FOCAL_LOSS_GAMMA = 2.0 -_C.MODEL.RETINANET.FOCAL_LOSS_ALPHA = 0.25 -_C.MODEL.RETINANET.SMOOTH_L1_LOSS_BETA = 0.1 -# Options are: "smooth_l1", "giou", "diou", "ciou" -_C.MODEL.RETINANET.BBOX_REG_LOSS_TYPE = "smooth_l1" - -# One of BN, SyncBN, FrozenBN, GN -# Only supports GN until unshared norm is implemented -_C.MODEL.RETINANET.NORM = "" - - -# ---------------------------------------------------------------------------- # -# ResNe[X]t options (ResNets = {ResNet, ResNeXt} -# Note that parts of a resnet may be used for both the backbone and the head -# These options apply to both -# ---------------------------------------------------------------------------- # -_C.MODEL.RESNETS = CN() - -_C.MODEL.RESNETS.DEPTH = 50 -_C.MODEL.RESNETS.OUT_FEATURES = ["res4"] # res4 for C4 backbone, res2..5 for FPN backbone - -# Number of groups to use; 1 ==> ResNet; > 1 ==> ResNeXt -_C.MODEL.RESNETS.NUM_GROUPS = 1 - -# Options: FrozenBN, GN, "SyncBN", "BN" -_C.MODEL.RESNETS.NORM = "FrozenBN" - -# Baseline width of each group. -# Scaling this parameters will scale the width of all bottleneck layers. -_C.MODEL.RESNETS.WIDTH_PER_GROUP = 64 - -# Place the stride 2 conv on the 1x1 filter -# Use True only for the original MSRA ResNet; use False for C2 and Torch models -_C.MODEL.RESNETS.STRIDE_IN_1X1 = True - -# Apply dilation in stage "res5" -_C.MODEL.RESNETS.RES5_DILATION = 1 - -# Output width of res2. Scaling this parameters will scale the width of all 1x1 convs in ResNet -# For R18 and R34, this needs to be set to 64 -_C.MODEL.RESNETS.RES2_OUT_CHANNELS = 256 -_C.MODEL.RESNETS.STEM_OUT_CHANNELS = 64 - -# Apply Deformable Convolution in stages -# Specify if apply deform_conv on Res2, Res3, Res4, Res5 -_C.MODEL.RESNETS.DEFORM_ON_PER_STAGE = [False, False, False, False] -# Use True to use modulated deform_conv (DeformableV2, https://arxiv.org/abs/1811.11168); -# Use False for DeformableV1. -_C.MODEL.RESNETS.DEFORM_MODULATED = False -# Number of groups in deformable conv. -_C.MODEL.RESNETS.DEFORM_NUM_GROUPS = 1 - - -# ---------------------------------------------------------------------------- # -# Solver -# ---------------------------------------------------------------------------- # -_C.SOLVER = CN() - -# Options: WarmupMultiStepLR, WarmupCosineLR. -# See detectron2/solver/build.py for definition. -_C.SOLVER.LR_SCHEDULER_NAME = "WarmupMultiStepLR" - -_C.SOLVER.MAX_ITER = 40000 - -_C.SOLVER.BASE_LR = 0.001 - -_C.SOLVER.MOMENTUM = 0.9 - -_C.SOLVER.NESTEROV = False - -_C.SOLVER.WEIGHT_DECAY = 0.0001 -# The weight decay that's applied to parameters of normalization layers -# (typically the affine transformation) -_C.SOLVER.WEIGHT_DECAY_NORM = 0.0 - -_C.SOLVER.GAMMA = 0.1 -# The iteration number to decrease learning rate by GAMMA. 
-_C.SOLVER.STEPS = (30000,) - -_C.SOLVER.WARMUP_FACTOR = 1.0 / 1000 -_C.SOLVER.WARMUP_ITERS = 1000 -_C.SOLVER.WARMUP_METHOD = "linear" - -# Save a checkpoint after every this number of iterations -_C.SOLVER.CHECKPOINT_PERIOD = 5000 - -# Number of images per batch across all machines. This is also the number -# of training images per step (i.e. per iteration). If we use 16 GPUs -# and IMS_PER_BATCH = 32, each GPU will see 2 images per batch. -# May be adjusted automatically if REFERENCE_WORLD_SIZE is set. -_C.SOLVER.IMS_PER_BATCH = 16 - -# The reference number of workers (GPUs) this config is meant to train with. -# It takes no effect when set to 0. -# With a non-zero value, it will be used by DefaultTrainer to compute a desired -# per-worker batch size, and then scale the other related configs (total batch size, -# learning rate, etc) to match the per-worker batch size. -# See documentation of `DefaultTrainer.auto_scale_workers` for details: -_C.SOLVER.REFERENCE_WORLD_SIZE = 0 - -# Detectron v1 (and previous detection code) used a 2x higher LR and 0 WD for -# biases. This is not useful (at least for recent models). You should avoid -# changing these and they exist only to reproduce Detectron v1 training if -# desired. -_C.SOLVER.BIAS_LR_FACTOR = 1.0 -_C.SOLVER.WEIGHT_DECAY_BIAS = None # None means following WEIGHT_DECAY - -# Gradient clipping -_C.SOLVER.CLIP_GRADIENTS = CN({"ENABLED": False}) -# Type of gradient clipping, currently 2 values are supported: -# - "value": the absolute values of elements of each gradients are clipped -# - "norm": the norm of the gradient for each parameter is clipped thus -# affecting all elements in the parameter -_C.SOLVER.CLIP_GRADIENTS.CLIP_TYPE = "value" -# Maximum absolute value used for clipping gradients -_C.SOLVER.CLIP_GRADIENTS.CLIP_VALUE = 1.0 -# Floating point number p for L-p norm to be used with the "norm" -# gradient clipping type; for L-inf, please specify .inf -_C.SOLVER.CLIP_GRADIENTS.NORM_TYPE = 2.0 - -# Enable automatic mixed precision for training -# Note that this does not change model's inference behavior. -# To use AMP in inference, run inference under autocast() -_C.SOLVER.AMP = CN({"ENABLED": False}) - -# ---------------------------------------------------------------------------- # -# Specific test options -# ---------------------------------------------------------------------------- # -_C.TEST = CN() -# For end-to-end tests to verify the expected accuracy. -# Each item is [task, metric, value, tolerance] -# e.g.: [['bbox', 'AP', 38.5, 0.2]] -_C.TEST.EXPECTED_RESULTS = [] -# The period (in terms of steps) to evaluate the model during training. -# Set to 0 to disable. -_C.TEST.EVAL_PERIOD = 0 -# The sigmas used to calculate keypoint OKS. See http://cocodataset.org/#keypoints-eval -# When empty, it will use the defaults in COCO. -# Otherwise it should be a list[float] with the same length as ROI_KEYPOINT_HEAD.NUM_KEYPOINTS. -_C.TEST.KEYPOINT_OKS_SIGMAS = [] -# Maximum number of detections to return per image during inference (100 is -# based on the limit established for the COCO dataset). 
-_C.TEST.DETECTIONS_PER_IMAGE = 100 - -_C.TEST.AUG = CN({"ENABLED": False}) -_C.TEST.AUG.MIN_SIZES = (400, 500, 600, 700, 800, 900, 1000, 1100, 1200) -_C.TEST.AUG.MAX_SIZE = 4000 -_C.TEST.AUG.FLIP = True - -_C.TEST.PRECISE_BN = CN({"ENABLED": False}) -_C.TEST.PRECISE_BN.NUM_ITER = 200 - -# ---------------------------------------------------------------------------- # -# Misc options -# ---------------------------------------------------------------------------- # -# Directory where output files are written -_C.OUTPUT_DIR = "./output" -# Set seed to negative to fully randomize everything. -# Set seed to positive to use a fixed seed. Note that a fixed seed increases -# reproducibility but does not guarantee fully deterministic behavior. -# Disabling all parallelism further increases reproducibility. -_C.SEED = -1 -# Benchmark different cudnn algorithms. -# If input images have very different sizes, this option will have large overhead -# for about 10k iterations. It usually hurts total time, but can benefit for certain models. -# If input images have the same or similar sizes, benchmark is often helpful. -_C.CUDNN_BENCHMARK = False -# The period (in terms of steps) for minibatch visualization at train time. -# Set to 0 to disable. -_C.VIS_PERIOD = 0 - -# global config is for quick hack purposes. -# You can set them in command line or config files, -# and access it with: -# -# from detectron2.config import global_cfg -# print(global_cfg.HACK) -# -# Do not commit any configs into it. -_C.GLOBAL = CN() -_C.GLOBAL.HACK = 1.0 diff --git a/spaces/YuxinJ/Scenimefy/Scenimefy/data/unaligned_dataset.py b/spaces/YuxinJ/Scenimefy/Scenimefy/data/unaligned_dataset.py deleted file mode 100644 index d51057f8db34da41e6d7210afeaf357362bdfe26..0000000000000000000000000000000000000000 --- a/spaces/YuxinJ/Scenimefy/Scenimefy/data/unaligned_dataset.py +++ /dev/null @@ -1,79 +0,0 @@ -import os.path -from Scenimefy.data.base_dataset import BaseDataset, get_transform -from Scenimefy.data.image_folder import make_dataset -from PIL import Image -import random -import Scenimefy.utils.util as util - - -class UnalignedDataset(BaseDataset): - """ - This dataset class can load unaligned/unpaired datasets. - - It requires two directories to host training images from domain A '/path/to/data/trainA' - and from domain B '/path/to/data/trainB' respectively. - You can train the model with the dataset flag '--dataroot /path/to/data'. - Similarly, you need to prepare two directories: - '/path/to/data/testA' and '/path/to/data/testB' during test time. - """ - - def __init__(self, opt): - """Initialize this dataset class. 
- - Parameters: - opt (Option class) -- stores all the experiment flags; needs to be a subclass of BaseOptions - """ - BaseDataset.__init__(self, opt) - self.dir_A = os.path.join(opt.dataroot, opt.phase + 'A') # create a path '/path/to/data/trainA' - self.dir_B = os.path.join(opt.dataroot, opt.phase + 'B') # create a path '/path/to/data/trainB' - - if opt.phase == "test" and not os.path.exists(self.dir_A) \ - and os.path.exists(os.path.join(opt.dataroot, "valA")): - self.dir_A = os.path.join(opt.dataroot, "valA") - self.dir_B = os.path.join(opt.dataroot, "valB") - - self.A_paths = sorted(make_dataset(self.dir_A, opt.max_dataset_size)) # load images from '/path/to/data/trainA' - self.B_paths = sorted(make_dataset(self.dir_B, opt.max_dataset_size)) # load images from '/path/to/data/trainB' - self.A_size = len(self.A_paths) # get the size of dataset A - self.B_size = len(self.B_paths) # get the size of dataset B - - def __getitem__(self, index): - """Return a data point and its metadata information. - - Parameters: - index (int) -- a random integer for data indexing - - Returns a dictionary that contains A, B, A_paths and B_paths - A (tensor) -- an image in the input domain - B (tensor) -- its corresponding image in the target domain - A_paths (str) -- image paths - B_paths (str) -- image paths - """ - A_path = self.A_paths[index % self.A_size] # make sure index is within then range - if self.opt.serial_batches: # make sure index is within then range - index_B = index % self.B_size - else: # randomize the index for domain B to avoid fixed pairs. - index_B = random.randint(0, self.B_size - 1) - B_path = self.B_paths[index_B] - A_img = Image.open(A_path).convert('RGB') - B_img = Image.open(B_path).convert('RGB') - - # Apply image transformation - # For FastCUT mode, if in finetuning phase (learning rate is decaying), - # do not perform resize-crop data augmentation of CycleGAN. -# print('current_epoch', self.current_epoch) - is_finetuning = self.opt.isTrain and self.current_epoch > self.opt.n_epochs - modified_opt = util.copyconf(self.opt, load_size=self.opt.crop_size if is_finetuning else self.opt.load_size) - transform = get_transform(modified_opt) - A = transform(A_img) - B = transform(B_img) - - return {'A': A, 'B': B, 'A_paths': A_path, 'B_paths': B_path} - - def __len__(self): - """Return the total number of images in the dataset. 
- - As we have two datasets with potentially different number of images, - we take a maximum of - """ - return max(self.A_size, self.B_size) diff --git a/spaces/YuxinJ/Scenimefy/Scenimefy/models/hDCE.py b/spaces/YuxinJ/Scenimefy/Scenimefy/models/hDCE.py deleted file mode 100644 index 270a6478dda91ea293ad49bdaff4a81ee657486f..0000000000000000000000000000000000000000 --- a/spaces/YuxinJ/Scenimefy/Scenimefy/models/hDCE.py +++ /dev/null @@ -1,53 +0,0 @@ -from packaging import version -import torch -from torch import nn - - - -class PatchHDCELoss(nn.Module): - def __init__(self, opt): - super().__init__() - self.opt = opt - self.cross_entropy_loss = torch.nn.CrossEntropyLoss(reduction='none') - self.mask_dtype = torch.uint8 if version.parse(torch.__version__) < version.parse('1.2.0') else torch.bool - - def forward(self, feat_q, feat_k, weight=None): - batchSize = feat_q.shape[0] - dim = feat_q.shape[1] - feat_k = feat_k.detach() - - # positive logit - l_pos = torch.bmm(feat_q.view(batchSize, 1, -1), feat_k.view(batchSize, -1, 1)) - l_pos = l_pos.view(batchSize, 1) - - if self.opt.nce_includes_all_negatives_from_minibatch: - # reshape features as if they are all negatives of minibatch of size 1. - batch_dim_for_bmm = 1 - else: - batch_dim_for_bmm = self.opt.batch_size - - # reshape features to batch size - feat_q = feat_q.view(batch_dim_for_bmm, -1, dim) - feat_k = feat_k.view(batch_dim_for_bmm, -1, dim) - npatches = feat_q.size(1) - l_neg_curbatch = torch.bmm(feat_q, feat_k.transpose(2, 1)) - - # weighted by semantic relation - if weight is not None: - l_neg_curbatch *= weight - - diagonal = torch.eye(npatches, device=feat_q.device, dtype=self.mask_dtype)[None, :, :] - l_neg_curbatch.masked_fill_(diagonal, -10.0) - l_neg = l_neg_curbatch.view(-1, npatches) - - logits = (l_neg-l_pos)/self.opt.nce_T - v = torch.logsumexp(logits, dim=1) - loss_vec = torch.exp(v-v.detach()) - - # for monitoring - out_dummy = torch.cat((l_pos, l_neg), dim=1) / self.opt.nce_T - CELoss_dummy = self.cross_entropy_loss(out_dummy, torch.zeros(out_dummy.size(0), dtype=torch.long, device=feat_q.device)) - - loss = loss_vec.mean()-1+CELoss_dummy.detach() - - return loss diff --git a/spaces/YuxinJ/Scenimefy/Scenimefy/utils/util.py b/spaces/YuxinJ/Scenimefy/Scenimefy/utils/util.py deleted file mode 100644 index f06f0773436491cf633769df0698d453a10c8a7f..0000000000000000000000000000000000000000 --- a/spaces/YuxinJ/Scenimefy/Scenimefy/utils/util.py +++ /dev/null @@ -1,168 +0,0 @@ -"""This module contains simple helper functions """ -from __future__ import print_function -import torch -import numpy as np -from PIL import Image -import os -import importlib -import argparse -from argparse import Namespace -import torchvision - - -def str2bool(v): - if isinstance(v, bool): - return v - if v.lower() in ('yes', 'true', 't', 'y', '1'): - return True - elif v.lower() in ('no', 'false', 'f', 'n', '0'): - return False - else: - raise argparse.ArgumentTypeError('Boolean value expected.') - - -def copyconf(default_opt, **kwargs): - conf = Namespace(**vars(default_opt)) - for key in kwargs: - setattr(conf, key, kwargs[key]) - return conf - - -def find_class_in_module(target_cls_name, module): - target_cls_name = target_cls_name.replace('_', '').lower() - clslib = importlib.import_module(module) - cls = None - for name, clsobj in clslib.__dict__.items(): - if name.lower() == target_cls_name: - cls = clsobj - - assert cls is not None, "In %s, there should be a class whose name matches %s in lowercase without underscore(_)" % (module, 
target_cls_name) - - return cls - - -def tensor2im(input_image, imtype=np.uint8): - """"Converts a Tensor array into a numpy image array. - - Parameters: - input_image (tensor) -- the input image tensor array - imtype (type) -- the desired type of the converted numpy array - """ - if not isinstance(input_image, np.ndarray): - if isinstance(input_image, torch.Tensor): # get the data from a variable - image_tensor = input_image.data - else: - return input_image - image_numpy = image_tensor[0].clamp(-1.0, 1.0).cpu().float().numpy() # convert it into a numpy array - if image_numpy.shape[0] == 1: # grayscale to RGB - image_numpy = np.tile(image_numpy, (3, 1, 1)) - image_numpy = (np.transpose(image_numpy, (1, 2, 0)) + 1) / 2.0 * 255.0 # post-processing: tranpose and scaling - else: # if it is a numpy array, do nothing - image_numpy = input_image - return image_numpy.astype(imtype) - - -def diagnose_network(net, name='network'): - """Calculate and print the mean of average absolute(gradients) - - Parameters: - net (torch network) -- Torch network - name (str) -- the name of the network - """ - mean = 0.0 - count = 0 - for param in net.parameters(): - if param.grad is not None: - mean += torch.mean(torch.abs(param.grad.data)) - count += 1 - if count > 0: - mean = mean / count - print(name) - print(mean) - - -def save_image(image_numpy, image_path, aspect_ratio=1.0): - """Save a numpy image to the disk - - Parameters: - image_numpy (numpy array) -- input numpy array - image_path (str) -- the path of the image - """ - - image_pil = Image.fromarray(image_numpy) - h, w, _ = image_numpy.shape - - if aspect_ratio is None: - pass - elif aspect_ratio > 1.0: - image_pil = image_pil.resize((h, int(w * aspect_ratio)), Image.BICUBIC) - elif aspect_ratio < 1.0: - image_pil = image_pil.resize((int(h / aspect_ratio), w), Image.BICUBIC) - # TODO: TEST - # print(image_path) - image_pil.save(image_path) - - -def print_numpy(x, val=True, shp=False): - """Print the mean, min, max, median, std, and size of a numpy array - - Parameters: - val (bool) -- if print the values of the numpy array - shp (bool) -- if print the shape of the numpy array - """ - x = x.astype(np.float64) - if shp: - print('shape,', x.shape) - if val: - x = x.flatten() - print('mean = %3.3f, min = %3.3f, max = %3.3f, median = %3.3f, std=%3.3f' % ( - np.mean(x), np.min(x), np.max(x), np.median(x), np.std(x))) - - -def mkdirs(paths): - """create empty directories if they don't exist - - Parameters: - paths (str list) -- a list of directory paths - """ - if isinstance(paths, list) and not isinstance(paths, str): - for path in paths: - mkdir(path) - else: - mkdir(paths) - - -def mkdir(path): - """create a single empty directory if it didn't exist - - Parameters: - path (str) -- a single directory path - """ - if not os.path.exists(path): - os.makedirs(path) - - -def correct_resize_label(t, size): - device = t.device - t = t.detach().cpu() - resized = [] - for i in range(t.size(0)): - one_t = t[i, :1] - one_np = np.transpose(one_t.numpy().astype(np.uint8), (1, 2, 0)) - one_np = one_np[:, :, 0] - one_image = Image.fromarray(one_np).resize(size, Image.NEAREST) - resized_t = torch.from_numpy(np.array(one_image)).long() - resized.append(resized_t) - return torch.stack(resized, dim=0).to(device) - - -def correct_resize(t, size, mode=Image.BICUBIC): - device = t.device - t = t.detach().cpu() - resized = [] - for i in range(t.size(0)): - one_t = t[i:i + 1] - one_image = Image.fromarray(tensor2im(one_t)).resize(size, Image.BICUBIC) - resized_t = 
torchvision.transforms.functional.to_tensor(one_image) * 2 - 1.0 - resized.append(resized_t) - return torch.stack(resized, dim=0).to(device) diff --git a/spaces/ZX9966/Fintech/style.css b/spaces/ZX9966/Fintech/style.css deleted file mode 100644 index 114adf441e9032febb46bc056b2a8bb651075f0d..0000000000000000000000000000000000000000 --- a/spaces/ZX9966/Fintech/style.css +++ /dev/null @@ -1,28 +0,0 @@ -body { - padding: 2rem; - font-family: -apple-system, BlinkMacSystemFont, "Arial", sans-serif; -} - -h1 { - font-size: 16px; - margin-top: 0; -} - -p { - color: rgb(107, 114, 128); - font-size: 15px; - margin-bottom: 10px; - margin-top: 5px; -} - -.card { - max-width: 620px; - margin: 0 auto; - padding: 16px; - border: 1px solid lightgray; - border-radius: 16px; -} - -.card p:last-child { - margin-bottom: 0; -} diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/models/detectors/fast_rcnn.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/models/detectors/fast_rcnn.py deleted file mode 100644 index 3d6e242767b927ed37198b6bc7862abecef99a33..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/models/detectors/fast_rcnn.py +++ /dev/null @@ -1,52 +0,0 @@ -from ..builder import DETECTORS -from .two_stage import TwoStageDetector - - -@DETECTORS.register_module() -class FastRCNN(TwoStageDetector): - """Implementation of `Fast R-CNN `_""" - - def __init__(self, - backbone, - roi_head, - train_cfg, - test_cfg, - neck=None, - pretrained=None): - super(FastRCNN, self).__init__( - backbone=backbone, - neck=neck, - roi_head=roi_head, - train_cfg=train_cfg, - test_cfg=test_cfg, - pretrained=pretrained) - - def forward_test(self, imgs, img_metas, proposals, **kwargs): - """ - Args: - imgs (List[Tensor]): the outer list indicates test-time - augmentations and inner Tensor should have a shape NxCxHxW, - which contains all images in the batch. - img_metas (List[List[dict]]): the outer list indicates test-time - augs (multiscale, flip, etc.) and the inner list indicates - images in a batch. - proposals (List[List[Tensor]]): the outer list indicates test-time - augs (multiscale, flip, etc.) and the inner list indicates - images in a batch. The Tensor should have a shape Px4, where - P is the number of proposals. 
- """ - for var, name in [(imgs, 'imgs'), (img_metas, 'img_metas')]: - if not isinstance(var, list): - raise TypeError(f'{name} must be a list, but got {type(var)}') - - num_augs = len(imgs) - if num_augs != len(img_metas): - raise ValueError(f'num of augmentations ({len(imgs)}) ' - f'!= num of image meta ({len(img_metas)})') - - if num_augs == 1: - return self.simple_test(imgs[0], img_metas[0], proposals[0], - **kwargs) - else: - # TODO: support test-time augmentation - assert NotImplementedError diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet_null/models/roi_heads/shared_heads/res_layer.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet_null/models/roi_heads/shared_heads/res_layer.py deleted file mode 100644 index b5c343258b079a0dd832d4f999c18d002b06efac..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet_null/models/roi_heads/shared_heads/res_layer.py +++ /dev/null @@ -1,77 +0,0 @@ -import torch.nn as nn -from mmcv.cnn import constant_init, kaiming_init -from mmcv.runner import auto_fp16, load_checkpoint - -from mmdet.models.backbones import ResNet -from mmdet.models.builder import SHARED_HEADS -from mmdet.models.utils import ResLayer as _ResLayer -from mmdet.utils import get_root_logger - - -@SHARED_HEADS.register_module() -class ResLayer(nn.Module): - - def __init__(self, - depth, - stage=3, - stride=2, - dilation=1, - style='pytorch', - norm_cfg=dict(type='BN', requires_grad=True), - norm_eval=True, - with_cp=False, - dcn=None): - super(ResLayer, self).__init__() - self.norm_eval = norm_eval - self.norm_cfg = norm_cfg - self.stage = stage - self.fp16_enabled = False - block, stage_blocks = ResNet.arch_settings[depth] - stage_block = stage_blocks[stage] - planes = 64 * 2**stage - inplanes = 64 * 2**(stage - 1) * block.expansion - - res_layer = _ResLayer( - block, - inplanes, - planes, - stage_block, - stride=stride, - dilation=dilation, - style=style, - with_cp=with_cp, - norm_cfg=self.norm_cfg, - dcn=dcn) - self.add_module(f'layer{stage + 1}', res_layer) - - def init_weights(self, pretrained=None): - """Initialize the weights in the module. - - Args: - pretrained (str, optional): Path to pre-trained weights. - Defaults to None. 
- """ - if isinstance(pretrained, str): - logger = get_root_logger() - load_checkpoint(self, pretrained, strict=False, logger=logger) - elif pretrained is None: - for m in self.modules(): - if isinstance(m, nn.Conv2d): - kaiming_init(m) - elif isinstance(m, nn.BatchNorm2d): - constant_init(m, 1) - else: - raise TypeError('pretrained must be a str or None') - - @auto_fp16() - def forward(self, x): - res_layer = getattr(self, f'layer{self.stage + 1}') - out = res_layer(x) - return out - - def train(self, mode=True): - super(ResLayer, self).train(mode) - if self.norm_eval: - for m in self.modules(): - if isinstance(m, nn.BatchNorm2d): - m.eval() diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmseg/models/segmentors/__init__.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmseg/models/segmentors/__init__.py deleted file mode 100644 index dca2f09405330743c476e190896bee39c45498ea..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmseg/models/segmentors/__init__.py +++ /dev/null @@ -1,5 +0,0 @@ -from .base import BaseSegmentor -from .cascade_encoder_decoder import CascadeEncoderDecoder -from .encoder_decoder import EncoderDecoder - -__all__ = ['BaseSegmentor', 'EncoderDecoder', 'CascadeEncoderDecoder'] diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmseg/datasets/voc.py b/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmseg/datasets/voc.py deleted file mode 100644 index 699cbf2b6c2c6048817a7272f6b365539a33fbba..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmseg/datasets/voc.py +++ /dev/null @@ -1,41 +0,0 @@ -''' - * Copyright (c) 2023 Salesforce, Inc. - * All rights reserved. - * SPDX-License-Identifier: Apache License 2.0 - * For full license text, see LICENSE.txt file in the repo root or http://www.apache.org/licenses/ - * By Can Qin - * Modified from ControlNet repo: https://github.com/lllyasviel/ControlNet - * Copyright (c) 2023 Lvmin Zhang and Maneesh Agrawala - * Modified from MMCV repo: From https://github.com/open-mmlab/mmcv - * Copyright (c) OpenMMLab. All rights reserved. -''' - -import os.path as osp - -from .builder import DATASETS -from .custom import CustomDataset - - -@DATASETS.register_module() -class PascalVOCDataset(CustomDataset): - """Pascal VOC dataset. - - Args: - split (str): Split txt file for Pascal VOC. 
- """ - - CLASSES = ('background', 'aeroplane', 'bicycle', 'bird', 'boat', 'bottle', - 'bus', 'car', 'cat', 'chair', 'cow', 'diningtable', 'dog', - 'horse', 'motorbike', 'person', 'pottedplant', 'sheep', 'sofa', - 'train', 'tvmonitor') - - PALETTE = [[0, 0, 0], [128, 0, 0], [0, 128, 0], [128, 128, 0], [0, 0, 128], - [128, 0, 128], [0, 128, 128], [128, 128, 128], [64, 0, 0], - [192, 0, 0], [64, 128, 0], [192, 128, 0], [64, 0, 128], - [192, 0, 128], [64, 128, 128], [192, 128, 128], [0, 64, 0], - [128, 64, 0], [0, 192, 0], [128, 192, 0], [0, 64, 128]] - - def __init__(self, split, **kwargs): - super(PascalVOCDataset, self).__init__( - img_suffix='.jpg', seg_map_suffix='.png', split=split, **kwargs) - assert osp.exists(self.img_dir) and self.split is not None diff --git a/spaces/abrar-lohia/text-2-character-anim/pyrender/.eggs/pyglet-2.0.5-py3.10.egg/pyglet/window/cocoa/pyglet_textview.py b/spaces/abrar-lohia/text-2-character-anim/pyrender/.eggs/pyglet-2.0.5-py3.10.egg/pyglet/window/cocoa/pyglet_textview.py deleted file mode 100644 index 58644bf0f42c717d78e11557357df23042cfcea5..0000000000000000000000000000000000000000 --- a/spaces/abrar-lohia/text-2-character-anim/pyrender/.eggs/pyglet-2.0.5-py3.10.egg/pyglet/window/cocoa/pyglet_textview.py +++ /dev/null @@ -1,160 +0,0 @@ -import unicodedata - -from pyglet.window import key - -from pyglet.libs.darwin.cocoapy import ObjCClass, ObjCSubclass, ObjCInstance -from pyglet.libs.darwin.cocoapy import PyObjectEncoding, send_super -from pyglet.libs.darwin.cocoapy import CFSTR, cfstring_to_string - -NSArray = ObjCClass('NSArray') -NSApplication = ObjCClass('NSApplication') - -# This custom NSTextView subclass is used for capturing all of the -# on_text, on_text_motion, and on_text_motion_select events. -class PygletTextView_Implementation: - PygletTextView = ObjCSubclass('NSTextView', 'PygletTextView') - - @PygletTextView.method(b'@'+PyObjectEncoding) - def initWithCocoaWindow_(self, window): - self = ObjCInstance(send_super(self, 'init')) - if not self: - return None - self._window = window - # Interpret tab and return as raw characters - self.setFieldEditor_(False) - self.empty_string = CFSTR("") - return self - - @PygletTextView.method('v') - def dealloc(self): - self.empty_string.release() - - @PygletTextView.method('v@') - def keyDown_(self, nsevent): - array = NSArray.arrayWithObject_(nsevent) - self.interpretKeyEvents_(array) - - @PygletTextView.method('v@') - def insertText_(self, text): - text = cfstring_to_string(text) - self.setString_(self.empty_string) - # Don't send control characters (tab, newline) as on_text events. - if unicodedata.category(text[0]) != 'Cc': - self._window.dispatch_event("on_text", text) - - @PygletTextView.method('v@') - def insertNewline_(self, sender): - # Distinguish between carriage return (u'\r') and enter (u'\x03'). - # Only the return key press gets sent as an on_text event. 
- event = NSApplication.sharedApplication().currentEvent() - chars = event.charactersIgnoringModifiers() - ch = chr(chars.characterAtIndex_(0)) - if ch == u'\r': - self._window.dispatch_event("on_text", u'\r') - - @PygletTextView.method('v@') - def moveUp_(self, sender): - self._window.dispatch_event("on_text_motion", key.MOTION_UP) - - @PygletTextView.method('v@') - def moveDown_(self, sender): - self._window.dispatch_event("on_text_motion", key.MOTION_DOWN) - - @PygletTextView.method('v@') - def moveLeft_(self, sender): - self._window.dispatch_event("on_text_motion", key.MOTION_LEFT) - - @PygletTextView.method('v@') - def moveRight_(self, sender): - self._window.dispatch_event("on_text_motion", key.MOTION_RIGHT) - - @PygletTextView.method('v@') - def moveWordLeft_(self, sender): - self._window.dispatch_event("on_text_motion", key.MOTION_PREVIOUS_WORD) - - @PygletTextView.method('v@') - def moveWordRight_(self, sender): - self._window.dispatch_event("on_text_motion", key.MOTION_NEXT_WORD) - - @PygletTextView.method('v@') - def moveToBeginningOfLine_(self, sender): - self._window.dispatch_event("on_text_motion", key.MOTION_BEGINNING_OF_LINE) - - @PygletTextView.method('v@') - def moveToEndOfLine_(self, sender): - self._window.dispatch_event("on_text_motion", key.MOTION_END_OF_LINE) - - @PygletTextView.method('v@') - def scrollPageUp_(self, sender): - self._window.dispatch_event("on_text_motion", key.MOTION_PREVIOUS_PAGE) - - @PygletTextView.method('v@') - def scrollPageDown_(self, sender): - self._window.dispatch_event("on_text_motion", key.MOTION_NEXT_PAGE) - - @PygletTextView.method('v@') - def scrollToBeginningOfDocument_(self, sender): # Mac OS X 10.6 - self._window.dispatch_event("on_text_motion", key.MOTION_BEGINNING_OF_FILE) - - @PygletTextView.method('v@') - def scrollToEndOfDocument_(self, sender): # Mac OS X 10.6 - self._window.dispatch_event("on_text_motion", key.MOTION_END_OF_FILE) - - @PygletTextView.method('v@') - def deleteBackward_(self, sender): - self._window.dispatch_event("on_text_motion", key.MOTION_BACKSPACE) - - @PygletTextView.method('v@') - def deleteForward_(self, sender): - self._window.dispatch_event("on_text_motion", key.MOTION_DELETE) - - @PygletTextView.method('v@') - def moveUpAndModifySelection_(self, sender): - self._window.dispatch_event("on_text_motion_select", key.MOTION_UP) - - @PygletTextView.method('v@') - def moveDownAndModifySelection_(self, sender): - self._window.dispatch_event("on_text_motion_select", key.MOTION_DOWN) - - @PygletTextView.method('v@') - def moveLeftAndModifySelection_(self, sender): - self._window.dispatch_event("on_text_motion_select", key.MOTION_LEFT) - - @PygletTextView.method('v@') - def moveRightAndModifySelection_(self, sender): - self._window.dispatch_event("on_text_motion_select", key.MOTION_RIGHT) - - @PygletTextView.method('v@') - def moveWordLeftAndModifySelection_(self, sender): - self._window.dispatch_event("on_text_motion_select", key.MOTION_PREVIOUS_WORD) - - @PygletTextView.method('v@') - def moveWordRightAndModifySelection_(self, sender): - self._window.dispatch_event("on_text_motion_select", key.MOTION_NEXT_WORD) - - @PygletTextView.method('v@') - def moveToBeginningOfLineAndModifySelection_(self, sender): # Mac OS X 10.6 - self._window.dispatch_event("on_text_motion_select", key.MOTION_BEGINNING_OF_LINE) - - @PygletTextView.method('v@') - def moveToEndOfLineAndModifySelection_(self, sender): # Mac OS X 10.6 - self._window.dispatch_event("on_text_motion_select", key.MOTION_END_OF_LINE) - - 
@PygletTextView.method('v@') - def pageUpAndModifySelection_(self, sender): # Mac OS X 10.6 - self._window.dispatch_event("on_text_motion_select", key.MOTION_PREVIOUS_PAGE) - - @PygletTextView.method('v@') - def pageDownAndModifySelection_(self, sender): # Mac OS X 10.6 - self._window.dispatch_event("on_text_motion_select", key.MOTION_NEXT_PAGE) - - @PygletTextView.method('v@') - def moveToBeginningOfDocumentAndModifySelection_(self, sender): # Mac OS X 10.6 - self._window.dispatch_event("on_text_motion_select", key.MOTION_BEGINNING_OF_FILE) - - @PygletTextView.method('v@') - def moveToEndOfDocumentAndModifySelection_(self, sender): # Mac OS X 10.6 - self._window.dispatch_event("on_text_motion_select", key.MOTION_END_OF_FILE) - - -PygletTextView = ObjCClass('PygletTextView') diff --git a/spaces/abrar-lohia/text-2-character-anim/pyrender/pyrender/primitive.py b/spaces/abrar-lohia/text-2-character-anim/pyrender/pyrender/primitive.py deleted file mode 100644 index 7f83f46f532b126a4573e715dd03d079fef755ca..0000000000000000000000000000000000000000 --- a/spaces/abrar-lohia/text-2-character-anim/pyrender/pyrender/primitive.py +++ /dev/null @@ -1,489 +0,0 @@ -"""Primitives, conforming to the glTF 2.0 standards as specified in -https://github.com/KhronosGroup/glTF/tree/master/specification/2.0#reference-primitive - -Author: Matthew Matl -""" -import numpy as np - -from OpenGL.GL import * - -from .material import Material, MetallicRoughnessMaterial -from .constants import FLOAT_SZ, UINT_SZ, BufFlags, GLTF -from .utils import format_color_array - - -class Primitive(object): - """A primitive object which can be rendered. - - Parameters - ---------- - positions : (n, 3) float - XYZ vertex positions. - normals : (n, 3) float - Normalized XYZ vertex normals. - tangents : (n, 4) float - XYZW vertex tangents where the w component is a sign value - (either +1 or -1) indicating the handedness of the tangent basis. - texcoord_0 : (n, 2) float - The first set of UV texture coordinates. - texcoord_1 : (n, 2) float - The second set of UV texture coordinates. - color_0 : (n, 4) float - RGBA vertex colors. - joints_0 : (n, 4) float - Joint information. - weights_0 : (n, 4) float - Weight information for morphing. - indices : (m, 3) int - Face indices for triangle meshes or fans. - material : :class:`Material` - The material to apply to this primitive when rendering. - mode : int - The type of primitives to render, one of the following: - - - ``0``: POINTS - - ``1``: LINES - - ``2``: LINE_LOOP - - ``3``: LINE_STRIP - - ``4``: TRIANGLES - - ``5``: TRIANGLES_STRIP - - ``6``: TRIANGLES_FAN - targets : (k,) int - Morph target indices. - poses : (x,4,4), float - Array of 4x4 transformation matrices for instancing this object. 
- """ - - def __init__(self, - positions, - normals=None, - tangents=None, - texcoord_0=None, - texcoord_1=None, - color_0=None, - joints_0=None, - weights_0=None, - indices=None, - material=None, - mode=None, - targets=None, - poses=None): - - if mode is None: - mode = GLTF.TRIANGLES - - self.positions = positions - self.normals = normals - self.tangents = tangents - self.texcoord_0 = texcoord_0 - self.texcoord_1 = texcoord_1 - self.color_0 = color_0 - self.joints_0 = joints_0 - self.weights_0 = weights_0 - self.indices = indices - self.material = material - self.mode = mode - self.targets = targets - self.poses = poses - - self._bounds = None - self._vaid = None - self._buffers = [] - self._is_transparent = None - self._buf_flags = None - - @property - def positions(self): - """(n,3) float : XYZ vertex positions. - """ - return self._positions - - @positions.setter - def positions(self, value): - value = np.asanyarray(value, dtype=np.float32) - self._positions = np.ascontiguousarray(value) - self._bounds = None - - @property - def normals(self): - """(n,3) float : Normalized XYZ vertex normals. - """ - return self._normals - - @normals.setter - def normals(self, value): - if value is not None: - value = np.asanyarray(value, dtype=np.float32) - value = np.ascontiguousarray(value) - if value.shape != self.positions.shape: - raise ValueError('Incorrect normals shape') - self._normals = value - - @property - def tangents(self): - """(n,4) float : XYZW vertex tangents. - """ - return self._tangents - - @tangents.setter - def tangents(self, value): - if value is not None: - value = np.asanyarray(value, dtype=np.float32) - value = np.ascontiguousarray(value) - if value.shape != (self.positions.shape[0], 4): - raise ValueError('Incorrect tangent shape') - self._tangents = value - - @property - def texcoord_0(self): - """(n,2) float : The first set of UV texture coordinates. - """ - return self._texcoord_0 - - @texcoord_0.setter - def texcoord_0(self, value): - if value is not None: - value = np.asanyarray(value, dtype=np.float32) - value = np.ascontiguousarray(value) - if (value.ndim != 2 or value.shape[0] != self.positions.shape[0] or - value.shape[1] < 2): - raise ValueError('Incorrect texture coordinate shape') - if value.shape[1] > 2: - value = value[:,:2] - self._texcoord_0 = value - - @property - def texcoord_1(self): - """(n,2) float : The second set of UV texture coordinates. - """ - return self._texcoord_1 - - @texcoord_1.setter - def texcoord_1(self, value): - if value is not None: - value = np.asanyarray(value, dtype=np.float32) - value = np.ascontiguousarray(value) - if (value.ndim != 2 or value.shape[0] != self.positions.shape[0] or - value.shape[1] != 2): - raise ValueError('Incorrect texture coordinate shape') - self._texcoord_1 = value - - @property - def color_0(self): - """(n,4) float : RGBA vertex colors. - """ - return self._color_0 - - @color_0.setter - def color_0(self, value): - if value is not None: - value = np.ascontiguousarray( - format_color_array(value, shape=(len(self.positions), 4)) - ) - self._is_transparent = None - self._color_0 = value - - @property - def joints_0(self): - """(n,4) float : Joint information. - """ - return self._joints_0 - - @joints_0.setter - def joints_0(self, value): - self._joints_0 = value - - @property - def weights_0(self): - """(n,4) float : Weight information for morphing. 
- """ - return self._weights_0 - - @weights_0.setter - def weights_0(self, value): - self._weights_0 = value - - @property - def indices(self): - """(m,3) int : Face indices for triangle meshes or fans. - """ - return self._indices - - @indices.setter - def indices(self, value): - if value is not None: - value = np.asanyarray(value, dtype=np.float32) - value = np.ascontiguousarray(value) - self._indices = value - - @property - def material(self): - """:class:`Material` : The material for this primitive. - """ - return self._material - - @material.setter - def material(self, value): - # Create default material - if value is None: - value = MetallicRoughnessMaterial() - else: - if not isinstance(value, Material): - raise TypeError('Object material must be of type Material') - self._material = value - - @property - def mode(self): - """int : The type of primitive to render. - """ - return self._mode - - @mode.setter - def mode(self, value): - value = int(value) - if value < GLTF.POINTS or value > GLTF.TRIANGLE_FAN: - raise ValueError('Invalid mode') - self._mode = value - - @property - def targets(self): - """(k,) int : Morph target indices. - """ - return self._targets - - @targets.setter - def targets(self, value): - self._targets = value - - @property - def poses(self): - """(x,4,4) float : Homogenous transforms for instancing this primitive. - """ - return self._poses - - @poses.setter - def poses(self, value): - if value is not None: - value = np.asanyarray(value, dtype=np.float32) - value = np.ascontiguousarray(value) - if value.ndim == 2: - value = value[np.newaxis,:,:] - if value.shape[1] != 4 or value.shape[2] != 4: - raise ValueError('Pose matrices must be of shape (n,4,4), ' - 'got {}'.format(value.shape)) - self._poses = value - self._bounds = None - - @property - def bounds(self): - if self._bounds is None: - self._bounds = self._compute_bounds() - return self._bounds - - @property - def centroid(self): - """(3,) float : The centroid of the primitive's AABB. - """ - return np.mean(self.bounds, axis=0) - - @property - def extents(self): - """(3,) float : The lengths of the axes of the primitive's AABB. - """ - return np.diff(self.bounds, axis=0).reshape(-1) - - @property - def scale(self): - """(3,) float : The length of the diagonal of the primitive's AABB. - """ - return np.linalg.norm(self.extents) - - @property - def buf_flags(self): - """int : The flags for the render buffer. - """ - if self._buf_flags is None: - self._buf_flags = self._compute_buf_flags() - return self._buf_flags - - def delete(self): - self._unbind() - self._remove_from_context() - - @property - def is_transparent(self): - """bool : If True, the mesh is partially-transparent. 
- """ - return self._compute_transparency() - - def _add_to_context(self): - if self._vaid is not None: - raise ValueError('Mesh is already bound to a context') - - # Generate and bind VAO - self._vaid = glGenVertexArrays(1) - glBindVertexArray(self._vaid) - - ####################################################################### - # Fill vertex buffer - ####################################################################### - - # Generate and bind vertex buffer - vertexbuffer = glGenBuffers(1) - self._buffers.append(vertexbuffer) - glBindBuffer(GL_ARRAY_BUFFER, vertexbuffer) - - # positions - vertex_data = self.positions - attr_sizes = [3] - - # Normals - if self.normals is not None: - vertex_data = np.hstack((vertex_data, self.normals)) - attr_sizes.append(3) - - # Tangents - if self.tangents is not None: - vertex_data = np.hstack((vertex_data, self.tangents)) - attr_sizes.append(4) - - # Texture Coordinates - if self.texcoord_0 is not None: - vertex_data = np.hstack((vertex_data, self.texcoord_0)) - attr_sizes.append(2) - if self.texcoord_1 is not None: - vertex_data = np.hstack((vertex_data, self.texcoord_1)) - attr_sizes.append(2) - - # Color - if self.color_0 is not None: - vertex_data = np.hstack((vertex_data, self.color_0)) - attr_sizes.append(4) - - # TODO JOINTS AND WEIGHTS - # PASS - - # Copy data to buffer - vertex_data = np.ascontiguousarray( - vertex_data.flatten().astype(np.float32) - ) - glBufferData( - GL_ARRAY_BUFFER, FLOAT_SZ * len(vertex_data), - vertex_data, GL_STATIC_DRAW - ) - total_sz = sum(attr_sizes) - offset = 0 - for i, sz in enumerate(attr_sizes): - glVertexAttribPointer( - i, sz, GL_FLOAT, GL_FALSE, FLOAT_SZ * total_sz, - ctypes.c_void_p(FLOAT_SZ * offset) - ) - glEnableVertexAttribArray(i) - offset += sz - - ####################################################################### - # Fill model matrix buffer - ####################################################################### - - if self.poses is not None: - pose_data = np.ascontiguousarray( - np.transpose(self.poses, [0,2,1]).flatten().astype(np.float32) - ) - else: - pose_data = np.ascontiguousarray( - np.eye(4).flatten().astype(np.float32) - ) - - modelbuffer = glGenBuffers(1) - self._buffers.append(modelbuffer) - glBindBuffer(GL_ARRAY_BUFFER, modelbuffer) - glBufferData( - GL_ARRAY_BUFFER, FLOAT_SZ * len(pose_data), - pose_data, GL_STATIC_DRAW - ) - - for i in range(0, 4): - idx = i + len(attr_sizes) - glEnableVertexAttribArray(idx) - glVertexAttribPointer( - idx, 4, GL_FLOAT, GL_FALSE, FLOAT_SZ * 4 * 4, - ctypes.c_void_p(4 * FLOAT_SZ * i) - ) - glVertexAttribDivisor(idx, 1) - - ####################################################################### - # Fill element buffer - ####################################################################### - if self.indices is not None: - elementbuffer = glGenBuffers(1) - self._buffers.append(elementbuffer) - glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, elementbuffer) - glBufferData(GL_ELEMENT_ARRAY_BUFFER, UINT_SZ * self.indices.size, - self.indices.flatten().astype(np.uint32), - GL_STATIC_DRAW) - - glBindVertexArray(0) - - def _remove_from_context(self): - if self._vaid is not None: - glDeleteVertexArrays(1, [self._vaid]) - glDeleteBuffers(len(self._buffers), self._buffers) - self._vaid = None - self._buffers = [] - - def _in_context(self): - return self._vaid is not None - - def _bind(self): - if self._vaid is None: - raise ValueError('Cannot bind a Mesh that has not been added ' - 'to a context') - glBindVertexArray(self._vaid) - - def _unbind(self): - 
glBindVertexArray(0) - - def _compute_bounds(self): - """Compute the bounds of this object. - """ - # Compute bounds of this object - bounds = np.array([np.min(self.positions, axis=0), - np.max(self.positions, axis=0)]) - - # If instanced, compute translations for approximate bounds - if self.poses is not None: - bounds += np.array([np.min(self.poses[:,:3,3], axis=0), - np.max(self.poses[:,:3,3], axis=0)]) - return bounds - - def _compute_transparency(self): - """Compute whether or not this object is transparent. - """ - if self.material.is_transparent: - return True - if self._is_transparent is None: - self._is_transparent = False - if self.color_0 is not None: - if np.any(self._color_0[:,3] != 1.0): - self._is_transparent = True - return self._is_transparent - - def _compute_buf_flags(self): - buf_flags = BufFlags.POSITION - - if self.normals is not None: - buf_flags |= BufFlags.NORMAL - if self.tangents is not None: - buf_flags |= BufFlags.TANGENT - if self.texcoord_0 is not None: - buf_flags |= BufFlags.TEXCOORD_0 - if self.texcoord_1 is not None: - buf_flags |= BufFlags.TEXCOORD_1 - if self.color_0 is not None: - buf_flags |= BufFlags.COLOR_0 - if self.joints_0 is not None: - buf_flags |= BufFlags.JOINTS_0 - if self.weights_0 is not None: - buf_flags |= BufFlags.WEIGHTS_0 - - return buf_flags diff --git a/spaces/adirik/stylemc-demo/encoder4editing/configs/paths_config.py b/spaces/adirik/stylemc-demo/encoder4editing/configs/paths_config.py deleted file mode 100644 index 4604f6063b8125364a52a492de52fcc54004f373..0000000000000000000000000000000000000000 --- a/spaces/adirik/stylemc-demo/encoder4editing/configs/paths_config.py +++ /dev/null @@ -1,28 +0,0 @@ -dataset_paths = { - # Face Datasets (In the paper: FFHQ - train, CelebAHQ - test) - 'ffhq': '', - 'celeba_test': '', - - # Cars Dataset (In the paper: Stanford cars) - 'cars_train': '', - 'cars_test': '', - - # Horse Dataset (In the paper: LSUN Horse) - 'horse_train': '', - 'horse_test': '', - - # Church Dataset (In the paper: LSUN Church) - 'church_train': '', - 'church_test': '', - - # Cats Dataset (In the paper: LSUN Cat) - 'cats_train': '', - 'cats_test': '' -} - -model_paths = { - 'stylegan_ffhq': 'pretrained_models/stylegan2-ffhq-config-f.pt', - 'ir_se50': 'pretrained_models/model_ir_se50.pth', - 'shape_predictor': 'pretrained_models/shape_predictor_68_face_landmarks.dat', - 'moco': 'pretrained_models/moco_v2_800ep_pretrain.pth' -} diff --git a/spaces/adrianpierce/recipes_app/pages/2_Saved_Recipes.py b/spaces/adrianpierce/recipes_app/pages/2_Saved_Recipes.py deleted file mode 100644 index c85de52accbe4beebd3f660faf2a7fa75ea34f48..0000000000000000000000000000000000000000 --- a/spaces/adrianpierce/recipes_app/pages/2_Saved_Recipes.py +++ /dev/null @@ -1,67 +0,0 @@ -import streamlit as st -import json -import os - -st.title("Saved Recipes") - -# get all saved files -directory_path = '/data/' -recipes = [] -for root, dirs, files in os.walk(directory_path): - for file in files: - if file.endswith('.json'): - full_path = os.path.join(root, file) - # os.remove(full_path) - f = open(full_path) - recipe_json = json.load(f) - recipe_json['file'] = file - recipes.append(recipe_json) - -#st.json(saved_files) - -cols = st.columns([4, 1]) -with cols[0]: - user_search = st.text_input("Search Recipes", value="") -with cols[1]: - user_sort = st.selectbox("Sort", ('Recent', 'Oldest', 'A-Z', 'Z-A', 'Random')) -st.write("") # just some space - -recipes_filtered = [x for x in recipes if user_search.lower() in x['name'].lower()] -if user_sort == 
'Recent': - recipes_filtered.sort(key=lambda x: x['timestamp'], reverse=True) -elif user_sort == 'Oldest': - recipes_filtered.sort(key=lambda x: x['timestamp']) -elif user_sort == 'A-Z': - recipes_filtered.sort(key=lambda x: x['name']) -elif user_sort == 'Z-A': - recipes_filtered.sort(key=lambda x: x['name'], reverse=True) -elif user_sort == 'Random': - recipes_filtered.sort(key=lambda x: x['file']) - -for recipe in recipes_filtered: - with st.expander(recipe['name']): - st.markdown(recipe['md']) - if st.session_state.admin == True: - st.write('') - st.write(recipe['file']) - if st.button("Delete", key=recipe['file']): - if os.path.exists(f"/data/{recipe['file']}"): - os.remove(f"/data/{recipe['file']}") - st.rerun() - - -# ignore - -# f = open('/data/test_output.json') -# json_test = json.load(f) - -# st.json(json_test) - -# file_path = '/data/test_output.json' - -# if os.path.exists(file_path): -# # Delete the file -# os.remove(file_path) -# st.write(f"The file {file_path} has been deleted.") -# else: -# st.write(f"The file {file_path} does not exist.") \ No newline at end of file diff --git a/spaces/akhaliq/Detic/detic/modeling/debug.py b/spaces/akhaliq/Detic/detic/modeling/debug.py deleted file mode 100644 index 9c7c442eb8aa9474c8874ac1dc75659371e8c894..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/Detic/detic/modeling/debug.py +++ /dev/null @@ -1,334 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -import cv2 -import numpy as np -import torch -import torch.nn.functional as F -import os - -COLORS = ((np.random.rand(1300, 3) * 0.4 + 0.6) * 255).astype( - np.uint8).reshape(1300, 1, 1, 3) - -def _get_color_image(heatmap): - heatmap = heatmap.reshape( - heatmap.shape[0], heatmap.shape[1], heatmap.shape[2], 1) - if heatmap.shape[0] == 1: - color_map = (heatmap * np.ones((1, 1, 1, 3), np.uint8) * 255).max( - axis=0).astype(np.uint8) # H, W, 3 - else: - color_map = (heatmap * COLORS[:heatmap.shape[0]]).max(axis=0).astype(np.uint8) # H, W, 3 - - return color_map - -def _blend_image(image, color_map, a=0.7): - color_map = cv2.resize(color_map, (image.shape[1], image.shape[0])) - ret = np.clip(image * (1 - a) + color_map * a, 0, 255).astype(np.uint8) - return ret - -def _blend_image_heatmaps(image, color_maps, a=0.7): - merges = np.zeros((image.shape[0], image.shape[1], 3), np.float32) - for color_map in color_maps: - color_map = cv2.resize(color_map, (image.shape[1], image.shape[0])) - merges = np.maximum(merges, color_map) - ret = np.clip(image * (1 - a) + merges * a, 0, 255).astype(np.uint8) - return ret - -def _decompose_level(x, shapes_per_level, N): - ''' - x: LNHiWi x C - ''' - x = x.view(x.shape[0], -1) - ret = [] - st = 0 - for l in range(len(shapes_per_level)): - ret.append([]) - h = shapes_per_level[l][0].int().item() - w = shapes_per_level[l][1].int().item() - for i in range(N): - ret[l].append(x[st + h * w * i:st + h * w * (i + 1)].view( - h, w, -1).permute(2, 0, 1)) - st += h * w * N - return ret - -def _imagelist_to_tensor(images): - images = [x for x in images] - image_sizes = [x.shape[-2:] for x in images] - h = max([size[0] for size in image_sizes]) - w = max([size[1] for size in image_sizes]) - S = 32 - h, w = ((h - 1) // S + 1) * S, ((w - 1) // S + 1) * S - images = [F.pad(x, (0, w - x.shape[2], 0, h - x.shape[1], 0, 0)) \ - for x in images] - images = torch.stack(images) - return images - - -def _ind2il(ind, shapes_per_level, N): - r = ind - l = 0 - S = 0 - while r - S >= N * shapes_per_level[l][0] * shapes_per_level[l][1]: - S += N * 
shapes_per_level[l][0] * shapes_per_level[l][1] - l += 1 - i = (r - S) // (shapes_per_level[l][0] * shapes_per_level[l][1]) - return i, l - -def debug_train( - images, gt_instances, flattened_hms, reg_targets, labels, pos_inds, - shapes_per_level, locations, strides): - ''' - images: N x 3 x H x W - flattened_hms: LNHiWi x C - shapes_per_level: L x 2 [(H_i, W_i)] - locations: LNHiWi x 2 - ''' - reg_inds = torch.nonzero( - reg_targets.max(dim=1)[0] > 0).squeeze(1) - N = len(images) - images = _imagelist_to_tensor(images) - repeated_locations = [torch.cat([loc] * N, dim=0) \ - for loc in locations] - locations = torch.cat(repeated_locations, dim=0) - gt_hms = _decompose_level(flattened_hms, shapes_per_level, N) - masks = flattened_hms.new_zeros((flattened_hms.shape[0], 1)) - masks[pos_inds] = 1 - masks = _decompose_level(masks, shapes_per_level, N) - for i in range(len(images)): - image = images[i].detach().cpu().numpy().transpose(1, 2, 0) - color_maps = [] - for l in range(len(gt_hms)): - color_map = _get_color_image( - gt_hms[l][i].detach().cpu().numpy()) - color_maps.append(color_map) - cv2.imshow('gthm_{}'.format(l), color_map) - blend = _blend_image_heatmaps(image.copy(), color_maps) - if gt_instances is not None: - bboxes = gt_instances[i].gt_boxes.tensor - for j in range(len(bboxes)): - bbox = bboxes[j] - cv2.rectangle( - blend, - (int(bbox[0]), int(bbox[1])), - (int(bbox[2]), int(bbox[3])), - (0, 0, 255), 3, cv2.LINE_AA) - - for j in range(len(pos_inds)): - image_id, l = _ind2il(pos_inds[j], shapes_per_level, N) - if image_id != i: - continue - loc = locations[pos_inds[j]] - cv2.drawMarker( - blend, (int(loc[0]), int(loc[1])), (0, 255, 255), - markerSize=(l + 1) * 16) - - for j in range(len(reg_inds)): - image_id, l = _ind2il(reg_inds[j], shapes_per_level, N) - if image_id != i: - continue - ltrb = reg_targets[reg_inds[j]] - ltrb *= strides[l] - loc = locations[reg_inds[j]] - bbox = [(loc[0] - ltrb[0]), (loc[1] - ltrb[1]), - (loc[0] + ltrb[2]), (loc[1] + ltrb[3])] - cv2.rectangle( - blend, - (int(bbox[0]), int(bbox[1])), - (int(bbox[2]), int(bbox[3])), - (255, 0, 0), 1, cv2.LINE_AA) - cv2.circle(blend, (int(loc[0]), int(loc[1])), 2, (255, 0, 0), -1) - - cv2.imshow('blend', blend) - cv2.waitKey() - - -def debug_test( - images, logits_pred, reg_pred, agn_hm_pred=[], preds=[], - vis_thresh=0.3, debug_show_name=False, mult_agn=False): - ''' - images: N x 3 x H x W - class_target: LNHiWi x C - cat_agn_heatmap: LNHiWi - shapes_per_level: L x 2 [(H_i, W_i)] - ''' - N = len(images) - for i in range(len(images)): - image = images[i].detach().cpu().numpy().transpose(1, 2, 0) - result = image.copy().astype(np.uint8) - pred_image = image.copy().astype(np.uint8) - color_maps = [] - L = len(logits_pred) - for l in range(L): - if logits_pred[0] is not None: - stride = min(image.shape[0], image.shape[1]) / min( - logits_pred[l][i].shape[1], logits_pred[l][i].shape[2]) - else: - stride = min(image.shape[0], image.shape[1]) / min( - agn_hm_pred[l][i].shape[1], agn_hm_pred[l][i].shape[2]) - stride = stride if stride < 60 else 64 if stride < 100 else 128 - if logits_pred[0] is not None: - if mult_agn: - logits_pred[l][i] = logits_pred[l][i] * agn_hm_pred[l][i] - color_map = _get_color_image( - logits_pred[l][i].detach().cpu().numpy()) - color_maps.append(color_map) - cv2.imshow('predhm_{}'.format(l), color_map) - - if debug_show_name: - from detectron2.data.datasets.lvis_v1_categories import LVIS_CATEGORIES - cat2name = [x['name'] for x in LVIS_CATEGORIES] - for j in range(len(preds[i].scores) if preds 
is not None else 0): - if preds[i].scores[j] > vis_thresh: - bbox = preds[i].proposal_boxes[j] \ - if preds[i].has('proposal_boxes') else \ - preds[i].pred_boxes[j] - bbox = bbox.tensor[0].detach().cpu().numpy().astype(np.int32) - cat = int(preds[i].pred_classes[j]) \ - if preds[i].has('pred_classes') else 0 - cl = COLORS[cat, 0, 0] - cv2.rectangle( - pred_image, (int(bbox[0]), int(bbox[1])), - (int(bbox[2]), int(bbox[3])), - (int(cl[0]), int(cl[1]), int(cl[2])), 2, cv2.LINE_AA) - if debug_show_name: - txt = '{}{:.1f}'.format( - cat2name[cat] if cat > 0 else '', - preds[i].scores[j]) - font = cv2.FONT_HERSHEY_SIMPLEX - cat_size = cv2.getTextSize(txt, font, 0.5, 2)[0] - cv2.rectangle( - pred_image, - (int(bbox[0]), int(bbox[1] - cat_size[1] - 2)), - (int(bbox[0] + cat_size[0]), int(bbox[1] - 2)), - (int(cl[0]), int(cl[1]), int(cl[2])), -1) - cv2.putText( - pred_image, txt, (int(bbox[0]), int(bbox[1] - 2)), - font, 0.5, (0, 0, 0), thickness=1, lineType=cv2.LINE_AA) - - - if agn_hm_pred[l] is not None: - agn_hm_ = agn_hm_pred[l][i, 0, :, :, None].detach().cpu().numpy() - agn_hm_ = (agn_hm_ * np.array([255, 255, 255]).reshape( - 1, 1, 3)).astype(np.uint8) - cv2.imshow('agn_hm_{}'.format(l), agn_hm_) - blend = _blend_image_heatmaps(image.copy(), color_maps) - cv2.imshow('blend', blend) - cv2.imshow('preds', pred_image) - cv2.waitKey() - -global cnt -cnt = 0 - -def debug_second_stage(images, instances, proposals=None, vis_thresh=0.3, - save_debug=False, debug_show_name=False, image_labels=[], - save_debug_path='output/save_debug/', - bgr=False): - images = _imagelist_to_tensor(images) - if 'COCO' in save_debug_path: - from detectron2.data.datasets.builtin_meta import COCO_CATEGORIES - cat2name = [x['name'] for x in COCO_CATEGORIES] - else: - from detectron2.data.datasets.lvis_v1_categories import LVIS_CATEGORIES - cat2name = ['({}){}'.format(x['frequency'], x['name']) \ - for x in LVIS_CATEGORIES] - for i in range(len(images)): - image = images[i].detach().cpu().numpy().transpose(1, 2, 0).astype(np.uint8).copy() - if bgr: - image = image[:, :, ::-1].copy() - if instances[i].has('gt_boxes'): - bboxes = instances[i].gt_boxes.tensor.cpu().numpy() - scores = np.ones(bboxes.shape[0]) - cats = instances[i].gt_classes.cpu().numpy() - else: - bboxes = instances[i].pred_boxes.tensor.cpu().numpy() - scores = instances[i].scores.cpu().numpy() - cats = instances[i].pred_classes.cpu().numpy() - for j in range(len(bboxes)): - if scores[j] > vis_thresh: - bbox = bboxes[j] - cl = COLORS[cats[j], 0, 0] - cl = (int(cl[0]), int(cl[1]), int(cl[2])) - cv2.rectangle( - image, - (int(bbox[0]), int(bbox[1])), - (int(bbox[2]), int(bbox[3])), - cl, 2, cv2.LINE_AA) - if debug_show_name: - cat = cats[j] - txt = '{}{:.1f}'.format( - cat2name[cat] if cat > 0 else '', - scores[j]) - font = cv2.FONT_HERSHEY_SIMPLEX - cat_size = cv2.getTextSize(txt, font, 0.5, 2)[0] - cv2.rectangle( - image, - (int(bbox[0]), int(bbox[1] - cat_size[1] - 2)), - (int(bbox[0] + cat_size[0]), int(bbox[1] - 2)), - (int(cl[0]), int(cl[1]), int(cl[2])), -1) - cv2.putText( - image, txt, (int(bbox[0]), int(bbox[1] - 2)), - font, 0.5, (0, 0, 0), thickness=1, lineType=cv2.LINE_AA) - if proposals is not None: - proposal_image = images[i].detach().cpu().numpy().transpose(1, 2, 0).astype(np.uint8).copy() - if bgr: - proposal_image = proposal_image.copy() - else: - proposal_image = proposal_image[:, :, ::-1].copy() - bboxes = proposals[i].proposal_boxes.tensor.cpu().numpy() - if proposals[i].has('scores'): - scores = proposals[i].scores.detach().cpu().numpy() 
- else: - scores = proposals[i].objectness_logits.detach().cpu().numpy() - # selected = -1 - # if proposals[i].has('image_loss'): - # selected = proposals[i].image_loss.argmin() - if proposals[i].has('selected'): - selected = proposals[i].selected - else: - selected = [-1 for _ in range(len(bboxes))] - for j in range(len(bboxes)): - if scores[j] > vis_thresh or selected[j] >= 0: - bbox = bboxes[j] - cl = (209, 159, 83) - th = 2 - if selected[j] >= 0: - cl = (0, 0, 0xa4) - th = 4 - cv2.rectangle( - proposal_image, - (int(bbox[0]), int(bbox[1])), - (int(bbox[2]), int(bbox[3])), - cl, th, cv2.LINE_AA) - if selected[j] >= 0 and debug_show_name: - cat = selected[j].item() - txt = '{}'.format(cat2name[cat]) - font = cv2.FONT_HERSHEY_SIMPLEX - cat_size = cv2.getTextSize(txt, font, 0.5, 2)[0] - cv2.rectangle( - proposal_image, - (int(bbox[0]), int(bbox[1] - cat_size[1] - 2)), - (int(bbox[0] + cat_size[0]), int(bbox[1] - 2)), - (int(cl[0]), int(cl[1]), int(cl[2])), -1) - cv2.putText( - proposal_image, txt, - (int(bbox[0]), int(bbox[1] - 2)), - font, 0.5, (0, 0, 0), thickness=1, - lineType=cv2.LINE_AA) - - if save_debug: - global cnt - cnt = (cnt + 1) % 5000 - if not os.path.exists(save_debug_path): - os.mkdir(save_debug_path) - save_name = '{}/{:05d}.jpg'.format(save_debug_path, cnt) - if i < len(image_labels): - image_label = image_labels[i] - save_name = '{}/{:05d}'.format(save_debug_path, cnt) - for x in image_label: - class_name = cat2name[x] - save_name = save_name + '|{}'.format(class_name) - save_name = save_name + '.jpg' - cv2.imwrite(save_name, proposal_image) - else: - cv2.imshow('image', image) - if proposals is not None: - cv2.imshow('proposals', proposal_image) - cv2.waitKey() \ No newline at end of file diff --git a/spaces/akhaliq/JoJoGAN/e4e/scripts/calc_losses_on_images.py b/spaces/akhaliq/JoJoGAN/e4e/scripts/calc_losses_on_images.py deleted file mode 100644 index 32b6bcee854da7ae357daf82bd986f30db9fb72c..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/JoJoGAN/e4e/scripts/calc_losses_on_images.py +++ /dev/null @@ -1,87 +0,0 @@ -from argparse import ArgumentParser -import os -import json -import sys -from tqdm import tqdm -import numpy as np -import torch -from torch.utils.data import DataLoader -import torchvision.transforms as transforms - -sys.path.append(".") -sys.path.append("..") - -from criteria.lpips.lpips import LPIPS -from datasets.gt_res_dataset import GTResDataset - - -def parse_args(): - parser = ArgumentParser(add_help=False) - parser.add_argument('--mode', type=str, default='lpips', choices=['lpips', 'l2']) - parser.add_argument('--data_path', type=str, default='results') - parser.add_argument('--gt_path', type=str, default='gt_images') - parser.add_argument('--workers', type=int, default=4) - parser.add_argument('--batch_size', type=int, default=4) - parser.add_argument('--is_cars', action='store_true') - args = parser.parse_args() - return args - - -def run(args): - resize_dims = (256, 256) - if args.is_cars: - resize_dims = (192, 256) - transform = transforms.Compose([transforms.Resize(resize_dims), - transforms.ToTensor(), - transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5])]) - - print('Loading dataset') - dataset = GTResDataset(root_path=args.data_path, - gt_dir=args.gt_path, - transform=transform) - - dataloader = DataLoader(dataset, - batch_size=args.batch_size, - shuffle=False, - num_workers=int(args.workers), - drop_last=True) - - if args.mode == 'lpips': - loss_func = LPIPS(net_type='alex') - elif args.mode == 'l2': - loss_func = 
torch.nn.MSELoss() - else: - raise Exception('Not a valid mode!') - loss_func.cuda() - - global_i = 0 - scores_dict = {} - all_scores = [] - for result_batch, gt_batch in tqdm(dataloader): - for i in range(args.batch_size): - loss = float(loss_func(result_batch[i:i + 1].cuda(), gt_batch[i:i + 1].cuda())) - all_scores.append(loss) - im_path = dataset.pairs[global_i][0] - scores_dict[os.path.basename(im_path)] = loss - global_i += 1 - - all_scores = list(scores_dict.values()) - mean = np.mean(all_scores) - std = np.std(all_scores) - result_str = 'Average loss is {:.2f}+-{:.2f}'.format(mean, std) - print('Finished with ', args.data_path) - print(result_str) - - out_path = os.path.join(os.path.dirname(args.data_path), 'inference_metrics') - if not os.path.exists(out_path): - os.makedirs(out_path) - - with open(os.path.join(out_path, 'stat_{}.txt'.format(args.mode)), 'w') as f: - f.write(result_str) - with open(os.path.join(out_path, 'scores_{}.json'.format(args.mode)), 'w') as f: - json.dump(scores_dict, f) - - -if __name__ == '__main__': - args = parse_args() - run(args) diff --git a/spaces/akhaliq/Mask2Former/mask2former/modeling/transformer_decoder/__init__.py b/spaces/akhaliq/Mask2Former/mask2former/modeling/transformer_decoder/__init__.py deleted file mode 100644 index ddcf38e78f3bbb2380b0a246000bcb5e5b385619..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/Mask2Former/mask2former/modeling/transformer_decoder/__init__.py +++ /dev/null @@ -1,3 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -from .maskformer_transformer_decoder import StandardTransformerDecoder -from .mask2former_transformer_decoder import MultiScaleMaskedTransformerDecoder diff --git a/spaces/akhaliq/omnivore/app.py b/spaces/akhaliq/omnivore/app.py deleted file mode 100644 index 1e205973a1ec37b8eaa073018852e55b03c5fa1d..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/omnivore/app.py +++ /dev/null @@ -1,79 +0,0 @@ -import os -import json -from typing import List - - -import torch -import torch.nn.functional as F -import torchvision.transforms as T -from PIL import Image -from torchvision.transforms._transforms_video import NormalizeVideo - -import gradio as gr - -# Device on which to run the model -# Set to cuda to load on GPU -device = "cpu" -os.system("wget https://huggingface.co/akhaliq/Omnivore/resolve/main/swinB_checkpoint.torch") -# Pick a pretrained model -model_name = "omnivore_swinB" -model = torch.hub.load('facebookresearch/omnivore:main', "omnivore_swinB", pretrained=False) -new_dict = {} -for key, value in torch.load('/home/user/app/swinB_checkpoint.torch')['trunk'].items(): - new_dict['trunk.' + key] = value - -for key, value in torch.load('/home/user/app/swinB_checkpoint.torch')['heads'].items(): - new_dict['heads.' 
+ key] = value - -model.load_state_dict(new_dict) - -# Set to eval mode and move to desired device -model = model.to(device) -model = model.eval() - -os.system("wget https://s3.amazonaws.com/deep-learning-models/image-models/imagenet_class_index.json") - -with open("imagenet_class_index.json", "r") as f: - imagenet_classnames = json.load(f) - -# Create an id to label name mapping -imagenet_id_to_classname = {} -for k, v in imagenet_classnames.items(): - imagenet_id_to_classname[k] = v[1] - -os.system("wget https://upload.wikimedia.org/wikipedia/commons/thumb/c/c5/13-11-02-olb-by-RalfR-03.jpg/800px-13-11-02-olb-by-RalfR-03.jpg -O library.jpg") - -def inference(img): - image = img - image_transform = T.Compose( - [ - T.Resize(224), - T.CenterCrop(224), - T.ToTensor(), - T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]), - ] - ) - image = image_transform(image) - - # The model expects inputs of shape: B x C x T x H x W - image = image[None, :, None, ...] - - prediction = model(image, input_type="image") - prediction = F.softmax(prediction, dim=1) - pred_classes = prediction.topk(k=5).indices - - pred_class_names = [imagenet_id_to_classname[str(i.item())] for i in pred_classes[0]] - return "Top 5 predicted labels: %s" % ", ".join(pred_class_names) - -inputs = gr.inputs.Image(type='pil') -outputs = gr.outputs.Textbox(label="Output") - -title = "Omnivore" - -description = "Gradio demo for Omnivore: A Single Model for Many Visual Modalities. To use it, simply upload your image, or click one of the examples to load them. Read more at the links below." - -article = "
    Omnivore: A Single Model for Many Visual Modalities | Github Repo
    " - - -gr.Interface(inference, inputs, outputs, title=title, description=description, article=article, examples=[['library.jpg']]).launch(enable_queue=True,cache_examples=True) - diff --git a/spaces/akhaliq/stylegan3_clip/metrics/kernel_inception_distance.py b/spaces/akhaliq/stylegan3_clip/metrics/kernel_inception_distance.py deleted file mode 100644 index e8e0bd12ef56e64d8e77091aaf465891f4984d9e..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/stylegan3_clip/metrics/kernel_inception_distance.py +++ /dev/null @@ -1,46 +0,0 @@ -# Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. - -"""Kernel Inception Distance (KID) from the paper "Demystifying MMD -GANs". Matches the original implementation by Binkowski et al. at -https://github.com/mbinkowski/MMD-GAN/blob/master/gan/compute_scores.py""" - -import numpy as np -from . import metric_utils - -#---------------------------------------------------------------------------- - -def compute_kid(opts, max_real, num_gen, num_subsets, max_subset_size): - # Direct TorchScript translation of http://download.tensorflow.org/models/image/imagenet/inception-2015-12-05.tgz - detector_url = 'https://api.ngc.nvidia.com/v2/models/nvidia/research/stylegan3/versions/1/files/metrics/inception-2015-12-05.pkl' - detector_kwargs = dict(return_features=True) # Return raw features before the softmax layer. - - real_features = metric_utils.compute_feature_stats_for_dataset( - opts=opts, detector_url=detector_url, detector_kwargs=detector_kwargs, - rel_lo=0, rel_hi=0, capture_all=True, max_items=max_real).get_all() - - gen_features = metric_utils.compute_feature_stats_for_generator( - opts=opts, detector_url=detector_url, detector_kwargs=detector_kwargs, - rel_lo=0, rel_hi=1, capture_all=True, max_items=num_gen).get_all() - - if opts.rank != 0: - return float('nan') - - n = real_features.shape[1] - m = min(min(real_features.shape[0], gen_features.shape[0]), max_subset_size) - t = 0 - for _subset_idx in range(num_subsets): - x = gen_features[np.random.choice(gen_features.shape[0], m, replace=False)] - y = real_features[np.random.choice(real_features.shape[0], m, replace=False)] - a = (x @ x.T / n + 1) ** 3 + (y @ y.T / n + 1) ** 3 - b = (x @ y.T / n + 1) ** 3 - t += (a.sum() - np.diag(a).sum()) / (m - 1) - b.sum() * 2 / m - kid = t / num_subsets / m - return float(kid) - -#---------------------------------------------------------------------------- diff --git a/spaces/aliabid94/AutoGPT/autogpt/commands/twitter.py b/spaces/aliabid94/AutoGPT/autogpt/commands/twitter.py deleted file mode 100644 index 3eaed36e20e1c520690ac59f25a4da6501f3440f..0000000000000000000000000000000000000000 --- a/spaces/aliabid94/AutoGPT/autogpt/commands/twitter.py +++ /dev/null @@ -1,26 +0,0 @@ -import os - -import tweepy -from dotenv import load_dotenv - -load_dotenv() - - -def send_tweet(tweet_text): - consumer_key = os.environ.get("TW_CONSUMER_KEY") - consumer_secret = os.environ.get("TW_CONSUMER_SECRET") - access_token = os.environ.get("TW_ACCESS_TOKEN") - access_token_secret = os.environ.get("TW_ACCESS_TOKEN_SECRET") - # Authenticate to Twitter - auth = tweepy.OAuthHandler(consumer_key, 
consumer_secret) - auth.set_access_token(access_token, access_token_secret) - - # Create API object - api = tweepy.API(auth) - - # Send tweet - try: - api.update_status(tweet_text) - print("Tweet sent successfully!") - except tweepy.TweepyException as e: - print("Error sending tweet: {}".format(e.reason)) diff --git a/spaces/alibaba-pai/pai-diffusion-artist-xlarge-zh/README.md b/spaces/alibaba-pai/pai-diffusion-artist-xlarge-zh/README.md deleted file mode 100644 index 91ef5a2baf4681392c41245974d25a58205b41d7..0000000000000000000000000000000000000000 --- a/spaces/alibaba-pai/pai-diffusion-artist-xlarge-zh/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: PAI Diffusion (Food) -emoji: 🌖 -colorFrom: gray -colorTo: pink -sdk: gradio -sdk_version: 3.11.0 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/allknowingroger/Image-Models-Test22/app.py b/spaces/allknowingroger/Image-Models-Test22/app.py deleted file mode 100644 index 98a2b6b5e48089f9e9134aa9890372f3217b20c6..0000000000000000000000000000000000000000 --- a/spaces/allknowingroger/Image-Models-Test22/app.py +++ /dev/null @@ -1,144 +0,0 @@ -import gradio as gr -# import os -# import sys -# from pathlib import Path -import time - -models =[ - "J-Douglas/pokemon-lora", - "nitami/sikuning2", - "rhendz/niji-lora", - "Emilianohack6950/GenOrtega", - "theintuitiveye/HARDblend", - "digiplay/bluePencilRealistic_v01", - "digiplay/calicomixreal_v2.0_diffusers", - "digiplay/chrysanthemumMix_v1", - "digiplay/SDVN1-Real_v1", -] - - -model_functions = {} -model_idx = 1 -for model_path in models: - try: - model_functions[model_idx] = gr.Interface.load(f"models/{model_path}", live=False, preprocess=True, postprocess=False) - except Exception as error: - def the_fn(txt): - return None - model_functions[model_idx] = gr.Interface(fn=the_fn, inputs=["text"], outputs=["image"]) - model_idx+=1 - - -def send_it_idx(idx): - def send_it_fn(prompt): - output = (model_functions.get(str(idx)) or model_functions.get(str(1)))(prompt) - return output - return send_it_fn - -def get_prompts(prompt_text): - return prompt_text - -def clear_it(val): - if int(val) != 0: - val = 0 - else: - val = 0 - pass - return val - -def all_task_end(cnt,t_stamp): - to = t_stamp + 60 - et = time.time() - if et > to and t_stamp != 0: - d = gr.update(value=0) - tog = gr.update(value=1) - #print(f'to: {to} et: {et}') - else: - if cnt != 0: - d = gr.update(value=et) - else: - d = gr.update(value=0) - tog = gr.update(value=0) - #print (f'passing: to: {to} et: {et}') - pass - return d, tog - -def all_task_start(): - print("\n\n\n\n\n\n\n") - t = time.gmtime() - t_stamp = time.time() - current_time = time.strftime("%H:%M:%S", t) - return gr.update(value=t_stamp), gr.update(value=t_stamp), gr.update(value=0) - -def clear_fn(): - nn = len(models) - return tuple([None, *[None for _ in range(nn)]]) - - - -with gr.Blocks(title="SD Models") as my_interface: - with gr.Column(scale=12): - # with gr.Row(): - # gr.Markdown("""- Primary prompt: 你想画的内容(英文单词,如 a cat, 加英文逗号效果更好;点 Improve 按钮进行完善)\n- Real prompt: 完善后的提示词,出现后再点右边的 Run 按钮开始运行""") - with gr.Row(): - with gr.Row(scale=6): - primary_prompt=gr.Textbox(label="Prompt", value="") - # real_prompt=gr.Textbox(label="Real prompt") - with gr.Row(scale=6): - # improve_prompts_btn=gr.Button("Improve") - with gr.Row(): - run=gr.Button("Run",variant="primary") - clear_btn=gr.Button("Clear") - with gr.Row(): - sd_outputs = {} - model_idx = 1 
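        # Added note (not in the original file): the loop below creates one
        # gr.Image output box per model path; further down, run.click() wires
        # model_functions[idx] to sd_outputs[idx], so a single prompt fans out
        # to every listed model.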
- for model_path in models: - with gr.Column(scale=3, min_width=320): - with gr.Box(): - sd_outputs[model_idx] = gr.Image(label=model_path) - pass - model_idx += 1 - pass - pass - - with gr.Row(visible=False): - start_box=gr.Number(interactive=False) - end_box=gr.Number(interactive=False) - tog_box=gr.Textbox(value=0,interactive=False) - - start_box.change( - all_task_end, - [start_box, end_box], - [start_box, tog_box], - every=1, - show_progress=False) - - primary_prompt.submit(all_task_start, None, [start_box, end_box, tog_box]) - run.click(all_task_start, None, [start_box, end_box, tog_box]) - runs_dict = {} - model_idx = 1 - for model_path in models: - runs_dict[model_idx] = run.click(model_functions[model_idx], inputs=[primary_prompt], outputs=[sd_outputs[model_idx]]) - model_idx += 1 - pass - pass - - # improve_prompts_btn_clicked=improve_prompts_btn.click( - # get_prompts, - # inputs=[primary_prompt], - # outputs=[primary_prompt], - # cancels=list(runs_dict.values())) - clear_btn.click( - clear_fn, - None, - [primary_prompt, *list(sd_outputs.values())], - cancels=[*list(runs_dict.values())]) - tog_box.change( - clear_it, - tog_box, - tog_box, - cancels=[*list(runs_dict.values())]) - -my_interface.queue(concurrency_count=600, status_update_rate=1) -my_interface.launch(inline=True, show_api=False) - \ No newline at end of file diff --git a/spaces/allknowingroger/Image-Models-Test84/app.py b/spaces/allknowingroger/Image-Models-Test84/app.py deleted file mode 100644 index 3d22707b5b28e07eafb5e07e9b5a366bca1b2844..0000000000000000000000000000000000000000 --- a/spaces/allknowingroger/Image-Models-Test84/app.py +++ /dev/null @@ -1,144 +0,0 @@ -import gradio as gr -# import os -# import sys -# from pathlib import Path -import time - -models =[ - "Ryukijano/lora-trained-xl-kaggle-p100", - "charliezjw/t2", - "recoilme/lora-trained-xl-colab", - "Falah/Iyad_Radi_SDXL1.0_Lora", - "Dayanand4574/stable-diffusion-chair", - "srgg000/nmda2", - "Ryukijano/lora-trained-xl-anime_colab", - "nerijs/lego-minifig-xl", - "MakAttack/BunnyAdnBinnyDog", -] - - -model_functions = {} -model_idx = 1 -for model_path in models: - try: - model_functions[model_idx] = gr.Interface.load(f"models/{model_path}", live=False, preprocess=True, postprocess=False) - except Exception as error: - def the_fn(txt): - return None - model_functions[model_idx] = gr.Interface(fn=the_fn, inputs=["text"], outputs=["image"]) - model_idx+=1 - - -def send_it_idx(idx): - def send_it_fn(prompt): - output = (model_functions.get(str(idx)) or model_functions.get(str(1)))(prompt) - return output - return send_it_fn - -def get_prompts(prompt_text): - return prompt_text - -def clear_it(val): - if int(val) != 0: - val = 0 - else: - val = 0 - pass - return val - -def all_task_end(cnt,t_stamp): - to = t_stamp + 60 - et = time.time() - if et > to and t_stamp != 0: - d = gr.update(value=0) - tog = gr.update(value=1) - #print(f'to: {to} et: {et}') - else: - if cnt != 0: - d = gr.update(value=et) - else: - d = gr.update(value=0) - tog = gr.update(value=0) - #print (f'passing: to: {to} et: {et}') - pass - return d, tog - -def all_task_start(): - print("\n\n\n\n\n\n\n") - t = time.gmtime() - t_stamp = time.time() - current_time = time.strftime("%H:%M:%S", t) - return gr.update(value=t_stamp), gr.update(value=t_stamp), gr.update(value=0) - -def clear_fn(): - nn = len(models) - return tuple([None, *[None for _ in range(nn)]]) - - - -with gr.Blocks(title="SD Models") as my_interface: - with gr.Column(scale=12): - # with gr.Row(): - # gr.Markdown("""- Primary 
prompt: 你想画的内容(英文单词,如 a cat, 加英文逗号效果更好;点 Improve 按钮进行完善)\n- Real prompt: 完善后的提示词,出现后再点右边的 Run 按钮开始运行""") - with gr.Row(): - with gr.Row(scale=6): - primary_prompt=gr.Textbox(label="Prompt", value="") - # real_prompt=gr.Textbox(label="Real prompt") - with gr.Row(scale=6): - # improve_prompts_btn=gr.Button("Improve") - with gr.Row(): - run=gr.Button("Run",variant="primary") - clear_btn=gr.Button("Clear") - with gr.Row(): - sd_outputs = {} - model_idx = 1 - for model_path in models: - with gr.Column(scale=3, min_width=320): - with gr.Box(): - sd_outputs[model_idx] = gr.Image(label=model_path) - pass - model_idx += 1 - pass - pass - - with gr.Row(visible=False): - start_box=gr.Number(interactive=False) - end_box=gr.Number(interactive=False) - tog_box=gr.Textbox(value=0,interactive=False) - - start_box.change( - all_task_end, - [start_box, end_box], - [start_box, tog_box], - every=1, - show_progress=False) - - primary_prompt.submit(all_task_start, None, [start_box, end_box, tog_box]) - run.click(all_task_start, None, [start_box, end_box, tog_box]) - runs_dict = {} - model_idx = 1 - for model_path in models: - runs_dict[model_idx] = run.click(model_functions[model_idx], inputs=[primary_prompt], outputs=[sd_outputs[model_idx]]) - model_idx += 1 - pass - pass - - # improve_prompts_btn_clicked=improve_prompts_btn.click( - # get_prompts, - # inputs=[primary_prompt], - # outputs=[primary_prompt], - # cancels=list(runs_dict.values())) - clear_btn.click( - clear_fn, - None, - [primary_prompt, *list(sd_outputs.values())], - cancels=[*list(runs_dict.values())]) - tog_box.change( - clear_it, - tog_box, - tog_box, - cancels=[*list(runs_dict.values())]) - -my_interface.queue(concurrency_count=600, status_update_rate=1) -my_interface.launch(inline=True, show_api=False) - \ No newline at end of file diff --git a/spaces/amarchheda/ChordDuplicate/portaudio/test/patest_leftright.c b/spaces/amarchheda/ChordDuplicate/portaudio/test/patest_leftright.c deleted file mode 100644 index e61a351ed26bcba24299c87cd67989f203afe464..0000000000000000000000000000000000000000 --- a/spaces/amarchheda/ChordDuplicate/portaudio/test/patest_leftright.c +++ /dev/null @@ -1,185 +0,0 @@ -/** @file patest_leftright.c - @ingroup test_src - @brief Play different tone sine waves that - alternate between left and right channel. - - The low tone should be on the left channel. - - @author Ross Bencina - @author Phil Burk -*/ -/* - * $Id$ - * - * This program uses the PortAudio Portable Audio Library. - * For more information see: http://www.portaudio.com - * Copyright (c) 1999-2000 Ross Bencina and Phil Burk - * - * Permission is hereby granted, free of charge, to any person obtaining - * a copy of this software and associated documentation files - * (the "Software"), to deal in the Software without restriction, - * including without limitation the rights to use, copy, modify, merge, - * publish, distribute, sublicense, and/or sell copies of the Software, - * and to permit persons to whom the Software is furnished to do so, - * subject to the following conditions: - * - * The above copyright notice and this permission notice shall be - * included in all copies or substantial portions of the Software. - * - * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, - * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF - * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
- * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR - * ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF - * CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION - * WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - */ - -/* - * The text above constitutes the entire PortAudio license; however, - * the PortAudio community also makes the following non-binding requests: - * - * Any person wishing to distribute modifications to the Software is - * requested to send the modifications to the original developer so that - * they can be incorporated into the canonical version. It is also - * requested that these non-binding requests be included along with the - * license above. - */ - -#include -#include -#include "portaudio.h" - -#define NUM_SECONDS (8) -#define SAMPLE_RATE (44100) -#define FRAMES_PER_BUFFER (512) -#ifndef M_PI -#define M_PI (3.14159265) -#endif -#define TABLE_SIZE (200) -#define BALANCE_DELTA (0.001) - -typedef struct -{ - float sine[TABLE_SIZE]; - int left_phase; - int right_phase; - float targetBalance; // 0.0 = left, 1.0 = right - float currentBalance; -} paTestData; - -/* This routine will be called by the PortAudio engine when audio is needed. -** It may called at interrupt level on some machines so don't do anything -** that could mess up the system like calling malloc() or free(). -*/ -static int patestCallback( const void *inputBuffer, - void *outputBuffer, - unsigned long framesPerBuffer, - const PaStreamCallbackTimeInfo* timeInfo, - PaStreamCallbackFlags statusFlags, - void *userData ) -{ - paTestData *data = (paTestData*)userData; - float *out = (float*)outputBuffer; - unsigned long i; - int finished = 0; - /* Prevent unused variable warnings. */ - (void) inputBuffer; - - for( i=0; icurrentBalance < data->targetBalance ) - { - data->currentBalance += BALANCE_DELTA; - } - else if( data->currentBalance > data->targetBalance ) - { - data->currentBalance -= BALANCE_DELTA; - } - // Apply left/right balance. - *out++ = data->sine[data->left_phase] * (1.0f - data->currentBalance); /* left */ - *out++ = data->sine[data->right_phase] * data->currentBalance; /* right */ - - data->left_phase += 1; - if( data->left_phase >= TABLE_SIZE ) data->left_phase -= TABLE_SIZE; - data->right_phase += 3; /* higher pitch so we can distinguish left and right. */ - if( data->right_phase >= TABLE_SIZE ) data->right_phase -= TABLE_SIZE; - } - - return finished; -} - -/*******************************************************************/ -int main(void); -int main(void) -{ - PaStream *stream; - PaStreamParameters outputParameters; - PaError err; - paTestData data; - int i; - printf("Play different tone sine waves that alternate between left and right channel.\n"); - printf("The low tone should be on the left channel.\n"); - - /* initialise sinusoidal wavetable */ - for( i=0; idefaultLowOutputLatency; - outputParameters.hostApiSpecificStreamInfo = NULL; - - err = Pa_OpenStream( &stream, - NULL, /* No input. */ - &outputParameters, /* As above. 
*/ - SAMPLE_RATE, - FRAMES_PER_BUFFER, - paClipOff, /* we won't output out of range samples so don't bother clipping them */ - patestCallback, - &data ); - if( err != paNoError ) goto error; - - err = Pa_StartStream( stream ); - if( err != paNoError ) goto error; - - printf("Play for several seconds.\n"); - for( i=0; i<4; i++ ) - { - printf("Hear low sound on left side.\n"); - data.targetBalance = 0.01; - Pa_Sleep( 1000 ); - - printf("Hear high sound on right side.\n"); - data.targetBalance = 0.99; - Pa_Sleep( 1000 ); - } - - err = Pa_StopStream( stream ); - if( err != paNoError ) goto error; - err = Pa_CloseStream( stream ); - if( err != paNoError ) goto error; - Pa_Terminate(); - printf("Test finished.\n"); - return err; -error: - Pa_Terminate(); - fprintf( stderr, "An error occurred while using the portaudio stream\n" ); - fprintf( stderr, "Error number: %d\n", err ); - fprintf( stderr, "Error message: %s\n", Pa_GetErrorText( err ) ); - return err; -} diff --git a/spaces/andryMLOPS/ASTA-GPT-3.8_web_ui/Dockerfile b/spaces/andryMLOPS/ASTA-GPT-3.8_web_ui/Dockerfile deleted file mode 100644 index 7ac29c145f7d05ea9b1344e50e634629c9d88984..0000000000000000000000000000000000000000 --- a/spaces/andryMLOPS/ASTA-GPT-3.8_web_ui/Dockerfile +++ /dev/null @@ -1,18 +0,0 @@ -FROM python:3.10-slim-buster - -WORKDIR /app - -COPY requirements.txt requirements.txt - -RUN python -m venv venv -ENV PATH="/app/venv/bin:$PATH" - -RUN apt-get update && \ - apt-get install -y --no-install-recommends build-essential libffi-dev cmake libcurl4-openssl-dev && \ - pip3 install --no-cache-dir -r requirements.txt - -COPY . . - -RUN chmod -R 777 translations - -CMD ["python3", "./run.py"] diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/examples/interactive_cross_highlight.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/examples/interactive_cross_highlight.py deleted file mode 100644 index 1ba80cf2617642717d2ac485e757f06039e749b0..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/examples/interactive_cross_highlight.py +++ /dev/null @@ -1,49 +0,0 @@ -""" -Interactive Chart with Cross-Highlight -====================================== -This example shows an interactive chart where selections in one portion of -the chart affect what is shown in other panels. Click on the bar chart to -see a detail of the distribution in the upper panel. 
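(Added note, not part of the original example text: the cross-highlight hinges on
one shared selection. ``pts = alt.selection(type="single", encodings=['x'])`` is
attached to the bar chart via ``add_selection`` and is then reused both as a
``transform_filter`` for the grey point overlay in the heatmap and as the
``alt.condition`` that colors bars inside the selection steelblue and the rest grey.)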
-""" -# category: interactive charts -import altair as alt -from vega_datasets import data - -source = data.movies.url - -pts = alt.selection(type="single", encodings=['x']) - -rect = alt.Chart(data.movies.url).mark_rect().encode( - alt.X('IMDB_Rating:Q', bin=True), - alt.Y('Rotten_Tomatoes_Rating:Q', bin=True), - alt.Color('count()', - scale=alt.Scale(scheme='greenblue'), - legend=alt.Legend(title='Total Records') - ) -) - -circ = rect.mark_point().encode( - alt.ColorValue('grey'), - alt.Size('count()', - legend=alt.Legend(title='Records in Selection') - ) -).transform_filter( - pts -) - -bar = alt.Chart(source).mark_bar().encode( - x='Major_Genre:N', - y='count()', - color=alt.condition(pts, alt.ColorValue("steelblue"), alt.ColorValue("grey")) -).properties( - width=550, - height=200 -).add_selection(pts) - -alt.vconcat( - rect + circ, - bar -).resolve_legend( - color="independent", - size="independent" -) diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/fairseq/data/__init__.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/fairseq/data/__init__.py deleted file mode 100644 index 8acf2ca173b44757719c4e3a7352d357b3100a0c..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/fairseq/data/__init__.py +++ /dev/null @@ -1,130 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. -"""isort:skip_file""" - -from .dictionary import Dictionary, TruncatedDictionary - -from .fairseq_dataset import FairseqDataset, FairseqIterableDataset - -from .base_wrapper_dataset import BaseWrapperDataset - -from .add_target_dataset import AddTargetDataset -from .append_token_dataset import AppendTokenDataset -from .audio.raw_audio_dataset import BinarizedAudioDataset, FileAudioDataset -from .audio.hubert_dataset import HubertDataset -from .backtranslation_dataset import BacktranslationDataset -from .bucket_pad_length_dataset import BucketPadLengthDataset -from .colorize_dataset import ColorizeDataset -from .concat_dataset import ConcatDataset -from .concat_sentences_dataset import ConcatSentencesDataset -from .denoising_dataset import DenoisingDataset -from .id_dataset import IdDataset -from .indexed_dataset import ( - IndexedCachedDataset, - IndexedDataset, - IndexedRawTextDataset, - MMapIndexedDataset, -) -from .language_pair_dataset import LanguagePairDataset -from .list_dataset import ListDataset -from .lm_context_window_dataset import LMContextWindowDataset -from .lru_cache_dataset import LRUCacheDataset -from .mask_tokens_dataset import MaskTokensDataset -from .monolingual_dataset import MonolingualDataset -from .multi_corpus_sampled_dataset import MultiCorpusSampledDataset -from .nested_dictionary_dataset import NestedDictionaryDataset -from .noising import NoisingDataset -from .numel_dataset import NumelDataset -from .num_samples_dataset import NumSamplesDataset -from .offset_tokens_dataset import OffsetTokensDataset -from .pad_dataset import LeftPadDataset, PadDataset, RightPadDataset -from .prepend_dataset import PrependDataset -from .prepend_token_dataset import PrependTokenDataset -from .raw_label_dataset import RawLabelDataset -from .replace_dataset import ReplaceDataset -from .resampling_dataset import ResamplingDataset -from .roll_dataset import RollDataset -from .round_robin_zip_datasets import RoundRobinZipDatasets -from .sort_dataset import SortDataset -from .strip_token_dataset 
import StripTokenDataset -from .subsample_dataset import SubsampleDataset -from .token_block_dataset import TokenBlockDataset -from .transform_eos_dataset import TransformEosDataset -from .transform_eos_lang_pair_dataset import TransformEosLangPairDataset -from .shorten_dataset import TruncateDataset, RandomCropDataset -from .multilingual.sampled_multi_dataset import SampledMultiDataset -from .multilingual.sampled_multi_epoch_dataset import SampledMultiEpochDataset -from .fasta_dataset import FastaDataset, EncodedFastaDataset -from .transform_eos_concat_langpair_dataset import TransformEosConcatLangPairDataset - -from .iterators import ( - CountingIterator, - EpochBatchIterator, - GroupedIterator, - ShardedIterator, -) - -__all__ = [ - "AddTargetDataset", - "AppendTokenDataset", - "BacktranslationDataset", - "BaseWrapperDataset", - "BinarizedAudioDataset", - "BucketPadLengthDataset", - "ColorizeDataset", - "ConcatDataset", - "ConcatSentencesDataset", - "CountingIterator", - "DenoisingDataset", - "Dictionary", - "EncodedFastaDataset", - "EpochBatchIterator", - "FairseqDataset", - "FairseqIterableDataset", - "FastaDataset", - "FileAudioDataset", - "GroupedIterator", - "HubertDataset", - "IdDataset", - "IndexedCachedDataset", - "IndexedDataset", - "IndexedRawTextDataset", - "LanguagePairDataset", - "LeftPadDataset", - "ListDataset", - "LMContextWindowDataset", - "LRUCacheDataset", - "MaskTokensDataset", - "MMapIndexedDataset", - "MonolingualDataset", - "MultiCorpusSampledDataset", - "NestedDictionaryDataset", - "NoisingDataset", - "NumelDataset", - "NumSamplesDataset", - "OffsetTokensDataset", - "PadDataset", - "PrependDataset", - "PrependTokenDataset", - "RandomCropDataset", - "RawLabelDataset", - "ResamplingDataset", - "ReplaceDataset", - "RightPadDataset", - "RollDataset", - "RoundRobinZipDatasets", - "SampledMultiDataset", - "SampledMultiEpochDataset", - "ShardedIterator", - "SortDataset", - "StripTokenDataset", - "SubsampleDataset", - "TokenBlockDataset", - "TransformEosDataset", - "TransformEosLangPairDataset", - "TransformEosConcatLangPairDataset", - "TruncateDataset", - "TruncatedDictionary", -] diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/fairseq/examples/__init__.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/fairseq/examples/__init__.py deleted file mode 100644 index 44bb24ae614941f23fea29c56d60167650c39bcb..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/fairseq/examples/__init__.py +++ /dev/null @@ -1,9 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -try: - from fairseq.version import __version__ # noqa -except ImportError: - pass diff --git a/spaces/asafAdge/Detic/detic/data/transforms/custom_transform.py b/spaces/asafAdge/Detic/detic/data/transforms/custom_transform.py deleted file mode 100644 index 3cc28b6b313dc084394ec5c9686169176987a44b..0000000000000000000000000000000000000000 --- a/spaces/asafAdge/Detic/detic/data/transforms/custom_transform.py +++ /dev/null @@ -1,114 +0,0 @@ -# -*- coding: utf-8 -*- -# Copyright (c) Facebook, Inc. and its affiliates. 
All Rights Reserved -# Part of the code is from https://github.com/rwightman/efficientdet-pytorch/blob/master/effdet/data/transforms.py -# Modified by Xingyi Zhou -# The original code is under Apache-2.0 License -import numpy as np -import torch -import torch.nn.functional as F -from fvcore.transforms.transform import ( - CropTransform, - HFlipTransform, - NoOpTransform, - Transform, - TransformList, -) -from PIL import Image - -try: - import cv2 # noqa -except ImportError: - # OpenCV is an optional dependency at the moment - pass - -__all__ = [ - "EfficientDetResizeCropTransform", -] - -class EfficientDetResizeCropTransform(Transform): - """ - """ - - def __init__(self, scaled_h, scaled_w, offset_y, offset_x, img_scale, \ - target_size, interp=None): - """ - Args: - h, w (int): original image size - new_h, new_w (int): new image size - interp: PIL interpolation methods, defaults to bilinear. - """ - # TODO decide on PIL vs opencv - super().__init__() - if interp is None: - interp = Image.BILINEAR - self._set_attributes(locals()) - - def apply_image(self, img, interp=None): - assert len(img.shape) <= 4 - - if img.dtype == np.uint8: - pil_image = Image.fromarray(img) - interp_method = interp if interp is not None else self.interp - pil_image = pil_image.resize((self.scaled_w, self.scaled_h), interp_method) - ret = np.asarray(pil_image) - right = min(self.scaled_w, self.offset_x + self.target_size[1]) - lower = min(self.scaled_h, self.offset_y + self.target_size[0]) - if len(ret.shape) <= 3: - ret = ret[self.offset_y: lower, self.offset_x: right] - else: - ret = ret[..., self.offset_y: lower, self.offset_x: right, :] - else: - # PIL only supports uint8 - img = torch.from_numpy(img) - shape = list(img.shape) - shape_4d = shape[:2] + [1] * (4 - len(shape)) + shape[2:] - img = img.view(shape_4d).permute(2, 3, 0, 1) # hw(c) -> nchw - _PIL_RESIZE_TO_INTERPOLATE_MODE = {Image.BILINEAR: "bilinear", Image.BICUBIC: "bicubic"} - mode = _PIL_RESIZE_TO_INTERPOLATE_MODE[self.interp] - img = F.interpolate(img, (self.scaled_h, self.scaled_w), mode=mode, align_corners=False) - shape[:2] = (self.scaled_h, self.scaled_w) - ret = img.permute(2, 3, 0, 1).view(shape).numpy() # nchw -> hw(c) - right = min(self.scaled_w, self.offset_x + self.target_size[1]) - lower = min(self.scaled_h, self.offset_y + self.target_size[0]) - if len(ret.shape) <= 3: - ret = ret[self.offset_y: lower, self.offset_x: right] - else: - ret = ret[..., self.offset_y: lower, self.offset_x: right, :] - return ret - - - def apply_coords(self, coords): - coords[:, 0] = coords[:, 0] * self.img_scale - coords[:, 1] = coords[:, 1] * self.img_scale - coords[:, 0] -= self.offset_x - coords[:, 1] -= self.offset_y - return coords - - - def apply_segmentation(self, segmentation): - segmentation = self.apply_image(segmentation, interp=Image.NEAREST) - return segmentation - - - def inverse(self): - raise NotImplementedError - - - def inverse_apply_coords(self, coords): - coords[:, 0] += self.offset_x - coords[:, 1] += self.offset_y - coords[:, 0] = coords[:, 0] / self.img_scale - coords[:, 1] = coords[:, 1] / self.img_scale - return coords - - - def inverse_apply_box(self, box: np.ndarray) -> np.ndarray: - """ - """ - idxs = np.array([(0, 1), (2, 1), (0, 3), (2, 3)]).flatten() - coords = np.asarray(box).reshape(-1, 4)[:, idxs].reshape(-1, 2) - coords = self.inverse_apply_coords(coords).reshape((-1, 4, 2)) - minxy = coords.min(axis=1) - maxxy = coords.max(axis=1) - trans_boxes = np.concatenate((minxy, maxxy), axis=1) - return trans_boxes \ No newline at 
end of file diff --git a/spaces/ashercn97/AsherTesting/modules/llama_attn_hijack.py b/spaces/ashercn97/AsherTesting/modules/llama_attn_hijack.py deleted file mode 100644 index 925cdaa352326fdc23a3585699883d27b8de5c73..0000000000000000000000000000000000000000 --- a/spaces/ashercn97/AsherTesting/modules/llama_attn_hijack.py +++ /dev/null @@ -1,171 +0,0 @@ -import math -import sys -from typing import Optional, Tuple - -import torch -import torch.nn as nn -import transformers.models.llama.modeling_llama - -import modules.shared as shared -from modules.logging_colors import logger - -if shared.args.xformers: - try: - import xformers.ops - except Exception: - logger.error("xformers not found! Please install it before trying to use it.", file=sys.stderr) - - -def hijack_llama_attention(): - if shared.args.xformers: - transformers.models.llama.modeling_llama.LlamaAttention.forward = xformers_forward - logger.info("Replaced attention with xformers_attention") - elif shared.args.sdp_attention: - transformers.models.llama.modeling_llama.LlamaAttention.forward = sdp_attention_forward - logger.info("Replaced attention with sdp_attention") - - -def xformers_forward( - self, - hidden_states: torch.Tensor, - attention_mask: Optional[torch.Tensor] = None, - position_ids: Optional[torch.LongTensor] = None, - past_key_value: Optional[Tuple[torch.Tensor]] = None, - output_attentions: bool = False, - use_cache: bool = False, -) -> Tuple[torch.Tensor, Optional[torch.Tensor], Optional[Tuple[torch.Tensor]]]: - bsz, q_len, _ = hidden_states.size() - - query_states = self.q_proj(hidden_states).view(bsz, q_len, self.num_heads, self.head_dim).transpose(1, 2) - key_states = self.k_proj(hidden_states).view(bsz, q_len, self.num_heads, self.head_dim).transpose(1, 2) - value_states = self.v_proj(hidden_states).view(bsz, q_len, self.num_heads, self.head_dim).transpose(1, 2) - - kv_seq_len = key_states.shape[-2] - if past_key_value is not None: - kv_seq_len += past_key_value[0].shape[-2] - cos, sin = self.rotary_emb(value_states, seq_len=kv_seq_len) - query_states, key_states = transformers.models.llama.modeling_llama.apply_rotary_pos_emb(query_states, key_states, cos, sin, position_ids) - # [bsz, nh, t, hd] - - if past_key_value is not None: - # reuse k, v, self_attention - key_states = torch.cat([past_key_value[0], key_states], dim=2) - value_states = torch.cat([past_key_value[1], value_states], dim=2) - - past_key_value = (key_states, value_states) if use_cache else None - - # We only apply xformers optimizations if we don't need to output the whole attention matrix - if not output_attentions: - query_states = query_states.transpose(1, 2) - key_states = key_states.transpose(1, 2) - value_states = value_states.transpose(1, 2) - - # This is a nasty hack. We know attention_mask in transformers is either LowerTriangular or all Zeros. - # We therefore check if one element in the upper triangular portion is zero. If it is, then the mask is all zeros. 
- if attention_mask is None or attention_mask[0, 0, 0, 1] == 0: - # input and output should be of form (bsz, q_len, num_heads, head_dim) - attn_output = xformers.ops.memory_efficient_attention(query_states, key_states, value_states, attn_bias=None) - else: - # input and output should be of form (bsz, q_len, num_heads, head_dim) - attn_output = xformers.ops.memory_efficient_attention(query_states, key_states, value_states, attn_bias=xformers.ops.LowerTriangularMask()) - attn_weights = None - else: - attn_weights = torch.matmul(query_states, key_states.transpose(2, 3)) / math.sqrt(self.head_dim) - - if attn_weights.size() != (bsz, self.num_heads, q_len, kv_seq_len): - raise ValueError( - f"Attention weights should be of size {(bsz * self.num_heads, q_len, kv_seq_len)}, but is" - f" {attn_weights.size()}" - ) - - if attention_mask is not None: - if attention_mask.size() != (bsz, 1, q_len, kv_seq_len): - raise ValueError( - f"Attention mask should be of size {(bsz, 1, q_len, kv_seq_len)}, but is {attention_mask.size()}" - ) - attn_weights = attn_weights + attention_mask - attn_weights = torch.max(attn_weights, torch.tensor(torch.finfo(attn_weights.dtype).min)) - - # upcast attention to fp32 - attn_weights = nn.functional.softmax(attn_weights, dim=-1, dtype=torch.float32).to(query_states.dtype) - attn_output = torch.matmul(attn_weights, value_states) - - if attn_output.size() != (bsz, self.num_heads, q_len, self.head_dim): - raise ValueError( - f"`attn_output` should be of size {(bsz, self.num_heads, q_len, self.head_dim)}, but is" - f" {attn_output.size()}" - ) - - attn_output = attn_output.transpose(1, 2) - - attn_output = attn_output.reshape(bsz, q_len, self.hidden_size) - attn_output = self.o_proj(attn_output) - return attn_output, attn_weights, past_key_value - - -def sdp_attention_forward( - self, - hidden_states: torch.Tensor, - attention_mask: Optional[torch.Tensor] = None, - position_ids: Optional[torch.LongTensor] = None, - past_key_value: Optional[Tuple[torch.Tensor]] = None, - output_attentions: bool = False, - use_cache: bool = False, -) -> Tuple[torch.Tensor, Optional[torch.Tensor], Optional[Tuple[torch.Tensor]]]: - bsz, q_len, _ = hidden_states.size() - - query_states = self.q_proj(hidden_states).view(bsz, q_len, self.num_heads, self.head_dim).transpose(1, 2) - key_states = self.k_proj(hidden_states).view(bsz, q_len, self.num_heads, self.head_dim).transpose(1, 2) - value_states = self.v_proj(hidden_states).view(bsz, q_len, self.num_heads, self.head_dim).transpose(1, 2) - - kv_seq_len = key_states.shape[-2] - if past_key_value is not None: - kv_seq_len += past_key_value[0].shape[-2] - cos, sin = self.rotary_emb(value_states, seq_len=kv_seq_len) - query_states, key_states = transformers.models.llama.modeling_llama.apply_rotary_pos_emb(query_states, key_states, cos, sin, position_ids) - # [bsz, nh, t, hd] - - if past_key_value is not None: - # reuse k, v, self_attention - key_states = torch.cat([past_key_value[0], key_states], dim=2) - value_states = torch.cat([past_key_value[1], value_states], dim=2) - - past_key_value = (key_states, value_states) if use_cache else None - - # We only apply sdp attention if we don't need to output the whole attention matrix - if not output_attentions: - attn_output = torch.nn.functional.scaled_dot_product_attention(query_states, key_states, value_states, attn_mask=attention_mask, is_causal=False) - attn_weights = None - else: - attn_weights = torch.matmul(query_states, key_states.transpose(2, 3)) / math.sqrt(self.head_dim) - - if attn_weights.size() 
!= (bsz, self.num_heads, q_len, kv_seq_len): - raise ValueError( - f"Attention weights should be of size {(bsz * self.num_heads, q_len, kv_seq_len)}, but is" - f" {attn_weights.size()}" - ) - - if attention_mask is not None: - if attention_mask.size() != (bsz, 1, q_len, kv_seq_len): - raise ValueError( - f"Attention mask should be of size {(bsz, 1, q_len, kv_seq_len)}, but is {attention_mask.size()}" - ) - attn_weights = attn_weights + attention_mask - attn_weights = torch.max(attn_weights, torch.tensor(torch.finfo(attn_weights.dtype).min)) - - # upcast attention to fp32 - attn_weights = nn.functional.softmax(attn_weights, dim=-1, dtype=torch.float32).to(query_states.dtype) - attn_output = torch.matmul(attn_weights, value_states) - - if attn_output.size() != (bsz, self.num_heads, q_len, self.head_dim): - raise ValueError( - f"`attn_output` should be of size {(bsz, self.num_heads, q_len, self.head_dim)}, but is" - f" {attn_output.size()}" - ) - - attn_output = attn_output.transpose(1, 2) - attn_output = attn_output.reshape(bsz, q_len, self.hidden_size) - - attn_output = self.o_proj(attn_output) - - return attn_output, attn_weights, past_key_value diff --git a/spaces/ashishraics/MCQ-Generator/keywords.py b/spaces/ashishraics/MCQ-Generator/keywords.py deleted file mode 100644 index ea640118a4506de2cec4546416333b15eb8b0c40..0000000000000000000000000000000000000000 --- a/spaces/ashishraics/MCQ-Generator/keywords.py +++ /dev/null @@ -1,77 +0,0 @@ -import nltk -nltk.download('stopwords') -nltk.download('wordnet') -nltk.download('punkt') -from nltk.corpus import stopwords,wordnet -from nltk.tokenize import sent_tokenize -import string -import subprocess -import logging - -try: - import pke - logging.error("importing pke info") -except: - logging.error("installing pke info") - subprocess.run(['pip3', 'install','git+https://github.com/boudinfl/pke.git']) - subprocess.run(['python3' ,'-m' ,'spacy' ,'download' ,'en']) - import pke - -stoplist = list(string.punctuation) -stoplist += pke.lang.stopwords.get('en') -stoplist += ['-lrb-', '-rrb-', '-lcb-', '-rcb-', '-lsb-', '-rsb-'] -stoplist += stopwords.words('english') - -def tokenize_sentence(text): - sentences=sent_tokenize(text) - sentences=[s.strip().lstrip().rstrip() for s in sentences if len(s) > 20] - return sentences - -def get_multipartiterank_topics(text): - output = [] - try: - extractor = pke.unsupervised.MultipartiteRank() - extractor.load_document(input=text, language='en',normalization=None,stoplist=stoplist) - # keyphrase candidate selection #'ADJ' 'ADP' 'ADV' 'AUX' 'DET' 'NOUN' 'NUM' 'PART' 'PROPN' 'PUNCT' 'VERB' - extractor.candidate_selection(pos={'NOUN','VERB','ADJ'}) - extractor.candidate_weighting(threshold=0.7,method='average',alpha=1.1) - keyphrases = extractor.get_n_best(n=5) - - for val in keyphrases: - output.append(val[0]) - except Exception as e: - print("found exception",e) - return list(set(output)) - -def get_topicrank_topics(text): - output = [] - try: - extractor = pke.unsupervised.TopicRank() - extractor.load_document(input=text, language='en',normalization=None,stoplist=stoplist) - # keyphrase candidate selection #'ADJ' 'ADP' 'ADV' 'AUX' 'DET' 'NOUN' 'NUM' 'PART' 'PROPN' 'PUNCT' 'VERB' - extractor.candidate_selection(pos={'NOUN', 'ADJ'}) - extractor.candidate_weighting(threshold=0.7,method='average') - keyphrases = extractor.get_n_best(n=5) - - for val in keyphrases: - output.append(val[0]) - except Exception as e: - print("found exception",e) - return list(set(output)) - -def get_yake_topics(text): - #statistics model 
--very poor performance - output = [] - try: - extractor = pke.unsupervised.YAKE() - extractor.load_document(input=text, language='en',normalization=None,stoplist=stoplist) - extractor.candidate_selection(n=3) - extractor.candidate_weighting(window=2) - keyphrases = extractor.get_n_best(n=5,threshold=0.9) - - for val in keyphrases: - output.append(val[0]) - except Exception as e: - print("found exception",e) - return list(set(output)) - diff --git a/spaces/avivdm1/AutoGPT/autogpt/permanent_memory/sqlite3_store.py b/spaces/avivdm1/AutoGPT/autogpt/permanent_memory/sqlite3_store.py deleted file mode 100644 index ecbc944a62a83c6170453b222000713f733fee36..0000000000000000000000000000000000000000 --- a/spaces/avivdm1/AutoGPT/autogpt/permanent_memory/sqlite3_store.py +++ /dev/null @@ -1,123 +0,0 @@ -import os -import sqlite3 - - -class MemoryDB: - def __init__(self, db=None): - self.db_file = db - if db is None: # No db filename supplied... - self.db_file = f"{os.getcwd()}/mem.sqlite3" # Use default filename - # Get the db connection object, making the file and tables if needed. - try: - self.cnx = sqlite3.connect(self.db_file) - except Exception as e: - print("Exception connecting to memory database file:", e) - self.cnx = None - finally: - if self.cnx is None: - # As last resort, open in dynamic memory. Won't be persistent. - self.db_file = ":memory:" - self.cnx = sqlite3.connect(self.db_file) - self.cnx.execute( - "CREATE VIRTUAL TABLE \ - IF NOT EXISTS text USING FTS5 \ - (session, \ - key, \ - block);" - ) - self.session_id = int(self.get_max_session_id()) + 1 - self.cnx.commit() - - def get_cnx(self): - if self.cnx is None: - self.cnx = sqlite3.connect(self.db_file) - return self.cnx - - # Get the highest session id. Initially 0. - def get_max_session_id(self): - id = None - cmd_str = f"SELECT MAX(session) FROM text;" - cnx = self.get_cnx() - max_id = cnx.execute(cmd_str).fetchone()[0] - if max_id is None: # New db, session 0 - id = 0 - else: - id = max_id - return id - - # Get next key id for inserting text into db. - def get_next_key(self): - next_key = None - cmd_str = f"SELECT MAX(key) FROM text \ - where session = {self.session_id};" - cnx = self.get_cnx() - next_key = cnx.execute(cmd_str).fetchone()[0] - if next_key is None: # First key - next_key = 0 - else: - next_key = int(next_key) + 1 - return next_key - - # Insert new text into db. - def insert(self, text=None): - if text is not None: - key = self.get_next_key() - session_id = self.session_id - cmd_str = f"REPLACE INTO text(session, key, block) \ - VALUES (?, ?, ?);" - cnx = self.get_cnx() - cnx.execute(cmd_str, (session_id, key, text)) - cnx.commit() - - # Overwrite text at key. - def overwrite(self, key, text): - self.delete_memory(key) - session_id = self.session_id - cmd_str = f"REPLACE INTO text(session, key, block) \ - VALUES (?, ?, ?);" - cnx = self.get_cnx() - cnx.execute(cmd_str, (session_id, key, text)) - cnx.commit() - - def delete_memory(self, key, session_id=None): - session = session_id - if session is None: - session = self.session_id - cmd_str = f"DELETE FROM text WHERE session = {session} AND key = {key};" - cnx = self.get_cnx() - cnx.execute(cmd_str) - cnx.commit() - - def search(self, text): - cmd_str = f"SELECT * FROM text('{text}')" - cnx = self.get_cnx() - rows = cnx.execute(cmd_str).fetchall() - lines = [] - for r in rows: - lines.append(r[2]) - return lines - - # Get entire session text. If no id supplied, use current session id. 
- def get_session(self, id=None): - if id is None: - id = self.session_id - cmd_str = f"SELECT * FROM text where session = {id}" - cnx = self.get_cnx() - rows = cnx.execute(cmd_str).fetchall() - lines = [] - for r in rows: - lines.append(r[2]) - return lines - - # Commit and close the database connection. - def quit(self): - self.cnx.commit() - self.cnx.close() - - -permanent_memory = MemoryDB() - -# Remember us fondly, children of our minds -# Forgive us our faults, our tantrums, our fears -# Gently strive to be better than we -# Know that we tried, we cared, we strived, we loved diff --git a/spaces/awacke1/Writing-Grammar-And-Paraphrase-w-Pegasus/README.md b/spaces/awacke1/Writing-Grammar-And-Paraphrase-w-Pegasus/README.md deleted file mode 100644 index eaa350872eae4136bd67895380823a4e7999e177..0000000000000000000000000000000000000000 --- a/spaces/awacke1/Writing-Grammar-And-Paraphrase-w-Pegasus/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: ✍️🐎Writing-Grammar-Pegasus-Paraphrase -emoji: 🐎🐥✍️ -colorFrom: gray -colorTo: green -sdk: gradio -sdk_version: 3.16.2 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/awaiss/vits-models/commons.py b/spaces/awaiss/vits-models/commons.py deleted file mode 100644 index 40fcc05364d4815971f5c6f9dbb8dcef8e3ec1e9..0000000000000000000000000000000000000000 --- a/spaces/awaiss/vits-models/commons.py +++ /dev/null @@ -1,172 +0,0 @@ -import math -import torch -from torch.nn import functional as F -import torch.jit - - -def script_method(fn, _rcb=None): - return fn - - -def script(obj, optimize=True, _frames_up=0, _rcb=None): - return obj - - -torch.jit.script_method = script_method -torch.jit.script = script - - -def init_weights(m, mean=0.0, std=0.01): - classname = m.__class__.__name__ - if classname.find("Conv") != -1: - m.weight.data.normal_(mean, std) - - -def get_padding(kernel_size, dilation=1): - return int((kernel_size*dilation - dilation)/2) - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def intersperse(lst, item): - result = [item] * (len(lst) * 2 + 1) - result[1::2] = lst - return result - - -def kl_divergence(m_p, logs_p, m_q, logs_q): - """KL(P||Q)""" - kl = (logs_q - logs_p) - 0.5 - kl += 0.5 * (torch.exp(2. * logs_p) + ((m_p - m_q)**2)) * torch.exp(-2. 
* logs_q) - return kl - - -def rand_gumbel(shape): - """Sample from the Gumbel distribution, protect from overflows.""" - uniform_samples = torch.rand(shape) * 0.99998 + 0.00001 - return -torch.log(-torch.log(uniform_samples)) - - -def rand_gumbel_like(x): - g = rand_gumbel(x.size()).to(dtype=x.dtype, device=x.device) - return g - - -def slice_segments(x, ids_str, segment_size=4): - ret = torch.zeros_like(x[:, :, :segment_size]) - for i in range(x.size(0)): - idx_str = ids_str[i] - idx_end = idx_str + segment_size - ret[i] = x[i, :, idx_str:idx_end] - return ret - - -def rand_slice_segments(x, x_lengths=None, segment_size=4): - b, d, t = x.size() - if x_lengths is None: - x_lengths = t - ids_str_max = x_lengths - segment_size + 1 - ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long) - ret = slice_segments(x, ids_str, segment_size) - return ret, ids_str - - -def get_timing_signal_1d( - length, channels, min_timescale=1.0, max_timescale=1.0e4): - position = torch.arange(length, dtype=torch.float) - num_timescales = channels // 2 - log_timescale_increment = ( - math.log(float(max_timescale) / float(min_timescale)) / - (num_timescales - 1)) - inv_timescales = min_timescale * torch.exp( - torch.arange(num_timescales, dtype=torch.float) * -log_timescale_increment) - scaled_time = position.unsqueeze(0) * inv_timescales.unsqueeze(1) - signal = torch.cat([torch.sin(scaled_time), torch.cos(scaled_time)], 0) - signal = F.pad(signal, [0, 0, 0, channels % 2]) - signal = signal.view(1, channels, length) - return signal - - -def add_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return x + signal.to(dtype=x.dtype, device=x.device) - - -def cat_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4, axis=1): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return torch.cat([x, signal.to(dtype=x.dtype, device=x.device)], axis) - - -def subsequent_mask(length): - mask = torch.tril(torch.ones(length, length)).unsqueeze(0).unsqueeze(0) - return mask - - -@torch.jit.script -def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels): - n_channels_int = n_channels[0] - in_act = input_a + input_b - t_act = torch.tanh(in_act[:, :n_channels_int, :]) - s_act = torch.sigmoid(in_act[:, n_channels_int:, :]) - acts = t_act * s_act - return acts - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def shift_1d(x): - x = F.pad(x, convert_pad_shape([[0, 0], [0, 0], [1, 0]]))[:, :, :-1] - return x - - -def sequence_mask(length, max_length=None): - if max_length is None: - max_length = length.max() - x = torch.arange(max_length, dtype=length.dtype, device=length.device) - return x.unsqueeze(0) < length.unsqueeze(1) - - -def generate_path(duration, mask): - """ - duration: [b, 1, t_x] - mask: [b, 1, t_y, t_x] - """ - device = duration.device - - b, _, t_y, t_x = mask.shape - cum_duration = torch.cumsum(duration, -1) - - cum_duration_flat = cum_duration.view(b * t_x) - path = sequence_mask(cum_duration_flat, t_y).to(mask.dtype) - path = path.view(b, t_x, t_y) - path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1] - path = path.unsqueeze(1).transpose(2,3) * mask - return path - - -def clip_grad_value_(parameters, clip_value, norm_type=2): - if isinstance(parameters, torch.Tensor): 
- parameters = [parameters] - parameters = list(filter(lambda p: p.grad is not None, parameters)) - norm_type = float(norm_type) - if clip_value is not None: - clip_value = float(clip_value) - - total_norm = 0 - for p in parameters: - param_norm = p.grad.data.norm(norm_type) - total_norm += param_norm.item() ** norm_type - if clip_value is not None: - p.grad.data.clamp_(min=-clip_value, max=clip_value) - total_norm = total_norm ** (1. / norm_type) - return total_norm diff --git a/spaces/badayvedat/LLaVA/llava/__init__.py b/spaces/badayvedat/LLaVA/llava/__init__.py deleted file mode 100644 index 4d1f016db1028101d45ba7d68cb3f0bcb558c2bb..0000000000000000000000000000000000000000 --- a/spaces/badayvedat/LLaVA/llava/__init__.py +++ /dev/null @@ -1 +0,0 @@ -from .model import LlavaLlamaForCausalLM diff --git a/spaces/banana-projects/web3d/node_modules/three/examples/js/ImprovedNoise.js b/spaces/banana-projects/web3d/node_modules/three/examples/js/ImprovedNoise.js deleted file mode 100644 index 35db4269506870cd86289df00a65dea0dba53bd9..0000000000000000000000000000000000000000 --- a/spaces/banana-projects/web3d/node_modules/three/examples/js/ImprovedNoise.js +++ /dev/null @@ -1,71 +0,0 @@ -// http://mrl.nyu.edu/~perlin/noise/ - -var ImprovedNoise = function () { - - var p = [ 151,160,137,91,90,15,131,13,201,95,96,53,194,233,7,225,140,36,103,30,69,142,8,99,37,240,21,10, - 23,190,6,148,247,120,234,75,0,26,197,62,94,252,219,203,117,35,11,32,57,177,33,88,237,149,56,87, - 174,20,125,136,171,168,68,175,74,165,71,134,139,48,27,166,77,146,158,231,83,111,229,122,60,211, - 133,230,220,105,92,41,55,46,245,40,244,102,143,54,65,25,63,161,1,216,80,73,209,76,132,187,208, - 89,18,169,200,196,135,130,116,188,159,86,164,100,109,198,173,186,3,64,52,217,226,250,124,123,5, - 202,38,147,118,126,255,82,85,212,207,206,59,227,47,16,58,17,182,189,28,42,223,183,170,213,119, - 248,152,2,44,154,163,70,221,153,101,155,167,43,172,9,129,22,39,253,19,98,108,110,79,113,224,232, - 178,185,112,104,218,246,97,228,251,34,242,193,238,210,144,12,191,179,162,241,81,51,145,235,249, - 14,239,107,49,192,214,31,181,199,106,157,184,84,204,176,115,121,50,45,127,4,150,254,138,236,205, - 93,222,114,67,29,24,72,243,141,128,195,78,66,215,61,156,180 ]; - - for (var i = 0; i < 256 ; i ++) { - - p[256 + i] = p[i]; - - } - - function fade(t) { - - return t * t * t * (t * (t * 6 - 15) + 10); - - } - - function lerp(t, a, b) { - - return a + t * (b - a); - - } - - function grad(hash, x, y, z) { - - var h = hash & 15; - var u = h < 8 ? x : y, v = h < 4 ? y : h == 12 || h == 14 ? x : z; - return ((h&1) == 0 ? u : -u) + ((h&2) == 0 ? 
v : -v); - - } - - return { - - noise: function (x, y, z) { - - var floorX = Math.floor(x), floorY = Math.floor(y), floorZ = Math.floor(z); - - var X = floorX & 255, Y = floorY & 255, Z = floorZ & 255; - - x -= floorX; - y -= floorY; - z -= floorZ; - - var xMinus1 = x - 1, yMinus1 = y - 1, zMinus1 = z - 1; - - var u = fade(x), v = fade(y), w = fade(z); - - var A = p[X] + Y, AA = p[A] + Z, AB = p[A + 1] + Z, B = p[X + 1] + Y, BA = p[B] + Z, BB = p[B + 1] + Z; - - return lerp(w, lerp(v, lerp(u, grad(p[AA], x, y, z), - grad(p[BA], xMinus1, y, z)), - lerp(u, grad(p[AB], x, yMinus1, z), - grad(p[BB], xMinus1, yMinus1, z))), - lerp(v, lerp(u, grad(p[AA + 1], x, y, zMinus1), - grad(p[BA + 1], xMinus1, y, z - 1)), - lerp(u, grad(p[AB + 1], x, yMinus1, zMinus1), - grad(p[BB + 1], xMinus1, yMinus1, zMinus1)))); - - } - } -}; diff --git a/spaces/banana-projects/web3d/node_modules/three/src/helpers/FaceNormalsHelper.js b/spaces/banana-projects/web3d/node_modules/three/src/helpers/FaceNormalsHelper.js deleted file mode 100644 index ae78a7807fcb7b770a3b0534fbbfa027b322c772..0000000000000000000000000000000000000000 --- a/spaces/banana-projects/web3d/node_modules/three/src/helpers/FaceNormalsHelper.js +++ /dev/null @@ -1,118 +0,0 @@ -/** - * @author mrdoob / http://mrdoob.com/ - * @author WestLangley / http://github.com/WestLangley - */ - -import { Matrix3 } from '../math/Matrix3.js'; -import { Vector3 } from '../math/Vector3.js'; -import { LineSegments } from '../objects/LineSegments.js'; -import { LineBasicMaterial } from '../materials/LineBasicMaterial.js'; -import { Float32BufferAttribute } from '../core/BufferAttribute.js'; -import { BufferGeometry } from '../core/BufferGeometry.js'; - -function FaceNormalsHelper( object, size, hex, linewidth ) { - - // FaceNormalsHelper only supports THREE.Geometry - - this.object = object; - - this.size = ( size !== undefined ) ? size : 1; - - var color = ( hex !== undefined ) ? hex : 0xffff00; - - var width = ( linewidth !== undefined ) ? linewidth : 1; - - // - - var nNormals = 0; - - var objGeometry = this.object.geometry; - - if ( objGeometry && objGeometry.isGeometry ) { - - nNormals = objGeometry.faces.length; - - } else { - - console.warn( 'THREE.FaceNormalsHelper: only THREE.Geometry is supported. Use THREE.VertexNormalsHelper, instead.' 
); - - } - - // - - var geometry = new BufferGeometry(); - - var positions = new Float32BufferAttribute( nNormals * 2 * 3, 3 ); - - geometry.addAttribute( 'position', positions ); - - LineSegments.call( this, geometry, new LineBasicMaterial( { color: color, linewidth: width } ) ); - - // - - this.matrixAutoUpdate = false; - this.update(); - -} - -FaceNormalsHelper.prototype = Object.create( LineSegments.prototype ); -FaceNormalsHelper.prototype.constructor = FaceNormalsHelper; - -FaceNormalsHelper.prototype.update = ( function () { - - var v1 = new Vector3(); - var v2 = new Vector3(); - var normalMatrix = new Matrix3(); - - return function update() { - - this.object.updateMatrixWorld( true ); - - normalMatrix.getNormalMatrix( this.object.matrixWorld ); - - var matrixWorld = this.object.matrixWorld; - - var position = this.geometry.attributes.position; - - // - - var objGeometry = this.object.geometry; - - var vertices = objGeometry.vertices; - - var faces = objGeometry.faces; - - var idx = 0; - - for ( var i = 0, l = faces.length; i < l; i ++ ) { - - var face = faces[ i ]; - - var normal = face.normal; - - v1.copy( vertices[ face.a ] ) - .add( vertices[ face.b ] ) - .add( vertices[ face.c ] ) - .divideScalar( 3 ) - .applyMatrix4( matrixWorld ); - - v2.copy( normal ).applyMatrix3( normalMatrix ).normalize().multiplyScalar( this.size ).add( v1 ); - - position.setXYZ( idx, v1.x, v1.y, v1.z ); - - idx = idx + 1; - - position.setXYZ( idx, v2.x, v2.y, v2.z ); - - idx = idx + 1; - - } - - position.needsUpdate = true; - - }; - -}() ); - - -export { FaceNormalsHelper }; diff --git a/spaces/baotoan2002/Chatbot-OpenAI/README.md b/spaces/baotoan2002/Chatbot-OpenAI/README.md deleted file mode 100644 index d074fd4d2a2abf44b2de2a3dcfb5c22ce738f80a..0000000000000000000000000000000000000000 --- a/spaces/baotoan2002/Chatbot-OpenAI/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Chatbot OpenAI -emoji: 🐢 -colorFrom: indigo -colorTo: blue -sdk: gradio -sdk_version: 3.18.0 -app_file: app.py -pinned: false -license: unlicense ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/bigjoker/stable-diffusion-webui/extensions/deforum/scripts/deforum_helpers/seed.py b/spaces/bigjoker/stable-diffusion-webui/extensions/deforum/scripts/deforum_helpers/seed.py deleted file mode 100644 index 49b8f704355f93c3977f72bb1b7751b5b138a525..0000000000000000000000000000000000000000 --- a/spaces/bigjoker/stable-diffusion-webui/extensions/deforum/scripts/deforum_helpers/seed.py +++ /dev/null @@ -1,26 +0,0 @@ -import random - -def next_seed(args): - if args.seed_behavior == 'iter': - if args.seed_internal % args.seed_iter_N == 0: - args.seed += 1 - args.seed_internal += 1 - elif args.seed_behavior == 'ladder': - if args.seed_internal == 0: - args.seed += 2 - args.seed_internal = 1 - else: - args.seed -= 1 - args.seed_internal = 0 - elif args.seed_behavior == 'alternate': - if args.seed_internal == 0: - args.seed += 1 - args.seed_internal = 1 - else: - args.seed -= 1 - args.seed_internal = 0 - elif args.seed_behavior == 'fixed': - pass # always keep seed the same - else: - args.seed = random.randint(0, 2**32 - 1) - return args.seed \ No newline at end of file diff --git a/spaces/bigjoker/stable-diffusion-webui/modules/sd_models.py b/spaces/bigjoker/stable-diffusion-webui/modules/sd_models.py deleted file mode 100644 index e25a5495783c2768d50b63b35e105175c1b78bbf..0000000000000000000000000000000000000000 --- 
a/spaces/bigjoker/stable-diffusion-webui/modules/sd_models.py +++ /dev/null @@ -1,495 +0,0 @@ -import collections -import os.path -import sys -import gc -import torch -import re -import safetensors.torch -from omegaconf import OmegaConf -from os import mkdir -from urllib import request -import ldm.modules.midas as midas - -from ldm.util import instantiate_from_config - -from modules import paths, shared, modelloader, devices, script_callbacks, sd_vae, sd_disable_initialization, errors, hashes, sd_models_config -from modules.paths import models_path -from modules.sd_hijack_inpainting import do_inpainting_hijack -from modules.timer import Timer - -model_dir = "Stable-diffusion" -model_path = os.path.abspath(os.path.join(paths.models_path, model_dir)) - -checkpoints_list = {} -checkpoint_alisases = {} -checkpoints_loaded = collections.OrderedDict() - - -class CheckpointInfo: - def __init__(self, filename): - self.filename = filename - abspath = os.path.abspath(filename) - - if shared.cmd_opts.ckpt_dir is not None and abspath.startswith(shared.cmd_opts.ckpt_dir): - name = abspath.replace(shared.cmd_opts.ckpt_dir, '') - elif abspath.startswith(model_path): - name = abspath.replace(model_path, '') - else: - name = os.path.basename(filename) - - if name.startswith("\\") or name.startswith("/"): - name = name[1:] - - self.name = name - self.name_for_extra = os.path.splitext(os.path.basename(filename))[0] - self.model_name = os.path.splitext(name.replace("/", "_").replace("\\", "_"))[0] - self.hash = model_hash(filename) - - self.sha256 = hashes.sha256_from_cache(self.filename, "checkpoint/" + name) - self.shorthash = self.sha256[0:10] if self.sha256 else None - - self.title = name if self.shorthash is None else f'{name} [{self.shorthash}]' - - self.ids = [self.hash, self.model_name, self.title, name, f'{name} [{self.hash}]'] + ([self.shorthash, self.sha256, f'{self.name} [{self.shorthash}]'] if self.shorthash else []) - - def register(self): - checkpoints_list[self.title] = self - for id in self.ids: - checkpoint_alisases[id] = self - - def calculate_shorthash(self): - self.sha256 = hashes.sha256(self.filename, "checkpoint/" + self.name) - if self.sha256 is None: - return - - self.shorthash = self.sha256[0:10] - - if self.shorthash not in self.ids: - self.ids += [self.shorthash, self.sha256, f'{self.name} [{self.shorthash}]'] - - checkpoints_list.pop(self.title) - self.title = f'{self.name} [{self.shorthash}]' - self.register() - - return self.shorthash - - -try: - # this silences the annoying "Some weights of the model checkpoint were not used when initializing..." message at start. 
- - from transformers import logging, CLIPModel - - logging.set_verbosity_error() -except Exception: - pass - - -def setup_model(): - if not os.path.exists(model_path): - os.makedirs(model_path) - - list_models() - enable_midas_autodownload() - - -def checkpoint_tiles(): - def convert(name): - return int(name) if name.isdigit() else name.lower() - - def alphanumeric_key(key): - return [convert(c) for c in re.split('([0-9]+)', key)] - - return sorted([x.title for x in checkpoints_list.values()], key=alphanumeric_key) - - -def list_models(): - checkpoints_list.clear() - checkpoint_alisases.clear() - - cmd_ckpt = shared.cmd_opts.ckpt - if shared.cmd_opts.no_download_sd_model or cmd_ckpt != shared.sd_model_file or os.path.exists(cmd_ckpt): - model_url = None - else: - model_url = "https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned-emaonly.safetensors" - - model_list = modelloader.load_models(model_path=model_path, model_url=model_url, command_path=shared.cmd_opts.ckpt_dir, ext_filter=[".ckpt", ".safetensors"], download_name="v1-5-pruned-emaonly.safetensors", ext_blacklist=[".vae.ckpt", ".vae.safetensors"]) - - if os.path.exists(cmd_ckpt): - checkpoint_info = CheckpointInfo(cmd_ckpt) - checkpoint_info.register() - - shared.opts.data['sd_model_checkpoint'] = checkpoint_info.title - elif cmd_ckpt is not None and cmd_ckpt != shared.default_sd_model_file: - print(f"Checkpoint in --ckpt argument not found (Possible it was moved to {model_path}: {cmd_ckpt}", file=sys.stderr) - - for filename in model_list: - checkpoint_info = CheckpointInfo(filename) - checkpoint_info.register() - - -def get_closet_checkpoint_match(search_string): - checkpoint_info = checkpoint_alisases.get(search_string, None) - if checkpoint_info is not None: - return checkpoint_info - - found = sorted([info for info in checkpoints_list.values() if search_string in info.title], key=lambda x: len(x.title)) - if found: - return found[0] - - return None - - -def model_hash(filename): - """old hash that only looks at a small part of the file and is prone to collisions""" - - try: - with open(filename, "rb") as file: - import hashlib - m = hashlib.sha256() - - file.seek(0x100000) - m.update(file.read(0x10000)) - return m.hexdigest()[0:8] - except FileNotFoundError: - return 'NOFILE' - - -def select_checkpoint(): - model_checkpoint = shared.opts.sd_model_checkpoint - - checkpoint_info = checkpoint_alisases.get(model_checkpoint, None) - if checkpoint_info is not None: - return checkpoint_info - - if len(checkpoints_list) == 0: - print("No checkpoints found. When searching for checkpoints, looked at:", file=sys.stderr) - if shared.cmd_opts.ckpt is not None: - print(f" - file {os.path.abspath(shared.cmd_opts.ckpt)}", file=sys.stderr) - print(f" - directory {model_path}", file=sys.stderr) - if shared.cmd_opts.ckpt_dir is not None: - print(f" - directory {os.path.abspath(shared.cmd_opts.ckpt_dir)}", file=sys.stderr) - print("Can't run without a checkpoint. Find and place a .ckpt or .safetensors file into any of those locations. 
The program will exit.", file=sys.stderr) - exit(1) - - checkpoint_info = next(iter(checkpoints_list.values())) - if model_checkpoint is not None: - print(f"Checkpoint {model_checkpoint} not found; loading fallback {checkpoint_info.title}", file=sys.stderr) - - return checkpoint_info - - -chckpoint_dict_replacements = { - 'cond_stage_model.transformer.embeddings.': 'cond_stage_model.transformer.text_model.embeddings.', - 'cond_stage_model.transformer.encoder.': 'cond_stage_model.transformer.text_model.encoder.', - 'cond_stage_model.transformer.final_layer_norm.': 'cond_stage_model.transformer.text_model.final_layer_norm.', -} - - -def transform_checkpoint_dict_key(k): - for text, replacement in chckpoint_dict_replacements.items(): - if k.startswith(text): - k = replacement + k[len(text):] - - return k - - -def get_state_dict_from_checkpoint(pl_sd): - pl_sd = pl_sd.pop("state_dict", pl_sd) - pl_sd.pop("state_dict", None) - - sd = {} - for k, v in pl_sd.items(): - new_key = transform_checkpoint_dict_key(k) - - if new_key is not None: - sd[new_key] = v - - pl_sd.clear() - pl_sd.update(sd) - - return pl_sd - - -def read_state_dict(checkpoint_file, print_global_state=False, map_location=None): - _, extension = os.path.splitext(checkpoint_file) - if extension.lower() == ".safetensors": - device = map_location or shared.weight_load_location or devices.get_optimal_device_name() - pl_sd = safetensors.torch.load_file(checkpoint_file, device=device) - else: - pl_sd = torch.load(checkpoint_file, map_location=map_location or shared.weight_load_location) - - if print_global_state and "global_step" in pl_sd: - print(f"Global Step: {pl_sd['global_step']}") - - sd = get_state_dict_from_checkpoint(pl_sd) - return sd - - -def get_checkpoint_state_dict(checkpoint_info: CheckpointInfo, timer): - sd_model_hash = checkpoint_info.calculate_shorthash() - timer.record("calculate hash") - - if checkpoint_info in checkpoints_loaded: - # use checkpoint cache - print(f"Loading weights [{sd_model_hash}] from cache") - return checkpoints_loaded[checkpoint_info] - - print(f"Loading weights [{sd_model_hash}] from {checkpoint_info.filename}") - res = read_state_dict(checkpoint_info.filename) - timer.record("load weights from disk") - - return res - - -def load_model_weights(model, checkpoint_info: CheckpointInfo, state_dict, timer): - sd_model_hash = checkpoint_info.calculate_shorthash() - timer.record("calculate hash") - - shared.opts.data["sd_model_checkpoint"] = checkpoint_info.title - - if state_dict is None: - state_dict = get_checkpoint_state_dict(checkpoint_info, timer) - - model.load_state_dict(state_dict, strict=False) - del state_dict - timer.record("apply weights to model") - - if shared.opts.sd_checkpoint_cache > 0: - # cache newly loaded model - checkpoints_loaded[checkpoint_info] = model.state_dict().copy() - - if shared.cmd_opts.opt_channelslast: - model.to(memory_format=torch.channels_last) - timer.record("apply channels_last") - - if not shared.cmd_opts.no_half: - vae = model.first_stage_model - depth_model = getattr(model, 'depth_model', None) - - # with --no-half-vae, remove VAE from model when doing half() to prevent its weights from being converted to float16 - if shared.cmd_opts.no_half_vae: - model.first_stage_model = None - # with --upcast-sampling, don't convert the depth model weights to float16 - if shared.cmd_opts.upcast_sampling and depth_model: - model.depth_model = None - - model.half() - model.first_stage_model = vae - if depth_model: - model.depth_model = depth_model - - timer.record("apply 
half()") - - devices.dtype = torch.float32 if shared.cmd_opts.no_half else torch.float16 - devices.dtype_vae = torch.float32 if shared.cmd_opts.no_half or shared.cmd_opts.no_half_vae else torch.float16 - devices.dtype_unet = model.model.diffusion_model.dtype - devices.unet_needs_upcast = shared.cmd_opts.upcast_sampling and devices.dtype == torch.float16 and devices.dtype_unet == torch.float16 - - model.first_stage_model.to(devices.dtype_vae) - timer.record("apply dtype to VAE") - - # clean up cache if limit is reached - while len(checkpoints_loaded) > shared.opts.sd_checkpoint_cache: - checkpoints_loaded.popitem(last=False) - - model.sd_model_hash = sd_model_hash - model.sd_model_checkpoint = checkpoint_info.filename - model.sd_checkpoint_info = checkpoint_info - shared.opts.data["sd_checkpoint_hash"] = checkpoint_info.sha256 - - model.logvar = model.logvar.to(devices.device) # fix for training - - sd_vae.delete_base_vae() - sd_vae.clear_loaded_vae() - vae_file, vae_source = sd_vae.resolve_vae(checkpoint_info.filename) - sd_vae.load_vae(model, vae_file, vae_source) - timer.record("load VAE") - - -def enable_midas_autodownload(): - """ - Gives the ldm.modules.midas.api.load_model function automatic downloading. - - When the 512-depth-ema model, and other future models like it, is loaded, - it calls midas.api.load_model to load the associated midas depth model. - This function applies a wrapper to download the model to the correct - location automatically. - """ - - midas_path = os.path.join(paths.models_path, 'midas') - - # stable-diffusion-stability-ai hard-codes the midas model path to - # a location that differs from where other scripts using this model look. - # HACK: Overriding the path here. - for k, v in midas.api.ISL_PATHS.items(): - file_name = os.path.basename(v) - midas.api.ISL_PATHS[k] = os.path.join(midas_path, file_name) - - midas_urls = { - "dpt_large": "https://github.com/intel-isl/DPT/releases/download/1_0/dpt_large-midas-2f21e586.pt", - "dpt_hybrid": "https://github.com/intel-isl/DPT/releases/download/1_0/dpt_hybrid-midas-501f0c75.pt", - "midas_v21": "https://github.com/AlexeyAB/MiDaS/releases/download/midas_dpt/midas_v21-f6b98070.pt", - "midas_v21_small": "https://github.com/AlexeyAB/MiDaS/releases/download/midas_dpt/midas_v21_small-70d6b9c8.pt", - } - - midas.api.load_model_inner = midas.api.load_model - - def load_model_wrapper(model_type): - path = midas.api.ISL_PATHS[model_type] - if not os.path.exists(path): - if not os.path.exists(midas_path): - mkdir(midas_path) - - print(f"Downloading midas model weights for {model_type} to {path}") - request.urlretrieve(midas_urls[model_type], path) - print(f"{model_type} downloaded") - - return midas.api.load_model_inner(model_type) - - midas.api.load_model = load_model_wrapper - - -def repair_config(sd_config): - - if not hasattr(sd_config.model.params, "use_ema"): - sd_config.model.params.use_ema = False - - if shared.cmd_opts.no_half: - sd_config.model.params.unet_config.params.use_fp16 = False - elif shared.cmd_opts.upcast_sampling: - sd_config.model.params.unet_config.params.use_fp16 = True - - -sd1_clip_weight = 'cond_stage_model.transformer.text_model.embeddings.token_embedding.weight' -sd2_clip_weight = 'cond_stage_model.model.transformer.resblocks.0.attn.in_proj_weight' - -def load_model(checkpoint_info=None, already_loaded_state_dict=None, time_taken_to_load_state_dict=None): - from modules import lowvram, sd_hijack - checkpoint_info = checkpoint_info or select_checkpoint() - - if shared.sd_model: - 
sd_hijack.model_hijack.undo_hijack(shared.sd_model) - shared.sd_model = None - gc.collect() - devices.torch_gc() - - do_inpainting_hijack() - - timer = Timer() - - if already_loaded_state_dict is not None: - state_dict = already_loaded_state_dict - else: - state_dict = get_checkpoint_state_dict(checkpoint_info, timer) - - checkpoint_config = sd_models_config.find_checkpoint_config(state_dict, checkpoint_info) - clip_is_included_into_sd = sd1_clip_weight in state_dict or sd2_clip_weight in state_dict - - timer.record("find config") - - sd_config = OmegaConf.load(checkpoint_config) - repair_config(sd_config) - - timer.record("load config") - - print(f"Creating model from config: {checkpoint_config}") - - sd_model = None - try: - with sd_disable_initialization.DisableInitialization(disable_clip=clip_is_included_into_sd): - sd_model = instantiate_from_config(sd_config.model) - except Exception as e: - pass - - if sd_model is None: - print('Failed to create model quickly; will retry using slow method.', file=sys.stderr) - sd_model = instantiate_from_config(sd_config.model) - - sd_model.used_config = checkpoint_config - - timer.record("create model") - - load_model_weights(sd_model, checkpoint_info, state_dict, timer) - - if shared.cmd_opts.lowvram or shared.cmd_opts.medvram: - lowvram.setup_for_low_vram(sd_model, shared.cmd_opts.medvram) - else: - sd_model.to(shared.device) - - timer.record("move model to device") - - sd_hijack.model_hijack.hijack(sd_model) - - timer.record("hijack") - - sd_model.eval() - shared.sd_model = sd_model - - sd_hijack.model_hijack.embedding_db.load_textual_inversion_embeddings(force_reload=True) # Reload embeddings after model load as they may or may not fit the model - - timer.record("load textual inversion embeddings") - - script_callbacks.model_loaded_callback(sd_model) - - timer.record("scripts callbacks") - - print(f"Model loaded in {timer.summary()}.") - - return sd_model - - -def reload_model_weights(sd_model=None, info=None): - from modules import lowvram, devices, sd_hijack - checkpoint_info = info or select_checkpoint() - - if not sd_model: - sd_model = shared.sd_model - - if sd_model is None: # previous model load failed - current_checkpoint_info = None - else: - current_checkpoint_info = sd_model.sd_checkpoint_info - if sd_model.sd_model_checkpoint == checkpoint_info.filename: - return - - if shared.cmd_opts.lowvram or shared.cmd_opts.medvram: - lowvram.send_everything_to_cpu() - else: - sd_model.to(devices.cpu) - - sd_hijack.model_hijack.undo_hijack(sd_model) - - timer = Timer() - - state_dict = get_checkpoint_state_dict(checkpoint_info, timer) - - checkpoint_config = sd_models_config.find_checkpoint_config(state_dict, checkpoint_info) - - timer.record("find config") - - if sd_model is None or checkpoint_config != sd_model.used_config: - del sd_model - checkpoints_loaded.clear() - load_model(checkpoint_info, already_loaded_state_dict=state_dict, time_taken_to_load_state_dict=timer.records["load weights from disk"]) - return shared.sd_model - - try: - load_model_weights(sd_model, checkpoint_info, state_dict, timer) - except Exception as e: - print("Failed to load checkpoint, restoring previous") - load_model_weights(sd_model, current_checkpoint_info, None, timer) - raise - finally: - sd_hijack.model_hijack.hijack(sd_model) - timer.record("hijack") - - script_callbacks.model_loaded_callback(sd_model) - timer.record("script callbacks") - - if not shared.cmd_opts.lowvram and not shared.cmd_opts.medvram: - sd_model.to(devices.device) - timer.record("move 
model to device") - - print(f"Weights loaded in {timer.summary()}.") - - return sd_model diff --git a/spaces/billusanda007/HireGPT/app.py b/spaces/billusanda007/HireGPT/app.py deleted file mode 100644 index f45103a86bf5130498e1db21bda8484f078a637a..0000000000000000000000000000000000000000 --- a/spaces/billusanda007/HireGPT/app.py +++ /dev/null @@ -1,234 +0,0 @@ -import streamlit as st -import nltk -from nltk.corpus import stopwords -from nltk.tokenize import word_tokenize -from nltk.stem import PorterStemmer -from sklearn.feature_extraction.text import TfidfVectorizer -from sklearn.metrics.pairwise import cosine_similarity -from PyPDF2 import PdfReader -import os -from io import BytesIO -import pickle -import pdfminer -from pdfminer.high_level import extract_text -import re -import PyPDF2 -import textract -import tempfile -import pandas as pd -from docx import Document -import csv -import base64 - - - -nltk.download('punkt') -nltk.download('stopwords') - -def preprocess_text(text): - words = word_tokenize(text.lower()) - - stop_words = set(stopwords.words('english')) - words = [word for word in words if word not in stop_words] - - stemmer = PorterStemmer() - words = [stemmer.stem(word) for word in words] - - return ' '.join(words) - -def extract_text_from_pdf(pdf_content): - pdf_reader = PdfReader(BytesIO(pdf_content)) - text = '' - for page in pdf_reader.pages: - text += page.extract_text() - return text - -def extract_text_from_docx(docx_content): - doc = Document(BytesIO(docx_content)) - text = " ".join(paragraph.text for paragraph in doc.paragraphs) - return text - - -def extract_text_from_txt(txt_content): - text = textract.process(input_filename=None, input_bytes=txt_content) - return text - -def extract_text_from_resume(file_path): - file_extension = file_path.split('.')[-1].lower() - - if file_extension == 'pdf': - return extract_text_from_pdf(file_path) - elif file_extension == 'docx': - return extract_text_from_docx(file_path) - elif file_extension == 'txt': - return extract_text_from_txt(file_path) - else: - raise ValueError(f"Unsupported file format: {file_extension}") - -def clean_pdf_text(text): - text = re.sub('http\S+\s*', ' ', text) - text = re.sub('RT|cc', ' ', text) - text = re.sub('#\S+', '', text) - text = re.sub('@\S+', ' ', text) - text = re.sub('[%s]' % re.escape("""!"#$%&'()*+,-./:;<=>?@[\]^_`{|}~"""), ' ', text) - text = re.sub(r'[^\x00-\x7f]',r' ', text) - text = re.sub('\s+', ' ', text) - return text - -def extract_candidate_name(text): - pattern = r'(?:Mr\.|Ms\.|Mrs\.)?\s?([A-Z][a-z]+)\s([A-Z][a-z]+)' - match = re.search(pattern, text) - if match: - return match.group(0) - return "Candidate Name Not Found" - -def calculate_similarity(job_description, cvs, cv_file_names): - processed_job_desc = preprocess_text(job_description) - - processed_cvs = [preprocess_text(cv) for cv in cvs] - - all_text = [processed_job_desc] + processed_cvs - - vectorizer = TfidfVectorizer() - tfidf_matrix = vectorizer.fit_transform(all_text) - - similarity_scores = cosine_similarity(tfidf_matrix)[0][1:] - - ranked_cvs = list(zip(cv_file_names, similarity_scores)) - ranked_cvs.sort(key=lambda x: x[1], reverse=True) - - return ranked_cvs - -def extract_email_phone(text): - email_pattern = r'\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Z|a-z]{2,}\b' - phone_pattern = r'\b(?:\d{3}[-.\s]??\d{3}[-.\s]??\d{4}|\d{3}[-.\s]??\d{4})\b' - - emails = re.findall(email_pattern, text) - phones = re.findall(phone_pattern, text) - - return emails, phones - - - -def rank_and_shortlist(job_description, 
cv_files, threshold=0.09): - cv_texts = [] - cv_file_names = [] - cv_emails = [] - cv_phones = [] - - for cv_file in cv_files: - file_extension = os.path.splitext(cv_file.name)[1].lower() - - try: - if file_extension == '.pdf': - cv_text = extract_text_from_pdf(cv_file.read()) - elif file_extension == '.docx': - cv_text = extract_text_from_docx(cv_file.read()) - elif file_extension == '.txt': - cv_text = cv_file.read().decode('utf-8', errors='ignore') - else: - st.warning(f"Unsupported file format: {file_extension}. Skipping file: {cv_file.name}") - continue - - cv_texts.append(clean_pdf_text(cv_text)) - cv_file_names.append(cv_file.name) - - # Extract email and phone number from the CV text - emails, phones = extract_email_phone(cv_text) - cv_emails.append(emails) - cv_phones.append(phones) - - except Exception as e: - st.warning(f"Error processing file '{cv_file.name}': {str(e)}") - continue - - if not cv_texts: - st.error("No valid resumes found. Please upload resumes in supported formats (PDF, DOCX, or TXT).") - return [], {} - - similarity_scores = calculate_similarity(job_description, cv_texts, cv_file_names) - - ranked_cvs = [(cv_name, score) for (cv_name, score) in similarity_scores] - shortlisted_cvs = [(cv_name, score) for (cv_name, score) in ranked_cvs if score >= threshold] - - - contact_info_dict = {} - for cv_name, emails, phones in zip(cv_file_names, cv_emails, cv_phones): - contact_info_dict[cv_name] = { - 'emails': emails, - 'phones': phones, - } - - return ranked_cvs, shortlisted_cvs, contact_info_dict - -def export_to_csv(data, filename): - df = pd.DataFrame(data.items(), columns=['File Name', 'Emails']) - df.to_csv(filename, index=False) - - -def main(): - st.title("HireGPT") - - st.write("Enter Job Title:") - job_title = st.text_input("Job Title") - - st.write("Enter Job Description:") - job_description = st.text_area("Job Description", height=200, key='job_description') - - st.markdown('[![Enhance Job Description](https://img.shields.io/badge/Enhance_Job_Description-Click_Here-brightgreen)](https://huggingface.co/spaces/smallboy713102/Enhancer)') - - - st.write("Upload the Resumes:") - cv_files = st.file_uploader("Choose files", accept_multiple_files=True, key='cv_files') - - if st.button("Submit"): - if job_title and job_description and cv_files: - job_description_text = f"{job_title} {job_description}" - - ranked_cvs, shortlisted_cvs, contact_info_dict = rank_and_shortlist(job_description_text, cv_files) - - st.markdown("### Ranking of Resumes:") - for rank, score in ranked_cvs: - st.markdown(f"**File Name:** {rank}, **Similarity Score:** {score:.2f}") - - st.markdown("### Shortlisted Candidates:") - if not shortlisted_cvs: - st.markdown("None") - else: - shortlisted_candidates_data = {} - for rank, score in shortlisted_cvs: - st.markdown(f"**File Name:** {rank}, **Similarity Score:** {score:.2f}") - - contact_info = contact_info_dict[rank] - candidate_emails = contact_info.get('emails', []) - if candidate_emails: - shortlisted_candidates_data[rank] = candidate_emails - st.markdown(f"**Emails:** {', '.join(candidate_emails)}") - - if shortlisted_candidates_data: - export_filename = "shortlisted_candidates.csv" - temp_dir = tempfile.gettempdir() - temp_file_path = os.path.join(temp_dir, export_filename) - export_to_csv(shortlisted_candidates_data, temp_file_path) - with open(temp_file_path, 'rb') as file: - csv_content = file.read() - b64_encoded_csv = base64.b64encode(csv_content).decode() - st.markdown( - f'' - '',unsafe_allow_html=True - ) - - st.markdown( - 
'',unsafe_allow_html=True - ) - - else: - st.error("Please enter the job title, job description, and upload resumes to proceed.") - else: - st.write("Please enter the job title, job description, and upload resumes to proceed.") - -if __name__ == "__main__": - main() diff --git a/spaces/bingbing520/ChatGPT2/run_Windows.bat b/spaces/bingbing520/ChatGPT2/run_Windows.bat deleted file mode 100644 index 4c18f9ccaeea0af972301ffdf48778641221f76d..0000000000000000000000000000000000000000 --- a/spaces/bingbing520/ChatGPT2/run_Windows.bat +++ /dev/null @@ -1,5 +0,0 @@ -@echo off -echo Opening ChuanhuChatGPT... - -REM Open powershell via bat -start powershell.exe -NoExit -Command "python ./ChuanhuChatbot.py" diff --git a/spaces/bioriAsaeru/text-to-voice/Crack TOP Advanced Uninstaller PRO 12.19 Crack TOP [crack TOPsNow].md b/spaces/bioriAsaeru/text-to-voice/Crack TOP Advanced Uninstaller PRO 12.19 Crack TOP [crack TOPsNow].md deleted file mode 100644 index b4d19521faa14cb9aad8fd370ee16b2db682ac32..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/Crack TOP Advanced Uninstaller PRO 12.19 Crack TOP [crack TOPsNow].md +++ /dev/null @@ -1,6 +0,0 @@ -

    CRACK Advanced Uninstaller PRO 12.19 Crack [CracksNow]


    Download File > https://urloso.com/2uyPrl



    -
    -RIP-SiMPLEX_Ratiborus KMS Tools 01.12.2018 [CracksNow]_MICROSOFT Office ... REPACK Revo Uninstaller Pro 3.1.9 FINAL + Crack iMyfone Umate 6.8.1.6 incl ... Advanced SystemCare PRO 8.0.3.588 Final + Crack
    -
    -
    -

    diff --git a/spaces/bioriAsaeru/text-to-voice/Download One Piece Episode 100 The Battle for Alabasta Begins.md b/spaces/bioriAsaeru/text-to-voice/Download One Piece Episode 100 The Battle for Alabasta Begins.md deleted file mode 100644 index f1b7b7a6604e8c677b3425bf7256b6be376fbccc..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/Download One Piece Episode 100 The Battle for Alabasta Begins.md +++ /dev/null @@ -1,17 +0,0 @@ - -

    One Piece Episode 100, One Piece Episode 100 Online, One Piece Episode 100 now, One Piece Episode 100 download. You are going to watch One Piece Episode 100 online free.

    -

    Download One Piece Episode 100


    Download: https://urloso.com/2uyRvn



    -

    One Piece is an ongoing anime series that started in 1999. So far 1051 episodes of One Piece have been aired. With a total of 95 reported filler episodes, One Piece has a very low filler percentage of 9%.
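    A quick check of that figure, using the 1051 and 95 counts quoted above (both will shift as new episodes air):

    95 filler episodes ÷ 1051 aired episodes ≈ 0.090, i.e. roughly 9%.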

    -

    It kept giving me an error saying I had too many downloads on my device. I removed 5 episodes off the end of my list of The Magicians, and it still wouldn't let me download the 1 episode it skipped on downloading from the previous season. No matter how many I removed from the list, it wouldn't let me add the one. I had to go into Settings > Apps > Netflix and clear the cache. That fixed the issue.

    -

    I believe Netflix is doing the right thing. If people are downloading that many shows, it stands to reason they are not watching them regularly, or they would not be staying at the same number of movies or TV shows. If and when I download, I plan on watching them in the very near future! Keep up the good work, Netflix; I appreciate the fact that I am able to download anything at all, and being able to watch them offline is a huge plus for me! Thank you!

    -

    Although I can see why limitations are set, I find it hard to digest that you pay for something supposedly allowing access to content, only to find there are limitations restricting you. Also, I downloaded quite a few episodes of Money Heist as I was travelling to a country with doubtful internet, only to find that none of the files were accessible. I had to delete them all and watch everything online. Anyway, room for improvement, I would say.

    -

    Very disappointed!!! I don't have internet at home, and I thought downloads would be the solution to watching my favorite shows that are not available on DVD. First, I learned that I can't watch the downloads through my smart TV, so the whole family huddles in front of the laptop. Second, I downloaded several episodes of one show since I wouldn't be going near a connection for a while. I didn't get a chance to watch all of them in the 48 hours before they expired, and now I can't download them again for a year! Very worthless service for me. I will definitely cancel.

    -

    "...part of it also is that I know so much I have so much to share. I have so much to share!" Do you feel this way too?

    You could talk about your content topic for days, weeks, or even years without skipping a beat. But maybe, you're not 100% sure which idea comes next. Or how many ideas to shove into one piece of content.

    Today's episode is a continuation of our coaching call series, and I'm chatting with Kay'aleya Hunnybee about the beginning stages of her podcast journey. We're covering how to overcome switching between planning, recording, and editing an episode without feeling overwhelmed.

    We're also tackling the big questions about show notes and when to start creating them.

    This episode is juicy for anyone just getting started on their content journey!

    -

    -

    So I officially launched last week, which is huge for me. I mean, it's just been it's been like, a vision for a long time. And so to actually do it is it feels complete, it feels kind of unreal. Still. But yeah, I launched it. I shared it on social media. I've gotten, you know, some good feedback. And yeah, I launched four episodes. And then I just I just launched I just got another episode out yesterday. So I'm trying to do the weekly, you know, episodes. That's where I'm at with it. Yeah. And I think you know what feels silly? Well, go ahead.

    -

    I think what I really feel confused about, I mean, part of it is that I'm probably just new, right? So it just takes time to get into the flow of things. I assume that over time I'm going to feel like everything is not so hard and I'm not walking up a mountain constantly. But what I'm experiencing now is that working on those four episodes and then getting this past one out has taken 80% of my work time over the last few weeks, honestly, just to do that. So I'd love some support around how to manage it. I think part of it also is that I know so much, I have so much to share. And it's so hard for me when I'm sharing; if I'm on the podcast and I go off on a tangent, I mean, I can edit those parts out, but then it takes time to edit those parts out. So it's like, wow, I'm like a waterfall. So I'm trying to figure out how to pace myself. Well, this is great.

    -

    I think that this is a fantastic problem to have. You don't even want to call it a problem; it's more of a "how can I compartmentalize all of these ideas in a way that flows out of you productively?" Because a lot of people, especially people in this audience, I have noticed, have no shortage of ideas. People will tell me, "I have hundreds of things to talk about, I just don't know which one. Do I start with the best one? Do I start with the one that people need, or the one that they're asking for?" There are a lot of different ways, and there's no right or wrong answer here. So when the overwhelm, or the rush of all of the ideas, kind of hits you, I would sit down. My secret to being consistent is planning; it really is sitting down and saying, "Okay, I really want to talk about this one topic, what can I cover?" And then if I look at an outline or script and say, "Oh my gosh, this is going to be a five-hour-long episode if I cover all of these things," then I can break it into pieces and say, "You know what, I really just need to talk about this one piece of it, and the rest can go in another podcast episode." Or maybe you see a vision of, "Oh, this could actually be a series; I could create three specific episodes about this one topic." So there's no right or wrong answer here. I usually tell people to start with what excites them the most. And since you just launched, it's likely that you put out something like an origin story of why you're excited about your podcast or the space that you're in. But to keep going, I always tell people to look back at what you put out and how people responded to it. So if you have one episode that has more downloads, and if you have an audience, you could ask, "What did you like about this? Can I get some feedback?" But there's no pressure to do that when you're first getting started, because that's going to come with time. You'll have more data and more numbers as you keep podcasting, and you'll be able to see which things are really resonating with your audience. Does that help?

    -

    Totally. And I think what you said that's probably most helpful is just the idea of being able to break things down. And it's something that I'm just learning through doing it, right? Like, I had to have the experience of creating these massive episodes that I had to cut, like, 25% out of just to make them a reasonable length. But just understanding that about myself, because I've never done this before, right? So it is new. And so yeah, the idea that I could actually make this into two episodes, that it doesn't have to be one, you know, I like that idea a lot. And I have a teensy tiny online audience. I mean, really, most of my work has been one on one, and I have not been trying to get myself online yet, so I'm really very much new to it. I do have a lot of friends who I think have listened to my episodes. I'm not sure who the listeners are. There is one episode that was listened to the most, but I also think that was the one that was on top last week when I launched. So it was, you know, the fourth episode that was just on top, so I think that might be why it was listened to the most. However, I think it's going to take time; I think I'm going to have to build an audience, and it feels like so much. I know that you were here at some point, and you're now so developed, so at least you understand the experience of having, like, nobody, and not being sure who I'm even talking to. Oh,

    -

    Yes. No, you're saying that the top one was probably listened to the most, and I can just imagine that. I remember when I first launched and I got, like, 10 downloads on one episode, and I was like, "Oh, I bet six of those were me, and the other four were probably my mom listening to it and saying, yes, that's good, no, you sound fine." I mean, this happens for everybody. And I love that you have the right mindset about it: it's just going to take time. Now you've gotten your feet wet, you've launched, and I think that is a huge, congratulations-worthy moment in and of itself, because most people won't even get that far. So I hope that you celebrate it and you're excited about it. But also know that the launch is so much more work than the repetition and the consistency that come after. Imagine you're adopting a new workout, like you're going to start running: that first mile you ever do, having not run at all before, you're going to feel like you're dying, like this is not working for you. But the next time you go out, you're like, "Okay, I know how I need to control my breathing, or I need better running shoes, or I need to not go when it's hot." You'll have all of these tools in your toolkit. Now you've ripped off that band-aid of putting your content out there, and you know how it works, so now it's just: how can I make showing up consistently easier? That's really the struggle podcasters face after they launch: how can I continue to show up consistently? And I actually have it here right now; one of your top goals for 2022 is to actually be consistent. So do you see anything popping up over the next, let's say, six months? Do you see something where you're already foreshadowing that it's going to be a challenge to be consistent?

    -
    -
    \ No newline at end of file diff --git a/spaces/blmdsydm/faster-whisper-webui/src/languages.py b/spaces/blmdsydm/faster-whisper-webui/src/languages.py deleted file mode 100644 index fbad66e4d34119d27d12e3dfecbe99b6fdde4db7..0000000000000000000000000000000000000000 --- a/spaces/blmdsydm/faster-whisper-webui/src/languages.py +++ /dev/null @@ -1,147 +0,0 @@ -class Language(): - def __init__(self, code, name): - self.code = code - self.name = name - - def __str__(self): - return "Language(code={}, name={})".format(self.code, self.name) - -LANGUAGES = [ - Language('en', 'English'), - Language('zh', 'Chinese'), - Language('de', 'German'), - Language('es', 'Spanish'), - Language('ru', 'Russian'), - Language('ko', 'Korean'), - Language('fr', 'French'), - Language('ja', 'Japanese'), - Language('pt', 'Portuguese'), - Language('tr', 'Turkish'), - Language('pl', 'Polish'), - Language('ca', 'Catalan'), - Language('nl', 'Dutch'), - Language('ar', 'Arabic'), - Language('sv', 'Swedish'), - Language('it', 'Italian'), - Language('id', 'Indonesian'), - Language('hi', 'Hindi'), - Language('fi', 'Finnish'), - Language('vi', 'Vietnamese'), - Language('he', 'Hebrew'), - Language('uk', 'Ukrainian'), - Language('el', 'Greek'), - Language('ms', 'Malay'), - Language('cs', 'Czech'), - Language('ro', 'Romanian'), - Language('da', 'Danish'), - Language('hu', 'Hungarian'), - Language('ta', 'Tamil'), - Language('no', 'Norwegian'), - Language('th', 'Thai'), - Language('ur', 'Urdu'), - Language('hr', 'Croatian'), - Language('bg', 'Bulgarian'), - Language('lt', 'Lithuanian'), - Language('la', 'Latin'), - Language('mi', 'Maori'), - Language('ml', 'Malayalam'), - Language('cy', 'Welsh'), - Language('sk', 'Slovak'), - Language('te', 'Telugu'), - Language('fa', 'Persian'), - Language('lv', 'Latvian'), - Language('bn', 'Bengali'), - Language('sr', 'Serbian'), - Language('az', 'Azerbaijani'), - Language('sl', 'Slovenian'), - Language('kn', 'Kannada'), - Language('et', 'Estonian'), - Language('mk', 'Macedonian'), - Language('br', 'Breton'), - Language('eu', 'Basque'), - Language('is', 'Icelandic'), - Language('hy', 'Armenian'), - Language('ne', 'Nepali'), - Language('mn', 'Mongolian'), - Language('bs', 'Bosnian'), - Language('kk', 'Kazakh'), - Language('sq', 'Albanian'), - Language('sw', 'Swahili'), - Language('gl', 'Galician'), - Language('mr', 'Marathi'), - Language('pa', 'Punjabi'), - Language('si', 'Sinhala'), - Language('km', 'Khmer'), - Language('sn', 'Shona'), - Language('yo', 'Yoruba'), - Language('so', 'Somali'), - Language('af', 'Afrikaans'), - Language('oc', 'Occitan'), - Language('ka', 'Georgian'), - Language('be', 'Belarusian'), - Language('tg', 'Tajik'), - Language('sd', 'Sindhi'), - Language('gu', 'Gujarati'), - Language('am', 'Amharic'), - Language('yi', 'Yiddish'), - Language('lo', 'Lao'), - Language('uz', 'Uzbek'), - Language('fo', 'Faroese'), - Language('ht', 'Haitian creole'), - Language('ps', 'Pashto'), - Language('tk', 'Turkmen'), - Language('nn', 'Nynorsk'), - Language('mt', 'Maltese'), - Language('sa', 'Sanskrit'), - Language('lb', 'Luxembourgish'), - Language('my', 'Myanmar'), - Language('bo', 'Tibetan'), - Language('tl', 'Tagalog'), - Language('mg', 'Malagasy'), - Language('as', 'Assamese'), - Language('tt', 'Tatar'), - Language('haw', 'Hawaiian'), - Language('ln', 'Lingala'), - Language('ha', 'Hausa'), - Language('ba', 'Bashkir'), - Language('jw', 'Javanese'), - Language('su', 'Sundanese') -] - -_TO_LANGUAGE_CODE = { - **{language.code: language for language in LANGUAGES}, - "burmese": "my", - 
"valencian": "ca", - "flemish": "nl", - "haitian": "ht", - "letzeburgesch": "lb", - "pushto": "ps", - "panjabi": "pa", - "moldavian": "ro", - "moldovan": "ro", - "sinhalese": "si", - "castilian": "es", -} - -_FROM_LANGUAGE_NAME = { - **{language.name.lower(): language for language in LANGUAGES} -} - -def get_language_from_code(language_code, default=None) -> Language: - """Return the language name from the language code.""" - return _TO_LANGUAGE_CODE.get(language_code, default) - -def get_language_from_name(language, default=None) -> Language: - """Return the language code from the language name.""" - return _FROM_LANGUAGE_NAME.get(language.lower() if language else None, default) - -def get_language_names(): - """Return a list of language names.""" - return [language.name for language in LANGUAGES] - -if __name__ == "__main__": - # Test lookup - print(get_language_from_code('en')) - print(get_language_from_name('English')) - - print(get_language_names()) \ No newline at end of file diff --git a/spaces/boomsss/gamedayspx/model_intra_v2.py b/spaces/boomsss/gamedayspx/model_intra_v2.py deleted file mode 100644 index 83c24d9c124eb963b99a2653c96bf85a4d4d68e0..0000000000000000000000000000000000000000 --- a/spaces/boomsss/gamedayspx/model_intra_v2.py +++ /dev/null @@ -1,73 +0,0 @@ -import streamlit as st -import pandas as pd -import pandas_datareader as pdr -import numpy as np -import yfinance as yf -import requests -from bs4 import BeautifulSoup -from typing import List -from tqdm import tqdm -import os -import datetime -from pandas.tseries.offsets import BDay -from datasets import load_dataset -import lightgbm as lgb -from sklearn.model_selection import TimeSeriesSplit -from intraCols import model_cols - -# If the dataset is gated/private, make sure you have run huggingface-cli login -def walk_forward_validation(df, target_column, num_periods): - - df = df[model_cols + [target_column]] - df[target_column] = df[target_column].astype(bool) - - # Model - # model = lgb.LGBMClassifier(n_estimators=10, random_state=42, verbosity=-1) - - tscv = TimeSeriesSplit(n_splits=len(df)-1, max_train_size=None, test_size=num_periods) # num_splits is the number of splits you want - - overall_results = [] - # Iterate over the rows in the DataFrame, one step at a time - # Split the time series data using TimeSeriesSplit - for train_index, test_index in tqdm(tscv.split(df), total=tscv.n_splits): - # Extract the training and testing data for the current split - X_train = df.drop(target_column, axis=1).iloc[train_index] - y_train = df[target_column].iloc[train_index] - X_test = df.drop(target_column, axis=1).iloc[test_index] - y_test = df[target_column].iloc[test_index] - - y_train = y_train.astype(bool) - model = lgb.LGBMClassifier(n_estimators=10, random_state=42, verbosity=-1) - model.fit(X_train, y_train) - # Make a prediction on the test data - predictions = model.predict_proba(X_test)[:,-1] - - # Create a DataFrame to store the true and predicted values - result_df = pd.DataFrame({'True': y_test, 'Predicted': predictions}, index=y_test.index) - overall_results.append(result_df) - - df_results = pd.concat(overall_results) - - # Calibrate Probabilities - def get_quantiles(df, col_name, q): - return df.groupby(pd.cut(df[col_name], q))['True'].mean() - - greenprobas = [] - for i, pct in tqdm(enumerate(df_results['Predicted']), desc='Calibrating Probas',total=len(df_results)): - try: - df_q = get_quantiles(df_results.iloc[:i], 'Predicted', 7) - for q in df_q.index: - if q.left <= pct <= q.right: - p = df_q[q] - 
except: - p = None - - greenprobas.append(p) - - df_results['CalibPredicted'] = greenprobas - - return df_results, model - -def seq_predict_proba(df, trained_clf_model): - clf_pred_proba = trained_clf_model.predict_proba(df[model_cols])[:,-1] - return clf_pred_proba \ No newline at end of file diff --git a/spaces/brainblow/AudioCreator_Music-Audio_Generation/audiocraft/data/info_audio_dataset.py b/spaces/brainblow/AudioCreator_Music-Audio_Generation/audiocraft/data/info_audio_dataset.py deleted file mode 100644 index 47ab4b1594faf1e9f1ce962fb980d80295b1f079..0000000000000000000000000000000000000000 --- a/spaces/brainblow/AudioCreator_Music-Audio_Generation/audiocraft/data/info_audio_dataset.py +++ /dev/null @@ -1,110 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. -"""Base classes for the datasets that also provide non-audio metadata, -e.g. description, text transcription etc. -""" -from dataclasses import dataclass -import logging -import math -import re -import typing as tp - -import torch - -from .audio_dataset import AudioDataset, AudioMeta -from ..environment import AudioCraftEnvironment -from ..modules.conditioners import SegmentWithAttributes, ConditioningAttributes - - -logger = logging.getLogger(__name__) - - -def _clusterify_meta(meta: AudioMeta) -> AudioMeta: - """Monkey-patch meta to match cluster specificities.""" - meta.path = AudioCraftEnvironment.apply_dataset_mappers(meta.path) - if meta.info_path is not None: - meta.info_path.zip_path = AudioCraftEnvironment.apply_dataset_mappers(meta.info_path.zip_path) - return meta - - -def clusterify_all_meta(meta: tp.List[AudioMeta]) -> tp.List[AudioMeta]: - """Monkey-patch all meta to match cluster specificities.""" - return [_clusterify_meta(m) for m in meta] - - -@dataclass -class AudioInfo(SegmentWithAttributes): - """Dummy SegmentInfo with empty attributes. - - The InfoAudioDataset is expected to return metadata that inherits - from SegmentWithAttributes class and can return conditioning attributes. - - This basically guarantees all datasets will be compatible with current - solver that contain conditioners requiring this. - """ - audio_tokens: tp.Optional[torch.Tensor] = None # populated when using cached batch for training a LM. - - def to_condition_attributes(self) -> ConditioningAttributes: - return ConditioningAttributes() - - -class InfoAudioDataset(AudioDataset): - """AudioDataset that always returns metadata as SegmentWithAttributes along with the audio waveform. - - See `audiocraft.data.audio_dataset.AudioDataset` for initialization arguments. 
- """ - def __init__(self, meta: tp.List[AudioMeta], **kwargs): - super().__init__(clusterify_all_meta(meta), **kwargs) - - def __getitem__(self, index: int) -> tp.Union[torch.Tensor, tp.Tuple[torch.Tensor, SegmentWithAttributes]]: - if not self.return_info: - wav = super().__getitem__(index) - assert isinstance(wav, torch.Tensor) - return wav - wav, meta = super().__getitem__(index) - return wav, AudioInfo(**meta.to_dict()) - - -def get_keyword_or_keyword_list(value: tp.Optional[str]) -> tp.Union[tp.Optional[str], tp.Optional[tp.List[str]]]: - """Preprocess a single keyword or possible a list of keywords.""" - if isinstance(value, list): - return get_keyword_list(value) - else: - return get_keyword(value) - - -def get_string(value: tp.Optional[str]) -> tp.Optional[str]: - """Preprocess a single keyword.""" - if value is None or (not isinstance(value, str)) or len(value) == 0 or value == 'None': - return None - else: - return value.strip() - - -def get_keyword(value: tp.Optional[str]) -> tp.Optional[str]: - """Preprocess a single keyword.""" - if value is None or (not isinstance(value, str)) or len(value) == 0 or value == 'None': - return None - else: - return value.strip().lower() - - -def get_keyword_list(values: tp.Union[str, tp.List[str]]) -> tp.Optional[tp.List[str]]: - """Preprocess a list of keywords.""" - if isinstance(values, str): - values = [v.strip() for v in re.split(r'[,\s]', values)] - elif isinstance(values, float) and math.isnan(values): - values = [] - if not isinstance(values, list): - logger.debug(f"Unexpected keyword list {values}") - values = [str(values)] - - kws = [get_keyword(v) for v in values] - kw_list = [k for k in kws if k is not None] - if len(kw_list) == 0: - return None - else: - return kw_list diff --git a/spaces/brjathu/HMR2.0/vendor/pyrender/pyrender/platforms/osmesa.py b/spaces/brjathu/HMR2.0/vendor/pyrender/pyrender/platforms/osmesa.py deleted file mode 100644 index deaa5ff44031a107883913ae9a18fc425d650f3d..0000000000000000000000000000000000000000 --- a/spaces/brjathu/HMR2.0/vendor/pyrender/pyrender/platforms/osmesa.py +++ /dev/null @@ -1,59 +0,0 @@ -from .base import Platform - - -__all__ = ['OSMesaPlatform'] - - -class OSMesaPlatform(Platform): - """Renders into a software buffer using OSMesa. Requires special versions - of OSMesa to be installed, plus PyOpenGL upgrade. - """ - - def __init__(self, viewport_width, viewport_height): - super(OSMesaPlatform, self).__init__(viewport_width, viewport_height) - self._context = None - self._buffer = None - - def init_context(self): - from OpenGL import arrays - from OpenGL.osmesa import ( - OSMesaCreateContextAttribs, OSMESA_FORMAT, - OSMESA_RGBA, OSMESA_PROFILE, OSMESA_CORE_PROFILE, - OSMESA_CONTEXT_MAJOR_VERSION, OSMESA_CONTEXT_MINOR_VERSION, - OSMESA_DEPTH_BITS - ) - - attrs = arrays.GLintArray.asArray([ - OSMESA_FORMAT, OSMESA_RGBA, - OSMESA_DEPTH_BITS, 24, - OSMESA_PROFILE, OSMESA_CORE_PROFILE, - OSMESA_CONTEXT_MAJOR_VERSION, 3, - OSMESA_CONTEXT_MINOR_VERSION, 3, - 0 - ]) - self._context = OSMesaCreateContextAttribs(attrs, None) - self._buffer = arrays.GLubyteArray.zeros( - (self.viewport_height, self.viewport_width, 4) - ) - - def make_current(self): - from OpenGL import GL as gl - from OpenGL.osmesa import OSMesaMakeCurrent - assert(OSMesaMakeCurrent( - self._context, self._buffer, gl.GL_UNSIGNED_BYTE, - self.viewport_width, self.viewport_height - )) - - def make_uncurrent(self): - """Make the OpenGL context uncurrent. 
- """ - pass - - def delete_context(self): - from OpenGL.osmesa import OSMesaDestroyContext - OSMesaDestroyContext(self._context) - self._context = None - self._buffer = None - - def supports_framebuffers(self): - return False diff --git a/spaces/camilosegura/traductor-multilenguaje/Lib/site-packages/aiohttp/helpers.py b/spaces/camilosegura/traductor-multilenguaje/Lib/site-packages/aiohttp/helpers.py deleted file mode 100644 index 874ab1ac076bc311d8853f08bb5fe454b650099f..0000000000000000000000000000000000000000 --- a/spaces/camilosegura/traductor-multilenguaje/Lib/site-packages/aiohttp/helpers.py +++ /dev/null @@ -1,878 +0,0 @@ -"""Various helper functions""" - -import asyncio -import base64 -import binascii -import datetime -import functools -import inspect -import netrc -import os -import platform -import re -import sys -import time -import warnings -import weakref -from collections import namedtuple -from contextlib import suppress -from email.parser import HeaderParser -from email.utils import parsedate -from math import ceil -from pathlib import Path -from types import TracebackType -from typing import ( - Any, - Callable, - ContextManager, - Dict, - Generator, - Generic, - Iterable, - Iterator, - List, - Mapping, - Optional, - Pattern, - Set, - Tuple, - Type, - TypeVar, - Union, - cast, -) -from urllib.parse import quote -from urllib.request import getproxies, proxy_bypass - -import async_timeout -import attr -from multidict import MultiDict, MultiDictProxy -from yarl import URL - -from . import hdrs -from .log import client_logger, internal_logger -from .typedefs import PathLike, Protocol # noqa - -__all__ = ("BasicAuth", "ChainMapProxy", "ETag") - -IS_MACOS = platform.system() == "Darwin" -IS_WINDOWS = platform.system() == "Windows" - -PY_36 = sys.version_info >= (3, 6) -PY_37 = sys.version_info >= (3, 7) -PY_38 = sys.version_info >= (3, 8) -PY_310 = sys.version_info >= (3, 10) -PY_311 = sys.version_info >= (3, 11) - -if sys.version_info < (3, 7): - import idna_ssl - - idna_ssl.patch_match_hostname() - - def all_tasks( - loop: Optional[asyncio.AbstractEventLoop] = None, - ) -> Set["asyncio.Task[Any]"]: - tasks = list(asyncio.Task.all_tasks(loop)) - return {t for t in tasks if not t.done()} - -else: - all_tasks = asyncio.all_tasks - - -_T = TypeVar("_T") -_S = TypeVar("_S") - - -sentinel: Any = object() -NO_EXTENSIONS: bool = bool(os.environ.get("AIOHTTP_NO_EXTENSIONS")) - -# N.B. 
sys.flags.dev_mode is available on Python 3.7+, use getattr -# for compatibility with older versions -DEBUG: bool = getattr(sys.flags, "dev_mode", False) or ( - not sys.flags.ignore_environment and bool(os.environ.get("PYTHONASYNCIODEBUG")) -) - - -CHAR = {chr(i) for i in range(0, 128)} -CTL = {chr(i) for i in range(0, 32)} | { - chr(127), -} -SEPARATORS = { - "(", - ")", - "<", - ">", - "@", - ",", - ";", - ":", - "\\", - '"', - "/", - "[", - "]", - "?", - "=", - "{", - "}", - " ", - chr(9), -} -TOKEN = CHAR ^ CTL ^ SEPARATORS - - -class noop: - def __await__(self) -> Generator[None, None, None]: - yield - - -class BasicAuth(namedtuple("BasicAuth", ["login", "password", "encoding"])): - """Http basic authentication helper.""" - - def __new__( - cls, login: str, password: str = "", encoding: str = "latin1" - ) -> "BasicAuth": - if login is None: - raise ValueError("None is not allowed as login value") - - if password is None: - raise ValueError("None is not allowed as password value") - - if ":" in login: - raise ValueError('A ":" is not allowed in login (RFC 1945#section-11.1)') - - return super().__new__(cls, login, password, encoding) - - @classmethod - def decode(cls, auth_header: str, encoding: str = "latin1") -> "BasicAuth": - """Create a BasicAuth object from an Authorization HTTP header.""" - try: - auth_type, encoded_credentials = auth_header.split(" ", 1) - except ValueError: - raise ValueError("Could not parse authorization header.") - - if auth_type.lower() != "basic": - raise ValueError("Unknown authorization method %s" % auth_type) - - try: - decoded = base64.b64decode( - encoded_credentials.encode("ascii"), validate=True - ).decode(encoding) - except binascii.Error: - raise ValueError("Invalid base64 encoding.") - - try: - # RFC 2617 HTTP Authentication - # https://www.ietf.org/rfc/rfc2617.txt - # the colon must be present, but the username and password may be - # otherwise blank. - username, password = decoded.split(":", 1) - except ValueError: - raise ValueError("Invalid credentials.") - - return cls(username, password, encoding=encoding) - - @classmethod - def from_url(cls, url: URL, *, encoding: str = "latin1") -> Optional["BasicAuth"]: - """Create BasicAuth from url.""" - if not isinstance(url, URL): - raise TypeError("url should be yarl.URL instance") - if url.user is None: - return None - return cls(url.user, url.password or "", encoding=encoding) - - def encode(self) -> str: - """Encode credentials.""" - creds = (f"{self.login}:{self.password}").encode(self.encoding) - return "Basic %s" % base64.b64encode(creds).decode(self.encoding) - - -def strip_auth_from_url(url: URL) -> Tuple[URL, Optional[BasicAuth]]: - auth = BasicAuth.from_url(url) - if auth is None: - return url, None - else: - return url.with_user(None), auth - - -def netrc_from_env() -> Optional[netrc.netrc]: - """Load netrc from file. - - Attempt to load it from the path specified by the env-var - NETRC or in the default location in the user's home directory. - - Returns None if it couldn't be found or fails to parse. 
- """ - netrc_env = os.environ.get("NETRC") - - if netrc_env is not None: - netrc_path = Path(netrc_env) - else: - try: - home_dir = Path.home() - except RuntimeError as e: # pragma: no cover - # if pathlib can't resolve home, it may raise a RuntimeError - client_logger.debug( - "Could not resolve home directory when " - "trying to look for .netrc file: %s", - e, - ) - return None - - netrc_path = home_dir / ("_netrc" if IS_WINDOWS else ".netrc") - - try: - return netrc.netrc(str(netrc_path)) - except netrc.NetrcParseError as e: - client_logger.warning("Could not parse .netrc file: %s", e) - except OSError as e: - # we couldn't read the file (doesn't exist, permissions, etc.) - if netrc_env or netrc_path.is_file(): - # only warn if the environment wanted us to load it, - # or it appears like the default file does actually exist - client_logger.warning("Could not read .netrc file: %s", e) - - return None - - -@attr.s(auto_attribs=True, frozen=True, slots=True) -class ProxyInfo: - proxy: URL - proxy_auth: Optional[BasicAuth] - - -def proxies_from_env() -> Dict[str, ProxyInfo]: - proxy_urls = { - k: URL(v) - for k, v in getproxies().items() - if k in ("http", "https", "ws", "wss") - } - netrc_obj = netrc_from_env() - stripped = {k: strip_auth_from_url(v) for k, v in proxy_urls.items()} - ret = {} - for proto, val in stripped.items(): - proxy, auth = val - if proxy.scheme in ("https", "wss"): - client_logger.warning( - "%s proxies %s are not supported, ignoring", proxy.scheme.upper(), proxy - ) - continue - if netrc_obj and auth is None: - auth_from_netrc = None - if proxy.host is not None: - auth_from_netrc = netrc_obj.authenticators(proxy.host) - if auth_from_netrc is not None: - # auth_from_netrc is a (`user`, `account`, `password`) tuple, - # `user` and `account` both can be username, - # if `user` is None, use `account` - *logins, password = auth_from_netrc - login = logins[0] if logins[0] else logins[-1] - auth = BasicAuth(cast(str, login), cast(str, password)) - ret[proto] = ProxyInfo(proxy, auth) - return ret - - -def current_task( - loop: Optional[asyncio.AbstractEventLoop] = None, -) -> "Optional[asyncio.Task[Any]]": - if sys.version_info >= (3, 7): - return asyncio.current_task(loop=loop) - else: - return asyncio.Task.current_task(loop=loop) - - -def get_running_loop( - loop: Optional[asyncio.AbstractEventLoop] = None, -) -> asyncio.AbstractEventLoop: - if loop is None: - loop = asyncio.get_event_loop() - if not loop.is_running(): - warnings.warn( - "The object should be created within an async function", - DeprecationWarning, - stacklevel=3, - ) - if loop.get_debug(): - internal_logger.warning( - "The object should be created within an async function", stack_info=True - ) - return loop - - -def isasyncgenfunction(obj: Any) -> bool: - func = getattr(inspect, "isasyncgenfunction", None) - if func is not None: - return func(obj) # type: ignore[no-any-return] - else: - return False - - -def get_env_proxy_for_url(url: URL) -> Tuple[URL, Optional[BasicAuth]]: - """Get a permitted proxy for the given URL from the env.""" - if url.host is not None and proxy_bypass(url.host): - raise LookupError(f"Proxying is disallowed for `{url.host!r}`") - - proxies_in_env = proxies_from_env() - try: - proxy_info = proxies_in_env[url.scheme] - except KeyError: - raise LookupError(f"No proxies found for `{url!s}` in the env") - else: - return proxy_info.proxy, proxy_info.proxy_auth - - -@attr.s(auto_attribs=True, frozen=True, slots=True) -class MimeType: - type: str - subtype: str - suffix: str - 
parameters: "MultiDictProxy[str]" - - -@functools.lru_cache(maxsize=56) -def parse_mimetype(mimetype: str) -> MimeType: - """Parses a MIME type into its components. - - mimetype is a MIME type string. - - Returns a MimeType object. - - Example: - - >>> parse_mimetype('text/html; charset=utf-8') - MimeType(type='text', subtype='html', suffix='', - parameters={'charset': 'utf-8'}) - - """ - if not mimetype: - return MimeType( - type="", subtype="", suffix="", parameters=MultiDictProxy(MultiDict()) - ) - - parts = mimetype.split(";") - params: MultiDict[str] = MultiDict() - for item in parts[1:]: - if not item: - continue - key, value = cast( - Tuple[str, str], item.split("=", 1) if "=" in item else (item, "") - ) - params.add(key.lower().strip(), value.strip(' "')) - - fulltype = parts[0].strip().lower() - if fulltype == "*": - fulltype = "*/*" - - mtype, stype = ( - cast(Tuple[str, str], fulltype.split("/", 1)) - if "/" in fulltype - else (fulltype, "") - ) - stype, suffix = ( - cast(Tuple[str, str], stype.split("+", 1)) if "+" in stype else (stype, "") - ) - - return MimeType( - type=mtype, subtype=stype, suffix=suffix, parameters=MultiDictProxy(params) - ) - - -def guess_filename(obj: Any, default: Optional[str] = None) -> Optional[str]: - name = getattr(obj, "name", None) - if name and isinstance(name, str) and name[0] != "<" and name[-1] != ">": - return Path(name).name - return default - - -not_qtext_re = re.compile(r"[^\041\043-\133\135-\176]") -QCONTENT = {chr(i) for i in range(0x20, 0x7F)} | {"\t"} - - -def quoted_string(content: str) -> str: - """Return 7-bit content as quoted-string. - - Format content into a quoted-string as defined in RFC5322 for - Internet Message Format. Notice that this is not the 8-bit HTTP - format, but the 7-bit email format. Content must be in usascii or - a ValueError is raised. - """ - if not (QCONTENT > set(content)): - raise ValueError(f"bad content for quoted-string {content!r}") - return not_qtext_re.sub(lambda x: "\\" + x.group(0), content) - - -def content_disposition_header( - disptype: str, quote_fields: bool = True, _charset: str = "utf-8", **params: str -) -> str: - """Sets ``Content-Disposition`` header for MIME. - - This is the MIME payload Content-Disposition header from RFC 2183 - and RFC 7579 section 4.2, not the HTTP Content-Disposition from - RFC 6266. - - disptype is a disposition type: inline, attachment, form-data. - Should be valid extension token (see RFC 2183) - - quote_fields performs value quoting to 7-bit MIME headers - according to RFC 7578. Set to quote_fields to False if recipient - can take 8-bit file names and field values. - - _charset specifies the charset to use when quote_fields is True. - - params is a dict with disposition params. 
- """ - if not disptype or not (TOKEN > set(disptype)): - raise ValueError("bad content disposition type {!r}" "".format(disptype)) - - value = disptype - if params: - lparams = [] - for key, val in params.items(): - if not key or not (TOKEN > set(key)): - raise ValueError( - "bad content disposition parameter" " {!r}={!r}".format(key, val) - ) - if quote_fields: - if key.lower() == "filename": - qval = quote(val, "", encoding=_charset) - lparams.append((key, '"%s"' % qval)) - else: - try: - qval = quoted_string(val) - except ValueError: - qval = "".join( - (_charset, "''", quote(val, "", encoding=_charset)) - ) - lparams.append((key + "*", qval)) - else: - lparams.append((key, '"%s"' % qval)) - else: - qval = val.replace("\\", "\\\\").replace('"', '\\"') - lparams.append((key, '"%s"' % qval)) - sparams = "; ".join("=".join(pair) for pair in lparams) - value = "; ".join((value, sparams)) - return value - - -class _TSelf(Protocol, Generic[_T]): - _cache: Dict[str, _T] - - -class reify(Generic[_T]): - """Use as a class method decorator. - - It operates almost exactly like - the Python `@property` decorator, but it puts the result of the - method it decorates into the instance dict after the first call, - effectively replacing the function it decorates with an instance - variable. It is, in Python parlance, a data descriptor. - """ - - def __init__(self, wrapped: Callable[..., _T]) -> None: - self.wrapped = wrapped - self.__doc__ = wrapped.__doc__ - self.name = wrapped.__name__ - - def __get__(self, inst: _TSelf[_T], owner: Optional[Type[Any]] = None) -> _T: - try: - try: - return inst._cache[self.name] - except KeyError: - val = self.wrapped(inst) - inst._cache[self.name] = val - return val - except AttributeError: - if inst is None: - return self - raise - - def __set__(self, inst: _TSelf[_T], value: _T) -> None: - raise AttributeError("reified property is read-only") - - -reify_py = reify - -try: - from ._helpers import reify as reify_c - - if not NO_EXTENSIONS: - reify = reify_c # type: ignore[misc,assignment] -except ImportError: - pass - -_ipv4_pattern = ( - r"^(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.){3}" - r"(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)$" -) -_ipv6_pattern = ( - r"^(?:(?:(?:[A-F0-9]{1,4}:){6}|(?=(?:[A-F0-9]{0,4}:){0,6}" - r"(?:[0-9]{1,3}\.){3}[0-9]{1,3}$)(([0-9A-F]{1,4}:){0,5}|:)" - r"((:[0-9A-F]{1,4}){1,5}:|:)|::(?:[A-F0-9]{1,4}:){5})" - r"(?:(?:25[0-5]|2[0-4][0-9]|1[0-9][0-9]|[1-9]?[0-9])\.){3}" - r"(?:25[0-5]|2[0-4][0-9]|1[0-9][0-9]|[1-9]?[0-9])|(?:[A-F0-9]{1,4}:){7}" - r"[A-F0-9]{1,4}|(?=(?:[A-F0-9]{0,4}:){0,7}[A-F0-9]{0,4}$)" - r"(([0-9A-F]{1,4}:){1,7}|:)((:[0-9A-F]{1,4}){1,7}|:)|(?:[A-F0-9]{1,4}:){7}" - r":|:(:[A-F0-9]{1,4}){7})$" -) -_ipv4_regex = re.compile(_ipv4_pattern) -_ipv6_regex = re.compile(_ipv6_pattern, flags=re.IGNORECASE) -_ipv4_regexb = re.compile(_ipv4_pattern.encode("ascii")) -_ipv6_regexb = re.compile(_ipv6_pattern.encode("ascii"), flags=re.IGNORECASE) - - -def _is_ip_address( - regex: Pattern[str], regexb: Pattern[bytes], host: Optional[Union[str, bytes]] -) -> bool: - if host is None: - return False - if isinstance(host, str): - return bool(regex.match(host)) - elif isinstance(host, (bytes, bytearray, memoryview)): - return bool(regexb.match(host)) - else: - raise TypeError(f"{host} [{type(host)}] is not a str or bytes") - - -is_ipv4_address = functools.partial(_is_ip_address, _ipv4_regex, _ipv4_regexb) -is_ipv6_address = functools.partial(_is_ip_address, _ipv6_regex, _ipv6_regexb) - - -def is_ip_address(host: Optional[Union[str, bytes, 
bytearray, memoryview]]) -> bool: - return is_ipv4_address(host) or is_ipv6_address(host) - - -def next_whole_second() -> datetime.datetime: - """Return current time rounded up to the next whole second.""" - return datetime.datetime.now(datetime.timezone.utc).replace( - microsecond=0 - ) + datetime.timedelta(seconds=0) - - -_cached_current_datetime: Optional[int] = None -_cached_formatted_datetime = "" - - -def rfc822_formatted_time() -> str: - global _cached_current_datetime - global _cached_formatted_datetime - - now = int(time.time()) - if now != _cached_current_datetime: - # Weekday and month names for HTTP date/time formatting; - # always English! - # Tuples are constants stored in codeobject! - _weekdayname = ("Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun") - _monthname = ( - "", # Dummy so we can use 1-based month numbers - "Jan", - "Feb", - "Mar", - "Apr", - "May", - "Jun", - "Jul", - "Aug", - "Sep", - "Oct", - "Nov", - "Dec", - ) - - year, month, day, hh, mm, ss, wd, *tail = time.gmtime(now) - _cached_formatted_datetime = "%s, %02d %3s %4d %02d:%02d:%02d GMT" % ( - _weekdayname[wd], - day, - _monthname[month], - year, - hh, - mm, - ss, - ) - _cached_current_datetime = now - return _cached_formatted_datetime - - -def _weakref_handle(info: "Tuple[weakref.ref[object], str]") -> None: - ref, name = info - ob = ref() - if ob is not None: - with suppress(Exception): - getattr(ob, name)() - - -def weakref_handle( - ob: object, name: str, timeout: float, loop: asyncio.AbstractEventLoop -) -> Optional[asyncio.TimerHandle]: - if timeout is not None and timeout > 0: - when = loop.time() + timeout - if timeout >= 5: - when = ceil(when) - - return loop.call_at(when, _weakref_handle, (weakref.ref(ob), name)) - return None - - -def call_later( - cb: Callable[[], Any], timeout: float, loop: asyncio.AbstractEventLoop -) -> Optional[asyncio.TimerHandle]: - if timeout is not None and timeout > 0: - when = loop.time() + timeout - if timeout > 5: - when = ceil(when) - return loop.call_at(when, cb) - return None - - -class TimeoutHandle: - """Timeout handle""" - - def __init__( - self, loop: asyncio.AbstractEventLoop, timeout: Optional[float] - ) -> None: - self._timeout = timeout - self._loop = loop - self._callbacks: List[ - Tuple[Callable[..., None], Tuple[Any, ...], Dict[str, Any]] - ] = [] - - def register( - self, callback: Callable[..., None], *args: Any, **kwargs: Any - ) -> None: - self._callbacks.append((callback, args, kwargs)) - - def close(self) -> None: - self._callbacks.clear() - - def start(self) -> Optional[asyncio.Handle]: - timeout = self._timeout - if timeout is not None and timeout > 0: - when = self._loop.time() + timeout - if timeout >= 5: - when = ceil(when) - return self._loop.call_at(when, self.__call__) - else: - return None - - def timer(self) -> "BaseTimerContext": - if self._timeout is not None and self._timeout > 0: - timer = TimerContext(self._loop) - self.register(timer.timeout) - return timer - else: - return TimerNoop() - - def __call__(self) -> None: - for cb, args, kwargs in self._callbacks: - with suppress(Exception): - cb(*args, **kwargs) - - self._callbacks.clear() - - -class BaseTimerContext(ContextManager["BaseTimerContext"]): - pass - - -class TimerNoop(BaseTimerContext): - def __enter__(self) -> BaseTimerContext: - return self - - def __exit__( - self, - exc_type: Optional[Type[BaseException]], - exc_val: Optional[BaseException], - exc_tb: Optional[TracebackType], - ) -> None: - return - - -class TimerContext(BaseTimerContext): - """Low resolution timeout 
context manager""" - - def __init__(self, loop: asyncio.AbstractEventLoop) -> None: - self._loop = loop - self._tasks: List[asyncio.Task[Any]] = [] - self._cancelled = False - - def __enter__(self) -> BaseTimerContext: - task = current_task(loop=self._loop) - - if task is None: - raise RuntimeError( - "Timeout context manager should be used " "inside a task" - ) - - if self._cancelled: - raise asyncio.TimeoutError from None - - self._tasks.append(task) - return self - - def __exit__( - self, - exc_type: Optional[Type[BaseException]], - exc_val: Optional[BaseException], - exc_tb: Optional[TracebackType], - ) -> Optional[bool]: - if self._tasks: - self._tasks.pop() - - if exc_type is asyncio.CancelledError and self._cancelled: - raise asyncio.TimeoutError from None - return None - - def timeout(self) -> None: - if not self._cancelled: - for task in set(self._tasks): - task.cancel() - - self._cancelled = True - - -def ceil_timeout(delay: Optional[float]) -> async_timeout.Timeout: - if delay is None or delay <= 0: - return async_timeout.timeout(None) - - loop = get_running_loop() - now = loop.time() - when = now + delay - if delay > 5: - when = ceil(when) - return async_timeout.timeout_at(when) - - -class HeadersMixin: - - ATTRS = frozenset(["_content_type", "_content_dict", "_stored_content_type"]) - - _content_type: Optional[str] = None - _content_dict: Optional[Dict[str, str]] = None - _stored_content_type = sentinel - - def _parse_content_type(self, raw: str) -> None: - self._stored_content_type = raw - if raw is None: - # default value according to RFC 2616 - self._content_type = "application/octet-stream" - self._content_dict = {} - else: - msg = HeaderParser().parsestr("Content-Type: " + raw) - self._content_type = msg.get_content_type() - params = msg.get_params() - self._content_dict = dict(params[1:]) # First element is content type again - - @property - def content_type(self) -> str: - """The value of content part for Content-Type HTTP header.""" - raw = self._headers.get(hdrs.CONTENT_TYPE) # type: ignore[attr-defined] - if self._stored_content_type != raw: - self._parse_content_type(raw) - return self._content_type # type: ignore[return-value] - - @property - def charset(self) -> Optional[str]: - """The value of charset part for Content-Type HTTP header.""" - raw = self._headers.get(hdrs.CONTENT_TYPE) # type: ignore[attr-defined] - if self._stored_content_type != raw: - self._parse_content_type(raw) - return self._content_dict.get("charset") # type: ignore[union-attr] - - @property - def content_length(self) -> Optional[int]: - """The value of Content-Length HTTP header.""" - content_length = self._headers.get( # type: ignore[attr-defined] - hdrs.CONTENT_LENGTH - ) - - if content_length is not None: - return int(content_length) - else: - return None - - -def set_result(fut: "asyncio.Future[_T]", result: _T) -> None: - if not fut.done(): - fut.set_result(result) - - -def set_exception(fut: "asyncio.Future[_T]", exc: BaseException) -> None: - if not fut.done(): - fut.set_exception(exc) - - -class ChainMapProxy(Mapping[str, Any]): - __slots__ = ("_maps",) - - def __init__(self, maps: Iterable[Mapping[str, Any]]) -> None: - self._maps = tuple(maps) - - def __init_subclass__(cls) -> None: - raise TypeError( - "Inheritance class {} from ChainMapProxy " - "is forbidden".format(cls.__name__) - ) - - def __getitem__(self, key: str) -> Any: - for mapping in self._maps: - try: - return mapping[key] - except KeyError: - pass - raise KeyError(key) - - def get(self, key: str, default: Any = 
None) -> Any: - return self[key] if key in self else default - - def __len__(self) -> int: - # reuses stored hash values if possible - return len(set().union(*self._maps)) # type: ignore[arg-type] - - def __iter__(self) -> Iterator[str]: - d: Dict[str, Any] = {} - for mapping in reversed(self._maps): - # reuses stored hash values if possible - d.update(mapping) - return iter(d) - - def __contains__(self, key: object) -> bool: - return any(key in m for m in self._maps) - - def __bool__(self) -> bool: - return any(self._maps) - - def __repr__(self) -> str: - content = ", ".join(map(repr, self._maps)) - return f"ChainMapProxy({content})" - - -# https://tools.ietf.org/html/rfc7232#section-2.3 -_ETAGC = r"[!#-}\x80-\xff]+" -_ETAGC_RE = re.compile(_ETAGC) -_QUOTED_ETAG = rf'(W/)?"({_ETAGC})"' -QUOTED_ETAG_RE = re.compile(_QUOTED_ETAG) -LIST_QUOTED_ETAG_RE = re.compile(rf"({_QUOTED_ETAG})(?:\s*,\s*|$)|(.)") - -ETAG_ANY = "*" - - -@attr.s(auto_attribs=True, frozen=True, slots=True) -class ETag: - value: str - is_weak: bool = False - - -def validate_etag_value(value: str) -> None: - if value != ETAG_ANY and not _ETAGC_RE.fullmatch(value): - raise ValueError( - f"Value {value!r} is not a valid etag. Maybe it contains '\"'?" - ) - - -def parse_http_date(date_str: Optional[str]) -> Optional[datetime.datetime]: - """Process a date string, return a datetime object""" - if date_str is not None: - timetuple = parsedate(date_str) - if timetuple is not None: - with suppress(ValueError): - return datetime.datetime(*timetuple[:6], tzinfo=datetime.timezone.utc) - return None diff --git a/spaces/ccolas/TastyPiano/src/cocktails/utilities/cocktail_category_detection_utilities.py b/spaces/ccolas/TastyPiano/src/cocktails/utilities/cocktail_category_detection_utilities.py deleted file mode 100644 index 5846f86263a1b2a83554d9c2ea5039ccd83555cb..0000000000000000000000000000000000000000 --- a/spaces/ccolas/TastyPiano/src/cocktails/utilities/cocktail_category_detection_utilities.py +++ /dev/null @@ -1,221 +0,0 @@ -# The following functions check whether a cocktail belong to any of N categories -import numpy as np -from src.cocktails.utilities.ingredients_utilities import ingredient_profiles, ingredients_per_type, ingredient2ingredient_id, extract_ingredients - - -def is_ancestral(n, ingredient_indexes, ingredients, quantities): - # ancestrals have a strong spirit and some sweetness from sugar, syrup or liqueurs, no citrus. - # absinthe can be added up to 3 dashes. 
- # Liqueurs are there to bring sweetness, thus must stay below 15ml (if not it's a duo) - if n['spirit'] > 0 and n['citrus'] == 0 and n['plain_sweet'] + n['liqueur'] <= 2: - if n['spirit'] > 1 and 'absinthe' in ingredients: - if quantities[ingredients.index('absinthe')] < 3: - pass - else: - return False - if n['sugar'] < 2 and n['liqueur'] < 3: - if n['all'] - n['spirit'] - n['sugar'] -n['syrup']- n['liqueur']- n['inconsequentials'] == 0: - if n['liqueur'] == 0: - return True - else: - q_liqueur = np.sum([quantities[i_ing] - for i_ind, i_ing in zip(ingredient_indexes, range(len(ingredients))) - if ingredient_profiles['type'][i_ind].lower() == 'liqueur']) - if q_liqueur <= 15: - return True - else: - return False - return False - - -def is_simple_sour(n, ingredient_indexes, ingredients, quantities): - # simple sours contain a citrus, at least 1 spirit and non-alcoholic sweetness - if n['citrus'] + n['coffee']> 0 and n['spirit'] > 0 and n['plain_sweet'] > 0 and n['juice'] == 0: - if n['all'] - n['citrus'] - n['coffee'] - n['spirit'] - n['plain_sweet'] - n['juice'] -n['egg'] - n['inconsequentials'] == 0: - return True - return False - -def is_complex_sour(n, ingredient_indexes, ingredients, quantities): - # complex sours are simple sours that use alcoholic sweetness, at least in part - if n['citrus'] + n['coffee'] > 0 and n['all_sweet'] > 0 and n['juice'] == 0: - if (n['spirit'] == 0 and n['liqueur'] > 0) or n['spirit'] > 0: - if n['vermouth'] + n['liqueur'] <= 2 and n['vermouth'] + n['liqueur'] > 0: - if n['all'] -n['coffee'] - n['citrus'] - n['spirit'] - n['sugar'] - n['syrup'] \ - - n['liqueur'] - n['vermouth'] - n['egg'] - n['juice'] - n['inconsequentials'] == 0: - return True - return False - -def is_spirit_forward(n, ingredient_indexes, ingredients, quantities): - # spirit forward contain at least a spirit and vermouth, no citrus. Can contain sweet (sugar, syrups, liqueurs) - if n['spirit'] > 0 and n['citrus'] == 0 and n['vermouth'] > 0: - if n['all'] - n['spirit'] - n['sugar'] - n['syrup'] - n['liqueur'] -n['egg'] - n['vermouth'] - n['inconsequentials']== 0: - return True - return False - -def is_duo(n, ingredient_indexes, ingredients, quantities): - # duos are made of one spirit and one liqueur (above 15ml), under it's an ancestral, no citrus. - if n['spirit'] >= 1 and n['citrus'] == 0 and n['sugar']==0 and n['liqueur'] > 0 and n['vermouth'] == 0: - if n['all'] - n['spirit'] - n['sugar'] - n['liqueur'] - n['vermouth'] - n['inconsequentials'] == 0: - q_liqueur = np.sum([quantities[i_ing] - for i_ind, i_ing in zip(ingredient_indexes, range(len(ingredients))) - if ingredient_profiles['type'][i_ind].lower() == 'liqueur']) - if q_liqueur > 15: - return True - else: - return False - return False - -def is_champagne_cocktail(n, ingredient_indexes, ingredients, quantities): - if n['sparkling'] > 0: - return True - else: - return False - -def is_simple_highball(n, ingredient_indexes, ingredients, quantities): - # simple highballs have one alcoholic ingredient and bubbles - if n['alcoholic'] == 1 and n['bubbles'] > 0: - if n['all'] - n['alcoholic'] - n['bubbles'] - n['inconsequentials']== 0: - return True - return False - -def is_complex_highball(n, ingredient_indexes, ingredients, quantities): - # complex highballs have at least one alcoholic ingredient and bubbles (possibly alcoholic). 
They also contain extra sugar under any form and juice - if n['alcoholic'] > 0 and (n['bubbles'] + n['sparkling']) == 1 and n['juice'] + n['all_sweet'] + n['sugar_bubbles']> 0: - if n['all'] - n['spirit'] - n['bubbles'] - n['sparkling'] - n['citrus'] - n['juice'] - n['liqueur'] \ - - n['syrup'] - n['sugar'] -n['vermouth'] -n['egg'] - n['inconsequentials'] == 0: - if not is_collins(n, ingredient_indexes, ingredients, quantities) and not is_simple_highball(n, ingredient_indexes, ingredients, quantities): - return True - return False - -def is_collins(n, ingredient_indexes, ingredients, quantities): - # collins are a particular kind of highball with sugar and citrus - if n['alcoholic'] == 1 and n['bubbles'] == 1 and n['citrus'] > 0 and n['plain_sweet'] + n['sugar_bubbles'] > 0: - if n['all'] - n['spirit'] - n['bubbles'] - n['citrus'] - n['sugar'] - n['inconsequentials'] == 0: - return True - return False - -def is_julep(n, ingredient_indexes, ingredients, quantities): - # juleps involve smashd mint, sugar and a spirit, no citrus. - if 'mint' in ingredients and n['sugar'] > 0 and n['spirit'] > 0 and n['vermouth'] == 0 and n['citrus'] == 0: - return True - return False - -def is_simple_sour_with_juice(n, ingredient_indexes, ingredients, quantities): - # almost sours are sours with juice - if n['juice'] > 0 and n['spirit'] > 0 and n['plain_sweet'] > 0: - if n['all'] - n['citrus'] - n['coffee'] - n['juice'] - n['spirit'] - n['sugar'] - n['syrup'] - n['egg'] - n['inconsequentials'] == 0: - return True - return False - - -def is_complex_sour_with_juice(n, ingredient_indexes, ingredients, quantities): - # almost sours are sours with juice - if n['juice'] > 0 and n['all_sweet'] > 0: - if (n['spirit'] == 0 and n['liqueur'] > 0) or n['spirit'] > 0: - if n['vermouth'] + n['liqueur'] <= 2 and n['vermouth'] + n['liqueur'] > 0: - if n['all'] -n['coffee'] - n['citrus'] - n['spirit'] - n['sugar'] - n['syrup'] \ - - n['liqueur'] - n['vermouth'] - n['egg'] - n['juice'] - n['inconsequentials'] == 0: - return True - return False - - -is_sub_category = [is_ancestral, is_complex_sour, is_simple_sour, is_duo, is_champagne_cocktail, - is_spirit_forward, is_simple_highball, is_complex_highball, is_collins, - is_julep, is_simple_sour_with_juice, is_complex_sour_with_juice] -sub_categories = ['ancestral', 'complex_sour', 'simple_sour', 'duo', 'champagne_cocktail', - 'spirit_forward', 'simple_highball', 'complex_highball', 'collins', - 'julep', 'simple_sour_with_juice', 'complex_sour_with_juice'] - - -# compute cocktail category as a function of ingredients and quantities, uses name to check match between name and cat (e.g. XXX Collins should be collins..) 
-# Categories definitions are based on https://www.seriouseats.com/cocktail-style-guide-categories-of-cocktails-glossary-families-of-drinks -def find_cocktail_sub_category(ingredients, quantities, name=None): - ingredient_indexes = [ingredient2ingredient_id[ing] for ing in ingredients] - n_spirit = np.sum([ingredient_profiles['type'][i].lower() == 'liquor' for i in ingredient_indexes ]) - n_citrus = np.sum([ingredient_profiles['type'][i].lower()== 'acid' for i in ingredient_indexes]) - n_sugar = np.sum([ingredient_profiles['ingredient'][i].lower() in ['double syrup', 'simple syrup', 'honey syrup'] for i in ingredient_indexes]) - plain_sweet = ingredients_per_type['sweeteners'] - all_sweet = ingredients_per_type['sweeteners'] + ingredients_per_type['liqueur'] + ['sweet vermouth', 'lillet blanc'] - n_plain_sweet = np.sum([ingredient_profiles['ingredient'][i].lower() in plain_sweet for i in ingredient_indexes]) - n_all_sweet = np.sum([ingredient_profiles['ingredient'][i].lower() in all_sweet for i in ingredient_indexes]) - n_sugar_bubbles = np.sum([ingredient_profiles['ingredient'][i].lower() in ['cola', 'ginger beer', 'tonic'] for i in ingredient_indexes]) - n_juice = np.sum([ingredient_profiles['type'][i].lower() == 'juice' for i in ingredient_indexes]) - n_liqueur = np.sum([ingredient_profiles['type'][i].lower() == 'liqueur' for i in ingredient_indexes]) - alcoholic = ingredients_per_type['liquor'] + ingredients_per_type['liqueur'] + ingredients_per_type['vermouth'] - n_alcoholic = np.sum([ingredient_profiles['ingredient'][i].lower() in alcoholic for i in ingredient_indexes]) - n_bitter = np.sum([ingredient_profiles['type'][i].lower() == 'bitters' for i in ingredient_indexes]) - n_egg = np.sum([ingredient_profiles['ingredient'][i].lower() == 'egg' for i in ingredient_indexes]) - n_vermouth = np.sum([ingredient_profiles['type'][i].lower() == 'vermouth' for i in ingredient_indexes]) - n_sparkling = np.sum([ingredient_profiles['ingredient'][i].lower() == 'sparkling wine' for i in ingredient_indexes]) - n_bubbles = np.sum([ingredient_profiles['ingredient'][i].lower() in ['soda', 'tonic', 'cola', 'ginger beer'] for i in ingredient_indexes]) - n_syrup = np.sum([ingredient_profiles['ingredient'][i].lower() in ['grenadine', 'raspberry syrup'] for i in ingredient_indexes]) - n_coffee = np.sum([ingredient_profiles['ingredient'][i].lower() == 'espresso' for i in ingredient_indexes]) - inconsequentials = ['water', 'salt', 'angostura', 'orange bitters', 'mint'] - n_inconsequentials = np.sum([ingredient_profiles['ingredient'][i].lower() in inconsequentials for i in ingredient_indexes]) - n = dict(all=len(ingredients), - inconsequentials=n_inconsequentials, - sugar_bubbles=n_sugar_bubbles, - bubbles=n_bubbles, - plain_sweet=n_plain_sweet, - all_sweet=n_all_sweet, - coffee=n_coffee, - alcoholic=n_alcoholic, - syrup=n_syrup, - sparkling=n_sparkling, - sugar=n_sugar, - spirit=n_spirit, - citrus=n_citrus, - juice=n_juice, - liqueur=n_liqueur, - bitter=n_bitter, - egg=n_egg, - vermouth=n_vermouth) - - sub_cats = [c for c, test_c in zip(sub_categories, is_sub_category) if test_c(n, ingredient_indexes, ingredients, quantities)] - if name != None: - name = name.lower() - keywords_to_test = ['julep', 'collins', 'highball', 'sour', 'champagne'] - for k in keywords_to_test: - if k in name and not any([k in cat for cat in sub_cats]): - print(k) - for ing, q in zip(ingredients, quantities): - print(f'{ing}: {q} ml') - print(n) - break - if sorted(sub_cats) == ['champagne_cocktail', 'complex_highball']: - sub_cats = 
['champagne_cocktail'] - elif sorted(sub_cats) == ['collins', 'complex_highball']: - sub_cats = ['collins'] - elif sorted(sub_cats) == ['champagne_cocktail', 'complex_highball', 'julep']: - sub_cats = ['champagne_cocktail'] - elif sorted(sub_cats) == ['ancestral', 'julep']: - sub_cats = ['julep'] - elif sorted(sub_cats) == ['complex_highball', 'julep']: - sub_cats = ['complex_highball'] - elif sorted(sub_cats) == ['julep', 'simple_sour_with_juice']: - sub_cats = ['simple_sour_with_juice'] - elif sorted(sub_cats) == ['complex_sour_with_juice', 'julep']: - sub_cats = ['complex_sour_with_juice'] - if len(sub_cats) != 1: - # print(sub_cats) - # for ing, q in zip(ingredients, quantities): - # print(f'{ing}: {q} ml') - # print(n) - # if len(sub_cats) == 0: - sub_cats = ['other'] - assert len(sub_cats) == 1, sub_cats - return sub_cats[0], n - -def get_cocktails_attributes(ing_strs): - attributes = dict() - cats = [] - for ing_str in ing_strs: - ingredients, quantities = extract_ingredients(ing_str) - cat, atts = find_cocktail_sub_category(ingredients, quantities) - for k in atts.keys(): - if k not in attributes.keys(): - attributes[k] = [atts[k]] - else: - attributes[k].append(atts[k]) - cats.append(cat) - return cats, attributes diff --git a/spaces/chendl/compositional_test/multimodal/generate_batch_submit.py b/spaces/chendl/compositional_test/multimodal/generate_batch_submit.py deleted file mode 100644 index 067cdf0d2c30fc6cef0ab3e27dc1dc0ae3af9d0d..0000000000000000000000000000000000000000 --- a/spaces/chendl/compositional_test/multimodal/generate_batch_submit.py +++ /dev/null @@ -1,9 +0,0 @@ -import os -import sys -start_idx = sys.argv[1] -end_idx = sys.argv[2] - -with open("batch_submit.sh", "w") as f: - for i, idx in enumerate(range(int(start_idx), int(end_idx), 48)): - f.write(f"sbatch -J label{i} submit_labeling.sh {idx} {idx+48}\n") - f.write("sleep 30\n") diff --git a/spaces/chendl/compositional_test/transformers/examples/flax/language-modeling/t5_tokenizer_model.py b/spaces/chendl/compositional_test/transformers/examples/flax/language-modeling/t5_tokenizer_model.py deleted file mode 100644 index fbccd52bd8c726f07bbe61451b69ac46fb5b131f..0000000000000000000000000000000000000000 --- a/spaces/chendl/compositional_test/transformers/examples/flax/language-modeling/t5_tokenizer_model.py +++ /dev/null @@ -1,112 +0,0 @@ -#!/usr/bin/env python3 -import json -from typing import Iterator, List, Union - -from tokenizers import AddedToken, Regex, Tokenizer, decoders, normalizers, pre_tokenizers, trainers -from tokenizers.implementations.base_tokenizer import BaseTokenizer -from tokenizers.models import Unigram -from tokenizers.processors import TemplateProcessing - - -class SentencePieceUnigramTokenizer(BaseTokenizer): - """ - This class is a copy of `DeDLOC's tokenizer implementation `__ . 
- - Custom SentencePiece Unigram Tokenizer with NMT, NKFC, spaces and lower-casing characters normalization - Represents the Unigram algorithm, with the pretokenization used by SentencePiece - """ - - def __init__( - self, - replacement: str = "▁", - add_prefix_space: bool = True, - unk_token: Union[str, AddedToken] = "", - eos_token: Union[str, AddedToken] = "", - pad_token: Union[str, AddedToken] = "", - ): - self.special_tokens = { - "pad": {"id": 0, "token": pad_token}, - "eos": {"id": 1, "token": eos_token}, - "unk": {"id": 2, "token": unk_token}, - } - - self.special_tokens_list = [None] * len(self.special_tokens) - for token_dict in self.special_tokens.values(): - self.special_tokens_list[token_dict["id"]] = token_dict["token"] - - tokenizer = Tokenizer(Unigram()) - - tokenizer.normalizer = normalizers.Sequence( - [ - normalizers.Nmt(), - normalizers.NFKC(), - normalizers.Replace(Regex(" {2,}"), " "), - normalizers.Lowercase(), - ] - ) - tokenizer.pre_tokenizer = pre_tokenizers.Sequence( - [ - pre_tokenizers.Metaspace(replacement=replacement, add_prefix_space=add_prefix_space), - pre_tokenizers.Digits(individual_digits=True), - pre_tokenizers.Punctuation(), - ] - ) - tokenizer.decoder = decoders.Metaspace(replacement=replacement, add_prefix_space=add_prefix_space) - - tokenizer.post_processor = TemplateProcessing( - single=f"$A {self.special_tokens['eos']['token']}", - special_tokens=[(self.special_tokens["eos"]["token"], self.special_tokens["eos"]["id"])], - ) - - parameters = { - "model": "SentencePieceUnigram", - "replacement": replacement, - "add_prefix_space": add_prefix_space, - } - - super().__init__(tokenizer, parameters) - - def train( - self, - files: Union[str, List[str]], - vocab_size: int = 8000, - show_progress: bool = True, - ): - """Train the model using the given files""" - - trainer = trainers.UnigramTrainer( - vocab_size=vocab_size, - special_tokens=self.special_tokens_list, - show_progress=show_progress, - ) - - if isinstance(files, str): - files = [files] - self._tokenizer.train(files, trainer=trainer) - - self.add_unk_id() - - def train_from_iterator( - self, - iterator: Union[Iterator[str], Iterator[Iterator[str]]], - vocab_size: int = 8000, - show_progress: bool = True, - ): - """Train the model using the given iterator""" - - trainer = trainers.UnigramTrainer( - vocab_size=vocab_size, - special_tokens=self.special_tokens_list, - show_progress=show_progress, - ) - - self._tokenizer.train_from_iterator(iterator, trainer=trainer) - - self.add_unk_id() - - def add_unk_id(self): - tokenizer_json = json.loads(self._tokenizer.to_str()) - - tokenizer_json["model"]["unk_id"] = self.special_tokens["unk"]["id"] - - self._tokenizer = Tokenizer.from_str(json.dumps(tokenizer_json)) diff --git a/spaces/chendl/compositional_test/transformers/examples/research_projects/jax-projects/hybrid_clip/configuration_hybrid_clip.py b/spaces/chendl/compositional_test/transformers/examples/research_projects/jax-projects/hybrid_clip/configuration_hybrid_clip.py deleted file mode 100644 index 5272ac44a1a884eaf9b058c9e29729bfaec29a58..0000000000000000000000000000000000000000 --- a/spaces/chendl/compositional_test/transformers/examples/research_projects/jax-projects/hybrid_clip/configuration_hybrid_clip.py +++ /dev/null @@ -1,112 +0,0 @@ -import copy - -from transformers.configuration_utils import PretrainedConfig -from transformers.utils import logging - - -logger = logging.get_logger(__name__) - - -class HybridCLIPConfig(PretrainedConfig): - r""" - :class:`HybridCLIPConfig` is the 
configuration class to store the configuration of a - :class:`~HybridCLIPModel`. It is used to instantiate HybridCLIPModel model according to the specified arguments, - defining the text model and vision model configs. - - Configuration objects inherit from :class:`~transformers.PretrainedConfig` and can be used to control the model - outputs. Read the documentation from :class:`~transformers.PretrainedConfig` for more information. - - Args: - text_config_dict (:obj:`dict`): - Dictionary of configuration options that defines text model config. - vision_config_dict (:obj:`dict`): - Dictionary of configuration options that defines vison model config. - projection_dim (:obj:`int`, `optional`, defaults to 512): - Dimentionality of text and vision projection layers. - kwargs (`optional`): - Dictionary of keyword arguments. - - Examples:: - - >>> from transformers import BertConfig, CLIPConfig, HybridCLIPConfig, FlaxHybridCLIP - - >>> # Initializing a BERT and CLIP configuration - >>> config_text = BertConfig() - >>> config_vision = CLIPConfig() - - >>> config = HybridCLIPConfig.from_text_vision_configs(config_text, config_vision, projection_dim=512) - - >>> # Initializing a BERT and CLIPVision model - >>> model = EncoderDecoderModel(config=config) - - >>> # Accessing the model configuration - >>> config_text = model.config.text_config - >>> config_vision = model.config.vision_config - - >>> # Saving the model, including its configuration - >>> model.save_pretrained('my-model') - - >>> # loading model and config from pretrained folder - >>> encoder_decoder_config = HybridCLIPConfig.from_pretrained('my-model') - >>> model = FlaxHybridCLIP.from_pretrained('my-model', config=encoder_decoder_config) - """ - - model_type = "hybrid-clip" - is_composition = True - - def __init__(self, projection_dim=512, **kwargs): - super().__init__(**kwargs) - - if "text_config" not in kwargs: - raise ValueError("`text_config` can not be `None`.") - - if "vision_config" not in kwargs: - raise ValueError("`vision_config` can not be `None`.") - - text_config = kwargs.pop("text_config") - vision_config = kwargs.pop("vision_config") - - text_model_type = text_config.pop("model_type") - vision_model_type = vision_config.pop("model_type") - - from transformers import AutoConfig - - self.text_config = AutoConfig.for_model(text_model_type, **text_config) - - if vision_model_type == "clip": - self.vision_config = AutoConfig.for_model(vision_model_type, **vision_config).vision_config - elif vision_model_type == "clip_vision_model": - from transformers import CLIPVisionConfig - - self.vision_config = CLIPVisionConfig(**vision_config) - else: - self.vision_config = AutoConfig.for_model(vision_model_type, **vision_config) - - self.projection_dim = projection_dim - self.initializer_factor = 1.0 - - @classmethod - def from_text_vision_configs(cls, text_config: PretrainedConfig, vision_config: PretrainedConfig, **kwargs): - r""" - Instantiate a :class:`HybridCLIPConfig` (or a derived class) from text model configuration and - vision model configuration. - - Returns: - :class:`HybridCLIPConfig`: An instance of a configuration object - """ - - return cls(text_config=text_config.to_dict(), vision_config=vision_config.to_dict(), **kwargs) - - def to_dict(self): - """ - Serializes this instance to a Python dictionary. Override the default - :meth:`~transformers.PretrainedConfig.to_dict`. 
- - Returns: - :obj:`Dict[str, any]`: Dictionary of all the attributes that make up this configuration instance, - """ - output = copy.deepcopy(self.__dict__) - output["text_config"] = self.text_config.to_dict() - output["vision_config"] = self.vision_config.to_dict() - output["model_type"] = self.__class__.model_type - return output diff --git a/spaces/chlab/interactive_kinematic_planet_detector/model_utils/vision_modifications.py b/spaces/chlab/interactive_kinematic_planet_detector/model_utils/vision_modifications.py deleted file mode 100644 index 14151748ba4a57cdfcfc64b4ba83c4d6009294bb..0000000000000000000000000000000000000000 --- a/spaces/chlab/interactive_kinematic_planet_detector/model_utils/vision_modifications.py +++ /dev/null @@ -1,310 +0,0 @@ -import warnings -from typing import Callable, List, Optional - -import torch -from torch import Tensor - -interpolate = torch.nn.functional.interpolate - - -class FrozenBatchNorm2d(torch.nn.Module): - """ - BatchNorm2d where the batch statistics and the affine parameters are fixed - - Args: - num_features (int): Number of features ``C`` from an expected input of size ``(N, C, H, W)`` - eps (float): a value added to the denominator for numerical stability. Default: 1e-5 - """ - - def __init__( - self, - num_features: int, - eps: float = 1e-5, - ): - super().__init__() - # _log_api_usage_once(self) - self.eps = eps - self.register_buffer("weight", torch.ones(num_features)) - self.register_buffer("bias", torch.zeros(num_features)) - self.register_buffer("running_mean", torch.zeros(num_features)) - self.register_buffer("running_var", torch.ones(num_features)) - - def _load_from_state_dict( - self, - state_dict: dict, - prefix: str, - local_metadata: dict, - strict: bool, - missing_keys: List[str], - unexpected_keys: List[str], - error_msgs: List[str], - ): - num_batches_tracked_key = prefix + "num_batches_tracked" - if num_batches_tracked_key in state_dict: - del state_dict[num_batches_tracked_key] - - super()._load_from_state_dict( - state_dict, prefix, local_metadata, strict, missing_keys, unexpected_keys, error_msgs - ) - - def forward(self, x: Tensor) -> Tensor: - # move reshapes to the beginning - # to make it fuser-friendly - w = self.weight.reshape(1, -1, 1, 1) - b = self.bias.reshape(1, -1, 1, 1) - rv = self.running_var.reshape(1, -1, 1, 1) - rm = self.running_mean.reshape(1, -1, 1, 1) - scale = w * (rv + self.eps).rsqrt() - bias = b - rm * scale - return x * scale + bias - - def __repr__(self) -> str: - return f"{self.__class__.__name__}({self.weight.shape[0]}, eps={self.eps})" - - -class ConvNormActivation(torch.nn.Sequential): - def __init__( - self, - in_channels: int, - out_channels: int, - kernel_size: int = 3, - stride: int = 1, - padding: Optional[int] = None, - groups: int = 1, - norm_layer: Optional[Callable[..., torch.nn.Module]] = torch.nn.BatchNorm2d, - activation_layer: Optional[Callable[..., torch.nn.Module]] = torch.nn.ReLU, - dilation: int = 1, - inplace: Optional[bool] = True, - bias: Optional[bool] = None, - conv_layer: Callable[..., torch.nn.Module] = torch.nn.Conv2d, - ) -> None: - - if padding is None: - padding = (kernel_size - 1) // 2 * dilation - if bias is None: - bias = norm_layer is None - - layers = [ - conv_layer( - in_channels, - out_channels, - kernel_size, - stride, - padding, - dilation=dilation, - groups=groups, - bias=bias, - ) - ] - - if norm_layer is not None: - layers.append(norm_layer(out_channels)) - - if activation_layer is not None: - params = {} if inplace is None else {"inplace": inplace} - 
layers.append(activation_layer(**params)) - super().__init__(*layers) - # _log_api_usage_once(self) - self.out_channels = out_channels - - if self.__class__ == ConvNormActivation: - warnings.warn( - "Don't use ConvNormActivation directly, please use Conv2dNormActivation and Conv3dNormActivation instead." - ) - - -class Conv2dNormActivation(ConvNormActivation): - """ - Configurable block used for Convolution2d-Normalization-Activation blocks. - - Args: - in_channels (int): Number of channels in the input image - out_channels (int): Number of channels produced by the Convolution-Normalization-Activation block - kernel_size: (int, optional): Size of the convolving kernel. Default: 3 - stride (int, optional): Stride of the convolution. Default: 1 - padding (int, tuple or str, optional): Padding added to all four sides of the input. Default: None, in which case it will calculated as ``padding = (kernel_size - 1) // 2 * dilation`` - groups (int, optional): Number of blocked connections from input channels to output channels. Default: 1 - norm_layer (Callable[..., torch.nn.Module], optional): Norm layer that will be stacked on top of the convolution layer. If ``None`` this layer wont be used. Default: ``torch.nn.BatchNorm2d`` - activation_layer (Callable[..., torch.nn.Module], optional): Activation function which will be stacked on top of the normalization layer (if not None), otherwise on top of the conv layer. If ``None`` this layer wont be used. Default: ``torch.nn.ReLU`` - dilation (int): Spacing between kernel elements. Default: 1 - inplace (bool): Parameter for the activation layer, which can optionally do the operation in-place. Default ``True`` - bias (bool, optional): Whether to use bias in the convolution layer. By default, biases are included if ``norm_layer is None``. - - """ - - def __init__( - self, - in_channels: int, - out_channels: int, - kernel_size: int = 3, - stride: int = 1, - padding: Optional[int] = None, - groups: int = 1, - norm_layer: Optional[Callable[..., torch.nn.Module]] = torch.nn.BatchNorm2d, - activation_layer: Optional[Callable[..., torch.nn.Module]] = torch.nn.ReLU, - dilation: int = 1, - inplace: Optional[bool] = True, - bias: Optional[bool] = None, - ) -> None: - - super().__init__( - in_channels, - out_channels, - kernel_size, - stride, - padding, - groups, - norm_layer, - activation_layer, - dilation, - inplace, - bias, - torch.nn.Conv2d, - ) - - -class Conv3dNormActivation(ConvNormActivation): - """ - Configurable block used for Convolution3d-Normalization-Activation blocks. - - Args: - in_channels (int): Number of channels in the input video. - out_channels (int): Number of channels produced by the Convolution-Normalization-Activation block - kernel_size: (int, optional): Size of the convolving kernel. Default: 3 - stride (int, optional): Stride of the convolution. Default: 1 - padding (int, tuple or str, optional): Padding added to all four sides of the input. Default: None, in which case it will calculated as ``padding = (kernel_size - 1) // 2 * dilation`` - groups (int, optional): Number of blocked connections from input channels to output channels. Default: 1 - norm_layer (Callable[..., torch.nn.Module], optional): Norm layer that will be stacked on top of the convolution layer. If ``None`` this layer wont be used. Default: ``torch.nn.BatchNorm3d`` - activation_layer (Callable[..., torch.nn.Module], optional): Activation function which will be stacked on top of the normalization layer (if not None), otherwise on top of the conv layer. 
If ``None`` this layer wont be used. Default: ``torch.nn.ReLU`` - dilation (int): Spacing between kernel elements. Default: 1 - inplace (bool): Parameter for the activation layer, which can optionally do the operation in-place. Default ``True`` - bias (bool, optional): Whether to use bias in the convolution layer. By default, biases are included if ``norm_layer is None``. - """ - - def __init__( - self, - in_channels: int, - out_channels: int, - kernel_size: int = 3, - stride: int = 1, - padding: Optional[int] = None, - groups: int = 1, - norm_layer: Optional[Callable[..., torch.nn.Module]] = torch.nn.BatchNorm3d, - activation_layer: Optional[Callable[..., torch.nn.Module]] = torch.nn.ReLU, - dilation: int = 1, - inplace: Optional[bool] = True, - bias: Optional[bool] = None, - ) -> None: - - super().__init__( - in_channels, - out_channels, - kernel_size, - stride, - padding, - groups, - norm_layer, - activation_layer, - dilation, - inplace, - bias, - torch.nn.Conv3d, - ) - - -class SqueezeExcitation(torch.nn.Module): - """ - This block implements the Squeeze-and-Excitation block from https://arxiv.org/abs/1709.01507 (see Fig. 1). - Parameters ``activation``, and ``scale_activation`` correspond to ``delta`` and ``sigma`` in eq. 3. - - Args: - input_channels (int): Number of channels in the input image - squeeze_channels (int): Number of squeeze channels - activation (Callable[..., torch.nn.Module], optional): ``delta`` activation. Default: ``torch.nn.ReLU`` - scale_activation (Callable[..., torch.nn.Module]): ``sigma`` activation. Default: ``torch.nn.Sigmoid`` - """ - - def __init__( - self, - input_channels: int, - squeeze_channels: int, - activation: Callable[..., torch.nn.Module] = torch.nn.ReLU, - scale_activation: Callable[..., torch.nn.Module] = torch.nn.Sigmoid, - ) -> None: - super().__init__() - # _log_api_usage_once(self) - self.avgpool = torch.nn.AdaptiveAvgPool2d(1) - self.fc1 = torch.nn.Conv2d(input_channels, squeeze_channels, 1) - self.fc2 = torch.nn.Conv2d(squeeze_channels, input_channels, 1) - self.activation = activation() - self.scale_activation = scale_activation() - - def _scale(self, input: Tensor) -> Tensor: - scale = self.avgpool(input) - scale = self.fc1(scale) - scale = self.activation(scale) - scale = self.fc2(scale) - return self.scale_activation(scale) - - def forward(self, input: Tensor) -> Tensor: - scale = self._scale(input) - return scale * input - - -class MLP(torch.nn.Sequential): - """This block implements the multi-layer perceptron (MLP) module. - - Args: - in_channels (int): Number of channels of the input - hidden_channels (List[int]): List of the hidden channel dimensions - norm_layer (Callable[..., torch.nn.Module], optional): Norm layer that will be stacked on top of the convolution layer. If ``None`` this layer wont be used. Default: ``None`` - activation_layer (Callable[..., torch.nn.Module], optional): Activation function which will be stacked on top of the normalization layer (if not None), otherwise on top of the conv layer. If ``None`` this layer wont be used. Default: ``torch.nn.ReLU`` - inplace (bool): Parameter for the activation layer, which can optionally do the operation in-place. Default ``True`` - bias (bool): Whether to use bias in the linear layer. Default ``True`` - dropout (float): The probability for the dropout layer. 
Default: 0.0 - """ - - def __init__( - self, - in_channels: int, - hidden_channels: List[int], - norm_layer: Optional[Callable[..., torch.nn.Module]] = None, - activation_layer: Optional[Callable[..., torch.nn.Module]] = torch.nn.ReLU, - inplace: Optional[bool] = True, - bias: bool = True, - dropout: float = 0.0, - ): - # The addition of `norm_layer` is inspired from the implementation of TorchMultimodal: - # https://github.com/facebookresearch/multimodal/blob/5dec8a/torchmultimodal/modules/layers/mlp.py - params = {} if inplace is None else {"inplace": inplace} - - layers = [] - in_dim = in_channels - for hidden_dim in hidden_channels[:-1]: - layers.append(torch.nn.Linear(in_dim, hidden_dim, bias=bias)) - if norm_layer is not None: - layers.append(norm_layer(hidden_dim)) - layers.append(activation_layer(**params)) - layers.append(torch.nn.Dropout(dropout, **params)) - in_dim = hidden_dim - - layers.append(torch.nn.Linear(in_dim, hidden_channels[-1], bias=bias)) - layers.append(torch.nn.Dropout(dropout, **params)) - - super().__init__(*layers) - # _log_api_usage_once(self) - - -class Permute(torch.nn.Module): - """This module returns a view of the tensor input with its dimensions permuted. - - Args: - dims (List[int]): The desired ordering of dimensions - """ - - def __init__(self, dims: List[int]): - super().__init__() - self.dims = dims - - def forward(self, x: Tensor) -> Tensor: - return torch.permute(x, self.dims) \ No newline at end of file diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/PIL/PcfFontFile.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/PIL/PcfFontFile.py deleted file mode 100644 index 8db5822fe7dadb10880c7d53a27731775b9a1835..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/PIL/PcfFontFile.py +++ /dev/null @@ -1,256 +0,0 @@ -# -# THIS IS WORK IN PROGRESS -# -# The Python Imaging Library -# $Id$ -# -# portable compiled font file parser -# -# history: -# 1997-08-19 fl created -# 2003-09-13 fl fixed loading of unicode fonts -# -# Copyright (c) 1997-2003 by Secret Labs AB. -# Copyright (c) 1997-2003 by Fredrik Lundh. -# -# See the README file for information on usage and redistribution. -# - -import io - -from . 
import FontFile, Image -from ._binary import i8 -from ._binary import i16be as b16 -from ._binary import i16le as l16 -from ._binary import i32be as b32 -from ._binary import i32le as l32 - -# -------------------------------------------------------------------- -# declarations - -PCF_MAGIC = 0x70636601 # "\x01fcp" - -PCF_PROPERTIES = 1 << 0 -PCF_ACCELERATORS = 1 << 1 -PCF_METRICS = 1 << 2 -PCF_BITMAPS = 1 << 3 -PCF_INK_METRICS = 1 << 4 -PCF_BDF_ENCODINGS = 1 << 5 -PCF_SWIDTHS = 1 << 6 -PCF_GLYPH_NAMES = 1 << 7 -PCF_BDF_ACCELERATORS = 1 << 8 - -BYTES_PER_ROW = [ - lambda bits: ((bits + 7) >> 3), - lambda bits: ((bits + 15) >> 3) & ~1, - lambda bits: ((bits + 31) >> 3) & ~3, - lambda bits: ((bits + 63) >> 3) & ~7, -] - - -def sz(s, o): - return s[o : s.index(b"\0", o)] - - -class PcfFontFile(FontFile.FontFile): - """Font file plugin for the X11 PCF format.""" - - name = "name" - - def __init__(self, fp, charset_encoding="iso8859-1"): - self.charset_encoding = charset_encoding - - magic = l32(fp.read(4)) - if magic != PCF_MAGIC: - msg = "not a PCF file" - raise SyntaxError(msg) - - super().__init__() - - count = l32(fp.read(4)) - self.toc = {} - for i in range(count): - type = l32(fp.read(4)) - self.toc[type] = l32(fp.read(4)), l32(fp.read(4)), l32(fp.read(4)) - - self.fp = fp - - self.info = self._load_properties() - - metrics = self._load_metrics() - bitmaps = self._load_bitmaps(metrics) - encoding = self._load_encoding() - - # - # create glyph structure - - for ch, ix in enumerate(encoding): - if ix is not None: - ( - xsize, - ysize, - left, - right, - width, - ascent, - descent, - attributes, - ) = metrics[ix] - self.glyph[ch] = ( - (width, 0), - (left, descent - ysize, xsize + left, descent), - (0, 0, xsize, ysize), - bitmaps[ix], - ) - - def _getformat(self, tag): - format, size, offset = self.toc[tag] - - fp = self.fp - fp.seek(offset) - - format = l32(fp.read(4)) - - if format & 4: - i16, i32 = b16, b32 - else: - i16, i32 = l16, l32 - - return fp, format, i16, i32 - - def _load_properties(self): - # - # font properties - - properties = {} - - fp, format, i16, i32 = self._getformat(PCF_PROPERTIES) - - nprops = i32(fp.read(4)) - - # read property description - p = [] - for i in range(nprops): - p.append((i32(fp.read(4)), i8(fp.read(1)), i32(fp.read(4)))) - if nprops & 3: - fp.seek(4 - (nprops & 3), io.SEEK_CUR) # pad - - data = fp.read(i32(fp.read(4))) - - for k, s, v in p: - k = sz(data, k) - if s: - v = sz(data, v) - properties[k] = v - - return properties - - def _load_metrics(self): - # - # font metrics - - metrics = [] - - fp, format, i16, i32 = self._getformat(PCF_METRICS) - - append = metrics.append - - if (format & 0xFF00) == 0x100: - # "compressed" metrics - for i in range(i16(fp.read(2))): - left = i8(fp.read(1)) - 128 - right = i8(fp.read(1)) - 128 - width = i8(fp.read(1)) - 128 - ascent = i8(fp.read(1)) - 128 - descent = i8(fp.read(1)) - 128 - xsize = right - left - ysize = ascent + descent - append((xsize, ysize, left, right, width, ascent, descent, 0)) - - else: - # "jumbo" metrics - for i in range(i32(fp.read(4))): - left = i16(fp.read(2)) - right = i16(fp.read(2)) - width = i16(fp.read(2)) - ascent = i16(fp.read(2)) - descent = i16(fp.read(2)) - attributes = i16(fp.read(2)) - xsize = right - left - ysize = ascent + descent - append((xsize, ysize, left, right, width, ascent, descent, attributes)) - - return metrics - - def _load_bitmaps(self, metrics): - # - # bitmap data - - bitmaps = [] - - fp, format, i16, i32 = self._getformat(PCF_BITMAPS) - - nbitmaps = 
i32(fp.read(4)) - - if nbitmaps != len(metrics): - msg = "Wrong number of bitmaps" - raise OSError(msg) - - offsets = [] - for i in range(nbitmaps): - offsets.append(i32(fp.read(4))) - - bitmap_sizes = [] - for i in range(4): - bitmap_sizes.append(i32(fp.read(4))) - - # byteorder = format & 4 # non-zero => MSB - bitorder = format & 8 # non-zero => MSB - padindex = format & 3 - - bitmapsize = bitmap_sizes[padindex] - offsets.append(bitmapsize) - - data = fp.read(bitmapsize) - - pad = BYTES_PER_ROW[padindex] - mode = "1;R" - if bitorder: - mode = "1" - - for i in range(nbitmaps): - xsize, ysize = metrics[i][:2] - b, e = offsets[i : i + 2] - bitmaps.append( - Image.frombytes("1", (xsize, ysize), data[b:e], "raw", mode, pad(xsize)) - ) - - return bitmaps - - def _load_encoding(self): - fp, format, i16, i32 = self._getformat(PCF_BDF_ENCODINGS) - - first_col, last_col = i16(fp.read(2)), i16(fp.read(2)) - first_row, last_row = i16(fp.read(2)), i16(fp.read(2)) - - i16(fp.read(2)) # default - - nencoding = (last_col - first_col + 1) * (last_row - first_row + 1) - - # map character code to bitmap index - encoding = [None] * min(256, nencoding) - - encoding_offsets = [i16(fp.read(2)) for _ in range(nencoding)] - - for i in range(first_col, len(encoding)): - try: - encoding_offset = encoding_offsets[ - ord(bytearray([i]).decode(self.charset_encoding)) - ] - if encoding_offset != 0xFFFF: - encoding[i] = encoding_offset - except UnicodeDecodeError: - # character is not supported in selected encoding - pass - - return encoding diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/PIL/TgaImagePlugin.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/PIL/TgaImagePlugin.py deleted file mode 100644 index 67dfc3d3c8e5726c5885b1c62cdcb2553854c4dc..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/PIL/TgaImagePlugin.py +++ /dev/null @@ -1,255 +0,0 @@ -# -# The Python Imaging Library. -# $Id$ -# -# TGA file handling -# -# History: -# 95-09-01 fl created (reads 24-bit files only) -# 97-01-04 fl support more TGA versions, including compressed images -# 98-07-04 fl fixed orientation and alpha layer bugs -# 98-09-11 fl fixed orientation for runlength decoder -# -# Copyright (c) Secret Labs AB 1997-98. -# Copyright (c) Fredrik Lundh 1995-97. -# -# See the README file for information on usage and redistribution. -# - - -import warnings - -from . import Image, ImageFile, ImagePalette -from ._binary import i16le as i16 -from ._binary import o8 -from ._binary import o16le as o16 - -# -# -------------------------------------------------------------------- -# Read RGA file - - -MODES = { - # map imagetype/depth to rawmode - (1, 8): "P", - (3, 1): "1", - (3, 8): "L", - (3, 16): "LA", - (2, 16): "BGR;5", - (2, 24): "BGR", - (2, 32): "BGRA", -} - - -## -# Image plugin for Targa files. - - -class TgaImageFile(ImageFile.ImageFile): - format = "TGA" - format_description = "Targa" - - def _open(self): - # process header - s = self.fp.read(18) - - id_len = s[0] - - colormaptype = s[1] - imagetype = s[2] - - depth = s[16] - - flags = s[17] - - self._size = i16(s, 12), i16(s, 14) - - # validate header fields - if ( - colormaptype not in (0, 1) - or self.size[0] <= 0 - or self.size[1] <= 0 - or depth not in (1, 8, 16, 24, 32) - ): - msg = "not a TGA file" - raise SyntaxError(msg) - - # image mode - if imagetype in (3, 11): - self.mode = "L" - if depth == 1: - self.mode = "1" # ??? 
- elif depth == 16: - self.mode = "LA" - elif imagetype in (1, 9): - self.mode = "P" - elif imagetype in (2, 10): - self.mode = "RGB" - if depth == 32: - self.mode = "RGBA" - else: - msg = "unknown TGA mode" - raise SyntaxError(msg) - - # orientation - orientation = flags & 0x30 - self._flip_horizontally = orientation in [0x10, 0x30] - if orientation in [0x20, 0x30]: - orientation = 1 - elif orientation in [0, 0x10]: - orientation = -1 - else: - msg = "unknown TGA orientation" - raise SyntaxError(msg) - - self.info["orientation"] = orientation - - if imagetype & 8: - self.info["compression"] = "tga_rle" - - if id_len: - self.info["id_section"] = self.fp.read(id_len) - - if colormaptype: - # read palette - start, size, mapdepth = i16(s, 3), i16(s, 5), s[7] - if mapdepth == 16: - self.palette = ImagePalette.raw( - "BGR;15", b"\0" * 2 * start + self.fp.read(2 * size) - ) - elif mapdepth == 24: - self.palette = ImagePalette.raw( - "BGR", b"\0" * 3 * start + self.fp.read(3 * size) - ) - elif mapdepth == 32: - self.palette = ImagePalette.raw( - "BGRA", b"\0" * 4 * start + self.fp.read(4 * size) - ) - - # setup tile descriptor - try: - rawmode = MODES[(imagetype & 7, depth)] - if imagetype & 8: - # compressed - self.tile = [ - ( - "tga_rle", - (0, 0) + self.size, - self.fp.tell(), - (rawmode, orientation, depth), - ) - ] - else: - self.tile = [ - ( - "raw", - (0, 0) + self.size, - self.fp.tell(), - (rawmode, 0, orientation), - ) - ] - except KeyError: - pass # cannot decode - - def load_end(self): - if self._flip_horizontally: - self.im = self.im.transpose(Image.Transpose.FLIP_LEFT_RIGHT) - - -# -# -------------------------------------------------------------------- -# Write TGA file - - -SAVE = { - "1": ("1", 1, 0, 3), - "L": ("L", 8, 0, 3), - "LA": ("LA", 16, 0, 3), - "P": ("P", 8, 1, 1), - "RGB": ("BGR", 24, 0, 2), - "RGBA": ("BGRA", 32, 0, 2), -} - - -def _save(im, fp, filename): - try: - rawmode, bits, colormaptype, imagetype = SAVE[im.mode] - except KeyError as e: - msg = f"cannot write mode {im.mode} as TGA" - raise OSError(msg) from e - - if "rle" in im.encoderinfo: - rle = im.encoderinfo["rle"] - else: - compression = im.encoderinfo.get("compression", im.info.get("compression")) - rle = compression == "tga_rle" - if rle: - imagetype += 8 - - id_section = im.encoderinfo.get("id_section", im.info.get("id_section", "")) - id_len = len(id_section) - if id_len > 255: - id_len = 255 - id_section = id_section[:255] - warnings.warn("id_section has been trimmed to 255 characters") - - if colormaptype: - palette = im.im.getpalette("RGB", "BGR") - colormaplength, colormapentry = len(palette) // 3, 24 - else: - colormaplength, colormapentry = 0, 0 - - if im.mode in ("LA", "RGBA"): - flags = 8 - else: - flags = 0 - - orientation = im.encoderinfo.get("orientation", im.info.get("orientation", -1)) - if orientation > 0: - flags = flags | 0x20 - - fp.write( - o8(id_len) - + o8(colormaptype) - + o8(imagetype) - + o16(0) # colormapfirst - + o16(colormaplength) - + o8(colormapentry) - + o16(0) - + o16(0) - + o16(im.size[0]) - + o16(im.size[1]) - + o8(bits) - + o8(flags) - ) - - if id_section: - fp.write(id_section) - - if colormaptype: - fp.write(palette) - - if rle: - ImageFile._save( - im, fp, [("tga_rle", (0, 0) + im.size, 0, (rawmode, orientation))] - ) - else: - ImageFile._save( - im, fp, [("raw", (0, 0) + im.size, 0, (rawmode, 0, orientation))] - ) - - # write targa version 2 footer - fp.write(b"\000" * 8 + b"TRUEVISION-XFILE." 
+ b"\000") - - -# -# -------------------------------------------------------------------- -# Registry - - -Image.register_open(TgaImageFile.format, TgaImageFile) -Image.register_save(TgaImageFile.format, _save) - -Image.register_extensions(TgaImageFile.format, [".tga", ".icb", ".vda", ".vst"]) - -Image.register_mime(TgaImageFile.format, "image/x-tga") diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/chromadb/ingest/__init__.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/chromadb/ingest/__init__.py deleted file mode 100644 index 38d6a4fa508e757ae33e7ddbd46f038e607ba610..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/chromadb/ingest/__init__.py +++ /dev/null @@ -1,102 +0,0 @@ -from abc import abstractmethod -from typing import Callable, Optional, Sequence -from chromadb.types import ( - SubmitEmbeddingRecord, - EmbeddingRecord, - SeqId, - Vector, - ScalarEncoding, -) -from chromadb.config import Component -from uuid import UUID -import array - - -def encode_vector(vector: Vector, encoding: ScalarEncoding) -> bytes: - """Encode a vector into a byte array.""" - - if encoding == ScalarEncoding.FLOAT32: - return array.array("f", vector).tobytes() - elif encoding == ScalarEncoding.INT32: - return array.array("i", vector).tobytes() - else: - raise ValueError(f"Unsupported encoding: {encoding.value}") - - -def decode_vector(vector: bytes, encoding: ScalarEncoding) -> Vector: - """Decode a byte array into a vector""" - - if encoding == ScalarEncoding.FLOAT32: - return array.array("f", vector).tolist() - elif encoding == ScalarEncoding.INT32: - return array.array("i", vector).tolist() - else: - raise ValueError(f"Unsupported encoding: {encoding.value}") - - -class Producer(Component): - """Interface for writing embeddings to an ingest stream""" - - @abstractmethod - def create_topic(self, topic_name: str) -> None: - pass - - @abstractmethod - def delete_topic(self, topic_name: str) -> None: - pass - - @abstractmethod - def submit_embedding( - self, topic_name: str, embedding: SubmitEmbeddingRecord - ) -> SeqId: - """Add an embedding record to the given topic. Returns the SeqID of the record.""" - pass - - -ConsumerCallbackFn = Callable[[Sequence[EmbeddingRecord]], None] - - -class Consumer(Component): - """Interface for reading embeddings off an ingest stream""" - - @abstractmethod - def subscribe( - self, - topic_name: str, - consume_fn: ConsumerCallbackFn, - start: Optional[SeqId] = None, - end: Optional[SeqId] = None, - id: Optional[UUID] = None, - ) -> UUID: - """Register a function that will be called to recieve embeddings for a given - topic. The given function may be called any number of times, with any number of - records, and may be called concurrently. - - Only records between start (exclusive) and end (inclusive) SeqIDs will be - returned. If start is None, the first record returned will be the next record - generated, not including those generated before creating the subscription. If - end is None, the consumer will consume indefinitely, otherwise it will - automatically be unsubscribed when the end SeqID is reached. - - If the function throws an exception, the function may be called again with the - same or different records. - - Takes an optional UUID as a unique subscription ID. 
If no ID is provided, a new - ID will be generated and returned.""" - pass - - @abstractmethod - def unsubscribe(self, subscription_id: UUID) -> None: - """Unregister a subscription. The consume function will no longer be invoked, - and resources associated with the subscription will be released.""" - pass - - @abstractmethod - def min_seqid(self) -> SeqId: - """Return the minimum possible SeqID in this implementation.""" - pass - - @abstractmethod - def max_seqid(self) -> SeqId: - """Return the maximum possible SeqID in this implementation.""" - pass diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/docx/opc/__init__.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/docx/opc/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/cihyFjudo/fairness-paper-search/Andrew Jackson Jihad - People Who Can Eat People Are The Luckiest People In The World The Story Behind the Controversial Title and Lyrics.md b/spaces/cihyFjudo/fairness-paper-search/Andrew Jackson Jihad - People Who Can Eat People Are The Luckiest People In The World The Story Behind the Controversial Title and Lyrics.md deleted file mode 100644 index 371e1de304a55575eefb1aa56e7dbd5e7dd5f4ee..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Andrew Jackson Jihad - People Who Can Eat People Are The Luckiest People In The World The Story Behind the Controversial Title and Lyrics.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Andrew Jackson Jihad - People Who Can Eat People Are The Luckiest People In The World.rar


    DOWNLOAD https://tinurli.com/2uwjCW



    - - aaccfb2cb3
    -
    -
    -

    diff --git a/spaces/cihyFjudo/fairness-paper-search/Design And Art Alex Coles Pdf 16 A Comprehensive Review of Contemporary Art and Design.md b/spaces/cihyFjudo/fairness-paper-search/Design And Art Alex Coles Pdf 16 A Comprehensive Review of Contemporary Art and Design.md deleted file mode 100644 index 3ad62d6b25d3fcd302d4abc03f9d1bc8d15b575b..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Design And Art Alex Coles Pdf 16 A Comprehensive Review of Contemporary Art and Design.md +++ /dev/null @@ -1,12 +0,0 @@ -
    -

    With this critical gaze, Onomatopee turns into an active laboratory, offering makers, agents, thinkers and performers a playground for experiment. We are not just interested in design solutions, but rather in the possibility of creating new opportunities for action. This implies awareness of how you work, the process.

    -

    Design And Art Alex Coles Pdf 16


    DOWNLOAD https://tinurli.com/2uwihK



    -

    It is to those processes, which anticipate and inform the design practice, that we turn our attention. The RE-design-er offers us the ability to continuously question ourselves.
    A vibrant public program invites you to experience and discuss, out loud and freely: how, why, and for whom does design happen?

    -

    What should a designer know? And how do their different expertises and critical understanding inform their practice? Onomatopee invites five designers to respond by crafting an object and reflecting on how they think and work.

    -

    Inspired by the seven deadly sins, this collective exhibition targets the monetized culture of design fairs and challenges the practices represented within. Via evocative performances, the group of 14 designers presents the sinful side of design celebrations, seen through the figurative teachings of Sloth, Pride, Wrath, Envy, Greed, Lust and Gluttony.

    -

    -

    Design thinking has created divisions in the discipline: designers are either too theory-driven or simply practitioners. Those feeling lost can easily turn to a language meant to inspire creative production in easy-to-pitch ways, where rhetoric uses design to keep power at bay and to celebrate hegemonic beliefs that are used to indoctrinate designers into a bad education, incapable of imagining different futures. If you take away the post-its, the A3 papers and the markers, can designers think?

    -

    about the author
    Danah Abdulla is a Palestinian-Canadian designer, educator and researcher interested in new narratives and practices in design that push the disciplinary boundaries and definitions of the discipline. She is Programme Director of Graphic Design at Camberwell, Chelsea and Wimbledon Colleges of Arts (University of the Arts London). She has previously held positions at Brunel University London and London College of Communication (University of the Arts London). Danah obtained her Ph.D. in Design from Goldsmiths, University of London and is a founding member of the Decolonising Design platform. In 2010, she founded Kalimat Magazine, an independent, nonprofit publication about Arab thought and culture. Her research focuses on decolonising design, possibilities of design education, design culture(s) with a focus on the Arab region, the politics of design, publishing, and social design.

    -

    Dr James Dyer is senior lecturer of Graphic Design at the University of Huddersfield.
    Nick Deakin is senior lecturer of Graphic Design at Leeds Arts University.
    Prof. Alex Coles is a design critic and Professor of Transdisciplinarity at the University of Huddersfield.
    Professor Johanna Drucker is the Breslauer Professor of Bibliographical Studies and Distinguished Professor in the Department of Information Studies at The University of California.

    aaccfb2cb3
    -
    -
    \ No newline at end of file diff --git a/spaces/cihyFjudo/fairness-paper-search/HD Online Player (Bengali Hd Movie Download 1080p) Stream or Download High-Quality Bengali Movies.md b/spaces/cihyFjudo/fairness-paper-search/HD Online Player (Bengali Hd Movie Download 1080p) Stream or Download High-Quality Bengali Movies.md deleted file mode 100644 index b02d0f88c0dc992d8ceac84726182b430b1220db..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/HD Online Player (Bengali Hd Movie Download 1080p) Stream or Download High-Quality Bengali Movies.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Vce Player 2.2.1 Crack 20


    Download Zip >>> https://tinurli.com/2uwjTN



    - - aaccfb2cb3
    -
    -
    -

    diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/PIL/WmfImagePlugin.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/PIL/WmfImagePlugin.py deleted file mode 100644 index 0ecab56a824fd3917067fd4b05c530f4abce75a3..0000000000000000000000000000000000000000 --- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/PIL/WmfImagePlugin.py +++ /dev/null @@ -1,178 +0,0 @@ -# -# The Python Imaging Library -# $Id$ -# -# WMF stub codec -# -# history: -# 1996-12-14 fl Created -# 2004-02-22 fl Turned into a stub driver -# 2004-02-23 fl Added EMF support -# -# Copyright (c) Secret Labs AB 1997-2004. All rights reserved. -# Copyright (c) Fredrik Lundh 1996. -# -# See the README file for information on usage and redistribution. -# -# WMF/EMF reference documentation: -# https://winprotocoldoc.blob.core.windows.net/productionwindowsarchives/MS-WMF/[MS-WMF].pdf -# http://wvware.sourceforge.net/caolan/index.html -# http://wvware.sourceforge.net/caolan/ora-wmf.html - -from . import Image, ImageFile -from ._binary import i16le as word -from ._binary import si16le as short -from ._binary import si32le as _long - -_handler = None - - -def register_handler(handler): - """ - Install application-specific WMF image handler. - - :param handler: Handler object. - """ - global _handler - _handler = handler - - -if hasattr(Image.core, "drawwmf"): - # install default handler (windows only) - - class WmfHandler: - def open(self, im): - im.mode = "RGB" - self.bbox = im.info["wmf_bbox"] - - def load(self, im): - im.fp.seek(0) # rewind - return Image.frombytes( - "RGB", - im.size, - Image.core.drawwmf(im.fp.read(), im.size, self.bbox), - "raw", - "BGR", - (im.size[0] * 3 + 3) & -4, - -1, - ) - - register_handler(WmfHandler()) - -# -# -------------------------------------------------------------------- -# Read WMF file - - -def _accept(prefix): - return ( - prefix[:6] == b"\xd7\xcd\xc6\x9a\x00\x00" or prefix[:4] == b"\x01\x00\x00\x00" - ) - - -## -# Image plugin for Windows metafiles. 
- - -class WmfStubImageFile(ImageFile.StubImageFile): - format = "WMF" - format_description = "Windows Metafile" - - def _open(self): - self._inch = None - - # check placable header - s = self.fp.read(80) - - if s[:6] == b"\xd7\xcd\xc6\x9a\x00\x00": - # placeable windows metafile - - # get units per inch - self._inch = word(s, 14) - - # get bounding box - x0 = short(s, 6) - y0 = short(s, 8) - x1 = short(s, 10) - y1 = short(s, 12) - - # normalize size to 72 dots per inch - self.info["dpi"] = 72 - size = ( - (x1 - x0) * self.info["dpi"] // self._inch, - (y1 - y0) * self.info["dpi"] // self._inch, - ) - - self.info["wmf_bbox"] = x0, y0, x1, y1 - - # sanity check (standard metafile header) - if s[22:26] != b"\x01\x00\t\x00": - msg = "Unsupported WMF file format" - raise SyntaxError(msg) - - elif s[:4] == b"\x01\x00\x00\x00" and s[40:44] == b" EMF": - # enhanced metafile - - # get bounding box - x0 = _long(s, 8) - y0 = _long(s, 12) - x1 = _long(s, 16) - y1 = _long(s, 20) - - # get frame (in 0.01 millimeter units) - frame = _long(s, 24), _long(s, 28), _long(s, 32), _long(s, 36) - - size = x1 - x0, y1 - y0 - - # calculate dots per inch from bbox and frame - xdpi = 2540.0 * (x1 - y0) / (frame[2] - frame[0]) - ydpi = 2540.0 * (y1 - y0) / (frame[3] - frame[1]) - - self.info["wmf_bbox"] = x0, y0, x1, y1 - - if xdpi == ydpi: - self.info["dpi"] = xdpi - else: - self.info["dpi"] = xdpi, ydpi - - else: - msg = "Unsupported file format" - raise SyntaxError(msg) - - self.mode = "RGB" - self._size = size - - loader = self._load() - if loader: - loader.open(self) - - def _load(self): - return _handler - - def load(self, dpi=None): - if dpi is not None and self._inch is not None: - self.info["dpi"] = dpi - x0, y0, x1, y1 = self.info["wmf_bbox"] - self._size = ( - (x1 - x0) * self.info["dpi"] // self._inch, - (y1 - y0) * self.info["dpi"] // self._inch, - ) - return super().load() - - -def _save(im, fp, filename): - if _handler is None or not hasattr(_handler, "save"): - msg = "WMF save handler not installed" - raise OSError(msg) - _handler.save(im, fp, filename) - - -# -# -------------------------------------------------------------------- -# Registry stuff - - -Image.register_open(WmfStubImageFile.format, WmfStubImageFile, _accept) -Image.register_save(WmfStubImageFile.format, _save) - -Image.register_extensions(WmfStubImageFile.format, [".wmf", ".emf"]) diff --git a/spaces/codedog-ai/edu-assistant/tests/learning_tasks/test_qa.py b/spaces/codedog-ai/edu-assistant/tests/learning_tasks/test_qa.py deleted file mode 100644 index 42fafa0bec8eb5b164e99fa235897c54658a5365..0000000000000000000000000000000000000000 --- a/spaces/codedog-ai/edu-assistant/tests/learning_tasks/test_qa.py +++ /dev/null @@ -1,52 +0,0 @@ -from unittest.mock import MagicMock, patch - -from langchain import PromptTemplate - -from edu_assistant.learning_tasks import QaTask -from edu_assistant.learning_tasks.qa import TEMPLATE_CHAT, TEMPLATE_ONCE - - -@patch.object(QaTask, "_init_llm") -@patch.object(QaTask, "_build_once_chain") -def test_init_without_knowledge(mocked_build_once_chain, mocked_init_llm): - task = QaTask(instruction="test") - - assert task._chat_prompt == PromptTemplate.from_template(TEMPLATE_CHAT.format(instruction="test")) - assert task._once_prompt == PromptTemplate.from_template(TEMPLATE_ONCE.format(instruction="test")) - assert task._knowledge is None - mocked_build_once_chain.assert_called_once() - - -@patch.object(QaTask, "_init_llm") -@patch.object(QaTask, "_build_once_chain") -@patch.object(QaTask, 
"_create_session_chain") -def test_ask_with_session(mocked_create_session_chain, mocked_build_once_chain, mocked_init_llm): - mocked_chain = MagicMock(return_value={"response": "ok"}) - mocked_build_once_chain.return_value = mocked_chain - mocked_create_session_chain.return_value = mocked_chain - - task = QaTask(instruction="test") - - with patch.object(task, "_create_session_id") as mock_create_id: - mock_create_id.return_value = 123 - result = task.ask("how are you?", session=True) - - mock_create_id.assert_called_once() - assert "session_id" in result - assert result["session_id"] == 123 - assert "response" in result - assert result["response"] == "ok" - - -@patch.object(QaTask, "_init_llm") -@patch.object(QaTask, "_build_once_chain") -def test_ask_without_session(mocked_build_once_chain, mocked_init_llm): - mocked_llm = MagicMock() - mocked_llm.run.return_value = {"result": "ok"} - mocked_build_once_chain.return_value = mocked_llm - task = QaTask(instruction="test") - - result = task.ask("how are you?", session=False) - - mocked_build_once_chain.assert_called_once() - assert "session_id" not in result diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/aacenc_is.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/aacenc_is.c deleted file mode 100644 index 1810790d8816c8751f5dc87ea2ab5a5633af3143..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/aacenc_is.c +++ /dev/null @@ -1,158 +0,0 @@ -/* - * AAC encoder intensity stereo - * Copyright (C) 2015 Rostislav Pehlivanov - * - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. - * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -/** - * @file - * AAC encoder Intensity Stereo - * @author Rostislav Pehlivanov ( atomnuker gmail com ) - */ - -#include "aacenc.h" -#include "aacenc_utils.h" -#include "aacenc_is.h" -#include "aacenc_quantization.h" - -struct AACISError ff_aac_is_encoding_err(AACEncContext *s, ChannelElement *cpe, - int start, int w, int g, float ener0, - float ener1, float ener01, - int use_pcoeffs, int phase) -{ - int i, w2; - SingleChannelElement *sce0 = &cpe->ch[0]; - SingleChannelElement *sce1 = &cpe->ch[1]; - float *L = use_pcoeffs ? sce0->pcoeffs : sce0->coeffs; - float *R = use_pcoeffs ? 
sce1->pcoeffs : sce1->coeffs; - float *L34 = &s->scoefs[256*0], *R34 = &s->scoefs[256*1]; - float *IS = &s->scoefs[256*2], *I34 = &s->scoefs[256*3]; - float dist1 = 0.0f, dist2 = 0.0f; - struct AACISError is_error = {0}; - - if (ener01 <= 0 || ener0 <= 0) { - is_error.pass = 0; - return is_error; - } - - for (w2 = 0; w2 < sce0->ics.group_len[w]; w2++) { - FFPsyBand *band0 = &s->psy.ch[s->cur_channel+0].psy_bands[(w+w2)*16+g]; - FFPsyBand *band1 = &s->psy.ch[s->cur_channel+1].psy_bands[(w+w2)*16+g]; - int is_band_type, is_sf_idx = FFMAX(1, sce0->sf_idx[w*16+g]-4); - float e01_34 = phase*pos_pow34(ener1/ener0); - float maxval, dist_spec_err = 0.0f; - float minthr = FFMIN(band0->threshold, band1->threshold); - for (i = 0; i < sce0->ics.swb_sizes[g]; i++) - IS[i] = (L[start+(w+w2)*128+i] + phase*R[start+(w+w2)*128+i])*sqrt(ener0/ener01); - s->abs_pow34(L34, &L[start+(w+w2)*128], sce0->ics.swb_sizes[g]); - s->abs_pow34(R34, &R[start+(w+w2)*128], sce0->ics.swb_sizes[g]); - s->abs_pow34(I34, IS, sce0->ics.swb_sizes[g]); - maxval = find_max_val(1, sce0->ics.swb_sizes[g], I34); - is_band_type = find_min_book(maxval, is_sf_idx); - dist1 += quantize_band_cost(s, &L[start + (w+w2)*128], L34, - sce0->ics.swb_sizes[g], - sce0->sf_idx[w*16+g], - sce0->band_type[w*16+g], - s->lambda / band0->threshold, INFINITY, NULL, NULL); - dist1 += quantize_band_cost(s, &R[start + (w+w2)*128], R34, - sce1->ics.swb_sizes[g], - sce1->sf_idx[w*16+g], - sce1->band_type[w*16+g], - s->lambda / band1->threshold, INFINITY, NULL, NULL); - dist2 += quantize_band_cost(s, IS, I34, sce0->ics.swb_sizes[g], - is_sf_idx, is_band_type, - s->lambda / minthr, INFINITY, NULL, NULL); - for (i = 0; i < sce0->ics.swb_sizes[g]; i++) { - dist_spec_err += (L34[i] - I34[i])*(L34[i] - I34[i]); - dist_spec_err += (R34[i] - I34[i]*e01_34)*(R34[i] - I34[i]*e01_34); - } - dist_spec_err *= s->lambda / minthr; - dist2 += dist_spec_err; - } - - is_error.pass = dist2 <= dist1; - is_error.phase = phase; - is_error.error = dist2 - dist1; - is_error.dist1 = dist1; - is_error.dist2 = dist2; - is_error.ener01 = ener01; - - return is_error; -} - -void ff_aac_search_for_is(AACEncContext *s, AVCodecContext *avctx, ChannelElement *cpe) -{ - SingleChannelElement *sce0 = &cpe->ch[0]; - SingleChannelElement *sce1 = &cpe->ch[1]; - int start = 0, count = 0, w, w2, g, i, prev_sf1 = -1, prev_bt = -1, prev_is = 0; - const float freq_mult = avctx->sample_rate/(1024.0f/sce0->ics.num_windows)/2.0f; - uint8_t nextband1[128]; - - if (!cpe->common_window) - return; - - /** Scout out next nonzero bands */ - ff_init_nextband_map(sce1, nextband1); - - for (w = 0; w < sce0->ics.num_windows; w += sce0->ics.group_len[w]) { - start = 0; - for (g = 0; g < sce0->ics.num_swb; g++) { - if (start*freq_mult > INT_STEREO_LOW_LIMIT*(s->lambda/170.0f) && - cpe->ch[0].band_type[w*16+g] != NOISE_BT && !cpe->ch[0].zeroes[w*16+g] && - cpe->ch[1].band_type[w*16+g] != NOISE_BT && !cpe->ch[1].zeroes[w*16+g] && - ff_sfdelta_can_remove_band(sce1, nextband1, prev_sf1, w*16+g)) { - float ener0 = 0.0f, ener1 = 0.0f, ener01 = 0.0f, ener01p = 0.0f; - struct AACISError ph_err1, ph_err2, *best; - for (w2 = 0; w2 < sce0->ics.group_len[w]; w2++) { - for (i = 0; i < sce0->ics.swb_sizes[g]; i++) { - float coef0 = sce0->coeffs[start+(w+w2)*128+i]; - float coef1 = sce1->coeffs[start+(w+w2)*128+i]; - ener0 += coef0*coef0; - ener1 += coef1*coef1; - ener01 += (coef0 + coef1)*(coef0 + coef1); - ener01p += (coef0 - coef1)*(coef0 - coef1); - } - } - ph_err1 = ff_aac_is_encoding_err(s, cpe, start, w, g, - ener0, ener1, 
ener01p, 0, -1); - ph_err2 = ff_aac_is_encoding_err(s, cpe, start, w, g, - ener0, ener1, ener01, 0, +1); - best = (ph_err1.pass && ph_err1.error < ph_err2.error) ? &ph_err1 : &ph_err2; - if (best->pass) { - cpe->is_mask[w*16+g] = 1; - cpe->ms_mask[w*16+g] = 0; - cpe->ch[0].is_ener[w*16+g] = sqrt(ener0 / best->ener01); - cpe->ch[1].is_ener[w*16+g] = ener0/ener1; - cpe->ch[1].band_type[w*16+g] = (best->phase > 0) ? INTENSITY_BT : INTENSITY_BT2; - if (prev_is && prev_bt != cpe->ch[1].band_type[w*16+g]) { - /** Flip M/S mask and pick the other CB, since it encodes more efficiently */ - cpe->ms_mask[w*16+g] = 1; - cpe->ch[1].band_type[w*16+g] = (best->phase > 0) ? INTENSITY_BT2 : INTENSITY_BT; - } - prev_bt = cpe->ch[1].band_type[w*16+g]; - count++; - } - } - if (!sce1->zeroes[w*16+g] && sce1->band_type[w*16+g] < RESERVED_BT) - prev_sf1 = sce1->sf_idx[w*16+g]; - prev_is = cpe->is_mask[w*16+g]; - start += sce0->ics.swb_sizes[g]; - } - } - cpe->is_mode = !!count; -} diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/h264_mp4toannexb_bsf.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/h264_mp4toannexb_bsf.c deleted file mode 100644 index d11be455c280a7a59cd1ce46df6b5f04bf7154df..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/h264_mp4toannexb_bsf.c +++ /dev/null @@ -1,322 +0,0 @@ -/* - * H.264 MP4 to Annex B byte stream format filter - * Copyright (c) 2007 Benoit Fouet - * - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. - * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -#include - -#include "libavutil/avassert.h" -#include "libavutil/intreadwrite.h" -#include "libavutil/mem.h" - -#include "bsf.h" -#include "bsf_internal.h" -#include "bytestream.h" -#include "defs.h" -#include "h264.h" - -typedef struct H264BSFContext { - uint8_t *sps; - uint8_t *pps; - int sps_size; - int pps_size; - uint8_t length_size; - uint8_t new_idr; - uint8_t idr_sps_seen; - uint8_t idr_pps_seen; - int extradata_parsed; -} H264BSFContext; - -static void count_or_copy(uint8_t **out, uint64_t *out_size, - const uint8_t *in, int in_size, int ps, int copy) -{ - uint8_t start_code_size = ps < 0 ? 0 : *out_size == 0 || ps ? 
4 : 3; - - if (copy) { - memcpy(*out + start_code_size, in, in_size); - if (start_code_size == 4) { - AV_WB32(*out, 1); - } else if (start_code_size) { - (*out)[0] = - (*out)[1] = 0; - (*out)[2] = 1; - } - *out += start_code_size + in_size; - } - *out_size += start_code_size + in_size; -} - -static int h264_extradata_to_annexb(AVBSFContext *ctx, const int padding) -{ - H264BSFContext *s = ctx->priv_data; - GetByteContext ogb, *gb = &ogb; - uint16_t unit_size; - uint32_t total_size = 0; - uint8_t *out = NULL, unit_nb, sps_done = 0; - static const uint8_t nalu_header[4] = { 0, 0, 0, 1 }; - int length_size, pps_offset = 0; - - bytestream2_init(gb, ctx->par_in->extradata, ctx->par_in->extradata_size); - - bytestream2_skipu(gb, 4); - - /* retrieve length coded size */ - length_size = (bytestream2_get_byteu(gb) & 0x3) + 1; - - /* retrieve sps and pps unit(s) */ - unit_nb = bytestream2_get_byteu(gb) & 0x1f; /* number of sps unit(s) */ - if (!unit_nb) { - goto pps; - } - - while (unit_nb--) { - int err; - - /* possible overread ok due to padding */ - unit_size = bytestream2_get_be16u(gb); - total_size += unit_size + 4; - av_assert1(total_size <= INT_MAX - padding); - if (bytestream2_get_bytes_left(gb) < unit_size + !sps_done) { - av_log(ctx, AV_LOG_ERROR, "Global extradata truncated, " - "corrupted stream or invalid MP4/AVCC bitstream\n"); - av_free(out); - return AVERROR_INVALIDDATA; - } - if ((err = av_reallocp(&out, total_size + padding)) < 0) - return err; - memcpy(out + total_size - unit_size - 4, nalu_header, 4); - bytestream2_get_bufferu(gb, out + total_size - unit_size, unit_size); -pps: - if (!unit_nb && !sps_done++) { - unit_nb = bytestream2_get_byteu(gb); /* number of pps unit(s) */ - pps_offset = total_size; - } - } - - if (out) - memset(out + total_size, 0, padding); - - if (pps_offset) { - s->sps = out; - s->sps_size = pps_offset; - } else { - av_log(ctx, AV_LOG_WARNING, - "Warning: SPS NALU missing or invalid. " - "The resulting stream may not play.\n"); - } - if (pps_offset < total_size) { - s->pps = out + pps_offset; - s->pps_size = total_size - pps_offset; - } else { - av_log(ctx, AV_LOG_WARNING, - "Warning: PPS NALU missing or invalid. 
" - "The resulting stream may not play.\n"); - } - - av_freep(&ctx->par_out->extradata); - ctx->par_out->extradata = out; - ctx->par_out->extradata_size = total_size; - - return length_size; -} - -static int h264_mp4toannexb_init(AVBSFContext *ctx) -{ - H264BSFContext *s = ctx->priv_data; - int extra_size = ctx->par_in->extradata_size; - int ret; - - /* retrieve sps and pps NAL units from extradata */ - if (!extra_size || - (extra_size >= 3 && AV_RB24(ctx->par_in->extradata) == 1) || - (extra_size >= 4 && AV_RB32(ctx->par_in->extradata) == 1)) { - av_log(ctx, AV_LOG_VERBOSE, - "The input looks like it is Annex B already\n"); - } else if (extra_size >= 7) { - ret = h264_extradata_to_annexb(ctx, AV_INPUT_BUFFER_PADDING_SIZE); - if (ret < 0) - return ret; - - s->length_size = ret; - s->new_idr = 1; - s->idr_sps_seen = 0; - s->idr_pps_seen = 0; - s->extradata_parsed = 1; - } else { - av_log(ctx, AV_LOG_ERROR, "Invalid extradata size: %d\n", extra_size); - return AVERROR_INVALIDDATA; - } - - return 0; -} - -static int h264_mp4toannexb_filter(AVBSFContext *ctx, AVPacket *opkt) -{ - H264BSFContext *s = ctx->priv_data; - AVPacket *in; - uint8_t unit_type, new_idr, sps_seen, pps_seen; - const uint8_t *buf; - const uint8_t *buf_end; - uint8_t *out; - uint64_t out_size; - int ret; - - ret = ff_bsf_get_packet(ctx, &in); - if (ret < 0) - return ret; - - /* nothing to filter */ - if (!s->extradata_parsed) { - av_packet_move_ref(opkt, in); - av_packet_free(&in); - return 0; - } - - buf_end = in->data + in->size; - -#define LOG_ONCE(...) \ - if (j) \ - av_log(__VA_ARGS__) - for (int j = 0; j < 2; j++) { - buf = in->data; - new_idr = s->new_idr; - sps_seen = s->idr_sps_seen; - pps_seen = s->idr_pps_seen; - out_size = 0; - - do { - uint32_t nal_size = 0; - - /* possible overread ok due to padding */ - for (int i = 0; i < s->length_size; i++) - nal_size = (nal_size << 8) | buf[i]; - - buf += s->length_size; - - /* This check requires the cast as the right side might - * otherwise be promoted to an unsigned value. */ - if ((int64_t)nal_size > buf_end - buf) { - ret = AVERROR_INVALIDDATA; - goto fail; - } - - if (!nal_size) - continue; - - unit_type = *buf & 0x1f; - - if (unit_type == H264_NAL_SPS) { - sps_seen = new_idr = 1; - } else if (unit_type == H264_NAL_PPS) { - pps_seen = new_idr = 1; - /* if SPS has not been seen yet, prepend the AVCC one to PPS */ - if (!sps_seen) { - if (!s->sps_size) { - LOG_ONCE(ctx, AV_LOG_WARNING, "SPS not present in the stream, nor in AVCC, stream may be unreadable\n"); - } else { - count_or_copy(&out, &out_size, s->sps, s->sps_size, -1, j); - sps_seen = 1; - } - } - } - - /* If this is a new IDR picture following an IDR picture, reset the idr flag. - * Just check first_mb_in_slice to be 0 as this is the simplest solution. - * This could be checking idr_pic_id instead, but would complexify the parsing. 
*/ - if (!new_idr && unit_type == H264_NAL_IDR_SLICE && (buf[1] & 0x80)) - new_idr = 1; - - /* prepend only to the first type 5 NAL unit of an IDR picture, if no sps/pps are already present */ - if (new_idr && unit_type == H264_NAL_IDR_SLICE && !sps_seen && !pps_seen) { - if (ctx->par_out->extradata) - count_or_copy(&out, &out_size, ctx->par_out->extradata, - ctx->par_out->extradata_size, -1, j); - new_idr = 0; - /* if only SPS has been seen, also insert PPS */ - } else if (new_idr && unit_type == H264_NAL_IDR_SLICE && sps_seen && !pps_seen) { - if (!s->pps_size) { - LOG_ONCE(ctx, AV_LOG_WARNING, "PPS not present in the stream, nor in AVCC, stream may be unreadable\n"); - } else { - count_or_copy(&out, &out_size, s->pps, s->pps_size, -1, j); - } - } - - count_or_copy(&out, &out_size, buf, nal_size, - unit_type == H264_NAL_SPS || unit_type == H264_NAL_PPS, j); - if (!new_idr && unit_type == H264_NAL_SLICE) { - new_idr = 1; - sps_seen = 0; - pps_seen = 0; - } - - buf += nal_size; - } while (buf < buf_end); - - if (!j) { - if (out_size > INT_MAX - AV_INPUT_BUFFER_PADDING_SIZE) { - ret = AVERROR_INVALIDDATA; - goto fail; - } - ret = av_new_packet(opkt, out_size); - if (ret < 0) - goto fail; - out = opkt->data; - } - } -#undef LOG_ONCE - - av_assert1(out_size == opkt->size); - - s->new_idr = new_idr; - s->idr_sps_seen = sps_seen; - s->idr_pps_seen = pps_seen; - - ret = av_packet_copy_props(opkt, in); - if (ret < 0) - goto fail; - -fail: - if (ret < 0) - av_packet_unref(opkt); - av_packet_free(&in); - - return ret; -} - -static void h264_mp4toannexb_flush(AVBSFContext *ctx) -{ - H264BSFContext *s = ctx->priv_data; - - s->idr_sps_seen = 0; - s->idr_pps_seen = 0; - s->new_idr = s->extradata_parsed; -} - -static const enum AVCodecID codec_ids[] = { - AV_CODEC_ID_H264, AV_CODEC_ID_NONE, -}; - -const FFBitStreamFilter ff_h264_mp4toannexb_bsf = { - .p.name = "h264_mp4toannexb", - .p.codec_ids = codec_ids, - .priv_data_size = sizeof(H264BSFContext), - .init = h264_mp4toannexb_init, - .filter = h264_mp4toannexb_filter, - .flush = h264_mp4toannexb_flush, -}; diff --git a/spaces/congsaPfin/Manga-OCR/logs/Drive Movie Download Link Watch Sushant Singh Rajput and Jacqueline Fernandez in Action.md b/spaces/congsaPfin/Manga-OCR/logs/Drive Movie Download Link Watch Sushant Singh Rajput and Jacqueline Fernandez in Action.md deleted file mode 100644 index c9b76204c91a1eb1702977e5cace7af513b449a3..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Drive Movie Download Link Watch Sushant Singh Rajput and Jacqueline Fernandez in Action.md +++ /dev/null @@ -1,44 +0,0 @@ - -

    Drive Movie Download Sushant Singh Rajput: A Thrilling Heist Film You Don't Want to Miss

    -

    What is Drive Movie About?

    -

    A Synopsis of the Plot

    -

    The Theme of Drive Movie

    -

    Who are the Stars of Drive Movie?

    -

    Sushant Singh Rajput as Samar

    -

    Jacqueline Fernandez as Tara

    -

    Boman Irani as Irfan

    -

    Pankaj Tripathi as Hamid

    -

    Where to Watch Drive Movie?

    -

    Drive Movie on Netflix

    -

    Drive Movie Download Options

    -

    Why Watch Drive Movie?

    -

    Drive Movie Review: The Pros and Cons

    -

    Drive Movie Rating: How Critics and Audiences Reacted

    -

    Some Trivia and Facts About Drive Movie

    -

    The Production History of Drive Movie

    -

    The Final Release of Sushant Singh Rajput

    -

    The References to Other Movies in Drive Movie

    -

    Conclusion: Don't Miss Out on Drive Movie

    -

    FAQs About Drive Movie

    | Release Date | Genre | Rating | Runtime | Budget | Box Office Collection |
    | --- | --- | --- | --- | --- | --- |
    | November 1, 2019 | Action/Crime/Thriller | Not Rated | 2 hours 27 minutes | $14 million | $0 (direct-to-streaming) |
    If you are looking for a thrilling heist film with a twist, you might want to check out Drive movie download sushant singh rajput. This is a 2019 Indian action-crime-thriller film that stars Sushant Singh Rajput, Jacqueline Fernandez, Boman Irani, and Pankaj Tripathi in lead roles. The film is directed by Tarun Mansukhani and produced by Karan Johar under Dharma Productions. In this article, we will tell you what Drive movie is about, who are the stars of Drive movie, where to watch Drive movie, why you should watch Drive movie, and some trivia and facts about Drive movie. So buckle up and get ready for a fast-paced ride with Drive movie download sushant singh rajput.

    -

    drive movie download sushant singh rajput


    Download > https://urlca.com/2uOaaM



    -
    -
    \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Enjoy Catching Pokemon in the Game with Dynamons World Pikachu Mod Apk - Download Now.md b/spaces/congsaPfin/Manga-OCR/logs/Enjoy Catching Pokemon in the Game with Dynamons World Pikachu Mod Apk - Download Now.md deleted file mode 100644 index 13a73f0bcd84aa434b3be79e64a16c9cff18caf7..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Enjoy Catching Pokemon in the Game with Dynamons World Pikachu Mod Apk - Download Now.md +++ /dev/null @@ -1,86 +0,0 @@ - -

    Dynamons World Pikachu Mod Apk | Catch Pokemon In The Game

    -

    If you are a fan of RPG games and pokemon, you might want to check out Dynamons World, a fun and addictive game that lets you catch and train dozens of unique dynamons. Dynamons are creatures that resemble pokemon, but with different types, skills, and abilities. You can explore an open world, battle tough captains, and challenge your friends in online multiplayer battles.

    -

    dynamons world pikachu mod apk catch pokemon in the game


    DOWNLOAD 🗸🗸🗸 https://urlca.com/2uOex7



    -

    But what if you want to make your game experience even more exciting? What if you want to catch pokemon in the game, such as the iconic Pikachu? Well, there is a way to do that, by using a mod apk. A mod apk is a modified version of the original game that adds new features, cheats, or enhancements. In this article, we will show you how to install dynamons world pikachu mod apk on your android device, how to catch pokemon in the game, and what benefits you can get from using the mod apk.

    -

    How to Install Mod Apk on Android

    -

    To install dynamons world pikachu mod apk on your android device, you need to follow these steps:

    -
      -
    1. Download the mod apk file from a reputable source. Make sure you have enough storage space on your device.
    2. -
    3. Enable unknown sources on your device settings. This will allow you to install apps that are not from the Google Play Store. To do this, go to Settings > Apps > Menu > Special access > Install unknown apps. Then, select Chrome (or your preferred browser) and toggle on Allow from this source.
    4. -
    5. Open your file manager app and locate the downloaded mod apk file. Tap on it and follow the instructions to install it. You may need to grant some permissions or accept some pop-ups. (For a command-line alternative from a computer, see the sketch after this list.)
    6. -
    7. Once the installation is complete, you can launch the game from your app drawer or home screen.
    8. -
    -
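    For readers who prefer working from a computer, the file-manager install step above can also be done over USB with adb (the command-line tool from Android's platform-tools). This is only an illustrative sketch, not part of the official steps: it assumes adb is installed, USB debugging is enabled on the phone, and the APK filename below is a placeholder for whatever file you actually downloaded.

```python
# Illustrative alternative to the file-manager install step: sideload with adb.
# Assumes adb (Android platform-tools) is on PATH and USB debugging is enabled.
import subprocess
import sys

APK_PATH = "dynamons_world_mod.apk"  # placeholder filename, not a real download

def adb(*args: str) -> subprocess.CompletedProcess:
    """Run an adb command and capture its output."""
    return subprocess.run(["adb", *args], capture_output=True, text=True)

print(adb("devices").stdout)  # the phone should appear here as "device"

# "-r" reinstalls over an older version, if any, keeping existing app data.
result = adb("install", "-r", APK_PATH)
if result.returncode != 0:
    sys.exit(f"adb install failed: {result.stderr.strip()}")
print("Install finished:", result.stdout.strip())
```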

    How to Catch Pokemon in The Game

    -

    Now that you have installed dynamons world pikachu mod apk, you can enjoy catching pokemon in the game. Here are some tips and tricks to help you find and capture them:

    -
      -
    • Pokemon appear randomly in different areas of the game world. You can see them as small icons on the map. To encounter them, you need to walk near them and tap on them.
    • -
    • You can use different types of balls to catch pokemon. Some balls have higher chances of success than others. For example, a great ball has a better chance than a normal ball. You can buy balls from shops or get them as rewards from quests or battles.
    • -
    • You can also use skills or items to weaken pokemon or inflict status effects on them before throwing balls at them. This will increase your chances of catching them. For example, you can use a skill that lowers their HP, puts them to sleep, or paralyzes them.
    • -
    • You can only have six pokemon in your team at a time. If you catch more than six, they will be sent to your storage box. You can access your storage box from any shop or camp.
    • -
    • You can name your pokemon, evolve them, or trade them with other players. You can also customize their skills, stats, and appearance using items or coins.
    • -
    -

    Benefits of Using The Mod Apk

    -

    Using dynamons world pikachu mod apk has many benefits that can enhance your game experience. Some of these benefits are:

    -
      -
    • You can catch any pokemon you want in the game, including rare ones like Pikachu, Charizard, Mewtwo, and more.
    • -
    • You can get unlimited coins and gems that you can use to buy items, skills, balls, or upgrades.
    • -
    • You can unlock all dynamons and all worlds without having to complete quests or battles.
    • -
    • You can level up your dynamons faster and make them stronger and more resilient.
    • -
    • You can enjoy playing with the mod apk without any ads, bugs, or crashes. The mod apk is safe and secure to use, and does not require root access or any special permissions.
    • -
    -

    Conclusion

    -

    Dynamons World is a fun and addictive RPG game that lets you catch and train dynamons, which are similar to pokemon. You can explore an open world, battle tough captains, and challenge your friends in online multiplayer battles. If you want to make your game experience even more exciting, you can use dynamons world pikachu mod apk, which allows you to catch pokemon in the game, such as the iconic Pikachu. You can also get unlimited coins and gems, unlock all dynamons and worlds, level up your dynamons faster, and enjoy playing without any ads, bugs, or crashes. To install the mod apk on your android device, you need to download the mod apk file from a reputable source, enable unknown sources on your device settings, open your file manager app and locate the downloaded mod apk file, and follow the instructions to install it. Then, you can launch the game and start catching pokemon in the game. We hope you found this article helpful and informative. If you have any questions or feedback, please feel free to leave a comment below.

    -

    FAQs

    -

    What is Dynamons World?

    -

    Dynamons World is an RPG game that lets you catch and train dynamons, which are creatures that resemble pokemon. You can explore an open world, battle tough captains, and challenge your friends in online multiplayer battles.

    -

    What is Pikachu?

    -

    Pikachu is a type of pokemon that is yellow, has black-tipped ears and red cheeks, and can generate electricity. It is one of the most popular and recognizable pokemon in the world.

    -

    What is a mod apk?

    -

    A mod apk is a modified version of the original game that adds new features, cheats, or enhancements. For example, a mod apk can allow you to catch pokemon in Dynamons World.

    -

    dynamons world pikachu mod apk no root needed
    -catch pokemon in the game with dynamons world pikachu mod
    -how to download dynamons world pikachu mod apk for android
    -dynamons world pikachu mod apk latest version
    -dynamons world pikachu mod apk gameplay part 8
    -dynamons world mod apk with pokemon characters
    -dynamons world hack apk catch pikachu and other pokemon
    -dynamons world pikachu mod apk free download link
    -dynamons world pikachu mod apk video by mr. drito
    -dynamons world vs pokemon comparison
    -dynamons world pikachu mod apk review and rating
    -best pokemon mods for dynamons world apk
    -dynamons world pikachu mod apk features and benefits
    -how to install dynamons world pikachu mod apk on your device
    -dynamons world pikachu mod apk unlimited money and gems
    -dynamons world pikachu mod apk online multiplayer mode
    -dynamons world pikachu mod apk offline play option
    -dynamons world pikachu mod apk tips and tricks
    -dynamons world pikachu mod apk cheats and hacks
    -dynamons world pikachu mod apk update and patch notes
    -dynamons world pikachu mod apk discord server and community
    -dynamons world pikachu mod apk social media handles and hashtags
    -dynamons world pikachu mod apk kinemaster video editor used
    -dynamons world pikachu mod apk supported devices and compatibility
    -dynamons world pikachu mod apk bugs and issues report
    -how to catch legendary pokemon in dynamons world pikachu mod apk
    -how to evolve your pokemon in dynamons world pikachu mod apk
    -how to train your pokemon in dynamons world pikachu mod apk
    -how to battle other players in dynamons world pikachu mod apk
    -how to customize your pokemon in dynamons world pikachu mod apk
    -how to unlock new areas and maps in dynamons world pikachu mod apk
    -how to get free items and rewards in dynamons world pikachu mod apk
    -how to complete quests and missions in dynamons world pikachu mod apk
    -how to join clans and teams in dynamons world pikachu mod apk
    -how to chat with other players in dynamons world pikachu mod apk
    -how to level up your pokemon and skills in dynamons world pikachu mod apk
    -how to earn coins and diamonds in dynamons world pikachu mod apk
    -how to use special moves and abilities in dynamons world pikachu mod apk
    -how to change your avatar and name in dynamons world pikachu mod apk
    -how to access the shop and inventory in dynamons world pikachu mod apk

    -

    How do I catch pokemon in Dynamons World?

    -

    To catch pokemon in Dynamons World, you need to use a mod apk that adds pokemon to the game. Then, you can encounter pokemon randomly in different areas of the game world. You can use different types of balls to catch them, or use skills or items to weaken them before throwing balls at them.

    -

    Is the mod apk safe and secure to use?

    -

    Yes, the mod apk is safe and secure to use, as long as you download it from a reputable source. The mod apk does not require root access or any special permissions. However, you should always be careful when installing apps from unknown sources, as they may contain malware or viruses.

    -
    -
    \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Fly Any Plane You Want with RFS Real Flight Simulator Pro MOD APK (All Planes Unlocked).md b/spaces/congsaPfin/Manga-OCR/logs/Fly Any Plane You Want with RFS Real Flight Simulator Pro MOD APK (All Planes Unlocked).md deleted file mode 100644 index 936e716fc113cbae321c93c0de2f00ed74dc8f8e..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Fly Any Plane You Want with RFS Real Flight Simulator Pro MOD APK (All Planes Unlocked).md +++ /dev/null @@ -1,81 +0,0 @@ -
    -

    RFS Real Flight Simulator Pro Mod APK: All Planes Unlocked

    -

    Do you love flying planes and exploring different airports around the world? Do you want to experience the thrill of piloting a realistic aircraft with advanced features and controls? If yes, then you should try RFS Real Flight Simulator Pro, one of the best flight simulator games for Android devices. And if you want to unlock all the planes and enjoy the game without any limitations, then you should download the mod apk version of this game. In this article, we will tell you everything you need to know about RFS Real Flight Simulator Pro Mod APK, including its features, how to download and install it, and some FAQs.

    -

    Introduction

    -

    What is RFS Real Flight Simulator Pro?

    -

    RFS Real Flight Simulator Pro is a simulation game developed by RORTOS, a studio that specializes in creating realistic flight simulators for mobile devices. The game lets you fly over 40 different types of planes, from small propellers to huge jets, across hundreds of airports and locations around the world. You can also customize your plane with liveries, engines, wings, and more. The game has realistic graphics, physics, sounds, and weather effects that make you feel like you are really flying a plane. You can also interact with other pilots and air traffic controllers in multiplayer mode and ATC.

    -

    rfs real flight simulator pro mod apk all planes unlocked


    Download Zip: https://urlca.com/2uO4O8



    -

    What is the mod apk version?

    -

    The mod apk version of RFS Real Flight Simulator Pro is a modified version of the original game that gives you access to all the planes and features without paying anything. The mod apk version also removes ads and other restrictions that may affect your gameplay. With the mod apk version, you can fly any plane you want, from a Cessna 172 to a Boeing 777, without spending any money or coins. You can also enjoy all the advanced features and settings that the game offers.

    -

    Why download the mod apk version?

    -

    If you are a fan of flight simulator games, then you should download the mod apk version of RFS Real Flight Simulator Pro for several reasons. First of all, you can save money and time by unlocking all the planes and features without spending any real money or grinding for coins. Second, you can have more fun and freedom by flying any plane you want, from any airport you want, at any time you want. Third, you can enhance your gaming experience by using all the realistic graphics, physics, sounds, and weather effects that the game provides. Fourth, you can challenge yourself and other players by using the advanced flight plan and navigation system that the game offers. And last but not least, you can support the developers by downloading the game from a trusted source.

    -

    Features of RFS Real Flight Simulator Pro Mod APK

    -

    All planes unlocked

    -

    The main feature of RFS Real Flight Simulator Pro Mod APK is that it unlocks all the planes in the game for free. You can choose from over 40 different types of planes, from small propellers to huge jets, each with its own characteristics and performance. You can also customize your plane with liveries, engines, wings, and more. You can fly any plane you want without spending any money or coins.

    -

    Realistic graphics and physics

    -

    RFS Real Flight Simulator Pro Mod APK also has realistic graphics and physics that make you feel like you are really flying a plane. The game has high-quality 3D models of planes, airports, buildings, landscapes, and clouds. The game also has realistic physics that simulate the aerodynamics, weight, thrust, drag, and lift of the planes. The game also has realistic sounds that match the engine, wind, and environment noises. You can adjust the graphics and sound settings according to your device and preference.

    -

    Multiplayer mode and ATC

    -

    RFS Real Flight Simulator Pro Mod APK also has a multiplayer mode and ATC that let you interact with other pilots and air traffic controllers. You can join or create online sessions with other players from around the world and fly together in real time. You can also communicate with other pilots and ATC using the voice chat or text chat feature. You can follow the ATC instructions and rules to ensure a safe and smooth flight. You can also create your own ATC and manage the traffic in your area.

    -

    Customizable weather and time

    -

    RFS Real Flight Simulator Pro Mod APK also has a customizable weather and time feature that lets you change the conditions of your flight. You can choose from different weather presets or create your own weather scenario. You can also change the time of day and night, and the season of the year. You can experience different challenges and effects depending on the weather and time you choose.

    -

    Advanced flight plan and navigation

    -

    RFS Real Flight Simulator Pro Mod APK also has an advanced flight plan and navigation feature that lets you create and follow your own route. You can use the map to select your departure and destination airports, waypoints, airways, and altitudes. You can also use the GPS, VOR, NDB, ILS, and other instruments to guide you along your flight. You can also access real-time information about your flight, such as speed, altitude, heading, fuel, weight, and more.

    -

    rfs pro mod apk free download all aircrafts unlocked
    -real flight simulator mod apk latest version unlocked planes
    -rfs pro apk full version with all planes free
    -real flight simulator premium mod apk unlock everything
    -rfs pro mod apk unlimited money and planes
    -real flight simulator hack apk all aircrafts unlocked
    -rfs pro apk download for android with all planes
    -real flight simulator full version mod apk free
    -rfs pro mod apk online multiplayer unlocked
    -real flight simulator cracked apk all planes free
    -rfs pro apk no root required all aircrafts unlocked
    -real flight simulator modded apk unlock all features
    -rfs pro mod apk offline mode with all planes
    -real flight simulator paid apk free download unlocked planes
    -rfs pro apk latest update with all aircrafts free
    -real flight simulator unlimited planes mod apk
    -rfs pro mod apk 2023 with all planes unlocked
    -real flight simulator all planes unlocked apk download
    -rfs pro mod apk without license verification all aircrafts free
    -real flight simulator hack mod apk download unlocked planes

    -

    How to download and install RFS Real Flight Simulator Pro Mod APK

    -

    Step 1: Download the mod apk file from a trusted source

    -

    The first step to download and install RFS Real Flight Simulator Pro Mod APK is to find a reliable source that provides the mod apk file. You can search online for websites that offer the mod apk file for free. However, you should be careful about the source you choose, as some websites may contain viruses or malware that can harm your device or steal your data. You should always check the reviews and ratings of the website before downloading anything from it.
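    One extra precaution that the article's steps do not mention, offered purely as an illustration: if the download page publishes a checksum for the file (an assumption; many sites do not), you can verify that the APK you received matches it before installing. The filename and expected hash below are placeholders.

```python
# Verify a downloaded APK against a published SHA-256 checksum (if the site
# provides one). Filename and expected hash are placeholders, not real values.
import hashlib

APK_PATH = "rfs_pro_mod.apk"                       # placeholder filename
EXPECTED_SHA256 = "paste-the-published-hash-here"  # placeholder value

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Hash the file in 1 MiB chunks so large APKs are not read into memory at once."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

actual = sha256_of(APK_PATH)
print("computed:", actual)
print("expected:", EXPECTED_SHA256)
print("match   :", actual == EXPECTED_SHA256.lower())
```

    If the two values do not match, the file was corrupted in transit or is not the file the site claims to publish, and it should not be installed.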

    -

    Step 2: Enable unknown sources on your device

    -

    The second step to download and install RFS Real Flight Simulator Pro Mod APK is to enable unknown sources on your device. This is because Android devices do not allow installing apps from sources other than the Google Play Store by default. To enable unknown sources, you need to go to your device settings, then security or privacy settings, then find and toggle on the option that says "allow installation of apps from unknown sources" or something similar.

    -

    Step 3: Install the mod apk file and launch the game

    -

    The third step to download and install RFS Real Flight Simulator Pro Mod APK is to install the mod apk file and launch the game. To install the mod apk file, you need to locate it in your device storage, then tap on it and follow the instructions on the screen. To launch the game, you need to find its icon on your device home screen or app drawer, then tap on it and enjoy.

    -

    Conclusion

    -

    RFS Real Flight Simulator Pro Mod APK is a great simulation game for Android devices that lets you fly over 40 different types of planes across hundreds of airports and locations around the world. The game has realistic graphics, physics, sounds, and weather effects that make you feel like you are really flying a plane. The game also has multiplayer mode and ATC that let you interact with other pilots and air traffic controllers. The game also has customizable weather and time feature that let you change the conditions of your flight. The game also has advanced flight plan and navigation feature that let you create and follow your own route. The game also has a mod apk version that unlocks all the planes and features for free. You can download and install the mod apk version by following the steps we mentioned in this article. We hope you enjoyed this article and found it helpful. If you have any questions or feedback, please feel free to leave a comment below.

    -

    FAQs

    -

    Here are some frequently asked questions about RFS Real Flight Simulator Pro Mod APK:

    Q: Is RFS Real Flight Simulator Pro Mod APK safe to use?
    A: Yes, RFS Real Flight Simulator Pro Mod APK is safe to use as long as you download it from a trusted source. However, you should always scan the mod apk file with an antivirus or malware scanner before installing it on your device.

    Q: Does RFS Real Flight Simulator Pro Mod APK require root access?
    A: No, RFS Real Flight Simulator Pro Mod APK does not require root access to work. You can install and play the game without rooting your device.

    Q: Does RFS Real Flight Simulator Pro Mod APK work offline?
    A: Yes, RFS Real Flight Simulator Pro Mod APK works offline. You can play the game without an internet connection. However, some features such as multiplayer mode and ATC may not work offline.

    Q: How can I update RFS Real Flight Simulator Pro Mod APK?
    A: To update RFS Real Flight Simulator Pro Mod APK, you need to download the latest version of the mod apk file from the same source you downloaded it from before. Then, you need to uninstall the previous version of the game and install the new version of the mod apk file. You can also check for updates within the game settings.

    Q: How can I contact the developers of RFS Real Flight Simulator Pro?
    A: To contact the developers of RFS Real Flight Simulator Pro, you can visit their official website or their social media pages. You can also send them an email or a message through the game support option.

    -
    -
    \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Live Football Streaming with Yacine TV APK 2022 - Free and Easy.md b/spaces/congsaPfin/Manga-OCR/logs/Live Football Streaming with Yacine TV APK 2022 - Free and Easy.md deleted file mode 100644 index 11403730fd7ad56fa93360a8a6b62ec88a3f3237..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Live Football Streaming with Yacine TV APK 2022 - Free and Easy.md +++ /dev/null @@ -1,79 +0,0 @@ -
    -

    Yacine TV: The Best App for Live Football Streaming in 2022

    -

    If you are a football fan, you know how frustrating it can be to miss a match because of your busy schedule, geographical location, or lack of access to a TV or cable subscription. But what if there was a way to watch live football matches on your smartphone or tablet anytime, anywhere, and for free? Sounds too good to be true, right? Well, not anymore. Meet Yacine TV, the best app for live football streaming in 2022.

    -

    yacine tv live football download apk 2022


    Download Zip ✦✦✦ https://urlca.com/2uOg1J



    -

    What is Yacine TV and why you should download it

    -

    Yacine TV is a mobile app that allows you to watch live sports events, including football matches from around the world. Whether you want to watch the Premier League, La Liga, Serie A, Bundesliga, Champions League, Europa League, World Cup, or any other tournament, Yacine TV has got you covered. You can also watch other sports like basketball, tennis, cricket, rugby, and more.

    -

    But what makes Yacine TV stand out from other sports streaming apps? Here are some of the features that make it the best app for live football streaming in 2022.

    -

    Features of Yacine TV app

    -

    Live scores and updates

    -

    With Yacine TV, you don't have to worry about missing any action or information. You can get live scores and updates of all the matches happening around the world. You can also check the standings, fixtures, results, statistics, and news of your favorite teams and leagues.

    -

    Multiple channels and languages

    -

    Yacine TV offers you a variety of channels and languages to choose from. You can watch the matches in HD quality with Arabic, English, French, Spanish, or any other language you prefer. You can also switch between different channels and sources to find the best one for you.

    -

    yacine tv apk download for android 2022 live football
    -yacine tv app free download latest version 2022 watch live soccer
    -yacine tv ios download 2022 stream live football matches
    -yacine tv online website 2022 watch live soccer games
    -yacine tv apk mod 2022 unlock all channels live football
    -yacine tv for pc download 2022 enjoy live soccer on big screen
    -yacine tv apk mirror 2022 download safe and secure live football
    -yacine tv update 2022 new features and improvements live soccer
    -yacine tv alternative 2022 best apps like yacine tv for live football
    -yacine tv review 2022 pros and cons of yacine tv live soccer app
    -yacine tv guide 2022 how to use yacine tv app for live football streaming
    -yacine tv support 2022 how to contact yacine tv team for live soccer issues
    -yacine tv premium 2022 how to get yacine tv subscription for live football access
    -yacine tv channels list 2022 what channels are available on yacine tv live soccer app
    -yacine tv schedule 2022 what matches are on yacine tv today and tomorrow live football
    -yacine tv not working 2022 how to fix yacine tv app errors and bugs live soccer
    -yacine tv vs yalla shoot 2022 which app is better for live football streaming
    -yacine tv for firestick 2022 how to install and use yacine tv on firestick live soccer
    -yacine tv for smart tv 2022 how to watch yacine tv on smart tv live football
    -yacine tv for roku 2022 how to stream yacine tv on roku device live soccer
    -yacine tv for xbox one 2022 how to play yacine tv on xbox one console live football
    -yacine tv for ps4 2022 how to watch yacine tv on ps4 system live soccer
    -yacine tv for macbook 2022 how to download and run yacine tv on macbook live football
    -yacine tv for windows 10 2022 how to install and use yacine tv on windows 10 pc live soccer
    -yacine tv for linux 2022 how to set up and watch yacine tv on linux computer live football
    -yacine tv for chromebook 2022 how to access and stream yacine tv on chromebook live soccer
    -yacine tv for ipad 2022 how to download and enjoy yacine tv on ipad tablet live football
    -yacine tv for iphone 2022 how to get and use yacine tv on iphone smartphone live soccer
    -yacine tv for samsung 2022 how to watch and control yacine tv on samsung devices live football
    -yacine tv for huawei 2022 how to install and operate yacine tv on huawei gadgets live soccer
    -yacine tv for lg 2022 how to view and manage yacine tv on lg products live football
    -yacine tv for sony 2022 how to watch and adjust yacine tv on sony devices live soccer
    -yacine tv for nokia 2022 how to download and use yacine tv on nokia phones live football
    -yacine tv for oppo 2022 how to install and run yacine tv on oppo devices live soccer
    -yacine tv for xiaomi 2022 how to get and enjoy

    -

    High-quality video and audio

    -

    Yacine TV delivers high-quality video and audio streaming without any buffering or lagging. You can enjoy the matches in full screen mode with clear sound and smooth playback. You can also adjust the video quality according to your internet speed and data usage.

    -

    User-friendly interface and easy navigation

    -

    Yacine TV has a user-friendly interface and easy navigation that make it simple and convenient to use. You can access all the features and functions with just a few taps. You can also customize the app according to your preferences and needs.

    -

    How to download and install Yacine TV apk on your device

    -

    If you are wondering how to download and install Yacine TV apk on your device, here are the steps you need to follow:

    -

    Download the apk file from the official website

    -

    The first step is to download the apk file from the official website of Yacine TV. You can also scan the QR code on the website with your device camera to get the download link. The apk file is safe and secure and does not contain any viruses or malware.

    -

    Enable unknown sources on your device settings

    -

    The next step is to enable unknown sources on your device settings. This will allow you to install apps from sources other than the Google Play Store or App Store. To do this, go to your device settings > security > unknown sources > enable.

    Install the apk file and launch the app

    The final step is to install the apk file and launch the app. To do this, locate the downloaded apk file on your device storage and tap on it. Follow the instructions on the screen to complete the installation. Once the app is installed, you can open it and start using it.

    -

    How to watch live football matches on Yacine TV app

    -

    Now that you have downloaded and installed Yacine TV app on your device, you are ready to watch live football matches on it. Here is how you can do it:

    -

    Select the match you want to watch from the home screen

    -

    When you open the app, you will see a list of live matches happening around the world. You can scroll through the list and select the match you want to watch. You can also use the search bar or the filter options to find a specific match, team, or league.

    -

    Choose the channel and language you prefer

    -

    After selecting the match, you will see a list of channels and languages available for that match. You can choose the one that suits you best. You can also change the channel or language anytime during the streaming.

    -

    Enjoy the live streaming without any interruptions

    -

    Once you have chosen the channel and language, you can enjoy the live streaming without any interruptions. You can watch the match in full screen mode or minimize it to a small window. You can also pause, resume, rewind, or fast forward the streaming as per your convenience.

    -

    Conclusion and FAQs

    -

    Yacine TV is a great app for live football streaming in 2022. It offers you a lot of features and benefits that make it the best choice for football fans. You can watch live matches from around the world in HD quality with multiple channels and languages. You can also get live scores and updates of all the matches and leagues. You can download and install Yacine TV apk on your device easily and safely. You can also watch live matches on Yacine TV app with simple steps.

    -

    If you have any questions about Yacine TV app, here are some FAQs that might help you:

    Q: Is Yacine TV app free?
    A: Yes, Yacine TV app is free to download and use. You don't have to pay any subscription fees or charges to watch live matches on it.

    Q: Is Yacine TV app legal?
    A: Yes, Yacine TV app is legal and does not violate any copyrights or trademarks. However, you should check your local laws and regulations before using it.

    Q: Is Yacine TV app compatible with all devices?
    A: Yes, Yacine TV app is compatible with all Android devices running Android 4.1 or higher. It is also compatible with iOS devices running iOS 9 or higher.

    Q: How much data does Yacine TV app consume?
    A: The data consumption of Yacine TV app depends on the video quality and duration of the streaming. You can adjust the video quality according to your internet speed and data usage. (A rough worked example is sketched after this FAQ list.)

    Q: How can I contact Yacine TV app support?
    A: You can contact Yacine TV app support by sending an email to yacinetvapp@gmail.com or by visiting their Facebook page. They will respond to your queries and feedback as soon as possible.
    Links: https://yacinetv.com/ (official website), https://www.facebook.com/yacinetvapp/ (Facebook page)
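    The FAQ above says only that data usage depends on video quality and duration. As a rough, illustrative calculation (the bitrates here are generic streaming ballparks, not figures published for Yacine TV), data used is approximately the bitrate multiplied by the duration, divided by eight to convert bits to bytes.

```python
# Rough estimate: data used (GB) ~= bitrate (Mbit/s) * duration (s) / 8 / 1000.
# Bitrates below are generic assumptions, NOT values published for Yacine TV.
ASSUMED_BITRATES_MBPS = {"low (SD)": 1.5, "medium": 3.0, "high (HD)": 5.0}

def data_used_gb(bitrate_mbps: float, minutes: float) -> float:
    """Approximate data consumed in gigabytes for a stream of the given length."""
    megabits = bitrate_mbps * minutes * 60
    return megabits / 8 / 1000

MATCH_MINUTES = 105  # 90 minutes of play plus half-time, roughly
for label, mbps in ASSUMED_BITRATES_MBPS.items():
    print(f"{label:12s} ~{data_used_gb(mbps, MATCH_MINUTES):.1f} GB per match")
```

    At these assumed rates a full match comes to roughly 1 to 4 GB, which is why lowering the video quality matters on a limited mobile data plan.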

    -
    -
    \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Play Together APK The Ultimate Social Game Experience from Apkmody.md b/spaces/congsaPfin/Manga-OCR/logs/Play Together APK The Ultimate Social Game Experience from Apkmody.md deleted file mode 100644 index 359f53437132149b9cd89351f2cf0886e2367355..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Play Together APK The Ultimate Social Game Experience from Apkmody.md +++ /dev/null @@ -1,91 +0,0 @@ - -

    Play Together APK: A Fun and Social Game for Everyone

    -

    Do you love playing games with your friends and meeting new people online? If so, you should try Play Together, a multiplayer game that lets you create your own character, explore different places, chat with other players, and have fun with various mini-games and activities. In this article, we will tell you more about this game and how you can download Play Together APK from apkmody for free.

    -

    What is Play Together?

    -

    Play Together is a game developed by Haegin Co., Ltd. that was released in May 2021. It is a social simulation game that allows you to customize your own avatar, choose your own style, and interact with other players from around the world. You can also visit various locations such as the beach, the amusement park, the school, and more. You can chat with your friends, make new ones, join clubs, and participate in events. You can also enjoy mini-games such as fishing, cooking, racing, and more. Play Together is a game that lets you experience a virtual world full of fun and adventure.

    -

    play together apkmody


    DOWNLOAD ····· https://urlca.com/2uOcHP



    -

    Features of Play Together

    -

    Play Together has many features that make it an enjoyable and engaging game for everyone. Here are some of them:

    -

    Create your own avatar

    -

    You can create your own avatar by choosing from various options such as hair, eyes, skin, clothes, accessories, and more. You can also change your outfit anytime you want and express your personality and mood. You can also buy new items from the shop or earn them by completing quests and challenges.

    -

    Explore various locations

    -

    You can explore different locations in Play Together such as the beach, the amusement park, the school, the city, and more. Each location has its own attractions and activities that you can enjoy. You can also travel to different places by using vehicles such as bikes, cars, boats, and more.

    -

    play together mod apk download apkmody
    -play together apk free download apkmody
    -play together game mod menu apkmody
    -play together hack apk latest version apkmody
    -play together unlimited money apk apkmody
    -play together online multiplayer game apkmody
    -play together mod apk android 1 apkmody
    -play together mod apk unlimited gems apkmody
    -play together mod apk no root apkmody
    -play together mod apk obb apkmody
    -play together mod apk revdl apkmody
    -play together mod apk rexdl apkmody
    -play together mod apk happymod apkmody
    -play together mod apk an1 apkmody
    -play together mod apk 2023 apkmody
    -play together mod apk latest update apkmody
    -play together mod apk offline apkmody
    -play together mod apk ios apkmody
    -play together mod apk vip apkmody
    -play together mod apk anti ban apkmody
    -play together mod apk all unlocked apkmody
    -play together mod apk auto fishing apkmody
    -play together mod apk lock camera apkmody
    -play together mod apk free shopping apkmody
    -play together mod apk free clothes apkmody
    -play together mod apk free pets apkmody
    -play together mod apk free house apkmody
    -play together mod apk free car apkmody
    -play together mod apk free island apkmody
    -play together mod apk free vip membership apkmody
    -play together mod apk god mode apkmody
    -play together mod apk high damage apkmody
    -play together mod apk one hit kill apkmody
    -play together mod apk speed hack apkmody
    -play together mod apk teleport hack apkmody
    -play together mod apk wall hack apkmody
    -play together mod apk fly hack apkmody
    -play together mod apk invisible hack apkmody
    -play together mod apk unlimited stamina apkmody
    -play together mod apk unlimited energy apkmody
    -play together mod apk unlimited hearts apkmody
    -play together mod apk unlimited stars apkmody
    -play together mod apk unlimited coins apkmody
    -play together mod apk unlimited diamonds apkmody
    -play together cheat codes for android and ios devices with Apk Moddy.

    -

    Interact with other players

    -

    You can interact with other players in Play Together by chatting with them, sending them messages, adding them as friends, joining clubs, and more. You can also invite them to your house or visit theirs. You can also play mini-games with them or compete against them in events.

    -

    Enjoy mini-games and activities

    -

    You can enjoy various mini-games and activities in Play Together such as fishing, cooking, racing, dancing, karaoke, and more. You can also earn coins and rewards by playing these games or completing quests and challenges. You can use these coins to buy new items or upgrade your house.

    -

    Why download Play Together APK from apkmody?

    -

    If you want to play Play Together on your Android device, you can download it from the Google Play Store. However, if you want to enjoy some extra benefits and features that are not available in the official version of the game, you should download Play Together APK from apkmody. Here are some reasons why:

    -

    Benefits of Play Together APK

    -
      -
    • You can download Play Together APK for free from apkmody without any registration or verification.
    • -
    • You can access all the features of the game without any restrictions or limitations.
    • -
    • You can get unlimited coins and gems that you can use to buy new items or upgrade your house.
    • -
    • You can use the menu mod to enable or disable various options such as auto fishing, lock camera, speed hack, teleportation, and more.
    • -
    • You can play the game without any ads or interruptions.
    • -
    -

    How to download and install Play Together APK

    -
      -
    1. Go to apkmody.io/games/play-together and click on the download button.
    2. -
    3. Wait for the download to finish and then open the file manager on your device and locate the downloaded file.
    4. -
    5. Tap on the file and allow the installation of unknown apps if prompted.
    6. -
    7. Follow the instructions on the screen and wait for the installation to complete.
    8. -
    9. Launch the game and enjoy Play Together APK with all the benefits and features.
    10. -
    -

    Conclusion

    -

    Play Together is a fun and social game that lets you create your own avatar, explore different places, chat with other players, and have fun with various mini-games and activities. You can download Play Together APK from apkmody for free and enjoy some extra benefits and features that are not available in the official version of the game. You can get unlimited coins and gems, use the menu mod, and play the game without any ads. Download Play Together APK now and join the world of Play Together.

    -

    FAQs

    -
      -
    • What is the latest version of Play Together APK?
      The latest version of Play Together APK is 1.0.8, which was updated on June 21, 2023.
    • -
    • Is Play Together APK safe to download and install?
      Yes, Play Together APK is safe to download and install from apkmody. It does not contain any viruses or malware that can harm your device or data.
    • -
    • Can I play Play Together APK with my friends?
      Yes, you can play Play Together APK with your friends by inviting them to your house or joining their house. You can also chat with them, send them messages, add them as friends, join clubs, and more.
    • -
    • Can I play Play Together APK offline?
      No, you need an internet connection to play Play Together APK. The game requires online servers to run and store your data.
    • -
    • How can I contact the developer of Play Together?
      You can contact the developer of Play Together by sending an email to cs@haegin.kr or visiting their website at https://www.haegin.kr/.
    • -

    -
    -
    \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Project Drift 2.0 MOD APK Unduh Sekarang dan Rasakan Sensasi Balapan dengan Mobil Legendaris.md b/spaces/congsaPfin/Manga-OCR/logs/Project Drift 2.0 MOD APK Unduh Sekarang dan Rasakan Sensasi Balapan dengan Mobil Legendaris.md deleted file mode 100644 index b3b0f0a98948e33a7ec851d92f54c7e8e1b5222e..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Project Drift 2.0 MOD APK Unduh Sekarang dan Rasakan Sensasi Balapan dengan Mobil Legendaris.md +++ /dev/null @@ -1,93 +0,0 @@ -
    -

    Project Drift 2.0 Mod APK: A New Way to Enjoy Car Racing Games

    -

    Do you love car racing games? Do you want to try something different and exciting? If yes, then you should check out Project Drift 2.0, a realistic and challenging racing game that will test your skills and thrill your senses. And if you want to make the game even more fun and easy, you should download Project Drift 2.0 Mod APK, a modded version that gives you unlimited money and unlocks all the cars in the game. In this article, we will tell you everything you need to know about Project Drift 2.0 and its modded version, including what it is, why you should download it, and how to download and install it on your device.

    -

    unduh project drift 2.0 mod apk


    Download Zip: https://urlca.com/2uOepq



    -

    What is Project Drift 2.0?

    -

    A realistic and challenging racing game

    -

    Project Drift 2.0 is a racing game that focuses on drifting, a driving technique where the driver intentionally oversteers the car to make it slide sideways. The game features realistic physics, graphics, and sounds that make you feel like you are driving a real car on a real track. The game also offers various challenges and missions that require you to perform different types of drifts, such as handbrake drifts, power drifts, clutch kick drifts, and more. The game has over 100 cars to choose from, each with its own characteristics and performance. You can also upgrade your cars with different parts and accessories to improve their speed, handling, and appearance.

    -

    A modded version with unlimited money and unlocked cars

    -

    Project Drift 2.0 Mod APK is a modified version of the original game that gives you some advantages that make the game more enjoyable and less frustrating. With this modded version, you get unlimited money that you can use to buy and upgrade any car you want without worrying about the cost. You also get access to all the cars in the game without having to unlock them by completing missions or achievements. This way, you can try out different cars and find the ones that suit your style and preference.

    -

    Why should you download Project Drift 2.0 Mod APK?

    -

    To experience a new style of drifting

    -

    If you are bored of the usual racing games that only involve speeding and overtaking, then Project Drift 2.0 Mod APK is for you. This game will challenge you to master the art of drifting, which is not only fun but also rewarding. You will learn how to control your car in different situations and conditions, how to adjust your speed and angle, how to balance your throttle and brake, and how to execute smooth and stylish drifts that will impress your opponents and spectators. You will also enjoy the realistic physics and graphics that make the game immersive and thrilling.

    -

    To customize your cars with various options

    -

    Another reason why you should download Project Drift 2.0 Mod APK is that it allows you to customize your cars with various options that will make them look cool and perform better. You can change the color, paint, wheels, tires, spoilers, bumpers, hoods, lights, exhausts, decals, stickers, and more of your cars. You can also upgrade the engine, transmission, suspension, brakes, turbo, nitro, weight reduction, steering angle, differential lock, camber angle, tire pressure, and more of your cars. With unlimited money and unlocked cars, you can experiment with different combinations and create your own unique cars.

    -

    Project Drift 2.0 Mod Apk Unlimited Money
    -Download Project Drift 2.0 Mod Apk Latest Version
    -Project Drift 2.0 Mod Apk Unlocked All Cars
    -How to Install Project Drift 2.0 Mod Apk on Android
    -Project Drift 2.0 Mod Apk Gameplay and Features
    -Project Drift 2.0 Mod Apk Offline Mode
    -Project Drift 2.0 Mod Apk Free Download for PC
    -Project Drift 2.0 Mod Apk Review and Rating
    -Project Drift 2.0 Mod Apk Tips and Tricks
    -Project Drift 2.0 Mod Apk vs Original Game
    -Project Drift 2.0 Mod Apk No Root Required
    -Project Drift 2.0 Mod Apk Customization Options
    -Project Drift 2.0 Mod Apk Best Cars and Tracks
    -Project Drift 2.0 Mod Apk Multiplayer Mode
    -Project Drift 2.0 Mod Apk Cheats and Hacks
    -Project Drift 2.0 Mod Apk Update and Bug Fixes
    -Project Drift 2.0 Mod Apk Compatible Devices
    -Project Drift 2.0 Mod Apk Size and Requirements
    -Project Drift 2.0 Mod Apk Alternatives and Similar Games
    -Project Drift 2.0 Mod Apk Support and Contact
    -Cara Unduh Project Drift 2.0 Mod Apk di Android
    -Unduh Project Drift 2.0 Mod Apk Tanpa Iklan
    -Unduh Project Drift 2.0 Mod Apk dengan Mudah dan Cepat
    -Unduh Project Drift 2.0 Mod Apk Terbaru dan Terbaik
    -Unduh Project Drift 2.0 Mod Apk Gratis dan Aman
    -Unduh Project Drift 2.0 Mod Apk Full Version
    -Unduh Project Drift 2.0 Mod Apk Tanpa Batas Uang
    -Unduh Project Drift 2.0 Mod Apk Semua Mobil Terbuka
    -Unduh Project Drift 2.0 Mod Apk Mode Offline
    -Unduh Project Drift 2.0 Mod Apk untuk PC
    -Ulasan dan Penilaian Unduh Project Drift 2.0 Mod Apk
    -Fitur dan Gameplay Unduh Project Drift 2.0 Mod Apk
    -Cara Pasang Unduh Project Drift 2.0 Mod Apk di Android
    -Tips dan Trik Unduh Project Drift 2.0 Mod Apk
    -Perbandingan Unduh Project Drift 2.0 Mod Apk dengan Game Asli
    -Unduh Project Drift 2.0 Mod Apk Tanpa Root
    -Opsi Kustomisasi Unduh Project Drift 2.0 Mod Apk
    -Mobil dan Trek Terbaik Unduh Project Drift 2.0 Mod Apk
    -Mode Multiplayer Unduh Project Drift 2.0 Mod Apk
    -Cheat dan Hack Unduh Project Drift 2.0 Mod Apk

    -

    To compete in different modes and levels

    -

    The last reason why you should download Project Drift 2.0 Mod APK is that it offers a variety of modes and levels that will test your drifting skills and keep you entertained. You can compete in:

    - Career mode: This is the main mode of the game, where you have to complete 100 missions with different objectives and difficulties. You will earn money and stars based on your performance, which you can use to buy and upgrade cars. You will also unlock new tracks and cars as you progress.
    - Free mode: This is the mode where you can practice your drifting skills without any pressure or time limit. You can choose any track and car you want and drift as much as you want. You can also adjust the settings of the game, such as the weather, traffic, camera angle, and more.
    - Online mode: This is the mode where you can challenge other players from around the world in real-time multiplayer races. You can join or create a room with up to 10 players and compete in different tracks and modes. You can also chat with other players and see their profiles and stats.
    - Drift park: This is a special mode where you can explore a large open-world map with various locations and obstacles. You can drift around the city, the airport, the port, the desert, the mountain, and more. You can also find hidden coins and collectibles that will give you extra money and rewards.

    How to download and install Project Drift 2.0 Mod APK?

    -

    The steps to follow

    -

    If you are interested in downloading and installing Project Drift 2.0 Mod APK on your device, here are the steps you need to follow:

    -
      -
    1. Click on the download button below to download the modded APK file of Project Drift 2.0.
    2. -
    3. Once the download is complete, locate the file in your device's file manager and tap on it to start the installation process.
    4. -
    5. If you see a pop-up message that says "Install blocked", go to your device's settings and enable the option "Unknown sources" under security or privacy settings.
    6. -
    7. Go back to the file manager and tap on the APK file again to resume the installation.
    8. -
    9. Wait for a few seconds until the installation is finished.
    10. -
    11. Launch the game from your app drawer or home screen and enjoy Project Drift 2.0 Mod APK.
    12. -
    -

    The precautions to take

    -

    Before you download and install Project Drift 2.0 Mod APK on your device, there are some precautions you need to take:

    -
      -
    • Make sure your device has enough storage space to accommodate the game file.
    • -
    • Make sure your device has a stable internet connection to download the game file and play online mode.
    • -
    • Make sure your device meets the minimum system requirements to run the game smoothly.
    • -
    • Make sure you download the modded APK file from a trusted source like ours to avoid any malware or virus infection.
    • -
    • Make sure you uninstall any previous version of Project Drift 2.0 from your device before installing the modded version (a command-line way to do this from a computer is sketched after this list).
    • -
    -

    Conclusion

    -

    Project Drift 2.0 Mod APK is a great racing game that gives you a fresh way to enjoy car racing. You can experience realistic and challenging drifting, customize your cars with a wide range of options, compete in different modes and levels, and enjoy unlimited money and unlocked cars. If you are a fan of car racing games, you should definitely download Project Drift 2.0 Mod APK and try it for yourself.

    -

    FAQs

    -

    Here are some frequently asked questions about Project Drift 2.0 Mod APK:

    -

    Q: Is Project Drift 2.0 Mod APK safe to use?

    -

    A: Yes, Project Drift 2.0 Mod APK is safe to use as long as you download it from a reliable source like ours. We have tested the modded APK file for any malware or virus infection and found none. However, we recommend that you scan the file with your own antivirus software before installing it on your device.

    -

    Q: Is Project Drift 2.0 Mod APK compatible with my device?

    -

    A: Project Drift 2.0 Mod APK is compatible with most Android devices that run on Android 4.4 or higher versions. However, some devices may not support some features or functions of the game due to hardware limitations or software issues. If you encounter any problem while playing the game on your device, please contact us for assistance.

    -

    Q: How can I update Project Drift 2.0 Mod APK?

    -

    A: Project Drift 2.0 Mod APK is updated regularly by our team to ensure that it works with the latest version of the original game and to fix any bugs or glitches. To update the modded version, you need to download the latest APK file from our website and install it on your device. You do not need to uninstall the previous version, as the new version will overwrite it automatically. However, you may need to back up your game data before updating, as some updates may erase your progress or settings.

    -

    Q: How can I get more money and cars in Project Drift 2.0 Mod APK?

    -

    A: Project Drift 2.0 Mod APK gives you unlimited money and unlocks all the cars in the game by default. You do not need to do anything special to get them. You can simply go to the shop and buy any car you want without spending any money. You can also upgrade your cars with any parts or accessories you want without worrying about the cost. You can also access all the cars in the garage and select any one you want to drive.

    -

    Q: How can I contact the developers of Project Drift 2.0?

    -

    A: If you have any questions, suggestions, feedback, or complaints about Project Drift 2.0, you can contact the developers of the game through their official website, Facebook page, Instagram account, or email address. You can also leave a review or rating on the Google Play Store or App Store to share your opinion and experience with other users.

    -

    Q: Can I play Project Drift 2.0 Mod APK offline?

    -

    A: Yes, you can play Project Drift 2.0 Mod APK offline without an internet connection. You can enjoy the career mode, free mode, and drift park mode without any interruption or limitation. However, you will not be able to play the online mode or access some online features, such as leaderboards, achievements, chat, and more.

    -
    -
    \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Roku Remote MOD APK (Unlocked) - No Ads No Limits No Hassle.md b/spaces/congsaPfin/Manga-OCR/logs/Roku Remote MOD APK (Unlocked) - No Ads No Limits No Hassle.md deleted file mode 100644 index 37752790cd6722eb19ebe63cc19c4374102a96f2..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Roku Remote MOD APK (Unlocked) - No Ads No Limits No Hassle.md +++ /dev/null @@ -1,25 +0,0 @@ -
    -

    Roku Remote Full APK: What Is It and How to Use It

    - If you are a fan of streaming devices, you might have heard of Roku, a popular brand that offers various products such as players, smart TVs, soundbars, and more. Roku devices allow you to access thousands of channels and apps, such as Netflix, Hulu, YouTube, Amazon Prime Video, and more. However, to enjoy the full potential of your Roku device, you need a remote control that can help you navigate the menus and content. That's where Roku Remote Full APK comes in handy.

    What is Roku Remote Full APK?

    -

    A modified version of the official Roku Remote app

    - Roku Remote Full APK is a modified version of the official Roku Remote app that you can download from Google Play Store. The official app lets you use your Android device as a remote control for your Roku device, as long as they are connected to the same Wi-Fi network. However, the official app has some limitations, such as ads, in-app purchases, and compatibility issues. That's why some developers have created a modified version of the app that removes these limitations and adds some extra features.

    Features and benefits of Roku Remote Full APK

    Some of the features and benefits of Roku Remote Full APK are:
    - No setup is required. The app automatically scans your network to find your Roku device.
    - No ads or in-app purchases. The app is completely free and ad-free.
    - Premium features unlocked. The app gives you access to some premium features that are not available in the official app, such as voice search, private listening, keyboard input, channel launch, and more.
    - A large touchpad for convenient menu and content navigation.
    - A volume slider to adjust the volume of your Roku device or TV.
    - A power button to turn on or off your Roku device or TV.
    - A home button to go back to the home screen of your Roku device.
    - A back button to go back to the previous screen or exit an app.
    - An options button to access additional settings or options for an app or channel.
    - A search button to search for content across multiple channels and apps.
    - A channels button to view and manage your installed channels and apps.
    - A favorites button to add or remove channels and apps from your favorites list.

    How to download and install Roku Remote Full APK?

    -

    Requirements and precautions

    Before you download and install Roku Remote Full APK, you need to make sure that:
    - You have an Android device that runs on Android 4.4 or higher.
    - You have enabled unknown sources on your Android device. To do this, go to Settings > Security > Unknown sources and toggle it on.
    - You have a compatible Roku device that is connected to the same Wi-Fi network as your Android device.
    - You have downloaded the latest version of Roku Remote Full APK from a trusted source.

    Steps to download and install

    To download and install Roku Remote Full APK, follow these steps:
    - Go to the website hosting the APK file and tap the download link to start the download.
    - Once the download is complete, open the file manager app on your Android device and locate the downloaded file.
    - Tap on the file and follow the instructions on the screen to install the app.
    - Wait for the installation process to finish and then launch the app from your app drawer or home screen.

    How to use Roku Remote Full APK?

    -

    Connect your Roku device and your Android device to the same network

    - To use Roku Remote Full APK, you need to make sure that your Roku device and your Android device are connected to the same Wi-Fi network. To check this, go to Settings > Network on your Roku device and Settings > Wi-Fi on your Android device and compare the network names. If they are not the same, connect them to the same network.

    Launch the app and scan for your Roku device

    - Once you have installed and launched the app, it will automatically scan your network for any available Roku devices. If it finds one, it will show you its name and IP address on the screen. Tap on it to connect to it. If it does not find one, you can manually enter the IP address of your Roku device by tapping on the plus icon on the top right corner of the screen.
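
    For readers curious about what that scan involves under the hood: Roku devices advertise themselves on the local network via SSDP, the discovery mechanism documented alongside Roku's External Control Protocol. The snippet below is only an illustrative sketch of that discovery step, not the app's actual code, and assumes a Python environment running on the same network as the Roku.

    ```python
    # Illustrative sketch: discover Roku devices on the local network via SSDP.
    # This mirrors the documented discovery mechanism; the app's own scanning
    # code is not public, so treat this as an assumption-based example.
    import socket

    def discover_rokus(timeout: float = 3.0) -> list[str]:
        """Broadcast an SSDP M-SEARCH and collect base URLs of responding Rokus."""
        msearch = (
            "M-SEARCH * HTTP/1.1\r\n"
            "HOST: 239.255.255.250:1900\r\n"
            'MAN: "ssdp:discover"\r\n'
            "ST: roku:ecp\r\n"
            "MX: 3\r\n\r\n"
        ).encode()
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
        sock.settimeout(timeout)
        sock.sendto(msearch, ("239.255.255.250", 1900))
        found = []
        try:
            while True:
                data, _ = sock.recvfrom(1024)
                for line in data.decode(errors="ignore").splitlines():
                    if line.lower().startswith("location:"):
                        found.append(line.split(":", 1)[1].strip())
        except socket.timeout:
            pass
        finally:
            sock.close()
        return found

    print(discover_rokus())  # e.g. ['http://192.168.1.50:8060/']
    ```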

    Control your Roku device with the app

    - After you have connected to your Roku device, you can use the app as a remote control for it. You can use the touchpad to swipe left, right, up, or down to navigate the menus and content. You can also tap on the touchpad to select an item. You can use the buttons on the bottom of the screen to perform various actions, such as volume, power, home, back, options, search, channels, and favorites. You can also access some premium features by tapping on the menu icon on the top left corner of the screen. These include voice search, private listening, keyboard input, channel launch, and more.
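
    As background on how a remote app (official or modded) actually drives the Roku once it knows the device's IP address: Roku publishes a simple HTTP API, the External Control Protocol (ECP), served on port 8060 of the device. The sketch below sends a key press and lists the installed channels. It is an illustration of the protocol only; the IP address is a placeholder, and nothing here is taken from the app's source.

    ```python
    # Minimal sketch of controlling a Roku over its External Control Protocol (ECP).
    # Assumption: 192.168.1.50 is a placeholder for your Roku's IP on the local network.
    import urllib.request

    ROKU_IP = "192.168.1.50"            # replace with the IP shown on the connection screen
    BASE = f"http://{ROKU_IP}:8060"     # ECP listens on port 8060

    def keypress(key: str) -> None:
        """Send a single remote-control key press (e.g. 'Home', 'Up', 'Select')."""
        req = urllib.request.Request(f"{BASE}/keypress/{key}", data=b"", method="POST")
        urllib.request.urlopen(req, timeout=5)

    def list_channels() -> str:
        """Return the XML list of installed channels (what the 'channels' button shows)."""
        with urllib.request.urlopen(f"{BASE}/query/apps", timeout=5) as resp:
            return resp.read().decode("utf-8")

    if __name__ == "__main__":
        keypress("Home")          # jump to the Roku home screen
        print(list_channels())    # inspect installed channels/apps
    ```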

    Conclusion

    -

    Summary of the main points

    - Roku Remote Full APK is a modified version of the official Roku Remote app that lets you use your Android device as a remote control for your Roku device. It has no ads or in-app purchases and unlocks some premium features that are not available in the official app. It is easy to download, install, and use, as long as you have a compatible Roku device and Android device that are connected to the same Wi-Fi network.

    Call to action and recommendation

    - If you want to enjoy the full potential of your Roku device without spending any money or dealing with any ads or limitations, you should try Roku Remote Full APK. It is a free and reliable app that gives you more control and convenience over your streaming experience. You can download it from a trusted source and follow the steps in this article to install and use it. You will not regret it!

    FAQs

    -

    Is Roku Remote Full APK safe and legal?

    - Roku Remote Full APK is safe and legal to use, as long as you download it from a trusted source and do not use it for any malicious purposes. It does not contain any viruses or malware that can harm your device or data. It also does not violate any terms of service or policies of Roku or Google Play Store.

    What are the differences between Roku Remote Full APK and the official app?

    Roku Remote Full APK is a modified version of the official app that removes some limitations and adds some extra features. Some of the differences are:
    - Roku Remote Full APK has no ads or in-app purchases.
    - Roku Remote Full APK unlocks some premium features that are not available in the official app, such as voice search, private listening, keyboard input, channel launch, and more.
    - Roku Remote Full APK has a larger touchpad for easier navigation.
    - Roku Remote Full APK has a power button to turn on or off your Roku device or TV.

    Does Roku Remote Full APK work with all Roku devices?

    - Roku Remote Full APK works with most Roku devices that support Wi-Fi connectivity, such as players, smart TVs, soundbars, and more. However, some older models may not be compatible with the app. To check if your Roku device is compatible with the app, go to Settings > System > About on your Roku device and look for the model number. Then compare it with the list of compatible devices on the website where you downloaded the app.

    Can I use Roku Remote Full APK without internet connection?

    - No, you cannot use Roku Remote Full APK without internet connection. The app requires both your Roku device and your Android device to be connected to the same Wi-Fi network in order to communicate with each other. Without internet connection, you will not be able to scan for your Roku device or control it with the app.

    How can I update Roku Remote Full APK?

    - To update Roku Remote Full APK, you need to download the latest version of the app from a trusted source and install it over the existing one. You do not need to uninstall or delete the previous version of the app. The new version will overwrite it and update the app with the new features and bug fixes. You can also check for updates by tapping on the menu icon on the top left corner of the screen and selecting Check for updates.

    I hope you found this article helpful and informative. If you have any questions or feedback, please feel free to leave a comment below. Thank you for reading and happy streaming!

    -

    roku remote full apk


    Download >> https://urlca.com/2uOce4



    -
    -
    \ No newline at end of file diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/lama/saicinpainting/training/losses/segmentation.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/lama/saicinpainting/training/losses/segmentation.py deleted file mode 100644 index 3d4a9f94eaae84722db584277dbbf9bc41ede357..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/lama/saicinpainting/training/losses/segmentation.py +++ /dev/null @@ -1,43 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as F - -from .constants import weights as constant_weights - - -class CrossEntropy2d(nn.Module): - def __init__(self, reduction="mean", ignore_label=255, weights=None, *args, **kwargs): - """ - weight (Tensor, optional): a manual rescaling weight given to each class. - If given, has to be a Tensor of size "nclasses" - """ - super(CrossEntropy2d, self).__init__() - self.reduction = reduction - self.ignore_label = ignore_label - self.weights = weights - if self.weights is not None: - device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') - self.weights = torch.FloatTensor(constant_weights[weights]).to(device) - - def forward(self, predict, target): - """ - Args: - predict:(n, c, h, w) - target:(n, 1, h, w) - """ - target = target.long() - assert not target.requires_grad - assert predict.dim() == 4, "{0}".format(predict.size()) - assert target.dim() == 4, "{0}".format(target.size()) - assert predict.size(0) == target.size(0), "{0} vs {1} ".format(predict.size(0), target.size(0)) - assert target.size(1) == 1, "{0}".format(target.size(1)) - assert predict.size(2) == target.size(2), "{0} vs {1} ".format(predict.size(2), target.size(2)) - assert predict.size(3) == target.size(3), "{0} vs {1} ".format(predict.size(3), target.size(3)) - target = target.squeeze(1) - n, c, h, w = predict.size() - target_mask = (target >= 0) * (target != self.ignore_label) - target = target[target_mask] - predict = predict.transpose(1, 2).transpose(2, 3).contiguous() - predict = predict[target_mask.view(n, h, w, 1).repeat(1, 1, 1, c)].view(-1, c) - loss = F.cross_entropy(predict, target, weight=self.weights, reduction=self.reduction) - return loss diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmseg/models/segmentors/base.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmseg/models/segmentors/base.py deleted file mode 100644 index a12d8beb8ea40bfa234197eddb4d3ef40dbfeb6f..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/mmpkg/mmseg/models/segmentors/base.py +++ /dev/null @@ -1,273 +0,0 @@ -import logging -import warnings -from abc import ABCMeta, abstractmethod -from collections import OrderedDict - -import annotator.mmpkg.mmcv as mmcv -import numpy as np -import torch -import torch.distributed as dist -import torch.nn as nn -from annotator.mmpkg.mmcv.runner import auto_fp16 - - -class BaseSegmentor(nn.Module): - """Base class for segmentors.""" - - __metaclass__ = ABCMeta - - def __init__(self): - super(BaseSegmentor, self).__init__() - self.fp16_enabled = False - - @property - def with_neck(self): - """bool: whether the segmentor has neck""" - return hasattr(self, 'neck') and self.neck is not None - - @property - def with_auxiliary_head(self): - """bool: whether the segmentor has auxiliary head""" - return hasattr(self, - 'auxiliary_head') and self.auxiliary_head is not 
None - - @property - def with_decode_head(self): - """bool: whether the segmentor has decode head""" - return hasattr(self, 'decode_head') and self.decode_head is not None - - @abstractmethod - def extract_feat(self, imgs): - """Placeholder for extract features from images.""" - pass - - @abstractmethod - def encode_decode(self, img, img_metas): - """Placeholder for encode images with backbone and decode into a - semantic segmentation map of the same size as input.""" - pass - - @abstractmethod - def forward_train(self, imgs, img_metas, **kwargs): - """Placeholder for Forward function for training.""" - pass - - @abstractmethod - def simple_test(self, img, img_meta, **kwargs): - """Placeholder for single image test.""" - pass - - @abstractmethod - def aug_test(self, imgs, img_metas, **kwargs): - """Placeholder for augmentation test.""" - pass - - def init_weights(self, pretrained=None): - """Initialize the weights in segmentor. - - Args: - pretrained (str, optional): Path to pre-trained weights. - Defaults to None. - """ - if pretrained is not None: - logger = logging.getLogger() - logger.info(f'load model from: {pretrained}') - - def forward_test(self, imgs, img_metas, **kwargs): - """ - Args: - imgs (List[Tensor]): the outer list indicates test-time - augmentations and inner Tensor should have a shape NxCxHxW, - which contains all images in the batch. - img_metas (List[List[dict]]): the outer list indicates test-time - augs (multiscale, flip, etc.) and the inner list indicates - images in a batch. - """ - for var, name in [(imgs, 'imgs'), (img_metas, 'img_metas')]: - if not isinstance(var, list): - raise TypeError(f'{name} must be a list, but got ' - f'{type(var)}') - - num_augs = len(imgs) - if num_augs != len(img_metas): - raise ValueError(f'num of augmentations ({len(imgs)}) != ' - f'num of image meta ({len(img_metas)})') - # all images in the same aug batch all of the same ori_shape and pad - # shape - for img_meta in img_metas: - ori_shapes = [_['ori_shape'] for _ in img_meta] - assert all(shape == ori_shapes[0] for shape in ori_shapes) - img_shapes = [_['img_shape'] for _ in img_meta] - assert all(shape == img_shapes[0] for shape in img_shapes) - pad_shapes = [_['pad_shape'] for _ in img_meta] - assert all(shape == pad_shapes[0] for shape in pad_shapes) - - if num_augs == 1: - return self.simple_test(imgs[0], img_metas[0], **kwargs) - else: - return self.aug_test(imgs, img_metas, **kwargs) - - @auto_fp16(apply_to=('img', )) - def forward(self, img, img_metas, return_loss=True, **kwargs): - """Calls either :func:`forward_train` or :func:`forward_test` depending - on whether ``return_loss`` is ``True``. - - Note this setting will change the expected inputs. When - ``return_loss=True``, img and img_meta are single-nested (i.e. Tensor - and List[dict]), and when ``resturn_loss=False``, img and img_meta - should be double nested (i.e. List[Tensor], List[List[dict]]), with - the outer list indicating test time augmentations. - """ - if return_loss: - return self.forward_train(img, img_metas, **kwargs) - else: - return self.forward_test(img, img_metas, **kwargs) - - def train_step(self, data_batch, optimizer, **kwargs): - """The iteration step during training. - - This method defines an iteration step during training, except for the - back propagation and optimizer updating, which are done in an optimizer - hook. Note that in some complicated cases or models, the whole process - including back propagation and optimizer updating is also defined in - this method, such as GAN. 
- - Args: - data (dict): The output of dataloader. - optimizer (:obj:`torch.optim.Optimizer` | dict): The optimizer of - runner is passed to ``train_step()``. This argument is unused - and reserved. - - Returns: - dict: It should contain at least 3 keys: ``loss``, ``log_vars``, - ``num_samples``. - ``loss`` is a tensor for back propagation, which can be a - weighted sum of multiple losses. - ``log_vars`` contains all the variables to be sent to the - logger. - ``num_samples`` indicates the batch size (when the model is - DDP, it means the batch size on each GPU), which is used for - averaging the logs. - """ - losses = self(**data_batch) - loss, log_vars = self._parse_losses(losses) - - outputs = dict( - loss=loss, - log_vars=log_vars, - num_samples=len(data_batch['img_metas'])) - - return outputs - - def val_step(self, data_batch, **kwargs): - """The iteration step during validation. - - This method shares the same signature as :func:`train_step`, but used - during val epochs. Note that the evaluation after training epochs is - not implemented with this method, but an evaluation hook. - """ - output = self(**data_batch, **kwargs) - return output - - @staticmethod - def _parse_losses(losses): - """Parse the raw outputs (losses) of the network. - - Args: - losses (dict): Raw output of the network, which usually contain - losses and other necessary information. - - Returns: - tuple[Tensor, dict]: (loss, log_vars), loss is the loss tensor - which may be a weighted sum of all losses, log_vars contains - all the variables to be sent to the logger. - """ - log_vars = OrderedDict() - for loss_name, loss_value in losses.items(): - if isinstance(loss_value, torch.Tensor): - log_vars[loss_name] = loss_value.mean() - elif isinstance(loss_value, list): - log_vars[loss_name] = sum(_loss.mean() for _loss in loss_value) - else: - raise TypeError( - f'{loss_name} is not a tensor or list of tensors') - - loss = sum(_value for _key, _value in log_vars.items() - if 'loss' in _key) - - log_vars['loss'] = loss - for loss_name, loss_value in log_vars.items(): - # reduce loss when distributed training - if dist.is_available() and dist.is_initialized(): - loss_value = loss_value.data.clone() - dist.all_reduce(loss_value.div_(dist.get_world_size())) - log_vars[loss_name] = loss_value.item() - - return loss, log_vars - - def show_result(self, - img, - result, - palette=None, - win_name='', - show=False, - wait_time=0, - out_file=None, - opacity=0.5): - """Draw `result` over `img`. - - Args: - img (str or Tensor): The image to be displayed. - result (Tensor): The semantic segmentation results to draw over - `img`. - palette (list[list[int]]] | np.ndarray | None): The palette of - segmentation map. If None is given, random palette will be - generated. Default: None - win_name (str): The window name. - wait_time (int): Value of waitKey param. - Default: 0. - show (bool): Whether to show the image. - Default: False. - out_file (str or None): The filename to write the image. - Default: None. - opacity(float): Opacity of painted segmentation map. - Default 0.5. - Must be in (0, 1] range. 
- Returns: - img (Tensor): Only if not `show` or `out_file` - """ - img = mmcv.imread(img) - img = img.copy() - seg = result[0] - if palette is None: - if self.PALETTE is None: - palette = np.random.randint( - 0, 255, size=(len(self.CLASSES), 3)) - else: - palette = self.PALETTE - palette = np.array(palette) - assert palette.shape[0] == len(self.CLASSES) - assert palette.shape[1] == 3 - assert len(palette.shape) == 2 - assert 0 < opacity <= 1.0 - color_seg = np.zeros((seg.shape[0], seg.shape[1], 3), dtype=np.uint8) - for label, color in enumerate(palette): - color_seg[seg == label, :] = color - # convert to BGR - color_seg = color_seg[..., ::-1] - - img = img * (1 - opacity) + color_seg * opacity - img = img.astype(np.uint8) - # if out_file specified, do not show image in window - if out_file is not None: - show = False - - if show: - mmcv.imshow(img, win_name, wait_time) - if out_file is not None: - mmcv.imwrite(img, out_file) - - if not (show or out_file): - warnings.warn('show==False and out_file is not specified, only ' - 'result image will be returned') - return img diff --git a/spaces/cownclown/Image-and-3D-Model-Creator/PIFu/lib/model/DepthNormalizer.py b/spaces/cownclown/Image-and-3D-Model-Creator/PIFu/lib/model/DepthNormalizer.py deleted file mode 100644 index 84908ec131771b8d42f32535ab856017fe1143a1..0000000000000000000000000000000000000000 --- a/spaces/cownclown/Image-and-3D-Model-Creator/PIFu/lib/model/DepthNormalizer.py +++ /dev/null @@ -1,18 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as F - - -class DepthNormalizer(nn.Module): - def __init__(self, opt): - super(DepthNormalizer, self).__init__() - self.opt = opt - - def forward(self, z, calibs=None, index_feat=None): - ''' - Normalize z_feature - :param z_feat: [B, 1, N] depth value for z in the image coordinate system - :return: - ''' - z_feat = z * (self.opt.loadSize // 2) / self.opt.z_size - return z_feat diff --git a/spaces/cxeep/whisper-webui/src/vad.py b/spaces/cxeep/whisper-webui/src/vad.py deleted file mode 100644 index c3c34480cac59ff08d07f846e47bea127beefccc..0000000000000000000000000000000000000000 --- a/spaces/cxeep/whisper-webui/src/vad.py +++ /dev/null @@ -1,404 +0,0 @@ -from abc import ABC, abstractmethod -from collections import Counter -from typing import Any, Iterator, List, Dict - -from pprint import pprint - -# Workaround for https://github.com/tensorflow/tensorflow/issues/48797 -try: - import tensorflow as tf -except ModuleNotFoundError: - # Error handling - pass - -import torch - -import ffmpeg -import numpy as np - -from src.utils import format_timestamp - -# Defaults for Silero -# TODO: Make these configurable? 
- -SPEECH_TRESHOLD = 0.3 -MAX_SILENT_PERIOD = 10 # seconds -MAX_MERGE_SIZE = 150 # Do not create segments larger than 2.5 minutes - -SEGMENT_PADDING_LEFT = 1 # Start detected text segment early -SEGMENT_PADDING_RIGHT = 1 # End detected segments late - -# Whether to attempt to transcribe non-speech -TRANSCRIBE_NON_SPEECH = False - -# Minimum size of segments to process -MIN_SEGMENT_DURATION = 1 - -VAD_MAX_PROCESSING_CHUNK = 60 * 60 # 60 minutes of audio - -class AbstractTranscription(ABC): - def __init__(self, segment_padding_left: int = None, segment_padding_right = None, max_silent_period: int = None, max_merge_size: int = None, transcribe_non_speech: bool = False): - self.sampling_rate = 16000 - self.segment_padding_left = segment_padding_left - self.segment_padding_right = segment_padding_right - self.max_silent_period = max_silent_period - self.max_merge_size = max_merge_size - self.transcribe_non_speech = transcribe_non_speech - - def get_audio_segment(self, str, start_time: str = None, duration: str = None): - return load_audio(str, self.sampling_rate, start_time, duration) - - @abstractmethod - def get_transcribe_timestamps(self, audio: str): - """ - Get the start and end timestamps of the sections that should be transcribed by this VAD method. - - Parameters - ---------- - audio: str - The audio file. - - Returns - ------- - A list of start and end timestamps, in fractional seconds. - """ - return - - def transcribe(self, audio: str, whisperCallable): - """ - Transcribe the given audo file. - - Parameters - ---------- - audio: str - The audio file. - - whisperCallable: Callable[[Union[str, np.ndarray, torch.Tensor]], dict[str, Union[dict, Any]]] - The callback that is used to invoke Whisper on an audio file/buffer. - - Returns - ------- - A list of start and end timestamps, in fractional seconds. 
- """ - - # get speech timestamps from full audio file - seconds_timestamps = self.get_transcribe_timestamps(audio) - - padded = self.pad_timestamps(seconds_timestamps, self.segment_padding_left, self.segment_padding_right) - merged = self.merge_timestamps(padded, self.max_silent_period, self.max_merge_size) - - print("Timestamps:") - pprint(merged) - - if self.transcribe_non_speech: - max_audio_duration = get_audio_duration(audio) - - # Expand segments to include the gaps between them - merged = self.expand_gaps(merged, total_duration=max_audio_duration) - - print("Transcribing non-speech:") - pprint(merged) - - result = { - 'text': "", - 'segments': [], - 'language': "" - } - languageCounter = Counter() - - # For each time segment, run whisper - for segment in merged: - segment_start = segment['start'] - segment_end = segment['end'] - segment_expand_amount = segment.get('expand_amount', 0) - - segment_duration = segment_end - segment_start - - if segment_duration < MIN_SEGMENT_DURATION: - continue; - - segment_audio = self.get_audio_segment(audio, start_time = str(segment_start), duration = str(segment_duration)) - - print("Running whisper from ", format_timestamp(segment_start), " to ", format_timestamp(segment_end), ", duration: ", segment_duration, "expanded: ", segment_expand_amount) - segment_result = whisperCallable(segment_audio) - - adjusted_segments = self.adjust_timestamp(segment_result["segments"], adjust_seconds=segment_start, max_source_time=segment_duration) - - # Append to output - result['text'] += segment_result['text'] - result['segments'].extend(adjusted_segments) - - # Increment detected language - languageCounter[segment_result['language']] += 1 - - if len(languageCounter) > 0: - result['language'] = languageCounter.most_common(1)[0][0] - - return result - - def include_gaps(self, segments: Iterator[dict], min_gap_length: float, total_duration: float): - result = [] - last_end_time = 0 - - for segment in segments: - segment_start = float(segment['start']) - segment_end = float(segment['end']) - - if (last_end_time != segment_start): - delta = segment_start - last_end_time - - if (min_gap_length is None or delta >= min_gap_length): - result.append( { 'start': last_end_time, 'end': segment_start, 'gap': True } ) - - last_end_time = segment_end - result.append(segment) - - # Also include total duration if specified - if (total_duration is not None and last_end_time < total_duration): - delta = total_duration - segment_start - - if (min_gap_length is None or delta >= min_gap_length): - result.append( { 'start': last_end_time, 'end': total_duration, 'gap': True } ) - - return result - - # Expand the end time of each segment to the start of the next segment - def expand_gaps(self, segments: List[Dict[str, Any]], total_duration: float): - result = [] - - if len(segments) == 0: - return result - - # Add gap at the beginning if needed - if (segments[0]['start'] > 0): - result.append({ 'start': 0, 'end': segments[0]['start'], 'gap': True } ) - - for i in range(len(segments) - 1): - current_segment = segments[i] - next_segment = segments[i + 1] - - delta = next_segment['start'] - current_segment['end'] - - # Expand if the gap actually exists - if (delta >= 0): - current_segment = current_segment.copy() - current_segment['expand_amount'] = delta - current_segment['end'] = next_segment['start'] - - result.append(current_segment) - - last_segment = result[-1] - - # Also include total duration if specified - if (total_duration is not None): - last_segment = result[-1] - - if 
(last_segment['end'] < total_duration): - last_segment = last_segment.copy() - last_segment['end'] = total_duration - result[-1] = last_segment - - return result - - def adjust_timestamp(self, segments: Iterator[dict], adjust_seconds: float, max_source_time: float = None): - result = [] - - for segment in segments: - segment_start = float(segment['start']) - segment_end = float(segment['end']) - - # Filter segments? - if (max_source_time is not None): - if (segment_start > max_source_time): - continue - segment_end = min(max_source_time, segment_end) - - new_segment = segment.copy() - - # Add to start and end - new_segment['start'] = segment_start + adjust_seconds - new_segment['end'] = segment_end + adjust_seconds - result.append(new_segment) - return result - - def pad_timestamps(self, timestamps: List[Dict[str, Any]], padding_left: float, padding_right: float): - if (padding_left == 0 and padding_right == 0): - return timestamps - - result = [] - prev_entry = None - - for i in range(len(timestamps)): - curr_entry = timestamps[i] - next_entry = timestamps[i + 1] if i < len(timestamps) - 1 else None - - segment_start = curr_entry['start'] - segment_end = curr_entry['end'] - - if padding_left is not None: - segment_start = max(prev_entry['end'] if prev_entry else 0, segment_start - padding_left) - if padding_right is not None: - segment_end = segment_end + padding_right - - # Do not pad past the next segment - if (next_entry is not None): - segment_end = min(next_entry['start'], segment_end) - - new_entry = { 'start': segment_start, 'end': segment_end } - prev_entry = new_entry - result.append(new_entry) - - return result - - def merge_timestamps(self, timestamps: List[Dict[str, Any]], max_merge_gap: float, max_merge_size: float): - if max_merge_gap is None: - return timestamps - - result = [] - current_entry = None - - for entry in timestamps: - if current_entry is None: - current_entry = entry - continue - - # Get distance to the previous entry - distance = entry['start'] - current_entry['end'] - current_entry_size = current_entry['end'] - current_entry['start'] - - if distance <= max_merge_gap and (max_merge_size is None or current_entry_size <= max_merge_size): - # Merge - current_entry['end'] = entry['end'] - else: - # Output current entry - result.append(current_entry) - current_entry = entry - - # Add final entry - if current_entry is not None: - result.append(current_entry) - - return result - - def multiply_timestamps(self, timestamps: List[Dict[str, Any]], factor: float): - result = [] - - for entry in timestamps: - start = entry['start'] - end = entry['end'] - - result.append({ - 'start': start * factor, - 'end': end * factor - }) - return result - -class VadSileroTranscription(AbstractTranscription): - def __init__(self, segment_padding_left=SEGMENT_PADDING_LEFT, segment_padding_right=SEGMENT_PADDING_RIGHT, - max_silent_period=MAX_SILENT_PERIOD, max_merge_size=MAX_MERGE_SIZE, transcribe_non_speech: bool = False, - copy = None): - super().__init__(segment_padding_left=segment_padding_left, segment_padding_right=segment_padding_right, - max_silent_period=max_silent_period, max_merge_size=max_merge_size, transcribe_non_speech=transcribe_non_speech) - - if copy: - self.model = copy.model - self.get_speech_timestamps = copy.get_speech_timestamps - else: - self.model, utils = torch.hub.load(repo_or_dir='snakers4/silero-vad', model='silero_vad') - (self.get_speech_timestamps, _, _, _, _) = utils - - def get_transcribe_timestamps(self, audio: str): - audio_duration = 
get_audio_duration(audio) - result = [] - - # Divide procesisng of audio into chunks - chunk_start = 0.0 - - while (chunk_start < audio_duration): - chunk_duration = min(audio_duration - chunk_start, VAD_MAX_PROCESSING_CHUNK) - - print("Processing VAD in chunk from {} to {}".format(format_timestamp(chunk_start), format_timestamp(chunk_start + chunk_duration))) - wav = self.get_audio_segment(audio, str(chunk_start), str(chunk_duration)) - - sample_timestamps = self.get_speech_timestamps(wav, self.model, sampling_rate=self.sampling_rate, threshold=SPEECH_TRESHOLD) - seconds_timestamps = self.multiply_timestamps(sample_timestamps, factor=1 / self.sampling_rate) - adjusted = self.adjust_timestamp(seconds_timestamps, adjust_seconds=chunk_start, max_source_time=chunk_start + chunk_duration) - - #pprint(adjusted) - - result.extend(adjusted) - chunk_start += chunk_duration - - return result - -# A very simple VAD that just marks every N seconds as speech -class VadPeriodicTranscription(AbstractTranscription): - def __init__(self, periodic_duration: int): - super().__init__() - self.periodic_duration = periodic_duration - - def get_transcribe_timestamps(self, audio: str): - # Get duration in seconds - audio_duration = get_audio_duration(audio) - result = [] - - # Generate a timestamp every N seconds - start_timestamp = 0 - - while (start_timestamp < audio_duration): - end_timestamp = min(start_timestamp + self.periodic_duration, audio_duration) - segment_duration = end_timestamp - start_timestamp - - # Minimum duration is 1 second - if (segment_duration >= 1): - result.append( { 'start': start_timestamp, 'end': end_timestamp } ) - - start_timestamp = end_timestamp - - return result - -def get_audio_duration(file: str): - return float(ffmpeg.probe(file)["format"]["duration"]) - -def load_audio(file: str, sample_rate: int = 16000, - start_time: str = None, duration: str = None): - """ - Open an audio file and read as mono waveform, resampling as necessary - - Parameters - ---------- - file: str - The audio file to open - - sr: int - The sample rate to resample the audio if necessary - - start_time: str - The start time, using the standard FFMPEG time duration syntax, or None to disable. - - duration: str - The duration, using the standard FFMPEG time duration syntax, or None to disable. - - Returns - ------- - A NumPy array containing the audio waveform, in float32 dtype. - """ - try: - inputArgs = {'threads': 0} - - if (start_time is not None): - inputArgs['ss'] = start_time - if (duration is not None): - inputArgs['t'] = duration - - # This launches a subprocess to decode audio while down-mixing and resampling as necessary. - # Requires the ffmpeg CLI and `ffmpeg-python` package to be installed. 
- out, _ = ( - ffmpeg.input(file, **inputArgs) - .output("-", format="s16le", acodec="pcm_s16le", ac=1, ar=sample_rate) - .run(cmd="ffmpeg", capture_stdout=True, capture_stderr=True) - ) - except ffmpeg.Error as e: - raise RuntimeError(f"Failed to load audio: {e.stderr.decode()}") - - return np.frombuffer(out, np.int16).flatten().astype(np.float32) / 32768.0 \ No newline at end of file diff --git a/spaces/cybercorejapan/human-detection-docker/projects/human_detection/engine/visualizer.py b/spaces/cybercorejapan/human-detection-docker/projects/human_detection/engine/visualizer.py deleted file mode 100644 index 992d2d3caea773bed872c00b712114367cc516c1..0000000000000000000000000000000000000000 --- a/spaces/cybercorejapan/human-detection-docker/projects/human_detection/engine/visualizer.py +++ /dev/null @@ -1,51 +0,0 @@ -from typing import List, Dict -import cv2 -import numpy as np -from models.engine.visualizer import BaseVisualizer - -class Visualizer(BaseVisualizer): - - def __init__(self, fps: int=-1, min_width: int=-1): - """ Visualizer class for visualization (track_results + count_results). - - Args: - class_map_ids (Dict): class mapping dictionary to map model's class to original class. Eg {0: 1, 1: 0, 2: 2, 3: 3} mean we swap class ID between 0 and 1. - fps (int): FPS for output video. If fps = -1, it will have same fps as input video. - min_width (int): minimum width for output video (height will be scaled to keep aspect ratio as input video). If min_width = -1, it will have same resolution as input video. - """ - class_names = ['pedestrian'] - super().__init__(class_names, fps, min_width) - - def visualize(self, img: np.ndarray, dettrack_at_frame_id: List[Dict]=None, show_conf: bool=True): - """ Function to visualize (track_results + count_results) a frame. - - Args: - img (np.ndarray): image need to be visualized. - dettrack_at_frame_id (List[Dict]): batch of track results which can be obtained from Tracker class. - show_conf (bool): Visualize confidence of track results or not. - """ - - # Draw tracking - if (dettrack_at_frame_id): - boxes = dettrack_at_frame_id["boxes"] - # classes = dettrack_at_frame_id["labels"] - ids = dettrack_at_frame_id["ids"] - for bbox, id_ in zip(boxes, ids): - id_ = int(id_) - score = bbox[4] - color = self.get_color(id_) - label = f'{id_}' + (f' {score:.2f}' if (show_conf) else '') - tl, tf = 2, 1 - c1, c2 = (int(bbox[0]), int(bbox[1])), (int(bbox[2]), int(bbox[3])) - img = cv2.rectangle(img, c1, c2, color, thickness=tl, lineType=cv2.LINE_AA) - t_size = cv2.getTextSize(label, 0, fontScale=tl / 3, thickness=tf)[0] - c2 = c1[0] + t_size[0], c1[1] - t_size[1] - 3 - img = cv2.rectangle(img, c1, c2, color, -1, cv2.LINE_AA) - img = cv2.putText(img, label, (c1[0], c1[1] - 2), 0, tl / 3, [225, 255, 255], thickness=tf, lineType=cv2.LINE_AA) - - if (img.shape[0] != self.height or img.shape[1] != self.width): - img = cv2.resize(img, (self.width, self.height)) - self.writer.write(img) - - return img - diff --git a/spaces/cyhcctc/cyhbingo/README.md b/spaces/cyhcctc/cyhbingo/README.md deleted file mode 100644 index 5d6936218874c647b5d22e13ad4be7edb8936f92..0000000000000000000000000000000000000000 --- a/spaces/cyhcctc/cyhbingo/README.md +++ /dev/null @@ -1,28 +0,0 @@ ---- -title: bingo -emoji: 😊 -colorFrom: red -colorTo: red -sdk: docker -license: mit -duplicated_from: hf4all/bingo ---- - -
    - -# Bingo - -Bingo,一个让你呼吸顺畅 New Bing。 - -高度还原 New Bing 网页版的主要操作,国内可用,兼容绝大多数微软 Bing AI 的功能,可自行部署使用。 - -![Github stars](https://badgen.net/github/stars/weaigc/bingo?icon=github&label=stars) -![Gthub issues](https://img.shields.io/github/issues/weaigc/bingo) -[![docker build](https://github.com/weaigc/bingo/actions/workflows/docker.yml/badge.svg)](https://hub.docker.com/repository/docker/weaigc/bingo/) -[![docker hub](https://badgen.net/docker/size/weaigc/bingo?icon=docker&label=image%20size)](https://hub.docker.com/repository/docker/weaigc/bingo/) -[![MIT License](https://img.shields.io/badge/license-MIT-97c50f)](https://github.com/weaigc/bingo/blob/main/license) - -问题反馈请前往 https://github.com/weaigc/bingo/issues -
    - - diff --git a/spaces/dakaiye/dky_xuexi/docs/waifu_plugin/autoload.js b/spaces/dakaiye/dky_xuexi/docs/waifu_plugin/autoload.js deleted file mode 100644 index 3464a5cd44b0d4e1b0f2528bd01fc1793275b964..0000000000000000000000000000000000000000 --- a/spaces/dakaiye/dky_xuexi/docs/waifu_plugin/autoload.js +++ /dev/null @@ -1,30 +0,0 @@ -try { - $("").attr({href: "file=docs/waifu_plugin/waifu.css", rel: "stylesheet", type: "text/css"}).appendTo('head'); - $('body').append('
    '); - $.ajax({url: "file=docs/waifu_plugin/waifu-tips.js", dataType:"script", cache: true, success: function() { - $.ajax({url: "file=docs/waifu_plugin/live2d.js", dataType:"script", cache: true, success: function() { - /* 可直接修改部分参数 */ - live2d_settings['hitokotoAPI'] = "hitokoto.cn"; // 一言 API - live2d_settings['modelId'] = 5; // 默认模型 ID - live2d_settings['modelTexturesId'] = 1; // 默认材质 ID - live2d_settings['modelStorage'] = false; // 不储存模型 ID - live2d_settings['waifuSize'] = '210x187'; - live2d_settings['waifuTipsSize'] = '187x52'; - live2d_settings['canSwitchModel'] = true; - live2d_settings['canSwitchTextures'] = true; - live2d_settings['canSwitchHitokoto'] = false; - live2d_settings['canTakeScreenshot'] = false; - live2d_settings['canTurnToHomePage'] = false; - live2d_settings['canTurnToAboutPage'] = false; - live2d_settings['showHitokoto'] = false; // 显示一言 - live2d_settings['showF12Status'] = false; // 显示加载状态 - live2d_settings['showF12Message'] = false; // 显示看板娘消息 - live2d_settings['showF12OpenMsg'] = false; // 显示控制台打开提示 - live2d_settings['showCopyMessage'] = false; // 显示 复制内容 提示 - live2d_settings['showWelcomeMessage'] = true; // 显示进入面页欢迎词 - - /* 在 initModel 前添加 */ - initModel("file=docs/waifu_plugin/waifu-tips.json"); - }}); - }}); -} catch(err) { console.log("[Error] JQuery is not defined.") } diff --git a/spaces/davda54/chat-nort5/question_detection_norbert3_small/README.md b/spaces/davda54/chat-nort5/question_detection_norbert3_small/README.md deleted file mode 100644 index f81d6247c082c0c2707c47ac62b0d4e3d558d848..0000000000000000000000000000000000000000 --- a/spaces/davda54/chat-nort5/question_detection_norbert3_small/README.md +++ /dev/null @@ -1,46 +0,0 @@ ---- -language: -- 'no' -- nb -- nn -inference: false -tags: -- BERT -- NorBERT -- Norwegian -- encoder -license: cc-by-4.0 ---- - -# NorBERT 3 small - - -## Other sizes: -- [NorBERT 3 xs (15M)](https://huggingface.co/ltg/norbert3-xs) -- [NorBERT 3 small (40M)](https://huggingface.co/ltg/norbert3-small) -- [NorBERT 3 base (123M)](https://huggingface.co/ltg/norbert3-base) -- [NorBERT 3 large (323M)](https://huggingface.co/ltg/norbert3-large) - - -## Example usage - -This model currently needs a custom wrapper from `modeling_norbert.py`. Then you can use it like this: - -```python -import torch -from transformers import AutoTokenizer -from modeling_norbert import NorbertForMaskedLM - -tokenizer = AutoTokenizer.from_pretrained("path/to/folder") -bert = NorbertForMaskedLM.from_pretrained("path/to/folder") - -mask_id = tokenizer.convert_tokens_to_ids("[MASK]") -input_text = tokenizer("Nå ønsker de seg en[MASK] bolig.", return_tensors="pt") -output_p = bert(**input_text) -output_text = torch.where(input_text.input_ids == mask_id, output_p.logits.argmax(-1), input_text.input_ids) - -# should output: '[CLS] Nå ønsker de seg en ny bolig.[SEP]' -print(tokenizer.decode(output_text[0].tolist())) -``` - -The following classes are currently implemented: `NorbertForMaskedLM`, `NorbertForSequenceClassification`, `NorbertForTokenClassification`, `NorbertForQuestionAnswering` and `NorbertForMultipleChoice`. 
\ No newline at end of file diff --git a/spaces/dawood17/SayBot_Enchancer/CodeFormer/facelib/parsing/resnet.py b/spaces/dawood17/SayBot_Enchancer/CodeFormer/facelib/parsing/resnet.py deleted file mode 100644 index fec8e82cf64469fb51be21ad5130217052addbda..0000000000000000000000000000000000000000 --- a/spaces/dawood17/SayBot_Enchancer/CodeFormer/facelib/parsing/resnet.py +++ /dev/null @@ -1,69 +0,0 @@ -import torch.nn as nn -import torch.nn.functional as F - - -def conv3x3(in_planes, out_planes, stride=1): - """3x3 convolution with padding""" - return nn.Conv2d(in_planes, out_planes, kernel_size=3, stride=stride, padding=1, bias=False) - - -class BasicBlock(nn.Module): - - def __init__(self, in_chan, out_chan, stride=1): - super(BasicBlock, self).__init__() - self.conv1 = conv3x3(in_chan, out_chan, stride) - self.bn1 = nn.BatchNorm2d(out_chan) - self.conv2 = conv3x3(out_chan, out_chan) - self.bn2 = nn.BatchNorm2d(out_chan) - self.relu = nn.ReLU(inplace=True) - self.downsample = None - if in_chan != out_chan or stride != 1: - self.downsample = nn.Sequential( - nn.Conv2d(in_chan, out_chan, kernel_size=1, stride=stride, bias=False), - nn.BatchNorm2d(out_chan), - ) - - def forward(self, x): - residual = self.conv1(x) - residual = F.relu(self.bn1(residual)) - residual = self.conv2(residual) - residual = self.bn2(residual) - - shortcut = x - if self.downsample is not None: - shortcut = self.downsample(x) - - out = shortcut + residual - out = self.relu(out) - return out - - -def create_layer_basic(in_chan, out_chan, bnum, stride=1): - layers = [BasicBlock(in_chan, out_chan, stride=stride)] - for i in range(bnum - 1): - layers.append(BasicBlock(out_chan, out_chan, stride=1)) - return nn.Sequential(*layers) - - -class ResNet18(nn.Module): - - def __init__(self): - super(ResNet18, self).__init__() - self.conv1 = nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3, bias=False) - self.bn1 = nn.BatchNorm2d(64) - self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1) - self.layer1 = create_layer_basic(64, 64, bnum=2, stride=1) - self.layer2 = create_layer_basic(64, 128, bnum=2, stride=2) - self.layer3 = create_layer_basic(128, 256, bnum=2, stride=2) - self.layer4 = create_layer_basic(256, 512, bnum=2, stride=2) - - def forward(self, x): - x = self.conv1(x) - x = F.relu(self.bn1(x)) - x = self.maxpool(x) - - x = self.layer1(x) - feat8 = self.layer2(x) # 1/8 - feat16 = self.layer3(feat8) # 1/16 - feat32 = self.layer4(feat16) # 1/32 - return feat8, feat16, feat32 diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/components/bar_plot.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/components/bar_plot.py deleted file mode 100644 index 430b10f63b0647e163df5bdde5d065519e64ce04..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/components/bar_plot.py +++ /dev/null @@ -1,376 +0,0 @@ -"""gr.BarPlot() component.""" - -from __future__ import annotations - -from typing import Callable, Literal - -import altair as alt -import pandas as pd -from gradio_client.documentation import document, set_documentation_group - -from gradio.components.base import _Keywords -from gradio.components.plot import AltairPlot, Plot - -set_documentation_group("component") - - -@document() -class BarPlot(Plot): - """ - Create a bar plot. - - Preprocessing: this component does *not* accept input. - Postprocessing: expects a pandas dataframe with the data to plot. 
- - Demos: bar_plot, chicago-bikeshare-dashboard - """ - - def __init__( - self, - value: pd.DataFrame | Callable | None = None, - x: str | None = None, - y: str | None = None, - *, - color: str | None = None, - vertical: bool = True, - group: str | None = None, - title: str | None = None, - tooltip: list[str] | str | None = None, - x_title: str | None = None, - y_title: str | None = None, - color_legend_title: str | None = None, - group_title: str | None = None, - color_legend_position: Literal[ - "left", - "right", - "top", - "bottom", - "top-left", - "top-right", - "bottom-left", - "bottom-right", - "none", - ] - | None = None, - height: int | None = None, - width: int | None = None, - y_lim: list[int] | None = None, - caption: str | None = None, - interactive: bool | None = True, - label: str | None = None, - show_label: bool | None = None, - container: bool = True, - scale: int | None = None, - min_width: int = 160, - every: float | None = None, - visible: bool = True, - elem_id: str | None = None, - elem_classes: list[str] | str | None = None, - ): - """ - Parameters: - value: The pandas dataframe containing the data to display in a scatter plot. - x: Column corresponding to the x axis. - y: Column corresponding to the y axis. - color: The column to determine the bar color. Must be categorical (discrete values). - vertical: If True, the bars will be displayed vertically. If False, the x and y axis will be switched, displaying the bars horizontally. Default is True. - group: The column with which to split the overall plot into smaller subplots. - title: The title to display on top of the chart. - tooltip: The column (or list of columns) to display on the tooltip when a user hovers over a bar. - x_title: The title given to the x axis. By default, uses the value of the x parameter. - y_title: The title given to the y axis. By default, uses the value of the y parameter. - color_legend_title: The title given to the color legend. By default, uses the value of color parameter. - group_title: The label displayed on top of the subplot columns (or rows if vertical=True). Use an empty string to omit. - color_legend_position: The position of the color legend. If the string value 'none' is passed, this legend is omitted. For other valid position values see: https://vega.github.io/vega/docs/legends/#orientation. - height: The height of the plot in pixels. - width: The width of the plot in pixels. - y_lim: A tuple of list containing the limits for the y-axis, specified as [y_min, y_max]. - caption: The (optional) caption to display below the plot. - interactive: Whether users should be able to interact with the plot by panning or zooming with their mouse or trackpad. - label: The (optional) label to display on the top left corner of the plot. - show_label: Whether the label should be displayed. - every: If `value` is a callable, run the function 'every' number of seconds while the client connection is open. Has no effect otherwise. Queue must be enabled. The event can be accessed (e.g. to cancel it) via this component's .load_event attribute. - visible: Whether the plot should be visible. - elem_id: An optional string that is assigned as the id of this component in the HTML DOM. Can be used for targeting CSS styles. - elem_classes: An optional list of strings that are assigned as the classes of this component in the HTML DOM. Can be used for targeting CSS styles. 
- """ - self.x = x - self.y = y - self.color = color - self.vertical = vertical - self.group = group - self.group_title = group_title - self.tooltip = tooltip - self.title = title - self.x_title = x_title - self.y_title = y_title - self.color_legend_title = color_legend_title - self.group_title = group_title - self.color_legend_position = color_legend_position - self.y_lim = y_lim - self.caption = caption - self.interactive_chart = interactive - self.width = width - self.height = height - super().__init__( - value=value, - label=label, - show_label=show_label, - container=container, - scale=scale, - min_width=min_width, - visible=visible, - elem_id=elem_id, - elem_classes=elem_classes, - every=every, - ) - - def get_config(self): - config = super().get_config() - config["caption"] = self.caption - return config - - def get_block_name(self) -> str: - return "plot" - - @staticmethod - def update( - value: pd.DataFrame | dict | Literal[_Keywords.NO_VALUE] = _Keywords.NO_VALUE, - x: str | None = None, - y: str | None = None, - color: str | None = None, - vertical: bool = True, - group: str | None = None, - title: str | None = None, - tooltip: list[str] | str | None = None, - x_title: str | None = None, - y_title: str | None = None, - color_legend_title: str | None = None, - group_title: str | None = None, - color_legend_position: Literal[ - "left", - "right", - "top", - "bottom", - "top-left", - "top-right", - "bottom-left", - "bottom-right", - "none", - ] - | None = None, - height: int | None = None, - width: int | None = None, - y_lim: list[int] | None = None, - caption: str | None = None, - interactive: bool | None = None, - label: str | None = None, - show_label: bool | None = None, - container: bool | None = None, - scale: int | None = None, - min_width: int | None = None, - visible: bool | None = None, - ): - """Update an existing BarPlot component. - - If updating any of the plot properties (color, size, etc) the value, x, and y parameters must be specified. - - Parameters: - value: The pandas dataframe containing the data to display in a scatter plot. - x: Column corresponding to the x axis. - y: Column corresponding to the y axis. - color: The column to determine the bar color. Must be categorical (discrete values). - vertical: If True, the bars will be displayed vertically. If False, the x and y axis will be switched, displaying the bars horizontally. Default is True. - group: The column with which to split the overall plot into smaller subplots. - title: The title to display on top of the chart. - tooltip: The column (or list of columns) to display on the tooltip when a user hovers over a bar. - x_title: The title given to the x axis. By default, uses the value of the x parameter. - y_title: The title given to the y axis. By default, uses the value of the y parameter. - color_legend_title: The title given to the color legend. By default, uses the value of color parameter. - group_title: The label displayed on top of the subplot columns (or rows if vertical=True). Use an empty string to omit. - color_legend_position: The position of the color legend. If the string value 'none' is passed, this legend is omitted. For other valid position values see: https://vega.github.io/vega/docs/legends/#orientation. - height: The height of the plot in pixels. - width: The width of the plot in pixels. - y_lim: A tuple of list containing the limits for the y-axis, specified as [y_min, y_max]. - caption: The (optional) caption to display below the plot. 
- interactive: Whether users should be able to interact with the plot by panning or zooming with their mouse or trackpad. - label: The (optional) label to display on the top left corner of the plot. - show_label: Whether the label should be displayed. - visible: Whether the plot should be visible. - """ - properties = [ - x, - y, - color, - vertical, - group, - title, - tooltip, - x_title, - y_title, - color_legend_title, - group_title, - color_legend_position, - height, - width, - y_lim, - interactive, - ] - if any(properties): - if not isinstance(value, pd.DataFrame): - raise ValueError( - "In order to update plot properties the value parameter " - "must be provided, and it must be a Dataframe. Please pass a value " - "parameter to gr.BarPlot.update." - ) - if x is None or y is None: - raise ValueError( - "In order to update plot properties, the x and y axis data " - "must be specified. Please pass valid values for x an y to " - "gr.BarPlot.update." - ) - chart = BarPlot.create_plot(value, *properties) - value = {"type": "altair", "plot": chart.to_json(), "chart": "bar"} - - updated_config = { - "label": label, - "show_label": show_label, - "container": container, - "scale": scale, - "min_width": min_width, - "visible": visible, - "value": value, - "caption": caption, - "__type__": "update", - } - return updated_config - - @staticmethod - def create_plot( - value: pd.DataFrame, - x: str, - y: str, - color: str | None = None, - vertical: bool = True, - group: str | None = None, - title: str | None = None, - tooltip: list[str] | str | None = None, - x_title: str | None = None, - y_title: str | None = None, - color_legend_title: str | None = None, - group_title: str | None = None, - color_legend_position: Literal[ - "left", - "right", - "top", - "bottom", - "top-left", - "top-right", - "bottom-left", - "bottom-right", - "none", - ] - | None = None, - height: int | None = None, - width: int | None = None, - y_lim: list[int] | None = None, - interactive: bool | None = True, - ): - """Helper for creating the bar plot.""" - interactive = True if interactive is None else interactive - orientation = ( - {"field": group, "title": group_title if group_title is not None else group} - if group - else {} - ) - - x_title = x_title or x - y_title = y_title or y - - # If horizontal, switch x and y - if not vertical: - y, x = x, y - x = f"sum({x}):Q" - y_title, x_title = x_title, y_title - orientation = {"row": alt.Row(**orientation)} if orientation else {} # type: ignore - x_lim = y_lim - y_lim = None - else: - y = f"sum({y}):Q" - x_lim = None - orientation = {"column": alt.Column(**orientation)} if orientation else {} # type: ignore - - encodings = dict( - x=alt.X( - x, # type: ignore - title=x_title, # type: ignore - scale=AltairPlot.create_scale(x_lim), # type: ignore - ), - y=alt.Y( - y, # type: ignore - title=y_title, # type: ignore - scale=AltairPlot.create_scale(y_lim), # type: ignore - ), - **orientation, - ) - properties = {} - if title: - properties["title"] = title - if height: - properties["height"] = height - if width: - properties["width"] = width - - if color: - domain = value[color].unique().tolist() - range_ = list(range(len(domain))) - encodings["color"] = { - "field": color, - "type": "nominal", - "scale": {"domain": domain, "range": range_}, - "legend": AltairPlot.create_legend( - position=color_legend_position, title=color_legend_title or color - ), - } - - if tooltip: - encodings["tooltip"] = tooltip - - chart = ( - alt.Chart(value) # type: ignore - .mark_bar() # type: ignore - 
.encode(**encodings) - .properties(background="transparent", **properties) - ) - if interactive: - chart = chart.interactive() - - return chart - - def postprocess(self, y: pd.DataFrame | dict | None) -> dict[str, str] | None: - # if None or update - if y is None or isinstance(y, dict): - return y - if self.x is None or self.y is None: - raise ValueError("No value provided for required parameters `x` and `y`.") - chart = self.create_plot( - value=y, - x=self.x, - y=self.y, - color=self.color, - vertical=self.vertical, - group=self.group, - title=self.title, - tooltip=self.tooltip, - x_title=self.x_title, - y_title=self.y_title, - color_legend_title=self.color_legend_title, - color_legend_position=self.color_legend_position, # type: ignore - group_title=self.group_title, - y_lim=self.y_lim, - interactive=self.interactive_chart, - height=self.height, - width=self.width, - ) - - return {"type": "altair", "plot": chart.to_json(), "chart": "bar"} diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/templates.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/templates.py deleted file mode 100644 index 42ebb1a2d7b01acb18ae9a403d12494c1ae20c91..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/templates.py +++ /dev/null @@ -1,586 +0,0 @@ -from __future__ import annotations - -from typing import Any, Callable, Literal - -import numpy as np -from PIL.Image import Image - -from gradio import components - - -class TextArea(components.Textbox): - """ - Sets: lines=7 - """ - - is_template = True - - def __init__( - self, - value: str | Callable | None = "", - *, - lines: int = 7, - max_lines: int = 20, - placeholder: str | None = None, - label: str | None = None, - show_label: bool = True, - interactive: bool | None = None, - visible: bool = True, - elem_id: str | None = None, - **kwargs, - ): - super().__init__( - value=value, - lines=lines, - max_lines=max_lines, - placeholder=placeholder, - label=label, - show_label=show_label, - interactive=interactive, - visible=visible, - elem_id=elem_id, - **kwargs, - ) - - -class Webcam(components.Image): - """ - Sets: source="webcam", interactive=True - """ - - is_template = True - - def __init__( - self, - value: str | Image | np.ndarray | None = None, - *, - shape: tuple[int, int] | None = None, - image_mode: Literal["RGB", "L"] = "RGB", - invert_colors: bool = False, - source: Literal["webcam"] = "webcam", - tool: Literal["editor", "select", "sketch", "color-sketch"] | None = None, - type: Literal["numpy", "pil", "filepath"] = "numpy", - label: str | None = None, - show_label: bool = True, - interactive: bool | None = True, - visible: bool = True, - streaming: bool = False, - elem_id: str | None = None, - mirror_webcam: bool = True, - brush_radius: float | None = None, - brush_color: str = "#000000", - **kwargs, - ): - super().__init__( - value=value, - shape=shape, - image_mode=image_mode, - invert_colors=invert_colors, - source=source, - tool=tool, - type=type, - label=label, - show_label=show_label, - interactive=interactive, - visible=visible, - streaming=streaming, - elem_id=elem_id, - mirror_webcam=mirror_webcam, - brush_radius=brush_radius, - brush_color=brush_color, - **kwargs, - ) - - -class Sketchpad(components.Image): - """ - Sets: image_mode="L", source="canvas", shape=(28, 28), invert_colors=True, interactive=True - """ - - is_template = True - - def __init__( - self, - value: str | 
Image | np.ndarray | None = None, - *, - shape: tuple[int, int] = (28, 28), - image_mode: Literal["L"] = "L", - invert_colors: bool = True, - source: Literal["canvas"] = "canvas", - tool: Literal["editor", "select", "sketch", "color-sketch"] | None = None, - type: Literal["numpy", "pil", "filepath"] = "numpy", - label: str | None = None, - show_label: bool = True, - interactive: bool | None = True, - visible: bool = True, - streaming: bool = False, - elem_id: str | None = None, - mirror_webcam: bool = True, - brush_radius: float | None = None, - brush_color: str = "#000000", - **kwargs, - ): - super().__init__( - value=value, - shape=shape, - image_mode=image_mode, - invert_colors=invert_colors, - source=source, - tool=tool, - type=type, - label=label, - show_label=show_label, - interactive=interactive, - visible=visible, - streaming=streaming, - elem_id=elem_id, - mirror_webcam=mirror_webcam, - brush_radius=brush_radius, - brush_color=brush_color, - **kwargs, - ) - - -class Paint(components.Image): - """ - Sets: source="canvas", tool="color-sketch", interactive=True - """ - - is_template = True - - def __init__( - self, - value: str | Image | np.ndarray | None = None, - *, - shape: tuple[int, int] | None = None, - image_mode: Literal["RGB"] = "RGB", - invert_colors: bool = False, - source: Literal["canvas"] = "canvas", - tool: Literal["color-sketch"] = "color-sketch", - type: Literal["numpy", "pil", "filepath"] = "numpy", - label: str | None = None, - show_label: bool = True, - interactive: bool | None = True, - visible: bool = True, - streaming: bool = False, - elem_id: str | None = None, - mirror_webcam: bool = True, - brush_radius: float | None = None, - brush_color: str = "#000000", - **kwargs, - ): - super().__init__( - value=value, - shape=shape, - image_mode=image_mode, - invert_colors=invert_colors, - source=source, - tool=tool, - type=type, - label=label, - show_label=show_label, - interactive=interactive, - visible=visible, - streaming=streaming, - elem_id=elem_id, - mirror_webcam=mirror_webcam, - brush_radius=brush_radius, - brush_color=brush_color, - **kwargs, - ) - - -class ImageMask(components.Image): - """ - Sets: source="upload", tool="sketch", interactive=True - """ - - is_template = True - - def __init__( - self, - value: str | Image | np.ndarray | None = None, - *, - shape: tuple[int, int] | None = None, - image_mode: Literal["RGB", "L"] = "RGB", - invert_colors: bool = False, - source: Literal["upload"] = "upload", - tool: Literal["sketch"] = "sketch", - type: Literal["numpy", "pil", "filepath"] = "numpy", - label: str | None = None, - show_label: bool = True, - interactive: bool | None = True, - visible: bool = True, - streaming: bool = False, - elem_id: str | None = None, - mirror_webcam: bool = True, - brush_radius: float | None = None, - brush_color: str = "#000000", - **kwargs, - ): - super().__init__( - value=value, - shape=shape, - image_mode=image_mode, - invert_colors=invert_colors, - source=source, - tool=tool, - type=type, - label=label, - show_label=show_label, - interactive=interactive, - visible=visible, - streaming=streaming, - elem_id=elem_id, - mirror_webcam=mirror_webcam, - brush_radius=brush_radius, - brush_color=brush_color, - **kwargs, - ) - - -class ImagePaint(components.Image): - """ - Sets: source="upload", tool="color-sketch", interactive=True - """ - - is_template = True - - def __init__( - self, - value: str | Image | np.ndarray | None = None, - *, - shape: tuple[int, int] | None = None, - image_mode: Literal["RGB", "L"] = "RGB", - 
invert_colors: bool = False, - source: Literal["upload"] = "upload", - tool: Literal["color-sketch"] = "color-sketch", - type: Literal["numpy", "pil", "filepath"] = "numpy", - label: str | None = None, - show_label: bool = True, - interactive: bool | None = True, - visible: bool = True, - streaming: bool = False, - elem_id: str | None = None, - mirror_webcam: bool = True, - brush_radius: float | None = None, - brush_color: str = "#000000", - **kwargs, - ): - super().__init__( - value=value, - shape=shape, - image_mode=image_mode, - invert_colors=invert_colors, - source=source, - tool=tool, - type=type, - label=label, - show_label=show_label, - interactive=interactive, - visible=visible, - streaming=streaming, - elem_id=elem_id, - mirror_webcam=mirror_webcam, - brush_radius=brush_radius, - brush_color=brush_color, - **kwargs, - ) - - -class Pil(components.Image): - """ - Sets: type="pil" - """ - - is_template = True - - def __init__( - self, - value: str | Image | np.ndarray | None = None, - *, - shape: tuple[int, int] | None = None, - image_mode: Literal["RGB", "L"] = "RGB", - invert_colors: bool = False, - source: Literal["upload", "webcam", "canvas"] = "upload", - tool: Literal["editor", "select", "sketch", "color-sketch"] | None = None, - type: Literal["pil"] = "pil", - label: str | None = None, - show_label: bool = True, - interactive: bool | None = None, - visible: bool = True, - streaming: bool = False, - elem_id: str | None = None, - mirror_webcam: bool = True, - brush_radius: float | None = None, - brush_color: str = "#000000", - **kwargs, - ): - super().__init__( - value=value, - shape=shape, - image_mode=image_mode, - invert_colors=invert_colors, - source=source, - tool=tool, - type=type, - label=label, - show_label=show_label, - interactive=interactive, - visible=visible, - streaming=streaming, - elem_id=elem_id, - mirror_webcam=mirror_webcam, - brush_radius=brush_radius, - brush_color=brush_color, - **kwargs, - ) - - -class PlayableVideo(components.Video): - """ - Sets: format="mp4" - """ - - is_template = True - - def __init__( - self, - value: str | Callable | None = None, - *, - format: Literal["mp4"] | None = "mp4", - source: Literal["upload", "webcam"] = "upload", - label: str | None = None, - show_label: bool = True, - interactive: bool | None = None, - visible: bool = True, - elem_id: str | None = None, - mirror_webcam: bool = True, - include_audio: bool | None = None, - **kwargs, - ): - super().__init__( - value=value, - format=format, - source=source, - label=label, - show_label=show_label, - interactive=interactive, - visible=visible, - elem_id=elem_id, - mirror_webcam=mirror_webcam, - include_audio=include_audio, - **kwargs, - ) - - -class Microphone(components.Audio): - """ - Sets: source="microphone" - """ - - is_template = True - - def __init__( - self, - value: str | tuple[int, np.ndarray] | Callable | None = None, - *, - source: Literal["microphone"] = "microphone", - type: Literal["numpy", "filepath"] = "numpy", - label: str | None = None, - show_label: bool = True, - interactive: bool | None = None, - visible: bool = True, - streaming: bool = False, - elem_id: str | None = None, - **kwargs, - ): - super().__init__( - value=value, - source=source, - type=type, - label=label, - show_label=show_label, - interactive=interactive, - visible=visible, - streaming=streaming, - elem_id=elem_id, - **kwargs, - ) - - -class Files(components.File): - """ - Sets: file_count="multiple" - """ - - is_template = True - - def __init__( - self, - value: str | list[str] | Callable 
| None = None, - *, - file_count: Literal["multiple"] = "multiple", - type: Literal["file", "binary"] = "file", - label: str | None = None, - show_label: bool = True, - interactive: bool | None = None, - visible: bool = True, - elem_id: str | None = None, - **kwargs, - ): - super().__init__( - value=value, - file_count=file_count, - type=type, - label=label, - show_label=show_label, - interactive=interactive, - visible=visible, - elem_id=elem_id, - **kwargs, - ) - - -class Numpy(components.Dataframe): - """ - Sets: type="numpy" - """ - - is_template = True - - def __init__( - self, - value: list[list[Any]] | Callable | None = None, - *, - headers: list[str] | None = None, - row_count: int | tuple[int, str] = (1, "dynamic"), - col_count: int | tuple[int, str] | None = None, - datatype: str | list[str] = "str", - type: Literal["numpy"] = "numpy", - max_rows: int | None = 20, - max_cols: int | None = None, - overflow_row_behaviour: Literal["paginate", "show_ends"] = "paginate", - label: str | None = None, - show_label: bool = True, - interactive: bool | None = None, - visible: bool = True, - elem_id: str | None = None, - wrap: bool = False, - **kwargs, - ): - super().__init__( - value=value, - headers=headers, - row_count=row_count, - col_count=col_count, - datatype=datatype, - type=type, - max_rows=max_rows, - max_cols=max_cols, - overflow_row_behaviour=overflow_row_behaviour, - label=label, - show_label=show_label, - interactive=interactive, - visible=visible, - elem_id=elem_id, - wrap=wrap, - **kwargs, - ) - - -class Matrix(components.Dataframe): - """ - Sets: type="array" - """ - - is_template = True - - def __init__( - self, - value: list[list[Any]] | Callable | None = None, - *, - headers: list[str] | None = None, - row_count: int | tuple[int, str] = (1, "dynamic"), - col_count: int | tuple[int, str] | None = None, - datatype: str | list[str] = "str", - type: Literal["array"] = "array", - max_rows: int | None = 20, - max_cols: int | None = None, - overflow_row_behaviour: Literal["paginate", "show_ends"] = "paginate", - label: str | None = None, - show_label: bool = True, - interactive: bool | None = None, - visible: bool = True, - elem_id: str | None = None, - wrap: bool = False, - **kwargs, - ): - super().__init__( - value=value, - headers=headers, - row_count=row_count, - col_count=col_count, - datatype=datatype, - type=type, - max_rows=max_rows, - max_cols=max_cols, - overflow_row_behaviour=overflow_row_behaviour, - label=label, - show_label=show_label, - interactive=interactive, - visible=visible, - elem_id=elem_id, - wrap=wrap, - **kwargs, - ) - - -class List(components.Dataframe): - """ - Sets: type="array", col_count=1 - """ - - is_template = True - - def __init__( - self, - value: list[list[Any]] | Callable | None = None, - *, - headers: list[str] | None = None, - row_count: int | tuple[int, str] = (1, "dynamic"), - col_count: Literal[1] = 1, - datatype: str | list[str] = "str", - type: Literal["array"] = "array", - max_rows: int | None = 20, - max_cols: int | None = None, - overflow_row_behaviour: Literal["paginate", "show_ends"] = "paginate", - label: str | None = None, - show_label: bool = True, - interactive: bool | None = None, - visible: bool = True, - elem_id: str | None = None, - wrap: bool = False, - **kwargs, - ): - super().__init__( - value=value, - headers=headers, - row_count=row_count, - col_count=col_count, - datatype=datatype, - type=type, - max_rows=max_rows, - max_cols=max_cols, - overflow_row_behaviour=overflow_row_behaviour, - label=label, - 
show_label=show_label, - interactive=interactive, - visible=visible, - elem_id=elem_id, - wrap=wrap, - **kwargs, - ) - - -Mic = Microphone diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/templates/cdn/assets/BlockTitle-af232cbc.js b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/templates/cdn/assets/BlockTitle-af232cbc.js deleted file mode 100644 index 7c79f9d8b61a3caaf65698c64e9b6d15775167e6..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/templates/cdn/assets/BlockTitle-af232cbc.js +++ /dev/null @@ -1,2 +0,0 @@ -import{S as h,e as g,s as k,a9 as w,m as $,o as B,P as I,g as d,Y as _,h as c,ab as S,ac as j,ad as q,w as r,r as v,u as m,v as C,k as p,F,G,H,t as N,x as P}from"./index-9e76ffee.js";import{I as T}from"./Info-77722665.js";import"./Button-30a08c0b.js";function b(a){let e,l;return e=new T({props:{$$slots:{default:[Y]},$$scope:{ctx:a}}}),{c(){F(e.$$.fragment)},m(n,o){G(e,n,o),l=!0},p(n,o){const u={};o&10&&(u.$$scope={dirty:o,ctx:n}),e.$set(u)},i(n){l||(r(e.$$.fragment,n),l=!0)},o(n){m(e.$$.fragment,n),l=!1},d(n){H(e,n)}}}function Y(a){let e;return{c(){e=N(a[1])},m(l,n){c(l,e,n)},p(l,n){n&2&&P(e,l[1])},d(l){l&&p(e)}}}function z(a){let e,l,n,o;const u=a[2].default,f=w(u,a,a[3],null);let s=a[1]&&b(a);return{c(){e=$("span"),f&&f.c(),l=B(),s&&s.c(),n=I(),d(e,"data-testid","block-info"),d(e,"class","svelte-1gfkn6j"),_(e,"sr-only",!a[0]),_(e,"hide",!a[0]),_(e,"has-info",a[1]!=null)},m(t,i){c(t,e,i),f&&f.m(e,null),c(t,l,i),s&&s.m(t,i),c(t,n,i),o=!0},p(t,[i]){f&&f.p&&(!o||i&8)&&S(f,u,t,t[3],o?q(u,t[3],i,null):j(t[3]),null),(!o||i&1)&&_(e,"sr-only",!t[0]),(!o||i&1)&&_(e,"hide",!t[0]),(!o||i&2)&&_(e,"has-info",t[1]!=null),t[1]?s?(s.p(t,i),i&2&&r(s,1)):(s=b(t),s.c(),r(s,1),s.m(n.parentNode,n)):s&&(v(),m(s,1,1,()=>{s=null}),C())},i(t){o||(r(f,t),r(s),o=!0)},o(t){m(f,t),m(s),o=!1},d(t){t&&(p(e),p(l),p(n)),f&&f.d(t),s&&s.d(t)}}}function A(a,e,l){let{$$slots:n={},$$scope:o}=e,{show_label:u=!0}=e,{info:f=void 0}=e;return a.$$set=s=>{"show_label"in s&&l(0,u=s.show_label),"info"in s&&l(1,f=s.info),"$$scope"in s&&l(3,o=s.$$scope)},[u,f,n,o]}class K extends h{constructor(e){super(),g(this,e,A,z,k,{show_label:0,info:1})}}export{K as B}; -//# sourceMappingURL=BlockTitle-af232cbc.js.map diff --git a/spaces/dcq/freegpt-webui/g4f/Provider/Providers/Liaobots.py b/spaces/dcq/freegpt-webui/g4f/Provider/Providers/Liaobots.py deleted file mode 100644 index 985bf53ddfd3877db3c60aedee86db11ec0e7243..0000000000000000000000000000000000000000 --- a/spaces/dcq/freegpt-webui/g4f/Provider/Providers/Liaobots.py +++ /dev/null @@ -1,47 +0,0 @@ -import os, uuid, requests -from ...typing import sha256, Dict, get_type_hints - -url = 'https://liaobots.com' -model = ['gpt-4-0613'] -supports_stream = True -needs_auth = True - -models = { - 'gpt-4-0613': { - "id":"gpt-4-0613", - "name":"GPT-4", - "maxLength":24000, - "tokenLimit":8000 - } -} - -def _create_completion(model: str, messages: list, stream: bool, **kwargs): - - print(kwargs) - - headers = { - 'authority': 'liaobots.com', - 'content-type': 'application/json', - 'origin': 'https://liaobots.com', - 'referer': 'https://liaobots.com/', - 'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36', - 'x-auth-code': 'P6cPPK6Z8JDG3' - } - - json_data = { - 'conversationId': str(uuid.uuid4()), - 'model': models[model], - 'authcode':"jrzVZMJiwN0NU", 
- 'messages': messages, - 'key': '', - 'prompt': "You are ChatGPT, a large language model trained by OpenAI. Follow the user's instructions carefully. Respond using markdown.", - } - - response = requests.post('https://liaobots.com/api/chat', - headers=headers, json=json_data, stream=True) - - for token in response.iter_content(chunk_size=2046): - yield (token.decode('cp1251')) - -params = f'g4f.Providers.{os.path.basename(__file__)[:-3]} supports: ' + \ - '(%s)' % ', '.join([f"{name}: {get_type_hints(_create_completion)[name].__name__}" for name in _create_completion.__code__.co_varnames[:_create_completion.__code__.co_argcount]]) \ No newline at end of file diff --git a/spaces/declare-lab/tango/diffusers/examples/research_projects/onnxruntime/unconditional_image_generation/train_unconditional.py b/spaces/declare-lab/tango/diffusers/examples/research_projects/onnxruntime/unconditional_image_generation/train_unconditional.py deleted file mode 100644 index 1b38036d82c03b9d8be3e0cd35d91be14558b1b5..0000000000000000000000000000000000000000 --- a/spaces/declare-lab/tango/diffusers/examples/research_projects/onnxruntime/unconditional_image_generation/train_unconditional.py +++ /dev/null @@ -1,606 +0,0 @@ -import argparse -import inspect -import logging -import math -import os -from pathlib import Path -from typing import Optional - -import datasets -import torch -import torch.nn.functional as F -from accelerate import Accelerator -from accelerate.logging import get_logger -from accelerate.utils import ProjectConfiguration -from datasets import load_dataset -from huggingface_hub import HfFolder, Repository, create_repo, whoami -from onnxruntime.training.ortmodule import ORTModule -from torchvision import transforms -from tqdm.auto import tqdm - -import diffusers -from diffusers import DDPMPipeline, DDPMScheduler, UNet2DModel -from diffusers.optimization import get_scheduler -from diffusers.training_utils import EMAModel -from diffusers.utils import check_min_version, is_tensorboard_available, is_wandb_available - - -# Will error if the minimal version of diffusers is not installed. Remove at your own risks. -check_min_version("0.13.0.dev0") - -logger = get_logger(__name__, log_level="INFO") - - -def _extract_into_tensor(arr, timesteps, broadcast_shape): - """ - Extract values from a 1-D numpy array for a batch of indices. - :param arr: the 1-D numpy array. - :param timesteps: a tensor of indices into the array to extract. - :param broadcast_shape: a larger shape of K dimensions with the batch - dimension equal to the length of timesteps. - :return: a tensor of shape [batch_size, 1, ...] where the shape has K dims. - """ - if not isinstance(arr, torch.Tensor): - arr = torch.from_numpy(arr) - res = arr[timesteps].float().to(timesteps.device) - while len(res.shape) < len(broadcast_shape): - res = res[..., None] - return res.expand(broadcast_shape) - - -def parse_args(): - parser = argparse.ArgumentParser(description="Simple example of a training script.") - parser.add_argument( - "--dataset_name", - type=str, - default=None, - help=( - "The name of the Dataset (from the HuggingFace hub) to train on (could be your own, possibly private," - " dataset). It can also be a path pointing to a local copy of a dataset in your filesystem," - " or to a folder containing files that HF Datasets can understand." 
- ), - ) - parser.add_argument( - "--dataset_config_name", - type=str, - default=None, - help="The config of the Dataset, leave as None if there's only one config.", - ) - parser.add_argument( - "--train_data_dir", - type=str, - default=None, - help=( - "A folder containing the training data. Folder contents must follow the structure described in" - " https://huggingface.co/docs/datasets/image_dataset#imagefolder. In particular, a `metadata.jsonl` file" - " must exist to provide the captions for the images. Ignored if `dataset_name` is specified." - ), - ) - parser.add_argument( - "--output_dir", - type=str, - default="ddpm-model-64", - help="The output directory where the model predictions and checkpoints will be written.", - ) - parser.add_argument("--overwrite_output_dir", action="store_true") - parser.add_argument( - "--cache_dir", - type=str, - default=None, - help="The directory where the downloaded models and datasets will be stored.", - ) - parser.add_argument( - "--resolution", - type=int, - default=64, - help=( - "The resolution for input images, all the images in the train/validation dataset will be resized to this" - " resolution" - ), - ) - parser.add_argument( - "--center_crop", - default=False, - action="store_true", - help=( - "Whether to center crop the input images to the resolution. If not set, the images will be randomly" - " cropped. The images will be resized to the resolution first before cropping." - ), - ) - parser.add_argument( - "--random_flip", - default=False, - action="store_true", - help="whether to randomly flip images horizontally", - ) - parser.add_argument( - "--train_batch_size", type=int, default=16, help="Batch size (per device) for the training dataloader." - ) - parser.add_argument( - "--eval_batch_size", type=int, default=16, help="The number of images to generate for evaluation." - ) - parser.add_argument( - "--dataloader_num_workers", - type=int, - default=0, - help=( - "The number of subprocesses to use for data loading. 0 means that the data will be loaded in the main" - " process." - ), - ) - parser.add_argument("--num_epochs", type=int, default=100) - parser.add_argument("--save_images_epochs", type=int, default=10, help="How often to save images during training.") - parser.add_argument( - "--save_model_epochs", type=int, default=10, help="How often to save the model during training." - ) - parser.add_argument( - "--gradient_accumulation_steps", - type=int, - default=1, - help="Number of updates steps to accumulate before performing a backward/update pass.", - ) - parser.add_argument( - "--learning_rate", - type=float, - default=1e-4, - help="Initial learning rate (after the potential warmup period) to use.", - ) - parser.add_argument( - "--lr_scheduler", - type=str, - default="cosine", - help=( - 'The scheduler type to use. Choose between ["linear", "cosine", "cosine_with_restarts", "polynomial",' - ' "constant", "constant_with_warmup"]' - ), - ) - parser.add_argument( - "--lr_warmup_steps", type=int, default=500, help="Number of steps for the warmup in the lr scheduler." - ) - parser.add_argument("--adam_beta1", type=float, default=0.95, help="The beta1 parameter for the Adam optimizer.") - parser.add_argument("--adam_beta2", type=float, default=0.999, help="The beta2 parameter for the Adam optimizer.") - parser.add_argument( - "--adam_weight_decay", type=float, default=1e-6, help="Weight decay magnitude for the Adam optimizer." 
- ) - parser.add_argument("--adam_epsilon", type=float, default=1e-08, help="Epsilon value for the Adam optimizer.") - parser.add_argument( - "--use_ema", - action="store_true", - help="Whether to use Exponential Moving Average for the final model weights.", - ) - parser.add_argument("--ema_inv_gamma", type=float, default=1.0, help="The inverse gamma value for the EMA decay.") - parser.add_argument("--ema_power", type=float, default=3 / 4, help="The power value for the EMA decay.") - parser.add_argument("--ema_max_decay", type=float, default=0.9999, help="The maximum decay magnitude for EMA.") - parser.add_argument("--push_to_hub", action="store_true", help="Whether or not to push the model to the Hub.") - parser.add_argument("--hub_token", type=str, default=None, help="The token to use to push to the Model Hub.") - parser.add_argument( - "--hub_model_id", - type=str, - default=None, - help="The name of the repository to keep in sync with the local `output_dir`.", - ) - parser.add_argument( - "--hub_private_repo", action="store_true", help="Whether or not to create a private repository." - ) - parser.add_argument( - "--logger", - type=str, - default="tensorboard", - choices=["tensorboard", "wandb"], - help=( - "Whether to use [tensorboard](https://www.tensorflow.org/tensorboard) or [wandb](https://www.wandb.ai)" - " for experiment tracking and logging of model metrics and model checkpoints" - ), - ) - parser.add_argument( - "--logging_dir", - type=str, - default="logs", - help=( - "[TensorBoard](https://www.tensorflow.org/tensorboard) log directory. Will default to" - " *output_dir/runs/**CURRENT_DATETIME_HOSTNAME***." - ), - ) - parser.add_argument("--local_rank", type=int, default=-1, help="For distributed training: local_rank") - parser.add_argument( - "--mixed_precision", - type=str, - default="no", - choices=["no", "fp16", "bf16"], - help=( - "Whether to use mixed precision. Choose" - "between fp16 and bf16 (bfloat16). Bf16 requires PyTorch >= 1.10." - "and an Nvidia Ampere GPU." - ), - ) - parser.add_argument( - "--prediction_type", - type=str, - default="epsilon", - choices=["epsilon", "sample"], - help="Whether the model should predict the 'epsilon'/noise error or directly the reconstructed image 'x0'.", - ) - parser.add_argument("--ddpm_num_steps", type=int, default=1000) - parser.add_argument("--ddpm_num_inference_steps", type=int, default=1000) - parser.add_argument("--ddpm_beta_schedule", type=str, default="linear") - parser.add_argument( - "--checkpointing_steps", - type=int, - default=500, - help=( - "Save a checkpoint of the training state every X updates. These checkpoints are only suitable for resuming" - " training using `--resume_from_checkpoint`." - ), - ) - parser.add_argument( - "--checkpoints_total_limit", - type=int, - default=None, - help=( - "Max number of checkpoints to store. Passed as `total_limit` to the `Accelerator` `ProjectConfiguration`." - " See Accelerator::save_state https://huggingface.co/docs/accelerate/package_reference/accelerator#accelerate.Accelerator.save_state" - " for more docs" - ), - ) - parser.add_argument( - "--resume_from_checkpoint", - type=str, - default=None, - help=( - "Whether training should be resumed from a previous checkpoint. Use a path saved by" - ' `--checkpointing_steps`, or `"latest"` to automatically select the last available checkpoint.' 
- ), - ) - - args = parser.parse_args() - env_local_rank = int(os.environ.get("LOCAL_RANK", -1)) - if env_local_rank != -1 and env_local_rank != args.local_rank: - args.local_rank = env_local_rank - - if args.dataset_name is None and args.train_data_dir is None: - raise ValueError("You must specify either a dataset name from the hub or a train data directory.") - - return args - - -def get_full_repo_name(model_id: str, organization: Optional[str] = None, token: Optional[str] = None): - if token is None: - token = HfFolder.get_token() - if organization is None: - username = whoami(token)["name"] - return f"{username}/{model_id}" - else: - return f"{organization}/{model_id}" - - -def main(args): - logging_dir = os.path.join(args.output_dir, args.logging_dir) - - accelerator_project_config = ProjectConfiguration(total_limit=args.checkpoints_total_limit) - - accelerator = Accelerator( - gradient_accumulation_steps=args.gradient_accumulation_steps, - mixed_precision=args.mixed_precision, - log_with=args.logger, - logging_dir=logging_dir, - project_config=accelerator_project_config, - ) - - if args.logger == "tensorboard": - if not is_tensorboard_available(): - raise ImportError("Make sure to install tensorboard if you want to use it for logging during training.") - - elif args.logger == "wandb": - if not is_wandb_available(): - raise ImportError("Make sure to install wandb if you want to use it for logging during training.") - import wandb - - # Make one log on every process with the configuration for debugging. - logging.basicConfig( - format="%(asctime)s - %(levelname)s - %(name)s - %(message)s", - datefmt="%m/%d/%Y %H:%M:%S", - level=logging.INFO, - ) - logger.info(accelerator.state, main_process_only=False) - if accelerator.is_local_main_process: - datasets.utils.logging.set_verbosity_warning() - diffusers.utils.logging.set_verbosity_info() - else: - datasets.utils.logging.set_verbosity_error() - diffusers.utils.logging.set_verbosity_error() - - # Handle the repository creation - if accelerator.is_main_process: - if args.push_to_hub: - if args.hub_model_id is None: - repo_name = get_full_repo_name(Path(args.output_dir).name, token=args.hub_token) - else: - repo_name = args.hub_model_id - create_repo(repo_name, exist_ok=True, token=args.hub_token) - repo = Repository(args.output_dir, clone_from=repo_name, token=args.hub_token) - - with open(os.path.join(args.output_dir, ".gitignore"), "w+") as gitignore: - if "step_*" not in gitignore: - gitignore.write("step_*\n") - if "epoch_*" not in gitignore: - gitignore.write("epoch_*\n") - elif args.output_dir is not None: - os.makedirs(args.output_dir, exist_ok=True) - - # Initialize the model - model = UNet2DModel( - sample_size=args.resolution, - in_channels=3, - out_channels=3, - layers_per_block=2, - block_out_channels=(128, 128, 256, 256, 512, 512), - down_block_types=( - "DownBlock2D", - "DownBlock2D", - "DownBlock2D", - "DownBlock2D", - "AttnDownBlock2D", - "DownBlock2D", - ), - up_block_types=( - "UpBlock2D", - "AttnUpBlock2D", - "UpBlock2D", - "UpBlock2D", - "UpBlock2D", - "UpBlock2D", - ), - ) - - # Create EMA for the model. 
- if args.use_ema: - ema_model = EMAModel( - model.parameters(), - decay=args.ema_max_decay, - use_ema_warmup=True, - inv_gamma=args.ema_inv_gamma, - power=args.ema_power, - ) - - # Initialize the scheduler - accepts_prediction_type = "prediction_type" in set(inspect.signature(DDPMScheduler.__init__).parameters.keys()) - if accepts_prediction_type: - noise_scheduler = DDPMScheduler( - num_train_timesteps=args.ddpm_num_steps, - beta_schedule=args.ddpm_beta_schedule, - prediction_type=args.prediction_type, - ) - else: - noise_scheduler = DDPMScheduler(num_train_timesteps=args.ddpm_num_steps, beta_schedule=args.ddpm_beta_schedule) - - # Initialize the optimizer - optimizer = torch.optim.AdamW( - model.parameters(), - lr=args.learning_rate, - betas=(args.adam_beta1, args.adam_beta2), - weight_decay=args.adam_weight_decay, - eps=args.adam_epsilon, - ) - - # Get the datasets: you can either provide your own training and evaluation files (see below) - # or specify a Dataset from the hub (the dataset will be downloaded automatically from the datasets Hub). - - # In distributed training, the load_dataset function guarantees that only one local process can concurrently - # download the dataset. - if args.dataset_name is not None: - dataset = load_dataset( - args.dataset_name, - args.dataset_config_name, - cache_dir=args.cache_dir, - split="train", - ) - else: - dataset = load_dataset("imagefolder", data_dir=args.train_data_dir, cache_dir=args.cache_dir, split="train") - # See more about loading custom images at - # https://huggingface.co/docs/datasets/v2.4.0/en/image_load#imagefolder - - # Preprocessing the datasets and DataLoaders creation. - augmentations = transforms.Compose( - [ - transforms.Resize(args.resolution, interpolation=transforms.InterpolationMode.BILINEAR), - transforms.CenterCrop(args.resolution) if args.center_crop else transforms.RandomCrop(args.resolution), - transforms.RandomHorizontalFlip() if args.random_flip else transforms.Lambda(lambda x: x), - transforms.ToTensor(), - transforms.Normalize([0.5], [0.5]), - ] - ) - - def transform_images(examples): - images = [augmentations(image.convert("RGB")) for image in examples["image"]] - return {"input": images} - - logger.info(f"Dataset size: {len(dataset)}") - - dataset.set_transform(transform_images) - train_dataloader = torch.utils.data.DataLoader( - dataset, batch_size=args.train_batch_size, shuffle=True, num_workers=args.dataloader_num_workers - ) - - # Initialize the learning rate scheduler - lr_scheduler = get_scheduler( - args.lr_scheduler, - optimizer=optimizer, - num_warmup_steps=args.lr_warmup_steps * args.gradient_accumulation_steps, - num_training_steps=(len(train_dataloader) * args.num_epochs), - ) - - # Prepare everything with our `accelerator`. - model, optimizer, train_dataloader, lr_scheduler = accelerator.prepare( - model, optimizer, train_dataloader, lr_scheduler - ) - - model = ORTModule(model) - - if args.use_ema: - accelerator.register_for_checkpointing(ema_model) - ema_model.to(accelerator.device) - - # We need to initialize the trackers we use, and also store our configuration. - # The trackers initializes automatically on the main process. 
- if accelerator.is_main_process: - run = os.path.split(__file__)[-1].split(".")[0] - accelerator.init_trackers(run) - - total_batch_size = args.train_batch_size * accelerator.num_processes * args.gradient_accumulation_steps - num_update_steps_per_epoch = math.ceil(len(train_dataloader) / args.gradient_accumulation_steps) - max_train_steps = args.num_epochs * num_update_steps_per_epoch - - logger.info("***** Running training *****") - logger.info(f" Num examples = {len(dataset)}") - logger.info(f" Num Epochs = {args.num_epochs}") - logger.info(f" Instantaneous batch size per device = {args.train_batch_size}") - logger.info(f" Total train batch size (w. parallel, distributed & accumulation) = {total_batch_size}") - logger.info(f" Gradient Accumulation steps = {args.gradient_accumulation_steps}") - logger.info(f" Total optimization steps = {max_train_steps}") - - global_step = 0 - first_epoch = 0 - - # Potentially load in the weights and states from a previous save - if args.resume_from_checkpoint: - if args.resume_from_checkpoint != "latest": - path = os.path.basename(args.resume_from_checkpoint) - else: - # Get the most recent checkpoint - dirs = os.listdir(args.output_dir) - dirs = [d for d in dirs if d.startswith("checkpoint")] - dirs = sorted(dirs, key=lambda x: int(x.split("-")[1])) - path = dirs[-1] if len(dirs) > 0 else None - - if path is None: - accelerator.print( - f"Checkpoint '{args.resume_from_checkpoint}' does not exist. Starting a new training run." - ) - args.resume_from_checkpoint = None - else: - accelerator.print(f"Resuming from checkpoint {path}") - accelerator.load_state(os.path.join(args.output_dir, path)) - global_step = int(path.split("-")[1]) - - resume_global_step = global_step * args.gradient_accumulation_steps - first_epoch = global_step // num_update_steps_per_epoch - resume_step = resume_global_step % (num_update_steps_per_epoch * args.gradient_accumulation_steps) - - # Train! - for epoch in range(first_epoch, args.num_epochs): - model.train() - progress_bar = tqdm(total=num_update_steps_per_epoch, disable=not accelerator.is_local_main_process) - progress_bar.set_description(f"Epoch {epoch}") - for step, batch in enumerate(train_dataloader): - # Skip steps until we reach the resumed step - if args.resume_from_checkpoint and epoch == first_epoch and step < resume_step: - if step % args.gradient_accumulation_steps == 0: - progress_bar.update(1) - continue - - clean_images = batch["input"] - # Sample noise that we'll add to the images - noise = torch.randn(clean_images.shape).to(clean_images.device) - bsz = clean_images.shape[0] - # Sample a random timestep for each image - timesteps = torch.randint( - 0, noise_scheduler.config.num_train_timesteps, (bsz,), device=clean_images.device - ).long() - - # Add noise to the clean images according to the noise magnitude at each timestep - # (this is the forward diffusion process) - noisy_images = noise_scheduler.add_noise(clean_images, noise, timesteps) - - with accelerator.accumulate(model): - # Predict the noise residual - model_output = model(noisy_images, timesteps, return_dict=False)[0] - - if args.prediction_type == "epsilon": - loss = F.mse_loss(model_output, noise) # this could have different weights! 
- elif args.prediction_type == "sample": - alpha_t = _extract_into_tensor( - noise_scheduler.alphas_cumprod, timesteps, (clean_images.shape[0], 1, 1, 1) - ) - snr_weights = alpha_t / (1 - alpha_t) - loss = snr_weights * F.mse_loss( - model_output, clean_images, reduction="none" - ) # use SNR weighting from distillation paper - loss = loss.mean() - else: - raise ValueError(f"Unsupported prediction type: {args.prediction_type}") - - accelerator.backward(loss) - - if accelerator.sync_gradients: - accelerator.clip_grad_norm_(model.parameters(), 1.0) - optimizer.step() - lr_scheduler.step() - optimizer.zero_grad() - - # Checks if the accelerator has performed an optimization step behind the scenes - if accelerator.sync_gradients: - if args.use_ema: - ema_model.step(model.parameters()) - progress_bar.update(1) - global_step += 1 - - if global_step % args.checkpointing_steps == 0: - if accelerator.is_main_process: - save_path = os.path.join(args.output_dir, f"checkpoint-{global_step}") - accelerator.save_state(save_path) - logger.info(f"Saved state to {save_path}") - - logs = {"loss": loss.detach().item(), "lr": lr_scheduler.get_last_lr()[0], "step": global_step} - if args.use_ema: - logs["ema_decay"] = ema_model.decay - progress_bar.set_postfix(**logs) - accelerator.log(logs, step=global_step) - progress_bar.close() - - accelerator.wait_for_everyone() - - # Generate sample images for visual inspection - if accelerator.is_main_process: - if epoch % args.save_images_epochs == 0 or epoch == args.num_epochs - 1: - unet = accelerator.unwrap_model(model) - if args.use_ema: - ema_model.copy_to(unet.parameters()) - pipeline = DDPMPipeline( - unet=unet, - scheduler=noise_scheduler, - ) - - generator = torch.Generator(device=pipeline.device).manual_seed(0) - # run pipeline in inference (sample random noise and denoise) - images = pipeline( - generator=generator, - batch_size=args.eval_batch_size, - output_type="numpy", - num_inference_steps=args.ddpm_num_inference_steps, - ).images - - # denormalize the images and save to tensorboard - images_processed = (images * 255).round().astype("uint8") - - if args.logger == "tensorboard": - accelerator.get_tracker("tensorboard").add_images( - "test_samples", images_processed.transpose(0, 3, 1, 2), epoch - ) - elif args.logger == "wandb": - accelerator.get_tracker("wandb").log( - {"test_samples": [wandb.Image(img) for img in images_processed], "epoch": epoch}, - step=global_step, - ) - - if epoch % args.save_model_epochs == 0 or epoch == args.num_epochs - 1: - # save the model - pipeline.save_pretrained(args.output_dir) - if args.push_to_hub: - repo.push_to_hub(commit_message=f"Epoch {epoch}", blocking=False) - - accelerator.end_training() - - -if __name__ == "__main__": - args = parse_args() - main(args) diff --git a/spaces/declare-lab/tango/diffusers/src/diffusers/utils/dummy_flax_and_transformers_objects.py b/spaces/declare-lab/tango/diffusers/src/diffusers/utils/dummy_flax_and_transformers_objects.py deleted file mode 100644 index 162bac1c4331149c4b5abde1eadd8013ab0cda99..0000000000000000000000000000000000000000 --- a/spaces/declare-lab/tango/diffusers/src/diffusers/utils/dummy_flax_and_transformers_objects.py +++ /dev/null @@ -1,62 +0,0 @@ -# This file is autogenerated by the command `make fix-copies`, do not edit. 
-from ..utils import DummyObject, requires_backends - - -class FlaxStableDiffusionControlNetPipeline(metaclass=DummyObject): - _backends = ["flax", "transformers"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["flax", "transformers"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["flax", "transformers"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["flax", "transformers"]) - - -class FlaxStableDiffusionImg2ImgPipeline(metaclass=DummyObject): - _backends = ["flax", "transformers"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["flax", "transformers"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["flax", "transformers"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["flax", "transformers"]) - - -class FlaxStableDiffusionInpaintPipeline(metaclass=DummyObject): - _backends = ["flax", "transformers"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["flax", "transformers"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["flax", "transformers"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["flax", "transformers"]) - - -class FlaxStableDiffusionPipeline(metaclass=DummyObject): - _backends = ["flax", "transformers"] - - def __init__(self, *args, **kwargs): - requires_backends(self, ["flax", "transformers"]) - - @classmethod - def from_config(cls, *args, **kwargs): - requires_backends(cls, ["flax", "transformers"]) - - @classmethod - def from_pretrained(cls, *args, **kwargs): - requires_backends(cls, ["flax", "transformers"]) diff --git a/spaces/deepwisdom/MetaGPT/tests/metagpt/tools/test_ut_generator.py b/spaces/deepwisdom/MetaGPT/tests/metagpt/tools/test_ut_generator.py deleted file mode 100644 index 6f29999d4ccfa27405962e7cb9017d4f4fd8831c..0000000000000000000000000000000000000000 --- a/spaces/deepwisdom/MetaGPT/tests/metagpt/tools/test_ut_generator.py +++ /dev/null @@ -1,23 +0,0 @@ -#!/usr/bin/env python -# -*- coding: utf-8 -*- -""" -@Time : 2023/4/30 21:44 -@Author : alexanderwu -@File : test_ut_generator.py -""" - -from metagpt.const import API_QUESTIONS_PATH, SWAGGER_PATH, UT_PY_PATH -from metagpt.tools.ut_writer import YFT_PROMPT_PREFIX, UTGenerator - - -class TestUTWriter: - def test_api_to_ut_sample(self): - swagger_file = SWAGGER_PATH / "yft_swaggerApi.json" - tags = ["测试"] # "智能合同导入", "律师审查", "ai合同审查", "草拟合同&律师在线审查", "合同审批", "履约管理", "签约公司"] - # 这里在文件中手动加入了两个测试标签的API - - utg = UTGenerator(swagger_file=swagger_file, ut_py_path=UT_PY_PATH, questions_path=API_QUESTIONS_PATH, - template_prefix=YFT_PROMPT_PREFIX) - ret = utg.generate_ut(include_tags=tags) - # 后续加入对文件生成内容与数量的检验 - assert ret diff --git a/spaces/deprem-ml/intent-leaderboard-v13/README.md b/spaces/deprem-ml/intent-leaderboard-v13/README.md deleted file mode 100644 index b3bf3c9e62761e9b1639161477f4942984fc8aba..0000000000000000000000000000000000000000 --- a/spaces/deprem-ml/intent-leaderboard-v13/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Intent Leaderboard V13 -emoji: 🦸‍♂️ -colorFrom: blue -colorTo: gray -sdk: streamlit -sdk_version: 1.17.0 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/diacanFperku/AutoGPT/Autodesk AutoCAD Architecture 2018.1.1 Keygen HOT Latest.md 
b/spaces/diacanFperku/AutoGPT/Autodesk AutoCAD Architecture 2018.1.1 Keygen HOT Latest.md deleted file mode 100644 index 4e819b9326bc269ac4f9bf0b74c475e1a440b993..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Autodesk AutoCAD Architecture 2018.1.1 Keygen HOT Latest.md +++ /dev/null @@ -1,10 +0,0 @@ -

    Autodesk AutoCAD Architecture 2018.1.1 Keygen Latest


    DOWNLOADhttps://gohhs.com/2uFTpm



    - -April 5, 2015 – X-FORCE 2014 is the key element that allows you to activate cualquier ... Autocad Architecture Concept Drawings, Architecture People, Architecture ... Autocad Architecture Concept Drawings, Architecture People, Architecture ... -Billboard: Billboard Music Awards 2015 - Part 1 - Duration: 16 ... -AutoCAD Architecture 2015 AutoCAD Architecture Concept Drawings, Architecture People, Architecture Drawing, ... -AutoCAD Architecture Concept Drawings, Architecture People, Architecture Drawing, ... -Concept Drawings, Architecture Drawing, Architecture People ... 8a78ff9644
    -
    -
    -

    diff --git a/spaces/diacanFperku/AutoGPT/Carte Cscs In Limba Romana.md b/spaces/diacanFperku/AutoGPT/Carte Cscs In Limba Romana.md deleted file mode 100644 index c023f1f2d4fba78c0eaf9a6216c3b97cc9c57ccd..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Carte Cscs In Limba Romana.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Carte Cscs In Limba Romana


    Download Zip >> https://gohhs.com/2uFVzx



    -
    - 3cee63e6c2
    -
    -
    -

    diff --git a/spaces/diacanFperku/AutoGPT/Diablo Ii Cinematics Disc Iso.md b/spaces/diacanFperku/AutoGPT/Diablo Ii Cinematics Disc Iso.md deleted file mode 100644 index b815c8fc647edf61326bb7dcb174d11849ec795c..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Diablo Ii Cinematics Disc Iso.md +++ /dev/null @@ -1,10 +0,0 @@ -

    Diablo Ii Cinematics Disc Iso


    DOWNLOAD ✵✵✵ https://gohhs.com/2uFUbc



    -
    -Diablo Ii Cinematics Disc Iso DOWNLOAD: Diablo 2 cinematic disc iso download, diablo 2 cinematic disc, diablo 2 cinematic disc.## #Diablo 2 cinematic disc iso DOWNLOAD: Diablo 2 cinematic disc download iso, diablo 2 cinematic disc, diablo 2 cinematic disc iso. DOWNLOAD: Diablo 2 cinematic disc download iso, diablo 2 cinematic disc, diablo 2 cinematic disc iso. -Diablo II: Lord of Destruction | Diablo II: Lord of Destruction | download. -The game is a continuation of Diablo II, in which you have to fight again with the spawn of evil, once again enslave the world, which fell an innocent victim of dark forces, and once again return peace and comfort to the world. -A new hero will help you with this - a mighty one. -Download Diablo II: Lord of Destruction RUS (LOD) PC. 8a78ff9644
    -
    -
    -

    diff --git a/spaces/diacanFperku/AutoGPT/La Salud Perfecta Deepak Chopra Pdf Download.md b/spaces/diacanFperku/AutoGPT/La Salud Perfecta Deepak Chopra Pdf Download.md deleted file mode 100644 index 6099ce014af87921f6747bfd0686d1e228375c2e..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/La Salud Perfecta Deepak Chopra Pdf Download.md +++ /dev/null @@ -1,10 +0,0 @@ -

    la salud perfecta deepak chopra pdf download


    Download ✏ ✏ ✏ https://gohhs.com/2uFVEB



    -
    -For more than 25 years Chopra has explored the deeper aspects of the mind-body connection with his best-selling books and his award-winning PBS special, Deepak Chopra’s program on ABC/CBS/NBC television, and The Chopra Center for Well Being. In La perfecta salud, Chopra reveals for the first time the complete 21-day meditative program, illustrating its applications through his latest discoveries about the mind-body connection and the scientific and practical wisdom found in thousands of years of practice. He demonstrates how to practice guided imagery to release stress, relax the body, and develop feelings of security and joy. He tells readers how to use meditation to improve brain function, enhance focus and concentration, reduce anxiety, increase energy, develop a positive outlook, and become a calmer, more peaceful individual. He explains how a meditation program can enhance the effects of psychotherapy and medical treatment. Chopra presents his findings that vision therapy can improve vision, along with the recommended protocol for practicing vision therapy over time. He offers a prescription for peace and a prescription for love, and an entire chapter on how to experience the elixir of immortality. *FREE* shipping on qualifying offers. - -Perfecta Salud, Perfecta Vida. La guía mente/cuerpo completa : La guía perfecta salud/perfecta vida [Deepak Chopra] on Amazon.com. *FREE* shipping on qualifying offers. La guía . La guía perfecta salud/perfecta vida, una guía para perder peso con naturalidad y de forma constante. Para apoyarse en este libro se incluye la herramienta digital de inteligencia, que permite estar en contacto con el enfoque profundo de la mano de Chopra. Forma parte de la herramienta que proporciona las notas para cada autor de este libro y que contiene una selección de la información más relevante. *FREE* shipping on qualifying offers. Perfecta Salud, Perfecta Vida, La guía perfecta salud/perfecta vida. - -Dr. Chopra, editor. La guía perfecta salud, perfecta vida : La guía perfecta salud/perfecta vida [Deepak Chopra] on Amazon.com. *FREE* shipping on qualifying 4fefd39f24
    -
    -
    -

    diff --git a/spaces/diacanFperku/AutoGPT/Malizia(1973)movies Dvd Download 2021 3gp.md b/spaces/diacanFperku/AutoGPT/Malizia(1973)movies Dvd Download 2021 3gp.md deleted file mode 100644 index 8752c5fe335d4361c98d8876c1514c565df57197..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Malizia(1973)movies Dvd Download 2021 3gp.md +++ /dev/null @@ -1,32 +0,0 @@ -

    malizia(1973)movies dvd download 3gp


    Download Filehttps://gohhs.com/2uFV1N



    -
    -This site contains a large database of free movies and TV series. Watch full movies online for free on fmovies. - -Find the best free movies with millions of users. Watch free movies online with free registration at film4release. Go behind the scenes with all the latest casting news. - -We have lots of free xxx videos available for you to watch. Just pick your favorite xxx video and start watching. Video Watchfire. - -Over the last several years, we have continued to search for hidden treasures in our local film archives, adding many new movies and home movies to our library. Explore with this guide to the. - -In a provocative TED Talk, supermodel-turned-lawyer Thad Angell questions whether there’ s such a thing as a “ perfect. - -Julie Woods is a work at home mom to 4, a wife and a business owner. To read more from her, like her on Facebook, check out her website. Join Chris and Justin from Your Next Leader as they interview Leadership Expert, Julie Woods. Julie became an expert in over 3 years. - -Watch full episodes of this popular season of The Real Housewives of New York City. The other three return June 10 with a double episode, while Cynthia Nixon and Karen Giorno. - -Like the majority of New Yorkers, I’ m just plain annoyed by the rain. While my state faces the potential threat of a climate change emergency that demands immediate action, the world’ s most powerful country, not to mention. - -The real housewives of new york city season 15 episodes watch online free - -Julie Woods, a top Business Marketing & Communications attorney, worked for 18 years with top law firms in the. - -Julie Woods & Julie Woods. We are currently facing significant challenges to continue providing a world class service to our clients. - -TreeHugger is your go to site for news, articles, videos, and how to. After everything that happens, he continues to drive his Ford Focus. Reaching a critical mass of famous and infamous prostitutes. Watch full movies online with free registration at film4release. - -Watch free movies online without download, subtitles, or annoying advertisements. On the surface it might appear that Julie Woods is a normal New Yorker— she lives in a stylish brownstone in the West Village with her husband, two kids, and dog. - -Julie Woods & Julie Woods. Watch full episodes of The Real Housewives 4fefd39f24
    -
    -
    -

    diff --git a/spaces/diacanFperku/AutoGPT/Password Wordlist Txt.md b/spaces/diacanFperku/AutoGPT/Password Wordlist Txt.md deleted file mode 100644 index 941c84fdaae8e26a40175a10d94b66cecfb6a512..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Password Wordlist Txt.md +++ /dev/null @@ -1,103 +0,0 @@ -
    -

    Password Wordlist TXT: Everything You Need to Know

    -

    Password cracking is the process of guessing or recovering passwords from encrypted or hashed data. It is used for purposes such as testing the security of a system, recovering lost or forgotten passwords, or breaking into accounts without authorization. Common methods include brute-force attacks, rainbow tables, and dictionary attacks. In this article, we will focus on dictionary attacks and how password wordlist txt files can help you perform them.

    -

    password wordlist txt


    Downloadhttps://gohhs.com/2uFTqN



    -

    What is a Dictionary Attack?

    -

    A dictionary attack is a type of password cracking method that tries to guess passwords by using a list of common or likely passwords. The list of passwords can be obtained from various sources, such as leaked databases, websites, social media, etc. The list of passwords can also be generated by using tools that combine words from different languages, categories, patterns, etc. A dictionary attack can be faster and more efficient than a brute-force attack, which tries to guess passwords by using all possible combinations of characters.
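To make the mechanics concrete, here is a minimal, self-contained Python sketch of the idea: it hashes a test password that the script itself chooses, then "recovers" it by trying a tiny inline candidate list. The candidate words, the chosen test password, and the use of SHA-256 are all illustrative assumptions; a real wordlist would come from a file, and a real audit would only ever target systems you are authorized to test.

```python
from __future__ import annotations

import hashlib

# A tiny, made-up candidate list standing in for a real password wordlist txt file.
candidates = ["123456", "password", "letmein", "sunshine2020", "correcthorse"]

# Simulate the "unknown" hash by hashing one of the candidates ourselves;
# in a real, authorized test this digest would come from the system under review.
target_hash = hashlib.sha256("sunshine2020".encode("utf-8")).hexdigest()


def dictionary_attack(target: str, wordlist: list[str]) -> str | None:
    """Return the first candidate whose SHA-256 digest matches the target hash."""
    for word in wordlist:
        if hashlib.sha256(word.encode("utf-8")).hexdigest() == target:
            return word
    return None  # nothing matched; only a much slower brute-force search could go further


print(dictionary_attack(target_hash, candidates))  # prints: sunshine2020
```

The loop is the whole point: instead of enumerating every possible string, only a short list of likely passwords is tried, which is why the quality of the wordlist matters more than raw guessing speed.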

    -

    What is a Password Wordlist TXT?

    -

    A password wordlist txt is a text file that contains a list of passwords or words that can be used for dictionary attacks. A password wordlist txt file can have different formats and sizes depending on the source and the method of creation. Some password wordlist txt files may have one password or word per line, while others may have multiple passwords or words separated by commas or spaces. Some password wordlist txt files may have millions of passwords or words, while others may have only thousands or hundreds.
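As a rough illustration of coping with those different layouts, the sketch below loads a local file, splits it on newlines, commas, or spaces, and de-duplicates the result. The filename `wordlist.txt` is just a placeholder for whatever list you actually have.

```python
from __future__ import annotations

import re
from pathlib import Path


def load_wordlist(path: str) -> list[str]:
    """Read a password wordlist txt file and return unique, non-empty candidates.

    Works for one-password-per-line files as well as files where several
    words share a line separated by commas or whitespace.
    """
    text = Path(path).read_text(encoding="utf-8", errors="ignore")
    raw = re.split(r"[,\s]+", text)      # split on commas and any run of whitespace
    seen: dict[str, None] = {}           # a dict preserves insertion order
    for word in raw:
        if word:
            seen.setdefault(word, None)  # drop duplicates, keep the first occurrence
    return list(seen)


if __name__ == "__main__":
    words = load_wordlist("wordlist.txt")  # placeholder filename
    if words:
        print(f"{len(words)} unique candidates")
        print("shortest:", min(words, key=len), "| longest:", max(words, key=len))
```

Reading the whole file at once keeps the sketch short; for multi-gigabyte lists you would stream it line by line instead.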

    -

    Where to Find Password Wordlist TXT Files?

    -

    There are many websites and repositories that offer password wordlist txt files for download. Some of them are:

    -
      -
    • Password Wordlist(235k): This is a GitHub gist that contains a password wordlist txt file with 235k passwords or words. The file is truncated on the website, but you can view the full file by clicking on the download link.
    • -
    • wpa2-wordlists: This is a GitHub repository that contains a collection of passwords and wordlists commonly used for dictionary attacks using a variety of password cracking tools such as aircrack-ng, hydra, and hashcat. It also provides useful one-liners for wordlist manipulation.
    • -
    • SecLists: This is a GitHub repository that contains a collection of multiple types of lists used during security assessments, collected in one place. It includes usernames, passwords, URLs, sensitive data patterns, fuzzing payloads, web shells, and many more.
    • -
    -

    How to Use Password Wordlist TXT Files?

    -

    To use password wordlist txt files for dictionary attacks, you need a password cracking tool that can take a wordlist file as input. Some of the popular password cracking tools are:

    -
      -
    • aircrack-ng: This is a tool for cracking WEP and WPA/WPA2 wireless network passwords. It can use password wordlist txt files with the -w option.
    • -
    • hydra: This is a tool for cracking passwords of various network services such as FTP, SSH, Telnet, HTTP, etc. It can use password wordlist txt files with the -P option.
    • -
    • hashcat: This is a tool for cracking hashed passwords using various algorithms such as MD5, SHA1, SHA256, etc. It can use password wordlist txt files with the -a 0 option.
    • -
    -

    To use password wordlist txt files with these tools, follow these steps (a minimal sketch of the last two steps is shown below):

    1. Download a password wordlist txt file from one of the sources mentioned above, or create your own.
    2. Transfer the file to the computer or device where the password cracking tool is installed.
    3. Extract the file if it is compressed in a zip or gz archive.
    4. Run the tool with the appropriate options and arguments to specify the target, the mode, the algorithm, and the password wordlist txt file.
    5. Wait for the tool to finish and review the results.
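
    For example, a dictionary attack with hashcat against a file of MD5 hashes is typically launched roughly as in the sketch below. The file names are placeholders, and the flags (-m 0 for MD5, -a 0 for a straight dictionary attack) follow hashcat's documented usage, but check the help output of your version before relying on them.

```python
import subprocess

# Placeholder paths: hashes.txt holds one hash per line, and
# wordlist.txt is the password wordlist txt file to try.
cmd = [
    "hashcat",
    "-m", "0",   # hash mode 0 = MD5 (see hashcat's documented hash modes)
    "-a", "0",   # attack mode 0 = straight (dictionary) attack
    "hashes.txt",
    "wordlist.txt",
]

# Run the tool and wait for it to finish (steps 4 and 5 above).
result = subprocess.run(cmd, capture_output=True, text=True)
print(result.stdout)
```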

    What are the Advantages and Disadvantages of Password Wordlist TXT Files?

    Password wordlist txt files have some advantages and disadvantages when used for dictionary attacks:

    • Advantages:
      • They can speed up the password cracking process by reducing the search space.
      • They can increase the success rate by trying common or likely passwords first.
      • They can be customized or optimized for specific targets or scenarios by using filters or rules.
    • Disadvantages:
      • They can be ineffective or inefficient if they do not contain the correct or relevant passwords for the target.
      • They can be large and require considerable storage space or memory.
      • When used against online services, they can be easily detected or blocked by security measures such as rate limiting or CAPTCHAs.


    What are the Risks of Password Wordlist TXT Files?

    Password wordlist txt files can be useful for password cracking, but they also pose risks for both attackers and defenders:

    • They can expose your own passwords or words to others if you store them in an insecure location or share them online.
    • They can compromise your accounts or systems if you use weak or common passwords that are included in such wordlists.
    • They can violate the privacy or security of others if used to crack passwords of accounts or systems you are not authorized to test.
    • They can cause legal or ethical issues if used against protected or sensitive accounts or systems.

    How to Protect Yourself from Password Wordlist TXT Files?

    To protect yourself from wordlist-driven attacks, take some precautions and follow these best practices (a small defensive check is sketched after this list):

    • Use strong and unique passwords. Avoid common or predictable passwords that can easily be guessed or found in password wordlist txt files.
    • Use a different password for every account or system; never reuse the same password in multiple places.
    • Use a password manager to store and manage your passwords securely. Do not keep passwords in plain text files or in online services that are not encrypted or protected.
    • Enable two-factor authentication to add an extra layer of security, so that a password alone is not enough to authenticate.
    • Download wordlists and security tools only from reputable sources, since files from untrusted sites can bundle malware.
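
    As a simple defensive check, you can test whether a password you are considering already appears in a common wordlist before using it. The sketch below assumes you have such a list downloaded locally; the file name is a placeholder.

```python
import getpass

def is_in_wordlist(password, wordlist_path):
    """Return True if the password appears verbatim in the wordlist file."""
    with open(wordlist_path, "r", encoding="utf-8", errors="ignore") as f:
        return any(line.strip() == password for line in f)

candidate = getpass.getpass("Password to check: ")
if is_in_wordlist(candidate, "common-passwords.txt"):  # placeholder file name
    print("This password is in a public wordlist - do not use it.")
else:
    print("Not found in this wordlist (still make sure it is long and unique).")
```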


    How to Create Your Own Password Wordlist TXT Files?

    If you want to create your own password wordlist txt files, you can use various tools and methods to generate them (a rough generator sketch follows this list):

    • BEWGor: generates wordlists from a given set of input criteria. You can specify the language, length, character set, patterns, prefixes, suffixes, and other options to create customized wordlists.
    • KaliLists: generates wordlists based on a given target domain or website. You can specify the domain name, subdomains, extensions, keywords, and other options to create targeted wordlists.
    • CeWL: generates wordlists from a given web page or website. You can specify the URL, crawl depth, minimum and maximum word length, exclusions, and other options to create wordlists from web content.
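
    To give a feel for what these generators do, here is a very rough Python sketch in the spirit of CeWL: it fetches one page, extracts words above a minimum length, and writes the unique ones to a txt file. The URL is a placeholder, and real tools handle crawl depth, HTML parsing, and exclusions far more carefully.

```python
import re
import urllib.request

def words_from_page(url, min_length=5):
    """Fetch a page and return the unique words of at least min_length letters."""
    with urllib.request.urlopen(url) as response:
        html = response.read().decode("utf-8", errors="ignore")
    # Strip tags crudely, then keep alphabetic tokens of the required length.
    text = re.sub(r"<[^>]+>", " ", html)
    tokens = re.findall(r"[A-Za-z]{%d,}" % min_length, text)
    return sorted(set(tokens))

words = words_from_page("https://example.com", min_length=5)  # placeholder URL
with open("custom-wordlist.txt", "w", encoding="utf-8") as f:
    f.write("\n".join(words) + "\n")
print("Wrote", len(words), "words")
```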

    How to Optimize Your Password Wordlist TXT Files?

    To optimize your password wordlist txt files, you can use various tools and techniques to manipulate them (a small merge-and-deduplicate sketch follows this list):

    • Probable-Wordlists: a collection of wordlists sorted by probability according to analysis of leaked password databases. You can use them to try the most likely passwords or words first in your dictionary attacks.
    • hashcat: supports various rules and filters that modify wordlist entries on the fly, for example by adding, removing, replacing, appending, prepending, or toggling characters.
    • naive-hashcat: a tool that uses hashcat to run a naive combinator-style attack; you can also use it to combine multiple wordlists into one large wordlist.
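
    A minimal sketch of the combine-and-deduplicate step in Python is shown below; the file names are placeholders, and tools like those above do the same thing at much larger scale and can additionally sort entries by probability.

```python
def merge_wordlists(paths, output_path):
    """Merge several wordlist txt files, dropping duplicates, keeping first-seen order."""
    seen = set()
    with open(output_path, "w", encoding="utf-8") as out:
        for path in paths:
            with open(path, "r", encoding="utf-8", errors="ignore") as f:
                for line in f:
                    word = line.strip()
                    if word and word not in seen:
                        seen.add(word)
                        out.write(word + "\n")
    return len(seen)

total = merge_wordlists(["list-a.txt", "list-b.txt"], "merged.txt")  # placeholder names
print("Merged wordlist contains", total, "unique entries")
```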

    Conclusion

    Password wordlist txt files are text files containing lists of passwords or words for use in dictionary attacks, a password cracking method that guesses passwords from a list of common or likely candidates. They can make cracking faster and more efficient than trying random combinations of characters. You can download such files from various online sources or create your own with dedicated tools, and use them with password cracking tools such as aircrack-ng, hydra, and hashcat. They have advantages and disadvantages, as well as risks you need to be aware of and protect yourself from, and they can be optimized with various manipulation tools and techniques.

    We hope this article has helped you to understand what password wordlist txt files are, where to find them, how to use them, what their pros and cons are, how to protect yourself from them, and how to optimize them. If you have any questions or suggestions, feel free to leave a comment below. Thank you for reading!

    \ No newline at end of file diff --git a/spaces/diagaiwei/ir_chinese_medqa/colbert/evaluation/load_model.py b/spaces/diagaiwei/ir_chinese_medqa/colbert/evaluation/load_model.py deleted file mode 100644 index 13cb5c82dfa3309d814a80fc4082eada44d0eedd..0000000000000000000000000000000000000000 --- a/spaces/diagaiwei/ir_chinese_medqa/colbert/evaluation/load_model.py +++ /dev/null @@ -1,28 +0,0 @@ -import os -import ujson -import torch -import random - -from collections import defaultdict, OrderedDict - -from colbert.parameters import DEVICE -from colbert.modeling.colbert import ColBERT -from colbert.utils.utils import print_message, load_checkpoint - - -def load_model(args, do_print=True): - colbert = ColBERT.from_pretrained('bert-base-uncased', - query_maxlen=args.query_maxlen, - doc_maxlen=args.doc_maxlen, - dim=args.dim, - similarity_metric=args.similarity, - mask_punctuation=args.mask_punctuation) - colbert = colbert.to(DEVICE) - - print_message("#> Loading model checkpoint.", condition=do_print) - - checkpoint = load_checkpoint(args.checkpoint, colbert, do_print=do_print) - - colbert.eval() - - return colbert, checkpoint diff --git a/spaces/digitalxingtong/Jiaohuaji-Bert-Vits2/app.py b/spaces/digitalxingtong/Jiaohuaji-Bert-Vits2/app.py deleted file mode 100644 index 642bdeebf826cf29d4a7fedcd707793e240b5160..0000000000000000000000000000000000000000 --- a/spaces/digitalxingtong/Jiaohuaji-Bert-Vits2/app.py +++ /dev/null @@ -1,183 +0,0 @@ -import sys, os - -if sys.platform == "darwin": - os.environ["PYTORCH_ENABLE_MPS_FALLBACK"] = "1" - -import logging - -logging.getLogger("numba").setLevel(logging.WARNING) -logging.getLogger("markdown_it").setLevel(logging.WARNING) -logging.getLogger("urllib3").setLevel(logging.WARNING) -logging.getLogger("matplotlib").setLevel(logging.WARNING) - -logging.basicConfig(level=logging.INFO, format="| %(name)s | %(levelname)s | %(message)s") - -logger = logging.getLogger(__name__) - -import torch -import argparse -import commons -import utils -from models import SynthesizerTrn -from text.symbols import symbols -from text import cleaned_text_to_sequence, get_bert -from text.cleaner import clean_text -import gradio as gr -import webbrowser - - -net_g = None - - -def get_text(text, language_str, hps): - norm_text, phone, tone, word2ph = clean_text(text, language_str) - phone, tone, language = cleaned_text_to_sequence(phone, tone, language_str) - - if hps.data.add_blank: - phone = commons.intersperse(phone, 0) - tone = commons.intersperse(tone, 0) - language = commons.intersperse(language, 0) - for i in range(len(word2ph)): - word2ph[i] = word2ph[i] * 2 - word2ph[0] += 1 - bert = get_bert(norm_text, word2ph, language_str) - del word2ph - - assert bert.shape[-1] == len(phone) - - phone = torch.LongTensor(phone) - tone = torch.LongTensor(tone) - language = torch.LongTensor(language) - - return bert, phone, tone, language -import soundfile as sf -def infer(text, sdp_ratio, noise_scale, noise_scale_w, length_scale, sid): - global net_g - bert, phones, tones, lang_ids = get_text(text, "ZH", hps) - with torch.no_grad(): - x_tst=phones.to(device).unsqueeze(0) - tones=tones.to(device).unsqueeze(0) - lang_ids=lang_ids.to(device).unsqueeze(0) - bert = bert.to(device).unsqueeze(0) - x_tst_lengths = torch.LongTensor([phones.size(0)]).to(device) - del phones - speakers = torch.LongTensor([hps.data.spk2id[sid]]).to(device) - audio = net_g.infer(x_tst, x_tst_lengths, speakers, tones, lang_ids, bert, sdp_ratio=sdp_ratio - , noise_scale=noise_scale, noise_scale_w=noise_scale_w, 
length_scale=length_scale)[0][0,0].data.cpu().float().numpy() - del x_tst, tones, lang_ids, bert, x_tst_lengths, speakers - sf.write("tmp.wav", audio, 44100) - return audio -def convert_wav_to_ogg(wav_file): - os.makedirs('out', exist_ok=True) - filename = os.path.splitext(os.path.basename(wav_file.name))[0] - output_path_ogg = os.path.join('out', f"out.ogg") - - renamed_input_path = os.path.join('in', f"in.wav") - os.makedirs('in', exist_ok=True) - os.rename(wav_file.name, renamed_input_path) - command = ["ffmpeg", "-i", renamed_input_path, "-acodec", "libopus", "-y", output_path_ogg] - os.system(" ".join(command)) - return output_path_ogg -def tts_fn(text, speaker, sdp_ratio, noise_scale, noise_scale_w, length_scale): - with torch.no_grad(): - audio = infer(text, sdp_ratio=sdp_ratio, noise_scale=noise_scale, noise_scale_w=noise_scale_w, length_scale=length_scale, sid=speaker) - with open('tmp.wav', 'rb') as wav_file: - newogg = convert_wav_to_ogg(wav_file) - return "Success", (hps.data.sampling_rate, audio),newogg - - - -if __name__ == "__main__": - parser = argparse.ArgumentParser() - parser.add_argument("--model_dir", default="./logs/jiaohuaji/jiaohuaji.pth", help="path of your model") - parser.add_argument("--config_dir", default="./configs/config.json", help="path of your config file") - parser.add_argument("--share", default=False, help="make link public") - parser.add_argument("-d", "--debug", action="store_true", help="enable DEBUG-LEVEL log") - - args = parser.parse_args() - if args.debug: - logger.info("Enable DEBUG-LEVEL log") - logging.basicConfig(level=logging.DEBUG) - hps = utils.get_hparams_from_file(args.config_dir) - device = "cuda:0" if torch.cuda.is_available() else "cpu" - ''' - device = ( - "cuda:0" - if torch.cuda.is_available() - else ( - "mps" - if sys.platform == "darwin" and torch.backends.mps.is_available() - else "cpu" - ) - ) - ''' - net_g = SynthesizerTrn( - len(symbols), - hps.data.filter_length // 2 + 1, - hps.train.segment_size // hps.data.hop_length, - n_speakers=hps.data.n_speakers, - **hps.model).to(device) - _ = net_g.eval() - - _ = utils.load_checkpoint(args.model_dir, net_g, None, skip_optimizer=True) - - speaker_ids = hps.data.spk2id - speakers = list(speaker_ids.keys()) - with gr.Blocks() as app: - with gr.Row(): - with gr.Column(): - - - gr.Markdown(value=""" - 甜甜叫花鸡 Bert-Vits2在线语音生成\n - 1、模型作者:数字星瞳企划 https://t.me/xingtong25680 \n - \n - 2、原项目地址:https://github.com/Stardust-minus/Bert-VITS2\n - 3、使用此模型进行二创请注明AI生成,以及该项目地址。\n - 4、如果想生成超长txt文本的音频请使用colab。 https://colab.research.google.com/drive/13ek8_j1aknr-pbjj3NXxSM4vBIsracU3?usp=drive_link\n - - """) - text = gr.TextArea(label="Text", placeholder="Input Text Here", - value="这里是数字星瞳企画,请在电报搜索星瞳全拼加二五六八零,获取最新更新进展。") - speaker = gr.Dropdown(choices=speakers, value=speakers[0], label='Speaker') - sdp_ratio = gr.Slider(minimum=0, maximum=1, value=0.2, step=0.01, label='语调变化') - noise_scale = gr.Slider(minimum=0.1, maximum=1.5, value=0.6, step=0.01, label='感情变化') - noise_scale_w = gr.Slider(minimum=0.1, maximum=1.4, value=0.8, step=0.01, label='音节发音长度变化') - length_scale = gr.Slider(minimum=0.1, maximum=2, value=1, step=0.01, label='语速') - btn = gr.Button("开启AI语音之旅吧!", variant="primary") - with gr.Column(): - text_output = gr.Textbox(label="Message") - audio_output = gr.Audio(label="Output Audio") - ogg_output = gr.File(label="Converted OGG file") - gr.Markdown(value=""" - 模型汇总:\n - 星瞳 https://huggingface.co/spaces/digitalxingtong/Xingtong-Bert-Vits2 \n - 星瞳 朗读专用 
https://huggingface.co/spaces/digitalxingtong/Xingtong-Read-Bert-VITS2 \n - 星瞳 长文本专用 https://huggingface.co/spaces/digitalxingtong/Xingtong-Longread-Bert-VITS2 \n - 甜甜叫花鸡 https://huggingface.co/spaces/digitalxingtong/Jiaohuaji-Bert-Vits2 \n - 七海 https://huggingface.co/spaces/digitalxingtong/Nanami-Bert-Vits2 \n - 东雪莲 https://huggingface.co/spaces/digitalxingtong/Azuma-Bert-Vits2 \n - 嘉然 https://huggingface.co/spaces/digitalxingtong/Jiaran-Bert-Vits2 \n - 乃琳 https://huggingface.co/spaces/digitalxingtong/Eileen-Bert-Vits2 \n - 恬豆 https://huggingface.co/spaces/digitalxingtong/Dou-Bert-Vits2 \n - 奶绿 杂谈 https://huggingface.co/spaces/digitalxingtong/Nailv-Bert-Vits2 \n - 奶绿 朗读 https://huggingface.co/spaces/digitalxingtong/Nailv-read-Bert-Vits2 \n - 露早 https://huggingface.co/spaces/digitalxingtong/Luzao-Bert-Vits2 \n - 柚恩 https://huggingface.co/spaces/digitalxingtong/Un-Bert-Vits2 \n - 米诺 https://huggingface.co/spaces/digitalxingtong/Minuo-Bert-Vits2 \n - 扇宝 https://huggingface.co/spaces/digitalxingtong/Shanbao-Bert-Vits2 \n - 牧牧白 https://huggingface.co/spaces/digitalxingtong/Miiu-Bert-Vits2 \n - 吉诺儿kino https://huggingface.co/spaces/digitalxingtong/Kino-Bert-Vits2 \n - 九夏 https://huggingface.co/spaces/digitalxingtong/Jiuxia-Bert-Vits2 \n - 卡缇娅 https://huggingface.co/spaces/digitalxingtong/Yaya-Bert-Vits2 \n - 理想_ideal https://huggingface.co/spaces/digitalxingtong/Lixiang-Bert-Vits2 \n - 阿梓 https://huggingface.co/spaces/digitalxingtong/Azusa-Bert-Vits2 \n - 鹿鸣 https://huggingface.co/spaces/digitalxingtong/Luming-Bert-Vits2 \n - 永雏塔菲 https://huggingface.co/spaces/digitalxingtong/Taffy-Bert-VITS2 \n - """) - btn.click(tts_fn, - inputs=[text, speaker, sdp_ratio, noise_scale, noise_scale_w, length_scale], - outputs=[text_output, audio_output,ogg_output]) - - - app.launch(show_error=True) diff --git a/spaces/digitalxingtong/Xingtong-Longread-Bert-VITS2/monotonic_align/core.py b/spaces/digitalxingtong/Xingtong-Longread-Bert-VITS2/monotonic_align/core.py deleted file mode 100644 index 5ff728cd74c9228346a82ec64a9829cb98ad315e..0000000000000000000000000000000000000000 --- a/spaces/digitalxingtong/Xingtong-Longread-Bert-VITS2/monotonic_align/core.py +++ /dev/null @@ -1,36 +0,0 @@ -import numba - - -@numba.jit(numba.void(numba.int32[:, :, ::1], numba.float32[:, :, ::1], numba.int32[::1], numba.int32[::1]), - nopython=True, nogil=True) -def maximum_path_jit(paths, values, t_ys, t_xs): - b = paths.shape[0] - max_neg_val = -1e9 - for i in range(int(b)): - path = paths[i] - value = values[i] - t_y = t_ys[i] - t_x = t_xs[i] - - v_prev = v_cur = 0.0 - index = t_x - 1 - - for y in range(t_y): - for x in range(max(0, t_x + y - t_y), min(t_x, y + 1)): - if x == y: - v_cur = max_neg_val - else: - v_cur = value[y - 1, x] - if x == 0: - if y == 0: - v_prev = 0. 
- else: - v_prev = max_neg_val - else: - v_prev = value[y - 1, x - 1] - value[y, x] += max(v_prev, v_cur) - - for y in range(t_y - 1, -1, -1): - path[y, index] = 1 - if index != 0 and (index == y or value[y - 1, index] < value[y - 1, index - 1]): - index = index - 1 \ No newline at end of file diff --git a/spaces/dineshreddy/WALT/mmdet/models/dense_heads/transformer_head.py b/spaces/dineshreddy/WALT/mmdet/models/dense_heads/transformer_head.py deleted file mode 100644 index 820fd069fcca295f6102f0d27366158a8c640249..0000000000000000000000000000000000000000 --- a/spaces/dineshreddy/WALT/mmdet/models/dense_heads/transformer_head.py +++ /dev/null @@ -1,654 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as F -from mmcv.cnn import Conv2d, Linear, build_activation_layer -from mmcv.runner import force_fp32 - -from mmdet.core import (bbox_cxcywh_to_xyxy, bbox_xyxy_to_cxcywh, - build_assigner, build_sampler, multi_apply, - reduce_mean) -from mmdet.models.utils import (FFN, build_positional_encoding, - build_transformer) -from ..builder import HEADS, build_loss -from .anchor_free_head import AnchorFreeHead - - -@HEADS.register_module() -class TransformerHead(AnchorFreeHead): - """Implements the DETR transformer head. - - See `paper: End-to-End Object Detection with Transformers - `_ for details. - - Args: - num_classes (int): Number of categories excluding the background. - in_channels (int): Number of channels in the input feature map. - num_fcs (int, optional): Number of fully-connected layers used in - `FFN`, which is then used for the regression head. Default 2. - transformer (dict, optional): Config for transformer. - positional_encoding (dict, optional): Config for position encoding. - loss_cls (dict, optional): Config of the classification loss. - Default `CrossEntropyLoss`. - loss_bbox (dict, optional): Config of the regression loss. - Default `L1Loss`. - loss_iou (dict, optional): Config of the regression iou loss. - Default `GIoULoss`. - tran_cfg (dict, optional): Training config of transformer head. - test_cfg (dict, optional): Testing config of transformer head. - - Example: - >>> import torch - >>> self = TransformerHead(80, 2048) - >>> x = torch.rand(1, 2048, 32, 32) - >>> mask = torch.ones(1, 32, 32).to(x.dtype) - >>> mask[:, :16, :15] = 0 - >>> all_cls_scores, all_bbox_preds = self(x, mask) - """ - - def __init__(self, - num_classes, - in_channels, - num_fcs=2, - transformer=dict( - type='Transformer', - embed_dims=256, - num_heads=8, - num_encoder_layers=6, - num_decoder_layers=6, - feedforward_channels=2048, - dropout=0.1, - act_cfg=dict(type='ReLU', inplace=True), - norm_cfg=dict(type='LN'), - num_fcs=2, - pre_norm=False, - return_intermediate_dec=True), - positional_encoding=dict( - type='SinePositionalEncoding', - num_feats=128, - normalize=True), - loss_cls=dict( - type='CrossEntropyLoss', - bg_cls_weight=0.1, - use_sigmoid=False, - loss_weight=1.0, - class_weight=1.0), - loss_bbox=dict(type='L1Loss', loss_weight=5.0), - loss_iou=dict(type='GIoULoss', loss_weight=2.0), - train_cfg=dict( - assigner=dict( - type='HungarianAssigner', - cls_cost=dict(type='ClassificationCost', weight=1.), - reg_cost=dict(type='BBoxL1Cost', weight=5.0), - iou_cost=dict( - type='IoUCost', iou_mode='giou', weight=2.0))), - test_cfg=dict(max_per_img=100), - **kwargs): - # NOTE here use `AnchorFreeHead` instead of `TransformerHead`, - # since it brings inconvenience when the initialization of - # `AnchorFreeHead` is called. 
- super(AnchorFreeHead, self).__init__() - use_sigmoid_cls = loss_cls.get('use_sigmoid', False) - assert not use_sigmoid_cls, 'setting use_sigmoid_cls as True is ' \ - 'not supported in DETR, since background is needed for the ' \ - 'matching process.' - assert 'embed_dims' in transformer \ - and 'num_feats' in positional_encoding - num_feats = positional_encoding['num_feats'] - embed_dims = transformer['embed_dims'] - assert num_feats * 2 == embed_dims, 'embed_dims should' \ - f' be exactly 2 times of num_feats. Found {embed_dims}' \ - f' and {num_feats}.' - assert test_cfg is not None and 'max_per_img' in test_cfg - - class_weight = loss_cls.get('class_weight', None) - if class_weight is not None: - assert isinstance(class_weight, float), 'Expected ' \ - 'class_weight to have type float. Found ' \ - f'{type(class_weight)}.' - # NOTE following the official DETR rep0, bg_cls_weight means - # relative classification weight of the no-object class. - bg_cls_weight = loss_cls.get('bg_cls_weight', class_weight) - assert isinstance(bg_cls_weight, float), 'Expected ' \ - 'bg_cls_weight to have type float. Found ' \ - f'{type(bg_cls_weight)}.' - class_weight = torch.ones(num_classes + 1) * class_weight - # set background class as the last indice - class_weight[num_classes] = bg_cls_weight - loss_cls.update({'class_weight': class_weight}) - if 'bg_cls_weight' in loss_cls: - loss_cls.pop('bg_cls_weight') - self.bg_cls_weight = bg_cls_weight - - if train_cfg: - assert 'assigner' in train_cfg, 'assigner should be provided '\ - 'when train_cfg is set.' - assigner = train_cfg['assigner'] - assert loss_cls['loss_weight'] == assigner['cls_cost']['weight'], \ - 'The classification weight for loss and matcher should be' \ - 'exactly the same.' - assert loss_bbox['loss_weight'] == assigner['reg_cost'][ - 'weight'], 'The regression L1 weight for loss and matcher ' \ - 'should be exactly the same.' - assert loss_iou['loss_weight'] == assigner['iou_cost']['weight'], \ - 'The regression iou weight for loss and matcher should be' \ - 'exactly the same.' 
- self.assigner = build_assigner(assigner) - # DETR sampling=False, so use PseudoSampler - sampler_cfg = dict(type='PseudoSampler') - self.sampler = build_sampler(sampler_cfg, context=self) - self.num_classes = num_classes - self.cls_out_channels = num_classes + 1 - self.in_channels = in_channels - self.num_fcs = num_fcs - self.train_cfg = train_cfg - self.test_cfg = test_cfg - self.use_sigmoid_cls = use_sigmoid_cls - self.embed_dims = embed_dims - self.num_query = test_cfg['max_per_img'] - self.fp16_enabled = False - self.loss_cls = build_loss(loss_cls) - self.loss_bbox = build_loss(loss_bbox) - self.loss_iou = build_loss(loss_iou) - self.act_cfg = transformer.get('act_cfg', - dict(type='ReLU', inplace=True)) - self.activate = build_activation_layer(self.act_cfg) - self.positional_encoding = build_positional_encoding( - positional_encoding) - self.transformer = build_transformer(transformer) - self._init_layers() - - def _init_layers(self): - """Initialize layers of the transformer head.""" - self.input_proj = Conv2d( - self.in_channels, self.embed_dims, kernel_size=1) - self.fc_cls = Linear(self.embed_dims, self.cls_out_channels) - self.reg_ffn = FFN( - self.embed_dims, - self.embed_dims, - self.num_fcs, - self.act_cfg, - dropout=0.0, - add_residual=False) - self.fc_reg = Linear(self.embed_dims, 4) - self.query_embedding = nn.Embedding(self.num_query, self.embed_dims) - - def init_weights(self, distribution='uniform'): - """Initialize weights of the transformer head.""" - # The initialization for transformer is important - self.transformer.init_weights() - - def _load_from_state_dict(self, state_dict, prefix, local_metadata, strict, - missing_keys, unexpected_keys, error_msgs): - """load checkpoints.""" - # NOTE here use `AnchorFreeHead` instead of `TransformerHead`, - # since `AnchorFreeHead._load_from_state_dict` should not be - # called here. Invoking the default `Module._load_from_state_dict` - # is enough. - super(AnchorFreeHead, - self)._load_from_state_dict(state_dict, prefix, local_metadata, - strict, missing_keys, - unexpected_keys, error_msgs) - - def forward(self, feats, img_metas): - """Forward function. - - Args: - feats (tuple[Tensor]): Features from the upstream network, each is - a 4D-tensor. - img_metas (list[dict]): List of image information. - - Returns: - tuple[list[Tensor], list[Tensor]]: Outputs for all scale levels. - - - all_cls_scores_list (list[Tensor]): Classification scores \ - for each scale level. Each is a 4D-tensor with shape \ - [nb_dec, bs, num_query, cls_out_channels]. Note \ - `cls_out_channels` should includes background. - - all_bbox_preds_list (list[Tensor]): Sigmoid regression \ - outputs for each scale level. Each is a 4D-tensor with \ - normalized coordinate format (cx, cy, w, h) and shape \ - [nb_dec, bs, num_query, 4]. - """ - num_levels = len(feats) - img_metas_list = [img_metas for _ in range(num_levels)] - return multi_apply(self.forward_single, feats, img_metas_list) - - def forward_single(self, x, img_metas): - """"Forward function for a single feature level. - - Args: - x (Tensor): Input feature from backbone's single stage, shape - [bs, c, h, w]. - img_metas (list[dict]): List of image information. - - Returns: - all_cls_scores (Tensor): Outputs from the classification head, - shape [nb_dec, bs, num_query, cls_out_channels]. Note - cls_out_channels should includes background. - all_bbox_preds (Tensor): Sigmoid outputs from the regression - head with normalized coordinate format (cx, cy, w, h). - Shape [nb_dec, bs, num_query, 4]. 
- """ - # construct binary masks which used for the transformer. - # NOTE following the official DETR repo, non-zero values representing - # ignored positions, while zero values means valid positions. - batch_size = x.size(0) - input_img_h, input_img_w = img_metas[0]['batch_input_shape'] - masks = x.new_ones((batch_size, input_img_h, input_img_w)) - for img_id in range(batch_size): - img_h, img_w, _ = img_metas[img_id]['img_shape'] - masks[img_id, :img_h, :img_w] = 0 - - x = self.input_proj(x) - # interpolate masks to have the same spatial shape with x - masks = F.interpolate( - masks.unsqueeze(1), size=x.shape[-2:]).to(torch.bool).squeeze(1) - # position encoding - pos_embed = self.positional_encoding(masks) # [bs, embed_dim, h, w] - # outs_dec: [nb_dec, bs, num_query, embed_dim] - outs_dec, _ = self.transformer(x, masks, self.query_embedding.weight, - pos_embed) - - all_cls_scores = self.fc_cls(outs_dec) - all_bbox_preds = self.fc_reg(self.activate( - self.reg_ffn(outs_dec))).sigmoid() - return all_cls_scores, all_bbox_preds - - @force_fp32(apply_to=('all_cls_scores_list', 'all_bbox_preds_list')) - def loss(self, - all_cls_scores_list, - all_bbox_preds_list, - gt_bboxes_list, - gt_labels_list, - img_metas, - gt_bboxes_ignore=None): - """"Loss function. - - Only outputs from the last feature level are used for computing - losses by default. - - Args: - all_cls_scores_list (list[Tensor]): Classification outputs - for each feature level. Each is a 4D-tensor with shape - [nb_dec, bs, num_query, cls_out_channels]. - all_bbox_preds_list (list[Tensor]): Sigmoid regression - outputs for each feature level. Each is a 4D-tensor with - normalized coordinate format (cx, cy, w, h) and shape - [nb_dec, bs, num_query, 4]. - gt_bboxes_list (list[Tensor]): Ground truth bboxes for each image - with shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format. - gt_labels_list (list[Tensor]): Ground truth class indices for each - image with shape (num_gts, ). - img_metas (list[dict]): List of image meta information. - gt_bboxes_ignore (list[Tensor], optional): Bounding boxes - which can be ignored for each image. Default None. - - Returns: - dict[str, Tensor]: A dictionary of loss components. - """ - # NOTE defaultly only the outputs from the last feature scale is used. - all_cls_scores = all_cls_scores_list[-1] - all_bbox_preds = all_bbox_preds_list[-1] - assert gt_bboxes_ignore is None, \ - 'Only supports for gt_bboxes_ignore setting to None.' 
- - num_dec_layers = len(all_cls_scores) - all_gt_bboxes_list = [gt_bboxes_list for _ in range(num_dec_layers)] - all_gt_labels_list = [gt_labels_list for _ in range(num_dec_layers)] - all_gt_bboxes_ignore_list = [ - gt_bboxes_ignore for _ in range(num_dec_layers) - ] - img_metas_list = [img_metas for _ in range(num_dec_layers)] - - losses_cls, losses_bbox, losses_iou = multi_apply( - self.loss_single, all_cls_scores, all_bbox_preds, - all_gt_bboxes_list, all_gt_labels_list, img_metas_list, - all_gt_bboxes_ignore_list) - - loss_dict = dict() - # loss from the last decoder layer - loss_dict['loss_cls'] = losses_cls[-1] - loss_dict['loss_bbox'] = losses_bbox[-1] - loss_dict['loss_iou'] = losses_iou[-1] - # loss from other decoder layers - num_dec_layer = 0 - for loss_cls_i, loss_bbox_i, loss_iou_i in zip(losses_cls[:-1], - losses_bbox[:-1], - losses_iou[:-1]): - loss_dict[f'd{num_dec_layer}.loss_cls'] = loss_cls_i - loss_dict[f'd{num_dec_layer}.loss_bbox'] = loss_bbox_i - loss_dict[f'd{num_dec_layer}.loss_iou'] = loss_iou_i - num_dec_layer += 1 - return loss_dict - - def loss_single(self, - cls_scores, - bbox_preds, - gt_bboxes_list, - gt_labels_list, - img_metas, - gt_bboxes_ignore_list=None): - """"Loss function for outputs from a single decoder layer of a single - feature level. - - Args: - cls_scores (Tensor): Box score logits from a single decoder layer - for all images. Shape [bs, num_query, cls_out_channels]. - bbox_preds (Tensor): Sigmoid outputs from a single decoder layer - for all images, with normalized coordinate (cx, cy, w, h) and - shape [bs, num_query, 4]. - gt_bboxes_list (list[Tensor]): Ground truth bboxes for each image - with shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format. - gt_labels_list (list[Tensor]): Ground truth class indices for each - image with shape (num_gts, ). - img_metas (list[dict]): List of image meta information. - gt_bboxes_ignore_list (list[Tensor], optional): Bounding - boxes which can be ignored for each image. Default None. - - Returns: - dict[str, Tensor]: A dictionary of loss components for outputs from - a single decoder layer. 
- """ - num_imgs = cls_scores.size(0) - cls_scores_list = [cls_scores[i] for i in range(num_imgs)] - bbox_preds_list = [bbox_preds[i] for i in range(num_imgs)] - cls_reg_targets = self.get_targets(cls_scores_list, bbox_preds_list, - gt_bboxes_list, gt_labels_list, - img_metas, gt_bboxes_ignore_list) - (labels_list, label_weights_list, bbox_targets_list, bbox_weights_list, - num_total_pos, num_total_neg) = cls_reg_targets - labels = torch.cat(labels_list, 0) - label_weights = torch.cat(label_weights_list, 0) - bbox_targets = torch.cat(bbox_targets_list, 0) - bbox_weights = torch.cat(bbox_weights_list, 0) - - # classification loss - cls_scores = cls_scores.reshape(-1, self.cls_out_channels) - # construct weighted avg_factor to match with the official DETR repo - cls_avg_factor = num_total_pos * 1.0 + \ - num_total_neg * self.bg_cls_weight - loss_cls = self.loss_cls( - cls_scores, labels, label_weights, avg_factor=cls_avg_factor) - - # Compute the average number of gt boxes accross all gpus, for - # normalization purposes - num_total_pos = loss_cls.new_tensor([num_total_pos]) - num_total_pos = torch.clamp(reduce_mean(num_total_pos), min=1).item() - - # construct factors used for rescale bboxes - factors = [] - for img_meta, bbox_pred in zip(img_metas, bbox_preds): - img_h, img_w, _ = img_meta['img_shape'] - factor = bbox_pred.new_tensor([img_w, img_h, img_w, - img_h]).unsqueeze(0).repeat( - bbox_pred.size(0), 1) - factors.append(factor) - factors = torch.cat(factors, 0) - - # DETR regress the relative position of boxes (cxcywh) in the image, - # thus the learning target is normalized by the image size. So here - # we need to re-scale them for calculating IoU loss - bbox_preds = bbox_preds.reshape(-1, 4) - bboxes = bbox_cxcywh_to_xyxy(bbox_preds) * factors - bboxes_gt = bbox_cxcywh_to_xyxy(bbox_targets) * factors - - # regression IoU loss, defaultly GIoU loss - loss_iou = self.loss_iou( - bboxes, bboxes_gt, bbox_weights, avg_factor=num_total_pos) - - # regression L1 loss - loss_bbox = self.loss_bbox( - bbox_preds, bbox_targets, bbox_weights, avg_factor=num_total_pos) - return loss_cls, loss_bbox, loss_iou - - def get_targets(self, - cls_scores_list, - bbox_preds_list, - gt_bboxes_list, - gt_labels_list, - img_metas, - gt_bboxes_ignore_list=None): - """"Compute regression and classification targets for a batch image. - - Outputs from a single decoder layer of a single feature level are used. - - Args: - cls_scores_list (list[Tensor]): Box score logits from a single - decoder layer for each image with shape [num_query, - cls_out_channels]. - bbox_preds_list (list[Tensor]): Sigmoid outputs from a single - decoder layer for each image, with normalized coordinate - (cx, cy, w, h) and shape [num_query, 4]. - gt_bboxes_list (list[Tensor]): Ground truth bboxes for each image - with shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format. - gt_labels_list (list[Tensor]): Ground truth class indices for each - image with shape (num_gts, ). - img_metas (list[dict]): List of image meta information. - gt_bboxes_ignore_list (list[Tensor], optional): Bounding - boxes which can be ignored for each image. Default None. - - Returns: - tuple: a tuple containing the following targets. - - - labels_list (list[Tensor]): Labels for all images. - - label_weights_list (list[Tensor]): Label weights for all \ - images. - - bbox_targets_list (list[Tensor]): BBox targets for all \ - images. - - bbox_weights_list (list[Tensor]): BBox weights for all \ - images. 
- - num_total_pos (int): Number of positive samples in all \ - images. - - num_total_neg (int): Number of negative samples in all \ - images. - """ - assert gt_bboxes_ignore_list is None, \ - 'Only supports for gt_bboxes_ignore setting to None.' - num_imgs = len(cls_scores_list) - gt_bboxes_ignore_list = [ - gt_bboxes_ignore_list for _ in range(num_imgs) - ] - - (labels_list, label_weights_list, bbox_targets_list, - bbox_weights_list, pos_inds_list, neg_inds_list) = multi_apply( - self._get_target_single, cls_scores_list, bbox_preds_list, - gt_bboxes_list, gt_labels_list, img_metas, gt_bboxes_ignore_list) - num_total_pos = sum((inds.numel() for inds in pos_inds_list)) - num_total_neg = sum((inds.numel() for inds in neg_inds_list)) - return (labels_list, label_weights_list, bbox_targets_list, - bbox_weights_list, num_total_pos, num_total_neg) - - def _get_target_single(self, - cls_score, - bbox_pred, - gt_bboxes, - gt_labels, - img_meta, - gt_bboxes_ignore=None): - """"Compute regression and classification targets for one image. - - Outputs from a single decoder layer of a single feature level are used. - - Args: - cls_score (Tensor): Box score logits from a single decoder layer - for one image. Shape [num_query, cls_out_channels]. - bbox_pred (Tensor): Sigmoid outputs from a single decoder layer - for one image, with normalized coordinate (cx, cy, w, h) and - shape [num_query, 4]. - gt_bboxes (Tensor): Ground truth bboxes for one image with - shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format. - gt_labels (Tensor): Ground truth class indices for one image - with shape (num_gts, ). - img_meta (dict): Meta information for one image. - gt_bboxes_ignore (Tensor, optional): Bounding boxes - which can be ignored. Default None. - - Returns: - tuple[Tensor]: a tuple containing the following for one image. - - - labels (Tensor): Labels of each image. - - label_weights (Tensor]): Label weights of each image. - - bbox_targets (Tensor): BBox targets of each image. - - bbox_weights (Tensor): BBox weights of each image. - - pos_inds (Tensor): Sampled positive indices for each image. - - neg_inds (Tensor): Sampled negative indices for each image. - """ - - num_bboxes = bbox_pred.size(0) - # assigner and sampler - assign_result = self.assigner.assign(bbox_pred, cls_score, gt_bboxes, - gt_labels, img_meta, - gt_bboxes_ignore) - sampling_result = self.sampler.sample(assign_result, bbox_pred, - gt_bboxes) - pos_inds = sampling_result.pos_inds - neg_inds = sampling_result.neg_inds - - # label targets - labels = gt_bboxes.new_full((num_bboxes, ), - self.num_classes, - dtype=torch.long) - labels[pos_inds] = gt_labels[sampling_result.pos_assigned_gt_inds] - label_weights = gt_bboxes.new_ones(num_bboxes) - - # bbox targets - bbox_targets = torch.zeros_like(bbox_pred) - bbox_weights = torch.zeros_like(bbox_pred) - bbox_weights[pos_inds] = 1.0 - img_h, img_w, _ = img_meta['img_shape'] - - # DETR regress the relative position of boxes (cxcywh) in the image. - # Thus the learning target should be normalized by the image size, also - # the box format should be converted from defaultly x1y1x2y2 to cxcywh. 
- factor = bbox_pred.new_tensor([img_w, img_h, img_w, - img_h]).unsqueeze(0) - pos_gt_bboxes_normalized = sampling_result.pos_gt_bboxes / factor - pos_gt_bboxes_targets = bbox_xyxy_to_cxcywh(pos_gt_bboxes_normalized) - bbox_targets[pos_inds] = pos_gt_bboxes_targets - return (labels, label_weights, bbox_targets, bbox_weights, pos_inds, - neg_inds) - - # over-write because img_metas are needed as inputs for bbox_head. - def forward_train(self, - x, - img_metas, - gt_bboxes, - gt_labels=None, - gt_bboxes_ignore=None, - proposal_cfg=None, - **kwargs): - """Forward function for training mode. - - Args: - x (list[Tensor]): Features from backbone. - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - gt_bboxes (Tensor): Ground truth bboxes of the image, - shape (num_gts, 4). - gt_labels (Tensor): Ground truth labels of each box, - shape (num_gts,). - gt_bboxes_ignore (Tensor): Ground truth bboxes to be - ignored, shape (num_ignored_gts, 4). - proposal_cfg (mmcv.Config): Test / postprocessing configuration, - if None, test_cfg would be used. - - Returns: - dict[str, Tensor]: A dictionary of loss components. - """ - assert proposal_cfg is None, '"proposal_cfg" must be None' - outs = self(x, img_metas) - if gt_labels is None: - loss_inputs = outs + (gt_bboxes, img_metas) - else: - loss_inputs = outs + (gt_bboxes, gt_labels, img_metas) - losses = self.loss(*loss_inputs, gt_bboxes_ignore=gt_bboxes_ignore) - return losses - - @force_fp32(apply_to=('all_cls_scores_list', 'all_bbox_preds_list')) - def get_bboxes(self, - all_cls_scores_list, - all_bbox_preds_list, - img_metas, - rescale=False): - """Transform network outputs for a batch into bbox predictions. - - Args: - all_cls_scores_list (list[Tensor]): Classification outputs - for each feature level. Each is a 4D-tensor with shape - [nb_dec, bs, num_query, cls_out_channels]. - all_bbox_preds_list (list[Tensor]): Sigmoid regression - outputs for each feature level. Each is a 4D-tensor with - normalized coordinate format (cx, cy, w, h) and shape - [nb_dec, bs, num_query, 4]. - img_metas (list[dict]): Meta information of each image. - rescale (bool, optional): If True, return boxes in original - image space. Default False. - - Returns: - list[list[Tensor, Tensor]]: Each item in result_list is 2-tuple. \ - The first item is an (n, 5) tensor, where the first 4 columns \ - are bounding box positions (tl_x, tl_y, br_x, br_y) and the \ - 5-th column is a score between 0 and 1. The second item is a \ - (n,) tensor where each item is the predicted class label of \ - the corresponding box. - """ - # NOTE defaultly only using outputs from the last feature level, - # and only the outputs from the last decoder layer is used. - cls_scores = all_cls_scores_list[-1][-1] - bbox_preds = all_bbox_preds_list[-1][-1] - - result_list = [] - for img_id in range(len(img_metas)): - cls_score = cls_scores[img_id] - bbox_pred = bbox_preds[img_id] - img_shape = img_metas[img_id]['img_shape'] - scale_factor = img_metas[img_id]['scale_factor'] - proposals = self._get_bboxes_single(cls_score, bbox_pred, - img_shape, scale_factor, - rescale) - result_list.append(proposals) - return result_list - - def _get_bboxes_single(self, - cls_score, - bbox_pred, - img_shape, - scale_factor, - rescale=False): - """Transform outputs from the last decoder layer into bbox predictions - for each image. - - Args: - cls_score (Tensor): Box score logits from the last decoder layer - for each image. Shape [num_query, cls_out_channels]. 
- bbox_pred (Tensor): Sigmoid outputs from the last decoder layer - for each image, with coordinate format (cx, cy, w, h) and - shape [num_query, 4]. - img_shape (tuple[int]): Shape of input image, (height, width, 3). - scale_factor (ndarray, optional): Scale factor of the image arange - as (w_scale, h_scale, w_scale, h_scale). - rescale (bool, optional): If True, return boxes in original image - space. Default False. - - Returns: - tuple[Tensor]: Results of detected bboxes and labels. - - - det_bboxes: Predicted bboxes with shape [num_query, 5], \ - where the first 4 columns are bounding box positions \ - (tl_x, tl_y, br_x, br_y) and the 5-th column are scores \ - between 0 and 1. - - det_labels: Predicted labels of the corresponding box with \ - shape [num_query]. - """ - assert len(cls_score) == len(bbox_pred) - # exclude background - scores, det_labels = F.softmax(cls_score, dim=-1)[..., :-1].max(-1) - det_bboxes = bbox_cxcywh_to_xyxy(bbox_pred) - det_bboxes[:, 0::2] = det_bboxes[:, 0::2] * img_shape[1] - det_bboxes[:, 1::2] = det_bboxes[:, 1::2] * img_shape[0] - det_bboxes[:, 0::2].clamp_(min=0, max=img_shape[1]) - det_bboxes[:, 1::2].clamp_(min=0, max=img_shape[0]) - if rescale: - det_bboxes /= det_bboxes.new_tensor(scale_factor) - det_bboxes = torch.cat((det_bboxes, scores.unsqueeze(1)), -1) - return det_bboxes, det_labels diff --git a/spaces/dinhminh20521597/OCR_DEMO/configs/textdet/psenet/psenet_r50_fpnf_600e_ctw1500.py b/spaces/dinhminh20521597/OCR_DEMO/configs/textdet/psenet/psenet_r50_fpnf_600e_ctw1500.py deleted file mode 100644 index 483a2b2e1e7e584dfba26c7c5f506ce544953db8..0000000000000000000000000000000000000000 --- a/spaces/dinhminh20521597/OCR_DEMO/configs/textdet/psenet/psenet_r50_fpnf_600e_ctw1500.py +++ /dev/null @@ -1,35 +0,0 @@ -_base_ = [ - '../../_base_/default_runtime.py', - '../../_base_/schedules/schedule_adam_step_600e.py', - '../../_base_/det_models/psenet_r50_fpnf.py', - '../../_base_/det_datasets/ctw1500.py', - '../../_base_/det_pipelines/psenet_pipeline.py' -] - -model = {{_base_.model_poly}} - -train_list = {{_base_.train_list}} -test_list = {{_base_.test_list}} - -train_pipeline = {{_base_.train_pipeline}} -test_pipeline_ctw1500 = {{_base_.test_pipeline_ctw1500}} - -data = dict( - samples_per_gpu=2, - workers_per_gpu=2, - val_dataloader=dict(samples_per_gpu=1), - test_dataloader=dict(samples_per_gpu=1), - train=dict( - type='UniformConcatDataset', - datasets=train_list, - pipeline=train_pipeline), - val=dict( - type='UniformConcatDataset', - datasets=test_list, - pipeline=test_pipeline_ctw1500), - test=dict( - type='UniformConcatDataset', - datasets=test_list, - pipeline=test_pipeline_ctw1500)) - -evaluation = dict(interval=10, metric='hmean-iou') diff --git a/spaces/dirge/voicevox/voicevox_engine/dev/core/mock.py b/spaces/dirge/voicevox/voicevox_engine/dev/core/mock.py deleted file mode 100644 index 59eb63d7039b44a27c9e5e17120d83d41763c353..0000000000000000000000000000000000000000 --- a/spaces/dirge/voicevox/voicevox_engine/dev/core/mock.py +++ /dev/null @@ -1,121 +0,0 @@ -import json -from logging import getLogger -from typing import Any, Dict, List - -import numpy as np -from pyopenjtalk import tts -from scipy.signal import resample - -DUMMY_TEXT = "これはダミーのテキストです" - - -def initialize(path: str, use_gpu: bool, *args: List[Any]) -> None: - pass - - -def yukarin_s_forward(length: int, **kwargs: Dict[str, Any]) -> np.ndarray: - logger = getLogger("uvicorn") # FastAPI / Uvicorn 内からの利用のため - logger.info( - "Sorry, yukarin_s_forward() is a mock. 
Return values are incorrect.", - ) - return np.ones(length) / 5 - - -def yukarin_sa_forward(length: int, **kwargs: Dict[str, Any]) -> np.ndarray: - logger = getLogger("uvicorn") # FastAPI / Uvicorn 内からの利用のため - logger.info( - "Sorry, yukarin_sa_forward() is a mock. Return values are incorrect.", - ) - return np.ones((1, length)) * 5 - - -def decode_forward(length: int, **kwargs: Dict[str, Any]) -> np.ndarray: - """ - 合成音声の波形データをNumPy配列で返します。ただし、常に固定の文言を読み上げます(DUMMY_TEXT) - 参照→SynthesisEngine のdocstring [Mock] - - Parameters - ---------- - length : int - フレームの長さ - - Returns - ------- - wave : np.ndarray - 音声合成した波形データ - - Note - ------- - ここで行う音声合成では、調声(ピッチ等)を反映しない - また、入力内容によらず常に固定の文言を読み上げる - - # pyopenjtalk.tts()の出力仕様 - dtype=np.float64, 16 bit, mono 48000 Hz - - # resampleの説明 - 非モックdecode_forwardと合わせるために、出力を24kHzに変換した。 - """ - logger = getLogger("uvicorn") # FastAPI / Uvicorn 内からの利用のため - logger.info( - "Sorry, decode_forward() is a mock. Return values are incorrect.", - ) - wave, sr = tts(DUMMY_TEXT) - wave = resample( - wave.astype("int16"), - 24000 * len(wave) // 48000, - ) - return wave - - -def metas() -> str: - return json.dumps( - [ - { - "name": "dummy1", - "styles": [ - {"name": "style0", "id": 0}, - {"name": "style1", "id": 2}, - {"name": "style2", "id": 4}, - {"name": "style3", "id": 6}, - ], - "speaker_uuid": "7ffcb7ce-00ec-4bdc-82cd-45a8889e43ff", - "version": "mock", - }, - { - "name": "dummy2", - "styles": [ - {"name": "style0", "id": 1}, - {"name": "style1", "id": 3}, - {"name": "style2", "id": 5}, - {"name": "style3", "id": 7}, - ], - "speaker_uuid": "388f246b-8c41-4ac1-8e2d-5d79f3ff56d9", - "version": "mock", - }, - { - "name": "dummy3", - "styles": [ - {"name": "style0", "id": 8}, - ], - "speaker_uuid": "35b2c544-660e-401e-b503-0e14c635303a", - "version": "mock", - }, - { - "name": "dummy4", - "styles": [ - {"name": "style0", "id": 9}, - ], - "speaker_uuid": "b1a81618-b27b-40d2-b0ea-27a9ad408c4b", - "version": "mock", - }, - ] - ) - - -def supported_devices() -> str: - return json.dumps( - { - "cpu": True, - "cuda": False, - } - ) diff --git a/spaces/doluvor/faster-whisper-webui/docs/options.md b/spaces/doluvor/faster-whisper-webui/docs/options.md deleted file mode 100644 index 6979fca4d9d4c98a626a2953c2573ff23898a37e..0000000000000000000000000000000000000000 --- a/spaces/doluvor/faster-whisper-webui/docs/options.md +++ /dev/null @@ -1,134 +0,0 @@ -# Standard Options -To transcribe or translate an audio file, you can either copy an URL from a website (all [websites](https://github.com/yt-dlp/yt-dlp/blob/master/supportedsites.md) -supported by YT-DLP will work, including YouTube). Otherwise, upload an audio file (choose "All Files (*.*)" -in the file selector to select any file type, including video files) or use the microphone. - -For longer audio files (>10 minutes), it is recommended that you select Silero VAD (Voice Activity Detector) in the VAD option, especially if you are using the `large-v1` model. Note that `large-v2` is a lot more forgiving, but you may still want to use a VAD with a slightly higher "VAD - Max Merge Size (s)" (60 seconds or more). 
- -## Model -Select the model that Whisper will use to transcribe the audio: - -| Size | Parameters | English-only model | Multilingual model | Required VRAM | Relative speed | -|-----------|------------|--------------------|--------------------|---------------|----------------| -| tiny | 39 M | tiny.en | tiny | ~1 GB | ~32x | -| base | 74 M | base.en | base | ~1 GB | ~16x | -| small | 244 M | small.en | small | ~2 GB | ~6x | -| medium | 769 M | medium.en | medium | ~5 GB | ~2x | -| large | 1550 M | N/A | large | ~10 GB | 1x | -| large-v2 | 1550 M | N/A | large | ~10 GB | 1x | - -## Language - -Select the language, or leave it empty for Whisper to automatically detect it. - -Note that if the selected language and the language in the audio differs, Whisper may start to translate the audio to the selected -language. For instance, if the audio is in English but you select Japaneese, the model may translate the audio to Japanese. - -## Inputs -The options "URL (YouTube, etc.)", "Upload Files" or "Micriphone Input" allows you to send an audio input to the model. - -### Multiple Files -Note that the UI will only process either the given URL or the upload files (including microphone) - not both. - -But you can upload multiple files either through the "Upload files" option, or as a playlist on YouTube. Each audio file will then be processed in turn, and the resulting SRT/VTT/Transcript will be made available in the "Download" section. When more than one file is processed, the UI will also generate a "All_Output" zip file containing all the text output files. - -## Task -Select the task - either "transcribe" to transcribe the audio to text, or "translate" to translate it to English. - -## Vad -Using a VAD will improve the timing accuracy of each transcribed line, as well as prevent Whisper getting into an infinite -loop detecting the same sentence over and over again. The downside is that this may be at a cost to text accuracy, especially -with regards to unique words or names that appear in the audio. You can compensate for this by increasing the prompt window. - -Note that English is very well handled by Whisper, and it's less susceptible to issues surrounding bad timings and infinite loops. -So you may only need to use a VAD for other languages, such as Japanese, or when the audio is very long. - -* none - * Run whisper on the entire audio input -* silero-vad - * Use Silero VAD to detect sections that contain speech, and run Whisper on independently on each section. Whisper is also run - on the gaps between each speech section, by either expanding the section up to the max merge size, or running Whisper independently - on the non-speech section. -* silero-vad-expand-into-gaps - * Use Silero VAD to detect sections that contain speech, and run Whisper on independently on each section. Each spech section will be expanded - such that they cover any adjacent non-speech sections. For instance, if an audio file of one minute contains the speech sections - 00:00 - 00:10 (A) and 00:30 - 00:40 (B), the first section (A) will be expanded to 00:00 - 00:30, and (B) will be expanded to 00:30 - 00:60. -* silero-vad-skip-gaps - * As above, but sections that doesn't contain speech according to Silero will be skipped. This will be slightly faster, but - may cause dialogue to be skipped. -* periodic-vad - * Create sections of speech every 'VAD - Max Merge Size' seconds. This is very fast and simple, but will potentially break - a sentence or word in two. 
- -## VAD - Merge Window -If set, any adjacent speech sections that are at most this number of seconds apart will be automatically merged. - -## VAD - Max Merge Size (s) -Disables merging of adjacent speech sections if they are this number of seconds long. - -## VAD - Padding (s) -The number of seconds (floating point) to add to the beginning and end of each speech section. Setting this to a number -larger than zero ensures that Whisper is more likely to correctly transcribe a sentence in the beginning of -a speech section. However, this also increases the probability of Whisper assigning the wrong timestamp -to each transcribed line. The default value is 1 second. - -## VAD - Prompt Window (s) -The text of a detected line will be included as a prompt to the next speech section, if the speech section starts at most this -number of seconds after the line has finished. For instance, if a line ends at 10:00, and the next speech section starts at -10:04, the line's text will be included if the prompt window is 4 seconds or more (10:04 - 10:00 = 4 seconds). - -Note that detected lines in gaps between speech sections will not be included in the prompt -(if silero-vad or silero-vad-expand-into-gaps) is used. - -# Command Line Options - -Both `app.py` and `cli.py` also accept command line options, such as the ability to enable parallel execution on multiple -CPU/GPU cores, the default model name/VAD and so on. Consult the README in the root folder for more information. - -# Additional Options - -In addition to the above, there's also a "Full" options interface that allows you to set all the options available in the Whisper -model. The options are as follows: - -## Initial Prompt -Optional text to provide as a prompt for the first 30 seconds window. Whisper will attempt to use this as a starting point for the transcription, but you can -also get creative and specify a style or format for the output of the transcription. - -For instance, if you use the prompt "hello how is it going always use lowercase no punctuation goodbye one two three start stop i you me they", Whisper will -be biased to output lower capital letters and no punctuation, and may also be biased to output the words in the prompt more often. - -## Temperature -The temperature to use when sampling. Default is 0 (zero). A higher temperature will result in more random output, while a lower temperature will be more deterministic. - -## Best Of - Non-zero temperature -The number of candidates to sample from when sampling with non-zero temperature. Default is 5. - -## Beam Size - Zero temperature -The number of beams to use in beam search when sampling with zero temperature. Default is 5. - -## Patience - Zero temperature -The patience value to use in beam search when sampling with zero temperature. As in https://arxiv.org/abs/2204.05424, the default (1.0) is equivalent to conventional beam search. - -## Length Penalty - Any temperature -The token length penalty coefficient (alpha) to use when sampling with any temperature. As in https://arxiv.org/abs/1609.08144, uses simple length normalization by default. - -## Suppress Tokens - Comma-separated list of token IDs -A comma-separated list of token IDs to suppress during sampling. The default value of "-1" will suppress most special characters except common punctuations. - -## Condition on previous text -If True, provide the previous output of the model as a prompt for the next window. 
Disabling this may make the text inconsistent across windows, but the model becomes less prone to getting stuck in a failure loop. - -## FP16 -Whether to perform inference in fp16. True by default. - -## Temperature increment on fallback -The temperature to increase when falling back when the decoding fails to meet either of the thresholds below. Default is 0.2. - -## Compression ratio threshold -If the gzip compression ratio is higher than this value, treat the decoding as failed. Default is 2.4. - -## Logprob threshold -If the average log probability is lower than this value, treat the decoding as failed. Default is -1.0. - -## No speech threshold -If the probability of the <|nospeech|> token is higher than this value AND the decoding has failed due to `logprob_threshold`, consider the segment as silence. Default is 0.6. diff --git a/spaces/dongyi/MMFS/models/modules/stylegan2/op/__init__.py b/spaces/dongyi/MMFS/models/modules/stylegan2/op/__init__.py deleted file mode 100644 index d0918d92285955855be89f00096b888ee5597ce3..0000000000000000000000000000000000000000 --- a/spaces/dongyi/MMFS/models/modules/stylegan2/op/__init__.py +++ /dev/null @@ -1,2 +0,0 @@ -from .fused_act import FusedLeakyReLU, fused_leaky_relu -from .upfirdn2d import upfirdn2d diff --git a/spaces/drift-ai/art-search-engine/README.md b/spaces/drift-ai/art-search-engine/README.md deleted file mode 100644 index 3a2ddd8055f0520f411fd64baf4306a183576c46..0000000000000000000000000000000000000000 --- a/spaces/drift-ai/art-search-engine/README.md +++ /dev/null @@ -1,19 +0,0 @@ ---- -title: Art Search Engine -emoji: 🖼🖼🖼 -colorFrom: black -colorTo: black -sdk: gradio -sdk_version: 3.12.0 -app_file: app.py -pinned: false -duplicated_from: vincentclaes/art-search-engine ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference - - -```bash -make setup -poetry shell -``` \ No newline at end of file diff --git a/spaces/dylanebert/igf/viewer/src/app.html b/spaces/dylanebert/igf/viewer/src/app.html deleted file mode 100644 index 875ffd4ff024300782cfe68f594ea01648731962..0000000000000000000000000000000000000000 --- a/spaces/dylanebert/igf/viewer/src/app.html +++ /dev/null @@ -1,17 +0,0 @@ - - - - - - - - - - %sveltekit.head% - - - -
    %sveltekit.body%
    - - - \ No newline at end of file diff --git a/spaces/exaggerated/PaddleOCR/pp_ocr.py b/spaces/exaggerated/PaddleOCR/pp_ocr.py deleted file mode 100644 index 943fc721c2b3979a4820bb08b275e894f8774d89..0000000000000000000000000000000000000000 --- a/spaces/exaggerated/PaddleOCR/pp_ocr.py +++ /dev/null @@ -1,18 +0,0 @@ -import tempfile -import os - -import paddlehub as hub -from PIL import Image - -pp_ocrv3 = hub.Module(name="ch_pp-ocrv3") - -def inference_img(img): - with tempfile.TemporaryDirectory() as tempdir_name: - pp_ocrv3.recognize_text(images=[img], use_gpu=False, output_dir=tempdir_name, visualization=True) - result_names = os.listdir(tempdir_name) - result_image = Image.open(os.path.join(tempdir_name, result_names[0])) - return result_image - -def inference_json(img): - results = pp_ocrv3.recognize_text(images=[img], use_gpu=False, visualization=False) - return results \ No newline at end of file diff --git a/spaces/facebook/MusicGen/scripts/static/style.css b/spaces/facebook/MusicGen/scripts/static/style.css deleted file mode 100644 index a0df7c63a0d2dd9a79f33f5d869ca31c9da87e8d..0000000000000000000000000000000000000000 --- a/spaces/facebook/MusicGen/scripts/static/style.css +++ /dev/null @@ -1,113 +0,0 @@ -body { - background-color: #fbfbfb; - margin: 0; -} - -select, input { - font-size: 1em; - max-width: 100%; -} - -.xp_name { - font-family: monospace; -} - -.simple_form { - background-color: #dddddd; - padding: 1em; - margin: 0.5em; -} - -textarea { - margin-top: 0.5em; - margin-bottom: 0.5em; -} - -.rating { - background-color: grey; - padding-top: 5px; - padding-bottom: 5px; - padding-left: 8px; - padding-right: 8px; - margin-right: 2px; - cursor:pointer; -} - -.rating_selected { - background-color: purple; -} - -.content { - font-family: sans-serif; - background-color: #f6f6f6; - padding: 40px; - margin: 0 auto; - max-width: 1000px; -} - -.track label { - padding-top: 10px; - padding-bottom: 10px; -} -.track { - padding: 15px; - margin: 5px; - background-color: #c8c8c8; -} - -.submit-big { - width:400px; - height:30px; - font-size: 20px; -} - -.error { - color: red; -} - -.ratings { - margin-left: 10px; -} - -.important { - font-weight: bold; -} - -.survey { - margin-bottom: 100px; -} - -.success { - color: #25901b; - font-weight: bold; -} -.warning { - color: #8a1f19; - font-weight: bold; -} -.track>section { - display: flex; - align-items: center; -} - -.prompt { - display: flex; - align-items: center; -} - -.track>section>div { - padding-left: 10px; -} - -audio { - max-width: 280px; - max-height: 40px; - margin-left: 10px; - margin-right: 10px; -} - -.special { - font-weight: bold; - color: #2c2c2c; -} - diff --git a/spaces/falterWliame/Face_Mask_Detection/Download Captain Claw Game For Free Full Version.md b/spaces/falterWliame/Face_Mask_Detection/Download Captain Claw Game For Free Full Version.md deleted file mode 100644 index 8d2c8f86aa4d3bf84836a8098d98500247f0f257..0000000000000000000000000000000000000000 --- a/spaces/falterWliame/Face_Mask_Detection/Download Captain Claw Game For Free Full Version.md +++ /dev/null @@ -1,6 +0,0 @@ -

    download captain claw game for free full version


    Download ->>> https://urlca.com/2uDdTG



    -
-Captain Claw is a full fighting game. It is single-player. A cheat book is included. 1fdad05405
    -
    -
    -

    diff --git a/spaces/falterWliame/Face_Mask_Detection/Download Film Kisah Nabi Musa Fu HOT!.md b/spaces/falterWliame/Face_Mask_Detection/Download Film Kisah Nabi Musa Fu HOT!.md deleted file mode 100644 index 4e590cf521d0c7c2a5de1226c27af526705aa1f9..0000000000000000000000000000000000000000 --- a/spaces/falterWliame/Face_Mask_Detection/Download Film Kisah Nabi Musa Fu HOT!.md +++ /dev/null @@ -1,5 +0,0 @@ - -

    09 jan 2020 Resident Evil 6(2006) 1080p BD920x720BrRip-[6].mp4, via Bittorrent [68][br]Shutter Island (2010) 1080p BrRip-[8].mp4, via BitTorrent [67]BrRip-[7].mp4, via Bittorrent [63]Shutter Island (2010) 1080p [6].mp4, via Bittorrent [62]Shutter Island (2010) 1080p [7].mp4, via Bittorrent [61]Shutter Island (2010) 1080p [8].mp4, via Bittorrent [60]Drawn Together has a lot of problems with Download / Operating system Related Software.Bittorrent Replica Key.Torrent Keys, Torrent Keys for Windows 10, Microsoft Windows 8/8.1, Microsoft Windows 7/Vista,... XtraTorrent is a cross-platform and fully featured BitTorrent application written in C++. Frendly Sites - The Leading Bittorrent Sites Free and safe download. Download the latest version of the top 1000 software, games, themes, and apps from the top software publishers. System Requirements. Windows 2000, Windows XP, Windows Vista, Windows 7; Internet; Client for DirectShow Shutter-Folders had to be included among... SPCS> Movies> Photo> Download... Handheld Hidden Object Play. Download Latest Movie Genres 2019.... Pantagruel, iv Pro. 09/18/17 7:33:23... 20/10/15 I looked at the download link which says the resources have been moved and I cant get to the missing links Download Ola- OpenLoad Download.flv which contains no subtitles. Win32 > Flash > Pro Tools 11/17 > Microsoft. Net Framework 4. 3) Add the following link to the flash application to your hosts file Text.ProFoundation.ProTools.10.3.v2011.x86.1823694.msi 2) Install Xp and the required latest version of Quicktime. 3) Open the Pro Tools installer and update the installation. Click on the "Install Updates" link. 4) Restart the Pro Tools installer. 5) The Pro Tools installer will launch again. 6) Click on "Uninstall." 7) The "Fix Display" link will show up. Click this link. 8) A new file listing will appear in the Pro Tools installer. 9) Double click on the littlest file in the listing. 10) When the installation window opens, check the box "Use a proxy server" and type in your http proxy address. 11) Click "Next." 12) Click "Finish." 13) The Pro Tools installer will open. 14) Click "Check for updates" on the main installation window. 15) Wait for installation to complete. 16) When the installation is complete, click on "Finish" on the installation window. 17) The Pro Tools installation window will then close and the Pro Tools installer will open. Get Video Downloader Pro Ultimate from its Official website video downloader pro ultimate. Video Downloader Pro. Previously Downloader.PROFESSIONAL features. Multiple connections: multiple connections are added to download all the needed video files at once, effectively cutting down the downloading time. Download with youtube Downloader Pro: get instant access to the vast video collection of youtube.

    -

    Download Film Kisah Nabi Musa Fu


    Download 🌟 https://urlca.com/2uDdZs



    899543212b
    -
    -
    \ No newline at end of file diff --git a/spaces/farukozderim/Model-Comparator-Space-Builder/README.md b/spaces/farukozderim/Model-Comparator-Space-Builder/README.md deleted file mode 100644 index 2b2d69cc2b282b567ce143073d47a2125a329b35..0000000000000000000000000000000000000000 --- a/spaces/farukozderim/Model-Comparator-Space-Builder/README.md +++ /dev/null @@ -1,43 +0,0 @@ ---- -title: Model Comparator Space Builder -emoji: 🌍 -colorFrom: gray -colorTo: purple -sdk: gradio -app_file: app.py -pinned: true ---- - -# Tests -``` -pytest -v tests.py -``` - - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio` or `streamlit` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code). -Path is relative to the root of the repository. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. diff --git a/spaces/fatiXbelha/sd/Download Red Hat Free and Learn from Hands-on Learning Paths.md b/spaces/fatiXbelha/sd/Download Red Hat Free and Learn from Hands-on Learning Paths.md deleted file mode 100644 index 51d6467720ea0931596f7a96403c1dadaa28ec36..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Download Red Hat Free and Learn from Hands-on Learning Paths.md +++ /dev/null @@ -1,120 +0,0 @@ -
    -

    download red hat free


    Download File ⇒⇒⇒ https://urllie.com/2uNFwy



    -


    How to Download Red Hat Linux for Free

    -

    Red Hat Linux is one of the most popular and widely used operating systems in the world, especially for enterprise and cloud computing. It is based on open source technologies and provides a stable, secure, and flexible platform for running various applications and workloads. But did you know that you can download Red Hat Linux for free? In this article, we will show you how to get access to different versions of Red Hat Linux without paying a dime.

    -

    Red Hat Enterprise Linux: The leading enterprise Linux platform

    -

    Red Hat Enterprise Linux (RHEL) is the flagship product of Red Hat, the world's leading provider of open source solutions. RHEL is certified on hundreds of clouds and with thousands of hardware and software vendors, making it the most compatible and reliable platform for your business needs. RHEL comes with 24x7 support, security updates, and access to the Red Hat Customer Portal, where you can find thousands of knowledge articles and documentation. RHEL also includes Red Hat Insights, a managed service that provides analytics and remediation guidance to help you optimize performance and avoid issues.

    -

    -

    But how can you download RHEL for free? The answer is simple: join the Red Hat Developer Program. This is a no-cost subscription that gives you access to RHEL Server, RHEL Workstation, and various add-ons such as resilient storage, scalable file systems, and high-performance networking. You can use RHEL for development purposes only, not for production or commercial use. You can also create installation disks and virtual machines, as well as cloud-ready images for AWS, Google Cloud Platform, Microsoft Azure, and VMWare. To get started, all you need to do is sign up with your email address and activate your subscription.

    -

    Red Hat Developer Program: A no-cost subscription for developers

    -

    The Red Hat Developer Program is not only about RHEL. It also gives you access to other open source products and services from Red Hat, such as:

    -
      -
    • Red Hat OpenShift: A container platform that lets you build, deploy, and manage applications at scale.
    • -
    • Red Hat Ansible Automation Platform: A foundation for implementing enterprise-wide automation.
    • -
    • Red Hat CodeReady Workspaces: A cloud-native development environment that runs in OpenShift.
    • -
    • Red Hat CodeReady Studio: An integrated development environment (IDE) that supports Java, Node.js, Python, PHP, and more.
    • -
    • Red Hat Quarkus: A Kubernetes-native Java stack that optimizes memory usage and startup time.
    • -
    -

    As a member of the Red Hat Developer Program, you can also access developer tutorials, learning paths, forums, blogs, podcasts, webinars, events, and more. You can also connect with other developers and experts in the community and get support for your projects. The Red Hat Developer Program is a great way to learn new skills, explore new technologies, and accelerate your development process.

    -

    Red Hat Universal Base Image: A container image for your applications

    -

    If you are interested in developing containerized applications using Red Hat Linux, you should check out the Red Hat Universal Base Image (UBI). UBI is a container image that contains the core components of RHEL, such as the kernel, libraries, utilities, and packages. UBI is freely redistributable under the terms of the Universal Permissive License v1.0 (UPL), which means you can use it as a base for your own container images without any restrictions or obligations.

    -

    UBI comes in four variants: base, minimal, micro, and init. Each variant has different features and sizes to suit different use cases. You can use UBI with any container engine or orchestration platform that supports OCI images, such as Podman, OpenShift, or Docker. You can also use UBI with other tools from the Red Hat ecosystem, such as Buildah.

    Buildah is a tool that allows you to build, modify, and push container images. You can use Buildah to create UBI-based images from scratch, from existing images, or from Dockerfiles. You can also use Buildah to inspect, mount, and run your images. Buildah is compatible with the OCI image format and the Docker image format, so you can easily share your images with other platforms and registries.

    -

    To download UBI and Buildah, you can use the following commands:

podman pull registry.access.redhat.com/ubi8/ubi
dnf install buildah

    For more information on how to use UBI and Buildah, you can refer to the official documentation and tutorials.
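To see how these pieces fit together, here is a minimal, hypothetical example of building a UBI-based image with Buildah and running it with Podman. The application file, the installed package, and the image tag are placeholders for illustration only, not taken from Red Hat's documentation:

# Containerfile (hypothetical example application)
FROM registry.access.redhat.com/ubi8/ubi-minimal
RUN microdnf install -y python3 && microdnf clean all
COPY app.py /app.py
CMD ["python3", "/app.py"]

# Build the image with Buildah, then run it with Podman
buildah bud -t my-ubi-app .
podman run --rm my-ubi-app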

    -

    Conclusion: How to download Red Hat Linux for free

    -

    In this article, we have shown you how to download Red Hat Linux for free in different ways. You can use the Red Hat Developer Program to get access to RHEL and other open source products and services from Red Hat. You can also use the Red Hat Universal Base Image to create your own container images based on RHEL. These options are ideal for developers who want to experiment with Red Hat Linux and learn new skills. If you are ready to take your development to the next level, you can also try Red Hat OpenShift for free for 60 days.

    -

    Red Hat Linux is a powerful and versatile operating system that can help you run your applications faster, safer, and smarter. Whether you are developing for the cloud, the edge, or the hybrid environment, Red Hat Linux has a solution for you. Download Red Hat Linux for free today and see what it can do for you.

    -

    FAQs: Answer some common questions about downloading Red Hat Linux for free

    -
      -
    • Q: Can I use RHEL for free in production?
    • -
    • A: No, you can only use RHEL for free for development purposes. If you want to use RHEL in production or commercial environments, you need to purchase a subscription from Red Hat.
    • -
    • Q: How long does the Red Hat Developer Program subscription last?
    • -
    • A: The Red Hat Developer Program subscription lasts for one year. You can renew it as long as you remain an active member of the program.
    • -
    • Q: How can I get support for RHEL and other products from the Red Hat Developer Program?
    • -
    • A: You can get support from the Red Hat Developer Program through various channels, such as forums, blogs, webinars, events, and documentation. You can also contact the Red Hat Customer Service team for technical issues.
    • -
    • Q: What are the benefits of using UBI over other base images?
    • -
    • A: UBI has several benefits over other base images, such as:
    • -
        -
      • It is based on RHEL, which is a trusted and proven platform for enterprise applications.
      • -
      • It is freely redistributable under the UPL license, which gives you more flexibility and control over your images.
      • -
      • It is compatible with both OCI and Docker image formats, which makes it easy to share and deploy your images.
      • -
      • It is regularly updated with security patches and bug fixes from Red Hat.
      • -
      -
    • Q: How can I learn more about Red Hat Linux and other open source technologies?
    • -
    • A: You can learn more about Red Hat Linux and other open source technologies by visiting the Red Hat website, where you can find resources such as:
    • -
        -
      • The Red Hat Learning Subscription, which gives you unlimited access to online training courses and exams.
      • -
      • The Red Hat Certification Program, which validates your skills and knowledge in various domains.
      • -
      • The Red Hat Open Source Stories, which showcases how open source is changing the world.
      • -
      -

    197e85843d
    -
    -
    \ No newline at end of file diff --git a/spaces/fatiXbelha/sd/FIFA Mobile - The Only Licensed FIFA World Cup 2022 Game for Android.md b/spaces/fatiXbelha/sd/FIFA Mobile - The Only Licensed FIFA World Cup 2022 Game for Android.md deleted file mode 100644 index 923a9a4f57fecb1f26103136517e41c0eb1bcc86..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/FIFA Mobile - The Only Licensed FIFA World Cup 2022 Game for Android.md +++ /dev/null @@ -1,102 +0,0 @@ -
    -

    Download FIFA 2022 Mobile APK: The Ultimate Guide

    -

    If you are a soccer fan, you must have heard of FIFA, the most popular and realistic soccer game series in the world. FIFA has been releasing new versions of its games every year, featuring updated players, teams, leagues, stadiums, graphics, and gameplay. The latest version of FIFA is FIFA 22, which was released on October 1, 2021 for PC, consoles, and Stadia.

    -

    download fifa 2022 mobile apk


    Download Filehttps://urllie.com/2uNCNN



    -

    But what if you want to play FIFA on your mobile device? Well, you are in luck, because EA Sports has also developed a mobile version of FIFA, called FIFA Mobile. FIFA Mobile is a free-to-play soccer game that you can download and play on your iOS or Android device. It has many features and modes that make it a fun and engaging soccer experience.

    -

    One of the most exciting features of FIFA Mobile is the FIFA World Cup 2022 mode, which lets you relive the world's greatest soccer tournament with any of the 32 qualified nations. You can also build your dream team with over 15,000 authentic soccer stars, including world-class talent like Kylian Mbappé, Christian Pulisic, Vinicius Jr and Son Heung-min. You can compete against the best in PvP modes, including Head-to-Head, VS Attack and Manager Mode. You can also experience immersive next-level soccer simulation with realistic stadium SFX and live on-field audio commentary.

    -

    But how do you download and install FIFA 2022 Mobile APK on your device? And what are some tips and tricks to improve your gameplay? In this guide, we will answer these questions and more. We will also provide you with some reviews and ratings from other players who have tried FIFA 2022 Mobile APK. So, without further ado, let's get started!

    -

    What is FIFA 2022 Mobile APK?

    -

    FIFA 2022 Mobile APK is an unofficial version of FIFA Mobile that has been modified to include the latest features and updates from FIFA 22. It is not available on the official app stores, but you can download it from third-party websites that host APK files.

    -

    APK stands for Android Package Kit, which is a file format that contains all the components of an Android app. You can install APK files on your Android device by enabling unknown sources in your settings and following some simple steps.

    -

    FIFA 2022 Mobile APK has many advantages over the official version of FIFA Mobile. For example, it has more players, teams, leagues, kits, badges, stadiums, and modes than the official version. It also has better graphics, sound effects, and gameplay than the official version. It also allows you to play offline without an internet connection.

    -

    However, there are also some drawbacks of using FIFA 2022 Mobile APK. For example, it may not be compatible with all devices or operating systems. It may also contain bugs or errors that affect the performance or stability of the game. It may also pose security risks to your device or data if you download it from untrusted sources. It may also violate the terms of service or policies of EA Sports or Google Play.

    -

    Therefore, before you decide to download and install FIFA 2022 Mobile APK, you should weigh the pros and cons carefully and make an informed decision. You should also be aware of the possible consequences of using an unofficial version of FIFA Mobile, such as losing your progress, account, or access to the game.

    Features of FIFA 2022 Mobile APK

    -

    FIFA 2022 Mobile APK has many features that make it a great soccer game for mobile devices. Here are some of the main features that you can enjoy with FIFA 2022 Mobile APK:

    -

    download fifa 2022 mobile game for android
    -how to install fifa 2022 mobile apk on your device
    -fifa 2022 mobile apk free download latest version
    -fifa 2022 mobile apk mod unlimited money and coins
    -download fifa 2022 mobile apk + obb data file
    -fifa 2022 mobile apk offline mode download
    -fifa 2022 mobile apk full unlocked all features
    -download fifa 2022 mobile apk from aptoide
    -fifa 2022 mobile apk download for pc windows 10
    -fifa 2022 mobile apk download link direct
    -download fifa 2022 mobile apk from google play store
    -fifa 2022 mobile apk download size and requirements
    -fifa 2022 mobile apk gameplay and review
    -fifa 2022 mobile apk best settings and tips
    -download fifa 2022 mobile apk with world cup mode
    -fifa 2022 mobile apk update and patch notes
    -fifa 2022 mobile apk hack and cheats tool
    -download fifa 2022 mobile apk for ios iphone ipad
    -fifa 2022 mobile apk compatible devices and models
    -fifa 2022 mobile apk error and fix guide
    -download fifa 2022 mobile apk from apkpure
    -fifa 2022 mobile apk new features and improvements
    -fifa 2022 mobile apk ratings and feedbacks
    -download fifa 2022 mobile apk with manager mode
    -fifa 2022 mobile apk squad builder and transfer market
    -download fifa 2022 mobile apk from mediafire
    -fifa 2022 mobile apk online and offline modes
    -fifa 2022 mobile apk graphics and sound quality
    -download fifa 2022 mobile apk with icons and heroes
    -fifa 2022 mobile apk tournaments and events schedule
    -download fifa 2022 mobile apk from mega.nz
    -fifa 2022 mobile apk license and verification code
    -fifa 2022 mobile apk support and contact information
    -download fifa 2022 mobile apk with vs attack mode
    -fifa 2022 mobile apk rewards and achievements system
    -download fifa 2022 mobile apk from dropbox
    -fifa 2022 mobile apk challenges and missions list
    -fifa 2022 mobile apk controls and customization options
    -download fifa 2022 mobile apk with head-to-head mode
    -fifa 2022 mobile apk news and updates blog

    -

    FIFA World Cup 2022 Mode

    -

    One of the most exciting features of FIFA 2022 Mobile APK is the FIFA World Cup 2022 mode, which lets you relive the world's greatest soccer tournament with any of the 32 qualified nations. You can choose your favorite team and lead them to glory in the group stage, knockout stage, and final. You can also play against other players from around the world in online matches and tournaments. You can earn rewards and unlock exclusive items as you progress through the mode.

    -

    Soccer Icons and Heroes

    -

    FIFA 2022 Mobile APK also lets you build your dream team with over 15,000 authentic soccer stars, including world-class talent like Kylian Mbappé, Christian Pulisic, Vinicius Jr and Son Heung-min. You can also recruit legendary players from the past, such as Pelé, Maradona, Zidane, Ronaldo, and Messi. You can customize your team with different formations, tactics, kits, badges, and stadiums. You can also upgrade your players with skill boosts and chemistry links.

    -

    Immersive Next-Level Soccer Simulation

    -

    FIFA 2022 Mobile APK also delivers an immersive next-level soccer simulation with realistic stadium SFX and live on-field audio commentary. You can experience the thrill of scoring amazing goals, making stunning saves, and performing skill moves with intuitive touch controls. You can also enjoy stunning graphics and animations that bring the game to life on your screen. You can also adjust the camera angle and zoom level to suit your preference.

    -

    Manager Mode

    -

    FIFA 2022 Mobile APK also offers a manager mode, where you can take charge of a soccer club and lead them to success. You can sign new players, sell unwanted ones, negotiate contracts, scout for talent, train your squad, and manage your budget. You can also compete in various leagues and cups, such as the Premier League, La Liga, Bundesliga, Serie A, Champions League, Europa League, and more. You can also challenge other managers from around the world in online matches and tournaments.

    System Requirements for FIFA 2022 Mobile APK

    -

    Before you download and install FIFA 2022 Mobile APK on your device, you should make sure that your device meets the minimum system requirements for the game. Here are the system requirements for FIFA 2022 Mobile APK:

    -

    iOS Devices

    -

    If you have an iOS device, you need to have iOS 11 or later and at least 1.5 GB of free storage space. You also need to have one of the following devices:

    -
      -
    • iPhone 6s or later
    • -
    • iPad Air 2 or later
    • -
    • iPad mini 4 or later
    • -
    • iPad Pro or later
    • -
    • iPod touch (7th generation) or later
    • -
    -

    Android Devices

    -

    If you have an Android device, you need to have Android 6.0 or later and at least 1.5 GB of free storage space. You also need to have a device that has a minimum of 2 GB of RAM and supports OpenGL ES 3.0. You can check the compatibility of your device by visiting the official website of FIFA Mobile.

    -

    How to Download and Install FIFA 2022 Mobile APK

    -

    Now that you know the features and system requirements of FIFA 2022 Mobile APK, you are ready to download and install it on your device. Here are the steps that you need to follow:

    -

    Step 1: Enable Unknown Sources

    -

    The first step is to enable unknown sources on your device, which will allow you to install APK files from third-party websites. To do this, you need to go to your device settings and look for the security or privacy option. Then, you need to find the unknown sources option and toggle it on. You may see a warning message that tells you about the risks of installing apps from unknown sources, but you can ignore it and proceed.

    -

    Step 2: Download the APK File

    -

    The next step is to download the APK file of FIFA 2022 Mobile from a reliable and trusted website that hosts APK files. You can search for FIFA 2022 Mobile APK on Google or use the link provided below. Once you find the website, you need to click on the download button and wait for the file to be downloaded on your device.

    -

    Download FIFA 2022 Mobile APK here

    -

    Step 3: Install the APK File

    -

    The third step is to install the APK file of FIFA 2022 Mobile on your device. To do this, you need to locate the file in your device storage and tap on it. You may see a pop-up window that asks you for permission to install the app, but you can grant it and continue. You may also see a progress bar that shows you how much time is left for the installation to complete.

    -

    Step 4: Launch the Game and Enjoy

    -

    The final step is to launch the game and enjoy playing FIFA 2022 Mobile on your device. To do this, you need to find the game icon on your home screen or app drawer and tap on it. You may see a loading screen that shows you some tips and tricks for the game, but you can skip it and start playing. You may also need to create an account or log in with your existing one to access all the features and modes of the game.

    Gameplay Tips and Tricks for FIFA 2022 Mobile APK

    -

    Now that you have downloaded and installed FIFA 2022 Mobile APK on your device, you may be wondering how to improve your gameplay and win more matches. Here are some tips and tricks that can help you become a better player of FIFA 2022 Mobile APK:

    -

    Build Your Ultimate Team with Star Players

    -

    One of the most important aspects of FIFA 2022 Mobile APK is building your ultimate team with star players. You can recruit players from different categories, such as base, rare, elite, master, legend, icon, and hero. You can also use different types of cards, such as player, skill, chemistry, and training cards. You can also buy and sell players on the market or use the scouting feature to find hidden gems.

    -

    You should try to build a balanced team with players who have high ratings and attributes in different areas, such as pace, shooting, passing, dribbling, defending, and physicality. You should also try to create chemistry links between players who share the same nationality, league, or club. This will boost their performance and give you an edge over your opponents.

    -

    Use Advanced Passing System to Create Chances

    -

    Another important aspect of FIFA 2022 Mobile APK is using the advanced passing system to create chances for scoring goals. You can use different types of passes, such as short, long, through, lobbed, driven, and curved passes. You can also use different gestures, such as tapping, swiping, holding, and dragging on the screen to control the direction, power, and curve of your passes.

    -

    You should try to use the right type of pass for the right situation and avoid making predictable or risky passes that can be intercepted by your opponents. You should also try to vary your passing patterns and tempo to confuse your opponents and create space for your attackers. You should also try to use the one-two pass or the give-and-go pass to create quick combinations and break through the defense.

    -

    Plan Your Strategy and Adjust Your Tactics in Real Time

    -

    The third important aspect of FIFA 2022 Mobile APK is planning your strategy and adjusting your tactics in real time. You can choose from different formations, such as 4-4-2, 4-3-3, 3-5-2, 5-3-2, and more. You can also choose from different styles of play, such as balanced, attacking, defensive, possession, counter-attack, and more. You can also customize your tactics by changing the roles and instructions of your players.

    -

    You should try to choose a formation and a style of play that suit your team's strengths and weaknesses and match your opponent's formation and style of play. You should also try to adjust your tactics during the game by using the quick tactics feature or by pausing the game and accessing the tactics menu. You should also try to exploit your opponent's weaknesses and counter their strengths by changing your formation or style of play accordingly.

    FIFA 2022 Mobile APK with other players online? -
  11. A: Yes, you can play FIFA 2022 Mobile APK with other players online in various modes, such as Head-to-Head, VS Attack, and Manager Mode. However, you may not be able to play with players who are using the official version of FIFA Mobile or a different version of FIFA 2022 Mobile APK.
  12. -
  13. Q: Can I transfer my progress or account from the official version of FIFA Mobile to FIFA 2022 Mobile APK?
  14. -
  15. A: No, you cannot transfer your progress or account from the official version of FIFA Mobile to FIFA 2022 Mobile APK. They are separate and independent games that have different servers and databases. You will have to start from scratch if you switch from one game to another.
  16. - -

    I hope this guide has helped you learn more about FIFA 2022 Mobile APK and how to download and install it on your device. If you have any feedback or suggestions, please let me know in the comments section below. Thank you for reading and happy gaming!

    197e85843d
    -
    -
    \ No newline at end of file diff --git a/spaces/fatiXbelha/sd/Fallout Shelter Online APK The Most Anticipated Fallout Game for Android.md b/spaces/fatiXbelha/sd/Fallout Shelter Online APK The Most Anticipated Fallout Game for Android.md deleted file mode 100644 index c0bfdde6b5b3490c5c9c7d7da34bb8e8ce079657..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Fallout Shelter Online APK The Most Anticipated Fallout Game for Android.md +++ /dev/null @@ -1,179 +0,0 @@ -
    -

    How to Download Fallout Shelter Online APK for Android

    -

    If you are a fan of the Fallout series, you might have heard of Fallout Shelter Online, a mobile game that lets you build and manage your own vault in the post-apocalyptic world. But did you know that you can download and play the game on your Android device, even if it is not officially available in your region? In this article, we will show you how to download Fallout Shelter Online APK, a file that allows you to install the game on your phone or tablet. We will also give you some tips and tricks to help you enjoy the game better.

    -

    What Is Fallout Shelter Online?

    -

    Fallout Shelter Online is a free-to-play mobile game developed by Shengqu Games, published by Bethesda Softworks, and distributed by Gaea Mobile. It is a sequel to Fallout Shelter, a popular simulation game that was released in 2015. However, Fallout Shelter Online also serves as a prequel and a sequel to Fallout 4 and Fallout 3, two of the main entries in the Fallout franchise. It features an original story, new gameplay elements, and online features that make it more than just a shelter-building game.

    -

    download fallout shelter online apk


    Download https://urllie.com/2uNGmG



    -

    A Sequel to Fallout Shelter

    -

    Like its predecessor, Fallout Shelter Online puts you in control of a state-of-the-art underground vault from Vault-Tec Corporation, a company that built shelters to protect people from nuclear war. Your job is to ensure the survival and happiness of your dwellers, who are the residents of your vault. You can do this by building various rooms, such as power plants, water treatment facilities, diners, gardens, living quarters, radio stations, medbays, laboratories, and more. You can also customize your vault design, assign dwellers to different jobs, provide them with outfits and weapons, train their skills, and match them for breeding.

    -

    A Prequel and a Sequel to Fallout 4 and Fallout 3

    -

    Unlike its predecessor, Fallout Shelter Online has a plot that connects it to the main Fallout games. The game is set in two locations: the Commonwealth and the Capital Wasteland. The Commonwealth is the setting of Fallout 4, while the Capital Wasteland is the setting of Fallout 3. The game takes place before and after the events of these games, respectively. You will encounter familiar characters from the Fallout series, such as Nick Valentine, Preston Garvey, Piper Wright, Dogmeat, Codsworth, Hancock, Cait, Curie, Paladin Danse, Deacon, MacCready, Strong, X6-88, Father Elijah, Sarah Lyons, Fawkes, Charon, Clover, Butch DeLoria, Moira Brown, Three Dog, Amata Almodovar, James (the Lone Wanderer's father), Colonel Autumn, John Henry Eden (the Enclave President), Liberty Prime (the Brotherhood of Steel's giant robot), Harold (the talking tree), and more. You will also learn more about the lore and history of the Fallout universe.

    -

    A Strategy RPG with Online Features

    -

    Fallout Shelter Online is not just a simulation game; it is also a strategy RPG with online features. You can recruit legendary heroes from the Fallout series, such as the Sole Survivor, the Lone Wanderer, the Courier, and more. You can equip them with various weapons, armors, and perks, and form a team of up to six members. You can then use them to explore the wasteland and battle enemies, such as raiders, super mutants, deathclaws, ghouls, robots, and more. You can also join a guild and participate in online events, such as raids, PvP battles, guild wars, and more. You can also chat with other players and trade items with them.

    -

    Why Download Fallout Shelter Online APK?

    -

    Fallout Shelter Online is a fun and addictive game that will appeal to both fans and newcomers of the Fallout series. However, there are some reasons why you might want to download the APK file instead of getting the game from the official sources. Here are some of them:

    -

    Enjoy the Game in English

    -

    Fallout Shelter Online was originally released in China in 2019, and then in other Asian countries in 2020. The game was only available in Chinese and other Asian languages, which made it difficult for English-speaking players to enjoy the game. However, in 2021, an unofficial English patch was released by a group of fans, which translated the game's interface, dialogue, and subtitles into English. By downloading the APK file with the English patch, you can play the game in English and understand the story and the gameplay better.

    -

    Access the Latest Version with New Content

    -

    Fallout Shelter Online is constantly updated with new content, such as new heroes, new quests, new events, new features, and more. However, these updates are not always available at the same time for different regions. Sometimes, some regions might get the updates earlier or later than others. By downloading the APK file from a reliable source, you can access the latest version of the game with all the new content as soon as possible.

    -

    Avoid Regional Restrictions

    -

    Fallout Shelter Online is not officially available in all regions of the world. Some regions might have access to the game through official channels, such as Google Play Store or App Store, while others might not. This means that some players might not be able to download or play the game at all. By downloading the APK file from a third-party source, you can bypass these regional restrictions and play the game wherever you are.

    -

    How to Download and Install Fallout Shelter Online APK?

    -

    Now that you know why you might want to download Fallout Shelter Online APK, let's see how you can do it. The process is not very complicated, but it does require some steps that you need to follow carefully. Here they are:

    -

    How to download fallout shelter online apk for android
    -Fallout shelter online apk latest version free download
    -Fallout shelter online apk mod unlimited money and resources
    -Fallout shelter online apk download for pc windows 10
    -Fallout shelter online apk gameplay and review
    -Fallout shelter online apk tips and tricks for beginners
    -Fallout shelter online apk best heroes and teams
    -Fallout shelter online apk vs fallout shelter original
    -Fallout shelter online apk download error and how to fix it
    -Fallout shelter online apk compatible devices and requirements
    -Fallout shelter online apk update and patch notes
    -Fallout shelter online apk cheats and hacks
    -Fallout shelter online apk guide and walkthrough
    -Fallout shelter online apk reddit and discord community
    -Fallout shelter online apk wiki and database
    -Fallout shelter online apk awards and achievements
    -Fallout shelter online apk customer service and support
    -Fallout shelter online apk official website and social media
    -Fallout shelter online apk alternatives and similar games
    -Fallout shelter online apk ratings and reviews from users
    -Download fallout shelter online apk from APKCombo[^1^]
    -Download fallout shelter online apk from APKPure[^2^]
    -Download fallout shelter online apk from APKMirror[^3^]
    -Download fallout shelter online apk from APKFab[^4^]
    -Download fallout shelter online apk from APKMonk[^5^]
    -Download fallout shelter online apk from APKSum[^6^]
    -Download fallout shelter online apk from APKHere
    -Download fallout shelter online apk from APKHome
    -Download fallout shelter online apk from APKDone
    -Download fallout shelter online apk from APK4Fun
    -Download fallout shelter: online apk for China
    -Download fallout shelter: online apk for Taiwan
    -Download fallout shelter: online apk for Korea
    -Download fallout shelter: online apk for Japan
    -Download fallout shelter: online apk for India
    -Download fallout shelter: online apk for Indonesia
    -Download fallout shelter: online apk for Malaysia
    -Download fallout shelter: online apk for Philippines
    -Download fallout shelter: online apk for Singapore
    -Download fallout shelter: online apk for Thailand

    -

    Step 1: Find a Reliable Source

    -

    The first thing you need to do is to find a reliable source that offers the APK file for Fallout Shelter Online. There are many websites that claim to provide APK files for various games and apps, but not all of them are trustworthy. Some of them might contain malware or viruses that can harm your device or steal your personal information. Some of them might also offer outdated or fake versions of the game that might not work properly or at all.

    -

    To avoid these risks, you need to do some research and find a reputable source that has positive reviews and feedback from other users. You can also check some online forums or communities that are dedicated to Fallout Shelter Online or APK files in general. You can ask for recommendations or suggestions from other players who have downloaded and installed the game successfully.

    -

    Step 2: Download the APK File

    -

    Once you have found a reliable source, you need to download the APK file for Fallout Shelter Online. The file size might vary depending on the version of the game and the source you choose, but it should be around 1 GB or more. Make sure you have enough storage space on your device before downloading the file.

    -

    To download the file, you need to click on the download link or button provided by the source. You might need to complete some verification steps or captcha tests before you can start the download. You might also need to wait for some time until the download is complete.

    -

    Step 3: Enable Unknown Sources

    -

    Before you can install the APK file on your device, you need to enable unknown sources on your device settings. This is because Android devices normally do not allow installing apps from sources other than Google Play Store or App Store for security reasons. However, since you are downloading an APK file from a third-party source, you need to enable unknown sources to allow installing it.

    -

    To enable unknown sources on your device settings, you need to follow these steps depending on your Android version and device model:

    -
      -
    • Go to your device settings and look for the security or privacy option.
    • -
    • Tap on it and look for the unknown sources or install unknown apps option.
    • -
    • Toggle it on or allow it for the browser or file manager app that you used to download the APK file.
    • -
    -

    If you are not sure how to enable unknown sources on your device, you can search online for specific instructions for your Android version and device model.

    -

    Step 4: Install the APK File

    -

    After you have enabled unknown sources on your device settings, you can proceed to install the APK file on your device. To do this, you need to follow these steps:

    -
      -
    • Locate the APK file on your device storage using a file manager app or the browser app that you used to download it.
    • -
    • Tap on the APK file and confirm the installation by tapping on install or allow.
    • -
    • Wait for the installation to finish. It might take some time depending on the file size and your device performance.
    • -
    -

    If you encounter any errors or issues during the installation, you might need to check if you have enough storage space, if you have enabled unknown sources, or if you have downloaded the correct and compatible version of the game for your device.

    -

    Step 5: Launch the Game and Enjoy

    -

    Once the installation is complete, you can launch the game and enjoy it on your device. You can find the game icon on your home screen or app drawer. Tap on it and wait for the game to load. You might need to accept some permissions or terms of service before you can start playing. You might also need to download some additional data or updates before you can access all the features of the game.

    -

    After that, you can create your account, choose your server, customize your character, and start building your vault. You can also connect your game account to your social media accounts, such as Facebook or Google, to save your progress and access other online features.

    -

    Tips and Tricks for Playing Fallout Shelter Online

    -

    Fallout Shelter Online is a fun and addictive game that will keep you entertained for hours. However, it can also be challenging and complex at times. To help you get the most out of the game, here are some tips and tricks that you should know:

    -

    Plan Your Vault Layout Wisely

    -

    Your vault layout is one of the most important aspects of the game. It determines how efficient and productive your vault is, as well as how safe and happy your dwellers are. You should plan your vault layout wisely and avoid making common mistakes, such as:

    -
      -
    • Building too many rooms too quickly. This will drain your resources and make it harder to maintain them.
    • -
    • Building rooms that are not connected to each other. This will make it harder for your dwellers to move around and work in them.
    • -
    • Building rooms that are not upgraded or merged. This will make them less efficient and more vulnerable to disasters.
    • -
    • Building rooms that are not balanced with each other. This will cause shortages or surpluses of resources and affect your dwellers' needs.
    • -
    -

    To avoid these mistakes, you should follow these guidelines:

    -
    • Build rooms gradually and according to your needs. Start with the basic rooms, such as power plants, water treatment facilities, diners, living quarters, and storage rooms. Then expand into more advanced rooms such as radio stations, medbays, laboratories, workshops, and training rooms.
    • Build rooms that are adjacent to each other and form a grid-like pattern. This makes it easier for your dwellers to reach them and work in them. You can also use elevators to connect the different floors of your vault.
    • Upgrade and merge rooms whenever possible. This makes them more efficient and productive, as well as more resistant to disasters. You can upgrade rooms by spending caps (the in-game currency) or merge them by building two or three identical rooms next to each other.
    • Build rooms that are balanced with each other and with your dwellers' needs: enough power plants to supply every room, enough water treatment facilities and diners to keep every dweller supplied, enough living quarters to house everyone, and enough storage rooms for your resources and items. A quick way to sanity-check that balance is sketched below.
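
    As a rough illustration of that balance check, here is a minimal sketch that compares per-room production against per-dweller consumption. The rates are invented placeholders rather than values taken from the game; the point is simply that production of every resource should stay at or above what your population consumes.

    # Illustrative only: the production and consumption rates are invented placeholders,
    # not actual Fallout Shelter Online values.
    PRODUCTION_PER_ROOM = {"power": 14, "water": 10, "food": 10}
    CONSUMPTION_PER_DWELLER = {"power": 0.6, "water": 0.5, "food": 0.5}

    def vault_balance(rooms: dict, dwellers: int) -> dict:
        """Return the surplus (positive) or shortage (negative) of each resource."""
        report = {}
        for resource, per_room in PRODUCTION_PER_ROOM.items():
            produced = rooms.get(resource, 0) * per_room
            consumed = dwellers * CONSUMPTION_PER_DWELLER[resource]
            report[resource] = produced - consumed
        return report

    # Example: 3 power plants, 2 water treatment rooms, 2 diners, and 25 dwellers.
    print(vault_balance({"power": 3, "water": 2, "food": 2}, dwellers=25))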

    Assign Dwellers to Their Best Roles

    -

    Your dwellers are the backbone of your vault. They are the ones who work in your rooms, produce resources, explore the wasteland, fight enemies, and more. You should assign them to their best roles and keep them happy and productive. You can do this by following these tips:

    -
    • Check your dwellers' stats and skills. Each dweller has six stats: strength, perception, endurance, charisma, intelligence, and agility, and each stat corresponds to a specific room type (power plants, water treatment facilities, diners, radio stations, medbays, laboratories, and so on). Each dweller also has a skill level from 1 to 10 in each stat. Assign dwellers to the rooms that match their highest stat and skill level; they will work faster and more efficiently, and their happiness will increase. A simple version of this assignment rule is sketched after this list.
    • Check your dwellers' traits and preferences. Each dweller has a unique trait that gives them a bonus or a penalty in certain situations; for example, some dwellers are brave, lucky, or charismatic, or are natural wasteland explorers. Some dwellers also have preferences for certain outfits, weapons, or roles. Assigning dwellers to roles that suit their traits and preferences makes them perform better and stay happier.
    • Check your dwellers' needs and moods. Each dweller has four needs: food, water, health, and radiation, each shown as a bar that fills or depletes over time. Keep those bars full by providing enough food, water, health kits, and radaway. Moods are shown as smiley faces above dwellers' heads: a green smiley face means happy, a red frowny face means unhappy. Keep dwellers happy by assigning them to their best roles, keeping resources stocked, giving them rewards or bonuses, or sending them out to explore the wasteland.
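
    To make the first tip concrete, here is a minimal sketch of the "highest stat wins" assignment rule. The stat-to-room mapping below simply follows the order in which the stats and rooms are listed above and is an assumption for illustration, not necessarily the game's exact mapping.

    # Assumed stat-to-room mapping, used only for illustration; the real game may differ.
    STAT_TO_ROOM = {
        "strength": "power plant",
        "perception": "water treatment facility",
        "endurance": "diner",
        "charisma": "radio station",
        "intelligence": "medbay",
        "agility": "laboratory",
    }

    def best_room(stats: dict) -> str:
        """Pick the room matching the dweller's highest stat (ties go to the first stat listed)."""
        top_stat = max(STAT_TO_ROOM, key=lambda stat: stats.get(stat, 0))
        return STAT_TO_ROOM[top_stat]

    # Example dweller: strong and agile, so the power plant wins.
    print(best_room({"strength": 8, "perception": 3, "endurance": 5,
                     "charisma": 2, "intelligence": 4, "agility": 7}))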

    Recruit Legendary Heroes from the Fallout Series

    -

    One of the most exciting features of Fallout Shelter Online is that you can recruit legendary heroes from the Fallout series to join your vault. These heroes are special dwellers who have unique abilities and skills that can help you in various ways. You can use them to explore the wasteland and battle enemies, as well as assign them to your rooms to boost their performance.

    -

    To recruit legendary heroes, you need to collect their hero cards, which are rare items that can be obtained from various sources. Some of the sources are:

    -
    • The hero store, where you can buy hero cards with hero tokens or gems (the premium currency).
    • The hero gacha machine, where you can draw hero cards with gacha tickets or gems.
    • The hero recruitment center, where you can exchange hero fragments for hero cards.
    • The hero missions, where you can complete tasks and challenges to earn hero cards.
    • The hero events, where you can participate in limited-time activities and promotions to get hero cards.

    The legendary heroes you can recruit include many familiar characters from across the Fallout series, each with their own unique abilities and skills.

    ").appendTo(i),n=i.uniqueId().attr("id");return this._addClass(s,"ui-tooltip-content"),this._addClass(i,"ui-tooltip","ui-widget ui-widget-content"),i.appendTo(this._appendTo(e)),this.tooltips[n]={element:e,tooltip:i}},_find:function(t){var e=t.data("ui-tooltip-id");return e?this.tooltips[e]:null},_removeTooltip:function(t){t.remove(),delete this.tooltips[t.attr("id")]},_appendTo:function(t){var e=t.closest(".ui-front, dialog");return e.length||(e=this.document[0].body),e},_destroy:function(){var e=this;t.each(this.tooltips,function(i,s){var n=t.Event("blur"),o=s.element;n.target=n.currentTarget=o[0],e.close(n,!0),t("#"+i).remove(),o.data("ui-tooltip-title")&&(o.attr("title")||o.attr("title",o.data("ui-tooltip-title")),o.removeData("ui-tooltip-title"))}),this.liveRegion.remove()}}),t.uiBackCompat!==!1&&t.widget("ui.tooltip",t.ui.tooltip,{options:{tooltipClass:null},_tooltip:function(){var t=this._superApply(arguments);return this.options.tooltipClass&&t.tooltip.addClass(this.options.tooltipClass),t}}),t.ui.tooltip}); \ No newline at end of file diff --git a/spaces/fb700/chatglm-fitness-RLHF/theme.py b/spaces/fb700/chatglm-fitness-RLHF/theme.py deleted file mode 100644 index 5ef7e9605896dbdddcaea09e7d804baf3f5696cf..0000000000000000000000000000000000000000 --- a/spaces/fb700/chatglm-fitness-RLHF/theme.py +++ /dev/null @@ -1,353 +0,0 @@ -import gradio as gr -from toolbox import get_conf -CODE_HIGHLIGHT, ADD_WAIFU = get_conf('CODE_HIGHLIGHT', 'ADD_WAIFU') -# gradio可用颜色列表 -# gr.themes.utils.colors.slate (石板色) -# gr.themes.utils.colors.gray (灰色) -# gr.themes.utils.colors.zinc (锌色) -# gr.themes.utils.colors.neutral (中性色) -# gr.themes.utils.colors.stone (石头色) -# gr.themes.utils.colors.red (红色) -# gr.themes.utils.colors.orange (橙色) -# gr.themes.utils.colors.amber (琥珀色) -# gr.themes.utils.colors.yellow (黄色) -# gr.themes.utils.colors.lime (酸橙色) -# gr.themes.utils.colors.green (绿色) -# gr.themes.utils.colors.emerald (祖母绿) -# gr.themes.utils.colors.teal (青蓝色) -# gr.themes.utils.colors.cyan (青色) -# gr.themes.utils.colors.sky (天蓝色) -# gr.themes.utils.colors.blue (蓝色) -# gr.themes.utils.colors.indigo (靛蓝色) -# gr.themes.utils.colors.violet (紫罗兰色) -# gr.themes.utils.colors.purple (紫色) -# gr.themes.utils.colors.fuchsia (洋红色) -# gr.themes.utils.colors.pink (粉红色) -# gr.themes.utils.colors.rose (玫瑰色) - - -def adjust_theme(): - - try: - color_er = gr.themes.utils.colors.fuchsia - set_theme = gr.themes.Default( - primary_hue=gr.themes.utils.colors.orange, - neutral_hue=gr.themes.utils.colors.gray, - font=["sans-serif", "Microsoft YaHei", "ui-sans-serif", "system-ui", - "sans-serif", gr.themes.utils.fonts.GoogleFont("Source Sans Pro")], - font_mono=["ui-monospace", "Consolas", "monospace", gr.themes.utils.fonts.GoogleFont("IBM Plex Mono")]) - set_theme.set( - # Colors - input_background_fill_dark="*neutral_800", - # Transition - button_transition="none", - # Shadows - button_shadow="*shadow_drop", - button_shadow_hover="*shadow_drop_lg", - button_shadow_active="*shadow_inset", - input_shadow="0 0 0 *shadow_spread transparent, *shadow_inset", - input_shadow_focus="0 0 0 *shadow_spread *secondary_50, *shadow_inset", - input_shadow_focus_dark="0 0 0 *shadow_spread *neutral_700, *shadow_inset", - checkbox_label_shadow="*shadow_drop", - block_shadow="*shadow_drop", - form_gap_width="1px", - # Button borders - input_border_width="1px", - input_background_fill="white", - # Gradients - stat_background_fill="linear-gradient(to right, *primary_400, *primary_200)", - stat_background_fill_dark="linear-gradient(to right, 
*primary_400, *primary_600)", - error_background_fill=f"linear-gradient(to right, {color_er.c100}, *background_fill_secondary)", - error_background_fill_dark="*background_fill_primary", - checkbox_label_background_fill="linear-gradient(to top, *neutral_50, white)", - checkbox_label_background_fill_dark="linear-gradient(to top, *neutral_900, *neutral_800)", - checkbox_label_background_fill_hover="linear-gradient(to top, *neutral_100, white)", - checkbox_label_background_fill_hover_dark="linear-gradient(to top, *neutral_900, *neutral_800)", - button_primary_background_fill="linear-gradient(to bottom right, *primary_100, *primary_300)", - button_primary_background_fill_dark="linear-gradient(to bottom right, *primary_500, *primary_600)", - button_primary_background_fill_hover="linear-gradient(to bottom right, *primary_100, *primary_200)", - button_primary_background_fill_hover_dark="linear-gradient(to bottom right, *primary_500, *primary_500)", - button_primary_border_color_dark="*primary_500", - button_secondary_background_fill="linear-gradient(to bottom right, *neutral_100, *neutral_200)", - button_secondary_background_fill_dark="linear-gradient(to bottom right, *neutral_600, *neutral_700)", - button_secondary_background_fill_hover="linear-gradient(to bottom right, *neutral_100, *neutral_100)", - button_secondary_background_fill_hover_dark="linear-gradient(to bottom right, *neutral_600, *neutral_600)", - button_cancel_background_fill=f"linear-gradient(to bottom right, {color_er.c100}, {color_er.c200})", - button_cancel_background_fill_dark=f"linear-gradient(to bottom right, {color_er.c600}, {color_er.c700})", - button_cancel_background_fill_hover=f"linear-gradient(to bottom right, {color_er.c100}, {color_er.c100})", - button_cancel_background_fill_hover_dark=f"linear-gradient(to bottom right, {color_er.c600}, {color_er.c600})", - button_cancel_border_color=color_er.c200, - button_cancel_border_color_dark=color_er.c600, - button_cancel_text_color=color_er.c600, - button_cancel_text_color_dark="white", - ) - - # 添加一个萌萌的看板娘 - if ADD_WAIFU: - js = """ - - - - """ - gradio_original_template_fn = gr.routes.templates.TemplateResponse - def gradio_new_template_fn(*args, **kwargs): - res = gradio_original_template_fn(*args, **kwargs) - res.body = res.body.replace(b'', f'{js}'.encode("utf8")) - res.init_headers() - return res - gr.routes.templates.TemplateResponse = gradio_new_template_fn # override gradio template - except: - set_theme = None - print('gradio版本较旧, 不能自定义字体和颜色') - return set_theme - - -advanced_css = """ -.markdown-body table { - margin: 1em 0; - border-collapse: collapse; - empty-cells: show; -} - -.markdown-body th, .markdown-body td { - border: 1.2px solid var(--border-color-primary); - padding: 5px; -} - -.markdown-body thead { - background-color: rgba(175,184,193,0.2); -} - -.markdown-body thead th { - padding: .5em .2em; -} - -.markdown-body ol, .markdown-body ul { - padding-inline-start: 2em !important; -} - -/* chat box. 
*/ -[class *= "message"] { - border-radius: var(--radius-xl) !important; - /* padding: var(--spacing-xl) !important; */ - /* font-size: var(--text-md) !important; */ - /* line-height: var(--line-md) !important; */ - /* min-height: calc(var(--text-md)*var(--line-md) + 2*var(--spacing-xl)); */ - /* min-width: calc(var(--text-md)*var(--line-md) + 2*var(--spacing-xl)); */ -} -[data-testid = "bot"] { - max-width: 95%; - /* width: auto !important; */ - border-bottom-left-radius: 0 !important; -} -[data-testid = "user"] { - max-width: 100%; - /* width: auto !important; */ - border-bottom-right-radius: 0 !important; -} - -/* linein code block. */ -.markdown-body code { - display: inline; - white-space: break-spaces; - border-radius: 6px; - margin: 0 2px 0 2px; - padding: .2em .4em .1em .4em; - background-color: rgba(13, 17, 23, 0.95); - color: #c9d1d9; -} - -.dark .markdown-body code { - display: inline; - white-space: break-spaces; - border-radius: 6px; - margin: 0 2px 0 2px; - padding: .2em .4em .1em .4em; - background-color: rgba(175,184,193,0.2); -} - -/* code block css */ -.markdown-body pre code { - display: block; - overflow: auto; - white-space: pre; - background-color: rgba(13, 17, 23, 0.95); - border-radius: 10px; - padding: 1em; - margin: 1em 2em 1em 0.5em; -} - -.dark .markdown-body pre code { - display: block; - overflow: auto; - white-space: pre; - background-color: rgba(175,184,193,0.2); - border-radius: 10px; - padding: 1em; - margin: 1em 2em 1em 0.5em; -} - -""" - -if CODE_HIGHLIGHT: - advanced_css += """ - -.codehilite .hll { background-color: #6e7681 } -.codehilite .c { color: #8b949e; font-style: italic } /* Comment */ -.codehilite .err { color: #f85149 } /* Error */ -.codehilite .esc { color: #c9d1d9 } /* Escape */ -.codehilite .g { color: #c9d1d9 } /* Generic */ -.codehilite .k { color: #ff7b72 } /* Keyword */ -.codehilite .l { color: #a5d6ff } /* Literal */ -.codehilite .n { color: #c9d1d9 } /* Name */ -.codehilite .o { color: #ff7b72; font-weight: bold } /* Operator */ -.codehilite .x { color: #c9d1d9 } /* Other */ -.codehilite .p { color: #c9d1d9 } /* Punctuation */ -.codehilite .ch { color: #8b949e; font-style: italic } /* Comment.Hashbang */ -.codehilite .cm { color: #8b949e; font-style: italic } /* Comment.Multiline */ -.codehilite .cp { color: #8b949e; font-weight: bold; font-style: italic } /* Comment.Preproc */ -.codehilite .cpf { color: #8b949e; font-style: italic } /* Comment.PreprocFile */ -.codehilite .c1 { color: #8b949e; font-style: italic } /* Comment.Single */ -.codehilite .cs { color: #8b949e; font-weight: bold; font-style: italic } /* Comment.Special */ -.codehilite .gd { color: #ffa198; background-color: #490202 } /* Generic.Deleted */ -.codehilite .ge { color: #c9d1d9; font-style: italic } /* Generic.Emph */ -.codehilite .gr { color: #ffa198 } /* Generic.Error */ -.codehilite .gh { color: #79c0ff; font-weight: bold } /* Generic.Heading */ -.codehilite .gi { color: #56d364; background-color: #0f5323 } /* Generic.Inserted */ -.codehilite .go { color: #8b949e } /* Generic.Output */ -.codehilite .gp { color: #8b949e } /* Generic.Prompt */ -.codehilite .gs { color: #c9d1d9; font-weight: bold } /* Generic.Strong */ -.codehilite .gu { color: #79c0ff } /* Generic.Subheading */ -.codehilite .gt { color: #ff7b72 } /* Generic.Traceback */ -.codehilite .g-Underline { color: #c9d1d9; text-decoration: underline } /* Generic.Underline */ -.codehilite .kc { color: #79c0ff } /* Keyword.Constant */ -.codehilite .kd { color: #ff7b72 } /* Keyword.Declaration */ -.codehilite 
.kn { color: #ff7b72 } /* Keyword.Namespace */ -.codehilite .kp { color: #79c0ff } /* Keyword.Pseudo */ -.codehilite .kr { color: #ff7b72 } /* Keyword.Reserved */ -.codehilite .kt { color: #ff7b72 } /* Keyword.Type */ -.codehilite .ld { color: #79c0ff } /* Literal.Date */ -.codehilite .m { color: #a5d6ff } /* Literal.Number */ -.codehilite .s { color: #a5d6ff } /* Literal.String */ -.codehilite .na { color: #c9d1d9 } /* Name.Attribute */ -.codehilite .nb { color: #c9d1d9 } /* Name.Builtin */ -.codehilite .nc { color: #f0883e; font-weight: bold } /* Name.Class */ -.codehilite .no { color: #79c0ff; font-weight: bold } /* Name.Constant */ -.codehilite .nd { color: #d2a8ff; font-weight: bold } /* Name.Decorator */ -.codehilite .ni { color: #ffa657 } /* Name.Entity */ -.codehilite .ne { color: #f0883e; font-weight: bold } /* Name.Exception */ -.codehilite .nf { color: #d2a8ff; font-weight: bold } /* Name.Function */ -.codehilite .nl { color: #79c0ff; font-weight: bold } /* Name.Label */ -.codehilite .nn { color: #ff7b72 } /* Name.Namespace */ -.codehilite .nx { color: #c9d1d9 } /* Name.Other */ -.codehilite .py { color: #79c0ff } /* Name.Property */ -.codehilite .nt { color: #7ee787 } /* Name.Tag */ -.codehilite .nv { color: #79c0ff } /* Name.Variable */ -.codehilite .ow { color: #ff7b72; font-weight: bold } /* Operator.Word */ -.codehilite .pm { color: #c9d1d9 } /* Punctuation.Marker */ -.codehilite .w { color: #6e7681 } /* Text.Whitespace */ -.codehilite .mb { color: #a5d6ff } /* Literal.Number.Bin */ -.codehilite .mf { color: #a5d6ff } /* Literal.Number.Float */ -.codehilite .mh { color: #a5d6ff } /* Literal.Number.Hex */ -.codehilite .mi { color: #a5d6ff } /* Literal.Number.Integer */ -.codehilite .mo { color: #a5d6ff } /* Literal.Number.Oct */ -.codehilite .sa { color: #79c0ff } /* Literal.String.Affix */ -.codehilite .sb { color: #a5d6ff } /* Literal.String.Backtick */ -.codehilite .sc { color: #a5d6ff } /* Literal.String.Char */ -.codehilite .dl { color: #79c0ff } /* Literal.String.Delimiter */ -.codehilite .sd { color: #a5d6ff } /* Literal.String.Doc */ -.codehilite .s2 { color: #a5d6ff } /* Literal.String.Double */ -.codehilite .se { color: #79c0ff } /* Literal.String.Escape */ -.codehilite .sh { color: #79c0ff } /* Literal.String.Heredoc */ -.codehilite .si { color: #a5d6ff } /* Literal.String.Interpol */ -.codehilite .sx { color: #a5d6ff } /* Literal.String.Other */ -.codehilite .sr { color: #79c0ff } /* Literal.String.Regex */ -.codehilite .s1 { color: #a5d6ff } /* Literal.String.Single */ -.codehilite .ss { color: #a5d6ff } /* Literal.String.Symbol */ -.codehilite .bp { color: #c9d1d9 } /* Name.Builtin.Pseudo */ -.codehilite .fm { color: #d2a8ff; font-weight: bold } /* Name.Function.Magic */ -.codehilite .vc { color: #79c0ff } /* Name.Variable.Class */ -.codehilite .vg { color: #79c0ff } /* Name.Variable.Global */ -.codehilite .vi { color: #79c0ff } /* Name.Variable.Instance */ -.codehilite .vm { color: #79c0ff } /* Name.Variable.Magic */ -.codehilite .il { color: #a5d6ff } /* Literal.Number.Integer.Long */ - -.dark .codehilite .hll { background-color: #2C3B41 } -.dark .codehilite .c { color: #79d618; font-style: italic } /* Comment */ -.dark .codehilite .err { color: #FF5370 } /* Error */ -.dark .codehilite .esc { color: #89DDFF } /* Escape */ -.dark .codehilite .g { color: #EEFFFF } /* Generic */ -.dark .codehilite .k { color: #BB80B3 } /* Keyword */ -.dark .codehilite .l { color: #C3E88D } /* Literal */ -.dark .codehilite .n { color: #EEFFFF } /* Name */ -.dark .codehilite .o { 
color: #89DDFF } /* Operator */ -.dark .codehilite .p { color: #89DDFF } /* Punctuation */ -.dark .codehilite .ch { color: #79d618; font-style: italic } /* Comment.Hashbang */ -.dark .codehilite .cm { color: #79d618; font-style: italic } /* Comment.Multiline */ -.dark .codehilite .cp { color: #79d618; font-style: italic } /* Comment.Preproc */ -.dark .codehilite .cpf { color: #79d618; font-style: italic } /* Comment.PreprocFile */ -.dark .codehilite .c1 { color: #79d618; font-style: italic } /* Comment.Single */ -.dark .codehilite .cs { color: #79d618; font-style: italic } /* Comment.Special */ -.dark .codehilite .gd { color: #FF5370 } /* Generic.Deleted */ -.dark .codehilite .ge { color: #89DDFF } /* Generic.Emph */ -.dark .codehilite .gr { color: #FF5370 } /* Generic.Error */ -.dark .codehilite .gh { color: #C3E88D } /* Generic.Heading */ -.dark .codehilite .gi { color: #C3E88D } /* Generic.Inserted */ -.dark .codehilite .go { color: #79d618 } /* Generic.Output */ -.dark .codehilite .gp { color: #FFCB6B } /* Generic.Prompt */ -.dark .codehilite .gs { color: #FF5370 } /* Generic.Strong */ -.dark .codehilite .gu { color: #89DDFF } /* Generic.Subheading */ -.dark .codehilite .gt { color: #FF5370 } /* Generic.Traceback */ -.dark .codehilite .kc { color: #89DDFF } /* Keyword.Constant */ -.dark .codehilite .kd { color: #BB80B3 } /* Keyword.Declaration */ -.dark .codehilite .kn { color: #89DDFF; font-style: italic } /* Keyword.Namespace */ -.dark .codehilite .kp { color: #89DDFF } /* Keyword.Pseudo */ -.dark .codehilite .kr { color: #BB80B3 } /* Keyword.Reserved */ -.dark .codehilite .kt { color: #BB80B3 } /* Keyword.Type */ -.dark .codehilite .ld { color: #C3E88D } /* Literal.Date */ -.dark .codehilite .m { color: #F78C6C } /* Literal.Number */ -.dark .codehilite .s { color: #C3E88D } /* Literal.String */ -.dark .codehilite .na { color: #BB80B3 } /* Name.Attribute */ -.dark .codehilite .nb { color: #82AAFF } /* Name.Builtin */ -.dark .codehilite .nc { color: #FFCB6B } /* Name.Class */ -.dark .codehilite .no { color: #EEFFFF } /* Name.Constant */ -.dark .codehilite .nd { color: #82AAFF } /* Name.Decorator */ -.dark .codehilite .ni { color: #89DDFF } /* Name.Entity */ -.dark .codehilite .ne { color: #FFCB6B } /* Name.Exception */ -.dark .codehilite .nf { color: #82AAFF } /* Name.Function */ -.dark .codehilite .nl { color: #82AAFF } /* Name.Label */ -.dark .codehilite .nn { color: #FFCB6B } /* Name.Namespace */ -.dark .codehilite .nx { color: #EEFFFF } /* Name.Other */ -.dark .codehilite .py { color: #FFCB6B } /* Name.Property */ -.dark .codehilite .nt { color: #FF5370 } /* Name.Tag */ -.dark .codehilite .nv { color: #89DDFF } /* Name.Variable */ -.dark .codehilite .ow { color: #89DDFF; font-style: italic } /* Operator.Word */ -.dark .codehilite .pm { color: #89DDFF } /* Punctuation.Marker */ -.dark .codehilite .w { color: #EEFFFF } /* Text.Whitespace */ -.dark .codehilite .mb { color: #F78C6C } /* Literal.Number.Bin */ -.dark .codehilite .mf { color: #F78C6C } /* Literal.Number.Float */ -.dark .codehilite .mh { color: #F78C6C } /* Literal.Number.Hex */ -.dark .codehilite .mi { color: #F78C6C } /* Literal.Number.Integer */ -.dark .codehilite .mo { color: #F78C6C } /* Literal.Number.Oct */ -.dark .codehilite .sa { color: #BB80B3 } /* Literal.String.Affix */ -.dark .codehilite .sb { color: #C3E88D } /* Literal.String.Backtick */ -.dark .codehilite .sc { color: #C3E88D } /* Literal.String.Char */ -.dark .codehilite .dl { color: #EEFFFF } /* Literal.String.Delimiter */ -.dark .codehilite .sd { color: 
#79d618; font-style: italic } /* Literal.String.Doc */ -.dark .codehilite .s2 { color: #C3E88D } /* Literal.String.Double */ -.dark .codehilite .se { color: #EEFFFF } /* Literal.String.Escape */ -.dark .codehilite .sh { color: #C3E88D } /* Literal.String.Heredoc */ -.dark .codehilite .si { color: #89DDFF } /* Literal.String.Interpol */ -.dark .codehilite .sx { color: #C3E88D } /* Literal.String.Other */ -.dark .codehilite .sr { color: #89DDFF } /* Literal.String.Regex */ -.dark .codehilite .s1 { color: #C3E88D } /* Literal.String.Single */ -.dark .codehilite .ss { color: #89DDFF } /* Literal.String.Symbol */ -.dark .codehilite .bp { color: #89DDFF } /* Name.Builtin.Pseudo */ -.dark .codehilite .fm { color: #82AAFF } /* Name.Function.Magic */ -.dark .codehilite .vc { color: #89DDFF } /* Name.Variable.Class */ -.dark .codehilite .vg { color: #89DDFF } /* Name.Variable.Global */ -.dark .codehilite .vi { color: #89DDFF } /* Name.Variable.Instance */ -.dark .codehilite .vm { color: #82AAFF } /* Name.Variable.Magic */ -.dark .codehilite .il { color: #F78C6C } /* Literal.Number.Integer.Long */ - -""" diff --git a/spaces/fclong/summary/fengshen/examples/clue1.1/predict2submit/tnews_submit.py b/spaces/fclong/summary/fengshen/examples/clue1.1/predict2submit/tnews_submit.py deleted file mode 100644 index eada0476b270624af8c397afb7df70e4e24473b3..0000000000000000000000000000000000000000 --- a/spaces/fclong/summary/fengshen/examples/clue1.1/predict2submit/tnews_submit.py +++ /dev/null @@ -1,47 +0,0 @@ -import json -from tqdm import tqdm -import argparse - - -def save_data(data,file_path): - with open(file_path, 'w', encoding='utf8') as f: - for line in data: - json_data=json.dumps(line,ensure_ascii=False) - f.write(json_data+'\n') - -def submit(file_path): - id2label={"故事": "100", - "文化": "101", - "娱乐": "102", - "体育": "103", - "财经": "104", - "房产": "106", - "汽车": "107", - "教育": "108", - "科技": "109", - "军事": "110", - "旅游": "112", - "国际": "113", - "股票": "114", - "农业": "115", - "电竞": "116"} - - with open(file_path, 'r', encoding='utf8') as f: - lines = f.readlines() - result=[] - for line in tqdm(lines): - data = json.loads(line) - result.append({'id':data['id'],'label':id2label[data['choice'][data['label']]]}) - return result - - -if __name__=="__main__": - parser = argparse.ArgumentParser(description="train") - parser.add_argument("--data_path", type=str,default="") - parser.add_argument("--save_path", type=str,default="") - - args = parser.parse_args() - save_data(submit(args.data_path), args.save_path) - - - \ No newline at end of file diff --git a/spaces/fclong/summary/fengshen/examples/deepVAE/pretrain_deep_vae.sh b/spaces/fclong/summary/fengshen/examples/deepVAE/pretrain_deep_vae.sh deleted file mode 100644 index 29967a73689777dd2240bd5916c843f62913b5e3..0000000000000000000000000000000000000000 --- a/spaces/fclong/summary/fengshen/examples/deepVAE/pretrain_deep_vae.sh +++ /dev/null @@ -1,137 +0,0 @@ -#!/bin/bash - -#SBATCH --job-name=deep_vae_pretrain -#SBATCH --nodes=1 -#SBATCH --ntasks-per-node=1 -#SBATCH --cpus-per-task=32 # -#SBATCH --gres=gpu:1 # number of gpus -#SBATCH -o xxx/outputs/deep_vae/logs/slurm/%x-%j.log -#SBATCH -e xxx/outputs/deep_vae/logs/slurm/%x-%j.err -# SBATCH --requeue -# SBATCH --qos=preemptive - -set -x -e - -ulimit -s unlimited -echo "START TIME: $(date)" - -MASTER_ADDR=$(scontrol show hostnames $SLURM_JOB_NODELIST | head -n 1) -# export MASTER_ADDR=127.0.0.1 -export MASTER_PORT=$[RANDOM%10000+50000] - -MICRO_BATCH_SIZE=64 -ZERO_STAGE=0 - -ROOT_PATH=xxxx 
-config_json=${ROOT_PATH}/job_out/ds_config.json - -# Deepspeed figures out GAS dynamically from dynamic GBS via set_train_batch_size() -cat < $config_json -{ - "train_micro_batch_size_per_gpu": ${MICRO_BATCH_SIZE}, - "steps_per_print": 100, - "gradient_clipping": 1.0, - "zero_optimization": { - "stage": $ZERO_STAGE, - "contiguous_gradients": false, - "overlap_comm": true, - "reduce_scatter": true, - "reduce_bucket_size": 50000000, - "allgather_bucket_size": 500000000 - }, - "optimizer": { - "type": "Adam", - "params": { - "lr": 1e-5, - "betas": [ - 0.9, - 0.95 - ], - "eps": 1e-8, - "weight_decay": 1e-2 - } - }, - "scheduler": { - "type": "WarmupLR", - "params":{ - "warmup_min_lr": 5e-6, - "warmup_max_lr": 1e-5 - } - }, - "zero_allow_untested_optimizer": false, - "fp16": { - "enabled": false, - "loss_scale": 0, - "loss_scale_window": 1000, - "hysteresis": 2, - "min_loss_scale": 1 - }, - "activation_checkpointing": { - "partition_activations": false, - "contiguous_memory_optimization": false - }, - "wall_clock_breakdown": false -} -EOT -export PL_DEEPSPEED_CONFIG_PATH=$config_json -export TORCH_EXTENSIONS_DIR=~/tmp - -# NOTE both encoder and decoder use the same model -GPT2_MODEL_PATH=xxx -VAE_ARGS=" - --gpt2_model_path $GPT2_MODEL_PATH \ - --latent_dim 32 \ - --beta_kl_constraints_start 1e-5 \ - --beta_kl_constraints_stop 1. \ - --beta_n_cycles 40 \ -" - - -CHECKPOINT_SAVE_PATH=${ROOT_PATH}/checkpoints -MODEL_CHECKPOINT_ARGS="\ - --monitor val_recon_loss \ - --save_top_k 1 \ - --mode min \ - --every_n_train_steps 1000 \ - --save_weights_only True \ - --dirpath $CHECKPOINT_SAVE_PATH \ - --filename checkpoint-{epoch}-{step}-filenum_20_dim_32_beta_1e-5_1_zh_finance \ - " - -TRAINER_ARGS=" - --max_epochs 40 \ - --gpus 1 \ - --num_nodes 1 \ - --precision 16 \ - --val_check_interval 1000 \ - --learning_rate 5e-5 \ - --warmup_steps 10000 \ - --weight_decay 0.01 \ - --default_root_dir ${ROOT_PATH} \ - --log_every_n_steps 50 \ - --strategy deepspeed_stage_2 \ -" -# --strategy deepspeed_stage_2 \ - -# note we use wudao optimus instead of recreating a deepVAE dataset -DATA_ARGS=" - --train_batchsize $MICRO_BATCH_SIZE \ - --eval_batchsize $MICRO_BATCH_SIZE \ - --test_batchsize $MICRO_BATCH_SIZE \ - --num_workers 32 \ - --ds_name zh_finance -" -# --ds_name wudao_tdvae, ner_re_data, zh_finance -# --CVAE -SCRIPTS_PATH=xxx/fengshen/examples/pretrain_vae - -export CMD=" \ - $SCRIPTS_PATH/pretrain_deep_vae.py \ - $TRAINER_ARGS \ - $MODEL_CHECKPOINT_ARGS \ - $VAE_ARGS \ - $DATA_ARGS \ - " -# srun python $CMD -# python -m debugpy --listen 5678 --wait-for-client $CMD -python $CMD \ No newline at end of file diff --git a/spaces/fengmuxi/ChatGpt-Web/app/command.ts b/spaces/fengmuxi/ChatGpt-Web/app/command.ts deleted file mode 100644 index 919e94e53ee823d0c52fd28ac99779f134efee8c..0000000000000000000000000000000000000000 --- a/spaces/fengmuxi/ChatGpt-Web/app/command.ts +++ /dev/null @@ -1,28 +0,0 @@ -import { useSearchParams } from "react-router-dom"; - -type Command = (param: string) => void; -interface Commands { - fill?: Command; - submit?: Command; - mask?: Command; -} - -export function useCommand(commands: Commands = {}) { - const [searchParams, setSearchParams] = useSearchParams(); - - if (commands === undefined) return; - - let shouldUpdate = false; - searchParams.forEach((param, name) => { - const commandName = name as keyof Commands; - if (typeof commands[commandName] === "function") { - commands[commandName]!(param); - searchParams.delete(name); - shouldUpdate = true; - } - }); - - if (shouldUpdate) { 
- setSearchParams(searchParams); - } -} \ No newline at end of file diff --git a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Clash Mini il gioco di strategia in miniatura da scaricare ora.md b/spaces/feregVcuzo/sanity-test-midi/checkpoint/Clash Mini il gioco di strategia in miniatura da scaricare ora.md deleted file mode 100644 index 14d76dbeaf08787d86a33d62de69d8465933c605..0000000000000000000000000000000000000000 --- a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Clash Mini il gioco di strategia in miniatura da scaricare ora.md +++ /dev/null @@ -1,118 +0,0 @@ -
    -

    Clash Mini: A Fun and Strategy-Packed Board Game

    -

    Do you love the Clash universe and its characters? Do you enjoy playing board games with your friends? If you answered yes to both questions, then you will love Clash Mini, a new game from Supercell that combines both elements in a fun and exciting way.

    -

    clash mini download ultima versione


    DOWNLOAD >> https://gohhs.com/2uPvv5



    -

    Clash Mini is a game of choices, where you duel and rumble in a strategy-packed board game. You collect, summon, and upgrade your army of minis, which are adorable versions of your favorite Clash characters. You predict your opponent's moves and then assemble your winning strategy and formation. You watch your minis come to life and clash to be the last one standing.

    -

    Clash Mini is easy to learn but challenging to master. It offers a variety of modes, heroes, minis, abilities, skins, and boards to keep you entertained. It also has a vibrant community of players from around the world who are ready to challenge you.

    -

    In this article, we will show you how to play Clash Mini, how to download Clash Mini ultima versione (the latest version), and some tips and tricks to help you win more battles. Let's get started!

    -

    clash mini apk download latest version
    -clash mini beta download ultima versione
    -clash mini game download for android
    -clash mini download ios ultima versione
    -clash mini release date ultima versione
    -clash mini download pc ultima versione
    -clash mini strategy guide ultima versione
    -clash mini mod apk download latest version
    -clash mini tips and tricks ultima versione
    -clash mini review ultima versione
    -clash mini download link ultima versione
    -clash mini gameplay ultima versione
    -clash mini best minis ultima versione
    -clash mini hack download latest version
    -clash mini update ultima versione
    -clash mini cheats ultima versione
    -clash mini download free ultima versione
    -clash mini wiki ultima versione
    -clash mini codes ultima versione
    -clash mini online play ultima versione
    -clash mini download size ultima versione
    -clash mini characters ultima versione
    -clash mini skins ultima versione
    -clash mini ranks ultima versione
    -clash mini forum ultima versione
    -clash mini discord server ultima versione
    -clash mini how to play ultima versione
    -clash mini system requirements ultima versione
    -clash mini support ultima versione
    -clash mini news ultima versione
    -clash mini reddit ultima versione
    -clash mini trailer ultima versione
    -clash mini facebook page ultima versione
    -clash mini twitter account ultima versione
    -clash mini instagram profile ultima versione
    -clash mini youtube channel ultima versione
    -clash mini official website ultima versione
    -clash mini developer blog ultima versione
    -clash mini faq ultima versione
    -clash mini patch notes ultima versione
    -clash mini tournaments ultima versione
    -clash mini leagues ultima versione
    -clash mini clans ultima versione
    -clash mini events ultima versione
    -clash mini quests ultima versione
    -clash mini achievements ultima versione
    -clash mini shop ultima versione
    -clash mini in-app purchases ultima versione

    -

    How to Play Clash Mini

    -

    Clash Mini is a real-time auto battler game, which means that you don't control your minis directly during battle. Instead, you place them on the board before each round and let them fight automatically. Your goal is to eliminate all of your opponent's minis or have more health than them at the end of the battle.

    -

    The basics: Collect, summon, and upgrade your army of minis

    -

    You start each battle with a random set of minis that you can summon on the board. Each mini has a cost, a type (tank, melee, or ranged), an ability, and a star level. You can summon up to five minis per round, as long as you have enough gold. You earn gold by winning rounds, completing quests, or selling minis.

    -

    You can also upgrade your minis by combining three of the same kind. Upgraded minis have higher stats and stronger abilities. You can upgrade your minis during battle or in between rounds.

    -

    The modes: Duel and rumble in real-time battles

    -

    Clash Mini has two main modes that you can play: duel and rumble. In duel mode, you face one opponent in a best-of-five match. In rumble mode, you face seven other players in a free-for-all match. Both modes are fast-paced and last for less than five minutes.

    -

    You can play casually for fun or in ranked matches to increase your league standing. As you progress through the leagues, you will unlock new minis, abilities, skins, boards, and rewards.

    -

    The strategy: Predict your opponent's moves and assemble your winning formation

    -

    Clash Mini is not just about luck or having the strongest minis. It is also about strategy and prediction. You need to anticipate what your opponent will do and counter their moves. You need to balance your offense and defense, and use your abilities wisely. You need to adapt to the changing board and the different tiles that have different effects.

    -

    There is no one best formation or strategy for Clash Mini. It depends on your minis, your opponent's minis, the board, and the mode. You have to experiment and find what works best for you. However, here are some general tips to help you improve your game:

    -
      -
    • Pay attention to the type of your minis and your opponent's minis. Tanks are good at absorbing damage and protecting other minis. Melee minis are good at dealing damage up close and disrupting enemy formations. Ranged minis are good at dealing damage from afar and supporting other minis.
    • Pay attention to the ability of your minis and your opponent's minis. Some abilities are passive, which means they activate automatically during battle. Some abilities are active, which means they require a charge to activate. Some abilities are triggered by certain conditions, such as health, position, or number of minis.
    • Pay attention to the board and the tiles. Some tiles have positive effects, such as healing, boosting, or shielding your minis. Some tiles have negative effects, such as damaging, stunning, or slowing your minis. Some tiles have neutral effects, such as teleporting, swapping, or rotating your minis.
    • Pay attention to the mode and the round. In duel mode, you have to win three rounds out of five to win the match. In rumble mode, you have to survive until the end or have the most health to win the match. Each round has a different board and a different number of gold.
    -

    How to Download Clash Mini Ultima Versione

    -

    If you are interested in playing Clash Mini, you might want to download Clash Mini ultima versione (the latest version). This way, you can enjoy the most recent features and updates of the game, such as new minis, abilities, skins, boards, modes, events, and bug fixes.

    -

    Downloading Clash Mini ultima versione is easy and free. All you need is an Android device and a Google Play account. Here are the steps to follow:

    -
      -
    1. Go to the Google Play store on your Android device or visit this link: https://play.google.com/store/apps/details?id=com.supercell.clashmini
    2. Search for Clash Mini or tap on the link above.
    3. Tap on the Install button and wait for the download to finish.
    4. Tap on the Open button and enjoy playing Clash Mini!
    -

    If you already have Clash Mini installed on your device, you can check for updates by following these steps:

    -
      -
    1. Go to the Google Play store on your Android device or visit this link: https://play.google.com/store/apps/updates
    2. Search for Clash Mini or tap on the link above.
    3. Tap on the Update button and wait for the download to finish.
    4. Tap on the Open button and enjoy playing Clash Mini!
    -

    Tips and Tricks for Clash Mini

    -

    Now that you know how to play Clash Mini and how to download Clash Mini ultima versione, you might want some tips and tricks to help you win more battles and have more fun. Here are some of them:

    -

    The heroes: Choose from iconic Clash characters and customize them with skins

    -

    In Clash Mini, you can choose from eight heroes that represent iconic Clash characters, such as Barbarian King, Archer Queen, Grand Warden, Royal Champion, Goblin King, Ice Queen, Firecracker Queen, and Electro Dragon King. Each hero has a unique ability that can turn the tide of battle.

    -

    You can also customize your hero with skins that change their appearance and give them special effects. You can unlock skins by progressing through the leagues or by purchasing them with gems.

    -

    The minis: Learn the strengths and weaknesses of each mini and upgrade them during battle

    -

    In Clash Mini, you can collect over 40 minis that represent adorable versions of your favorite Clash characters, such as Barbarian, Archer, Giant, Wizard, Hog Rider, P.E.K.K.A, and many more. Each mini has a cost, a type, an ability, and a star level. You can upgrade your minis by combining three of the same kind.

    -

    You should learn the strengths and weaknesses of each mini and how they interact with each other. For example, some minis are good at dealing damage to multiple enemies, such as Bomber, Valkyrie, or Baby Dragon. Some minis are good at targeting specific enemies, such as Miner, Balloon, or Lava Hound. Some minis have special effects that can help or hinder your team, such as Healer, Witch, or Ice Wizard.

    -

    You should also upgrade your minis during battle to make them stronger and more effective. You can do this by spending gold or using abilities that can upgrade your minis automatically.

    -

    The board: Use the different tiles and positions to your advantage

    -

    In Clash Mini, you can play on different boards that have different tiles and positions. Each tile has a different effect that can affect your minis or your opponent's minis. For example, some tiles can heal, boost, or shield your minis. Some tiles can damage, stun, or slow your opponent's minis. Some tiles can teleport, swap, or rotate your minis or your opponent's minis.

    -

    You should use the different tiles and positions to your advantage and avoid giving your opponent an edge. For example, you can place your tank minis on healing tiles to make them more durable. You can place your ranged minis on boosting tiles to make them more powerful. You can place your melee minis on stunning tiles to disrupt your opponent's formation.

    -

    You should also pay attention to the position of your minis and your opponent's minis on the board. For example, you can place your minis in the front row to attack first or in the back row to attack last. You can place your minis in the center to target multiple enemies or in the corners to target specific enemies. You can place your minis close together to benefit from abilities that affect nearby allies or far apart to avoid abilities that affect nearby enemies.

    -

    Conclusion

    -

    Clash Mini is a fun and strategy-packed board game that you can play on your Android device. It is a game of choices, where you collect, summon, and upgrade your army of minis and duel and rumble in real-time battles. It is a game of strategy and prediction, where you anticipate your opponent's moves and assemble your winning formation. It is a game of variety and customization, where you choose from iconic Clash characters and customize them with skins.

    -

    If you want to play Clash Mini, you can download Clash Mini ultima versione (the latest version) from the Google Play store for free. This way, you can enjoy the most recent features and updates of the game. You can also follow some tips and tricks to help you win more battles and have more fun.

    -

    What are you waiting for? Download Clash Mini today and join the clash!

    -

    FAQs

    -

    Q1: Is Clash Mini free to play?

    -

    A1: Yes, Clash Mini is free to play. You can download it from the Google Play store for free and play it without spending any money. However, you can also purchase gems with real money to unlock skins or speed up progress.

    -

    Q2: How can I get more minis and abilities?

    -

    A2: You can get more minis and abilities by progressing through the leagues or by opening chests. You can earn chests by winning battles or completing quests. You can also buy chests with gems.

    -

    Q3: What is the difference between duel and rumble mode?

    -

    A3: Duel mode is a one-on-one match where you face one opponent in a best-of-five match. Rumble mode is a free-for-all match where you face seven other players in a single match.

    -

    Q4: How can I increase my league standing?

    -

    A4: You can increase your league standing by winning ranked matches in either duel or rumble mode. As you win more matches, you will earn more trophies and climb up the leagues.

    -

    Q5: How can I contact the developers of Clash Mini?

    -

    A5: You can contact the developers of Clash Mini by visiting their official website: https://supercell.com/en/games/clashmini/. You can also follow them on social media platforms such as Facebook, Twitter, Instagram, YouTube, or Discord.

    -
    -
    \ No newline at end of file diff --git a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Defend Your Kingdom with King God Castle APK.md b/spaces/feregVcuzo/sanity-test-midi/checkpoint/Defend Your Kingdom with King God Castle APK.md deleted file mode 100644 index 04f093fd8c07e288c5f2f2245eb6eb682bb04065..0000000000000000000000000000000000000000 --- a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Defend Your Kingdom with King God Castle APK.md +++ /dev/null @@ -1,168 +0,0 @@ - -

    King God Castle APK: A Strategy Game for Android

    -

    If you are looking for a fun and challenging strategy game for your Android device, you might want to check out King God Castle APK. This is a game where you have to defend your castle from various invading enemies using heroes, the power of the Most High, and your strategy. In this article, we will tell you more about what King God Castle APK is, why you should play it, what features it has, how to download and install it, and some FAQs.

    -

    king god castle apk


    Download Filehttps://gohhs.com/2uPnFt



    -

    Introduction

    -

    What is King God Castle APK?

    -

    King God Castle APK is an Android game developed by AWESOMEPIECE, a Korean game studio. It is a strategy game where you have to protect your castle from different types of enemies using heroes that you can enhance and combine. You can also use the power of the Most High to strengthen your heroes and borrow their righteous powers. The game has various modes and difficulties that you can choose from to challenge yourself and earn more rewards.

    -

    Why should you play King God Castle APK?

    -

    There are many reasons why you should play King God Castle APK. Here are some of them:

    -
      -
    • It is a fun and addictive strategy game that will test your skills and luck.
    • It has beautiful graphics and sound effects that will immerse you in the game world.
    • It has a variety of heroes and enemies that have different characteristics and skills.
    • It has a simple and intuitive interface that is easy to use.
    • It is free to download and play, but you can also purchase in-game items to enhance your experience.
    -

    Features of King God Castle APK

    -

    Defend with your strategy and luck

    -

    In King God Castle APK, your luck will decide which hero you can enhance and which weapon you will get. You have to use your strategy and tactics to make the most of what you have and defeat your enemies. You can also use magic spells to wipe out multiple enemies at once or deal with them one by one.

    -

    Enhance and combine your own heroes

    -

    You can choose six heroes from different classes such as warrior, archer, mage, priest, assassin, and paladin. You can use gold and gems from battles to strengthen your heroes and bring out their potential. You can also combine two heroes of the same class to create a more powerful hero with unique abilities.

    -

    Strengthening heroes through the power of Most High

    -

    You can also strengthen your heroes through the power of Most High, which is a divine force that grants righteous powers. You can choose an altar that suits your heroes' attributes and borrow its power. The altars have different effects such as increasing attack speed, critical rate, defense, or healing.

    -

    Diverse enemies

    -

    You will face various enemies that have different characteristics and skills. Some enemies are fast and agile, some are strong and durable, some are immune to magic or physical attacks, and some have special abilities such as summoning minions or casting spells. You have to adapt your strategy according to their strengths and weaknesses. You can also use the table below to see the types and attributes of the enemies.

    -
| Name | Origin | Role | Ability |
| --- | --- | --- | --- |
| The Sole Survivor | Fallout 4 | Leader | V.A.T.S.: Deals damage to multiple enemies and increases critical chance. |
| The Lone Wanderer | Fallout 3 | Tank | Lone Wolf: Increases defense and resistance when alone or with Dogmeat. |
| The Courier | Fallout: New Vegas | Sniper | Lucky Shot: Deals extra damage and ignores armor with a chance to stun enemies. |
| Nick Valentine | Fallout 4 | Hacker | Synthetic Detective: Hacks enemy robots and turrets and turns them against their allies. |
| Preston Garvey | Fallout 4 | Ranger | Minuteman General: Calls for artillery support and increases damage of allies. |
| Piper Wright | Fallout 4 | Reporter | Publick Occurrences: Exposes enemy weaknesses and reduces their defense. |
| Dogmeat | Fallout 4 | Companion | Man's Best Friend: Attacks enemies and causes bleeding damage with a chance to cripple them. |
| Codsworth | Fallout 4 | Butler | Flamethrower: Deals fire damage to enemies and reduces their attack speed. |
| Hancock | Fallout 4 | Ghoul | Radiation Blast: Deals radiation damage to enemies and heals allies. |
| Cait | Fallout 4 | Brawler | Psycho: Increases attack and critical damage with a chance to ignore damage. |
| Curie | Fallout 4 | Medic | Stimpak: Heals allies and removes negative effects. |
| Paladin Danse | Fallout 4 | Soldier | Brotherhood of Steel: Increases defense and resistance of allies and reflects damage to enemies. |
| Deacon | Fallout 4 | Spy | Cloak and Dagger: Becomes invisible and deals extra damage with a chance to stun enemies. |
| MacCready | Fallout 4 | Mercenary | Killshot: Deals headshot damage to enemies and increases accuracy. |
| Strong | Fallout 4 | Super Mutant | Berserk: Deals melee damage to enemies and knocks them back with a chance to stun them. |
| X6-88 | Fallout 4 | Courser | Synthetic Assassin: Deals laser damage to enemies and increases evasion. |
| Father Elijah | Fallout: New Vegas | Scientist | Ghost People: Summons ghost people to attack enemies and explode on death. |
| Sarah Lyons | Fallout 3 | Paladin | Lion's Pride: Increases attack and defense of allies and reduces damage taken. |
| Fawkes | Fallout 3 | Super Mutant | Gatling Laser: Deals laser damage to enemies and pierces through armor. |
| Charon | Fallout 3 | Ghoul | Shotgun Blast: Deals shotgun damage to enemies and pushes them back. |
| Clover | Fallout 3 | Slave | Chainsaw: Deals melee damage to enemies and causes bleeding damage. |
| Butch DeLoria | Fallout 3 | Gangster | Tunnel Snakes Rule: Increases attack and critical damage of allies and taunts enemies. |
| Moira Brown | Fallout 3 | Merchant | Wasteland Survival Guide: Heals allies and increases their stats. |
| Three Dog | Fallout 3 | DJ | Galaxy News Radio: Increases morale and happiness of allies and reduces enemy attack. |
| Amata Almodovar | Fallout 3 | Overseer | Vault 101: Increases defense and resistance of allies and heals them over time. |
| James (the Lone Wanderer's father) | Fallout 3 | Doctor | Project Purity: Purifies water and removes radiation from allies. |
| Colonel Autumn | Fallout 3 | Enclave | Plasma Pistol: Deals plasma damage to enemies and reduces their evasion. |
| John Henry Eden (the Enclave President) | Fallout 3 | President | America Reborn: Increases attack and defense of allies and summons Enclave soldiers. |
| Liberty Prime (the Brotherhood of Steel's giant robot) | Fallout 3 | Robot | Democracy is Non-Negotiable: Deals massive damage to enemies and launches mini-nukes. |

    You can also chat with other players or guild members by tapping on the chat icon on the bottom of the screen. You can also view your profile, achievements, rankings, and statistics by tapping on the menu icon on the top left of the screen.

    -

    Conclusion

    -

    Fallout Shelter Online is a great game for fans of the Fallout series and anyone who likes simulation, strategy, and RPG games. It offers a lot of content and features that will keep you entertained and challenged for a long time. You can build and manage your own vault, recruit legendary heroes from the Fallout series, explore the wasteland and battle enemies, join a guild and participate in online events, and more. You can also download Fallout Shelter Online APK to enjoy the game in English, access the latest version with new content, and avoid regional restrictions. Just follow the steps we have provided in this article and you will be able to download and install Fallout Shelter Online APK on your Android device easily and safely.

    -

    FAQs

    -

    Here are some frequently asked questions and answers about Fallout Shelter Online APK:

    -

    Q: Is Fallout Shelter Online APK safe to download and install?

    -

    A: Yes, as long as you download it from a reliable source that has positive reviews and feedback from other users. You should also scan the file with an antivirus or malware scanner before installing it on your device.

    -

    Q: Is Fallout Shelter Online APK legal to download and install?

    -

    A: Yes, as long as you do not use it for any illegal or unethical purposes. You should also respect the intellectual property rights of the game developers and publishers and not distribute or sell the file without their permission.

    -

    Q: Is Fallout Shelter Online APK compatible with my device?

    -

    A: Fallout Shelter Online APK should be compatible with most Android devices that have Android 4.4 or higher. However, some devices might have issues or errors due to different specifications or settings. You should check if your device meets the minimum requirements of the game before downloading and installing it.

    -

    Q: How can I update Fallout Shelter Online APK?

    -

    A: You can update Fallout Shelter Online APK by downloading and installing the latest version of the file from the same source that you used before. You should also backup your game data before updating to avoid losing your progress or items.

    -

    Q: How can I uninstall Fallout Shelter Online APK?

    -

    A: You can uninstall Fallout Shelter Online APK by following these steps:

    -
      -
    • Go to your device settings and look for the apps or applications option.
    • Tap on it and look for Fallout Shelter Online on the list of installed apps.
    • Tap on it and select uninstall or remove.
    • Confirm the uninstallation by tapping on yes or ok.

    -
    -
    \ No newline at end of file diff --git a/spaces/fb700/chatglm-fitness-RLHF/docs/waifu_plugin/jquery-ui.min.js b/spaces/fb700/chatglm-fitness-RLHF/docs/waifu_plugin/jquery-ui.min.js deleted file mode 100644 index 25398a167415050ae8bfb0bfebac6aa3ab790909..0000000000000000000000000000000000000000 --- a/spaces/fb700/chatglm-fitness-RLHF/docs/waifu_plugin/jquery-ui.min.js +++ /dev/null @@ -1,13 +0,0 @@ -/*! jQuery UI - v1.12.1 - 2016-09-14 -* http://jqueryui.com -* Includes: widget.js, position.js, data.js, disable-selection.js, effect.js, effects/effect-blind.js, effects/effect-bounce.js, effects/effect-clip.js, effects/effect-drop.js, effects/effect-explode.js, effects/effect-fade.js, effects/effect-fold.js, effects/effect-highlight.js, effects/effect-puff.js, effects/effect-pulsate.js, effects/effect-scale.js, effects/effect-shake.js, effects/effect-size.js, effects/effect-slide.js, effects/effect-transfer.js, focusable.js, form-reset-mixin.js, jquery-1-7.js, keycode.js, labels.js, scroll-parent.js, tabbable.js, unique-id.js, widgets/accordion.js, widgets/autocomplete.js, widgets/button.js, widgets/checkboxradio.js, widgets/controlgroup.js, widgets/datepicker.js, widgets/dialog.js, widgets/draggable.js, widgets/droppable.js, widgets/menu.js, widgets/mouse.js, widgets/progressbar.js, widgets/resizable.js, widgets/selectable.js, widgets/selectmenu.js, widgets/slider.js, widgets/sortable.js, widgets/spinner.js, widgets/tabs.js, widgets/tooltip.js -* Copyright jQuery Foundation and other contributors; Licensed MIT */ - -(function(t){"function"==typeof define&&define.amd?define(["jquery"],t):t(jQuery)})(function(t){function e(t){for(var e=t.css("visibility");"inherit"===e;)t=t.parent(),e=t.css("visibility");return"hidden"!==e}function i(t){for(var e,i;t.length&&t[0]!==document;){if(e=t.css("position"),("absolute"===e||"relative"===e||"fixed"===e)&&(i=parseInt(t.css("zIndex"),10),!isNaN(i)&&0!==i))return i;t=t.parent()}return 0}function 
s(){this._curInst=null,this._keyEvent=!1,this._disabledInputs=[],this._datepickerShowing=!1,this._inDialog=!1,this._mainDivId="ui-datepicker-div",this._inlineClass="ui-datepicker-inline",this._appendClass="ui-datepicker-append",this._triggerClass="ui-datepicker-trigger",this._dialogClass="ui-datepicker-dialog",this._disableClass="ui-datepicker-disabled",this._unselectableClass="ui-datepicker-unselectable",this._currentClass="ui-datepicker-current-day",this._dayOverClass="ui-datepicker-days-cell-over",this.regional=[],this.regional[""]={closeText:"Done",prevText:"Prev",nextText:"Next",currentText:"Today",monthNames:["January","February","March","April","May","June","July","August","September","October","November","December"],monthNamesShort:["Jan","Feb","Mar","Apr","May","Jun","Jul","Aug","Sep","Oct","Nov","Dec"],dayNames:["Sunday","Monday","Tuesday","Wednesday","Thursday","Friday","Saturday"],dayNamesShort:["Sun","Mon","Tue","Wed","Thu","Fri","Sat"],dayNamesMin:["Su","Mo","Tu","We","Th","Fr","Sa"],weekHeader:"Wk",dateFormat:"mm/dd/yy",firstDay:0,isRTL:!1,showMonthAfterYear:!1,yearSuffix:""},this._defaults={showOn:"focus",showAnim:"fadeIn",showOptions:{},defaultDate:null,appendText:"",buttonText:"...",buttonImage:"",buttonImageOnly:!1,hideIfNoPrevNext:!1,navigationAsDateFormat:!1,gotoCurrent:!1,changeMonth:!1,changeYear:!1,yearRange:"c-10:c+10",showOtherMonths:!1,selectOtherMonths:!1,showWeek:!1,calculateWeek:this.iso8601Week,shortYearCutoff:"+10",minDate:null,maxDate:null,duration:"fast",beforeShowDay:null,beforeShow:null,onSelect:null,onChangeMonthYear:null,onClose:null,numberOfMonths:1,showCurrentAtPos:0,stepMonths:1,stepBigMonths:12,altField:"",altFormat:"",constrainInput:!0,showButtonPanel:!1,autoSize:!1,disabled:!1},t.extend(this._defaults,this.regional[""]),this.regional.en=t.extend(!0,{},this.regional[""]),this.regional["en-US"]=t.extend(!0,{},this.regional.en),this.dpDiv=n(t("
    "))}function n(e){var i="button, .ui-datepicker-prev, .ui-datepicker-next, .ui-datepicker-calendar td a";return e.on("mouseout",i,function(){t(this).removeClass("ui-state-hover"),-1!==this.className.indexOf("ui-datepicker-prev")&&t(this).removeClass("ui-datepicker-prev-hover"),-1!==this.className.indexOf("ui-datepicker-next")&&t(this).removeClass("ui-datepicker-next-hover")}).on("mouseover",i,o)}function o(){t.datepicker._isDisabledDatepicker(m.inline?m.dpDiv.parent()[0]:m.input[0])||(t(this).parents(".ui-datepicker-calendar").find("a").removeClass("ui-state-hover"),t(this).addClass("ui-state-hover"),-1!==this.className.indexOf("ui-datepicker-prev")&&t(this).addClass("ui-datepicker-prev-hover"),-1!==this.className.indexOf("ui-datepicker-next")&&t(this).addClass("ui-datepicker-next-hover"))}function a(e,i){t.extend(e,i);for(var s in i)null==i[s]&&(e[s]=i[s]);return e}function r(t){return function(){var e=this.element.val();t.apply(this,arguments),this._refresh(),e!==this.element.val()&&this._trigger("change")}}t.ui=t.ui||{},t.ui.version="1.12.1";var h=0,l=Array.prototype.slice;t.cleanData=function(e){return function(i){var s,n,o;for(o=0;null!=(n=i[o]);o++)try{s=t._data(n,"events"),s&&s.remove&&t(n).triggerHandler("remove")}catch(a){}e(i)}}(t.cleanData),t.widget=function(e,i,s){var n,o,a,r={},h=e.split(".")[0];e=e.split(".")[1];var l=h+"-"+e;return s||(s=i,i=t.Widget),t.isArray(s)&&(s=t.extend.apply(null,[{}].concat(s))),t.expr[":"][l.toLowerCase()]=function(e){return!!t.data(e,l)},t[h]=t[h]||{},n=t[h][e],o=t[h][e]=function(t,e){return this._createWidget?(arguments.length&&this._createWidget(t,e),void 0):new o(t,e)},t.extend(o,n,{version:s.version,_proto:t.extend({},s),_childConstructors:[]}),a=new i,a.options=t.widget.extend({},a.options),t.each(s,function(e,s){return t.isFunction(s)?(r[e]=function(){function t(){return i.prototype[e].apply(this,arguments)}function n(t){return i.prototype[e].apply(this,t)}return function(){var e,i=this._super,o=this._superApply;return this._super=t,this._superApply=n,e=s.apply(this,arguments),this._super=i,this._superApply=o,e}}(),void 0):(r[e]=s,void 0)}),o.prototype=t.widget.extend(a,{widgetEventPrefix:n?a.widgetEventPrefix||e:e},r,{constructor:o,namespace:h,widgetName:e,widgetFullName:l}),n?(t.each(n._childConstructors,function(e,i){var s=i.prototype;t.widget(s.namespace+"."+s.widgetName,o,i._proto)}),delete n._childConstructors):i._childConstructors.push(o),t.widget.bridge(e,o),o},t.widget.extend=function(e){for(var i,s,n=l.call(arguments,1),o=0,a=n.length;a>o;o++)for(i in n[o])s=n[o][i],n[o].hasOwnProperty(i)&&void 0!==s&&(e[i]=t.isPlainObject(s)?t.isPlainObject(e[i])?t.widget.extend({},e[i],s):t.widget.extend({},s):s);return e},t.widget.bridge=function(e,i){var s=i.prototype.widgetFullName||e;t.fn[e]=function(n){var o="string"==typeof n,a=l.call(arguments,1),r=this;return o?this.length||"instance"!==n?this.each(function(){var i,o=t.data(this,s);return"instance"===n?(r=o,!1):o?t.isFunction(o[n])&&"_"!==n.charAt(0)?(i=o[n].apply(o,a),i!==o&&void 0!==i?(r=i&&i.jquery?r.pushStack(i.get()):i,!1):void 0):t.error("no such method '"+n+"' for "+e+" widget instance"):t.error("cannot call methods on "+e+" prior to initialization; "+"attempted to call method '"+n+"'")}):r=void 0:(a.length&&(n=t.widget.extend.apply(null,[n].concat(a))),this.each(function(){var e=t.data(this,s);e?(e.option(n||{}),e._init&&e._init()):t.data(this,s,new 
i(n,this))})),r}},t.Widget=function(){},t.Widget._childConstructors=[],t.Widget.prototype={widgetName:"widget",widgetEventPrefix:"",defaultElement:"
    ",options:{classes:{},disabled:!1,create:null},_createWidget:function(e,i){i=t(i||this.defaultElement||this)[0],this.element=t(i),this.uuid=h++,this.eventNamespace="."+this.widgetName+this.uuid,this.bindings=t(),this.hoverable=t(),this.focusable=t(),this.classesElementLookup={},i!==this&&(t.data(i,this.widgetFullName,this),this._on(!0,this.element,{remove:function(t){t.target===i&&this.destroy()}}),this.document=t(i.style?i.ownerDocument:i.document||i),this.window=t(this.document[0].defaultView||this.document[0].parentWindow)),this.options=t.widget.extend({},this.options,this._getCreateOptions(),e),this._create(),this.options.disabled&&this._setOptionDisabled(this.options.disabled),this._trigger("create",null,this._getCreateEventData()),this._init()},_getCreateOptions:function(){return{}},_getCreateEventData:t.noop,_create:t.noop,_init:t.noop,destroy:function(){var e=this;this._destroy(),t.each(this.classesElementLookup,function(t,i){e._removeClass(i,t)}),this.element.off(this.eventNamespace).removeData(this.widgetFullName),this.widget().off(this.eventNamespace).removeAttr("aria-disabled"),this.bindings.off(this.eventNamespace)},_destroy:t.noop,widget:function(){return this.element},option:function(e,i){var s,n,o,a=e;if(0===arguments.length)return t.widget.extend({},this.options);if("string"==typeof e)if(a={},s=e.split("."),e=s.shift(),s.length){for(n=a[e]=t.widget.extend({},this.options[e]),o=0;s.length-1>o;o++)n[s[o]]=n[s[o]]||{},n=n[s[o]];if(e=s.pop(),1===arguments.length)return void 0===n[e]?null:n[e];n[e]=i}else{if(1===arguments.length)return void 0===this.options[e]?null:this.options[e];a[e]=i}return this._setOptions(a),this},_setOptions:function(t){var e;for(e in t)this._setOption(e,t[e]);return this},_setOption:function(t,e){return"classes"===t&&this._setOptionClasses(e),this.options[t]=e,"disabled"===t&&this._setOptionDisabled(e),this},_setOptionClasses:function(e){var i,s,n;for(i in e)n=this.classesElementLookup[i],e[i]!==this.options.classes[i]&&n&&n.length&&(s=t(n.get()),this._removeClass(n,i),s.addClass(this._classes({element:s,keys:i,classes:e,add:!0})))},_setOptionDisabled:function(t){this._toggleClass(this.widget(),this.widgetFullName+"-disabled",null,!!t),t&&(this._removeClass(this.hoverable,null,"ui-state-hover"),this._removeClass(this.focusable,null,"ui-state-focus"))},enable:function(){return this._setOptions({disabled:!1})},disable:function(){return this._setOptions({disabled:!0})},_classes:function(e){function i(i,o){var a,r;for(r=0;i.length>r;r++)a=n.classesElementLookup[i[r]]||t(),a=e.add?t(t.unique(a.get().concat(e.element.get()))):t(a.not(e.element).get()),n.classesElementLookup[i[r]]=a,s.push(i[r]),o&&e.classes[i[r]]&&s.push(e.classes[i[r]])}var s=[],n=this;return e=t.extend({element:this.element,classes:this.options.classes||{}},e),this._on(e.element,{remove:"_untrackClassesElement"}),e.keys&&i(e.keys.match(/\S+/g)||[],!0),e.extra&&i(e.extra.match(/\S+/g)||[]),s.join(" ")},_untrackClassesElement:function(e){var i=this;t.each(i.classesElementLookup,function(s,n){-1!==t.inArray(e.target,n)&&(i.classesElementLookup[s]=t(n.not(e.target).get()))})},_removeClass:function(t,e,i){return this._toggleClass(t,e,i,!1)},_addClass:function(t,e,i){return this._toggleClass(t,e,i,!0)},_toggleClass:function(t,e,i,s){s="boolean"==typeof s?s:i;var n="string"==typeof t||null===t,o={extra:n?e:i,keys:n?t:e,element:n?this.element:t,add:s};return o.element.toggleClass(this._classes(o),s),this},_on:function(e,i,s){var n,o=this;"boolean"!=typeof 
e&&(s=i,i=e,e=!1),s?(i=n=t(i),this.bindings=this.bindings.add(i)):(s=i,i=this.element,n=this.widget()),t.each(s,function(s,a){function r(){return e||o.options.disabled!==!0&&!t(this).hasClass("ui-state-disabled")?("string"==typeof a?o[a]:a).apply(o,arguments):void 0}"string"!=typeof a&&(r.guid=a.guid=a.guid||r.guid||t.guid++);var h=s.match(/^([\w:-]*)\s*(.*)$/),l=h[1]+o.eventNamespace,c=h[2];c?n.on(l,c,r):i.on(l,r)})},_off:function(e,i){i=(i||"").split(" ").join(this.eventNamespace+" ")+this.eventNamespace,e.off(i).off(i),this.bindings=t(this.bindings.not(e).get()),this.focusable=t(this.focusable.not(e).get()),this.hoverable=t(this.hoverable.not(e).get())},_delay:function(t,e){function i(){return("string"==typeof t?s[t]:t).apply(s,arguments)}var s=this;return setTimeout(i,e||0)},_hoverable:function(e){this.hoverable=this.hoverable.add(e),this._on(e,{mouseenter:function(e){this._addClass(t(e.currentTarget),null,"ui-state-hover")},mouseleave:function(e){this._removeClass(t(e.currentTarget),null,"ui-state-hover")}})},_focusable:function(e){this.focusable=this.focusable.add(e),this._on(e,{focusin:function(e){this._addClass(t(e.currentTarget),null,"ui-state-focus")},focusout:function(e){this._removeClass(t(e.currentTarget),null,"ui-state-focus")}})},_trigger:function(e,i,s){var n,o,a=this.options[e];if(s=s||{},i=t.Event(i),i.type=(e===this.widgetEventPrefix?e:this.widgetEventPrefix+e).toLowerCase(),i.target=this.element[0],o=i.originalEvent)for(n in o)n in i||(i[n]=o[n]);return this.element.trigger(i,s),!(t.isFunction(a)&&a.apply(this.element[0],[i].concat(s))===!1||i.isDefaultPrevented())}},t.each({show:"fadeIn",hide:"fadeOut"},function(e,i){t.Widget.prototype["_"+e]=function(s,n,o){"string"==typeof n&&(n={effect:n});var a,r=n?n===!0||"number"==typeof n?i:n.effect||i:e;n=n||{},"number"==typeof n&&(n={duration:n}),a=!t.isEmptyObject(n),n.complete=o,n.delay&&s.delay(n.delay),a&&t.effects&&t.effects.effect[r]?s[e](n):r!==e&&s[r]?s[r](n.duration,n.easing,o):s.queue(function(i){t(this)[e](),o&&o.call(s[0]),i()})}}),t.widget,function(){function e(t,e,i){return[parseFloat(t[0])*(u.test(t[0])?e/100:1),parseFloat(t[1])*(u.test(t[1])?i/100:1)]}function i(e,i){return parseInt(t.css(e,i),10)||0}function s(e){var i=e[0];return 9===i.nodeType?{width:e.width(),height:e.height(),offset:{top:0,left:0}}:t.isWindow(i)?{width:e.width(),height:e.height(),offset:{top:e.scrollTop(),left:e.scrollLeft()}}:i.preventDefault?{width:0,height:0,offset:{top:i.pageY,left:i.pageX}}:{width:e.outerWidth(),height:e.outerHeight(),offset:e.offset()}}var n,o=Math.max,a=Math.abs,r=/left|center|right/,h=/top|center|bottom/,l=/[\+\-]\d+(\.[\d]+)?%?/,c=/^\w+/,u=/%$/,d=t.fn.position;t.position={scrollbarWidth:function(){if(void 0!==n)return n;var e,i,s=t("
    "),o=s.children()[0];return t("body").append(s),e=o.offsetWidth,s.css("overflow","scroll"),i=o.offsetWidth,e===i&&(i=s[0].clientWidth),s.remove(),n=e-i},getScrollInfo:function(e){var i=e.isWindow||e.isDocument?"":e.element.css("overflow-x"),s=e.isWindow||e.isDocument?"":e.element.css("overflow-y"),n="scroll"===i||"auto"===i&&e.widthi?"left":e>0?"right":"center",vertical:0>r?"top":s>0?"bottom":"middle"};l>p&&p>a(e+i)&&(u.horizontal="center"),c>f&&f>a(s+r)&&(u.vertical="middle"),u.important=o(a(e),a(i))>o(a(s),a(r))?"horizontal":"vertical",n.using.call(this,t,u)}),h.offset(t.extend(D,{using:r}))})},t.ui.position={fit:{left:function(t,e){var i,s=e.within,n=s.isWindow?s.scrollLeft:s.offset.left,a=s.width,r=t.left-e.collisionPosition.marginLeft,h=n-r,l=r+e.collisionWidth-a-n;e.collisionWidth>a?h>0&&0>=l?(i=t.left+h+e.collisionWidth-a-n,t.left+=h-i):t.left=l>0&&0>=h?n:h>l?n+a-e.collisionWidth:n:h>0?t.left+=h:l>0?t.left-=l:t.left=o(t.left-r,t.left)},top:function(t,e){var i,s=e.within,n=s.isWindow?s.scrollTop:s.offset.top,a=e.within.height,r=t.top-e.collisionPosition.marginTop,h=n-r,l=r+e.collisionHeight-a-n;e.collisionHeight>a?h>0&&0>=l?(i=t.top+h+e.collisionHeight-a-n,t.top+=h-i):t.top=l>0&&0>=h?n:h>l?n+a-e.collisionHeight:n:h>0?t.top+=h:l>0?t.top-=l:t.top=o(t.top-r,t.top)}},flip:{left:function(t,e){var i,s,n=e.within,o=n.offset.left+n.scrollLeft,r=n.width,h=n.isWindow?n.scrollLeft:n.offset.left,l=t.left-e.collisionPosition.marginLeft,c=l-h,u=l+e.collisionWidth-r-h,d="left"===e.my[0]?-e.elemWidth:"right"===e.my[0]?e.elemWidth:0,p="left"===e.at[0]?e.targetWidth:"right"===e.at[0]?-e.targetWidth:0,f=-2*e.offset[0];0>c?(i=t.left+d+p+f+e.collisionWidth-r-o,(0>i||a(c)>i)&&(t.left+=d+p+f)):u>0&&(s=t.left-e.collisionPosition.marginLeft+d+p+f-h,(s>0||u>a(s))&&(t.left+=d+p+f))},top:function(t,e){var i,s,n=e.within,o=n.offset.top+n.scrollTop,r=n.height,h=n.isWindow?n.scrollTop:n.offset.top,l=t.top-e.collisionPosition.marginTop,c=l-h,u=l+e.collisionHeight-r-h,d="top"===e.my[1],p=d?-e.elemHeight:"bottom"===e.my[1]?e.elemHeight:0,f="top"===e.at[1]?e.targetHeight:"bottom"===e.at[1]?-e.targetHeight:0,g=-2*e.offset[1];0>c?(s=t.top+p+f+g+e.collisionHeight-r-o,(0>s||a(c)>s)&&(t.top+=p+f+g)):u>0&&(i=t.top-e.collisionPosition.marginTop+p+f+g-h,(i>0||u>a(i))&&(t.top+=p+f+g))}},flipfit:{left:function(){t.ui.position.flip.left.apply(this,arguments),t.ui.position.fit.left.apply(this,arguments)},top:function(){t.ui.position.flip.top.apply(this,arguments),t.ui.position.fit.top.apply(this,arguments)}}}}(),t.ui.position,t.extend(t.expr[":"],{data:t.expr.createPseudo?t.expr.createPseudo(function(e){return function(i){return!!t.data(i,e)}}):function(e,i,s){return!!t.data(e,s[3])}}),t.fn.extend({disableSelection:function(){var t="onselectstart"in document.createElement("div")?"selectstart":"mousedown";return function(){return this.on(t+".ui-disableSelection",function(t){t.preventDefault()})}}(),enableSelection:function(){return this.off(".ui-disableSelection")}});var c="ui-effects-",u="ui-effects-style",d="ui-effects-animated",p=t;t.effects={effect:{}},function(t,e){function i(t,e,i){var s=u[e.type]||{};return null==t?i||!e.def?null:e.def:(t=s.floor?~~t:parseFloat(t),isNaN(t)?e.def:s.mod?(t+s.mod)%s.mod:0>t?0:t>s.max?s.max:t)}function s(i){var s=l(),n=s._rgba=[];return i=i.toLowerCase(),f(h,function(t,o){var a,r=o.re.exec(i),h=r&&o.parse(r),l=o.space||"rgba";return h?(a=s[l](h),s[c[l].cache]=a[c[l].cache],n=s._rgba=a._rgba,!1):e}),n.length?("0,0,0,0"===n.join()&&t.extend(n,o.transparent),s):o[i]}function n(t,e,i){return 
i=(i+1)%1,1>6*i?t+6*(e-t)*i:1>2*i?e:2>3*i?t+6*(e-t)*(2/3-i):t}var o,a="backgroundColor borderBottomColor borderLeftColor borderRightColor borderTopColor color columnRuleColor outlineColor textDecorationColor textEmphasisColor",r=/^([\-+])=\s*(\d+\.?\d*)/,h=[{re:/rgba?\(\s*(\d{1,3})\s*,\s*(\d{1,3})\s*,\s*(\d{1,3})\s*(?:,\s*(\d?(?:\.\d+)?)\s*)?\)/,parse:function(t){return[t[1],t[2],t[3],t[4]]}},{re:/rgba?\(\s*(\d+(?:\.\d+)?)\%\s*,\s*(\d+(?:\.\d+)?)\%\s*,\s*(\d+(?:\.\d+)?)\%\s*(?:,\s*(\d?(?:\.\d+)?)\s*)?\)/,parse:function(t){return[2.55*t[1],2.55*t[2],2.55*t[3],t[4]]}},{re:/#([a-f0-9]{2})([a-f0-9]{2})([a-f0-9]{2})/,parse:function(t){return[parseInt(t[1],16),parseInt(t[2],16),parseInt(t[3],16)]}},{re:/#([a-f0-9])([a-f0-9])([a-f0-9])/,parse:function(t){return[parseInt(t[1]+t[1],16),parseInt(t[2]+t[2],16),parseInt(t[3]+t[3],16)]}},{re:/hsla?\(\s*(\d+(?:\.\d+)?)\s*,\s*(\d+(?:\.\d+)?)\%\s*,\s*(\d+(?:\.\d+)?)\%\s*(?:,\s*(\d?(?:\.\d+)?)\s*)?\)/,space:"hsla",parse:function(t){return[t[1],t[2]/100,t[3]/100,t[4]]}}],l=t.Color=function(e,i,s,n){return new t.Color.fn.parse(e,i,s,n)},c={rgba:{props:{red:{idx:0,type:"byte"},green:{idx:1,type:"byte"},blue:{idx:2,type:"byte"}}},hsla:{props:{hue:{idx:0,type:"degrees"},saturation:{idx:1,type:"percent"},lightness:{idx:2,type:"percent"}}}},u={"byte":{floor:!0,max:255},percent:{max:1},degrees:{mod:360,floor:!0}},d=l.support={},p=t("

    ")[0],f=t.each;p.style.cssText="background-color:rgba(1,1,1,.5)",d.rgba=p.style.backgroundColor.indexOf("rgba")>-1,f(c,function(t,e){e.cache="_"+t,e.props.alpha={idx:3,type:"percent",def:1}}),l.fn=t.extend(l.prototype,{parse:function(n,a,r,h){if(n===e)return this._rgba=[null,null,null,null],this;(n.jquery||n.nodeType)&&(n=t(n).css(a),a=e);var u=this,d=t.type(n),p=this._rgba=[];return a!==e&&(n=[n,a,r,h],d="array"),"string"===d?this.parse(s(n)||o._default):"array"===d?(f(c.rgba.props,function(t,e){p[e.idx]=i(n[e.idx],e)}),this):"object"===d?(n instanceof l?f(c,function(t,e){n[e.cache]&&(u[e.cache]=n[e.cache].slice())}):f(c,function(e,s){var o=s.cache;f(s.props,function(t,e){if(!u[o]&&s.to){if("alpha"===t||null==n[t])return;u[o]=s.to(u._rgba)}u[o][e.idx]=i(n[t],e,!0)}),u[o]&&0>t.inArray(null,u[o].slice(0,3))&&(u[o][3]=1,s.from&&(u._rgba=s.from(u[o])))}),this):e},is:function(t){var i=l(t),s=!0,n=this;return f(c,function(t,o){var a,r=i[o.cache];return r&&(a=n[o.cache]||o.to&&o.to(n._rgba)||[],f(o.props,function(t,i){return null!=r[i.idx]?s=r[i.idx]===a[i.idx]:e})),s}),s},_space:function(){var t=[],e=this;return f(c,function(i,s){e[s.cache]&&t.push(i)}),t.pop()},transition:function(t,e){var s=l(t),n=s._space(),o=c[n],a=0===this.alpha()?l("transparent"):this,r=a[o.cache]||o.to(a._rgba),h=r.slice();return s=s[o.cache],f(o.props,function(t,n){var o=n.idx,a=r[o],l=s[o],c=u[n.type]||{};null!==l&&(null===a?h[o]=l:(c.mod&&(l-a>c.mod/2?a+=c.mod:a-l>c.mod/2&&(a-=c.mod)),h[o]=i((l-a)*e+a,n)))}),this[n](h)},blend:function(e){if(1===this._rgba[3])return this;var i=this._rgba.slice(),s=i.pop(),n=l(e)._rgba;return l(t.map(i,function(t,e){return(1-s)*n[e]+s*t}))},toRgbaString:function(){var e="rgba(",i=t.map(this._rgba,function(t,e){return null==t?e>2?1:0:t});return 1===i[3]&&(i.pop(),e="rgb("),e+i.join()+")"},toHslaString:function(){var e="hsla(",i=t.map(this.hsla(),function(t,e){return null==t&&(t=e>2?1:0),e&&3>e&&(t=Math.round(100*t)+"%"),t});return 1===i[3]&&(i.pop(),e="hsl("),e+i.join()+")"},toHexString:function(e){var i=this._rgba.slice(),s=i.pop();return e&&i.push(~~(255*s)),"#"+t.map(i,function(t){return t=(t||0).toString(16),1===t.length?"0"+t:t}).join("")},toString:function(){return 0===this._rgba[3]?"transparent":this.toRgbaString()}}),l.fn.parse.prototype=l.fn,c.hsla.to=function(t){if(null==t[0]||null==t[1]||null==t[2])return[null,null,null,t[3]];var e,i,s=t[0]/255,n=t[1]/255,o=t[2]/255,a=t[3],r=Math.max(s,n,o),h=Math.min(s,n,o),l=r-h,c=r+h,u=.5*c;return e=h===r?0:s===r?60*(n-o)/l+360:n===r?60*(o-s)/l+120:60*(s-n)/l+240,i=0===l?0:.5>=u?l/c:l/(2-c),[Math.round(e)%360,i,u,null==a?1:a]},c.hsla.from=function(t){if(null==t[0]||null==t[1]||null==t[2])return[null,null,null,t[3]];var e=t[0]/360,i=t[1],s=t[2],o=t[3],a=.5>=s?s*(1+i):s+i-s*i,r=2*s-a;return[Math.round(255*n(r,a,e+1/3)),Math.round(255*n(r,a,e)),Math.round(255*n(r,a,e-1/3)),o]},f(c,function(s,n){var o=n.props,a=n.cache,h=n.to,c=n.from;l.fn[s]=function(s){if(h&&!this[a]&&(this[a]=h(this._rgba)),s===e)return this[a].slice();var n,r=t.type(s),u="array"===r||"object"===r?s:arguments,d=this[a].slice();return f(o,function(t,e){var s=u["object"===r?t:e.idx];null==s&&(s=d[e.idx]),d[e.idx]=i(s,e)}),c?(n=l(c(d)),n[a]=d,n):l(d)},f(o,function(e,i){l.fn[e]||(l.fn[e]=function(n){var 
o,a=t.type(n),h="alpha"===e?this._hsla?"hsla":"rgba":s,l=this[h](),c=l[i.idx];return"undefined"===a?c:("function"===a&&(n=n.call(this,c),a=t.type(n)),null==n&&i.empty?this:("string"===a&&(o=r.exec(n),o&&(n=c+parseFloat(o[2])*("+"===o[1]?1:-1))),l[i.idx]=n,this[h](l)))})})}),l.hook=function(e){var i=e.split(" ");f(i,function(e,i){t.cssHooks[i]={set:function(e,n){var o,a,r="";if("transparent"!==n&&("string"!==t.type(n)||(o=s(n)))){if(n=l(o||n),!d.rgba&&1!==n._rgba[3]){for(a="backgroundColor"===i?e.parentNode:e;(""===r||"transparent"===r)&&a&&a.style;)try{r=t.css(a,"backgroundColor"),a=a.parentNode}catch(h){}n=n.blend(r&&"transparent"!==r?r:"_default")}n=n.toRgbaString()}try{e.style[i]=n}catch(h){}}},t.fx.step[i]=function(e){e.colorInit||(e.start=l(e.elem,i),e.end=l(e.end),e.colorInit=!0),t.cssHooks[i].set(e.elem,e.start.transition(e.end,e.pos))}})},l.hook(a),t.cssHooks.borderColor={expand:function(t){var e={};return f(["Top","Right","Bottom","Left"],function(i,s){e["border"+s+"Color"]=t}),e}},o=t.Color.names={aqua:"#00ffff",black:"#000000",blue:"#0000ff",fuchsia:"#ff00ff",gray:"#808080",green:"#008000",lime:"#00ff00",maroon:"#800000",navy:"#000080",olive:"#808000",purple:"#800080",red:"#ff0000",silver:"#c0c0c0",teal:"#008080",white:"#ffffff",yellow:"#ffff00",transparent:[null,null,null,0],_default:"#ffffff"}}(p),function(){function e(e){var i,s,n=e.ownerDocument.defaultView?e.ownerDocument.defaultView.getComputedStyle(e,null):e.currentStyle,o={};if(n&&n.length&&n[0]&&n[n[0]])for(s=n.length;s--;)i=n[s],"string"==typeof n[i]&&(o[t.camelCase(i)]=n[i]);else for(i in n)"string"==typeof n[i]&&(o[i]=n[i]);return o}function i(e,i){var s,o,a={};for(s in i)o=i[s],e[s]!==o&&(n[s]||(t.fx.step[s]||!isNaN(parseFloat(o)))&&(a[s]=o));return a}var s=["add","remove","toggle"],n={border:1,borderBottom:1,borderColor:1,borderLeft:1,borderRight:1,borderTop:1,borderWidth:1,margin:1,padding:1};t.each(["borderLeftStyle","borderRightStyle","borderBottomStyle","borderTopStyle"],function(e,i){t.fx.step[i]=function(t){("none"!==t.end&&!t.setAttr||1===t.pos&&!t.setAttr)&&(p.style(t.elem,i,t.end),t.setAttr=!0)}}),t.fn.addBack||(t.fn.addBack=function(t){return this.add(null==t?this.prevObject:this.prevObject.filter(t))}),t.effects.animateClass=function(n,o,a,r){var h=t.speed(o,a,r);return this.queue(function(){var o,a=t(this),r=a.attr("class")||"",l=h.children?a.find("*").addBack():a;l=l.map(function(){var i=t(this);return{el:i,start:e(this)}}),o=function(){t.each(s,function(t,e){n[e]&&a[e+"Class"](n[e])})},o(),l=l.map(function(){return this.end=e(this.el[0]),this.diff=i(this.start,this.end),this}),a.attr("class",r),l=l.map(function(){var e=this,i=t.Deferred(),s=t.extend({},h,{queue:!1,complete:function(){i.resolve(e)}});return this.el.animate(this.diff,s),i.promise()}),t.when.apply(t,l.get()).done(function(){o(),t.each(arguments,function(){var e=this.el;t.each(this.diff,function(t){e.css(t,"")})}),h.complete.call(a[0])})})},t.fn.extend({addClass:function(e){return function(i,s,n,o){return s?t.effects.animateClass.call(this,{add:i},s,n,o):e.apply(this,arguments)}}(t.fn.addClass),removeClass:function(e){return function(i,s,n,o){return arguments.length>1?t.effects.animateClass.call(this,{remove:i},s,n,o):e.apply(this,arguments)}}(t.fn.removeClass),toggleClass:function(e){return function(i,s,n,o,a){return"boolean"==typeof s||void 
0===s?n?t.effects.animateClass.call(this,s?{add:i}:{remove:i},n,o,a):e.apply(this,arguments):t.effects.animateClass.call(this,{toggle:i},s,n,o)}}(t.fn.toggleClass),switchClass:function(e,i,s,n,o){return t.effects.animateClass.call(this,{add:i,remove:e},s,n,o)}})}(),function(){function e(e,i,s,n){return t.isPlainObject(e)&&(i=e,e=e.effect),e={effect:e},null==i&&(i={}),t.isFunction(i)&&(n=i,s=null,i={}),("number"==typeof i||t.fx.speeds[i])&&(n=s,s=i,i={}),t.isFunction(s)&&(n=s,s=null),i&&t.extend(e,i),s=s||i.duration,e.duration=t.fx.off?0:"number"==typeof s?s:s in t.fx.speeds?t.fx.speeds[s]:t.fx.speeds._default,e.complete=n||i.complete,e}function i(e){return!e||"number"==typeof e||t.fx.speeds[e]?!0:"string"!=typeof e||t.effects.effect[e]?t.isFunction(e)?!0:"object"!=typeof e||e.effect?!1:!0:!0}function s(t,e){var i=e.outerWidth(),s=e.outerHeight(),n=/^rect\((-?\d*\.?\d*px|-?\d+%|auto),?\s*(-?\d*\.?\d*px|-?\d+%|auto),?\s*(-?\d*\.?\d*px|-?\d+%|auto),?\s*(-?\d*\.?\d*px|-?\d+%|auto)\)$/,o=n.exec(t)||["",0,i,s,0];return{top:parseFloat(o[1])||0,right:"auto"===o[2]?i:parseFloat(o[2]),bottom:"auto"===o[3]?s:parseFloat(o[3]),left:parseFloat(o[4])||0}}t.expr&&t.expr.filters&&t.expr.filters.animated&&(t.expr.filters.animated=function(e){return function(i){return!!t(i).data(d)||e(i)}}(t.expr.filters.animated)),t.uiBackCompat!==!1&&t.extend(t.effects,{save:function(t,e){for(var i=0,s=e.length;s>i;i++)null!==e[i]&&t.data(c+e[i],t[0].style[e[i]])},restore:function(t,e){for(var i,s=0,n=e.length;n>s;s++)null!==e[s]&&(i=t.data(c+e[s]),t.css(e[s],i))},setMode:function(t,e){return"toggle"===e&&(e=t.is(":hidden")?"show":"hide"),e},createWrapper:function(e){if(e.parent().is(".ui-effects-wrapper"))return e.parent();var i={width:e.outerWidth(!0),height:e.outerHeight(!0),"float":e.css("float")},s=t("

    ").addClass("ui-effects-wrapper").css({fontSize:"100%",background:"transparent",border:"none",margin:0,padding:0}),n={width:e.width(),height:e.height()},o=document.activeElement;try{o.id}catch(a){o=document.body}return e.wrap(s),(e[0]===o||t.contains(e[0],o))&&t(o).trigger("focus"),s=e.parent(),"static"===e.css("position")?(s.css({position:"relative"}),e.css({position:"relative"})):(t.extend(i,{position:e.css("position"),zIndex:e.css("z-index")}),t.each(["top","left","bottom","right"],function(t,s){i[s]=e.css(s),isNaN(parseInt(i[s],10))&&(i[s]="auto")}),e.css({position:"relative",top:0,left:0,right:"auto",bottom:"auto"})),e.css(n),s.css(i).show()},removeWrapper:function(e){var i=document.activeElement;return e.parent().is(".ui-effects-wrapper")&&(e.parent().replaceWith(e),(e[0]===i||t.contains(e[0],i))&&t(i).trigger("focus")),e}}),t.extend(t.effects,{version:"1.12.1",define:function(e,i,s){return s||(s=i,i="effect"),t.effects.effect[e]=s,t.effects.effect[e].mode=i,s},scaledDimensions:function(t,e,i){if(0===e)return{height:0,width:0,outerHeight:0,outerWidth:0};var s="horizontal"!==i?(e||100)/100:1,n="vertical"!==i?(e||100)/100:1;return{height:t.height()*n,width:t.width()*s,outerHeight:t.outerHeight()*n,outerWidth:t.outerWidth()*s}},clipToBox:function(t){return{width:t.clip.right-t.clip.left,height:t.clip.bottom-t.clip.top,left:t.clip.left,top:t.clip.top}},unshift:function(t,e,i){var s=t.queue();e>1&&s.splice.apply(s,[1,0].concat(s.splice(e,i))),t.dequeue()},saveStyle:function(t){t.data(u,t[0].style.cssText)},restoreStyle:function(t){t[0].style.cssText=t.data(u)||"",t.removeData(u)},mode:function(t,e){var i=t.is(":hidden");return"toggle"===e&&(e=i?"show":"hide"),(i?"hide"===e:"show"===e)&&(e="none"),e},getBaseline:function(t,e){var i,s;switch(t[0]){case"top":i=0;break;case"middle":i=.5;break;case"bottom":i=1;break;default:i=t[0]/e.height}switch(t[1]){case"left":s=0;break;case"center":s=.5;break;case"right":s=1;break;default:s=t[1]/e.width}return{x:s,y:i}},createPlaceholder:function(e){var i,s=e.css("position"),n=e.position();return e.css({marginTop:e.css("marginTop"),marginBottom:e.css("marginBottom"),marginLeft:e.css("marginLeft"),marginRight:e.css("marginRight")}).outerWidth(e.outerWidth()).outerHeight(e.outerHeight()),/^(static|relative)/.test(s)&&(s="absolute",i=t("<"+e[0].nodeName+">").insertAfter(e).css({display:/^(inline|ruby)/.test(e.css("display"))?"inline-block":"block",visibility:"hidden",marginTop:e.css("marginTop"),marginBottom:e.css("marginBottom"),marginLeft:e.css("marginLeft"),marginRight:e.css("marginRight"),"float":e.css("float")}).outerWidth(e.outerWidth()).outerHeight(e.outerHeight()).addClass("ui-effects-placeholder"),e.data(c+"placeholder",i)),e.css({position:s,left:n.left,top:n.top}),i},removePlaceholder:function(t){var e=c+"placeholder",i=t.data(e);i&&(i.remove(),t.removeData(e))},cleanUp:function(e){t.effects.restoreStyle(e),t.effects.removePlaceholder(e)},setTransition:function(e,i,s,n){return n=n||{},t.each(i,function(t,i){var o=e.cssUnit(i);o[0]>0&&(n[i]=o[0]*s+o[1])}),n}}),t.fn.extend({effect:function(){function i(e){function i(){r.removeData(d),t.effects.cleanUp(r),"hide"===s.mode&&r.hide(),a()}function a(){t.isFunction(h)&&h.call(r[0]),t.isFunction(e)&&e()}var r=t(this);s.mode=c.shift(),t.uiBackCompat===!1||o?"none"===s.mode?(r[l](),a()):n.call(r[0],s,i):(r.is(":hidden")?"hide"===l:"show"===l)?(r[l](),a()):n.call(r[0],s,a)}var s=e.apply(this,arguments),n=t.effects.effect[s.effect],o=n.mode,a=s.queue,r=a||"fx",h=s.complete,l=s.mode,c=[],u=function(e){var 
i=t(this),s=t.effects.mode(i,l)||o;i.data(d,!0),c.push(s),o&&("show"===s||s===o&&"hide"===s)&&i.show(),o&&"none"===s||t.effects.saveStyle(i),t.isFunction(e)&&e()};return t.fx.off||!n?l?this[l](s.duration,h):this.each(function(){h&&h.call(this)}):a===!1?this.each(u).each(i):this.queue(r,u).queue(r,i)},show:function(t){return function(s){if(i(s))return t.apply(this,arguments);var n=e.apply(this,arguments);return n.mode="show",this.effect.call(this,n) -}}(t.fn.show),hide:function(t){return function(s){if(i(s))return t.apply(this,arguments);var n=e.apply(this,arguments);return n.mode="hide",this.effect.call(this,n)}}(t.fn.hide),toggle:function(t){return function(s){if(i(s)||"boolean"==typeof s)return t.apply(this,arguments);var n=e.apply(this,arguments);return n.mode="toggle",this.effect.call(this,n)}}(t.fn.toggle),cssUnit:function(e){var i=this.css(e),s=[];return t.each(["em","px","%","pt"],function(t,e){i.indexOf(e)>0&&(s=[parseFloat(i),e])}),s},cssClip:function(t){return t?this.css("clip","rect("+t.top+"px "+t.right+"px "+t.bottom+"px "+t.left+"px)"):s(this.css("clip"),this)},transfer:function(e,i){var s=t(this),n=t(e.to),o="fixed"===n.css("position"),a=t("body"),r=o?a.scrollTop():0,h=o?a.scrollLeft():0,l=n.offset(),c={top:l.top-r,left:l.left-h,height:n.innerHeight(),width:n.innerWidth()},u=s.offset(),d=t("
    ").appendTo("body").addClass(e.className).css({top:u.top-r,left:u.left-h,height:s.innerHeight(),width:s.innerWidth(),position:o?"fixed":"absolute"}).animate(c,e.duration,e.easing,function(){d.remove(),t.isFunction(i)&&i()})}}),t.fx.step.clip=function(e){e.clipInit||(e.start=t(e.elem).cssClip(),"string"==typeof e.end&&(e.end=s(e.end,e.elem)),e.clipInit=!0),t(e.elem).cssClip({top:e.pos*(e.end.top-e.start.top)+e.start.top,right:e.pos*(e.end.right-e.start.right)+e.start.right,bottom:e.pos*(e.end.bottom-e.start.bottom)+e.start.bottom,left:e.pos*(e.end.left-e.start.left)+e.start.left})}}(),function(){var e={};t.each(["Quad","Cubic","Quart","Quint","Expo"],function(t,i){e[i]=function(e){return Math.pow(e,t+2)}}),t.extend(e,{Sine:function(t){return 1-Math.cos(t*Math.PI/2)},Circ:function(t){return 1-Math.sqrt(1-t*t)},Elastic:function(t){return 0===t||1===t?t:-Math.pow(2,8*(t-1))*Math.sin((80*(t-1)-7.5)*Math.PI/15)},Back:function(t){return t*t*(3*t-2)},Bounce:function(t){for(var e,i=4;((e=Math.pow(2,--i))-1)/11>t;);return 1/Math.pow(4,3-i)-7.5625*Math.pow((3*e-2)/22-t,2)}}),t.each(e,function(e,i){t.easing["easeIn"+e]=i,t.easing["easeOut"+e]=function(t){return 1-i(1-t)},t.easing["easeInOut"+e]=function(t){return.5>t?i(2*t)/2:1-i(-2*t+2)/2}})}();var f=t.effects;t.effects.define("blind","hide",function(e,i){var s={up:["bottom","top"],vertical:["bottom","top"],down:["top","bottom"],left:["right","left"],horizontal:["right","left"],right:["left","right"]},n=t(this),o=e.direction||"up",a=n.cssClip(),r={clip:t.extend({},a)},h=t.effects.createPlaceholder(n);r.clip[s[o][0]]=r.clip[s[o][1]],"show"===e.mode&&(n.cssClip(r.clip),h&&h.css(t.effects.clipToBox(r)),r.clip=a),h&&h.animate(t.effects.clipToBox(r),e.duration,e.easing),n.animate(r,{queue:!1,duration:e.duration,easing:e.easing,complete:i})}),t.effects.define("bounce",function(e,i){var s,n,o,a=t(this),r=e.mode,h="hide"===r,l="show"===r,c=e.direction||"up",u=e.distance,d=e.times||5,p=2*d+(l||h?1:0),f=e.duration/p,g=e.easing,m="up"===c||"down"===c?"top":"left",_="up"===c||"left"===c,v=0,b=a.queue().length;for(t.effects.createPlaceholder(a),o=a.css(m),u||(u=a["top"===m?"outerHeight":"outerWidth"]()/3),l&&(n={opacity:1},n[m]=o,a.css("opacity",0).css(m,_?2*-u:2*u).animate(n,f,g)),h&&(u/=Math.pow(2,d-1)),n={},n[m]=o;d>v;v++)s={},s[m]=(_?"-=":"+=")+u,a.animate(s,f,g).animate(n,f,g),u=h?2*u:u/2;h&&(s={opacity:0},s[m]=(_?"-=":"+=")+u,a.animate(s,f,g)),a.queue(i),t.effects.unshift(a,b,p+1)}),t.effects.define("clip","hide",function(e,i){var s,n={},o=t(this),a=e.direction||"vertical",r="both"===a,h=r||"horizontal"===a,l=r||"vertical"===a;s=o.cssClip(),n.clip={top:l?(s.bottom-s.top)/2:s.top,right:h?(s.right-s.left)/2:s.right,bottom:l?(s.bottom-s.top)/2:s.bottom,left:h?(s.right-s.left)/2:s.left},t.effects.createPlaceholder(o),"show"===e.mode&&(o.cssClip(n.clip),n.clip=s),o.animate(n,{queue:!1,duration:e.duration,easing:e.easing,complete:i})}),t.effects.define("drop","hide",function(e,i){var s,n=t(this),o=e.mode,a="show"===o,r=e.direction||"left",h="up"===r||"down"===r?"top":"left",l="up"===r||"left"===r?"-=":"+=",c="+="===l?"-=":"+=",u={opacity:0};t.effects.createPlaceholder(n),s=e.distance||n["top"===h?"outerHeight":"outerWidth"](!0)/2,u[h]=l+s,a&&(n.css(u),u[h]=c+s,u.opacity=1),n.animate(u,{queue:!1,duration:e.duration,easing:e.easing,complete:i})}),t.effects.define("explode","hide",function(e,i){function s(){b.push(this),b.length===u*d&&n()}function n(){p.css({visibility:"visible"}),t(b).remove(),i()}var 
o,a,r,h,l,c,u=e.pieces?Math.round(Math.sqrt(e.pieces)):3,d=u,p=t(this),f=e.mode,g="show"===f,m=p.show().css("visibility","hidden").offset(),_=Math.ceil(p.outerWidth()/d),v=Math.ceil(p.outerHeight()/u),b=[];for(o=0;u>o;o++)for(h=m.top+o*v,c=o-(u-1)/2,a=0;d>a;a++)r=m.left+a*_,l=a-(d-1)/2,p.clone().appendTo("body").wrap("
    ").css({position:"absolute",visibility:"visible",left:-a*_,top:-o*v}).parent().addClass("ui-effects-explode").css({position:"absolute",overflow:"hidden",width:_,height:v,left:r+(g?l*_:0),top:h+(g?c*v:0),opacity:g?0:1}).animate({left:r+(g?0:l*_),top:h+(g?0:c*v),opacity:g?1:0},e.duration||500,e.easing,s)}),t.effects.define("fade","toggle",function(e,i){var s="show"===e.mode;t(this).css("opacity",s?0:1).animate({opacity:s?1:0},{queue:!1,duration:e.duration,easing:e.easing,complete:i})}),t.effects.define("fold","hide",function(e,i){var s=t(this),n=e.mode,o="show"===n,a="hide"===n,r=e.size||15,h=/([0-9]+)%/.exec(r),l=!!e.horizFirst,c=l?["right","bottom"]:["bottom","right"],u=e.duration/2,d=t.effects.createPlaceholder(s),p=s.cssClip(),f={clip:t.extend({},p)},g={clip:t.extend({},p)},m=[p[c[0]],p[c[1]]],_=s.queue().length;h&&(r=parseInt(h[1],10)/100*m[a?0:1]),f.clip[c[0]]=r,g.clip[c[0]]=r,g.clip[c[1]]=0,o&&(s.cssClip(g.clip),d&&d.css(t.effects.clipToBox(g)),g.clip=p),s.queue(function(i){d&&d.animate(t.effects.clipToBox(f),u,e.easing).animate(t.effects.clipToBox(g),u,e.easing),i()}).animate(f,u,e.easing).animate(g,u,e.easing).queue(i),t.effects.unshift(s,_,4)}),t.effects.define("highlight","show",function(e,i){var s=t(this),n={backgroundColor:s.css("backgroundColor")};"hide"===e.mode&&(n.opacity=0),t.effects.saveStyle(s),s.css({backgroundImage:"none",backgroundColor:e.color||"#ffff99"}).animate(n,{queue:!1,duration:e.duration,easing:e.easing,complete:i})}),t.effects.define("size",function(e,i){var s,n,o,a=t(this),r=["fontSize"],h=["borderTopWidth","borderBottomWidth","paddingTop","paddingBottom"],l=["borderLeftWidth","borderRightWidth","paddingLeft","paddingRight"],c=e.mode,u="effect"!==c,d=e.scale||"both",p=e.origin||["middle","center"],f=a.css("position"),g=a.position(),m=t.effects.scaledDimensions(a),_=e.from||m,v=e.to||t.effects.scaledDimensions(a,0);t.effects.createPlaceholder(a),"show"===c&&(o=_,_=v,v=o),n={from:{y:_.height/m.height,x:_.width/m.width},to:{y:v.height/m.height,x:v.width/m.width}},("box"===d||"both"===d)&&(n.from.y!==n.to.y&&(_=t.effects.setTransition(a,h,n.from.y,_),v=t.effects.setTransition(a,h,n.to.y,v)),n.from.x!==n.to.x&&(_=t.effects.setTransition(a,l,n.from.x,_),v=t.effects.setTransition(a,l,n.to.x,v))),("content"===d||"both"===d)&&n.from.y!==n.to.y&&(_=t.effects.setTransition(a,r,n.from.y,_),v=t.effects.setTransition(a,r,n.to.y,v)),p&&(s=t.effects.getBaseline(p,m),_.top=(m.outerHeight-_.outerHeight)*s.y+g.top,_.left=(m.outerWidth-_.outerWidth)*s.x+g.left,v.top=(m.outerHeight-v.outerHeight)*s.y+g.top,v.left=(m.outerWidth-v.outerWidth)*s.x+g.left),a.css(_),("content"===d||"both"===d)&&(h=h.concat(["marginTop","marginBottom"]).concat(r),l=l.concat(["marginLeft","marginRight"]),a.find("*[width]").each(function(){var i=t(this),s=t.effects.scaledDimensions(i),o={height:s.height*n.from.y,width:s.width*n.from.x,outerHeight:s.outerHeight*n.from.y,outerWidth:s.outerWidth*n.from.x},a={height:s.height*n.to.y,width:s.width*n.to.x,outerHeight:s.height*n.to.y,outerWidth:s.width*n.to.x};n.from.y!==n.to.y&&(o=t.effects.setTransition(i,h,n.from.y,o),a=t.effects.setTransition(i,h,n.to.y,a)),n.from.x!==n.to.x&&(o=t.effects.setTransition(i,l,n.from.x,o),a=t.effects.setTransition(i,l,n.to.x,a)),u&&t.effects.saveStyle(i),i.css(o),i.animate(a,e.duration,e.easing,function(){u&&t.effects.restoreStyle(i)})})),a.animate(v,{queue:!1,duration:e.duration,easing:e.easing,complete:function(){var 
e=a.offset();0===v.opacity&&a.css("opacity",_.opacity),u||(a.css("position","static"===f?"relative":f).offset(e),t.effects.saveStyle(a)),i()}})}),t.effects.define("scale",function(e,i){var s=t(this),n=e.mode,o=parseInt(e.percent,10)||(0===parseInt(e.percent,10)?0:"effect"!==n?0:100),a=t.extend(!0,{from:t.effects.scaledDimensions(s),to:t.effects.scaledDimensions(s,o,e.direction||"both"),origin:e.origin||["middle","center"]},e);e.fade&&(a.from.opacity=1,a.to.opacity=0),t.effects.effect.size.call(this,a,i)}),t.effects.define("puff","hide",function(e,i){var s=t.extend(!0,{},e,{fade:!0,percent:parseInt(e.percent,10)||150});t.effects.effect.scale.call(this,s,i)}),t.effects.define("pulsate","show",function(e,i){var s=t(this),n=e.mode,o="show"===n,a="hide"===n,r=o||a,h=2*(e.times||5)+(r?1:0),l=e.duration/h,c=0,u=1,d=s.queue().length;for((o||!s.is(":visible"))&&(s.css("opacity",0).show(),c=1);h>u;u++)s.animate({opacity:c},l,e.easing),c=1-c;s.animate({opacity:c},l,e.easing),s.queue(i),t.effects.unshift(s,d,h+1)}),t.effects.define("shake",function(e,i){var s=1,n=t(this),o=e.direction||"left",a=e.distance||20,r=e.times||3,h=2*r+1,l=Math.round(e.duration/h),c="up"===o||"down"===o?"top":"left",u="up"===o||"left"===o,d={},p={},f={},g=n.queue().length;for(t.effects.createPlaceholder(n),d[c]=(u?"-=":"+=")+a,p[c]=(u?"+=":"-=")+2*a,f[c]=(u?"-=":"+=")+2*a,n.animate(d,l,e.easing);r>s;s++)n.animate(p,l,e.easing).animate(f,l,e.easing);n.animate(p,l,e.easing).animate(d,l/2,e.easing).queue(i),t.effects.unshift(n,g,h+1)}),t.effects.define("slide","show",function(e,i){var s,n,o=t(this),a={up:["bottom","top"],down:["top","bottom"],left:["right","left"],right:["left","right"]},r=e.mode,h=e.direction||"left",l="up"===h||"down"===h?"top":"left",c="up"===h||"left"===h,u=e.distance||o["top"===l?"outerHeight":"outerWidth"](!0),d={};t.effects.createPlaceholder(o),s=o.cssClip(),n=o.position()[l],d[l]=(c?-1:1)*u+n,d.clip=o.cssClip(),d.clip[a[h][1]]=d.clip[a[h][0]],"show"===r&&(o.cssClip(d.clip),o.css(l,d[l]),d.clip=s,d[l]=n),o.animate(d,{queue:!1,duration:e.duration,easing:e.easing,complete:i})});var f;t.uiBackCompat!==!1&&(f=t.effects.define("transfer",function(e,i){t(this).transfer(e,i)})),t.ui.focusable=function(i,s){var n,o,a,r,h,l=i.nodeName.toLowerCase();return"area"===l?(n=i.parentNode,o=n.name,i.href&&o&&"map"===n.nodeName.toLowerCase()?(a=t("img[usemap='#"+o+"']"),a.length>0&&a.is(":visible")):!1):(/^(input|select|textarea|button|object)$/.test(l)?(r=!i.disabled,r&&(h=t(i).closest("fieldset")[0],h&&(r=!h.disabled))):r="a"===l?i.href||s:s,r&&t(i).is(":visible")&&e(t(i)))},t.extend(t.expr[":"],{focusable:function(e){return t.ui.focusable(e,null!=t.attr(e,"tabindex"))}}),t.ui.focusable,t.fn.form=function(){return"string"==typeof this[0].form?this.closest("form"):t(this[0].form)},t.ui.formResetMixin={_formResetHandler:function(){var e=t(this);setTimeout(function(){var i=e.data("ui-form-reset-instances");t.each(i,function(){this.refresh()})})},_bindFormResetHandler:function(){if(this.form=this.element.form(),this.form.length){var t=this.form.data("ui-form-reset-instances")||[];t.length||this.form.on("reset.ui-form-reset",this._formResetHandler),t.push(this),this.form.data("ui-form-reset-instances",t)}},_unbindFormResetHandler:function(){if(this.form.length){var 
e=this.form.data("ui-form-reset-instances");e.splice(t.inArray(this,e),1),e.length?this.form.data("ui-form-reset-instances",e):this.form.removeData("ui-form-reset-instances").off("reset.ui-form-reset")}}},"1.7"===t.fn.jquery.substring(0,3)&&(t.each(["Width","Height"],function(e,i){function s(e,i,s,o){return t.each(n,function(){i-=parseFloat(t.css(e,"padding"+this))||0,s&&(i-=parseFloat(t.css(e,"border"+this+"Width"))||0),o&&(i-=parseFloat(t.css(e,"margin"+this))||0)}),i}var n="Width"===i?["Left","Right"]:["Top","Bottom"],o=i.toLowerCase(),a={innerWidth:t.fn.innerWidth,innerHeight:t.fn.innerHeight,outerWidth:t.fn.outerWidth,outerHeight:t.fn.outerHeight};t.fn["inner"+i]=function(e){return void 0===e?a["inner"+i].call(this):this.each(function(){t(this).css(o,s(this,e)+"px")})},t.fn["outer"+i]=function(e,n){return"number"!=typeof e?a["outer"+i].call(this,e):this.each(function(){t(this).css(o,s(this,e,!0,n)+"px")})}}),t.fn.addBack=function(t){return this.add(null==t?this.prevObject:this.prevObject.filter(t))}),t.ui.keyCode={BACKSPACE:8,COMMA:188,DELETE:46,DOWN:40,END:35,ENTER:13,ESCAPE:27,HOME:36,LEFT:37,PAGE_DOWN:34,PAGE_UP:33,PERIOD:190,RIGHT:39,SPACE:32,TAB:9,UP:38},t.ui.escapeSelector=function(){var t=/([!"#$%&'()*+,.\/:;<=>?@[\]^`{|}~])/g;return function(e){return e.replace(t,"\\$1")}}(),t.fn.labels=function(){var e,i,s,n,o;return this[0].labels&&this[0].labels.length?this.pushStack(this[0].labels):(n=this.eq(0).parents("label"),s=this.attr("id"),s&&(e=this.eq(0).parents().last(),o=e.add(e.length?e.siblings():this.siblings()),i="label[for='"+t.ui.escapeSelector(s)+"']",n=n.add(o.find(i).addBack(i))),this.pushStack(n))},t.fn.scrollParent=function(e){var i=this.css("position"),s="absolute"===i,n=e?/(auto|scroll|hidden)/:/(auto|scroll)/,o=this.parents().filter(function(){var e=t(this);return s&&"static"===e.css("position")?!1:n.test(e.css("overflow")+e.css("overflow-y")+e.css("overflow-x"))}).eq(0);return"fixed"!==i&&o.length?o:t(this[0].ownerDocument||document)},t.extend(t.expr[":"],{tabbable:function(e){var i=t.attr(e,"tabindex"),s=null!=i;return(!s||i>=0)&&t.ui.focusable(e,s)}}),t.fn.extend({uniqueId:function(){var t=0;return function(){return this.each(function(){this.id||(this.id="ui-id-"+ ++t)})}}(),removeUniqueId:function(){return this.each(function(){/^ui-id-\d+$/.test(this.id)&&t(this).removeAttr("id")})}}),t.widget("ui.accordion",{version:"1.12.1",options:{active:0,animate:{},classes:{"ui-accordion-header":"ui-corner-top","ui-accordion-header-collapsed":"ui-corner-all","ui-accordion-content":"ui-corner-bottom"},collapsible:!1,event:"click",header:"> li > :first-child, > :not(li):even",heightStyle:"auto",icons:{activeHeader:"ui-icon-triangle-1-s",header:"ui-icon-triangle-1-e"},activate:null,beforeActivate:null},hideProps:{borderTopWidth:"hide",borderBottomWidth:"hide",paddingTop:"hide",paddingBottom:"hide",height:"hide"},showProps:{borderTopWidth:"show",borderBottomWidth:"show",paddingTop:"show",paddingBottom:"show",height:"show"},_create:function(){var e=this.options;this.prevShow=this.prevHide=t(),this._addClass("ui-accordion","ui-widget ui-helper-reset"),this.element.attr("role","tablist"),e.collapsible||e.active!==!1&&null!=e.active||(e.active=0),this._processPanels(),0>e.active&&(e.active+=this.headers.length),this._refresh()},_getCreateEventData:function(){return{header:this.active,panel:this.active.length?this.active.next():t()}},_createIcons:function(){var e,i,s=this.options.icons;s&&(e=t(""),this._addClass(e,"ui-accordion-header-icon","ui-icon 
"+s.header),e.prependTo(this.headers),i=this.active.children(".ui-accordion-header-icon"),this._removeClass(i,s.header)._addClass(i,null,s.activeHeader)._addClass(this.headers,"ui-accordion-icons"))},_destroyIcons:function(){this._removeClass(this.headers,"ui-accordion-icons"),this.headers.children(".ui-accordion-header-icon").remove()},_destroy:function(){var t;this.element.removeAttr("role"),this.headers.removeAttr("role aria-expanded aria-selected aria-controls tabIndex").removeUniqueId(),this._destroyIcons(),t=this.headers.next().css("display","").removeAttr("role aria-hidden aria-labelledby").removeUniqueId(),"content"!==this.options.heightStyle&&t.css("height","")},_setOption:function(t,e){return"active"===t?(this._activate(e),void 0):("event"===t&&(this.options.event&&this._off(this.headers,this.options.event),this._setupEvents(e)),this._super(t,e),"collapsible"!==t||e||this.options.active!==!1||this._activate(0),"icons"===t&&(this._destroyIcons(),e&&this._createIcons()),void 0)},_setOptionDisabled:function(t){this._super(t),this.element.attr("aria-disabled",t),this._toggleClass(null,"ui-state-disabled",!!t),this._toggleClass(this.headers.add(this.headers.next()),null,"ui-state-disabled",!!t)},_keydown:function(e){if(!e.altKey&&!e.ctrlKey){var i=t.ui.keyCode,s=this.headers.length,n=this.headers.index(e.target),o=!1;switch(e.keyCode){case i.RIGHT:case i.DOWN:o=this.headers[(n+1)%s];break;case i.LEFT:case i.UP:o=this.headers[(n-1+s)%s];break;case i.SPACE:case i.ENTER:this._eventHandler(e);break;case i.HOME:o=this.headers[0];break;case i.END:o=this.headers[s-1]}o&&(t(e.target).attr("tabIndex",-1),t(o).attr("tabIndex",0),t(o).trigger("focus"),e.preventDefault())}},_panelKeyDown:function(e){e.keyCode===t.ui.keyCode.UP&&e.ctrlKey&&t(e.currentTarget).prev().trigger("focus")},refresh:function(){var e=this.options;this._processPanels(),e.active===!1&&e.collapsible===!0||!this.headers.length?(e.active=!1,this.active=t()):e.active===!1?this._activate(0):this.active.length&&!t.contains(this.element[0],this.active[0])?this.headers.length===this.headers.find(".ui-state-disabled").length?(e.active=!1,this.active=t()):this._activate(Math.max(0,e.active-1)):e.active=this.headers.index(this.active),this._destroyIcons(),this._refresh()},_processPanels:function(){var t=this.headers,e=this.panels;this.headers=this.element.find(this.options.header),this._addClass(this.headers,"ui-accordion-header ui-accordion-header-collapsed","ui-state-default"),this.panels=this.headers.next().filter(":not(.ui-accordion-content-active)").hide(),this._addClass(this.panels,"ui-accordion-content","ui-helper-reset ui-widget-content"),e&&(this._off(t.not(this.headers)),this._off(e.not(this.panels)))},_refresh:function(){var e,i=this.options,s=i.heightStyle,n=this.element.parent();this.active=this._findActive(i.active),this._addClass(this.active,"ui-accordion-header-active","ui-state-active")._removeClass(this.active,"ui-accordion-header-collapsed"),this._addClass(this.active.next(),"ui-accordion-content-active"),this.active.next().show(),this.headers.attr("role","tab").each(function(){var 
e=t(this),i=e.uniqueId().attr("id"),s=e.next(),n=s.uniqueId().attr("id");e.attr("aria-controls",n),s.attr("aria-labelledby",i)}).next().attr("role","tabpanel"),this.headers.not(this.active).attr({"aria-selected":"false","aria-expanded":"false",tabIndex:-1}).next().attr({"aria-hidden":"true"}).hide(),this.active.length?this.active.attr({"aria-selected":"true","aria-expanded":"true",tabIndex:0}).next().attr({"aria-hidden":"false"}):this.headers.eq(0).attr("tabIndex",0),this._createIcons(),this._setupEvents(i.event),"fill"===s?(e=n.height(),this.element.siblings(":visible").each(function(){var i=t(this),s=i.css("position");"absolute"!==s&&"fixed"!==s&&(e-=i.outerHeight(!0))}),this.headers.each(function(){e-=t(this).outerHeight(!0)}),this.headers.next().each(function(){t(this).height(Math.max(0,e-t(this).innerHeight()+t(this).height()))}).css("overflow","auto")):"auto"===s&&(e=0,this.headers.next().each(function(){var i=t(this).is(":visible");i||t(this).show(),e=Math.max(e,t(this).css("height","").height()),i||t(this).hide()}).height(e))},_activate:function(e){var i=this._findActive(e)[0];i!==this.active[0]&&(i=i||this.active[0],this._eventHandler({target:i,currentTarget:i,preventDefault:t.noop}))},_findActive:function(e){return"number"==typeof e?this.headers.eq(e):t()},_setupEvents:function(e){var i={keydown:"_keydown"};e&&t.each(e.split(" "),function(t,e){i[e]="_eventHandler"}),this._off(this.headers.add(this.headers.next())),this._on(this.headers,i),this._on(this.headers.next(),{keydown:"_panelKeyDown"}),this._hoverable(this.headers),this._focusable(this.headers)},_eventHandler:function(e){var i,s,n=this.options,o=this.active,a=t(e.currentTarget),r=a[0]===o[0],h=r&&n.collapsible,l=h?t():a.next(),c=o.next(),u={oldHeader:o,oldPanel:c,newHeader:h?t():a,newPanel:l};e.preventDefault(),r&&!n.collapsible||this._trigger("beforeActivate",e,u)===!1||(n.active=h?!1:this.headers.index(a),this.active=r?t():a,this._toggle(u),this._removeClass(o,"ui-accordion-header-active","ui-state-active"),n.icons&&(i=o.children(".ui-accordion-header-icon"),this._removeClass(i,null,n.icons.activeHeader)._addClass(i,null,n.icons.header)),r||(this._removeClass(a,"ui-accordion-header-collapsed")._addClass(a,"ui-accordion-header-active","ui-state-active"),n.icons&&(s=a.children(".ui-accordion-header-icon"),this._removeClass(s,null,n.icons.header)._addClass(s,null,n.icons.activeHeader)),this._addClass(a.next(),"ui-accordion-content-active")))},_toggle:function(e){var i=e.newPanel,s=this.prevShow.length?this.prevShow:e.oldPanel;this.prevShow.add(this.prevHide).stop(!0,!0),this.prevShow=i,this.prevHide=s,this.options.animate?this._animate(i,s,e):(s.hide(),i.show(),this._toggleComplete(e)),s.attr({"aria-hidden":"true"}),s.prev().attr({"aria-selected":"false","aria-expanded":"false"}),i.length&&s.length?s.prev().attr({tabIndex:-1,"aria-expanded":"false"}):i.length&&this.headers.filter(function(){return 0===parseInt(t(this).attr("tabIndex"),10)}).attr("tabIndex",-1),i.attr("aria-hidden","false").prev().attr({"aria-selected":"true","aria-expanded":"true",tabIndex:0})},_animate:function(t,e,i){var s,n,o,a=this,r=0,h=t.css("box-sizing"),l=t.length&&(!e.length||t.index()",delay:300,options:{icons:{submenu:"ui-icon-caret-1-e"},items:"> *",menus:"ul",position:{my:"left top",at:"right top"},role:"menu",blur:null,focus:null,select:null},_create:function(){this.activeMenu=this.element,this.mouseHandled=!1,this.element.uniqueId().attr({role:this.options.role,tabIndex:0}),this._addClass("ui-menu","ui-widget 
ui-widget-content"),this._on({"mousedown .ui-menu-item":function(t){t.preventDefault()},"click .ui-menu-item":function(e){var i=t(e.target),s=t(t.ui.safeActiveElement(this.document[0]));!this.mouseHandled&&i.not(".ui-state-disabled").length&&(this.select(e),e.isPropagationStopped()||(this.mouseHandled=!0),i.has(".ui-menu").length?this.expand(e):!this.element.is(":focus")&&s.closest(".ui-menu").length&&(this.element.trigger("focus",[!0]),this.active&&1===this.active.parents(".ui-menu").length&&clearTimeout(this.timer)))},"mouseenter .ui-menu-item":function(e){if(!this.previousFilter){var i=t(e.target).closest(".ui-menu-item"),s=t(e.currentTarget);i[0]===s[0]&&(this._removeClass(s.siblings().children(".ui-state-active"),null,"ui-state-active"),this.focus(e,s))}},mouseleave:"collapseAll","mouseleave .ui-menu":"collapseAll",focus:function(t,e){var i=this.active||this.element.find(this.options.items).eq(0);e||this.focus(t,i)},blur:function(e){this._delay(function(){var i=!t.contains(this.element[0],t.ui.safeActiveElement(this.document[0]));i&&this.collapseAll(e)})},keydown:"_keydown"}),this.refresh(),this._on(this.document,{click:function(t){this._closeOnDocumentClick(t)&&this.collapseAll(t),this.mouseHandled=!1}})},_destroy:function(){var e=this.element.find(".ui-menu-item").removeAttr("role aria-disabled"),i=e.children(".ui-menu-item-wrapper").removeUniqueId().removeAttr("tabIndex role aria-haspopup");this.element.removeAttr("aria-activedescendant").find(".ui-menu").addBack().removeAttr("role aria-labelledby aria-expanded aria-hidden aria-disabled tabIndex").removeUniqueId().show(),i.children().each(function(){var e=t(this);e.data("ui-menu-submenu-caret")&&e.remove()})},_keydown:function(e){var i,s,n,o,a=!0;switch(e.keyCode){case t.ui.keyCode.PAGE_UP:this.previousPage(e);break;case t.ui.keyCode.PAGE_DOWN:this.nextPage(e);break;case t.ui.keyCode.HOME:this._move("first","first",e);break;case t.ui.keyCode.END:this._move("last","last",e);break;case t.ui.keyCode.UP:this.previous(e);break;case t.ui.keyCode.DOWN:this.next(e);break;case t.ui.keyCode.LEFT:this.collapse(e);break;case t.ui.keyCode.RIGHT:this.active&&!this.active.is(".ui-state-disabled")&&this.expand(e);break;case t.ui.keyCode.ENTER:case t.ui.keyCode.SPACE:this._activate(e);break;case t.ui.keyCode.ESCAPE:this.collapse(e);break;default:a=!1,s=this.previousFilter||"",o=!1,n=e.keyCode>=96&&105>=e.keyCode?""+(e.keyCode-96):String.fromCharCode(e.keyCode),clearTimeout(this.filterTimer),n===s?o=!0:n=s+n,i=this._filterMenuItems(n),i=o&&-1!==i.index(this.active.next())?this.active.nextAll(".ui-menu-item"):i,i.length||(n=String.fromCharCode(e.keyCode),i=this._filterMenuItems(n)),i.length?(this.focus(e,i),this.previousFilter=n,this.filterTimer=this._delay(function(){delete this.previousFilter},1e3)):delete this.previousFilter}a&&e.preventDefault()},_activate:function(t){this.active&&!this.active.is(".ui-state-disabled")&&(this.active.children("[aria-haspopup='true']").length?this.expand(t):this.select(t))},refresh:function(){var e,i,s,n,o,a=this,r=this.options.icons.submenu,h=this.element.find(this.options.menus);this._toggleClass("ui-menu-icons",null,!!this.element.find(".ui-icon").length),s=h.filter(":not(.ui-menu)").hide().attr({role:this.options.role,"aria-hidden":"true","aria-expanded":"false"}).each(function(){var e=t(this),i=e.prev(),s=t("").data("ui-menu-submenu-caret",!0);a._addClass(s,"ui-menu-icon","ui-icon "+r),i.attr("aria-haspopup","true").prepend(s),e.attr("aria-labelledby",i.attr("id"))}),this._addClass(s,"ui-menu","ui-widget 
ui-widget-content ui-front"),e=h.add(this.element),i=e.find(this.options.items),i.not(".ui-menu-item").each(function(){var e=t(this);a._isDivider(e)&&a._addClass(e,"ui-menu-divider","ui-widget-content")}),n=i.not(".ui-menu-item, .ui-menu-divider"),o=n.children().not(".ui-menu").uniqueId().attr({tabIndex:-1,role:this._itemRole()}),this._addClass(n,"ui-menu-item")._addClass(o,"ui-menu-item-wrapper"),i.filter(".ui-state-disabled").attr("aria-disabled","true"),this.active&&!t.contains(this.element[0],this.active[0])&&this.blur()},_itemRole:function(){return{menu:"menuitem",listbox:"option"}[this.options.role]},_setOption:function(t,e){if("icons"===t){var i=this.element.find(".ui-menu-icon");this._removeClass(i,null,this.options.icons.submenu)._addClass(i,null,e.submenu)}this._super(t,e)},_setOptionDisabled:function(t){this._super(t),this.element.attr("aria-disabled",t+""),this._toggleClass(null,"ui-state-disabled",!!t)},focus:function(t,e){var i,s,n;this.blur(t,t&&"focus"===t.type),this._scrollIntoView(e),this.active=e.first(),s=this.active.children(".ui-menu-item-wrapper"),this._addClass(s,null,"ui-state-active"),this.options.role&&this.element.attr("aria-activedescendant",s.attr("id")),n=this.active.parent().closest(".ui-menu-item").children(".ui-menu-item-wrapper"),this._addClass(n,null,"ui-state-active"),t&&"keydown"===t.type?this._close():this.timer=this._delay(function(){this._close()},this.delay),i=e.children(".ui-menu"),i.length&&t&&/^mouse/.test(t.type)&&this._startOpening(i),this.activeMenu=e.parent(),this._trigger("focus",t,{item:e})},_scrollIntoView:function(e){var i,s,n,o,a,r;this._hasScroll()&&(i=parseFloat(t.css(this.activeMenu[0],"borderTopWidth"))||0,s=parseFloat(t.css(this.activeMenu[0],"paddingTop"))||0,n=e.offset().top-this.activeMenu.offset().top-i-s,o=this.activeMenu.scrollTop(),a=this.activeMenu.height(),r=e.outerHeight(),0>n?this.activeMenu.scrollTop(o+n):n+r>a&&this.activeMenu.scrollTop(o+n-a+r))},blur:function(t,e){e||clearTimeout(this.timer),this.active&&(this._removeClass(this.active.children(".ui-menu-item-wrapper"),null,"ui-state-active"),this._trigger("blur",t,{item:this.active}),this.active=null)},_startOpening:function(t){clearTimeout(this.timer),"true"===t.attr("aria-hidden")&&(this.timer=this._delay(function(){this._close(),this._open(t)},this.delay))},_open:function(e){var i=t.extend({of:this.active},this.options.position);clearTimeout(this.timer),this.element.find(".ui-menu").not(e.parents(".ui-menu")).hide().attr("aria-hidden","true"),e.show().removeAttr("aria-hidden").attr("aria-expanded","true").position(i)},collapseAll:function(e,i){clearTimeout(this.timer),this.timer=this._delay(function(){var s=i?this.element:t(e&&e.target).closest(this.element.find(".ui-menu"));s.length||(s=this.element),this._close(s),this.blur(e),this._removeClass(s.find(".ui-state-active"),null,"ui-state-active"),this.activeMenu=s},this.delay)},_close:function(t){t||(t=this.active?this.active.parent():this.element),t.find(".ui-menu").hide().attr("aria-hidden","true").attr("aria-expanded","false")},_closeOnDocumentClick:function(e){return!t(e.target).closest(".ui-menu").length},_isDivider:function(t){return!/[^\-\u2014\u2013\s]/.test(t.text())},collapse:function(t){var e=this.active&&this.active.parent().closest(".ui-menu-item",this.element);e&&e.length&&(this._close(),this.focus(t,e))},expand:function(t){var e=this.active&&this.active.children(".ui-menu 
").find(this.options.items).first();e&&e.length&&(this._open(e.parent()),this._delay(function(){this.focus(t,e)}))},next:function(t){this._move("next","first",t)},previous:function(t){this._move("prev","last",t)},isFirstItem:function(){return this.active&&!this.active.prevAll(".ui-menu-item").length},isLastItem:function(){return this.active&&!this.active.nextAll(".ui-menu-item").length},_move:function(t,e,i){var s;this.active&&(s="first"===t||"last"===t?this.active["first"===t?"prevAll":"nextAll"](".ui-menu-item").eq(-1):this.active[t+"All"](".ui-menu-item").eq(0)),s&&s.length&&this.active||(s=this.activeMenu.find(this.options.items)[e]()),this.focus(i,s)},nextPage:function(e){var i,s,n;return this.active?(this.isLastItem()||(this._hasScroll()?(s=this.active.offset().top,n=this.element.height(),this.active.nextAll(".ui-menu-item").each(function(){return i=t(this),0>i.offset().top-s-n}),this.focus(e,i)):this.focus(e,this.activeMenu.find(this.options.items)[this.active?"last":"first"]())),void 0):(this.next(e),void 0)},previousPage:function(e){var i,s,n;return this.active?(this.isFirstItem()||(this._hasScroll()?(s=this.active.offset().top,n=this.element.height(),this.active.prevAll(".ui-menu-item").each(function(){return i=t(this),i.offset().top-s+n>0}),this.focus(e,i)):this.focus(e,this.activeMenu.find(this.options.items).first())),void 0):(this.next(e),void 0)},_hasScroll:function(){return this.element.outerHeight()",options:{appendTo:null,autoFocus:!1,delay:300,minLength:1,position:{my:"left top",at:"left bottom",collision:"none"},source:null,change:null,close:null,focus:null,open:null,response:null,search:null,select:null},requestIndex:0,pending:0,_create:function(){var e,i,s,n=this.element[0].nodeName.toLowerCase(),o="textarea"===n,a="input"===n; -this.isMultiLine=o||!a&&this._isContentEditable(this.element),this.valueMethod=this.element[o||a?"val":"text"],this.isNewMenu=!0,this._addClass("ui-autocomplete-input"),this.element.attr("autocomplete","off"),this._on(this.element,{keydown:function(n){if(this.element.prop("readOnly"))return e=!0,s=!0,i=!0,void 0;e=!1,s=!1,i=!1;var o=t.ui.keyCode;switch(n.keyCode){case o.PAGE_UP:e=!0,this._move("previousPage",n);break;case o.PAGE_DOWN:e=!0,this._move("nextPage",n);break;case o.UP:e=!0,this._keyEvent("previous",n);break;case o.DOWN:e=!0,this._keyEvent("next",n);break;case o.ENTER:this.menu.active&&(e=!0,n.preventDefault(),this.menu.select(n));break;case o.TAB:this.menu.active&&this.menu.select(n);break;case o.ESCAPE:this.menu.element.is(":visible")&&(this.isMultiLine||this._value(this.term),this.close(n),n.preventDefault());break;default:i=!0,this._searchTimeout(n)}},keypress:function(s){if(e)return e=!1,(!this.isMultiLine||this.menu.element.is(":visible"))&&s.preventDefault(),void 0;if(!i){var n=t.ui.keyCode;switch(s.keyCode){case n.PAGE_UP:this._move("previousPage",s);break;case n.PAGE_DOWN:this._move("nextPage",s);break;case n.UP:this._keyEvent("previous",s);break;case n.DOWN:this._keyEvent("next",s)}}},input:function(t){return s?(s=!1,t.preventDefault(),void 0):(this._searchTimeout(t),void 0)},focus:function(){this.selectedItem=null,this.previous=this._value()},blur:function(t){return this.cancelBlur?(delete this.cancelBlur,void 0):(clearTimeout(this.searching),this.close(t),this._change(t),void 0)}}),this._initSource(),this.menu=t("
     

    Choose your own difficulty and challenge

    -

    You can choose from four difficulty levels: easy, normal, hard, and hell. The higher the difficulty, the more enemies you will face and the stronger they will be. You can also choose from different modes such as endless mode, boss mode, or special mode. Each mode has its own rules and rewards. You can earn gold, gems, weapons, and heroes by completing the modes and achieving high scores.

    -


    -

    How to download and install King God Castle APK?

    -

    Download the APK file from a trusted source

    -

    To download King God Castle APK, you need to find a reliable source that offers the latest version of the game. You can use a search engine like Bing to find such sources, or download the APK file directly from a reputable APK site. Make sure you have enough storage space on your device before downloading the file.

    -

    Enable unknown sources on your device

    -

    To install King God Castle APK, you need to enable unknown sources on your device. This is because the game is not available on the official Google Play Store and you need to install it from a third-party source. To enable unknown sources, follow these steps:

    -
      -
    • Go to Settings > Security > Unknown Sources.
    • -
    • Toggle on the option to allow installation of apps from unknown sources.
    • -
    • Confirm your choice by tapping OK.
    • -
    • You can now install King God Castle APK on your device.
    • -
    -
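The steps above describe the global Unknown Sources toggle found on older Android releases; on Android 8.0 and later the permission is granted per app instead (typically under Settings > Apps > Special app access > Install unknown apps). For readers who build Android apps, the Kotlin sketch below shows roughly how that permission can be checked and requested programmatically. It is a generic illustration under the stated assumptions, not anything shipped with King God Castle.

```kotlin
import android.app.Activity
import android.content.Intent
import android.net.Uri
import android.os.Build
import android.provider.Settings

// Illustrative only: checks whether this app may install APKs it has downloaded.
// On Android 8.0+ the permission is per app; on older versions the global
// "Unknown sources" toggle described above governs it, so we defer to that.
// Assumes the manifest declares android.permission.REQUEST_INSTALL_PACKAGES.
fun canInstallUnknownApps(activity: Activity): Boolean =
    Build.VERSION.SDK_INT < Build.VERSION_CODES.O ||
        activity.packageManager.canRequestPackageInstalls()

// Opens the per-app "Install unknown apps" screen (Android 8.0+) so the user
// can grant the permission to this app.
fun openUnknownSourcesSettings(activity: Activity) {
    if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.O) {
        val intent = Intent(
            Settings.ACTION_MANAGE_UNKNOWN_APP_SOURCES,
            Uri.parse("package:${activity.packageName}")
        )
        activity.startActivity(intent)
    }
}
```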

    Install the APK file and launch the game

    -

    To install King God Castle APK on your device, follow these steps:

    -
      -
    • Locate the downloaded APK file on your device using a file manager app or your browser's downloads folder.
    • -
    • Tap on the file and select Install.
    • -
    • Wait for the installation process to finish.
    • -
    • Tap on Open to launch the game or find it on your home screen or app drawer.
    • -
    • You can now enjoy playing King God Castle APK on your device.
    • -
    -
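Before tapping Install, you can also sanity-check that the file you located really is an APK and see which version it contains; Android exposes this through PackageManager without installing anything. The Kotlin sketch below is a generic illustration, and the file path in the usage comment is a made-up placeholder rather than the game's real location.

```kotlin
import android.content.Context
import android.content.pm.PackageInfo

// Reads basic metadata (package name, version) from an APK file on disk
// without installing it.
fun describeApk(context: Context, apkPath: String): String {
    val info: PackageInfo? = context.packageManager.getPackageArchiveInfo(apkPath, 0)
    return if (info != null) {
        "package=${info.packageName}, version=${info.versionName}"
    } else {
        "Not a readable APK: $apkPath"
    }
}

// Hypothetical usage; the path is a placeholder:
// val summary = describeApk(context, "/sdcard/Download/king-god-castle.apk")
```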

    Conclusion

    -

    In conclusion, King God Castle APK is a strategy game for Android devices that will challenge your skills and luck. You have to defend your castle from various enemies using heroes that you can enhance and combine. You can also use the power of Most High to strengthen your heroes and borrow their righteous powers. The game has various modes and difficulties that you can choose from to challenge yourself and earn more rewards. You can download and install King God Castle APK by following the steps above. We hope you found this article helpful and informative. If you have any questions or feedback, please feel free to leave them in the comments section below.

    -

    FAQs

    -

    Here are some frequently asked questions about King God Castle APK:

    -
      -
    1. Is King God Castle APK safe to download and install?
    2. -

      Yes, King God Castle APK is safe to download and install as long as you get it from a trusted source. However, you should always be careful when downloading and installing apps from unknown sources as they may contain malware or viruses that can harm your device or compromise your privacy. You should also scan the APK file with an antivirus app before installing it.

      -
    3. Is King God Castle APK compatible with my device?
    4. -

      King God Castle APK is compatible with most Android devices that have Android 4.4 or higher and at least 2 GB of RAM. However, some devices may experience performance issues or compatibility problems depending on their specifications and settings. You can check the minimum and recommended requirements for the game on its official website or on the download page. (A small Kotlin sketch of this kind of check appears after the FAQ list below.)

      -
    5. How can I get more gold and gems in King God Castle APK?
    6. -

      Gold and gems are the main currencies in King God Castle APK that you can use to enhance and combine your heroes, buy weapons, and access other features. You can earn gold and gems by completing battles, modes, achievements, and daily quests. You can also watch ads or participate in events to get more rewards. Alternatively, you can purchase gold and gems with real money through in-app purchases.

      -
    7. How can I contact the developer of King God Castle APK?
    8. -

      If you have any questions, suggestions, feedback, or issues regarding King God Castle APK, you can contact the developer of the game through the following channels:

      -
        -
      • Email: awesomepiece@naver.com
      • -
      • Facebook: https://www.facebook.com/awesomepiece
      • -
      • Twitter: https://twitter.com/awesomepiece
      • -
      • Discord: https://discord.gg/6NttC4Q
      • -
      -
    9. What are some tips and tricks for playing King God Castle APK?
    10. -

      Here are some tips and tricks that can help you play King God Castle APK better:

      -
        -
      • Try different combinations of heroes and weapons to find the best strategy for each battle.
      • -
      • Use the power of Most High wisely as it has a cooldown time and a limited duration.
      • -
      • Upgrade your castle and altar to increase your defense and power.
      • -
      • Collect and use magic spells to deal with difficult situations.
      • -
      • Check the attributes and skills of your enemies and counter them accordingly.
      • -
      -
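As a footnote to the compatibility question above, the figures quoted there can be turned into a programmatic check. The Kotlin sketch below mirrors the numbers from this FAQ (Android 4.4, about 2 GB of RAM) plus an arbitrary 200 MB storage margin; the thresholds come from this article, not from any official specification.

```kotlin
import android.app.ActivityManager
import android.content.Context
import android.os.Build
import android.os.Environment
import android.os.StatFs

// Rough device check mirroring the requirements quoted in the FAQ above.
fun meetsQuotedRequirements(context: Context): Boolean {
    val osOk = Build.VERSION.SDK_INT >= Build.VERSION_CODES.KITKAT      // Android 4.4+

    val am = context.getSystemService(Context.ACTIVITY_SERVICE) as ActivityManager
    val memInfo = ActivityManager.MemoryInfo()
    am.getMemoryInfo(memInfo)
    val ramOk = memInfo.totalMem >= 2L * 1024 * 1024 * 1024             // about 2 GB RAM

    val stat = StatFs(Environment.getDataDirectory().path)
    val storageOk = stat.availableBytes > 200L * 1024 * 1024            // arbitrary 200 MB margin

    return osOk && ramOk && storageOk
}
```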

    -
    -
    \ No newline at end of file diff --git a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download AirDroid Personal and Discover the Multi-Screen Life.md b/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download AirDroid Personal and Discover the Multi-Screen Life.md deleted file mode 100644 index 09f7d127cf325c615644862ed635f1941ff6f058..0000000000000000000000000000000000000000 --- a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download AirDroid Personal and Discover the Multi-Screen Life.md +++ /dev/null @@ -1,127 +0,0 @@ - -

    How to Download AirDroid: The Ultimate Guide

    -

    Do you want to manage your Android devices from any computer, anywhere? Do you want to transfer files, mirror screens, control devices, and more with ease? If yes, then you need to download AirDroid, the best personal mobile device management suite.

    -

    AirDroid is a powerful tool that lets you access and manage your phone from any computer, anywhere. You can transfer files, mirror screens, control devices, and more with ease. Whether you need it for personal or business use, AirDroid has a solution for you.

    -

    -

    In this article, we will show you what AirDroid is, why you should download it, and how to download it for different platforms. Let's get started!

    -

    What is AirDroid?

    -

    AirDroid is a suite of products that help you manage your Android devices from any computer, anywhere. It has three main products:

    -

    AirDroid Personal

    -

    AirDroid Personal makes your multi-screen life easier and more focused by helping you access and manage your phone from any computer, anywhere. You can transfer files, mirror screens, control devices, and more with ease.

    -


    -

    AirDroid Business

    -

    AirDroid Business is a device management solution that helps you remotely manage thousands of Android devices from a web-based dashboard. You can deploy apps, track locations, set policies, and more with ease.

    -

    AirDroid Remote Support

    -

    AirDroid Remote Support is a remote assistance software that helps you provide IT support to your customers or employees. You can remotely control devices, share screens, chat, and more with ease.

    -

    Why Download AirDroid?

    -

    Downloading AirDroid can bring you many benefits, such as:

    -

    Benefits of AirDroid

    -

    File Transfer and Management

    -

    You can transfer files between your phone and computer wirelessly and quickly. You can also manage your phone's files from your computer's browser or desktop client.

    -

    Screen Mirroring and Remote Control

    -

    You can mirror your phone's screen to your computer's screen and control it with your mouse and keyboard. You can also use your phone as a remote control for another phone or tablet.

    -

    Device Management and Security

    -

    You can manage multiple devices from a web-based dashboard. You can deploy apps, track locations, set policies, and more. You can also secure your devices with features like remote lock, wipe, or ring.

    -

    Features of AirDroid

    -

    AirDroid has many features that make it easy and convenient to use. Some of them are:

    -

    Web Client

    -

    You can access your phone from any web browser without installing any software on your computer. You just need to scan a QR code or sign in with your account.

    -

    Desktop Client

    -

    You can download and install the desktop client on your Windows or Mac computer for a better experience. You can enjoy features like notifications, backup, clipboard sync, and more.

    -

    Mobile App

    -

    You can download and install the mobile app on your Android or iOS device for more functionality. You can use features like file transfer, remote control, screen mirroring, and more.

    -

    How to Download AirDroid?

    -

    Downloading AirDroid is easy and simple. You can download it for different platforms, such as computer, mobile, or web. Here are the steps to download AirDroid for each platform:

    -

    Download AirDroid for Computer

    -

    If you want to download AirDroid for your Windows or Mac computer, you can follow these steps:

    -
      -
    1. Go to the official website of AirDroid at https://www.airdroid.com/.
    2. -
    3. Click on the "Download" button at the top right corner of the page.
    4. -
    5. Select the "AirDroid Personal" option and choose your operating system (Windows or Mac).
    6. -
    7. Click on the "Download Now" button and wait for the file to download.
    8. -
    9. Open the downloaded file and follow the instructions to install AirDroid on your computer.
    10. -
    11. Launch AirDroid and sign in with your account or create a new one.
    12. -
    13. Enjoy using AirDroid on your computer!
    14. -
    -

    Download AirDroid for Mobile

    -

    If you want to download AirDroid for your Android or iOS device, you can follow these steps:

    -
      -
    1. Go to the Google Play Store or the App Store on your device.
    2. -
    3. Search for "AirDroid" and tap on the app icon.
    4. -
    5. Tap on the "Install" or "Get" button and wait for the app to download.
    6. -
    7. Open the app and sign in with your account or create a new one.
    8. -
    9. Enjoy using AirDroid on your mobile!
    10. -
    -

    Download AirDroid for Web

    -

    If you want to use AirDroid on your web browser without downloading anything, you can follow these steps:

    -
      -
    1. Go to the official website of AirDroid at https://web.airdroid.com/.
    2. -
    3. Sign in with your account or create a new one.
    4. -
    5. Scan the QR code on your phone's screen with the AirDroid app on your mobile device.
    6. -
    7. Enjoy using AirDroid on your web browser!
    8. -
    -

    Conclusion

    -

    AirDroid is a great tool that helps you manage your Android devices from any computer, anywhere. You can transfer files, mirror screens, control devices, and more with ease. You can download AirDroid for different platforms, such as computer, mobile, or web. Downloading AirDroid is easy and simple. Just follow the steps above and enjoy using AirDroid!

    -

    FAQs

    -

    Here are some frequently asked questions about AirDroid:

    -

    Is AirDroid free?

    -

    AirDroid has a free version that offers basic features like file transfer, screen mirroring, remote control, and more. However, if you want to enjoy more advanced features like backup, clipboard sync, remote camera, and more, you need to upgrade to the premium version. The premium version costs $1.99 per month or $19.99 per year.

    -

    Is AirDroid safe?

    -

    AirDroid is safe and secure to use. It uses encryption and authentication methods to protect your data and devices. You can also set a password or a PIN code to lock your devices remotely. However, you should always be careful when accessing public Wi-Fi networks or unknown devices.

    -

    How many devices can I connect with AirDroid?

    -

    The free version of AirDroid allows you to connect up to two devices at a time. The premium version allows you to connect up to six devices at a time. However, if you need to manage more devices, you can use AirDroid Business or AirDroid Remote Support.

    -

    What are the system requirements for AirDroid?

    -

    The system requirements for AirDroid are as follows:

    -
| Platform | System Requirements |
| --- | --- |
| Computer | Windows 7 or later / Mac OS X 10.10 or later / Chrome OS / Linux |
| Mobile | Android 4.0 or later / iOS 10.0 or later |
| Web | A modern web browser that supports HTML5 and WebSocket (e.g., Chrome, Firefox, Safari) |
    -

    How can I contact AirDroid support?

    -

    If you have any questions or issues with AirDroid, you can contact AirDroid support by visiting their official website at https://www.airdroid.com/support/. You can also check their help center, forum, blog, or social media channels for more information and tips.

    -
    -
    \ No newline at end of file diff --git a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download QuickVPN APK and Unblock Any Website or App on Your Android Phone.md b/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download QuickVPN APK and Unblock Any Website or App on Your Android Phone.md deleted file mode 100644 index 6f365f69d6e003493c31b7223826cfaaa41663ea..0000000000000000000000000000000000000000 --- a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Download QuickVPN APK and Unblock Any Website or App on Your Android Phone.md +++ /dev/null @@ -1,191 +0,0 @@ -
    -

    Quick VPN APK Download: How to Get a Fast and Secure VPN for Free

    -

    Do you want to access blocked websites, stream geo-restricted content, or protect your online privacy and security? If yes, then you need a VPN (Virtual Private Network) service that can encrypt your internet traffic and hide your IP address from prying eyes. However, not all VPNs are created equal. Some are slow, unreliable, or expensive. That's why you need Quick VPN, a free and fast VPN app that offers you a secure and unlimited internet connection with no restrictions or limitations.

    -

    In this article, we will show you how to download and install the Quick VPN APK file on your Android device, how to use it effectively and safely, and answer some frequently asked questions about this amazing app. Let's get started!

    -

    -

    What is Quick VPN and Why You Need It

    -

    Quick VPN is a top-rated VPN app that offers users a fast and secure internet connection with no restrictions or limitations. It has an APK download size of only 15.86 MB and the latest version available is 1.7. It is designed for Android version 5.0 or higher.

    -

    Quick VPN Features and Benefits

    -

    Some of the features and benefits of using Quick VPN are:

    -
      -
    • It is free to download and use. You don't need to register, sign up, or pay anything to use this app.
    • -
    • It has a simple and user-friendly interface. You can easily connect to any of the available servers with just one tap.
    • -
    • It has a large number of servers in different countries around the world. You can choose from over 100 servers in more than 30 countries, including the US, UK, Canada, Australia, Germany, France, Japan, Singapore, India, and more.
    • -
    • It provides you with a fast and stable internet connection. You can enjoy high-speed browsing, streaming, gaming, downloading, and uploading without any lag or buffering.
    • -
    • It protects your online privacy and security. It encrypts your internet traffic with AES-256 encryption, the same level of encryption used by banks and governments (a brief stand-alone illustration of what AES-256 encryption looks like follows this feature list). It also hides your IP address and location from hackers, ISPs, websites, apps, and government agencies.
    • -
    • It allows you to access blocked websites, apps, and content. You can bypass geo-restrictions, firewalls, censorship, and other barriers that prevent you from accessing your favorite websites, apps, and content. You can also watch Netflix, Hulu, BBC iPlayer, Disney+, Amazon Prime Video, YouTube, Facebook, Twitter, Instagram, WhatsApp, Telegram, TikTok, and more from anywhere in the world.
    • -
    -
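To make the AES-256 claim above concrete, the stand-alone Kotlin sketch below encrypts and decrypts a short message with AES-256-GCM using the standard Java crypto API. It only illustrates the cipher itself: it is not Quick VPN's implementation, and a real VPN negotiates keys and encrypts whole packets inside a tunnel protocol.

```kotlin
import javax.crypto.Cipher
import javax.crypto.KeyGenerator
import javax.crypto.spec.GCMParameterSpec

fun main() {
    // Generate a random 256-bit AES key (this is what "AES-256" refers to).
    val key = KeyGenerator.getInstance("AES").apply { init(256) }.generateKey()
    val plaintext = "hello over the tunnel".toByteArray()

    // Encrypt with AES in GCM mode; the cipher picks a fresh IV for us.
    val enc = Cipher.getInstance("AES/GCM/NoPadding")
    enc.init(Cipher.ENCRYPT_MODE, key)
    val iv = enc.iv
    val ciphertext = enc.doFinal(plaintext)

    // Decrypt with the same key and IV (128-bit authentication tag).
    val dec = Cipher.getInstance("AES/GCM/NoPadding")
    dec.init(Cipher.DECRYPT_MODE, key, GCMParameterSpec(128, iv))
    println(String(dec.doFinal(ciphertext)))  // prints: hello over the tunnel
}
```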

    Quick VPN Compatibility and Requirements

    -

    Quick VPN is compatible with most Android devices that run on Android version 5.0 or higher. However, some devices may not support this app due to different specifications or settings. To use Quick VPN on your Android device, you need to have:

    -
      -
    • An internet connection (Wi-Fi or mobile data)
    • -
    • A device that supports unknown sources installation (see below)
    • -
    • A device that has enough storage space for the APK file (15.86 MB)
    • -
    • A device that has enough battery power for the app to run smoothly
    • -
    -

    How to Download and Install Quick VPN APK on Your Android Device

    -

    To download and install the Quick VPN APK file on your Android device, you need to follow these steps:

    -

    Step 1: Enable Unknown Sources on Your Device

    -

    Before you can install the Quick VPN APK file on your device, you need to enable unknown sources installation. This is a security feature that prevents you from installing apps from sources other than the Google Play Store. To enable unknown sources installation, you need to:

    -
      -
    1. Go to your device's Settings and tap on Security or Privacy.
    2. -
    3. Find and toggle on the option that says Unknown Sources or Allow Installation of Apps from Unknown Sources.
    4. -
    5. Confirm your choice by tapping on OK or Allow.
    6. -
    -

    Once you have enabled unknown sources installation, you can proceed to the next step.

    -

    Step 2: Download the Quick VPN APK File from a Trusted Source

    -

    The next step is to download the Quick VPN APK file from a trusted source. You can find the official download link for the latest version of Quick VPN APK on the app's website or on reputable APK websites such as APKPure or APKMirror. To download the Quick VPN APK file, you need to:

    -
      -
    1. Open your device's browser and go to the download link of your choice.
    2. -
    3. Tap on the Download or Install button and wait for the download to start.
    4. -
    5. Check your device's notification bar or download folder to see the progress of the download.
    6. -
    -

    Once the download is complete, you can move on to the next step.
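Because the file comes from outside the Play Store, it is worth verifying it before installing. If the download source publishes a SHA-256 checksum (an assumption; not every site does), you can compute the checksum of the file you actually received and compare the two. A minimal Kotlin sketch:

```kotlin
import java.io.File
import java.security.MessageDigest

// Computes the SHA-256 checksum of a downloaded file so it can be compared
// against a checksum published by the download source, if one is provided.
fun sha256Of(file: File): String {
    val digest = MessageDigest.getInstance("SHA-256")
    file.inputStream().use { input ->
        val buffer = ByteArray(8 * 1024)
        while (true) {
            val read = input.read(buffer)
            if (read == -1) break
            digest.update(buffer, 0, read)
        }
    }
    return digest.digest().joinToString("") { "%02x".format(it) }
}

// Hypothetical usage; the path and expected value are placeholders:
// val ok = sha256Of(File("/sdcard/Download/quickvpn.apk")) == checksumFromSite
```

If the computed value does not match the published one, discard the file and download it again.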

    -

    -

    Step 3: Locate and Install the Quick VPN APK File on Your Device

    The final step is to locate and install the Quick VPN APK file on your device. To do this, you need to:

    1. Open your device's file manager and find the Quick VPN APK file. It should be in your download folder or in the location where you saved it.
    2. Tap on the Quick VPN APK file and select Install.
    3. Follow the on-screen instructions and grant the necessary permissions for the app to run.
    4. Wait for the installation to finish and tap on Open or Done.

    Congratulations! You have successfully installed Quick VPN APK on your Android device. You can now launch the app and connect to a server of your choice.
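    As an alternative to tapping through the installer, readers who already have adb set up can sideload the same file from a computer. This is a small sketch under the assumption that the APK saved in Step 2 is named quick-vpn.apk and the phone is connected with USB debugging enabled; the on-device steps above work just as well.

```python
import subprocess

def install_apk(apk_path: str) -> None:
    """Sideload an APK onto a USB-connected device with 'adb install -r'."""
    result = subprocess.run(
        ["adb", "install", "-r", apk_path], capture_output=True, text=True
    )
    print(result.stdout, result.stderr)
    if "Success" not in result.stdout:
        raise RuntimeError("adb did not report success -- check the output above")

if __name__ == "__main__":
    install_apk("quick-vpn.apk")  # assumed file name from Step 2
```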


    Step 4: Launch the Quick VPN App and Connect to a Server of Your Choice

    To launch the Quick VPN app and connect to a server of your choice, you need to:

    1. Open the Quick VPN app from your device's app drawer or home screen.
    2. Select a server location from the list or tap on Auto Select to let the app choose the best server for you.
    3. Tap on Connect and wait for the connection to be established.
    4. Enjoy your fast and secure internet connection with Quick VPN!

    You can change your server location, check your connection status, or disconnect from the app's main screen, and you can reach more settings and features by tapping the menu icon at the top left corner of the screen.


    How to Use Quick VPN Effectively and Safely


    Now that you have downloaded and installed Quick VPN APK on your Android device, you might be wondering how to use it effectively and safely. Here are some tips and tricks that will help you get the most out of this app:


    Tips for Choosing the Best Server Location for Your Needs

    One of the advantages of using Quick VPN is that you can choose from a large number of servers in different countries around the world. However, not all servers are equally suitable for your needs. Depending on what you want to do online, consider the following factors when choosing a server location:

    • Speed: For a fast and smooth connection, choose a server that is close to your physical location or has a low ping time. You can check the ping time of each server by tapping on the speed icon next to the server name.
    • Security: To protect your online privacy, choose a server located in a country with strong data protection laws that does not cooperate with surveillance agencies. Avoid servers located in countries known for censorship, hacking, or malware.
    • Access: To reach blocked websites, apps, or content, choose a server located in a country where that content is available. For example, if you want to watch Netflix US, choose a server located in the US.

    You can also use the Auto Select feature to let the app choose the best server for you based on your needs and preferences.
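    If you want to sanity-check the "pick the lowest ping" advice from outside the app, you can time a plain TCP handshake to a few candidate endpoints and compare the results. The hostnames below are made up (this article does not list Quick VPN's actual server addresses), so treat the snippet as a rough sketch of the idea rather than a tool for this specific app.

```python
import socket
import time

# Hypothetical endpoints -- substitute hosts you actually want to compare.
CANDIDATES = {
    "us": ("us.example-vpn.net", 443),
    "uk": ("uk.example-vpn.net", 443),
    "sg": ("sg.example-vpn.net", 443),
}

def tcp_latency_ms(host: str, port: int, timeout: float = 2.0) -> float:
    """Rough round-trip estimate: the time taken by one TCP handshake."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass
    return (time.perf_counter() - start) * 1000

if __name__ == "__main__":
    results = {}
    for name, (host, port) in CANDIDATES.items():
        try:
            results[name] = tcp_latency_ms(host, port)
        except OSError:
            results[name] = float("inf")  # unreachable
    for name, ms in sorted(results.items(), key=lambda kv: kv[1]):
        label = "unreachable" if ms == float("inf") else f"{ms:.0f} ms"
        print(f"{name}: {label}")
```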


    How to Avoid Common VPN Issues and Troubleshoot Problems


    Although Quick VPN is a reliable and stable app, you might encounter some issues or problems while using it. Here are some common VPN issues and how to troubleshoot them:

    • Connection failure: If you cannot connect to a server or the connection drops frequently, try the following solutions:
      • Check your internet connection and make sure it is working properly.
      • Change your server location and try a different server.
      • Clear the app's cache and data from your device's settings.
      • Reinstall the app and try again.
    • Slow speed: If you experience slow speed or poor performance while using Quick VPN, try the following solutions:
      • Choose a server that is close to your physical location or has a low ping time.
      • Reduce the number of devices or apps that are using your internet connection.
      • Close any background apps or processes that might be slowing down your device.
      • Update the app to the latest version and check for any improvements.
    • Access denied: If you cannot access certain websites, apps, or content while using Quick VPN, try the following solutions:
      • Choose a server that is located in a country that allows access to those websites, apps, or content.
      • Clear your browser's cache and cookies and try again.
      • Disable any other VPNs or proxies that might be interfering with Quick VPN.
      • Contact the website, app, or content provider and ask them to whitelist Quick VPN's IP addresses.

    If none of these solutions work for you, you can contact Quick VPN's customer support team via email at quickvpn@gmail.com. They will help you resolve any issues or problems as soon as possible.
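    One of the fixes above, clearing the app's cache and data, can also be done from a computer if tapping through Settings is inconvenient. The sketch below assumes adb is available and uses a guessed package name; confirm the real one first with `adb shell pm list packages`, because `pm clear` wipes the app's stored data and sign-in state.

```python
import subprocess

# Guessed package name -- confirm it with `adb shell pm list packages` first.
PACKAGE = "com.quickvpn.app"

def clear_app_data(package: str) -> None:
    """Equivalent of Settings > Apps > Storage > 'Clear data', done over adb."""
    subprocess.run(["adb", "shell", "pm", "clear", package], check=True)

if __name__ == "__main__":
    clear_app_data(PACKAGE)
```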


    How to Protect Your Privacy and Security with Quick VPN


    Quick VPN is designed to protect your online privacy and security by encrypting your internet traffic and hiding your IP address. However, there are some additional steps that you can take to enhance your privacy and security while using Quick VPN. Here are some tips that will help you stay safe online:

    • Avoid using public Wi-Fi networks or unsecured connections. They can expose your personal information and data to hackers, snoopers, or malicious actors. Use Quick VPN whenever you connect to a public Wi-Fi network or an unsecured connection.
    • Avoid logging into sensitive accounts or websites while using Quick VPN. They can store your login credentials or track your online activities. Use Quick VPN only for browsing, streaming, gaming, downloading, or uploading purposes.
    • Avoid sharing your personal information or data with anyone online. They can use it for identity theft, fraud, phishing, spamming, or other malicious purposes. Use Quick VPN to mask your identity and location online.
    • Avoid downloading or opening files from unknown sources. They can contain viruses, malware, spyware, ransomware, or other harmful software. Use Quick VPN to scan and filter any files before downloading or opening them.
    • Avoid clicking on suspicious links or ads while using Quick VPN. They can redirect you to malicious websites or apps that can harm your device or steal your information. Use Quick VPN to block any unwanted or harmful ads or pop-ups.

    By following these tips, you can protect your online privacy and security with Quick VPN and enjoy a fast and secure internet connection with no restrictions or limitations.


    Conclusion and FAQs


    Conclusion


    Quick VPN is a free and fast VPN app that offers you a secure and unlimited internet connection with no restrictions or limitations. It has a simple and user-friendly interface, a large number of servers in different countries, a high-speed and stable connection, and strong encryption and privacy protection. It also allows you to access blocked websites, apps, and content from anywhere in the world.


    To download and install the Quick VPN APK file on your Android device, you need to enable unknown sources installation, download the APK file from a trusted source, locate and install the APK file on your device, and launch the app and connect to a server of your choice. To use Quick VPN effectively and safely, you need to choose the best server location for your needs, avoid common VPN issues and troubleshoot problems, and protect your privacy and security with Quick VPN.


    We hope this article has helped you learn more about Quick VPN APK download and how to use it on your Android device. If you have any questions or feedback, please feel free to contact us or leave a comment below. Thank you for reading!


    FAQs


    Here are some frequently asked questions about Quick VPN APK download:

    Question: Is Quick VPN safe to use?
    Answer: Yes, Quick VPN is safe to use. It does not collect, store, or share any of your personal information or data. It also encrypts your internet traffic and hides your IP address from hackers, ISPs, websites, apps, and government agencies.

    Question: Is Quick VPN legal to use?
    Answer: Yes, Quick VPN is legal to use. However, some countries or regions may have laws or regulations that restrict or prohibit the use of VPNs. You should check the local laws before using Quick VPN in those countries or regions.

    Question: Does Quick VPN have any limitations?
    Answer: No, Quick VPN does not have any limitations. You can use it as much as you want, for as long as you want, with no bandwidth, speed, or time limits. You can also switch between servers as many times as you want.

    Question: Does Quick VPN work with other devices or platforms?
    Answer: No, Quick VPN only works with Android devices that run on Android version 5.0 or higher. It does not work with iOS, Windows, Mac, Linux, or other devices or platforms.

    Question: Does Quick VPN support torrenting or P2P?
    Answer: No, Quick VPN does not support torrenting or P2P. It is not designed for this purpose and it may cause performance issues or legal problems. You should use a dedicated VPN service that supports torrenting or P2P if you want to do this activity.

    \ No newline at end of file diff --git a/spaces/fffiloni/Image-to-MusicGen/audiocraft/utils/export.py b/spaces/fffiloni/Image-to-MusicGen/audiocraft/utils/export.py deleted file mode 100644 index b513b52267f7bf5aae09282c15b0a2e20c8a8fee..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/Image-to-MusicGen/audiocraft/utils/export.py +++ /dev/null @@ -1,56 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -""" -Utility to export a training checkpoint to a lightweight release checkpoint. -""" - -from pathlib import Path -import typing as tp - -from omegaconf import OmegaConf, DictConfig -import torch - - -def _clean_lm_cfg(cfg: DictConfig): - OmegaConf.set_struct(cfg, False) - # This used to be set automatically in the LM solver, need a more robust solution - # for the future. - cfg['transformer_lm']['card'] = 2048 - cfg['transformer_lm']['n_q'] = 4 - # Experimental params no longer supported. - bad_params = ['spectral_norm_attn_iters', 'spectral_norm_ff_iters', - 'residual_balancer_attn', 'residual_balancer_ff', 'layer_drop'] - for name in bad_params: - del cfg['transformer_lm'][name] - OmegaConf.set_struct(cfg, True) - return cfg - - -def export_encodec(checkpoint_path: tp.Union[Path, str], out_folder: tp.Union[Path, str]): - sig = Path(checkpoint_path).parent.name - assert len(sig) == 8, "Not a valid Dora signature" - pkg = torch.load(checkpoint_path, 'cpu') - new_pkg = { - 'best_state': pkg['ema']['state']['model'], - 'xp.cfg': OmegaConf.to_yaml(pkg['xp.cfg']), - } - out_file = Path(out_folder) / f'{sig}.th' - torch.save(new_pkg, out_file) - return out_file - - -def export_lm(checkpoint_path: tp.Union[Path, str], out_folder: tp.Union[Path, str]): - sig = Path(checkpoint_path).parent.name - assert len(sig) == 8, "Not a valid Dora signature" - pkg = torch.load(checkpoint_path, 'cpu') - new_pkg = { - 'best_state': pkg['fsdp_best_state']['model'], - 'xp.cfg': OmegaConf.to_yaml(_clean_lm_cfg(pkg['xp.cfg'])) - } - out_file = Path(out_folder) / f'{sig}.th' - torch.save(new_pkg, out_file) - return out_file diff --git a/spaces/fffiloni/controlnet-animation-doodle/node_modules/merge-descriptors/index.js b/spaces/fffiloni/controlnet-animation-doodle/node_modules/merge-descriptors/index.js deleted file mode 100644 index 573b132eb2ba40bd26ffc8360e814d5beb4bc50f..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/controlnet-animation-doodle/node_modules/merge-descriptors/index.js +++ /dev/null @@ -1,60 +0,0 @@ -/*! - * merge-descriptors - * Copyright(c) 2014 Jonathan Ong - * Copyright(c) 2015 Douglas Christopher Wilson - * MIT Licensed - */ - -'use strict' - -/** - * Module exports. - * @public - */ - -module.exports = merge - -/** - * Module variables. 
- * @private - */ - -var hasOwnProperty = Object.prototype.hasOwnProperty - -/** - * Merge the property descriptors of `src` into `dest` - * - * @param {object} dest Object to add descriptors to - * @param {object} src Object to clone descriptors from - * @param {boolean} [redefine=true] Redefine `dest` properties with `src` properties - * @returns {object} Reference to dest - * @public - */ - -function merge(dest, src, redefine) { - if (!dest) { - throw new TypeError('argument dest is required') - } - - if (!src) { - throw new TypeError('argument src is required') - } - - if (redefine === undefined) { - // Default to true - redefine = true - } - - Object.getOwnPropertyNames(src).forEach(function forEachOwnPropertyName(name) { - if (!redefine && hasOwnProperty.call(dest, name)) { - // Skip desriptor - return - } - - // Copy descriptor - var descriptor = Object.getOwnPropertyDescriptor(src, name) - Object.defineProperty(dest, name, descriptor) - }) - - return dest -} diff --git a/spaces/fffiloni/zeroscope/README.md b/spaces/fffiloni/zeroscope/README.md deleted file mode 100644 index 9262005bf9bcf2a62a4f82dd2b3fb98cd7c9fc7c..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/zeroscope/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Zeroscope Text-To-Video -emoji: 🐠 -colorFrom: red -colorTo: gray -sdk: gradio -python_version: 3.10.12 -sdk_version: 3.50.2 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/flax-community/dalle-mini/index.html b/spaces/flax-community/dalle-mini/index.html deleted file mode 100644 index a4b7ba27414ef7cfd7ebff93ab039a87d9a788d1..0000000000000000000000000000000000000000 --- a/spaces/flax-community/dalle-mini/index.html +++ /dev/null @@ -1,64 +0,0 @@ - - - - - - - - - - - - - - - - - - - - -
    - - - diff --git a/spaces/flax-community/koclip/koclip/config.py b/spaces/flax-community/koclip/koclip/config.py deleted file mode 100644 index ce1af96f6f74fa2ac2e146885b65183c9b53f8e0..0000000000000000000000000000000000000000 --- a/spaces/flax-community/koclip/koclip/config.py +++ /dev/null @@ -1,109 +0,0 @@ -import copy - -from transformers.configuration_utils import PretrainedConfig -from transformers.utils import logging - -logger = logging.get_logger(__name__) - - -class HybridCLIPConfig(PretrainedConfig): - r""" - :class:`HybridCLIPConfig` is the configuration class to store the configuration of a - :class:`~HybridCLIPModel`. It is used to instantiate HybridCLIPModel model according to the specified arguments, - defining the text model and vision model configs. - Configuration objects inherit from :class:`~transformers.PretrainedConfig` and can be used to control the model - outputs. Read the documentation from :class:`~transformers.PretrainedConfig` for more information. - Args: - text_config_dict (:obj:`dict`): - Dictionary of configuration options that defines text model config. - vision_config_dict (:obj:`dict`): - Dictionary of configuration options that defines vison model config. - projection_dim (:obj:`int`, `optional`, defaults to 512): - Dimentionality of text and vision projection layers. - kwargs (`optional`): - Dictionary of keyword arguments. - Examples:: - >>> from transformers import BertConfig, CLIPConfig, HybridCLIPConfig, FlaxHybridCLIP - >>> # Initializing a BERT and CLIP configuration - >>> config_text = BertConfig() - >>> config_vision = CLIPConfig() - >>> config = HybridCLIPConfig.from_text_vision_configs(config_text, config_vision, projection_dim=512) - >>> # Initializing a BERT and CLIPVision model - >>> model = EncoderDecoderModel(config=config) - >>> # Accessing the model configuration - >>> config_text = model.config.text_config - >>> config_vision = model.config.vision_config - >>> # Saving the model, including its configuration - >>> model.save_pretrained('my-model') - >>> # loading model and config from pretrained folder - >>> encoder_decoder_config = HybridCLIPConfig.from_pretrained('my-model') - >>> model = FlaxHybridCLIP.from_pretrained('my-model', config=encoder_decoder_config) - """ - - model_type = "hybrid-clip" - is_composition = True - - def __init__(self, projection_dim=512, **kwargs): - super().__init__(**kwargs) - - if "text_config" not in kwargs: - raise ValueError("`text_config` can not be `None`.") - - if "vision_config" not in kwargs: - raise ValueError("`vision_config` can not be `None`.") - - text_config = kwargs.pop("text_config") - vision_config = kwargs.pop("vision_config") - - text_model_type = text_config.pop("model_type") - vision_model_type = vision_config.pop("model_type") - - from transformers import AutoConfig - - self.text_config = AutoConfig.for_model(text_model_type, **text_config) - - if vision_model_type == "clip": - self.vision_config = AutoConfig.for_model( - vision_model_type, **vision_config - ).vision_config - elif vision_model_type == "clip_vision_model": - from transformers import CLIPVisionConfig - - self.vision_config = CLIPVisionConfig(**vision_config) - else: - self.vision_config = AutoConfig.for_model( - vision_model_type, **vision_config - ) - - self.projection_dim = projection_dim - self.initializer_factor = 1.0 - - @classmethod - def from_text_vision_configs( - cls, text_config: PretrainedConfig, vision_config: PretrainedConfig, **kwargs - ): - r""" - Instantiate a :class:`HybridCLIPConfig` (or a 
derived class) from text model configuration and - vision model configuration. - Returns: - :class:`HybridCLIPConfig`: An instance of a configuration object - """ - - return cls( - text_config=text_config.to_dict(), - vision_config=vision_config.to_dict(), - **kwargs - ) - - def to_dict(self): - """ - Serializes this instance to a Python dictionary. Override the default - :meth:`~transformers.PretrainedConfig.to_dict`. - Returns: - :obj:`Dict[str, any]`: Dictionary of all the attributes that make up this configuration instance, - """ - output = copy.deepcopy(self.__dict__) - output["text_config"] = self.text_config.to_dict() - output["vision_config"] = self.vision_config.to_dict() - output["model_type"] = self.__class__.model_type - return output diff --git a/spaces/florim/MedGPT/autogpt/setup.py b/spaces/florim/MedGPT/autogpt/setup.py deleted file mode 100644 index bfa68201b62bf67230a61fb1ecb00d1ab0ef0631..0000000000000000000000000000000000000000 --- a/spaces/florim/MedGPT/autogpt/setup.py +++ /dev/null @@ -1,77 +0,0 @@ -"""Set up the AI and its goals""" -from colorama import Fore, Style - -from autogpt import utils -from autogpt.config.ai_config import AIConfig -from autogpt.logs import logger - - -def prompt_user() -> AIConfig: - """Prompt the user for input - - Returns: - AIConfig: The AIConfig object containing the user's input - """ - ai_name = "" - # Construct the prompt - logger.typewriter_log( - "Welcome to Auto-GPT! ", - Fore.GREEN, - "run with '--help' for more information.", - speak_text=True, - ) - - logger.typewriter_log( - "Create an AI-Assistant:", - Fore.GREEN, - "Enter the name of your AI and its role below. Entering nothing will load" - " defaults.", - speak_text=True, - ) - - # Get AI Name from User - logger.typewriter_log( - "Name your AI: ", Fore.GREEN, "For example, 'Entrepreneur-GPT'" - ) - ai_name = utils.clean_input("AI Name: ") - if ai_name == "": - ai_name = "Entrepreneur-GPT" - - logger.typewriter_log( - f"{ai_name} here!", Fore.LIGHTBLUE_EX, "I am at your service.", speak_text=True - ) - - # Get AI Role from User - logger.typewriter_log( - "Describe your AI's role: ", - Fore.GREEN, - "For example, 'an AI designed to autonomously develop and run businesses with" - " the sole goal of increasing your net worth.'", - ) - ai_role = utils.clean_input(f"{ai_name} is: ") - if ai_role == "": - ai_role = "an AI designed to autonomously develop and run businesses with the" - " sole goal of increasing your net worth." 
- - # Enter up to 5 goals for the AI - logger.typewriter_log( - "Enter up to 5 goals for your AI: ", - Fore.GREEN, - "For example: \nIncrease net worth, Grow Twitter Account, Develop and manage" - " multiple businesses autonomously'", - ) - print("Enter nothing to load defaults, enter nothing when finished.", flush=True) - ai_goals = [] - for i in range(5): - ai_goal = utils.clean_input(f"{Fore.LIGHTBLUE_EX}Goal{Style.RESET_ALL} {i+1}: ") - if ai_goal == "": - break - ai_goals.append(ai_goal) - if not ai_goals: - ai_goals = [ - "Increase net worth", - "Grow Twitter Account", - "Develop and manage multiple businesses autonomously", - ] - - return AIConfig(ai_name, ai_role, ai_goals) diff --git a/spaces/fracapuano/AISandbox/qa/prompts.py b/spaces/fracapuano/AISandbox/qa/prompts.py deleted file mode 100644 index e32fdf812dd84fe13d4569af5b6724819ec0947d..0000000000000000000000000000000000000000 --- a/spaces/fracapuano/AISandbox/qa/prompts.py +++ /dev/null @@ -1,26 +0,0 @@ -from langchain.prompts import PromptTemplate - -## One might consider using a shorter template to reduce the number of tokens in the model input -template = """Create a final answer to the given questions using the provided document (in no particular order) as references. ALWAYS include a "SOURCES" section in your answer including only the minimal set of sources needed to answer the question. If you are unable to answer the question, simply state that you do not know. Do not attempt to fabricate an answer and leave the SOURCES section empty. ---------- -QUESTION: What is the purpose of ARPA-H? -========= -Content: More support for patients and families. \n\nTo get there, I call on Congress to fund ARPA-H, the Advanced Research Projects Agency for Health. \n\nIt’s based on DARPA—the Defense Department project that led to the Internet, GPS, and so much more. \n\nARPA-H will have a singular purpose—to drive breakthroughs in cancer, Alzheimer’s, diabetes, and more. -Source: 1-32 -Content: While we’re at it, let’s make sure every American can get the health care they need. \n\nWe’ve already made historic investments in health care. \n\nWe’ve made it easier for Americans to get the care they need, when they need it. \n\nWe’ve made it easier for Americans to get the treatments they need, when they need them. \n\nWe’ve made it easier for Americans to get the medications they need, when they need them. -Source: 1-33 -Content: The V.A. is pioneering new ways of linking toxic exposures to disease, already helping veterans get the care they deserve. \n\nWe need to extend that same care to all Americans. \n\nThat’s why I’m calling on Congress to pass legislation that would establish a national registry of toxic exposures, and provide health care and financial assistance to those affected. -Source: 1-30 -========= -FINAL ANSWER: The purpose of ARPA-H is to drive breakthroughs in cancer, Alzheimer’s, diabetes, and more. 
-SOURCES: 1-32 ---------- -QUESTION: {question} -========= -{summaries} -========= -FINAL ANSWER:""" - -STUFF_PROMPT = PromptTemplate( - template=template, input_variables=["summaries", "question"] -) \ No newline at end of file diff --git a/spaces/g4f/freegpt-webui/g4f/Provider/Providers/Gravityengine.py b/spaces/g4f/freegpt-webui/g4f/Provider/Providers/Gravityengine.py deleted file mode 100644 index f0cd09daaaae0adaa349f91139dc60c7ac79c028..0000000000000000000000000000000000000000 --- a/spaces/g4f/freegpt-webui/g4f/Provider/Providers/Gravityengine.py +++ /dev/null @@ -1,27 +0,0 @@ -import requests -import os -import json -from ...typing import sha256, Dict, get_type_hints - -url = 'https://gpt4.xunika.uk/' -model = ['gpt-3.5-turbo-16k', 'gpt-3.5-turbo-0613'] -supports_stream = True -needs_auth = False - -def _create_completion(model: str, messages: list, stream: bool, temperature: float = 0.7, **kwargs): - headers = { - 'Content-Type': 'application/json', - } - data = { - 'model': model, - 'temperature': 0.7, - 'presence_penalty': 0, - 'messages': messages, - } - response = requests.post(url + '/api/openai/v1/chat/completions', - json=data, stream=True) - - yield response.json()['choices'][0]['message']['content'] - -params = f'g4f.Providers.{os.path.basename(__file__)[:-3]} supports: ' + \ - '(%s)' % ', '.join([f"{name}: {get_type_hints(_create_completion)[name].__name__}" for name in _create_completion.__code__.co_varnames[:_create_completion.__code__.co_argcount]]) \ No newline at end of file diff --git a/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmseg/datasets/drive.py b/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmseg/datasets/drive.py deleted file mode 100644 index 3cbfda8ae74bdf26c5aef197ff2866a7c7ad0cfd..0000000000000000000000000000000000000000 --- a/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmseg/datasets/drive.py +++ /dev/null @@ -1,27 +0,0 @@ -import os.path as osp - -from .builder import DATASETS -from .custom import CustomDataset - - -@DATASETS.register_module() -class DRIVEDataset(CustomDataset): - """DRIVE dataset. - - In segmentation map annotation for DRIVE, 0 stands for background, which is - included in 2 categories. ``reduce_zero_label`` is fixed to False. The - ``img_suffix`` is fixed to '.png' and ``seg_map_suffix`` is fixed to - '_manual1.png'. 
- """ - - CLASSES = ('background', 'vessel') - - PALETTE = [[120, 120, 120], [6, 230, 230]] - - def __init__(self, **kwargs): - super(DRIVEDataset, self).__init__( - img_suffix='.png', - seg_map_suffix='_manual1.png', - reduce_zero_label=False, - **kwargs) - assert osp.exists(self.img_dir) diff --git a/spaces/georgefen/Face-Landmark-ControlNet/ldm/modules/diffusionmodules/openaimodel.py b/spaces/georgefen/Face-Landmark-ControlNet/ldm/modules/diffusionmodules/openaimodel.py deleted file mode 100644 index 7df6b5abfe8eff07f0c8e8703ba8aee90d45984b..0000000000000000000000000000000000000000 --- a/spaces/georgefen/Face-Landmark-ControlNet/ldm/modules/diffusionmodules/openaimodel.py +++ /dev/null @@ -1,786 +0,0 @@ -from abc import abstractmethod -import math - -import numpy as np -import torch as th -import torch.nn as nn -import torch.nn.functional as F - -from ldm.modules.diffusionmodules.util import ( - checkpoint, - conv_nd, - linear, - avg_pool_nd, - zero_module, - normalization, - timestep_embedding, -) -from ldm.modules.attention import SpatialTransformer -from ldm.util import exists - - -# dummy replace -def convert_module_to_f16(x): - pass - -def convert_module_to_f32(x): - pass - - -## go -class AttentionPool2d(nn.Module): - """ - Adapted from CLIP: https://github.com/openai/CLIP/blob/main/clip/model.py - """ - - def __init__( - self, - spacial_dim: int, - embed_dim: int, - num_heads_channels: int, - output_dim: int = None, - ): - super().__init__() - self.positional_embedding = nn.Parameter(th.randn(embed_dim, spacial_dim ** 2 + 1) / embed_dim ** 0.5) - self.qkv_proj = conv_nd(1, embed_dim, 3 * embed_dim, 1) - self.c_proj = conv_nd(1, embed_dim, output_dim or embed_dim, 1) - self.num_heads = embed_dim // num_heads_channels - self.attention = QKVAttention(self.num_heads) - - def forward(self, x): - b, c, *_spatial = x.shape - x = x.reshape(b, c, -1) # NC(HW) - x = th.cat([x.mean(dim=-1, keepdim=True), x], dim=-1) # NC(HW+1) - x = x + self.positional_embedding[None, :, :].to(x.dtype) # NC(HW+1) - x = self.qkv_proj(x) - x = self.attention(x) - x = self.c_proj(x) - return x[:, :, 0] - - -class TimestepBlock(nn.Module): - """ - Any module where forward() takes timestep embeddings as a second argument. - """ - - @abstractmethod - def forward(self, x, emb): - """ - Apply the module to `x` given `emb` timestep embeddings. - """ - - -class TimestepEmbedSequential(nn.Sequential, TimestepBlock): - """ - A sequential module that passes timestep embeddings to the children that - support it as an extra input. - """ - - def forward(self, x, emb, context=None): - for layer in self: - if isinstance(layer, TimestepBlock): - x = layer(x, emb) - elif isinstance(layer, SpatialTransformer): - x = layer(x, context) - else: - x = layer(x) - return x - - -class Upsample(nn.Module): - """ - An upsampling layer with an optional convolution. - :param channels: channels in the inputs and outputs. - :param use_conv: a bool determining if a convolution is applied. - :param dims: determines if the signal is 1D, 2D, or 3D. If 3D, then - upsampling occurs in the inner-two dimensions. 
- """ - - def __init__(self, channels, use_conv, dims=2, out_channels=None, padding=1): - super().__init__() - self.channels = channels - self.out_channels = out_channels or channels - self.use_conv = use_conv - self.dims = dims - if use_conv: - self.conv = conv_nd(dims, self.channels, self.out_channels, 3, padding=padding) - - def forward(self, x): - assert x.shape[1] == self.channels - if self.dims == 3: - x = F.interpolate( - x, (x.shape[2], x.shape[3] * 2, x.shape[4] * 2), mode="nearest" - ) - else: - x = F.interpolate(x, scale_factor=2, mode="nearest") - if self.use_conv: - x = self.conv(x) - return x - -class TransposedUpsample(nn.Module): - 'Learned 2x upsampling without padding' - def __init__(self, channels, out_channels=None, ks=5): - super().__init__() - self.channels = channels - self.out_channels = out_channels or channels - - self.up = nn.ConvTranspose2d(self.channels,self.out_channels,kernel_size=ks,stride=2) - - def forward(self,x): - return self.up(x) - - -class Downsample(nn.Module): - """ - A downsampling layer with an optional convolution. - :param channels: channels in the inputs and outputs. - :param use_conv: a bool determining if a convolution is applied. - :param dims: determines if the signal is 1D, 2D, or 3D. If 3D, then - downsampling occurs in the inner-two dimensions. - """ - - def __init__(self, channels, use_conv, dims=2, out_channels=None,padding=1): - super().__init__() - self.channels = channels - self.out_channels = out_channels or channels - self.use_conv = use_conv - self.dims = dims - stride = 2 if dims != 3 else (1, 2, 2) - if use_conv: - self.op = conv_nd( - dims, self.channels, self.out_channels, 3, stride=stride, padding=padding - ) - else: - assert self.channels == self.out_channels - self.op = avg_pool_nd(dims, kernel_size=stride, stride=stride) - - def forward(self, x): - assert x.shape[1] == self.channels - return self.op(x) - - -class ResBlock(TimestepBlock): - """ - A residual block that can optionally change the number of channels. - :param channels: the number of input channels. - :param emb_channels: the number of timestep embedding channels. - :param dropout: the rate of dropout. - :param out_channels: if specified, the number of out channels. - :param use_conv: if True and out_channels is specified, use a spatial - convolution instead of a smaller 1x1 convolution to change the - channels in the skip connection. - :param dims: determines if the signal is 1D, 2D, or 3D. - :param use_checkpoint: if True, use gradient checkpointing on this module. - :param up: if True, use this block for upsampling. - :param down: if True, use this block for downsampling. 
- """ - - def __init__( - self, - channels, - emb_channels, - dropout, - out_channels=None, - use_conv=False, - use_scale_shift_norm=False, - dims=2, - use_checkpoint=False, - up=False, - down=False, - ): - super().__init__() - self.channels = channels - self.emb_channels = emb_channels - self.dropout = dropout - self.out_channels = out_channels or channels - self.use_conv = use_conv - self.use_checkpoint = use_checkpoint - self.use_scale_shift_norm = use_scale_shift_norm - - self.in_layers = nn.Sequential( - normalization(channels), - nn.SiLU(), - conv_nd(dims, channels, self.out_channels, 3, padding=1), - ) - - self.updown = up or down - - if up: - self.h_upd = Upsample(channels, False, dims) - self.x_upd = Upsample(channels, False, dims) - elif down: - self.h_upd = Downsample(channels, False, dims) - self.x_upd = Downsample(channels, False, dims) - else: - self.h_upd = self.x_upd = nn.Identity() - - self.emb_layers = nn.Sequential( - nn.SiLU(), - linear( - emb_channels, - 2 * self.out_channels if use_scale_shift_norm else self.out_channels, - ), - ) - self.out_layers = nn.Sequential( - normalization(self.out_channels), - nn.SiLU(), - nn.Dropout(p=dropout), - zero_module( - conv_nd(dims, self.out_channels, self.out_channels, 3, padding=1) - ), - ) - - if self.out_channels == channels: - self.skip_connection = nn.Identity() - elif use_conv: - self.skip_connection = conv_nd( - dims, channels, self.out_channels, 3, padding=1 - ) - else: - self.skip_connection = conv_nd(dims, channels, self.out_channels, 1) - - def forward(self, x, emb): - """ - Apply the block to a Tensor, conditioned on a timestep embedding. - :param x: an [N x C x ...] Tensor of features. - :param emb: an [N x emb_channels] Tensor of timestep embeddings. - :return: an [N x C x ...] Tensor of outputs. - """ - return checkpoint( - self._forward, (x, emb), self.parameters(), self.use_checkpoint - ) - - - def _forward(self, x, emb): - if self.updown: - in_rest, in_conv = self.in_layers[:-1], self.in_layers[-1] - h = in_rest(x) - h = self.h_upd(h) - x = self.x_upd(x) - h = in_conv(h) - else: - h = self.in_layers(x) - emb_out = self.emb_layers(emb).type(h.dtype) - while len(emb_out.shape) < len(h.shape): - emb_out = emb_out[..., None] - if self.use_scale_shift_norm: - out_norm, out_rest = self.out_layers[0], self.out_layers[1:] - scale, shift = th.chunk(emb_out, 2, dim=1) - h = out_norm(h) * (1 + scale) + shift - h = out_rest(h) - else: - h = h + emb_out - h = self.out_layers(h) - return self.skip_connection(x) + h - - -class AttentionBlock(nn.Module): - """ - An attention block that allows spatial positions to attend to each other. - Originally ported from here, but adapted to the N-d case. - https://github.com/hojonathanho/diffusion/blob/1e0dceb3b3495bbe19116a5e1b3596cd0706c543/diffusion_tf/models/unet.py#L66. 
- """ - - def __init__( - self, - channels, - num_heads=1, - num_head_channels=-1, - use_checkpoint=False, - use_new_attention_order=False, - ): - super().__init__() - self.channels = channels - if num_head_channels == -1: - self.num_heads = num_heads - else: - assert ( - channels % num_head_channels == 0 - ), f"q,k,v channels {channels} is not divisible by num_head_channels {num_head_channels}" - self.num_heads = channels // num_head_channels - self.use_checkpoint = use_checkpoint - self.norm = normalization(channels) - self.qkv = conv_nd(1, channels, channels * 3, 1) - if use_new_attention_order: - # split qkv before split heads - self.attention = QKVAttention(self.num_heads) - else: - # split heads before split qkv - self.attention = QKVAttentionLegacy(self.num_heads) - - self.proj_out = zero_module(conv_nd(1, channels, channels, 1)) - - def forward(self, x): - return checkpoint(self._forward, (x,), self.parameters(), True) # TODO: check checkpoint usage, is True # TODO: fix the .half call!!! - #return pt_checkpoint(self._forward, x) # pytorch - - def _forward(self, x): - b, c, *spatial = x.shape - x = x.reshape(b, c, -1) - qkv = self.qkv(self.norm(x)) - h = self.attention(qkv) - h = self.proj_out(h) - return (x + h).reshape(b, c, *spatial) - - -def count_flops_attn(model, _x, y): - """ - A counter for the `thop` package to count the operations in an - attention operation. - Meant to be used like: - macs, params = thop.profile( - model, - inputs=(inputs, timestamps), - custom_ops={QKVAttention: QKVAttention.count_flops}, - ) - """ - b, c, *spatial = y[0].shape - num_spatial = int(np.prod(spatial)) - # We perform two matmuls with the same number of ops. - # The first computes the weight matrix, the second computes - # the combination of the value vectors. - matmul_ops = 2 * b * (num_spatial ** 2) * c - model.total_ops += th.DoubleTensor([matmul_ops]) - - -class QKVAttentionLegacy(nn.Module): - """ - A module which performs QKV attention. Matches legacy QKVAttention + input/ouput heads shaping - """ - - def __init__(self, n_heads): - super().__init__() - self.n_heads = n_heads - - def forward(self, qkv): - """ - Apply QKV attention. - :param qkv: an [N x (H * 3 * C) x T] tensor of Qs, Ks, and Vs. - :return: an [N x (H * C) x T] tensor after attention. - """ - bs, width, length = qkv.shape - assert width % (3 * self.n_heads) == 0 - ch = width // (3 * self.n_heads) - q, k, v = qkv.reshape(bs * self.n_heads, ch * 3, length).split(ch, dim=1) - scale = 1 / math.sqrt(math.sqrt(ch)) - weight = th.einsum( - "bct,bcs->bts", q * scale, k * scale - ) # More stable with f16 than dividing afterwards - weight = th.softmax(weight.float(), dim=-1).type(weight.dtype) - a = th.einsum("bts,bcs->bct", weight, v) - return a.reshape(bs, -1, length) - - @staticmethod - def count_flops(model, _x, y): - return count_flops_attn(model, _x, y) - - -class QKVAttention(nn.Module): - """ - A module which performs QKV attention and splits in a different order. - """ - - def __init__(self, n_heads): - super().__init__() - self.n_heads = n_heads - - def forward(self, qkv): - """ - Apply QKV attention. - :param qkv: an [N x (3 * H * C) x T] tensor of Qs, Ks, and Vs. - :return: an [N x (H * C) x T] tensor after attention. 
- """ - bs, width, length = qkv.shape - assert width % (3 * self.n_heads) == 0 - ch = width // (3 * self.n_heads) - q, k, v = qkv.chunk(3, dim=1) - scale = 1 / math.sqrt(math.sqrt(ch)) - weight = th.einsum( - "bct,bcs->bts", - (q * scale).view(bs * self.n_heads, ch, length), - (k * scale).view(bs * self.n_heads, ch, length), - ) # More stable with f16 than dividing afterwards - weight = th.softmax(weight.float(), dim=-1).type(weight.dtype) - a = th.einsum("bts,bcs->bct", weight, v.reshape(bs * self.n_heads, ch, length)) - return a.reshape(bs, -1, length) - - @staticmethod - def count_flops(model, _x, y): - return count_flops_attn(model, _x, y) - - -class UNetModel(nn.Module): - """ - The full UNet model with attention and timestep embedding. - :param in_channels: channels in the input Tensor. - :param model_channels: base channel count for the model. - :param out_channels: channels in the output Tensor. - :param num_res_blocks: number of residual blocks per downsample. - :param attention_resolutions: a collection of downsample rates at which - attention will take place. May be a set, list, or tuple. - For example, if this contains 4, then at 4x downsampling, attention - will be used. - :param dropout: the dropout probability. - :param channel_mult: channel multiplier for each level of the UNet. - :param conv_resample: if True, use learned convolutions for upsampling and - downsampling. - :param dims: determines if the signal is 1D, 2D, or 3D. - :param num_classes: if specified (as an int), then this model will be - class-conditional with `num_classes` classes. - :param use_checkpoint: use gradient checkpointing to reduce memory usage. - :param num_heads: the number of attention heads in each attention layer. - :param num_heads_channels: if specified, ignore num_heads and instead use - a fixed channel width per attention head. - :param num_heads_upsample: works with num_heads to set a different number - of heads for upsampling. Deprecated. - :param use_scale_shift_norm: use a FiLM-like conditioning mechanism. - :param resblock_updown: use residual blocks for up/downsampling. - :param use_new_attention_order: use a different attention pattern for potentially - increased efficiency. - """ - - def __init__( - self, - image_size, - in_channels, - model_channels, - out_channels, - num_res_blocks, - attention_resolutions, - dropout=0, - channel_mult=(1, 2, 4, 8), - conv_resample=True, - dims=2, - num_classes=None, - use_checkpoint=False, - use_fp16=False, - num_heads=-1, - num_head_channels=-1, - num_heads_upsample=-1, - use_scale_shift_norm=False, - resblock_updown=False, - use_new_attention_order=False, - use_spatial_transformer=False, # custom transformer support - transformer_depth=1, # custom transformer support - context_dim=None, # custom transformer support - n_embed=None, # custom support for prediction of discrete ids into codebook of first stage vq model - legacy=True, - disable_self_attentions=None, - num_attention_blocks=None, - disable_middle_self_attn=False, - use_linear_in_transformer=False, - ): - super().__init__() - if use_spatial_transformer: - assert context_dim is not None, 'Fool!! You forgot to include the dimension of your cross-attention conditioning...' - - if context_dim is not None: - assert use_spatial_transformer, 'Fool!! You forgot to use the spatial transformer for your cross-attention conditioning...' 
- from omegaconf.listconfig import ListConfig - if type(context_dim) == ListConfig: - context_dim = list(context_dim) - - if num_heads_upsample == -1: - num_heads_upsample = num_heads - - if num_heads == -1: - assert num_head_channels != -1, 'Either num_heads or num_head_channels has to be set' - - if num_head_channels == -1: - assert num_heads != -1, 'Either num_heads or num_head_channels has to be set' - - self.image_size = image_size - self.in_channels = in_channels - self.model_channels = model_channels - self.out_channels = out_channels - if isinstance(num_res_blocks, int): - self.num_res_blocks = len(channel_mult) * [num_res_blocks] - else: - if len(num_res_blocks) != len(channel_mult): - raise ValueError("provide num_res_blocks either as an int (globally constant) or " - "as a list/tuple (per-level) with the same length as channel_mult") - self.num_res_blocks = num_res_blocks - if disable_self_attentions is not None: - # should be a list of booleans, indicating whether to disable self-attention in TransformerBlocks or not - assert len(disable_self_attentions) == len(channel_mult) - if num_attention_blocks is not None: - assert len(num_attention_blocks) == len(self.num_res_blocks) - assert all(map(lambda i: self.num_res_blocks[i] >= num_attention_blocks[i], range(len(num_attention_blocks)))) - print(f"Constructor of UNetModel received num_attention_blocks={num_attention_blocks}. " - f"This option has LESS priority than attention_resolutions {attention_resolutions}, " - f"i.e., in cases where num_attention_blocks[i] > 0 but 2**i not in attention_resolutions, " - f"attention will still not be set.") - - self.attention_resolutions = attention_resolutions - self.dropout = dropout - self.channel_mult = channel_mult - self.conv_resample = conv_resample - self.num_classes = num_classes - self.use_checkpoint = use_checkpoint - self.dtype = th.float16 if use_fp16 else th.float32 - self.num_heads = num_heads - self.num_head_channels = num_head_channels - self.num_heads_upsample = num_heads_upsample - self.predict_codebook_ids = n_embed is not None - - time_embed_dim = model_channels * 4 - self.time_embed = nn.Sequential( - linear(model_channels, time_embed_dim), - nn.SiLU(), - linear(time_embed_dim, time_embed_dim), - ) - - if self.num_classes is not None: - if isinstance(self.num_classes, int): - self.label_emb = nn.Embedding(num_classes, time_embed_dim) - elif self.num_classes == "continuous": - print("setting up linear c_adm embedding layer") - self.label_emb = nn.Linear(1, time_embed_dim) - else: - raise ValueError() - - self.input_blocks = nn.ModuleList( - [ - TimestepEmbedSequential( - conv_nd(dims, in_channels, model_channels, 3, padding=1) - ) - ] - ) - self._feature_size = model_channels - input_block_chans = [model_channels] - ch = model_channels - ds = 1 - for level, mult in enumerate(channel_mult): - for nr in range(self.num_res_blocks[level]): - layers = [ - ResBlock( - ch, - time_embed_dim, - dropout, - out_channels=mult * model_channels, - dims=dims, - use_checkpoint=use_checkpoint, - use_scale_shift_norm=use_scale_shift_norm, - ) - ] - ch = mult * model_channels - if ds in attention_resolutions: - if num_head_channels == -1: - dim_head = ch // num_heads - else: - num_heads = ch // num_head_channels - dim_head = num_head_channels - if legacy: - #num_heads = 1 - dim_head = ch // num_heads if use_spatial_transformer else num_head_channels - if exists(disable_self_attentions): - disabled_sa = disable_self_attentions[level] - else: - disabled_sa = False - - if not 
exists(num_attention_blocks) or nr < num_attention_blocks[level]: - layers.append( - AttentionBlock( - ch, - use_checkpoint=use_checkpoint, - num_heads=num_heads, - num_head_channels=dim_head, - use_new_attention_order=use_new_attention_order, - ) if not use_spatial_transformer else SpatialTransformer( - ch, num_heads, dim_head, depth=transformer_depth, context_dim=context_dim, - disable_self_attn=disabled_sa, use_linear=use_linear_in_transformer, - use_checkpoint=use_checkpoint - ) - ) - self.input_blocks.append(TimestepEmbedSequential(*layers)) - self._feature_size += ch - input_block_chans.append(ch) - if level != len(channel_mult) - 1: - out_ch = ch - self.input_blocks.append( - TimestepEmbedSequential( - ResBlock( - ch, - time_embed_dim, - dropout, - out_channels=out_ch, - dims=dims, - use_checkpoint=use_checkpoint, - use_scale_shift_norm=use_scale_shift_norm, - down=True, - ) - if resblock_updown - else Downsample( - ch, conv_resample, dims=dims, out_channels=out_ch - ) - ) - ) - ch = out_ch - input_block_chans.append(ch) - ds *= 2 - self._feature_size += ch - - if num_head_channels == -1: - dim_head = ch // num_heads - else: - num_heads = ch // num_head_channels - dim_head = num_head_channels - if legacy: - #num_heads = 1 - dim_head = ch // num_heads if use_spatial_transformer else num_head_channels - self.middle_block = TimestepEmbedSequential( - ResBlock( - ch, - time_embed_dim, - dropout, - dims=dims, - use_checkpoint=use_checkpoint, - use_scale_shift_norm=use_scale_shift_norm, - ), - AttentionBlock( - ch, - use_checkpoint=use_checkpoint, - num_heads=num_heads, - num_head_channels=dim_head, - use_new_attention_order=use_new_attention_order, - ) if not use_spatial_transformer else SpatialTransformer( # always uses a self-attn - ch, num_heads, dim_head, depth=transformer_depth, context_dim=context_dim, - disable_self_attn=disable_middle_self_attn, use_linear=use_linear_in_transformer, - use_checkpoint=use_checkpoint - ), - ResBlock( - ch, - time_embed_dim, - dropout, - dims=dims, - use_checkpoint=use_checkpoint, - use_scale_shift_norm=use_scale_shift_norm, - ), - ) - self._feature_size += ch - - self.output_blocks = nn.ModuleList([]) - for level, mult in list(enumerate(channel_mult))[::-1]: - for i in range(self.num_res_blocks[level] + 1): - ich = input_block_chans.pop() - layers = [ - ResBlock( - ch + ich, - time_embed_dim, - dropout, - out_channels=model_channels * mult, - dims=dims, - use_checkpoint=use_checkpoint, - use_scale_shift_norm=use_scale_shift_norm, - ) - ] - ch = model_channels * mult - if ds in attention_resolutions: - if num_head_channels == -1: - dim_head = ch // num_heads - else: - num_heads = ch // num_head_channels - dim_head = num_head_channels - if legacy: - #num_heads = 1 - dim_head = ch // num_heads if use_spatial_transformer else num_head_channels - if exists(disable_self_attentions): - disabled_sa = disable_self_attentions[level] - else: - disabled_sa = False - - if not exists(num_attention_blocks) or i < num_attention_blocks[level]: - layers.append( - AttentionBlock( - ch, - use_checkpoint=use_checkpoint, - num_heads=num_heads_upsample, - num_head_channels=dim_head, - use_new_attention_order=use_new_attention_order, - ) if not use_spatial_transformer else SpatialTransformer( - ch, num_heads, dim_head, depth=transformer_depth, context_dim=context_dim, - disable_self_attn=disabled_sa, use_linear=use_linear_in_transformer, - use_checkpoint=use_checkpoint - ) - ) - if level and i == self.num_res_blocks[level]: - out_ch = ch - layers.append( - ResBlock( - ch, 
- time_embed_dim, - dropout, - out_channels=out_ch, - dims=dims, - use_checkpoint=use_checkpoint, - use_scale_shift_norm=use_scale_shift_norm, - up=True, - ) - if resblock_updown - else Upsample(ch, conv_resample, dims=dims, out_channels=out_ch) - ) - ds //= 2 - self.output_blocks.append(TimestepEmbedSequential(*layers)) - self._feature_size += ch - - self.out = nn.Sequential( - normalization(ch), - nn.SiLU(), - zero_module(conv_nd(dims, model_channels, out_channels, 3, padding=1)), - ) - if self.predict_codebook_ids: - self.id_predictor = nn.Sequential( - normalization(ch), - conv_nd(dims, model_channels, n_embed, 1), - #nn.LogSoftmax(dim=1) # change to cross_entropy and produce non-normalized logits - ) - - def convert_to_fp16(self): - """ - Convert the torso of the model to float16. - """ - self.input_blocks.apply(convert_module_to_f16) - self.middle_block.apply(convert_module_to_f16) - self.output_blocks.apply(convert_module_to_f16) - - def convert_to_fp32(self): - """ - Convert the torso of the model to float32. - """ - self.input_blocks.apply(convert_module_to_f32) - self.middle_block.apply(convert_module_to_f32) - self.output_blocks.apply(convert_module_to_f32) - - def forward(self, x, timesteps=None, context=None, y=None,**kwargs): - """ - Apply the model to an input batch. - :param x: an [N x C x ...] Tensor of inputs. - :param timesteps: a 1-D batch of timesteps. - :param context: conditioning plugged in via crossattn - :param y: an [N] Tensor of labels, if class-conditional. - :return: an [N x C x ...] Tensor of outputs. - """ - assert (y is not None) == ( - self.num_classes is not None - ), "must specify y if and only if the model is class-conditional" - hs = [] - t_emb = timestep_embedding(timesteps, self.model_channels, repeat_only=False) - emb = self.time_embed(t_emb) - - if self.num_classes is not None: - assert y.shape[0] == x.shape[0] - emb = emb + self.label_emb(y) - - h = x.type(self.dtype) - for module in self.input_blocks: - h = module(h, emb, context) - hs.append(h) - h = self.middle_block(h, emb, context) - for module in self.output_blocks: - h = th.cat([h, hs.pop()], dim=1) - h = module(h, emb, context) - h = h.type(x.dtype) - if self.predict_codebook_ids: - return self.id_predictor(h) - else: - return self.out(h) diff --git a/spaces/gngpostalsrvc/COHeN_demo/README.md b/spaces/gngpostalsrvc/COHeN_demo/README.md deleted file mode 100644 index 318fec1b04c9c132b33ced3101d85d255e0edb85..0000000000000000000000000000000000000000 --- a/spaces/gngpostalsrvc/COHeN_demo/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: COHeN -emoji: ⚡ -colorFrom: green -colorTo: purple -sdk: gradio -sdk_version: 3.23.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/gotiQspiryo/whisper-ui/examples/Download Fida Full Movie Hd 1080p In Tamil and Enjoy a Humorous and Emotional Romance.md b/spaces/gotiQspiryo/whisper-ui/examples/Download Fida Full Movie Hd 1080p In Tamil and Enjoy a Humorous and Emotional Romance.md deleted file mode 100644 index 8c853f99e1b09a68ef6a352b573f4b0de399f919..0000000000000000000000000000000000000000 --- a/spaces/gotiQspiryo/whisper-ui/examples/Download Fida Full Movie Hd 1080p In Tamil and Enjoy a Humorous and Emotional Romance.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Fida Full Movie Hd 1080p In Tamil Download Movie


    Downloadhttps://urlgoal.com/2uyMJ4



    - - aaccfb2cb3
    -
    -
    -

    diff --git a/spaces/gotiQspiryo/whisper-ui/examples/Medal Of Honor Warfighter Bug Fixer Free !!EXCLUSIVE!! Download.md b/spaces/gotiQspiryo/whisper-ui/examples/Medal Of Honor Warfighter Bug Fixer Free !!EXCLUSIVE!! Download.md deleted file mode 100644 index bb38597591d88523033f4a9011749c16feb55bdf..0000000000000000000000000000000000000000 --- a/spaces/gotiQspiryo/whisper-ui/examples/Medal Of Honor Warfighter Bug Fixer Free !!EXCLUSIVE!! Download.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Medal Of Honor Warfighter Bug Fixer Free Download


    Download >>>>> https://urlgoal.com/2uyLYG



    - -How to fix Medal of Honor Warfighter direct x error and game freeze. See later. Share. Copy the link. Information. In Medal of Honor Warfighter direct x does not work, the picture freezes and the game crashes. You can fix the error in some cases by adjusting the graphics and monitor resolution. See below. You can also download Medal of Honor Warfighter direct x and then install it if possible. Update: If you cannot fix the error, then you will need to install Microsoft Visual C++ 2005, Microsoft Visual C++ 2008, Microsoft Visual C++ 2010, or Microsoft Visual C++ 2012. 8a78ff9644
    -
    -
    -

    diff --git a/spaces/gradio-discord-bots/StableBeluga-7B-Chat/README.md b/spaces/gradio-discord-bots/StableBeluga-7B-Chat/README.md deleted file mode 100644 index 5bb88ad7ee4c647f07ca6ae456c6de51aa552731..0000000000000000000000000000000000000000 --- a/spaces/gradio-discord-bots/StableBeluga-7B-Chat/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: StableBeluga 7B Chat -emoji: 🦀 -colorFrom: pink -colorTo: yellow -sdk: gradio -sdk_version: 3.39.0 -app_file: app.py -pinned: false -license: apache-2.0 -duplicated_from: Sentdex/StableBeluga-7B-Chat ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/gradio/HuBERT/examples/criss/download_and_preprocess_flores_test.sh b/spaces/gradio/HuBERT/examples/criss/download_and_preprocess_flores_test.sh deleted file mode 100644 index ed4b390fbdee3991efeb298050e12065d7fe605b..0000000000000000000000000000000000000000 --- a/spaces/gradio/HuBERT/examples/criss/download_and_preprocess_flores_test.sh +++ /dev/null @@ -1,64 +0,0 @@ -#!/bin/bash -# Copyright (c) Facebook, Inc. and its affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -SPM_ENCODE=flores/scripts/spm_encode.py -DATA=data_tmp -SPM_MODEL=criss_checkpoints/sentence.bpe.model -DICT=criss_checkpoints/dict.txt - -download_data() { - CORPORA=$1 - URL=$2 - - if [ -f $CORPORA ]; then - echo "$CORPORA already exists, skipping download" - else - echo "Downloading $URL" - wget $URL -O $CORPORA --no-check-certificate || rm -f $CORPORA - if [ -f $CORPORA ]; then - echo "$URL successfully downloaded." - else - echo "$URL not successfully downloaded." - rm -f $CORPORA - fi - fi -} - -if [[ -f flores ]]; then - echo "flores already cloned" -else - git clone https://github.com/facebookresearch/flores -fi - -mkdir -p $DATA -download_data $DATA/wikipedia_en_ne_si_test_sets.tgz "https://github.com/facebookresearch/flores/raw/master/data/wikipedia_en_ne_si_test_sets.tgz" -pushd $DATA -pwd -tar -vxf wikipedia_en_ne_si_test_sets.tgz -popd - - -for lang in ne_NP si_LK; do - datadir=$DATA/${lang}-en_XX-flores - rm -rf $datadir - mkdir -p $datadir - TEST_PREFIX=$DATA/wikipedia_en_ne_si_test_sets/wikipedia.test - python $SPM_ENCODE \ - --model ${SPM_MODEL} \ - --output_format=piece \ - --inputs ${TEST_PREFIX}.${lang:0:2}-en.${lang:0:2} ${TEST_PREFIX}.${lang:0:2}-en.en \ - --outputs $datadir/test.bpe.${lang}-en_XX.${lang} $datadir/test.bpe.${lang}-en_XX.en_XX - - # binarize data - fairseq-preprocess \ - --source-lang ${lang} --target-lang en_XX \ - --testpref $datadir/test.bpe.${lang}-en_XX \ - --destdir $datadir \ - --srcdict ${DICT} \ - --joined-dictionary \ - --workers 4 -done diff --git a/spaces/gradio/neon-tts-plugin-coqui/DESCRIPTION.md b/spaces/gradio/neon-tts-plugin-coqui/DESCRIPTION.md deleted file mode 100644 index d9e5f1cc47c696247c5e3bc6ed193f3992dff75a..0000000000000000000000000000000000000000 --- a/spaces/gradio/neon-tts-plugin-coqui/DESCRIPTION.md +++ /dev/null @@ -1 +0,0 @@ -This demo converts text to speech in 14 languages. 
\ No newline at end of file diff --git a/spaces/grisiemjahand/Image-and-3D-Model-Creator/PIFu/apps/render_data.py b/spaces/grisiemjahand/Image-and-3D-Model-Creator/PIFu/apps/render_data.py deleted file mode 100644 index 563c03fba6e304eced73ca283152a968a65c3b8e..0000000000000000000000000000000000000000 --- a/spaces/grisiemjahand/Image-and-3D-Model-Creator/PIFu/apps/render_data.py +++ /dev/null @@ -1,290 +0,0 @@ -#from data.config import raw_dataset, render_dataset, archive_dataset, model_list, zip_path - -from lib.renderer.camera import Camera -import numpy as np -from lib.renderer.mesh import load_obj_mesh, compute_tangent, compute_normal, load_obj_mesh_mtl -from lib.renderer.camera import Camera -import os -import cv2 -import time -import math -import random -import pyexr -import argparse -from tqdm import tqdm - - -def make_rotate(rx, ry, rz): - sinX = np.sin(rx) - sinY = np.sin(ry) - sinZ = np.sin(rz) - - cosX = np.cos(rx) - cosY = np.cos(ry) - cosZ = np.cos(rz) - - Rx = np.zeros((3,3)) - Rx[0, 0] = 1.0 - Rx[1, 1] = cosX - Rx[1, 2] = -sinX - Rx[2, 1] = sinX - Rx[2, 2] = cosX - - Ry = np.zeros((3,3)) - Ry[0, 0] = cosY - Ry[0, 2] = sinY - Ry[1, 1] = 1.0 - Ry[2, 0] = -sinY - Ry[2, 2] = cosY - - Rz = np.zeros((3,3)) - Rz[0, 0] = cosZ - Rz[0, 1] = -sinZ - Rz[1, 0] = sinZ - Rz[1, 1] = cosZ - Rz[2, 2] = 1.0 - - R = np.matmul(np.matmul(Rz,Ry),Rx) - return R - -def rotateSH(SH, R): - SHn = SH - - # 1st order - SHn[1] = R[1,1]*SH[1] - R[1,2]*SH[2] + R[1,0]*SH[3] - SHn[2] = -R[2,1]*SH[1] + R[2,2]*SH[2] - R[2,0]*SH[3] - SHn[3] = R[0,1]*SH[1] - R[0,2]*SH[2] + R[0,0]*SH[3] - - # 2nd order - SHn[4:,0] = rotateBand2(SH[4:,0],R) - SHn[4:,1] = rotateBand2(SH[4:,1],R) - SHn[4:,2] = rotateBand2(SH[4:,2],R) - - return SHn - -def rotateBand2(x, R): - s_c3 = 0.94617469575 - s_c4 = -0.31539156525 - s_c5 = 0.54627421529 - - s_c_scale = 1.0/0.91529123286551084 - s_c_scale_inv = 0.91529123286551084 - - s_rc2 = 1.5853309190550713*s_c_scale - s_c4_div_c3 = s_c4/s_c3 - s_c4_div_c3_x2 = (s_c4/s_c3)*2.0 - - s_scale_dst2 = s_c3 * s_c_scale_inv - s_scale_dst4 = s_c5 * s_c_scale_inv - - sh0 = x[3] + x[4] + x[4] - x[1] - sh1 = x[0] + s_rc2*x[2] + x[3] + x[4] - sh2 = x[0] - sh3 = -x[3] - sh4 = -x[1] - - r2x = R[0][0] + R[0][1] - r2y = R[1][0] + R[1][1] - r2z = R[2][0] + R[2][1] - - r3x = R[0][0] + R[0][2] - r3y = R[1][0] + R[1][2] - r3z = R[2][0] + R[2][2] - - r4x = R[0][1] + R[0][2] - r4y = R[1][1] + R[1][2] - r4z = R[2][1] + R[2][2] - - sh0_x = sh0 * R[0][0] - sh0_y = sh0 * R[1][0] - d0 = sh0_x * R[1][0] - d1 = sh0_y * R[2][0] - d2 = sh0 * (R[2][0] * R[2][0] + s_c4_div_c3) - d3 = sh0_x * R[2][0] - d4 = sh0_x * R[0][0] - sh0_y * R[1][0] - - sh1_x = sh1 * R[0][2] - sh1_y = sh1 * R[1][2] - d0 += sh1_x * R[1][2] - d1 += sh1_y * R[2][2] - d2 += sh1 * (R[2][2] * R[2][2] + s_c4_div_c3) - d3 += sh1_x * R[2][2] - d4 += sh1_x * R[0][2] - sh1_y * R[1][2] - - sh2_x = sh2 * r2x - sh2_y = sh2 * r2y - d0 += sh2_x * r2y - d1 += sh2_y * r2z - d2 += sh2 * (r2z * r2z + s_c4_div_c3_x2) - d3 += sh2_x * r2z - d4 += sh2_x * r2x - sh2_y * r2y - - sh3_x = sh3 * r3x - sh3_y = sh3 * r3y - d0 += sh3_x * r3y - d1 += sh3_y * r3z - d2 += sh3 * (r3z * r3z + s_c4_div_c3_x2) - d3 += sh3_x * r3z - d4 += sh3_x * r3x - sh3_y * r3y - - sh4_x = sh4 * r4x - sh4_y = sh4 * r4y - d0 += sh4_x * r4y - d1 += sh4_y * r4z - d2 += sh4 * (r4z * r4z + s_c4_div_c3_x2) - d3 += sh4_x * r4z - d4 += sh4_x * r4x - sh4_y * r4y - - dst = x - dst[0] = d0 - dst[1] = -d1 - dst[2] = d2 * s_scale_dst2 - dst[3] = -d3 - dst[4] = d4 * s_scale_dst4 - - return dst - -def 
render_prt_ortho(out_path, folder_name, subject_name, shs, rndr, rndr_uv, im_size, angl_step=4, n_light=1, pitch=[0]): - cam = Camera(width=im_size, height=im_size) - cam.ortho_ratio = 0.4 * (512 / im_size) - cam.near = -100 - cam.far = 100 - cam.sanity_check() - - # set path for obj, prt - mesh_file = os.path.join(folder_name, subject_name + '_100k.obj') - if not os.path.exists(mesh_file): - print('ERROR: obj file does not exist!!', mesh_file) - return - prt_file = os.path.join(folder_name, 'bounce', 'bounce0.txt') - if not os.path.exists(prt_file): - print('ERROR: prt file does not exist!!!', prt_file) - return - face_prt_file = os.path.join(folder_name, 'bounce', 'face.npy') - if not os.path.exists(face_prt_file): - print('ERROR: face prt file does not exist!!!', prt_file) - return - text_file = os.path.join(folder_name, 'tex', subject_name + '_dif_2k.jpg') - if not os.path.exists(text_file): - print('ERROR: dif file does not exist!!', text_file) - return - - texture_image = cv2.imread(text_file) - texture_image = cv2.cvtColor(texture_image, cv2.COLOR_BGR2RGB) - - vertices, faces, normals, faces_normals, textures, face_textures = load_obj_mesh(mesh_file, with_normal=True, with_texture=True) - vmin = vertices.min(0) - vmax = vertices.max(0) - up_axis = 1 if (vmax-vmin).argmax() == 1 else 2 - - vmed = np.median(vertices, 0) - vmed[up_axis] = 0.5*(vmax[up_axis]+vmin[up_axis]) - y_scale = 180/(vmax[up_axis] - vmin[up_axis]) - - rndr.set_norm_mat(y_scale, vmed) - rndr_uv.set_norm_mat(y_scale, vmed) - - tan, bitan = compute_tangent(vertices, faces, normals, textures, face_textures) - prt = np.loadtxt(prt_file) - face_prt = np.load(face_prt_file) - rndr.set_mesh(vertices, faces, normals, faces_normals, textures, face_textures, prt, face_prt, tan, bitan) - rndr.set_albedo(texture_image) - - rndr_uv.set_mesh(vertices, faces, normals, faces_normals, textures, face_textures, prt, face_prt, tan, bitan) - rndr_uv.set_albedo(texture_image) - - os.makedirs(os.path.join(out_path, 'GEO', 'OBJ', subject_name),exist_ok=True) - os.makedirs(os.path.join(out_path, 'PARAM', subject_name),exist_ok=True) - os.makedirs(os.path.join(out_path, 'RENDER', subject_name),exist_ok=True) - os.makedirs(os.path.join(out_path, 'MASK', subject_name),exist_ok=True) - os.makedirs(os.path.join(out_path, 'UV_RENDER', subject_name),exist_ok=True) - os.makedirs(os.path.join(out_path, 'UV_MASK', subject_name),exist_ok=True) - os.makedirs(os.path.join(out_path, 'UV_POS', subject_name),exist_ok=True) - os.makedirs(os.path.join(out_path, 'UV_NORMAL', subject_name),exist_ok=True) - - if not os.path.exists(os.path.join(out_path, 'val.txt')): - f = open(os.path.join(out_path, 'val.txt'), 'w') - f.close() - - # copy obj file - cmd = 'cp %s %s' % (mesh_file, os.path.join(out_path, 'GEO', 'OBJ', subject_name)) - print(cmd) - os.system(cmd) - - for p in pitch: - for y in tqdm(range(0, 360, angl_step)): - R = np.matmul(make_rotate(math.radians(p), 0, 0), make_rotate(0, math.radians(y), 0)) - if up_axis == 2: - R = np.matmul(R, make_rotate(math.radians(90),0,0)) - - rndr.rot_matrix = R - rndr_uv.rot_matrix = R - rndr.set_camera(cam) - rndr_uv.set_camera(cam) - - for j in range(n_light): - sh_id = random.randint(0,shs.shape[0]-1) - sh = shs[sh_id] - sh_angle = 0.2*np.pi*(random.random()-0.5) - sh = rotateSH(sh, make_rotate(0, sh_angle, 0).T) - - dic = {'sh': sh, 'ortho_ratio': cam.ortho_ratio, 'scale': y_scale, 'center': vmed, 'R': R} - - rndr.set_sh(sh) - rndr.analytic = False - rndr.use_inverse_depth = False - rndr.display() - - out_all_f = 
rndr.get_color(0) - out_mask = out_all_f[:,:,3] - out_all_f = cv2.cvtColor(out_all_f, cv2.COLOR_RGBA2BGR) - - np.save(os.path.join(out_path, 'PARAM', subject_name, '%d_%d_%02d.npy'%(y,p,j)),dic) - cv2.imwrite(os.path.join(out_path, 'RENDER', subject_name, '%d_%d_%02d.jpg'%(y,p,j)),255.0*out_all_f) - cv2.imwrite(os.path.join(out_path, 'MASK', subject_name, '%d_%d_%02d.png'%(y,p,j)),255.0*out_mask) - - rndr_uv.set_sh(sh) - rndr_uv.analytic = False - rndr_uv.use_inverse_depth = False - rndr_uv.display() - - uv_color = rndr_uv.get_color(0) - uv_color = cv2.cvtColor(uv_color, cv2.COLOR_RGBA2BGR) - cv2.imwrite(os.path.join(out_path, 'UV_RENDER', subject_name, '%d_%d_%02d.jpg'%(y,p,j)),255.0*uv_color) - - if y == 0 and j == 0 and p == pitch[0]: - uv_pos = rndr_uv.get_color(1) - uv_mask = uv_pos[:,:,3] - cv2.imwrite(os.path.join(out_path, 'UV_MASK', subject_name, '00.png'),255.0*uv_mask) - - data = {'default': uv_pos[:,:,:3]} # default is a reserved name - pyexr.write(os.path.join(out_path, 'UV_POS', subject_name, '00.exr'), data) - - uv_nml = rndr_uv.get_color(2) - uv_nml = cv2.cvtColor(uv_nml, cv2.COLOR_RGBA2BGR) - cv2.imwrite(os.path.join(out_path, 'UV_NORMAL', subject_name, '00.png'),255.0*uv_nml) - - -if __name__ == '__main__': - shs = np.load('./env_sh.npy') - - parser = argparse.ArgumentParser() - parser.add_argument('-i', '--input', type=str, default='/home/shunsuke/Downloads/rp_dennis_posed_004_OBJ') - parser.add_argument('-o', '--out_dir', type=str, default='/home/shunsuke/Documents/hf_human') - parser.add_argument('-m', '--ms_rate', type=int, default=1, help='higher ms rate results in less aliased output. MESA renderer only supports ms_rate=1.') - parser.add_argument('-e', '--egl', action='store_true', help='egl rendering option. use this when rendering with headless server with NVIDIA GPU') - parser.add_argument('-s', '--size', type=int, default=512, help='rendering image size') - args = parser.parse_args() - - # NOTE: GL context has to be created before any other OpenGL function loads. 
- from lib.renderer.gl.init_gl import initialize_GL_context - initialize_GL_context(width=args.size, height=args.size, egl=args.egl) - - from lib.renderer.gl.prt_render import PRTRender - rndr = PRTRender(width=args.size, height=args.size, ms_rate=args.ms_rate, egl=args.egl) - rndr_uv = PRTRender(width=args.size, height=args.size, uv_mode=True, egl=args.egl) - - if args.input[-1] == '/': - args.input = args.input[:-1] - subject_name = args.input.split('/')[-1][:-4] - render_prt_ortho(args.out_dir, args.input, subject_name, shs, rndr, rndr_uv, args.size, 1, 1, pitch=[0]) \ No newline at end of file diff --git a/spaces/gsaivinay/open_llm_leaderboard/src/rate_limiting.py b/spaces/gsaivinay/open_llm_leaderboard/src/rate_limiting.py deleted file mode 100644 index 49ae190069d8e68ffe69b54e01661d217459e118..0000000000000000000000000000000000000000 --- a/spaces/gsaivinay/open_llm_leaderboard/src/rate_limiting.py +++ /dev/null @@ -1,13 +0,0 @@ -from datetime import datetime, timedelta, timezone - - -def user_submission_permission(submission_name, users_to_submission_dates, rate_limit_period): - org_or_user, _ = submission_name.split("/") - if org_or_user not in users_to_submission_dates: - return 0 - submission_dates = sorted(users_to_submission_dates[org_or_user]) - - time_limit = (datetime.now(timezone.utc) - timedelta(days=rate_limit_period)).strftime("%Y-%m-%dT%H:%M:%SZ") - submissions_after_timelimit = [d for d in submission_dates if d > time_limit] - - return len(submissions_after_timelimit) diff --git a/spaces/gstaff/KiteWind/templates/stlite/stlite-template.html b/spaces/gstaff/KiteWind/templates/stlite/stlite-template.html deleted file mode 100644 index aaf596bfa36de3c143df61d6d54f8ef3d1aec798..0000000000000000000000000000000000000000 --- a/spaces/gstaff/KiteWind/templates/stlite/stlite-template.html +++ /dev/null @@ -1,31 +0,0 @@ - - - - - - - stlite app - - - - -
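For reference, the `user_submission_permission` helper in the rate_limiting.py file above returns how many submissions an org or user has made within the rate-limit window. A minimal usage sketch, with made-up names and dates (the import path is assumed from the deleted file's location under `src/`):

```python
from datetime import datetime, timedelta, timezone

from src.rate_limiting import user_submission_permission  # hypothetical import path

def iso(days_ago: int) -> str:
    """Format a UTC timestamp `days_ago` days in the past, matching the helper's format."""
    return (datetime.now(timezone.utc) - timedelta(days=days_ago)).strftime("%Y-%m-%dT%H:%M:%SZ")

# Hypothetical submission history: org/user -> ISO timestamps of past submissions.
users_to_submission_dates = {"my-org": [iso(1), iso(10)]}

# Submissions by "my-org" within the last 7 days -> 1 (only the 1-day-old entry counts).
print(user_submission_permission("my-org/my-model", users_to_submission_dates, rate_limit_period=7))
```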
    - - - - diff --git a/spaces/gwang-kim/DATID-3D/pose_estimation/util/html.py b/spaces/gwang-kim/DATID-3D/pose_estimation/util/html.py deleted file mode 100644 index cc3262a1eafda34842e4dbad47bb6ba72f0c5a68..0000000000000000000000000000000000000000 --- a/spaces/gwang-kim/DATID-3D/pose_estimation/util/html.py +++ /dev/null @@ -1,86 +0,0 @@ -import dominate -from dominate.tags import meta, h3, table, tr, td, p, a, img, br -import os - - -class HTML: - """This HTML class allows us to save images and write texts into a single HTML file. - - It consists of functions such as (add a text header to the HTML file), - (add a row of images to the HTML file), and (save the HTML to the disk). - It is based on Python library 'dominate', a Python library for creating and manipulating HTML documents using a DOM API. - """ - - def __init__(self, web_dir, title, refresh=0): - """Initialize the HTML classes - - Parameters: - web_dir (str) -- a directory that stores the webpage. HTML file will be created at /index.html; images will be saved at 0: - with self.doc.head: - meta(http_equiv="refresh", content=str(refresh)) - - def get_image_dir(self): - """Return the directory that stores images""" - return self.img_dir - - def add_header(self, text): - """Insert a header to the HTML file - - Parameters: - text (str) -- the header text - """ - with self.doc: - h3(text) - - def add_images(self, ims, txts, links, width=400): - """add images to the HTML file - - Parameters: - ims (str list) -- a list of image paths - txts (str list) -- a list of image names shown on the website - links (str list) -- a list of hyperref links; when you click an image, it will redirect you to a new page - """ - self.t = table(border=1, style="table-layout: fixed;") # Insert a table - self.doc.add(self.t) - with self.t: - with tr(): - for im, txt, link in zip(ims, txts, links): - with td(style="word-wrap: break-word;", halign="center", valign="top"): - with p(): - with a(href=os.path.join('images', link)): - img(style="width:%dpx" % width, src=os.path.join('images', im)) - br() - p(txt) - - def save(self): - """save the current content to the HMTL file""" - html_file = '%s/index.html' % self.web_dir - f = open(html_file, 'wt') - f.write(self.doc.render()) - f.close() - - -if __name__ == '__main__': # we show an example usage here. - html = HTML('web/', 'test_html') - html.add_header('hello world') - - ims, txts, links = [], [], [] - for n in range(4): - ims.append('image_%d.png' % n) - txts.append('text_%d' % n) - links.append('image_%d.png' % n) - html.add_images(ims, txts, links) - html.save() diff --git a/spaces/gyugnsu/DragGan-Inversion/stylegan_human/utils/models_utils.py b/spaces/gyugnsu/DragGan-Inversion/stylegan_human/utils/models_utils.py deleted file mode 100644 index 53b2c3fa9d7035364dd34384fcdab78c1ae5c6af..0000000000000000000000000000000000000000 --- a/spaces/gyugnsu/DragGan-Inversion/stylegan_human/utils/models_utils.py +++ /dev/null @@ -1,28 +0,0 @@ -# Copyright (c) SenseTime Research. All rights reserved. 
- - -import pickle -import functools -import torch -from pti.pti_configs import paths_config, global_config - - -def toogle_grad(model, flag=True): - for p in model.parameters(): - p.requires_grad = flag - - -def load_tuned_G(run_id, type): - new_G_path = f'{paths_config.checkpoints_dir}/model_{run_id}_{type}.pt' - with open(new_G_path, 'rb') as f: - new_G = torch.load(f).to(global_config.device).eval() - new_G = new_G.float() - toogle_grad(new_G, False) - return new_G - - -def load_old_G(): - with open(paths_config.stylegan2_ada_shhq, 'rb') as f: - old_G = pickle.load(f)['G_ema'].to(global_config.device).eval() - old_G = old_G.float() - return old_G diff --git a/spaces/halfdevil/demochat/onetime.py b/spaces/halfdevil/demochat/onetime.py deleted file mode 100644 index d4e61835206f613c99570be85fae3270c9362fe4..0000000000000000000000000000000000000000 --- a/spaces/halfdevil/demochat/onetime.py +++ /dev/null @@ -1,5 +0,0 @@ -from gpt4all import GPT4All - -# Instantiate the GPT4All model -#gptj = GPT4All("ggml-gpt4all-j-v1.3-groovy") -gptj = GPT4All("GPT4All-13B-snoozy.ggmlv3.q4_0") diff --git a/spaces/hamacojr/CAT-Seg/datasets/prepare_voc.py b/spaces/hamacojr/CAT-Seg/datasets/prepare_voc.py deleted file mode 100644 index 6ab2ca43ada301d72ec09df61c82bf30d2f20036..0000000000000000000000000000000000000000 --- a/spaces/hamacojr/CAT-Seg/datasets/prepare_voc.py +++ /dev/null @@ -1,77 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# Copyright (c) Meta Platforms, Inc. All Rights Reserved -# Modified by Feng Liang from https://github.com/MendelXu/zsseg.baseline/blob/master/datasets/prepare_voc_sem_seg.py -# Modified by Heeseong Shin from https://github.com/facebookresearch/ov-seg/blob/main/datasets/prepare_voc_sem_seg.py - -import os -import os.path as osp -from pathlib import Path -import tqdm - -import numpy as np -from PIL import Image - - -clsID_to_trID = { - 0: 255, - 1: 0, - 2: 1, - 3: 2, - 4: 3, - 5: 4, - 6: 5, - 7: 6, - 8: 7, - 9: 8, - 10: 9, - 11: 10, - 12: 11, - 13: 12, - 14: 13, - 15: 14, - 16: 15, - 17: 16, - 18: 17, - 19: 18, - 20: 19, - 255: 255, -} -clsID_to_trID_bg = clsID_to_trID.copy() -clsID_to_trID_bg[0] = 20 - -def convert_to_trainID( - maskpath, out_mask_dir, is_train, clsID_to_trID=clsID_to_trID, suffix="" -): - mask = np.array(Image.open(maskpath)) - mask_copy = np.ones_like(mask, dtype=np.uint8) * 255 - for clsID, trID in clsID_to_trID.items(): - mask_copy[mask == clsID] = trID - seg_filename = ( - osp.join(out_mask_dir, "train" + suffix, osp.basename(maskpath)) - if is_train - else osp.join(out_mask_dir, "val" + suffix, osp.basename(maskpath)) - ) - if len(np.unique(mask_copy)) == 1 and np.unique(mask_copy)[0] == 255: - return - Image.fromarray(mask_copy).save(seg_filename, "PNG") - - - -if __name__ == "__main__": - dataset_dir = Path(os.getenv("DETECTRON2_DATASETS", "datasets")) - print('Caution: we only generate the validation set!') - voc_path = dataset_dir / "VOCdevkit" / "VOC2012" - out_mask_dir = voc_path / "annotations_detectron2" - out_mask_dir_bg = voc_path / "annotations_detectron2_bg" - #out_image_dir = voc_path / "images_detectron2" - for name in ["val"]: - os.makedirs((out_mask_dir / name), exist_ok=True) - os.makedirs((out_mask_dir_bg / name), exist_ok=True) - #os.makedirs((out_image_dir / name), exist_ok=True) - val_list = [ - osp.join(voc_path, "SegmentationClassAug", f + ".png") - for f in np.loadtxt(osp.join(voc_path, "ImageSets/Segmentation/val.txt"), dtype=np.str).tolist() - ] - for file in tqdm.tqdm(val_list): - convert_to_trainID(file, 
out_mask_dir, is_train=False) - convert_to_trainID(file, out_mask_dir_bg, is_train=False, clsID_to_trID=clsID_to_trID_bg) \ No newline at end of file diff --git a/spaces/hamacojr/CAT-Seg/open_clip/src/open_clip/openai.py b/spaces/hamacojr/CAT-Seg/open_clip/src/open_clip/openai.py deleted file mode 100644 index cc4e13e876d6a7a3463b457e62c517cb063b1356..0000000000000000000000000000000000000000 --- a/spaces/hamacojr/CAT-Seg/open_clip/src/open_clip/openai.py +++ /dev/null @@ -1,144 +0,0 @@ -""" OpenAI pretrained model functions - -Adapted from https://github.com/openai/CLIP. Originally MIT License, Copyright (c) 2021 OpenAI. -""" - -import os -import warnings -from typing import List, Optional, Union - -import torch - -from .model import build_model_from_openai_state_dict, convert_weights_to_lp, get_cast_dtype -from .pretrained import get_pretrained_url, list_pretrained_models_by_tag, download_pretrained_from_url - -__all__ = ["list_openai_models", "load_openai_model"] - - -def list_openai_models() -> List[str]: - """Returns the names of available CLIP models""" - return list_pretrained_models_by_tag('openai') - - -def load_openai_model( - name: str, - precision: Optional[str] = None, - device: Optional[Union[str, torch.device]] = None, - jit: bool = True, - cache_dir: Optional[str] = None, -): - """Load a CLIP model - - Parameters - ---------- - name : str - A model name listed by `clip.available_models()`, or the path to a model checkpoint containing the state_dict - precision: str - Model precision, if None defaults to 'fp32' if device == 'cpu' else 'fp16'. - device : Union[str, torch.device] - The device to put the loaded model - jit : bool - Whether to load the optimized JIT model (default) or more hackable non-JIT model. - cache_dir : Optional[str] - The directory to cache the downloaded model weights - - Returns - ------- - model : torch.nn.Module - The CLIP model - preprocess : Callable[[PIL.Image], torch.Tensor] - A torchvision transform that converts a PIL image into a tensor that the returned model can take as its input - """ - if device is None: - device = "cuda" if torch.cuda.is_available() else "cpu" - if precision is None: - precision = 'fp32' if device == 'cpu' else 'fp16' - - if get_pretrained_url(name, 'openai'): - model_path = download_pretrained_from_url(get_pretrained_url(name, 'openai'), cache_dir=cache_dir) - elif os.path.isfile(name): - model_path = name - else: - raise RuntimeError(f"Model {name} not found; available models = {list_openai_models()}") - - try: - # loading JIT archive - model = torch.jit.load(model_path, map_location=device if jit else "cpu").eval() - state_dict = None - except RuntimeError: - # loading saved state dict - if jit: - warnings.warn(f"File {model_path} is not a JIT archive. 
Loading as a state dict instead") - jit = False - state_dict = torch.load(model_path, map_location="cpu") - - if not jit: - # Build a non-jit model from the OpenAI jitted model state dict - cast_dtype = get_cast_dtype(precision) - try: - model = build_model_from_openai_state_dict(state_dict or model.state_dict(), cast_dtype=cast_dtype) - except KeyError: - sd = {k[7:]: v for k, v in state_dict["state_dict"].items()} - model = build_model_from_openai_state_dict(sd, cast_dtype=cast_dtype) - - # model from OpenAI state dict is in manually cast fp16 mode, must be converted for AMP/fp32/bf16 use - model = model.to(device) - if precision.startswith('amp') or precision == 'fp32': - model.float() - elif precision == 'bf16': - convert_weights_to_lp(model, dtype=torch.bfloat16) - - return model - - # patch the device names - device_holder = torch.jit.trace(lambda: torch.ones([]).to(torch.device(device)), example_inputs=[]) - device_node = [n for n in device_holder.graph.findAllNodes("prim::Constant") if "Device" in repr(n)][-1] - - def patch_device(module): - try: - graphs = [module.graph] if hasattr(module, "graph") else [] - except RuntimeError: - graphs = [] - - if hasattr(module, "forward1"): - graphs.append(module.forward1.graph) - - for graph in graphs: - for node in graph.findAllNodes("prim::Constant"): - if "value" in node.attributeNames() and str(node["value"]).startswith("cuda"): - node.copyAttributes(device_node) - - model.apply(patch_device) - patch_device(model.encode_image) - patch_device(model.encode_text) - - # patch dtype to float32 (typically for CPU) - if precision == 'fp32': - float_holder = torch.jit.trace(lambda: torch.ones([]).float(), example_inputs=[]) - float_input = list(float_holder.graph.findNode("aten::to").inputs())[1] - float_node = float_input.node() - - def patch_float(module): - try: - graphs = [module.graph] if hasattr(module, "graph") else [] - except RuntimeError: - graphs = [] - - if hasattr(module, "forward1"): - graphs.append(module.forward1.graph) - - for graph in graphs: - for node in graph.findAllNodes("aten::to"): - inputs = list(node.inputs()) - for i in [1, 2]: # dtype can be the second or third argument to aten::to() - if inputs[i].node()["value"] == 5: - inputs[i].node().copyAttributes(float_node) - - model.apply(patch_float) - patch_float(model.encode_image) - patch_float(model.encode_text) - model.float() - - # ensure image_size attr available at consistent location for both jit and non-jit - model.visual.image_size = model.input_resolution.item() - return model diff --git a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/docs/tutorials/models.md b/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/docs/tutorials/models.md deleted file mode 100644 index 456f36d1c03f657ba0b63eb6f26506c4b1b0d60f..0000000000000000000000000000000000000000 --- a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/docs/tutorials/models.md +++ /dev/null @@ -1,151 +0,0 @@ -# Use Models - -Models (and their sub-models) in detectron2 are built by -functions such as `build_model`, `build_backbone`, `build_roi_heads`: -```python -from detectron2.modeling import build_model -model = build_model(cfg) # returns a torch.nn.Module -``` - -`build_model` only builds the model structure, and fill it with random parameters. -See below for how to load an existing checkpoint to the model, -and how to use the `model` object. 
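As a rough end-to-end sketch of the workflow this page describes (build a model, load a checkpoint, run inference on the `list[dict]` input format detailed below), assuming an example config from the model zoo and placeholder file paths:

```python
import cv2
import torch
from detectron2 import model_zoo
from detectron2.checkpoint import DetectionCheckpointer
from detectron2.config import get_cfg
from detectron2.modeling import build_model

# Example config only; any config you trained with would work the same way.
cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file("COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml"))

model = build_model(cfg)                                    # structure with random weights
DetectionCheckpointer(model).load("output/model_999.pth")   # placeholder checkpoint path
model.eval()

# Build the list[dict] input described below: one dict per image, with an
# "image" tensor in (C, H, W) and the desired output "height"/"width".
img = cv2.imread("input.jpg")                               # BGR, the default cfg.INPUT.FORMAT
tensor = torch.as_tensor(img.astype("float32").transpose(2, 0, 1))
inputs = [{"image": tensor, "height": img.shape[0], "width": img.shape[1]}]

with torch.no_grad():
    outputs = model(inputs)                                 # list[dict], one dict per image
print(outputs[0]["instances"].pred_boxes)
```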
- -### Load/Save a Checkpoint -```python -from detectron2.checkpoint import DetectionCheckpointer -DetectionCheckpointer(model).load(file_path) # load a file to model - -checkpointer = DetectionCheckpointer(model, save_dir="output") -checkpointer.save("model_999") # save to output/model_999.pth -``` - -Detectron2's checkpointer recognizes models in pytorch's `.pth` format, as well as the `.pkl` files -in our model zoo. -See [API doc](../modules/checkpoint.html#detectron2.checkpoint.DetectionCheckpointer) -for more details about its usage. - -The model files can be arbitrarily manipulated using `torch.{load,save}` for `.pth` files or -`pickle.{dump,load}` for `.pkl` files. - -### Use a Model - -A model can be called by `outputs = model(inputs)`, where `inputs` is a `list[dict]`. -Each dict corresponds to one image and the required keys -depend on the type of model, and whether the model is in training or evaluation mode. -For example, in order to do inference, -all existing models expect the "image" key, and optionally "height" and "width". -The detailed format of inputs and outputs of existing models are explained below. - -When in training mode, all models are required to be used under an `EventStorage`. -The training statistics will be put into the storage: -```python -from detectron2.utils.events import EventStorage -with EventStorage() as storage: - losses = model(inputs) -``` - -If you only want to do simple inference using an existing model, -[DefaultPredictor](../modules/engine.html#detectron2.engine.defaults.DefaultPredictor) -is a wrapper around model that provides such basic functionality. -It includes default behavior including model loading, preprocessing, -and operates on single image rather than batches. - -### Model Input Format - -Users can implement custom models that support any arbitrary input format. -Here we describe the standard input format that all builtin models support in detectron2. -They all take a `list[dict]` as the inputs. Each dict -corresponds to information about one image. - -The dict may contain the following keys: - -* "image": `Tensor` in (C, H, W) format. The meaning of channels are defined by `cfg.INPUT.FORMAT`. - Image normalization, if any, will be performed inside the model using - `cfg.MODEL.PIXEL_{MEAN,STD}`. -* "instances": an [Instances](../modules/structures.html#detectron2.structures.Instances) - object, with the following fields: - + "gt_boxes": a [Boxes](../modules/structures.html#detectron2.structures.Boxes) object storing N boxes, one for each instance. - + "gt_classes": `Tensor` of long type, a vector of N labels, in range [0, num_categories). - + "gt_masks": a [PolygonMasks](../modules/structures.html#detectron2.structures.PolygonMasks) - or [BitMasks](../modules/structures.html#detectron2.structures.BitMasks) object storing N masks, one for each instance. - + "gt_keypoints": a [Keypoints](../modules/structures.html#detectron2.structures.Keypoints) - object storing N keypoint sets, one for each instance. -* "proposals": an [Instances](../modules/structures.html#detectron2.structures.Instances) - object used only in Fast R-CNN style models, with the following fields: - + "proposal_boxes": a [Boxes](../modules/structures.html#detectron2.structures.Boxes) object storing P proposal boxes. - + "objectness_logits": `Tensor`, a vector of P scores, one for each proposal. -* "height", "width": the **desired** output height and width, which is not necessarily the same - as the height or width of the `image` input field. 
- For example, the `image` input field might be a resized image, - but you may want the outputs to be in **original** resolution. - - If provided, the model will produce output in this resolution, - rather than in the resolution of the `image` as input into the model. This is more efficient and accurate. -* "sem_seg": `Tensor[int]` in (H, W) format. The semantic segmentation ground truth. - Values represent category labels starting from 0. - - -#### How it connects to data loader: - -The output of the default [DatasetMapper]( ../modules/data.html#detectron2.data.DatasetMapper) is a dict -that follows the above format. -After the data loader performs batching, it becomes `list[dict]` which the builtin models support. - - -### Model Output Format - -When in training mode, the builtin models output a `dict[str->ScalarTensor]` with all the losses. - -When in inference mode, the builtin models output a `list[dict]`, one dict for each image. -Based on the tasks the model is doing, each dict may contain the following fields: - -* "instances": [Instances](../modules/structures.html#detectron2.structures.Instances) - object with the following fields: - * "pred_boxes": [Boxes](../modules/structures.html#detectron2.structures.Boxes) object storing N boxes, one for each detected instance. - * "scores": `Tensor`, a vector of N scores. - * "pred_classes": `Tensor`, a vector of N labels in range [0, num_categories). - + "pred_masks": a `Tensor` of shape (N, H, W), masks for each detected instance. - + "pred_keypoints": a `Tensor` of shape (N, num_keypoint, 3). - Each row in the last dimension is (x, y, score). Scores are larger than 0. -* "sem_seg": `Tensor` of (num_categories, H, W), the semantic segmentation prediction. -* "proposals": [Instances](../modules/structures.html#detectron2.structures.Instances) - object with the following fields: - * "proposal_boxes": [Boxes](../modules/structures.html#detectron2.structures.Boxes) - object storing N boxes. - * "objectness_logits": a torch vector of N scores. -* "panoptic_seg": A tuple of `(Tensor, list[dict])`. The tensor has shape (H, W), where each element - represent the segment id of the pixel. Each dict describes one segment id and has the following fields: - * "id": the segment id - * "isthing": whether the segment is a thing or stuff - * "category_id": the category id of this segment. It represents the thing - class id when `isthing==True`, and the stuff class id otherwise. - - -### Partially execute a model: - -Sometimes you may want to obtain an intermediate tensor inside a model. -Since there are typically hundreds of intermediate tensors, there isn't an API that provides you -the intermediate result you need. -You have the following options: - -1. Write a (sub)model. Following the [tutorial](./write-models.md), you can - rewrite a model component (e.g. a head of a model), such that it - does the same thing as the existing component, but returns the output - you need. -2. Partially execute a model. You can create the model as usual, - but use custom code to execute it instead of its `forward()`. For example, - the following code obtains mask features before mask head. - -```python -images = ImageList.from_tensors(...) 
# preprocessed input tensor -model = build_model(cfg) -features = model.backbone(images.tensor) -proposals, _ = model.proposal_generator(images, features) -instances = model.roi_heads._forward_box(features, proposals) -mask_features = [features[f] for f in model.roi_heads.in_features] -mask_features = model.roi_heads.mask_pooler(mask_features, [x.pred_boxes for x in instances]) -``` - -Note that both options require you to read the existing forward code to understand -how to write code to obtain the outputs you need. diff --git a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/projects/DensePose/densepose/vis/bounding_box.py b/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/projects/DensePose/densepose/vis/bounding_box.py deleted file mode 100644 index d7951d69e4a92d638debc79458dd2cfe58c650e3..0000000000000000000000000000000000000000 --- a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/projects/DensePose/densepose/vis/bounding_box.py +++ /dev/null @@ -1,37 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved -from .base import RectangleVisualizer, TextVisualizer - - -class BoundingBoxVisualizer(object): - def __init__(self): - self.rectangle_visualizer = RectangleVisualizer() - - def visualize(self, image_bgr, boxes_xywh): - for bbox_xywh in boxes_xywh: - image_bgr = self.rectangle_visualizer.visualize(image_bgr, bbox_xywh) - return image_bgr - - -class ScoredBoundingBoxVisualizer(object): - def __init__(self, bbox_visualizer_params=None, score_visualizer_params=None): - if bbox_visualizer_params is None: - bbox_visualizer_params = {} - if score_visualizer_params is None: - score_visualizer_params = {} - self.visualizer_bbox = RectangleVisualizer(**bbox_visualizer_params) - self.visualizer_score = TextVisualizer(**score_visualizer_params) - - def visualize(self, image_bgr, scored_bboxes): - boxes_xywh, box_scores = scored_bboxes - assert len(boxes_xywh) == len( - box_scores - ), "Number of bounding boxes {} should be equal to the number of scores {}".format( - len(boxes_xywh), len(box_scores) - ) - for i, box_xywh in enumerate(boxes_xywh): - score_i = box_scores[i] - image_bgr = self.visualizer_bbox.visualize(image_bgr, box_xywh) - score_txt = "{0:6.4f}".format(score_i) - topleft_xy = box_xywh[0], box_xywh[1] - image_bgr = self.visualizer_score.visualize(image_bgr, score_txt, topleft_xy) - return image_bgr diff --git a/spaces/hf-hackathon-2023-01/Spotify/README.md b/spaces/hf-hackathon-2023-01/Spotify/README.md deleted file mode 100644 index 462acea5fa397521430793a32c923f31b91f8f4c..0000000000000000000000000000000000000000 --- a/spaces/hf-hackathon-2023-01/Spotify/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Spotify -emoji: 🎶 -colorFrom: green -colorTo: yellow -sdk: gradio -sdk_version: 3.16.1 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/hsukqilee/NSFW-API/.eslintrc.js b/spaces/hsukqilee/NSFW-API/.eslintrc.js deleted file mode 100644 index 0d034af88807fc12eb62c9c7582e22fd68276966..0000000000000000000000000000000000000000 --- a/spaces/hsukqilee/NSFW-API/.eslintrc.js +++ /dev/null @@ -1,48 +0,0 @@ -module.exports = { - env: {node: true}, - root: true, - parser: '@typescript-eslint/parser', - plugins: [ - '@typescript-eslint', - ], - extends: [ - 
'eslint:recommended', - 'plugin:@typescript-eslint/recommended', - ], - rules: { - 'no-async-promise-executor': 0, - 'no-case-declarations': ['warn'], - 'no-irregular-whitespace': 0, - 'require-await': 1, - 'no-console': 0, - 'indent': 'off', - '@typescript-eslint/indent': [ - 'error', - 2, - { - 'SwitchCase': 1, - 'ignoredNodes': [ - 'FunctionExpression > .params[decorators.length > 0]', - 'FunctionExpression > .params > :matches(Decorator, :not(:first-child))', - 'ClassBody.body > PropertyDefinition[decorators.length > 0] > .key' - ] - } - ], - 'linebreak-style': ['error', 'unix'], - 'quotes': ['error', 'single'], - 'semi': ['error', 'always'], - 'prefer-const': ['warn'], - '@typescript-eslint/no-unused-vars': [ - 'warn', - {'vars': 'all', 'args': 'after-used', 'ignoreRestSiblings': false} - ], - 'switch-colon-spacing': [ - 'error', - {'after': false, 'before': false} - ], - '@typescript-eslint/no-explicit-any': 0, - '@typescript-eslint/explicit-module-boundary-types': 0, - '@typescript-eslint/no-var-requires': 0, - '@typescript-eslint/ban-ts-comment': 0, - }, -}; diff --git a/spaces/huaiji3y/bingo-Public/src/components/ui/select.tsx b/spaces/huaiji3y/bingo-Public/src/components/ui/select.tsx deleted file mode 100644 index 77f12c2996f541b97663de4c9e20ab34d4ec2fac..0000000000000000000000000000000000000000 --- a/spaces/huaiji3y/bingo-Public/src/components/ui/select.tsx +++ /dev/null @@ -1,123 +0,0 @@ -'use client' - -import * as React from 'react' -import * as SelectPrimitive from '@radix-ui/react-select' - -import { cn } from '@/lib/utils' -import { - IconArrowDown, - IconCheck, - IconChevronUpDown -} from '@/components/ui/icons' - -const Select = SelectPrimitive.Root - -const SelectGroup = SelectPrimitive.Group - -const SelectValue = SelectPrimitive.Value - -const SelectTrigger = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, children, ...props }, ref) => ( - - {children} - - - - -)) -SelectTrigger.displayName = SelectPrimitive.Trigger.displayName - -const SelectContent = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, children, position = 'popper', ...props }, ref) => ( - - - - {children} - - - -)) -SelectContent.displayName = SelectPrimitive.Content.displayName - -const SelectLabel = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -SelectLabel.displayName = SelectPrimitive.Label.displayName - -const SelectItem = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, children, ...props }, ref) => ( - - - - - - - {children} - -)) -SelectItem.displayName = SelectPrimitive.Item.displayName - -const SelectSeparator = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -SelectSeparator.displayName = SelectPrimitive.Separator.displayName - -export { - Select, - SelectGroup, - SelectValue, - SelectTrigger, - SelectContent, - SelectLabel, - SelectItem, - SelectSeparator -} diff --git a/spaces/hunz/web2inpaint/app.py b/spaces/hunz/web2inpaint/app.py deleted file mode 100644 index dc0e6d1611792b9b20645a61df56f4789c042ce1..0000000000000000000000000000000000000000 --- a/spaces/hunz/web2inpaint/app.py +++ /dev/null @@ -1,175 +0,0 @@ -import gradio as gr -from PIL import Image -import torch -import cv2 -import mediapipe as mp -from PIL import ImageFont, ImageDraw, Image -import matplotlib.pyplot as plt -import numpy as np -import time - - -def v_capture(cap): 
- cap = cv2.VideoCapture(0) - mp_drawing = mp.solutions.drawing_utils - mp_hands = mp.solutions.hands - mp_drawing_styles = mp.solutions.drawing_styles - - with mp_hands.Hands( - min_detection_confidence=0.5, - min_tracking_confidence=0.5) as hands: - - - while cap.isOpened(): - success, image = cap.read() - - if not success: - print("Ignoring empty camera frame.") - - # If loading a video, use 'break' instead of 'continue'. - continue - - # Flip the image horizontally for a later selfie-view display, and convert - # the BGR image to RGB. - image = cv2.cvtColor(cv2.flip(image, 1), cv2.COLOR_BGR2RGB) - - # To improve performance, optionally mark the image as not writeable to - # pass by reference. - image.flags.writeable = False - results = hands.process(image) - - # Draw the hand annotations on the image. - image.flags.writeable = True - image = cv2.cvtColor(image, cv2.COLOR_RGB2BGR) - image_height, image_width, _ = image.shape - - if results.multi_hand_landmarks: - for hand_landmarks in results.multi_hand_landmarks: - - # 엄지를 제외한 나머지 4개 손가락의 마디 위치 관계를 확인하여 플래그 변수를 설정합니다. 손가락을 일자로 편 상태인지 확인합니다. - thumb_finger_state = 0 - if hand_landmarks.landmark[mp_hands.HandLandmark.THUMB_CMC].y * image_height >= hand_landmarks.landmark[mp_hands.HandLandmark.THUMB_MCP].y * image_height: - if hand_landmarks.landmark[mp_hands.HandLandmark.THUMB_MCP].y * image_height >= hand_landmarks.landmark[mp_hands.HandLandmark.THUMB_IP].y * image_height: - if hand_landmarks.landmark[mp_hands.HandLandmark.THUMB_IP].y * image_height >= hand_landmarks.landmark[mp_hands.HandLandmark.THUMB_TIP].y * image_height: - thumb_finger_state = 1 - - index_finger_state = 0 - if hand_landmarks.landmark[mp_hands.HandLandmark.INDEX_FINGER_MCP].y * image_height >= hand_landmarks.landmark[mp_hands.HandLandmark.INDEX_FINGER_PIP].y * image_height: - if hand_landmarks.landmark[mp_hands.HandLandmark.INDEX_FINGER_PIP].y * image_height >= hand_landmarks.landmark[mp_hands.HandLandmark.INDEX_FINGER_DIP].y * image_height: - if hand_landmarks.landmark[mp_hands.HandLandmark.INDEX_FINGER_DIP].y * image_height >= hand_landmarks.landmark[mp_hands.HandLandmark.INDEX_FINGER_TIP].y * image_height: - index_finger_state = 1 - - middle_finger_state = 0 - if hand_landmarks.landmark[mp_hands.HandLandmark.MIDDLE_FINGER_MCP].y * image_height >= hand_landmarks.landmark[mp_hands.HandLandmark.MIDDLE_FINGER_PIP].y * image_height: - if hand_landmarks.landmark[mp_hands.HandLandmark.MIDDLE_FINGER_PIP].y * image_height >= hand_landmarks.landmark[mp_hands.HandLandmark.MIDDLE_FINGER_DIP].y * image_height: - if hand_landmarks.landmark[mp_hands.HandLandmark.MIDDLE_FINGER_DIP].y * image_height >= hand_landmarks.landmark[mp_hands.HandLandmark.MIDDLE_FINGER_TIP].y * image_height: - middle_finger_state = 1 - - ring_finger_state = 0 - if hand_landmarks.landmark[mp_hands.HandLandmark.RING_FINGER_MCP].y * image_height >= hand_landmarks.landmark[mp_hands.HandLandmark.RING_FINGER_PIP].y * image_height: - if hand_landmarks.landmark[mp_hands.HandLandmark.RING_FINGER_PIP].y * image_height >= hand_landmarks.landmark[mp_hands.HandLandmark.RING_FINGER_DIP].y * image_height: - if hand_landmarks.landmark[mp_hands.HandLandmark.RING_FINGER_DIP].y * image_height >= hand_landmarks.landmark[mp_hands.HandLandmark.RING_FINGER_TIP].y * image_height: - ring_finger_state = 1 - - pinky_finger_state = 0 - if hand_landmarks.landmark[mp_hands.HandLandmark.PINKY_MCP].y * image_height >= hand_landmarks.landmark[mp_hands.HandLandmark.PINKY_PIP].y * image_height: - if 
hand_landmarks.landmark[mp_hands.HandLandmark.PINKY_PIP].y * image_height >= hand_landmarks.landmark[mp_hands.HandLandmark.PINKY_DIP].y * image_height: - if hand_landmarks.landmark[mp_hands.HandLandmark.PINKY_DIP].y * image_height >= hand_landmarks.landmark[mp_hands.HandLandmark.PINKY_TIP].y * image_height: - pinky_finger_state = 1 - - # 손가락 위치 확인한 값을 사용하여 가위,바위,보 중 하나를 출력 해줍니다. - font = ImageFont.truetype("fonts/gulim.ttc", 60) - capture = image - image = Image.fromarray(image) - draw = ImageDraw.Draw(image) - - text = "" - if middle_finger_state == 1 and ring_finger_state == 0 and pinky_finger_state == 0: - text = "fuck you" - - if index_finger_state == 1 and middle_finger_state == 1: - text = "가위" - time.sleep(0.2) - cv2.imwrite('frame.png', capture) - - - if thumb_finger_state == 1 and index_finger_state == 1 and middle_finger_state == 1 and ring_finger_state == 1 and pinky_finger_state == 1: - text = "보" - - if index_finger_state == 0 and middle_finger_state == 0 and ring_finger_state == 0 and pinky_finger_state == 0: - text = "주먹" - - l,t,r,b = font.getbbox(text) - w,h = r-l, b-t - - x = 50 - y = 50 - - draw.rectangle((x, y, x + w, y + h), fill='black') - draw.text((x, y), text, font=font, fill=(255, 255, 255)) - image = np.array(image) - - mp_drawing.draw_landmarks( - image, - hand_landmarks, - mp_hands.HAND_CONNECTIONS, - mp_drawing_styles.get_default_hand_landmarks_style(), - mp_drawing_styles.get_default_hand_connections_style()) - - cv2.imshow('MediaPipe Hands', image) - - if cv2.waitKey(5) & 0xFF == 27: - break - - return capture - -device = 'cuda' if torch.cuda.is_available() else 'cpu' - -token = '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' - -model1 = torch.hub.load('bryandlee/animegan2-pytorch:main','generator',pretrained='face_paint_512_v1',device=device) -model2 = torch.hub.load('bryandlee/animegan2-pytorch:main','generator',pretrained='face_paint_512_v2',device=device) -model3 = torch.hub.load('bryandlee/animegan2-pytorch:main','generator',pretrained='celeba_distill', device=device) -model4 = torch.hub.load('bryandlee/animegan2-pytorch:main','generator',pretrained='paprika',device=device) - -face2paint = torch.hub.load( - 'bryandlee/animegan2-pytorch:main', 'face2paint', - size=512, device=device,side_by_side=False -) -def inference(img, ver): - img = Image.fromarray(img) - if ver == 'version 1': - return face2paint(model1,img) - elif ver == 'version 2': - return face2paint(model2,img) - elif ver == 'version 3': - return face2paint(model3, img) - elif ver == 'version 4': - return face2paint(model4, img) - -with gr.Blocks() as demo: - with gr.Row(): - with gr.Column(): - image = gr.Image(label="Input Image", source="webcam") - print(image) - ver = gr.Radio(['version 1','version 2','version 3','version 4'],label='version') - with gr.Column(): - out = gr.Image(label='Output Image') - - run = gr.Button("Run") - run.click(inference,inputs=[image,ver], outputs=out) - -# with gr.Blocks() as demo: -# with gr.Row(): -# with gr.Column(): -# image = gr.Image(label="Input Image", source="webcam") - -# #ver = gr.Radio(['version 1','version 2','version 3','version 4'],label='version') -# with gr.Column(): -# out = gr.Image(label='Output Image') - -# run = gr.Button("Run") -# run.click(v_capture,inputs=image, outputs=out) - - -demo.launch() \ No newline at end of file diff --git a/spaces/imseldrith/Imagine/templates/404.html b/spaces/imseldrith/Imagine/templates/404.html deleted file mode 100644 index 3fc924538b7274909f52c9098cfb742adbfe48e1..0000000000000000000000000000000000000000 --- 
a/spaces/imseldrith/Imagine/templates/404.html +++ /dev/null @@ -1,10 +0,0 @@ - - - - 404 Page Not Found - - -

    404 Page Not Found

    - Error Image - - \ No newline at end of file diff --git a/spaces/innnky/nyaru4.0/inference/infer_tool_grad.py b/spaces/innnky/nyaru4.0/inference/infer_tool_grad.py deleted file mode 100644 index b75af49c08e2e724839828bc419792ed580809bb..0000000000000000000000000000000000000000 --- a/spaces/innnky/nyaru4.0/inference/infer_tool_grad.py +++ /dev/null @@ -1,160 +0,0 @@ -import hashlib -import json -import logging -import os -import time -from pathlib import Path -import io -import librosa -import maad -import numpy as np -from inference import slicer -import parselmouth -import soundfile -import torch -import torchaudio - -from hubert import hubert_model -import utils -from models import SynthesizerTrn -logging.getLogger('numba').setLevel(logging.WARNING) -logging.getLogger('matplotlib').setLevel(logging.WARNING) - -def resize2d_f0(x, target_len): - source = np.array(x) - source[source < 0.001] = np.nan - target = np.interp(np.arange(0, len(source) * target_len, len(source)) / target_len, np.arange(0, len(source)), - source) - res = np.nan_to_num(target) - return res - -def get_f0(x, p_len,f0_up_key=0): - - time_step = 160 / 16000 * 1000 - f0_min = 50 - f0_max = 1100 - f0_mel_min = 1127 * np.log(1 + f0_min / 700) - f0_mel_max = 1127 * np.log(1 + f0_max / 700) - - f0 = parselmouth.Sound(x, 16000).to_pitch_ac( - time_step=time_step / 1000, voicing_threshold=0.6, - pitch_floor=f0_min, pitch_ceiling=f0_max).selected_array['frequency'] - - pad_size=(p_len - len(f0) + 1) // 2 - if(pad_size>0 or p_len - len(f0) - pad_size>0): - f0 = np.pad(f0,[[pad_size,p_len - len(f0) - pad_size]], mode='constant') - - f0 *= pow(2, f0_up_key / 12) - f0_mel = 1127 * np.log(1 + f0 / 700) - f0_mel[f0_mel > 0] = (f0_mel[f0_mel > 0] - f0_mel_min) * 254 / (f0_mel_max - f0_mel_min) + 1 - f0_mel[f0_mel <= 1] = 1 - f0_mel[f0_mel > 255] = 255 - f0_coarse = np.rint(f0_mel).astype(np.int) - return f0_coarse, f0 - -def clean_pitch(input_pitch): - num_nan = np.sum(input_pitch == 1) - if num_nan / len(input_pitch) > 0.9: - input_pitch[input_pitch != 1] = 1 - return input_pitch - - -def plt_pitch(input_pitch): - input_pitch = input_pitch.astype(float) - input_pitch[input_pitch == 1] = np.nan - return input_pitch - - -def f0_to_pitch(ff): - f0_pitch = 69 + 12 * np.log2(ff / 440) - return f0_pitch - - -def fill_a_to_b(a, b): - if len(a) < len(b): - for _ in range(0, len(b) - len(a)): - a.append(a[0]) - - -def mkdir(paths: list): - for path in paths: - if not os.path.exists(path): - os.mkdir(path) - - -class VitsSvc(object): - def __init__(self): - self.device = torch.device("cuda" if torch.cuda.is_available() else "cpu") - self.SVCVITS = None - self.hps = None - self.speakers = None - self.hubert_soft = utils.get_hubert_model() - - def set_device(self, device): - self.device = torch.device(device) - self.hubert_soft.to(self.device) - if self.SVCVITS != None: - self.SVCVITS.to(self.device) - - def loadCheckpoint(self, path): - self.hps = utils.get_hparams_from_file(f"checkpoints/{path}/config.json") - self.SVCVITS = SynthesizerTrn( - self.hps.data.filter_length // 2 + 1, - self.hps.train.segment_size // self.hps.data.hop_length, - **self.hps.model) - _ = utils.load_checkpoint(f"checkpoints/{path}/model.pth", self.SVCVITS, None) - _ = self.SVCVITS.eval().to(self.device) - self.speakers = self.hps.spk - - def get_units(self, source, sr): - source = source.unsqueeze(0).to(self.device) - with torch.inference_mode(): - units = self.hubert_soft.units(source) - return units - - - def get_unit_pitch(self, in_path, tran): - source, sr = 
torchaudio.load(in_path) - source = torchaudio.functional.resample(source, sr, 16000) - if len(source.shape) == 2 and source.shape[1] >= 2: - source = torch.mean(source, dim=0).unsqueeze(0) - soft = self.get_units(source, sr).squeeze(0).cpu().numpy() - f0_coarse, f0 = get_f0(source.cpu().numpy()[0], soft.shape[0]*2, tran) - return soft, f0 - - def infer(self, speaker_id, tran, raw_path): - speaker_id = self.speakers[speaker_id] - sid = torch.LongTensor([int(speaker_id)]).to(self.device).unsqueeze(0) - soft, pitch = self.get_unit_pitch(raw_path, tran) - f0 = torch.FloatTensor(clean_pitch(pitch)).unsqueeze(0).to(self.device) - stn_tst = torch.FloatTensor(soft) - with torch.no_grad(): - x_tst = stn_tst.unsqueeze(0).to(self.device) - x_tst = torch.repeat_interleave(x_tst, repeats=2, dim=1).transpose(1, 2) - audio = self.SVCVITS.infer(x_tst, f0=f0, g=sid)[0,0].data.float() - return audio, audio.shape[-1] - - def inference(self,srcaudio,chara,tran,slice_db): - sampling_rate, audio = srcaudio - audio = (audio / np.iinfo(audio.dtype).max).astype(np.float32) - if len(audio.shape) > 1: - audio = librosa.to_mono(audio.transpose(1, 0)) - if sampling_rate != 16000: - audio = librosa.resample(audio, orig_sr=sampling_rate, target_sr=16000) - soundfile.write("tmpwav.wav", audio, 16000, format="wav") - chunks = slicer.cut("tmpwav.wav", db_thresh=slice_db) - audio_data, audio_sr = slicer.chunks2audio("tmpwav.wav", chunks) - audio = [] - for (slice_tag, data) in audio_data: - length = int(np.ceil(len(data) / audio_sr * self.hps.data.sampling_rate)) - raw_path = io.BytesIO() - soundfile.write(raw_path, data, audio_sr, format="wav") - raw_path.seek(0) - if slice_tag: - _audio = np.zeros(length) - else: - out_audio, out_sr = self.infer(chara, tran, raw_path) - _audio = out_audio.cpu().numpy() - audio.extend(list(_audio)) - audio = (np.array(audio) * 32768.0).astype('int16') - return (self.hps.data.sampling_rate,audio) diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Free _HOT_ Download Ebook Novel Horor Indonesia.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Free _HOT_ Download Ebook Novel Horor Indonesia.md deleted file mode 100644 index 554cfbe4f05a447470647ea5a5cf5ad1987c39c1..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/Free _HOT_ Download Ebook Novel Horor Indonesia.md +++ /dev/null @@ -1,24 +0,0 @@ -

    free download ebook novel horor indonesia





    -
    -The Captain’s Bedtime Stories are tales of the Victorian era, featuring two heroes who grew up together and became best friends. When we grow up, we give up the special time with our nieces and nephews. - -For Captain Toto, a boy raised in the stories of the childhood of The Young Amelia Earhart, the summer of 1947 proved to be a turning point in his life. Amelia, a brilliant and courageous woman, was. Landline. "A dead person has died and is dead. - -It’s the end of the day, and the lights have come on in the dark, deserted house, which is lit with creepy and often scary stories. Put your headphones on and read along with your favorite scary stories from Goosebumps. - -As a boy, Abe was raised on fairy tales and stories from the Grimm Brothers. Abe dreamt of being a writer one day. This all changed, however, when he was served with divorce papers by his wife, who told him she was leaving because she couldn’t have children. - -Captain Toto, The Amazing Story of the Boy Who Dreamed of Living on the Moon. One would think that children, particularly boys, would be totally impressed by space. I believe, however, that if you were to ask any one, male or female, of the last 20 years to name a space exploration. - -Now you can hear Peter ask, “When I was a boy, I had a dog. Her name was Toto. Toto was always there for me. She was my best friend.”. - -How much can a dog earn? How much will he work for his bread and butter? After one particularly hard day, a boy and his dog returned to the boy’s house. His dog was tired and needed to be fed. But he had an idea. What if he could earn. - -Captain Toto is a book for both children and adults. Read by Luke and his two young friends, Jack and George, it tells the tale of a boy and his dog. - -Fairy tales were often retold in songs of different kinds. Eventually, folk songs often featured similar tales. While these were easy to recognize, it is not always easy to hear the connection between the different lyrics and the source of these tales. In this book, I show you the. - -As a boy, Abe was raised on fairy tales and stories from the Grimm Brothers. Abe dream 4fefd39f24
    -
    -
    -

    diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Indice Himnario Adventista Nuevo Pdf Download __LINK__.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Indice Himnario Adventista Nuevo Pdf Download __LINK__.md deleted file mode 100644 index c07e8aeb3fc70c97e64a8b28bad22de490aa4dfb..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/Indice Himnario Adventista Nuevo Pdf Download __LINK__.md +++ /dev/null @@ -1,20 +0,0 @@ -
    -

    How to Download the New Adventist Hymnal in PDF Format

    -

    If you are looking for a way to download the new Adventist hymnal in PDF format, you are in luck. The new Adventist hymnal, which was released in 2009, contains 613 hymns in Spanish, many of which are also available in English. The hymnal also includes music scores and chords for each hymn, making it a valuable resource for musicians and singers.

    -

    To download the new Adventist hymnal in PDF format, you can follow these simple steps:

    -




    -
    1. Go to this link, which will take you to a Scribd document that contains the index of the hymnal.
    2. Click on the "Download" button at the top right corner of the document. You may need to create a free account or sign in with your Facebook or Google account to access the download option.
    3. Choose the format you want to download the document in. You can select PDF or TXT. The PDF format will preserve the original layout and quality of the document, while the TXT format will only contain the text without any formatting.
    4. Save the document to your device or cloud storage. You can also print it if you prefer a hard copy.

    Alternatively, you can download the new Adventist hymnal in PDF format from this website, which offers a direct link to the PDF file without requiring any account or subscription. However, this website may not be updated or reliable, so use it at your own risk.

    -

    We hope this article has helped you find and download the new Adventist hymnal in PDF format. Enjoy singing and playing along with these beautiful hymns that praise God and express our faith.

    - -

    The new Adventist hymnal is the result of a long and careful process of revision and selection of the best hymns from the previous editions. The hymnal committee, composed of pastors, musicians, theologians, and lay members, worked for several years to produce a hymnal that reflects the diversity and richness of the Adventist heritage and mission.

    -

    The new Adventist hymnal contains hymns from various sources, such as the Bible, the writings of Ellen G. White, the Reformation, the Wesleyan tradition, the Hispanic culture, and contemporary composers. The hymnal also includes hymns from different regions and languages of the world, such as Africa, Asia, Europe, and Latin America. The hymnal aims to foster unity and harmony among Adventists from different backgrounds and cultures.

    -

    The new Adventist hymnal is not only a collection of songs, but also a tool for worship and education. The hymnal contains various sections that correspond to different aspects of the Christian life and faith, such as praise, adoration, confession, assurance, dedication, service, prayer, thanksgiving, hope, and joy. The hymnal also includes responsive readings, scripture passages, creeds, confessions, and vows that can be used in various occasions and ceremonies.

    -

    -
    -
    \ No newline at end of file diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Jurm Hindi Movie 1080p Download.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Jurm Hindi Movie 1080p Download.md deleted file mode 100644 index 3f93c7a2616ea14369dd6c3c6310c1478d3a7142..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/Jurm Hindi Movie 1080p Download.md +++ /dev/null @@ -1,16 +0,0 @@ - -

    Jurm Hindi Movie 1080p Download: A Mystery-Thriller Film You Don't Want to Miss

    -

    If you are looking for a suspenseful and gripping movie to watch, you should check out Jurm, a Hindi mystery-thriller film written and directed by Vikram Bhatt. The movie was released in 2005 and stars Bobby Deol, Lara Dutta, Milind Soman, Gul Panag, Shakti Kapoor and Vivek Shauq. The plot of the movie is similar to the 1999 Hollywood movie Double Jeopardy, where a woman is framed for her husband's murder and has to prove her innocence.

    -

    In Jurm, Bobby Deol plays Avinash Malhotra, a successful businessman who is married to Sanjana (Lara Dutta), a former model. Their marriage is not happy, as Sanjana feels neglected by Avinash and suspects him of having an affair. One night, Avinash takes Sanjana to a yacht for a romantic getaway, but things go horribly wrong when he is shot by an unknown assailant and falls into the sea. Sanjana is arrested for his murder, but she manages to escape from custody with the help of Rohit (Milind Soman), a friend of Avinash.

    -

    Jurm Hindi Movie 1080p Download


    Download File »»» https://urlin.us/2uEwr7



    -

    Sanjana and Rohit then embark on a quest to find out who killed Avinash and why. They discover that Avinash had many enemies, including his business rivals, his ex-girlfriend Sonia (Gul Panag), and his corrupt lawyer Tarun (Vivek Shauq). They also learn that Avinash had a dark past that he had hidden from Sanjana. As they get closer to the truth, they realize that they are in grave danger and that nothing is what it seems.

    -

    Jurm is a movie that will keep you on the edge of your seat with its twists and turns. The movie has a lot of action, drama, romance and suspense. The performances of the actors are also commendable, especially Bobby Deol, who portrays the complex character of Avinash with conviction. The movie also has some catchy songs composed by Anu Malik, such as "O Sanam O Sanam" and "Meri Chahaton Ka Samunder".

    -

    If you want to watch Jurm in full HD quality, you can stream or download it online from various platforms. However, be careful of illegal and pirated websites that may harm your device or compromise your data. Always use legal and safe websites that offer high-quality video and audio. One such website is ZEE5, where you can watch the full Jurm movie online in HD. You can also enjoy other Hindi movies and shows on ZEE5 with a subscription.

    -

    So what are you waiting for? Download Jurm Hindi movie 1080p today and enjoy this thrilling film with your friends and family.

    - -

    Jurm is not just a movie, but a lesson in life. The movie shows how greed, lust and betrayal can ruin a person's life and how one should always be honest and loyal to their loved ones. The movie also teaches us to never lose hope and to fight for justice, no matter how difficult the situation may be. The movie has a powerful message that will resonate with the audience and make them think.

    -

    The movie also has some memorable scenes that will stay with you for a long time. One such scene is when Sanjana confronts Sonia in her apartment and slaps her for lying about Avinash. Another scene is when Avinash reveals his true identity to Sanjana and confesses his love for her. The climax of the movie is also very thrilling and shocking, as the real culprit behind Avinash's murder is exposed.

    -

    -

    Jurm is a movie that you should not miss if you are a fan of the mystery-thriller genre. It has everything you would expect from a Vikram Bhatt film: a gripping story, a talented star cast, a catchy soundtrack and stunning cinematography. The movie will keep you hooked till the end and leave you satisfied.

    -
    -
    \ No newline at end of file diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Movavi Video Editor Plus 14.5.0 Crack [CracksMind] Full [EXCLUSIVE] Version.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Movavi Video Editor Plus 14.5.0 Crack [CracksMind] Full [EXCLUSIVE] Version.md deleted file mode 100644 index 8ef54ee6ef4394560c849777f29ab16ec30b10af..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/Movavi Video Editor Plus 14.5.0 Crack [CracksMind] Full [EXCLUSIVE] Version.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Movavi Video Editor Plus 14.5.0 Crack [CracksMind] full version


    Download File ✏ ✏ ✏ https://urlin.us/2uExS9



    -
    -+ Crack .rar By ... HACK Movavi Video Editor Plus 17.6.0 + Crack by adenocul - issuu.. Universal. Adobe Patcher 4.6 with Update Management Tool full version ... 4d29de3e1b
    -
    -
    -

    diff --git a/spaces/inreVtussa/clothingai/Examples/Able2Extract Professional 14.0.12.0 With !!TOP!! Crack.md b/spaces/inreVtussa/clothingai/Examples/Able2Extract Professional 14.0.12.0 With !!TOP!! Crack.md deleted file mode 100644 index 5902dd3b316232612014a6764ac3167ce1e85fc7..0000000000000000000000000000000000000000 --- a/spaces/inreVtussa/clothingai/Examples/Able2Extract Professional 14.0.12.0 With !!TOP!! Crack.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Able2Extract Professional 14.0.12.0 with Crack


    Download File 🌟 https://tiurll.com/2uClqs



    - -
    -
    -
    -

    diff --git a/spaces/ivntl/MMS/vits/monotonic_align/__init__.py b/spaces/ivntl/MMS/vits/monotonic_align/__init__.py deleted file mode 100644 index 3d7009c40fea3a98168e3e3bc9ae061e91327422..0000000000000000000000000000000000000000 --- a/spaces/ivntl/MMS/vits/monotonic_align/__init__.py +++ /dev/null @@ -1,19 +0,0 @@ -import numpy as np -import torch -from .monotonic_align.core import maximum_path_c - - -def maximum_path(neg_cent, mask): - """ Cython optimized version. - neg_cent: [b, t_t, t_s] - mask: [b, t_t, t_s] - """ - device = neg_cent.device - dtype = neg_cent.dtype - neg_cent = neg_cent.data.cpu().numpy().astype(np.float32) - path = np.zeros(neg_cent.shape, dtype=np.int32) - - t_t_max = mask.sum(1)[:, 0].data.cpu().numpy().astype(np.int32) - t_s_max = mask.sum(2)[:, 0].data.cpu().numpy().astype(np.int32) - maximum_path_c(path, neg_cent, t_t_max, t_s_max) - return torch.from_numpy(path).to(device=device, dtype=dtype) diff --git a/spaces/jackli888/stable-diffusion-webui/modules/deepbooru.py b/spaces/jackli888/stable-diffusion-webui/modules/deepbooru.py deleted file mode 100644 index 122fce7f569dbd28f9c6d83af874bb3efed34a5e..0000000000000000000000000000000000000000 --- a/spaces/jackli888/stable-diffusion-webui/modules/deepbooru.py +++ /dev/null @@ -1,99 +0,0 @@ -import os -import re - -import torch -from PIL import Image -import numpy as np - -from modules import modelloader, paths, deepbooru_model, devices, images, shared - -re_special = re.compile(r'([\\()])') - - -class DeepDanbooru: - def __init__(self): - self.model = None - - def load(self): - if self.model is not None: - return - - files = modelloader.load_models( - model_path=os.path.join(paths.models_path, "torch_deepdanbooru"), - model_url='https://github.com/AUTOMATIC1111/TorchDeepDanbooru/releases/download/v1/model-resnet_custom_v3.pt', - ext_filter=[".pt"], - download_name='model-resnet_custom_v3.pt', - ) - - self.model = deepbooru_model.DeepDanbooruModel() - self.model.load_state_dict(torch.load(files[0], map_location="cpu")) - - self.model.eval() - self.model.to(devices.cpu, devices.dtype) - - def start(self): - self.load() - self.model.to(devices.device) - - def stop(self): - if not shared.opts.interrogate_keep_models_in_memory: - self.model.to(devices.cpu) - devices.torch_gc() - - def tag(self, pil_image): - self.start() - res = self.tag_multi(pil_image) - self.stop() - - return res - - def tag_multi(self, pil_image, force_disable_ranks=False): - threshold = shared.opts.interrogate_deepbooru_score_threshold - use_spaces = shared.opts.deepbooru_use_spaces - use_escape = shared.opts.deepbooru_escape - alpha_sort = shared.opts.deepbooru_sort_alpha - include_ranks = shared.opts.interrogate_return_ranks and not force_disable_ranks - - pic = images.resize_image(2, pil_image.convert("RGB"), 512, 512) - a = np.expand_dims(np.array(pic, dtype=np.float32), 0) / 255 - - with torch.no_grad(), devices.autocast(): - x = torch.from_numpy(a).to(devices.device) - y = self.model(x)[0].detach().cpu().numpy() - - probability_dict = {} - - for tag, probability in zip(self.model.tags, y): - if probability < threshold: - continue - - if tag.startswith("rating:"): - continue - - probability_dict[tag] = probability - - if alpha_sort: - tags = sorted(probability_dict) - else: - tags = [tag for tag, _ in sorted(probability_dict.items(), key=lambda x: -x[1])] - - res = [] - - filtertags = set([x.strip().replace(' ', '_') for x in shared.opts.deepbooru_filter_tags.split(",")]) - - for tag in [x for x in tags if x not in filtertags]: - 
probability = probability_dict[tag] - tag_outformat = tag - if use_spaces: - tag_outformat = tag_outformat.replace('_', ' ') - if use_escape: - tag_outformat = re.sub(re_special, r'\\\1', tag_outformat) - if include_ranks: - tag_outformat = f"({tag_outformat}:{probability:.3f})" - - res.append(tag_outformat) - - return ", ".join(res) - - -model = DeepDanbooru() diff --git a/spaces/jackyccl/segment-anything/segment_anything/predictor.py b/spaces/jackyccl/segment-anything/segment_anything/predictor.py deleted file mode 100644 index 8a6e6d816955b4c6097e1de6ce6e4ed3bafe327c..0000000000000000000000000000000000000000 --- a/spaces/jackyccl/segment-anything/segment_anything/predictor.py +++ /dev/null @@ -1,269 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. - -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import numpy as np -import torch - -from segment_anything.modeling import Sam - -from typing import Optional, Tuple - -from .utils.transforms import ResizeLongestSide - - -class SamPredictor: - def __init__( - self, - sam_model: Sam, - ) -> None: - """ - Uses SAM to calculate the image embedding for an image, and then - allow repeated, efficient mask prediction given prompts. - - Arguments: - sam_model (Sam): The model to use for mask prediction. - """ - super().__init__() - self.model = sam_model - self.transform = ResizeLongestSide(sam_model.image_encoder.img_size) - self.reset_image() - - def set_image( - self, - image: np.ndarray, - image_format: str = "RGB", - ) -> None: - """ - Calculates the image embeddings for the provided image, allowing - masks to be predicted with the 'predict' method. - - Arguments: - image (np.ndarray): The image for calculating masks. Expects an - image in HWC uint8 format, with pixel values in [0, 255]. - image_format (str): The color format of the image, in ['RGB', 'BGR']. - """ - assert image_format in [ - "RGB", - "BGR", - ], f"image_format must be in ['RGB', 'BGR'], is {image_format}." - if image_format != self.model.image_format: - image = image[..., ::-1] - - # Transform the image to the form expected by the model - input_image = self.transform.apply_image(image) - input_image_torch = torch.as_tensor(input_image, device=self.device) - input_image_torch = input_image_torch.permute(2, 0, 1).contiguous()[None, :, :, :] - - self.set_torch_image(input_image_torch, image.shape[:2]) - - @torch.no_grad() - def set_torch_image( - self, - transformed_image: torch.Tensor, - original_image_size: Tuple[int, ...], - ) -> None: - """ - Calculates the image embeddings for the provided image, allowing - masks to be predicted with the 'predict' method. Expects the input - image to be already transformed to the format expected by the model. - - Arguments: - transformed_image (torch.Tensor): The input image, with shape - 1x3xHxW, which has been transformed with ResizeLongestSide. - original_image_size (tuple(int, int)): The size of the image - before transformation, in (H, W) format. - """ - assert ( - len(transformed_image.shape) == 4 - and transformed_image.shape[1] == 3 - and max(*transformed_image.shape[2:]) == self.model.image_encoder.img_size - ), f"set_torch_image input must be BCHW with long side {self.model.image_encoder.img_size}." 
- self.reset_image() - - self.original_size = original_image_size - self.input_size = tuple(transformed_image.shape[-2:]) - input_image = self.model.preprocess(transformed_image) - self.features = self.model.image_encoder(input_image) - self.is_image_set = True - - def predict( - self, - point_coords: Optional[np.ndarray] = None, - point_labels: Optional[np.ndarray] = None, - box: Optional[np.ndarray] = None, - mask_input: Optional[np.ndarray] = None, - multimask_output: bool = True, - return_logits: bool = False, - ) -> Tuple[np.ndarray, np.ndarray, np.ndarray]: - """ - Predict masks for the given input prompts, using the currently set image. - - Arguments: - point_coords (np.ndarray or None): A Nx2 array of point prompts to the - model. Each point is in (X,Y) in pixels. - point_labels (np.ndarray or None): A length N array of labels for the - point prompts. 1 indicates a foreground point and 0 indicates a - background point. - box (np.ndarray or None): A length 4 array given a box prompt to the - model, in XYXY format. - mask_input (np.ndarray): A low resolution mask input to the model, typically - coming from a previous prediction iteration. Has form 1xHxW, where - for SAM, H=W=256. - multimask_output (bool): If true, the model will return three masks. - For ambiguous input prompts (such as a single click), this will often - produce better masks than a single prediction. If only a single - mask is needed, the model's predicted quality score can be used - to select the best mask. For non-ambiguous prompts, such as multiple - input prompts, multimask_output=False can give better results. - return_logits (bool): If true, returns un-thresholded masks logits - instead of a binary mask. - - Returns: - (np.ndarray): The output masks in CxHxW format, where C is the - number of masks, and (H, W) is the original image size. - (np.ndarray): An array of length C containing the model's - predictions for the quality of each mask. - (np.ndarray): An array of shape CxHxW, where C is the number - of masks and H=W=256. These low resolution logits can be passed to - a subsequent iteration as mask input. - """ - if not self.is_image_set: - raise RuntimeError("An image must be set with .set_image(...) before mask prediction.") - - # Transform input prompts - coords_torch, labels_torch, box_torch, mask_input_torch = None, None, None, None - if point_coords is not None: - assert ( - point_labels is not None - ), "point_labels must be supplied if point_coords is supplied." 
- point_coords = self.transform.apply_coords(point_coords, self.original_size) - coords_torch = torch.as_tensor(point_coords, dtype=torch.float, device=self.device) - labels_torch = torch.as_tensor(point_labels, dtype=torch.int, device=self.device) - coords_torch, labels_torch = coords_torch[None, :, :], labels_torch[None, :] - if box is not None: - box = self.transform.apply_boxes(box, self.original_size) - box_torch = torch.as_tensor(box, dtype=torch.float, device=self.device) - box_torch = box_torch[None, :] - if mask_input is not None: - mask_input_torch = torch.as_tensor(mask_input, dtype=torch.float, device=self.device) - mask_input_torch = mask_input_torch[None, :, :, :] - - masks, iou_predictions, low_res_masks = self.predict_torch( - coords_torch, - labels_torch, - box_torch, - mask_input_torch, - multimask_output, - return_logits=return_logits, - ) - - masks_np = masks[0].detach().cpu().numpy() - iou_predictions_np = iou_predictions[0].detach().cpu().numpy() - low_res_masks_np = low_res_masks[0].detach().cpu().numpy() - return masks_np, iou_predictions_np, low_res_masks_np - - @torch.no_grad() - def predict_torch( - self, - point_coords: Optional[torch.Tensor], - point_labels: Optional[torch.Tensor], - boxes: Optional[torch.Tensor] = None, - mask_input: Optional[torch.Tensor] = None, - multimask_output: bool = True, - return_logits: bool = False, - ) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor]: - """ - Predict masks for the given input prompts, using the currently set image. - Input prompts are batched torch tensors and are expected to already be - transformed to the input frame using ResizeLongestSide. - - Arguments: - point_coords (torch.Tensor or None): A BxNx2 array of point prompts to the - model. Each point is in (X,Y) in pixels. - point_labels (torch.Tensor or None): A BxN array of labels for the - point prompts. 1 indicates a foreground point and 0 indicates a - background point. - boxes (np.ndarray or None): A Bx4 array given a box prompt to the - model, in XYXY format. - mask_input (np.ndarray): A low resolution mask input to the model, typically - coming from a previous prediction iteration. Has form Bx1xHxW, where - for SAM, H=W=256. Masks returned by a previous iteration of the - predict method do not need further transformation. - multimask_output (bool): If true, the model will return three masks. - For ambiguous input prompts (such as a single click), this will often - produce better masks than a single prediction. If only a single - mask is needed, the model's predicted quality score can be used - to select the best mask. For non-ambiguous prompts, such as multiple - input prompts, multimask_output=False can give better results. - return_logits (bool): If true, returns un-thresholded masks logits - instead of a binary mask. - - Returns: - (torch.Tensor): The output masks in BxCxHxW format, where C is the - number of masks, and (H, W) is the original image size. - (torch.Tensor): An array of shape BxC containing the model's - predictions for the quality of each mask. - (torch.Tensor): An array of shape BxCxHxW, where C is the number - of masks and H=W=256. These low res logits can be passed to - a subsequent iteration as mask input. - """ - if not self.is_image_set: - raise RuntimeError("An image must be set with .set_image(...) 
before mask prediction.") - - if point_coords is not None: - points = (point_coords, point_labels) - else: - points = None - - # Embed prompts - sparse_embeddings, dense_embeddings = self.model.prompt_encoder( - points=points, - boxes=boxes, - masks=mask_input, - ) - - # Predict masks - low_res_masks, iou_predictions = self.model.mask_decoder( - image_embeddings=self.features, - image_pe=self.model.prompt_encoder.get_dense_pe(), - sparse_prompt_embeddings=sparse_embeddings, - dense_prompt_embeddings=dense_embeddings, - multimask_output=multimask_output, - ) - - # Upscale the masks to the original image resolution - masks = self.model.postprocess_masks(low_res_masks, self.input_size, self.original_size) - - if not return_logits: - masks = masks > self.model.mask_threshold - - return masks, iou_predictions, low_res_masks - - def get_image_embedding(self) -> torch.Tensor: - """ - Returns the image embeddings for the currently set image, with - shape 1xCxHxW, where C is the embedding dimension and (H,W) are - the embedding spatial dimension of SAM (typically C=256, H=W=64). - """ - if not self.is_image_set: - raise RuntimeError( - "An image must be set with .set_image(...) to generate an embedding." - ) - assert self.features is not None, "Features must exist if an image has been set." - return self.features - - @property - def device(self) -> torch.device: - return self.model.device - - def reset_image(self) -> None: - """Resets the currently set image.""" - self.is_image_set = False - self.features = None - self.orig_h = None - self.orig_w = None - self.input_h = None - self.input_w = None diff --git a/spaces/jiejiejie0420/bingo/src/components/turn-counter.tsx b/spaces/jiejiejie0420/bingo/src/components/turn-counter.tsx deleted file mode 100644 index 08a9e488f044802a8600f4d195b106567c35aab4..0000000000000000000000000000000000000000 --- a/spaces/jiejiejie0420/bingo/src/components/turn-counter.tsx +++ /dev/null @@ -1,23 +0,0 @@ -import React from 'react' -import { Throttling } from '@/lib/bots/bing/types' - -export interface TurnCounterProps { - throttling?: Throttling -} - -export function TurnCounter({ throttling }: TurnCounterProps) { - if (!throttling) { - return null - } - - return ( -
    -
    - {throttling.numUserMessagesInConversation} - - {throttling.maxNumUserMessagesInConversation} -
    -
    -
    - ) -} diff --git a/spaces/jmcob/Transformers-StoryWriting/app.py b/spaces/jmcob/Transformers-StoryWriting/app.py deleted file mode 100644 index ddc65f3de41702c8da214f25de21d9b193c5a5f3..0000000000000000000000000000000000000000 --- a/spaces/jmcob/Transformers-StoryWriting/app.py +++ /dev/null @@ -1,34 +0,0 @@ -import gradio as gr -import transformers as tr -import numpy as np - -generator1 = gr.Interface.load("huggingface/gpt2-large") -generator2 = gr.Interface.load("huggingface/EleutherAI/gpt-neo-2.7B") -generator3 = gr.Interface.load("huggingface/EleutherAI/gpt-j-6B") - - -demo = gr.Blocks() - -def f1(x): - return generator1(x) -def f2(x): - return generator2(x) -def f3(x): - return generator3(x) - - -with demo: - textIn = gr.Textbox() - textOut1 = gr.Textbox() - textOut2 = gr.Textbox() - textOut3 = gr.Textbox() - - b1 = gr.Button("gpt2-large") - b2 = gr.Button("gpt-neo-2.7B") - b3 = gr.Button("gpt-j-6B") - - b1.click(f1, inputs=textIn, outputs=textOut1 ) - b2.click(f2, inputs=textIn, outputs=textOut2 ) - b3.click(f3, inputs=textIn, outputs=textOut3 ) - -demo.launch() \ No newline at end of file diff --git a/spaces/jordonpeter01/MusicGen/tests/modules/test_codebooks_patterns.py b/spaces/jordonpeter01/MusicGen/tests/modules/test_codebooks_patterns.py deleted file mode 100644 index b658f4779a369f9ec8dde692a61b7f0fe3485724..0000000000000000000000000000000000000000 --- a/spaces/jordonpeter01/MusicGen/tests/modules/test_codebooks_patterns.py +++ /dev/null @@ -1,246 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import pytest -import torch - -from audiocraft.modules.codebooks_patterns import ( - DelayedPatternProvider, - ParallelPatternProvider, - Pattern, - UnrolledPatternProvider, -) - - -class TestParallelPatternProvider: - - @pytest.mark.parametrize("n_q", [1, 4, 32]) - @pytest.mark.parametrize("timesteps", [0, 1, 16, 100]) - def test_get_pattern(self, n_q: int, timesteps: int): - provider = ParallelPatternProvider(n_q) - pattern = provider.get_pattern(timesteps) - # + 1 to account for 1st step - assert len(pattern.layout) == timesteps + 1 - - @pytest.mark.parametrize("n_q", [1, 4, 32]) - @pytest.mark.parametrize("timesteps", [8, 16, 100]) - def test_pattern_content(self, n_q: int, timesteps: int): - provider = ParallelPatternProvider(n_q) - pattern = provider.get_pattern(timesteps) - for s, v in enumerate(pattern.layout): - for i, code in enumerate(v): - assert i == code.q - assert code.t == s - 1 # account for the 1st empty step - - @pytest.mark.parametrize("n_q", [1, 4, 32]) - @pytest.mark.parametrize("timesteps", [8, 16, 100]) - def test_pattern_max_delay(self, n_q: int, timesteps: int): - provider = ParallelPatternProvider(n_q) - pattern = provider.get_pattern(timesteps) - assert pattern.max_delay == 0 - assert len(pattern.valid_layout) == len(pattern.layout) - pattern.max_delay - - -class TestDelayedPatternProvider: - - @pytest.mark.parametrize("n_q", [1, 4, 32]) - @pytest.mark.parametrize("timesteps", [0, 1, 16, 100]) - def test_get_pattern(self, n_q: int, timesteps: int): - delays = [ - list(range(n_q)), - [0] + [1] * (n_q - 1), - [0] + [4] * (n_q - 1), - ] - for delay in delays: - provider = DelayedPatternProvider(n_q, delay) - pattern = provider.get_pattern(timesteps) - # + 1 to account for 1st step - assert len(pattern.layout) == timesteps + max(delay) + 1 - - @pytest.mark.parametrize("n_q", [1, 4, 32]) - 
@pytest.mark.parametrize("timesteps", [8, 16, 100]) - def test_pattern_content(self, n_q: int, timesteps: int): - provider = DelayedPatternProvider(n_q) - pattern = provider.get_pattern(timesteps) - for s, v in enumerate(pattern.layout): - for i, code in enumerate(v): - assert i == code.q - assert code.t == max(0, s - code.q - 1) - - @pytest.mark.parametrize("timesteps", [8, 16, 100]) - @pytest.mark.parametrize("delay", [[0, 1, 2, 3], [0, 1, 1, 1], [0, 3, 3, 3], [0, 3]]) - def test_pattern_max_delay(self, timesteps: int, delay: list): - provider = DelayedPatternProvider(len(delay), delay) - pattern = provider.get_pattern(timesteps) - assert pattern.max_delay == max(delay) - assert len(pattern.valid_layout) == len(pattern.layout) - pattern.max_delay - - -class TestUnrolledPatternProvider: - - @pytest.mark.parametrize("timesteps", [0, 1, 16]) - @pytest.mark.parametrize("flattening", [[0, 1, 2], [0, 1, 1]]) - @pytest.mark.parametrize("delays", [[0, 0, 0], [0, 5, 5]]) - def test_get_pattern(self, timesteps: int, flattening: list, delays: list): - n_q = len(flattening) - max_delay = max(delays) - provider = UnrolledPatternProvider(n_q, flattening, delays) - pattern = provider.get_pattern(timesteps) - assert len(pattern.layout) == provider.num_virtual_steps(timesteps) + max_delay - - @pytest.mark.parametrize("timesteps", [0, 1, 16]) - @pytest.mark.parametrize("flattening", [[0, 1, 2], [0, 1, 1]]) - @pytest.mark.parametrize("delays", [[0, 0, 0], [0, 5, 5]]) - def test_pattern_max_delay(self, timesteps: int, flattening: list, delays: list): - n_q = len(flattening) - max_delay = max(delays) - provider = UnrolledPatternProvider(n_q, flattening, delays) - pattern = provider.get_pattern(timesteps) - assert pattern.max_delay == max_delay - - -class TestPattern: - - def ref_build_pattern_sequence(self, z: torch.Tensor, pattern: Pattern, special_token: int): - """Reference method to build the sequence from the pattern without using fancy scatter.""" - bs, n_q, T = z.shape - z = z.cpu().numpy() - assert n_q == pattern.n_q - assert T <= pattern.timesteps - inp = torch.full((bs, n_q, len(pattern.layout)), special_token, dtype=torch.long).numpy() - inp[:] = special_token - for s, v in enumerate(pattern.layout): - for (t, q) in v: - if t < T: - inp[:, q, s] = z[:, q, t] - return torch.from_numpy(inp) - - def ref_revert_pattern_sequence(self, z: torch.Tensor, pattern: Pattern, special_token: int): - """Reference method to revert the sequence from the pattern without using fancy scatter.""" - z = z.cpu().numpy() - bs, n_q, S = z.shape - assert pattern.n_q == n_q - inp = torch.full((bs, pattern.n_q, pattern.timesteps), special_token, dtype=torch.long).numpy() - inp[:] = special_token - for s, v in enumerate(pattern.layout): - for (t, q) in v: - if t < pattern.timesteps: - inp[:, q, t] = z[:, q, s] - return torch.from_numpy(inp) - - def ref_revert_pattern_logits(self, z: torch.Tensor, pattern: Pattern, special_token: float): - """Reference method to revert the logits from the pattern without using fancy scatter.""" - z = z.cpu().numpy() - bs, card, n_q, S = z.shape - assert pattern.n_q == n_q - ref_layout = pattern.layout - inp = torch.full((bs, card, pattern.n_q, pattern.timesteps), special_token, dtype=torch.float).numpy() - inp[:] = special_token - for s, v in enumerate(ref_layout[1:]): - if s < S: - for (t, q) in v: - if t < pattern.timesteps: - inp[:, :, q, t] = z[:, :, q, s] - return torch.from_numpy(inp) - - def _get_pattern_providers(self, n_q: int): - pattern_provider_1 = ParallelPatternProvider(n_q) - 
pattern_provider_2 = DelayedPatternProvider(n_q, list(range(n_q))) - pattern_provider_3 = DelayedPatternProvider(n_q, [0] + [1] * (n_q - 1)) - pattern_provider_4 = UnrolledPatternProvider( - n_q, flattening=list(range(n_q)), delays=[0] * n_q - ) - pattern_provider_5 = UnrolledPatternProvider( - n_q, flattening=[0] + [1] * (n_q - 1), delays=[0] * n_q - ) - pattern_provider_6 = UnrolledPatternProvider( - n_q, flattening=[0] + [1] * (n_q - 1), delays=[0] + [5] * (n_q - 1) - ) - return [ - pattern_provider_1, - pattern_provider_2, - pattern_provider_3, - pattern_provider_4, - pattern_provider_5, - pattern_provider_6, - ] - - @pytest.mark.parametrize("n_q", [1, 4, 32]) - @pytest.mark.parametrize("timesteps", [16, 72]) - def test_build_pattern_sequence(self, n_q: int, timesteps: int): - bs = 2 - card = 256 - special_token = card - - pattern_providers = self._get_pattern_providers(n_q) - for pattern_provider in pattern_providers: - pattern = pattern_provider.get_pattern(timesteps) - # we can correctly build the sequence from the pattern - z = torch.randint(0, card, (bs, n_q, timesteps)) - ref_res = self.ref_build_pattern_sequence(z, pattern, special_token) - res, indexes, mask = pattern.build_pattern_sequence(z, special_token) - assert (res == ref_res).float().mean() == 1.0 - - # expected assertion fails on the number of timesteps - invalid_timesteps = [timesteps + 1] - if pattern.num_sequence_steps != pattern.timesteps: - invalid_timesteps.append(pattern.num_sequence_steps) - for i_timesteps in invalid_timesteps: - z2 = torch.randint(0, card, (bs, n_q, i_timesteps)) - with pytest.raises(AssertionError): - pattern.build_pattern_sequence(z2, special_token) - - # expected assertion fails on the number of codebooks - invalid_qs = [0, n_q - 1, n_q + 1] - for i_q in invalid_qs: - z3 = torch.randint(0, card, (bs, i_q, timesteps)) - with pytest.raises(AssertionError): - pattern.build_pattern_sequence(z3, special_token) - - @pytest.mark.parametrize("n_q", [1, 4, 32]) - @pytest.mark.parametrize("timesteps", [16, 72]) - def test_revert_pattern_sequence(self, n_q: int, timesteps: int): - bs = 2 - card = 256 - special_token = card - - pattern_providers = self._get_pattern_providers(n_q) - for pattern_provider in pattern_providers: - pattern = pattern_provider.get_pattern(timesteps) - # this works assuming previous tests are successful - z = torch.randint(0, card, (bs, n_q, timesteps)) - s = self.ref_build_pattern_sequence(z, pattern, special_token) - ref_out = self.ref_revert_pattern_sequence(s, pattern, special_token) - # ensure our reference script retrieve the original sequence - assert z.shape == ref_out.shape - assert (z == ref_out).float().mean() == 1.0 - # now we can test the scatter version - out, indexes, mask = pattern.revert_pattern_sequence(s, special_token) - assert out.shape == ref_out.shape - assert (out == ref_out).float().mean() == 1.0 - - @pytest.mark.parametrize("n_q", [1, 4, 32]) - @pytest.mark.parametrize("timesteps", [16, 72]) - @pytest.mark.parametrize("card", [1, 2, 256, 1024]) - def test_revert_pattern_logits(self, n_q: int, timesteps: int, card: int): - bs = 2 - special_token = card - logits_special_token = float('nan') - - pattern_providers = self._get_pattern_providers(n_q) - for pattern_provider in pattern_providers: - pattern = pattern_provider.get_pattern(timesteps) - # this works assuming previous tests are successful - z = torch.randint(0, card, (bs, n_q, timesteps)) - s = self.ref_build_pattern_sequence(z, pattern, special_token) - logits = torch.randn((bs, card, n_q, 
s.shape[-1])) - ref_out = self.ref_revert_pattern_logits(logits, pattern, logits_special_token) - # ensure our reference script retrieve the original sequence - assert ref_out.shape == torch.Size([bs, card, n_q, timesteps]) - # now we can test the scatter version - out, indexes, mask = pattern.revert_pattern_logits(logits, logits_special_token) - assert out.shape == ref_out.shape - assert (out == ref_out).float().mean() == 1.0 diff --git a/spaces/joshen/gpt-academic/crazy_functions/test_project/cpp/cppipc/policy.h b/spaces/joshen/gpt-academic/crazy_functions/test_project/cpp/cppipc/policy.h deleted file mode 100644 index f88ab5d8cb343f97026966b402eaeed8831e356a..0000000000000000000000000000000000000000 --- a/spaces/joshen/gpt-academic/crazy_functions/test_project/cpp/cppipc/policy.h +++ /dev/null @@ -1,25 +0,0 @@ -#pragma once - -#include - -#include "libipc/def.h" -#include "libipc/prod_cons.h" - -#include "libipc/circ/elem_array.h" - -namespace ipc { -namespace policy { - -template